Delete Rows of pandas DataFrame Conditionally in Python (4 Examples)
In this article you’ll learn how to drop rows of a pandas DataFrame in the Python programming language.
Here’s how to do it.
Example Data & Add-On Packages
First, we have to import the pandas library:
import pandas as pd # Import pandas library in Python
Furthermore, consider the example data below:
data = pd.DataFrame({"x1": range(1, 7),                  # Create pandas DataFrame
                     "x2": ["a", "b", "c", "d", "e", "f"],
                     "x3": [5, 1, 5, 1, 5, 1]})
print(data)                                              # Print pandas DataFrame
As you can see based on Table 1, our example data is a DataFrame and comprises six rows and three variables called “x1”, “x2”, and “x3”.
Example 1: Remove Rows of pandas DataFrame Using Logical Condition
This example shows how to delete certain rows of a pandas DataFrame based on a column of this DataFrame.
The following Python code specifies a DataFrame subset, where only rows with values unequal to 5 in the variable x3 are retained:
data1 = data[data.x3 != 5]  # Using logical condition
print(data1)                # Print updated DataFrame
The output of the previous syntax is shown in Table 2: We have constructed a pandas DataFrame subset containing only three of the six input rows.
Example 2: Remove Rows of pandas DataFrame Using drop() Function & index Attribute
Example 1 has shown how to use a logical condition specifying the rows that we want to keep in our data set.
In this example, I’ll demonstrate how to use the drop() function and the index attribute to specify a logical condition that removes particular rows from our data matrix (i.e. the reverse of Example 1).
Have a look at the following Python syntax:
data2 = data.drop(data[data.x3 == 5].index)  # Using drop() function
print(data2)                                 # Print updated DataFrame
Table 3 shows the output of the previous Python code – it is exactly the same as in Example 1. However, this time we have used the drop() function to create the DataFrame subset.
Example 3: Remove Rows of pandas DataFrame Using Multiple Logical Conditions
In this example, I’ll demonstrate how to specify different logical conditions for multiple columns to tell Python which rows of our data should be deleted.
The Python syntax below retains only those rows where the variable x3 is unequal to 5 and the variable x1 is greater than 2 (all other rows are removed):
data3 = data[(data["x3"] != 5) & (data["x1"] > 2)]  # Multiple logical conditions
print(data3)                                        # Print updated DataFrame
After running the previously shown Python syntax, the DataFrame subset illustrated in Table 4 has been created.
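As a side note, the same multi-condition filter can also be written with pandas’ query() method, which some readers find easier to scan. A small sketch (recreating the example data from above):

```python
import pandas as pd

# Recreate the example data from above
data = pd.DataFrame({"x1": range(1, 7),
                     "x2": ["a", "b", "c", "d", "e", "f"],
                     "x3": [5, 1, 5, 1, 5, 1]})

# Same filter as Example 3, expressed as a query string
data3_alt = data.query("x3 != 5 and x1 > 2")
print(data3_alt)  # rows 3 and 5 remain (x1 = 4 and 6)
```

Both variants return identical subsets; query() simply avoids the repeated data[...] references.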
Example 4: Remove Rows of pandas DataFrame Based On List Object
So far, we have removed DataFrame rows based on a column of this DataFrame.
This example demonstrates how to drop specific rows of a pandas DataFrame according to the values in a list object (or an array object).
For this example, we first have to create an exemplifying list object in Python:
my_list = ["yes", "yes", "no", "yes", "no", "yes"]  # Create example list
print(my_list)                                      # Print example list
# ['yes', 'yes', 'no', 'yes', 'no', 'yes']
The previous output shows the structure of our list: It contains the character strings “yes” and “no”.
Now, we can use this list object to specify a logical condition as the basis for subsetting our data.
The following Python code deletes all rows from our data set where the corresponding element of our list object my_list is equal to “no”:
data4 = data[[x == "yes" for x in my_list]]  # Using list to remove rows
print(data4)                                 # Print updated DataFrame
As shown in Table 5, we have created another pandas DataFrame subset according to the items in our example list.
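Since Example 4 mentions that an array object works as well, here is a sketch of the same subsetting done via a NumPy boolean mask (assuming NumPy is installed):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"x1": range(1, 7),
                     "x2": ["a", "b", "c", "d", "e", "f"],
                     "x3": [5, 1, 5, 1, 5, 1]})
my_list = ["yes", "yes", "no", "yes", "no", "yes"]

# Convert the list to an array and build a boolean mask
mask = np.array(my_list) == "yes"
data4_alt = data[mask]  # keeps rows 0, 1, 3 and 5
print(data4_alt)
```

The mask approach is equivalent to the list comprehension above, but vectorized.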
Video, Further Resources & Summary
Have a look at the following video of the Data Science Tutorials YouTube channel. In the video, the speaker explains how to delete rows and columns of a pandas DataFrame.
In addition, you might read some of the related posts on my website. You can find some articles below.
- Drop Duplicates from pandas DataFrame in Python
- Delete Column of pandas DataFrame in Python
- Drop Rows with Blank Values from pandas DataFrame
- Drop Infinite Values from pandas DataFrame
- Remove Rows with NaN from pandas DataFrame
- All Python Programming Examples
In summary: You have learned in this tutorial how to remove rows of a pandas DataFrame in the Python programming language. In case you have additional questions, please let me know in the comments section below.
https://statisticsglobe.com/delete-rows-in-pandas-dataframe-conditionally-python
Some time ago, our friend Javier Fuentes gave us an introduction to Shapeless.
A few months later, at the Scala Madrid meetup, he offered a pretty interesting talk about structural induction with Shapeless and HLists. We couldn’t help it – we caught the enthusiasm flu 😛
What we want to achieve
Let’s take as our study case something more than one of us has surely wondered about before: how can I compose different types in the same for-comprehension? Something like:
import scala.util.Try

for {
  v1 <- List(1, 2, 3)
  v2 <- Try("hi, person ")
} yield s"$v2 $v1"
which usually comes from the hand of the following frustrating error:
<console>:15: error: type mismatch;
 found   : scala.util.Try[String]
 required: scala.collection.GenTraversableOnce[?]
       v2 <- Try("hi, person ")
             ^
Therefore we need a way to transform these data types (Future, Try) into iterable ‘stuff’ (GenTraversable[T] might work). In our example we won’t worry about the error information we lose when a certain Try or Future expression has failed. To get a better understanding of the problem, let’s go over some theory.
Monomorphism vs polymorphism
We say a method is monomorphic when it can only be invoked with parameters whose concrete types are explicitly declared in the method signature, whereas polymorphic methods can take parameters of any type as long as it fits the signature (in Scala: type parameters). In code:
def monomorphic(parameter: Int): String

def polymorphic[T](parameter: T): String
Polymorphism
It’s also good to know that a method can be polymorphic through type parameters or through parameter subtyping. E.g.:
def parametricallyPolymorphic[T](parameter: T): String

def subtypedPolymorphic(parameter: Animal): String

subtypedPolymorphic(new Cat)
If we use parameter types and we have NO information at all about those types, we are in front of a parametric polymorphism case.
If we use type parameters but we need some extra view / context bound for that type (T <: Whatever or T: TypeClass), then we are talking about ad-hoc polymorphism.
Problem: Function values
There’s no such problem when using parametric polymorphism with methods but, what about function values? In Scala this cannot be expressed, which produces a certain lack of expressiveness:
val monomorphic: Int => String = _.toString

val anotherMonomorphic: List[Int] => Set[Int] = _.toSet
Please notice the definition of the function that transforms a List into a Set. It could be totally independent of the list’s element type, but Scala syntax doesn’t allow us to define anything like that. We could try assigning the method to a val (eta expansion):
def polymorphic[T](l: List[T]): Set[T] = l.toSet

val sadlyMonomorphic = polymorphic _
But the compiler (as clever as usual) will infer that the list’s contained type is Nothing: a special type, but as concrete as any other.
Shapeless parametric polymorphism
How does Shapeless solve this problem? If we had to define a transformation function from Option to List in plain Scala, we’d run into the previously mentioned limitation on function values and could only achieve it by defining a method:
def toList[T](o: Option[T]): List[T] = o.toList
However, Shapeless, using its alchemy, provides us with some ways to do so. In category theory, transformations between type constructors are called natural transformations. The first possible notation looks like this:
import shapeless.poly._

val polyFunction = new (Option ~> List) {
  def apply[T](f: Option[T]): List[T] = f.toList
}
If you have a closer look at it, the parametric polymorphism is moved to the method inside the object. Using this function is as simple as:
val result: List[Int] = polyFunction(Option(2))
The other possible notation consists of defining the function’s behavior by cases; in other words, if we want the function to be defined only for Int, String and Boolean, we add a case for each of them.
import shapeless.Poly1

object polymorphic extends Poly1 {

  implicit def optionIntCase =
    at[Option[Int]](_.toList.map(_ + 1))

  implicit def optionStringCase =
    at[Option[String]](_.toList.map(_ + " mutated"))

  implicit def optionBooleanCase =
    at[Option[Boolean]](_.toList.map(!_))
}
As you can see, if we want our function to be defined for the case where the input parameter is an Option[Int], we specify that 1 is added to every element of the resulting list.
This expression returns a this.Case[Option[Int]], where this refers to the polymorphic object we are defining:

implicit def optionIntCase = at[Option[Int]](_.toList.map(_ + 1))
The good part of this? In case we wanted to use the function on an input type that doesn’t have a defined case, we’d get a compile-time error (awesome, right?).
The result
Applying this last approach (based on cases), we get the result we aimed for in the introductory section: being able to use a for-comprehension to compose values of different types: iterables, Try, Future…
The proposed solution can be found in the following file.
In our function we have a case for GenTraversable, another for Try, and another for Future (this last one needs its expression to be evaluated before we can iterate over it, so we need a timeout to wait for the result):
object values extends Poly1 {

  implicit def genTraversableCase[T, C[_]](implicit ev: C[T] => GenTraversable[T]) =
    at[C[T]](_.toStream)

  implicit def tryCase[T, C[_]](implicit ev: C[T] => Try[T]) =
    at[C[T]](_.toOption.toStream)

  implicit def futureCase[T, C[_]](implicit ev: C[T] => Future[T],
                                   atMost: Duration = Duration.Inf) =
    at[C[T]](f => Try(Await.result(f, atMost)).toOption.toStream)
}
Now we can use it in our controversial code snippet:
import scala.concurrent.ExecutionContext.Implicits.global

case class User(name: String, age: Int)

val result: Stream[_] = for {
  v1 <- values(List(1, 2, 3))
  v2 <- values(Set("hi", "bye"))
  v3 <- values(Option(true))
  v4 <- values(Try(2.0))
  v5 <- values(Future(User("john", 15)))
} yield (v1, v2, v3, v4, v5)
The sole solution?
Not at all! You can always implement it using traditional Scala type classes, though that implies defining a trait that represents the “iterable” ADT. You can take a look at an example in the following gist.
Peace out!
https://scalerablog.wordpress.com/tag/future/
5. Write a program that determines the number of digits in a number:
Enter a number: 374
The number 374 has 3 digits
You may assume that the number has no more than 4 digits. HINT: Use if statements to test the number. For example, if the number is between 0 and 9, it has one digit, in between 10 and 99 = 2 digits, etc.
... i havent been able to get anything right for this code except:
/* Program that determines the number of digits in a number */
/* DATE: 07-22-02 */
#include <stdio.h>
int main()
{
int number;
printf("Enter a number: ");
scanf("%d", & number);
... here's where it gets loco
if (0>= number>= 9);
printf("The number %d has 1 digit", number);
.... am i going ab this totally wrong or what?
helppppppppppppp!!!
http://cboard.cprogramming.com/c-programming/22170-aughhhhhhhhhhhhhhhhhhhh-program-problems.html
Read and Write properties file in Java- Examples
.properties files are used to store information in the form of key-value pairs in Java. These are values which we cannot directly hard-code into our program, or values which are user- or client-specific and need to be configured based on the user's environment.
For example, let us say we make an application that interacts with a database. To access the database we need the database name, username, password and the port number, which are definitely different for each user. So if our application needs to be run by multiple clients, we cannot directly hard-code these values and need a way to change them according to the user. In such cases we create a properties file containing all these values and read them dynamically at runtime.
So lets see with examples..
How to create and write properties file in java
We can either create properties files with the extension .properties manually, or we can create them dynamically through code. So let's create one and write some values to it.
WritePropertyFile.java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Properties;

public class WritePropertyFile {
    public static void main(String[] args) {
        try (OutputStream output = new FileOutputStream("config.properties")) {
            Properties prop = new Properties();

            // set the properties values
            prop.setProperty("database", "localhost");
            prop.setProperty("username", "Codingeek");
            prop.setProperty("password", "Codingeek");

            // save the properties to the project root folder
            prop.store(output, null);
        } catch (IOException exception) {
            exception.printStackTrace();
        }
    }
}
#Sun Aug 24 13:34:24 IST 2014
password=Codingeek
database=localhost
username=Codingeek
- In the above code we use try-with-resources so that the output stream gets closed automatically.
- FileOutputStream("config.properties") either creates the file or uses an existing one with the name config.properties.
- Then we use the Properties class to set the properties into the file.
- This file gets stored in your project root folder else you have to specify the path if you want to store it somewhere else.
How to Read properties file in java
To read a properties file we need the file name, and we should also know the keys of the key-value pairs in order to read the respective values – or we can traverse the complete list in case we don't know the names.
In this example we will use FileInputStream(fileName) to read the resource file which lies in our project root folder.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ReadPropertyFile {
    public static void main(String[] args) {
        Properties prop = new Properties();

        try (InputStream input = new FileInputStream("config.properties")) {
            // load the properties file
            prop.load(input);

            // get the property values and print them out
            System.out.println("Database - " + prop.getProperty("database"));
            System.out.println("Username - " + prop.getProperty("username"));
            System.out.println("Password - " + prop.getProperty("password"));
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
Database – localhost
Username – Codingeek
Password – Codingeek
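When the key names are not known in advance, the Properties class can also be traversed: stringPropertyNames() (a standard java.util.Properties method) returns the full key set. A small sketch (the class name ListPropertyKeys is just illustrative):

```java
import java.util.Properties;

public class ListPropertyKeys {

    // Prints every key-value pair of the given Properties object
    static void printAll(Properties prop) {
        for (String key : prop.stringPropertyNames()) {
            System.out.println(key + " - " + prop.getProperty(key));
        }
    }

    public static void main(String[] args) {
        Properties prop = new Properties();
        prop.setProperty("database", "localhost");
        prop.setProperty("username", "Codingeek");
        printAll(prop);
    }
}
```

The same loop works on properties loaded from a file with prop.load(...).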
Follow this link to read how to read resources or properties files using class loader or when the files are in the source folder of your project.
- Borna
http://www.codingeek.com/java/read-and-write-properties-file-in-java-examples/
Hi,
I want to start out by praising you for a great application.
It's a fresh initiative when it sometimes feel like the rest
of the community seem to think Tux can conquer the desktop
if they just release that extra window manager... :)
There, down to business.
I am using the ROX-filer to organize/view my growing collection
of digital images (just got myself a Canon IXUS 300), but there
are a couple of things that are a little troublesome.
When double-clicking outside a thumbnail, or switching between
different display modes (huge/small icons etc) the window
resizes itself. This is annoying since I work in a two-pane
situation with two Filers open with preferably fixed sizes no matter
what. Could the resizing be made into an option?
It's become much better since you implemented the option to
disable the autosizing of images, but it doesn't seem to
apply in these situations I've mentioned above.
Oh, and now an actual feature request. Would it be possible
to add a "Create Thumbs" alternative to the right-click/Dir menu?
It'd be pretty neat to be able to create thumbnails in a subdir before
entering it. Using a dialogue about recursive
operation would make it even better still. That way one could
select a few sub-folders and choose to generate thumbs for them while
doing something else.
kind regards
Andreas
Hi all!
I've been playing with this Rox stuff for a few days now and have found it
very interesting. However, I am running into a few problems. I converted
the RPM packages of Rox-base, Rox-libs, and Rox into .tgz packages (I run a
modified Slackware) and dropped them into my system. However, none of the
utilities want to run and they all seem to have the same error. I have
copied some of these below. I was hoping you could help me with a hint for
what I am missing?
Traceback (most recent call last):
File "/home/alan/Edit/AppRun", line 6, in ?
from gtk import *
ImportError: No module named gtk
and
Traceback (most recent call last):
File "/home/alan/archive-0.1.2/Archive/AppRun", line 7, in ?
from gtk import *
ImportError: No module named gtk
I am running the 1.0.0 version of Filer as I do not wish to install the
developer libraries to compile at the moment. Any help you can give me
would be appreciated. I am very fascinated by this project and running the
ROX desktop in conjunction with Oroborus WM is proving to be quite a tiny
but powerful interface!
TIA
Alan VanDerHeyden
If I were Thomas, I would be honoured to know how much profesors are
active on a project I started. If I were profesor at my school they
would all use linux, with rox, of course
regards,
mimooh
Hello.
I have packaged ROX-Base and ROX-Filer in RPM format for
Red Hat Linux 7.1. They are available at
Romildo
--
Prof. José Romildo Malaquias <romildo@...>
Departamento de Computação
Universidade Federal de Ouro Preto
Brasil
Hello.
I am building RPM packages of ROX-Base and ROX-Filer and
I have come accross a question about how to tell the
version of the current CVS ROX-Base. As it has been
updated with new resources than the released ROX-Base 1.0.0,
it is more interesting for users. But how to give a
version number? None of the files in the distribution
mentions a version number. Should I call it ROX-Base 1.0.1?
Or ROX-Base 1.0.5 (to go with ROX-Filer 1.0.5)? By now
I am packaging it as rox-base-1.0.0-3 (just increased
the RPM release number).
Any comments?
--
Prof. José Romildo Malaquias <romildo@...>
Departamento de Computação
Universidade Federal de Ouro Preto
Brasil
Hi all,
My Mandrake RPMs might not work on other RPM-based
systems due to different versions of system libraries
and compiler (Mdk 8 and RH 7/7.1 uses gcc-2.96
snapshots).
The source RPMs are provided and should recompile
nicely, it is designed to fit into Mandrake's menu
system but is clearly tagged in the .spec file and can
be removed, also Mandrake's grouping of software is
different so you might want to change it from
"Graphical desktop/Other" to something else. Can't
remember what Red Hat uses, was it 'Applications/X11'
or something.
Regards,
Michel
http://sourceforge.net/p/rox/mailman/rox-users/?viewmonth=200106&viewday=11
You probably know Netflix for its mind-boggling plethora of binge-worthy content for adults, but the ubiquitous streaming service also carries a wide variety of educational programming for kids as well. This curated list of the 10 best early learning shows on Netflix will engage, entertain, and educate, with subject material ranging from science to reading, and a healthy dose of 21st-century skills to boot.
She will work to expand access to and representation within the network's arts programming.
The post Former NEA Chairman Jane Chu Joins PBS as Arts Adviser appeared first on ARTnews.
And that, advocates say, is likely to increase—not decrease—opioid overdoses by pushing users away from drug treatment out of fear the information they reveal could be used against them. The fear is real: Unlike other medical conditions, drug addiction leaves patients open to criminal prosecution, as well as stigmatization and other negative social consequences if their status as drug treatment or maintenance patients is revealed.
This bill, H.R. 6082, the Overdose Prevention and Patient Safety Act, would remove drug treatment patients' ability to control the disclosure of information to health plans, health care providers, and other entities, leaving them with only the lesser privacy protections afforded to all patients under the Health Insurance Portability and Accountability Act (HIPAA) of 1996.
"The confidentiality law is often the only shield between an individual in recovery and the many forms of discrimination that could irreparably damage their lives and future," said Paul Samuels, president and director of the Legal Action Center. "Unfortunately, there is a very real danger of serious negative consequences for people whose history of substance use disorder is disclosed without their explicit consent."
The Legal Action Center is spearheading the effort to block this bill with the Campaign to Protect Patients' Privacy Rights, which counts more than a hundred organizations, including the American Association for the Treatment of Opioid Dependence, AIDS United, Community Catalyst, Faces and Voices of Recovery, Facing Addiction, Harm Reduction Coalition, National Advocates for Pregnant Women, National Alliance for Medication Assisted Recovery, and National Council on Alcoholism and Drug Dependence.
The current patient privacy protections, known as 42 C.F.R. Part 2 ("Part 2"), were established more than 40 years ago to ensure that people with a substance use disorder (SUD) are not made more vulnerable to discriminatory practices and legal consequences as a result of seeking treatment. The rules prevent treatment providers from disclosing information about a patient’s substance use treatment without patient consent in most circumstances. The bill’s plan to replace Part 2’s confidentiality requirements with HIPAA’s more relaxed standards would not sufficiently protect people seeking and receiving SUD treatment and could expose patients to great harm, the advocates charge.
"They should call this the Taking Away Protections Act," said Jocelyn Woods, head of the National Alliance for Medication-Assisted Recovery. "People will be afraid to go into treatment. I'm getting emails from people who want to leave treatment before this happens. If I were going into a program and they can't tell me my information will be safe, I would think about turning around and walking out," she told the Independent Media Institute.
"Many of us," added Patty McCarthy Metcalf, executive director of Faces and Voices of Recovery.
The push for the bill is being led by health information software companies and behavioral health providers, such as Hazelden and the Betty Ford Center, and it prioritizes convenience over patient privacy.
"This is because the behavioral health people see complying with the privacy requirements as a pain in the ass," said Woods. "They're going to have to fix their computer systems to block out any treatment program licensed by the federal government—not just methadone programs—and they don't want to do that. One of the software companies, Netsmart, complained that they don't want to mess with their programming," she said.
"We need Part 2," Woods continued. "It keeps police out of the program. Without it, police can walk right in. They already sit outside methadone clinics and bust people for DUI on the way out. If this passes, they will walk right in. If the police see anyone they think has a warrant or committed a crime, they're gone.”
While the bill has made its way through the House, advocates are hopeful it will stall in the Senate.
"The House pushed this through because they wanted to look like they were doing something and because the behavioral health people were pushing for it," Woods said, "but my sense is that it's moving slowly in the Senate. We have this crazy president, and there's immigration, and the congressional break, and then campaign season. My hope is we can push this past the elections and a blue wave in November will give us a fighting chance."
But the campaign isn't taking any chances and is mobilized to fight on the Hill in the next few months to block the bill. As Mark Parrino, president of the American Association for the Treatment of Opioid Dependence, warned: "In the midst of the worst opioid epidemic in our nation’s history, we cannot afford to have patients fearful of seeking treatment because they do not have faith that their confidentiality will be protected."
"key paragraph of Boris Johnson's resignation letter is the second one. the British people "were told," he writes, that with Brexit, they could do all sorts of things. they were. and it was Boris Johnson who told them."
Last time on the Code Challenge #11 we solved problems in JavaScript using functions to manipulate objec...
Netflix isn’t holding back on its original content and could spend US$13 billion this year on its shows and movies. To put that in perspective, Apple is moving aggressively with its $1 billion investment in original content, and Netflix is still well above more traditional content creators. As David Z. Morris wrote at Fortune:
The streaming media company has plans for 82 feature films this year, and could be spending $22.5 billion a year on content by 2022. That moves the bar for Amazon, HBO, Hulu, and now Apple.
Sams Teach Yourself Go in 24 Hours: Next Generation Systems Programming with Golang by George OrnboEnglish | 28 Dec. 2017 | ISBN: 0672338033 | 368 Pages | EPUB | 14.6 MB In just 24 sessions of one hour or less, … Continue reading →
The post Go in 24 Hours, Sams Teach Yourself: Next Generation Systems Programming with Golang appeared first on Download Free Ebook Magazine Magbook.
I think a lot of people know this at this point, but I love writing code. To wake myself up in the morning and get my brain running, I normally solve a code puzzle first thing when I get to work.
I normally find these on Project Euler, CodeWars, or HackerRank -- though I'm totally open to suggestions if anybody has a really good alternate resource! I really love the problem-solving nature of these code challenges, and how I can sometimes can come up with really cool out of the box solutions to them. I do feel like they help me become a stronger programmer, and an added bonus is that if I am interviewing at the time a lot of interview questions are similar to these (for better or for worse).
Since I'm already doing these every day, I thought it would be really cool to share my results on social media to keep me accountable and to see how other people solve the same problem. I normally solve these challenges in Python or JavaScript, but other people have been posting in Ruby, Rust, Ramda, and Clojure so far (which is so cool to see!). Some of them I have solved before at some point, so I post multiple answers in different languages or my refactors!
If you would like to follow along and post your own solutions (or just see other people's), follow me on Twitter or I've been using the hashtag #CodingPuzzle on them, so you could follow there instead! I normally post in between 8 and 9 AM EST since that is normally when I get to work! It's also led to some interesting discussions on efficiency and the strengths of different programming languages for solving these types of problems! These problems are also a variety of difficulties, so if one day's problem is too hard, still follow the next day for a new one!
To get started, today's problem was Project Euler Problem 2: Even Fibonacci Numbers: Each new term in the Fibonacci sequence is generated by adding the previous two terms. Find the sum of the even-valued terms. Please post your answer here!
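For anyone solving along, here is a sketch of one Python approach (the full Project Euler problem sums the even-valued terms not exceeding four million; the limit is a parameter here so you can experiment):

```python
def even_fib_sum(limit):
    """Sum the even-valued Fibonacci terms not exceeding limit."""
    a, b = 1, 2
    total = 0
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total

print(even_fib_sum(4_000_000))  # 4613732
```

Only every third Fibonacci number is even, so a further optimization would be to generate just those terms, but the straightforward loop is plenty fast here.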
Ali Spittel 💁
@aspittel
#CodingPuzzle day 4: Even Fibonacci NumbersEach new term in the Fibonacci sequence is generated by adding the previous two terms. find the sum of the even-valued terms. projecteuler.net/problem=2
If you want to catch up on the previous ones, they were:
Ali Spittel 💁
@aspittel
#CodingPuzzle day 3:Stop gninnipS My sdroW!Write a function that takes in a string of one or more words, and returns the same string, but with all five or more letter words reversed.From codewars.com/kata/stop-gnin…
Ali Spittel 💁
@aspittel
#CodingPuzzle day 2: Stacks: Balanced BracketsGiven strings of brackets, determine whether each sequence of brackets is balanced. If a string is balanced, print YES on a new line; otherwise, print NO on a new line.hackerrank.com/challenges/ctc…
Ali Spittel 💁
@aspittel
#CodingPuzzle day 1: Simple Pig Latin "Move the first letter of each word to the end of it, then add "ay" to the end of the word. Leave punctuation marks untouched." From codewars.com/kata/simple-pi…
Hope to see your answers!
I made a new programming language! (Note: right now it can only add number literals to each other and export constants, but it runs!!)
Recently, I've been working on making a brand new programming language called Slate! This is going to be a series, more or less start to "finish", as I document my progress making the compiler, the standard library, and maybe even some programs in Slate.
Slate is the (first?) programming language that compiles directly from source code to WebAssembly. Yes, that's why I've been asking about WASM for so long :). The syntax is largely inspired by JavaScript ES2015+ with other influences from Java, Kotlin, and more.
Right now this is about all it does.
/**
*
*/
//
export const expected = 80;
//
export function main(): i32 {
return 48 + 32;
}
What do?
You can export integer constant literals and export a function that adds integer literals together. That's about it. But slate.js can fully parse this and export a WASM Module that does the same, albeit very literal for now.
slate.js
Why make?
I really love JavaScript. This love stems from a broader love of the Web as a platform as a whole, and JS is all we get. Until now! WebAssembly is the answer to the old question "is the Web getting any languages other than JavaScript?". With WASM, the Web gains access to ALL THE LANGUAGES[1]. So partly out of love for JS, partly from a desire to make my own language, and partly to try to implement features I've never seen before, I set out to make a language specifically for the Web through WASM.
[1]: provided aforementioned language has the proper toolchain
And through this series I'm going to document more or less the entire journey.
How different?
Slate is strongly typed. So that's one thing that's different. But I also want to add things like operator overloading, object extensions (like adding onto <Object>.prototype, but in a statically typed lang), and more.
Can I use it?
Technically yes! If you'd like to compile the program above and run it in your very own WebAssembly-supporting browser then you can do the following:
import * as slate from ""
const slate_program = `
/**
*
*/
//
export const expected = 80;
//
export function main(): i32 {
return 48 + 32;
}
`;
Promise.resolve(slate_program)
.then(x => slate.parse(x))
.then(x => x.instance)
.then(x => x.exports)
.then(x => {
// `x` == { expected: 80, main: func() { [native code] } }
});
As with any other language, building a compiler for Slate involves a number of familiar steps, and all of them take place in Slate as well.
This step was made easy because the majority of code used for this part was already written when I made an HTML preprocessor and added to my basalt javascript library. The code for Slate's lexer can be found here.
Our lexer will take the text of our program and do some really handy things for us. It has to remove the comments, as well as convert the code into a list of tokens with data that we can then pass on to the parser.
So with the lexer set up properly, basalt will turn our test program into something like the code below.
[
Key("export"),
Key("const"),
Id("expected"),
Symbol(=),
Int(80),
Symbol(;),
Key("export"),
Key("function"),
Id("main"),
Symbol("("),
Symbol(")"),
Symbol(":"),
Id("i32"),
Symbol("{"),
Key("return"),
Int(48),
Symbol("+"),
Int(32),
Symbol(";"),
Symbol("}")
]
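To make the shape of that step concrete, here's a minimal, hypothetical comment-stripping tokenizer in plain JavaScript. This is not basalt's or Slate's actual lexer — just a sketch of the same idea, with the `Key`/`Id`/`Int`/`Symbol` names mirroring the output above:

```javascript
// Keywords get the "Key" kind; everything else word-like is an "Id".
const KEYWORDS = new Set(["export", "const", "function", "return"]);

function lex(source) {
  // Strip /* block */ and // line comments before tokenizing.
  const stripped = source
    .replace(/\/\*[\s\S]*?\*\//g, "")
    .replace(/\/\/.*$/gm, "");

  const tokens = [];
  // Sticky regex: integer literal | identifier/keyword | single-char symbol.
  const re = /\s*(?:(\d+)|([A-Za-z_]\w*)|([(){}:;=+]))/y;
  let m;
  while ((m = re.exec(stripped)) !== null) {
    if (m[1]) tokens.push({ kind: "Int", value: Number(m[1]) });
    else if (m[2]) tokens.push({ kind: KEYWORDS.has(m[2]) ? "Key" : "Id", value: m[2] });
    else tokens.push({ kind: "Symbol", value: m[3] });
  }
  return tokens;
}
```

Running `lex("export const expected = 80;")` yields six tokens in the same Key/Key/Id/Symbol/Int/Symbol order shown above.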
This part of the process was also very involved when I made my HTML preprocessor, so parsing is also a module in basalt. Basalt helps us build a parser, but we still have to add all the magic. Slate's parser is here. For those familiar with the computer science, we are attempting to define a formal language by means of a pseudo-context-free grammar. ANTLR is another big project in this space, generating a lexer/parser from a format much closer to Backus–Naur form.
Simply put, we have to come up with a series of patterns that can take our token list from before and compress it down into a single expression that we can then analyze later to create our program.
After that process, our test program looks more like this:
Note: I'm skipping the code demo here because the output from the parser is very verbose; in the next step we're going to condense it down to show the same information in a much more useful format
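To give a feel for what "compressing tokens into an expression" means, here's a toy recursive-descent sketch over the token shape from the lexing step. Again, this is illustrative only — Slate's real parser is built on basalt's pattern machinery and is far more general:

```javascript
// Parse a token list like [Int(48), Symbol(+), Int(32)] into one
// expression tree. Node shapes ({ type, ... }) are hypothetical.
function parseExpression(tokens) {
  let pos = 0;

  // A primary expression is just an integer literal for now.
  function parsePrimary() {
    const tok = tokens[pos++];
    if (!tok || tok.kind !== "Int") throw new Error("expected integer literal");
    return { type: "Int", value: tok.value };
  }

  // Left-associative chain of '+' additions.
  function parseAdd() {
    let left = parsePrimary();
    while (pos < tokens.length && tokens[pos].value === "+") {
      pos++; // consume '+'
      left = { type: "Add", left, right: parsePrimary() };
    }
    return left;
  }

  return parseAdd();
}
```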
This part is done in Slate by the "converter", which takes the very verbose output from the parser, verifies it, and generates the AST. The source for the Slate converter can be found here.
So now what does our program look like?
File(
Export(
Const(
"expected"
80
)
)
Export(
Function(
"main"
Parameters(
)
"i32"
Block(
Return(
Add(
48
32
)
)
)
)
)
)
Whew! Almost there! At this point we have a nice AST, but now we need to compile to WebAssembly so that it can be run by WebAssembly.instantiateStreaming(), etc. Since I wanted to make this a little easier on myself, I decided to have my compiler generate WASM in the text format as opposed to the binary format, and then use wabt to convert the text to binary WASM. Trust me, I love WebAssembly and what it stands for, but even trying to figure out the text format has been difficult. There is very little documentation on the formats currently, and most of what I've been going off of is the WASM platform spec tests and output from various WASM playgrounds.
The code for generating WAST from our AST is actually attached to the objects sent out of the converter, so that code is here. After generating said WAST we should get the following:
(module
(memory $0 1)
(global i32 (i32.const 80) )
(export "expected" (global 0) )
(func (export "main") (result i32)
(i32.add (i32.const 48) (i32.const 32) )
)
)
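The core of that generation step is just a recursive walk over the AST that prints s-expressions. A hedged sketch (not Slate's actual generator, which lives on the converter's objects; node shapes here are the illustrative ones from earlier):

```javascript
// Recursively emit WebAssembly text-format s-expressions from an
// expression AST. Only the nodes our tiny program needs are handled.
function emit(node) {
  switch (node.type) {
    case "Int":
      return `(i32.const ${node.value})`;
    case "Add":
      return `(i32.add ${emit(node.left)} ${emit(node.right)})`;
    case "Return":
      // In WAST, a function body's last expression is its return value.
      return emit(node.value);
    default:
      throw new Error(`unknown node type: ${node.type}`);
  }
}
```

Called on the `Add(48, 32)` node from the AST above, this produces the `(i32.add (i32.const 48) (i32.const 32))` line in the module listing.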
Hooray! 🙌
For now we're actually done. Imports are not currently implemented and there is no standard library yet, so this phase will have to come later.
Thanks for reading! If you liked this, let me know what you'd like to see in the future of Slate and stay tuned for more!
Coming up in future installments:
if
while
for
Links to save you a scroll:
Follow Slate on GitHub!
Follow me on Twitter!
Follow me on Dev!
Ted Johnson / Variety:
The FCC prepares to take steps toward easing rules on the amount and type of children's programming required to be provided by broadcasters — WASHINGTON — The FCC on Thursday will take the first step toward easing a set of rules that requires the amount and type of children's programming …
Andrew Wallenstein / Variety:
Amid his controversial quotes about HBO, John Stankey committed to increasing the network's programming budget, in order to compete with Netflix globally — Commentary: John Stankey's blunt talk was just what Richard Plepler wanted to hear — When a conversation from a high-level internal …
Selwyn Community Education is based at Selwyn College, Kohimarama, Auckland.
This course runs on Wednesday 11 July 2018 10am - 4pm with technology teacher Melvin Din.
This workshop is for children aged 8 - 11 ...
Auckland | Wednesday, 11 July.
Judy McLane (MAMMA MIA!) and Philip Hernandez (LES MISERABLES) will be singing numbers from the Broadway songbook.
"Whether it's THE IMPOSSIBLE DREAM from MAN OF LA MANCHA or familiar tunes from MAMMA MIA!, "Broadway Favorites: A Summer Cabaret" is sure to be a joyous experience," said Cabaret Director Noah Himmelstein. "We are delighted to reunite the remarkably talented stars of 2017's LOS OTROS for an evening of best loved Broadway entertainment."
Tickets are $40 for general admission (table seating) and are available online at, by phone at 410-752-2208 or at the Everyman Theatre Box Office (315 W. Fayette Street, Baltimore, MD).
The following is a Q. and A. with Hernandez, Himmelstein, and Musical Director Dan Green.
FOR PHILIP HERNANDEZ
1. How and when did you learn about this Cabaret performance?
Noah Himmelstein, our director from Los Otros, called and said he was interested in having Judy and I kick off a Cabaret Series at Everyman. So, of course, I said 'Yes.'
2. You and Judy worked so well together. How do you feel about being reunited?
Working with Judy and Noah on Los Otros was a wonderful experience and I jumped at the chance to work with them both again. This is the fourth time that Judy and I will be working together. She's an exceptional talent and always a joy to work with. Everyman is a world class theatre and I'm so excited to be working here again.
FOR Noah Himmelstein:
1. How and when did you get the opportunity to direct BROADWAY FAVORITES?
As Associate Artistic Director at Everyman, I am often looking for programming that complements the productions happening on the main stage. Vincent M. Lancisi, the Artistic Director, and I were talking about how wonderful it was to do the new musical Los Otros last year and how much we wanted to get music back into our space this summer. Creating a cabaret for two weekends this summer was a way to do this during a time when we are usually not producing. It so happened that the first idea I had-to bring back two of our favorite artists, Judy McLane and Philip Hernandez-happily worked out.
2. Have you ever directed musicals or cabarets before?
Much of my work is in the development and directing of new musicals. I did a couple cabarets when I was first starting out, and have directed several gala performances-which requires crafting pre-existing material in the form of songs and stories to create a compelling, whole experience. I recently collaborated with writers Lynn Ahrens and Stephen Flaherty in this type of work.
3. The shows that are being represented include EVITA, WEST SIDE STORY, LES MIS, WICKED, BRIDGES OF MADISON COUNTY, MAN OF LA MANCHA...who selected these? By the way....BRIDGES is one of my favorites.
I wanted this to be a celebration of the music and memories of two artists who have an immense body of work on the Broadway, off Broadway and regional stage in many joyous and well-known pieces, in addition to material I believe our audiences would love to hear them do and that they would get a kick out of learning. It's been a very happy rehearsal period of much laughter and exquisite singing and storytelling.
4. Have you ever worked with Judy or Philip?
Last year, I directed Los Otros, a beautiful new musical by Ellen Fitzhugh and Michael John LaChiusa at Everyman that featured Judy and Philip. It's a stunning piece about the identity, social and racial politics of two Californians covering 40+ years of their lives. Judy and Philip gave remarkable performances as Lillian and Carlos, from ages 12 into middle age. The audience adored their epic talents and warm, engaging personalities. I'm thrilled to be working with them again in a much more informal environment where they'll be performing in our spacious rehearsal room.
5. How much time will you have to rehearse with them and how does one direct something like this?
It's about selecting specific material and arrangements with the actors and creating a narrative that feels part of, comments on, enriches the space and time we are performing it in. It's about balancing humor, pathos, and fun, in the hopes of crafting an evening that can deliver a theatrical experience with the life stories of two wonderful artists. With our second collaboration, we've developed a shorthand in working together. Judy and Philip also have performed in several shows together before Los Otros, including the original Broadway production of Kiss of the Spider Woman, so there is a long relationship to build upon.
6. What are you next working on?
I have several projects in development coming up at regional theatres and in New York. Most notably at Everyman, I'm directing a really terrific play by Chelsea Marcantel-Everything is Wonderful-about an Amish family who is irrevocably transformed by a freak accident in their community. I think those who were fans of The Book of Joseph will especially be thrilled by this piece, which also involves a deeply emotional time convention and mysticism to allow a family to really see each other. It begins performances late January 2019.
FOR DAN GREEN:
1. How did you get involved with this Cabaret?
I first met Noah Himmelstein, the director, from work we had done together with the composer Andrew Lippa on his oratorio I Am Harvey Milk-this fantastic choral piece celebrating the life of the gay rights activist. I'm also a composer and Andrew has been a longtime mentor to me, so it was nice to meet Noah in that capacity. I've split my career between composing and music directing. I have a passion for both, and I've been fortunate to meet some performers, directors, etc. on projects that I was music directing that I was then able to work with on my writing, and vice versa. I have a couple musicals that are hopefully getting premiere productions in the next couple years, in New York and around the country, so stay tuned!
2. What is it like to be on the piano in the pit for the hit musical WICKED?
I've been lucky to sub for several Broadway shows since moving to New York. Wicked was one of my first and, thankfully, it has enjoyed a healthy run! I'm not there every night, so it takes a while to get used to jumping back into the show, but I've played it long enough now that I can relax and enjoy it-and the crowd cheering at the top never gets old.
3. How did you enjoy being associate conductor of ROCKY, and how would you define Associate Conductor?
Rocky was a fantastic experience. I started assisting the songwriters, Lynn Ahrens and Stephen Flaherty, before the very first reading, and stayed with the show through its development until Broadway. Associate conducting meant that I was usually playing piano, but a couple times a week would get to conduct the show. The end of the show was the epic boxing match between Rocky and Apollo and the ring actually slid out into the house, over the conductor podium! We had a little button to press to bring the podium down to the floor so we didn't get decapitated. I wish we'd been able to keep the show open longer, but all of us had a blast and I was so lucky to work on it.
4. Have you worked before with Judy and/or Philip?
This is my first time working with Judy and Philip, but they had me in stitches within minutes of our first meeting, so I know Baltimore audiences are in for a treat!
5. Will there be any other instruments besides you on the piano?
Judy and Philip are pros who have worked together before, so they know what they want out of an evening like this-my job is to give them a nice musical support system. I'll write a few arrangements of existing tunes and accompany them from the piano. It will be great to celebrate some classic Broadway songs as well as some more contemporary numbers-there should be something for everyone.
cgshubow@broadwayworld.com
Broadway Actors Judy McLane and Philip Hernandez Return to Baltimore for Everyman Theatre in a program called "Broadway Favorites: A Summer Cabaret"
Cabaret is only six performances on weekends from July 13 to July 22, 2018.
The following is the result of questions to Hernandez, Himmelstein, and Musical Director Dan Green.
eBook Details: Paperback: 284 pages Publisher: WOW! eBook; 1st edition (July 30, 2017) Language: English ISBN-10: 1680502336 ISBN-13: 978-1680502336 eBook Description: Functional Programming: A PragPub Anthology: Exploring Clojure, Elixir, Haskell, Scala, and Swift
The post Functional Programming: A PragPub Anthology appeared first on eBookee: Free eBooks Download.
After eight years of leadership, Colin Hovde will step down as the artistic director of Theater Alliance in June 2019, at the conclusion of the 18/19 Season. A search for his successor will begin this summer.
"These past years at Theater Alliance have been challenging and fulfilling, and I am so proud of all the work," Hovde said. "The Theater Alliance of today is a culmination of the vision and commitment of its founders, the community, and all who have been a part of our mission. I am honored to have been able to serve as the artistic director over these last years. Through the dedication and support of so many, we've had a greater impact than I could have imagined."
"It is time that we make space for new leadership at Theater Alliance, and it is also time that I make space for myself. I cannot wait to see the next artistic director build on the success of the company and take it to new heights. I look forward to Theater Alliance continuing, as the resident company at Anacostia Playhouse, to serve as a part of the Anacostia community and the Washington theater family. Over the next year I am committed to working alongside the new artistic director to ensure a smooth transition."
A catalyst for innovation and diversity, Theater Alliance produces thought-provoking and socially pertinent work, successfully uniting audiences of all backgrounds through the power of creative presentation and participation.
Board President Molly Singer praises Hovde's work and impact on the community. "Colin. I have particularly loved seeing actors, playwrights, and directors cultivate careers under his guidance, and through his commitment to fostering socially conscious theater. I think every person who has attended a Theater Alliance production and post-performance discussion has learned how to see, listen, and participate in new ways."
During Hovde's tenure, Theater Alliance has staged 26 productions and received 16 Helen Hayes Awards. It has seen productions re-mounted, presented locally and even produced off-Broadway. The company has anchored its artistic work in community action, partnering with groups like Black Lives Matter, TransLatin@ Coalition, Whitman-Walker Health, Black Girl Vision, Girls Inc, TransLaw, TransLatina, Words Beats Life, and Young Playwrights Theatre to explore how the themes onstage interface with our daily lives. It has made its programming accessible through partnerships with DC schools and name your own price tickets to all productions. And it has supported local talent and new play development, enriching our vibrant artistic community.
The board of Theater Alliance will be launching a local search for a new artistic director and, as part of that search, will be conducting focus groups among artists, supporters, and community members. To participate in a focus group, send an email to board@theateralliance.com.
"Under my tenure at Theater Alliance, I have been honored to work with so many fiercely talented artists, hungry and engaged audiences, and courageous donors and supporters. I have no regrets about the hours and passion that I have poured into this company alongside so many others. But I know that now is the time to make space for someone else to take over, to step in and catapult this company to new heights. I am certain that the next leader of Theater Alliance will share the company's vision of how theater can foster nuanced conversations around shared stories, represent the full spectrum of who we are in this moment, and act as a catalyst for focused personal and social action in our community."
NASHVILLE, TN / ACCESSWIRE / July 10, 2018 / American Rebel (OTCQB: AREB) CEO Andy Ross and his band performed during the Bud Ross - Celebration of Life May 20 in Chanute, KS. Bud Ross, 77, passed away March 10, 2018, after a short battle with cancer. Chanute was the home of Kustom Electronics and Birdview Satellite, renowned public companies founded by Andy's father Bud Ross. Thousands of Chanute residents worked at either Kustom or Birdview and many of them attended the Celebration of Life and shared with Andy how important to them their experiences and life lessons learned at Kustom and Birdview had been. Bud Ross founded Kustom Electronics in Chanute in 1964 and built the company into the world's largest manufacturer of sound equipment. By 1966 Kustom had become known for its powerful clean sound and iconic tuck-and-roll upholstery look. Kustom amps and speaker cabinets were used by The Grateful Dead, Creedence Clearwater Revival, Leon Russell, Johnny Cash, The Jackson Five, and many others. Bud Ross was inducted into the Kansas Music Hall of Fame in 2006 for his contributions to music. In 1981, Bud Ross founded Birdview Satellite, which made the first widely affordable and technically advanced home satellite systems which became wildly popular in rural areas of the country. Bud Ross's well-known entrepreneurial spirit and ability to identify an opportunity clearly passed on to his son and is a strong element in the foundation of American Rebel. "When someone tells me I'm a chip off the old block, that's the greatest compliment anyone could ever give me," said Andy Ross. When Bud was playing in rock-n-roll bands in the early 1960s he became tired of amplifiers breaking down all the time, so Bud taught himself about electronics. Bud moved toward solid state technology and away from temperamental tube technology that was customary at that time. 
When Bud founded Birdview Satellite he identified an opportunity to improve upon technology and create a more affordable, consumer-friendly and dependable method of receiving television programming. Bud's son Andy wanted to have a backpack for his every day use that would safely conceal his handgun. Bud had loved to say, "How can you dream big if you're not seeing big?" Bud Ross had clearly shown his family and his employees how to see big and in 2004 Andy founded and took public Digital Ally (DGLY) and in 2014 Andy founded American Rebel - America's Patriotic Brand and their innovative line of concealed carry backpacks, coats, jackets, and apparel.
The Bud Ross - Celebration of Life featured six bands that performed in downtown Chanute to a big crowd that included youngsters and old friends of Bud's that had traveled many miles to pay tribute to their friend. There were a lot of tears when Andy dedicated his performance of his song "I Am My Father's Son" to his dad. It was a fitting tribute to a ground-breaking entrepreneur that had impacted the lives of his employees and their families, often paying college tuition for his employees' children and doing everything he could to lend a hand to those in need.
About American Rebel
American Rebel (OTCQB: AREB)
Contact: jamespainter@emergingmarketsllc.com
SOURCE: American Rebel Holdings, Inc.
ReleaseID: 504907
Advance Gender Equity in the Arts (AGE) is proud to announce the recipients of the 2018 AGE Equity Grants. $15,000 was awarded to Shaking the Tree Theatre, $12,000 to PassinArt: A Theatre Company, $8,000 to Artists Repertory Theatre, and $5,000 to Boom Arts at the 35th Drammy Awards on Monday, June 25, 2018 at Portland Center Stage at The Armory.
The annual AGE Equity Grants are offered to Portland-area professional theatre companies that demonstrate a commitment to intersectional gender equity for womxn across the lifespan in theatre. This year's recipients were awarded for their courageous efforts to advance the stories of all womxn through play selection, directing and casting, as well as instituting policies for the safety and dignity of women in the theatre.
The grant review process involved 26 diverse equity advocates representing the Portland arts industries. An outstanding field of Portland theatre companies applied for the 2018 AGE Equity Grants. Congratulations to this year's semi-finalists: Bag&Baggage, CoHo Productions, Corrib Theatre, Milagro Theatre, PETE, Portland Center Stage, Portland Playhouse, and Profile Theatre.
"AGE commends the impressive slate of 2018 applicants. The Portland theatre community is working hard to advance equity, diversity, and inclusion in the arts," said Jane Vogel, AGE Founder and Board President. "Theatre is the place where truth is told about life (Stella Adler). Truth means that at least half the stories told in theatre are by womxn, for womxn, about womxn."
The application window for the 2019 AGE Equity Grants will open January 2019.
Information:
AGE Equity Grants:
About Advance Gender Equity in the Arts
Advance Gender Equity in the Arts (AGE) is a social justice movement founded in 2014 by actor and activist Jane Vogel to advance intersectional gender equity in the arts. It was created with a mission to empower all women across the lifespan in the arts, and with a vision that all women have the opportunity to achieve their full potential.
The annual AGE in the Arts Awards recognize Portland-area professional theatre companies that promote and exhibit intersectional gender equity in their programming, leadership and casting. Recipients have received a total of $100,000 since the awards were launched in 2016.
AGE further advances its mission and objectives through community engagement programs, special events, and collaborations.
The Oregon Shakespeare Festival will open the world premiere of Idris Goodwin's The Way the Mountain Moved, directed by May Adrales, on July 14 in the Thomas Theatre. Preview performances are July 10, 12 and 13, and the play runs through Oct. 28, 2018.
Goodwin's American Revolutions commission journeys into the genesis of the Transcontinental Railroad and explores often untold perspectives of an iconic chapter in American history and the events that shaped the country's moral and environmental future. In a remote desert in the 1850s, four men-a U.S. Army lieutenant, a sharpshooter, a botanist and an artist?
"What is so fascinating about this play is that as people were moving out West, we were at this precipice where we wondered about America: What is it and what could it be? Idris's play presents a question of what could it have been," says director May Adrales, who previously directed 2016's Vietgone at OSF. "Could America have been a place where there is multiplicity and is there a way that these different values and cultures could have lived side by side? Did there have to be a winner?"
The Way the Mountain Moved joins All The Way, Sweat, Roe, Party People and other commissions from American Revolutions: The United States History Cycle that explore key moments of change in U.S. history.
"When Idris so generously agreed to create a play for American Revolutions, we talked about him touching on a moment of change in the history of our environment," says Alison Carey, director of American Revolutions. "The Way the Mountain Moved brings to life not only the people that headed West in the 1800s, but the land itself and the creatures on it. It is a story of beauty and changing bounty, and it is a gorgeous testament to the richness of the world around us."
The cast of The Way the Mountain Moved features Christiana Clark as Martha, Tuwuda, Rodney Gardiner as Orson, Sara Bruner as Phyllis Cooke, Maddy Flemming as Helen Cooke, Krista Unverferth as Hannah, Robert Vincent Frank as Bart, Shyla Lefner as Kusavi and Jen Olivares as Chuxa.
Scenic design for The Way the Mountain Moved is by Sara Ryung Clement, costumes are by Deborah M. Dryden and lighting is by Keith Parham. Charles Coes and Nathan Roberts are composers and sound designers. Projections are by Shawn Duan and Laura A. Brueckner is production dramaturg. David Carey and Rebecca Clark Carey are voice and text directors and U. Jonathan Toppo is fight director. Amy Miranda Bender is production stage manager and Rachel Gonzalez is production assistant.
The Way the Mountain Moved runs through Oct. 28, 2018, in the Thomas Theatre. Due to high demand, nine bonus performances have been added: July 31, Aug. 5, Aug. 26, Sept. 2, Sept. 5, Sept. 20, Sept. 26 and Oct. 11. Tickets for all performances are available at the OSF Box Office, via phone at 800-219-8161, or online at osfashland.org/TheWayTheMountainMoved.
Upcoming engagement programming related to The Way the Mountain Moved includes Festival Noons Prefaces July 12 and July 28, free Festival Noons Park Talks July 15 (with actor Christopher Salazar) and July 24 (with actor Michael Gabriel Goodfriend) and a Festival Noons talk July 25 titled "American Revolutions: The Way the Mountain Moved and Beyond" featuring American Revolutions Director Alison Carey and Associate Director Julie Felise Dubiner. Tickets and information are available at osfashland.org/FestivalNoons. More engagement programming for The Way the Mountain Moved and other 2018 productions for August and beyond is to be announced.
The Way the Mountain Moved is the recipient of an Edgerton Foundation New Play Award. Lead Sponsor is Louise L. Gund. Sponsors are Amy and Mort Friedkin and The Hobbes Family. Partners are Nancy and Donald de Brier, Cynthia Muss Lawrence and the National Endowment for the Arts. OSF's 2018 season is sponsored by U.S. Bank.
Photo credit: The Way the Mountain Moved (2018): Christopher Salazar, Christiana Clark, Sara Bruner, Al Espinosa. Courtesy of the Oregon Shakespeare Festival.
Rust Programming Language, The (Manga Guide) by Carol Nichols. English | 20 Mar. 2018 | ISBN: 1593278284 | 488 Pages | EPUB | 3.88 MB. The Rust Programming Language is the official book on Rust, a community-developed systems programming language that … Continue reading →
The post The Rust Programming Language appeared first on Download Free Ebook Magazine Magbook.
Here's how you can use the R programming language to create interesting summarizations of expert StarCraft gameplay. ...
The weekly press review of Big Data, DevOps and Web technologies, Java architectures and mobility in agile environments, brought to you by Xebia. Agility: Planning as a social event – scaling agile at LEGO. Mobility: Apple pushes back the deadline for supporting App Transport Security; the creator of Swift leaves Apple. Craftsmanship: Pair Programming Essentials. Front...
The post Revue de Presse Xebia appeared first on Blog Xebia - Expertise Technologique & Méthodes Agiles.
Pro Bodybuilder, Entrepreneur, and Coach Anthony Monetti engages Dr. Joe Klemczewski in a lively, far-reaching interview. After 12 years of attempting to win his WNBF Pro card, Anthony hired Dr. Joe Klemczewski, author and mastermind of the “perfect peaking” protocol as his coach. Twelve weeks later Monetti won the Heavy Weight division at the 2008 INBF Hercules Championships. Some of the topics covered:
00:15 Recruiting quality people to join your staff/team
03:00 Outsourcing and mistakes as an entrepreneur
03:50 The evolution of The Diet Doc
05:50 Joe reinvents himself based upon statistics
06:45 Licensee vs Franchisee
11:45 Diet Doc online curriculum
14:45 The ideal client
16:35 The “Why” behind Dr Joe’s hustle
20:35 Joe’s tribe, Home schooling and the Amish
21:50 Verbal bitch slapping and dealing with change
24:50 Lack of proper communication and mistakes as an entrepreneur
25:30 Ask The Diet Doc
25:50 How do you counteract cellular inflammation so that your body can lose weight faster?
28:00 Clean or IIFYM. Are they equally effective?
28:00 Better meal planning / timing for weight loss
31:55 Can Intermittent Fasting be used with a macro-based diet and can it accelerate fat loss?
34:35 Is it okay to eat your last meal late at night / close to bedtime?
36:10 Is there a relation between feeling hungry and your metabolism burning fat?
38:25 If you need to lose 100 lbs. or more, can you kick-start the process?
38:55 Cleansing by means of a 3-day shit-a-thon?
40:50 Joe’s future as a creative writer
43:00 Health starts in the mind but is easily overlooked
We at THE DIET DOC are currently helping people in several countries and dozens of states build a sustainable business in the health and fitness industry. The Diet Doc LLC’s comprehensive resources, pioneering systems, and success model are comprised in a licensing program that helps fitness professionals become the nutrition expert in their community.
In fact, we have spent fifteen years building the curriculum through multiple books and a digital platform similar to that used by major universities, and continue to refine our programming with our license owners in mind. We focus on progressive clinical skills through ongoing training—that’s what made us leaders in the industry—but we specialize in helping entrepreneurs create their dream business.
A business analyst who spent years working in the franchising department of the largest accounting firm in the world evaluated our company. He said he had never seen a company do more for their license owners and immediately valued our program at ten times the entry point.
We are proud of our staff and the program owners that have helped us forge a new direction for weight-loss solutions. We combine science and support in ways that others can’t match.
We are confident that we can help you become a health and fitness professional—a leader who earns respect, changes lives, and grows a thriving consulting practice. I hope you’re as excited as we are!
If you want to start or build onto an existing nutrition and fitness career, we are waiting to talk.
Click here to learn what it means to be a Diet Doc program owner
RISE Church is a new church launching in fall of 2018 in San Antonio, TX. Our mission is to create engaging environments where people far from God can learn, trust and follow Jesus.
We are looking for passionate people that want to be a part of what God is doing in SATX.
Worship
Sound Engineering
Programming
Video
Lighting
PASTORAL CARE TEAMS
Prayer
Hospital Visitation
Biblical Counseling
FAMILY TEAMS
Kids
Students
Seniors
HOSPITALITY TEAMS
Greeters
Parking
Coffee Bar & Snacks
Facilities
Set up/Tear Down
COMMUNITY OUTREACH
Big Brothers & Sisters
Neighboring
Nursing Homes
First Responders
**We are currently in the prelaunch stage of our church. No salaries or stipends will be considered until after we launch regular weekend services. Please contact our Executive Pastor, Jason Martin, by call or text at 210.775.8975 or by sending him an email at jason@risechurchtx.com.
Our Leadership
Twitter:
Instagram:
TRINITY CHURCH Student Minister Opportunity Profile & Job Description
Opportunity Profile
At Trinity, we are passionate about seeing a diverse group of people find a common story in Christ. Since Trinity started 12 years ago, we have committed ourselves to exalting Jesus Christ and honoring God's Word in all that we say and do. Our four-part strategy is to help people Come to faith in Jesus, Grow as his disciples, Serve as Jesus served and Reach those who do not know him. As God has brought more people, we have continued to be one church which now meets in three Virginia Beach locations and in Stuttgart, Germany with a fifth campus planned to open this fall in Downtown Norfolk.
To better reach students with the Gospel and to help them Grow in Christ, Trinity is hiring a part-time (25 hour/week) Student Minister for our Downtown Norfolk campus. The Student Minister will lead the student ministry at their campus to reach
6th-12th grade students and invite them to come to Christ, help students Grow to become disciples through small group Bible studies and assist students in Serving in the church as well as Reaching their friends for Christ. Our Student Minister will be part
of a Campus staff led by the Campus Pastor and will also work collaboratively with Student Ministers from other campuses to jointly select curriculum, plan summer camps and mission trips together, and work hard to make sure that Trinity’s student ministry
pursues the same strategy across all of our campuses while embracing local distinctives.
The Trinity Church staff is comprised of people who love Christ. We value truth, humility, teachability, collegiality and entrepreneurial risk-taking. As we chase the dreams God has put in Trinity’s collective heart, we are willing to try new things
and then evaluate so we can learn from those experiences together. Our goal is not excellence in and of itself, but to do everything as unto the Lord for His glory and the good of the people we reach and serve.
We are currently looking for a student minister for our Downtown Norfolk Campus that will be launching in Fall 2018. If you naturally connect with students, are passionate about leading them to Christ and love leading adults to exercise their gifts
in student ministry, see the Job Description below for what it will take to be a Trinity Church Student Minister. Furthermore, if interested, please send an email to
robbieh@trinitychurchvb.com with your resume and a cover letter. In your letter, please include a brief statement of faith including when you first trusted Christ as your Savior, and, share why you are interested in serving the Lord at Trinity.
Job Description:
Position Overview:
The Student Minister is someone with a mature, vibrant, and deep love for Jesus Christ who is both passionate about and gifted in leading students in worshiping and serving God. The Student Minister is responsible for leading the Student Ministry
at their campus to pursue Trinity’s mission by accomplishing the church’s Come, Grow, Serve & Reach strategy. The Student Minister will report to and take direction from the Campus Pastor. The Student Minister will also collaborate with other Trinity Student
Ministers and will receive coaching from the Team Leader of Trinity’s Student Ministry Team.
Specific Responsibilities:
Leadership:
1. Develop and execute a plan to accomplish Trinity’s 4-part strategy through that campus’
student ministry.
2. Recruit, train, deploy, and support volunteer adult leaders.
3. Provide pastoral care to the high school and middle school students.
4. Work collegially with fellow Trinity student ministers to ensure our campus student ministries are strategically aligned with similar programming, select a common curriculum, and plan quarterly church-wide special events (e.g., camps, retreats, and mission trips).
5. Serve on the Campus Leadership Team which meets to provide counsel to the campus pastor and help staff reach the local community.
Grow Strategy:
Develop a Grow strategy designed to help students grow to become more like Christ. The primary means for accomplishing this discipling strategy is through community groups and weekly campus group events. The secondary means is through quarterly
church-wide events.
The Student Minister will:
1. Develop and multiply campus community groups including recruiting, training, and supervising group leaders.
2. Organize and be the primary teacher for the weekly campus high school Sunday evening event (SNL) and Wednesday middle school evening event (SHIFT).
3. Work with fellow student ministers to organize quarterly special events such as a high school student retreat, a middle school student retreat, Middle School Weekend, and a summer camp for all students.
4. Encourage and equip parents (e.g. seminars).
Serve Strategy:
Develop a Serve strategy to help every student discover their spiritual gifts and passions and find a place to serve in the local body.
Reach Strategy:
Develop a Reach strategy that helps the students see their local schools and community as their mission field and lead them by example by:
1. Investing deeply in reaching students at assigned local high schools (at least 2) and middle schools (at least 3) near the campus.
2. Seeking out students who attend the campus for the first time or who attend worship but are not involved in the student ministry.
3. Planning domestic and international mission trip opportunities for high school students.
4. Teaching students how to reach others and share their faith with them.
Administration:
1. Regularly communicate with parents, students, volunteers, and staff.
2. Maintain an annual ministry calendar coordinated with campus and student ministry staff.
3. Prepare and manage the campus Student Ministry budget.
Qualifications/Skills/Gifts:
1. Mature Christian walk.
2. Spiritual gift of leadership and/or administration and gift of teaching.
3. Loves students and is effective in reaching and discipling them.
4. Proven ability to lead and organize students and adult volunteers with at least one year of church staff experience or two years as a volunteer leader.
Responsible/Accountable to:
1. Directly responsible to the Campus Pastor.
2. Indirectly responsible to the Team Lead Student Minister.
Expectations:
1. Continue to grow spiritually with a daily commitment to prayer and Scripture.
2. Maintain strong marriage/family.
3. Be involved in the life of Trinity Church.
4. Defend the doctrinal positions of Trinity Church.
5. Support the leadership and decisions of the Elder Board.
6. Work part-time including evening and weekend commitments.
7. Attend weekly staff meetings and monthly all-Trinity staff meetings.
8. Cooperate with semi-annual reviews with Campus Pastor and Team Lead Student Minister.
Support (what Trinity Church provides):
1. Salary and Benefits.
2. Continuing Education/Conference Attendance.
3. Resources (computer, phone provider expense reimbursements, office space, etc.)
4. Personal support from Elder Board and Pastoral Staff.
"I am honored to have led the incredible team at CBS5 and 3TV!"
The post KPHO GM Ed Munson will retire after 40-year career appeared first on AZ Big Media.
Cygnet's 2018 Summer Benefit promises to be anything but a drag. For two nights only, the six-time Tony Award-winning musical La Cage aux Folles comes to Cygnet Theatre for a staged concert reading. Proceeds benefit Cygnet Theatre's Artist Advocate program, which provides the financial support needed to hire the most talented artists and to pay them a meaningful wage. With book by Harvey Fierstein, lyrics and music by Jerry Herman, and directed by Sean Murray with musical direction by Patrick Marion, the benefit takes place August 6 and 7.
After twenty years of un-wedded bliss, Georges and Albin, two men partnered for better-or-worse, get a bit of both when Georges' son announces his engagement, with hilarious and heartwarming results. La Cage aux Folles is based on the play of the same name and is a precursor to the popular film The Birdcage. The show is one of Broadway's biggest hits, boasting multiple revivals and beautiful musical numbers including "I Am What I Am" and "The Best of Times".
Each year, Cygnet Theatre presents a staged reading of a musical as a fundraiser. Past productions include Monty Python's Spamalot, Hair and Evita. The evening will include a hosted reception with light hors d'oeuvres as well as a silent auction on the patio before the show. This year, thanks to a generous Cygnet Donor, all new and increased donations are being matched dollar-for-dollar. The benefit is sponsored by Ralph Johnson. Tickets are on sale now and may be purchased in person at the box office located at 4040 Twiggs St., by calling 619-337-1525 or by visiting.
33.00 USD
⍟ Listing is for one wand + tank top! (Makes a great gift!)
⍟ Wand is intuitively chosen
⍟ 96% polyester, 4% elastane
⍟ Comfortable, stretchy material that stretches and recovers on the cross and lengthwise grains.
⍟ Precision-cut and hand-sewn after printing
This tank top has everything you could possibly need – vibrant colors, soft material and a relaxed fit that will make you look fabulous!
--
Crystal technology // Orgone Pyramids designed with re-harmonizing frequencies, intentions and crystals to bring spiritual awareness, joy, creativity and protection in your field. The sculptures are encoded with galactic frequencies designed to empower and protect the wearer's energy field. All devices can be programmed (but function regardless of your belief system) - I suggest programming them to your specific desires (love, joy, prosperity, protection etc) as they work 10 fold with intention. You create your own reality, so take it upon yourself to manifest all that you desire with focus, intention, feeling and action. Each device is a functional art sculpture. No two are alike!
♥ Each device can take anywhere from 1-3… smudged with Palo Santo, re-programmed under a vogel crystal light rack and encoded with galactic codes.
♥ Made with love and cosmic light ♥
/// healing crystals are spiritual allies to healing and are not meant as health care information or prescriptions ///
This article covers 5 free 3D game maker software. These software are some of the well-known game engines which you can use to develop 3D games. To work with these game engines, you must have good programming skills and must be familiar with some programming languages including JavaScript, C#, C++, Python, etc.
The post 5 Free 3D Game Maker Software For Windows appeared first on I Love Free
Final versions of all the above: 5 days, due 15th of October
Attendance of the three workshops, including facilitation of some sessions (including travel days).
Participation at the MICS Data Processing Workshop.
Report summarizing the initial assessment of MICS CAPI system for five MICS6 surveys in Latin America and Caribbean Region outlining recommendations and discussion points for further improvements.
Final technical review and testing of CAPI application, setting up CAPI system locally, providing in-country support during training and first few days of fieldwork.
Technical review of the SPSS analysis files for five MICS6 surveys in Latin America and Caribbean Region.
Technical review of the adopted SPSS syntaxes for five MICS6 surveys in Latin America and Caribbean Region.
Support MICS Data Processing team in New York HQ with work on standard data processing materials, including manuals and guidelines, standard data collection application and standard SPSS tabulation syntaxes, and their translation to Spanish.
Duty Station: Remote-based.
Travel to MICS Regional Data Processing Workshop and visits to the implementing countries in Latin America and Caribbean Region. The exact timing of the travel will depend on the survey schedules of the countries.
Suggested time of the country visits:
Prior to the start of the fieldwork, in order to advise on the set up of digital data collection system, and provide support during last week of training and first week of fieldwork.
Timeframe
Start date: 6 August 2018. End date: 31 July 2019.
Deliverables, duration and deadlines:
1. Workshop participation report (including recommendations for further improvements of sessions, presentations and training materials): 10 days, due 1 September 2018
2. Reports summarizing the initial assessment of MICS CAPI system for MICS6 surveys in Latin America and Caribbean Region, outlining recommendations and discussion points for further improvements (for 5 MICS6 surveys): 25 days, due 10 October 2018
3. Final CAPI application review report, including detailed description of local CAPI system settings, as well as system performance during training and first few days of fieldwork (for 5 MICS6 surveys): 75 days, due 1 February 2019
4. Technical report on final SPSS datasets, including suggestions for improvement (for 5 MICS6 surveys): due 1 March 2019
5. Technical report on final SPSS tabulation syntaxes, including suggestions for improvement (for 5 MICS6 surveys): due 1 April 2019
6. Technical review of standard data processing materials, including manuals and guidelines, standard data collection application and standard SPSS tabulation syntaxes: 30 days, due 31 July 2019
Total: 160 days
Please indicate your ability, availability and daily/monthly rate (in US$) to undertake the terms of reference above (including travel and daily subsistence allowance, if applicable). Applications submitted without a daily/monthly rate will not be considered.
Plan International Logistics & Procurement are inviting interested parties to bid as part of a negotiated tender process for the provision of a consultant writer for the Emergency Response Manual revision. Successful Tenderers will be expected to enter into a formal contract with Plan International. Interested parties should return 1 – "Confirmation of Intention to Tender" – as soon as possible, and thereafter complete and submit all the required documents as listed in Section 5.
This tender dossier has been issued for the sole purpose of obtaining offers for the supply of goods or services against the specification contained within this document and Annexes. Plan International reserves the right not to enter into or award a contract as a result of this invitation to tender.
Any attempt by the Tenderer to obtain confidential information, enter into unlawful agreements with competitors or influence the evaluation committee or Plan International during the process of examining, clarifying, evaluating and comparing tenders will lead to the rejection of their offers and may result in the termination of a current contract where applicable.
Read more about Plan International's Global Strategy: 100 Million Reasons at
Objective
To revise books 1 & 2 of the current Plan International Disaster Risk Management (DRM) manual, aligning with Plan International’s Global Strategy and changes in DRM approaches, ensuring a user-friendly handbook is in place to support Country Office Emergency Response Teams and Plan International’s Global Emergency Roster members implement high quality responses. The manual must be easy to access with clear schematics and processes without long narrative text clearly spelling out processes and systems to be implemented during the early phases of an emergency response.
Introduction
There are currently 3 DRM manuals:
Book 1: Response Activities Guidance. Guides Plan International staff to understand the activities that should be done at each alert level. It also details the expected outcomes, available resources on Planet, and guiding principles (WHAT).
Book 2: Roles & Responsibilities. Gives details on who executes the activities listed in Book 1 and how this will be done (WHO and HOW). There are a number of functions under which specific expectations are outlined depending on the alert level.
Book 3: Programme Guide. Covers six core programme chapters (Education in Emergency; Child Protection in Emergency; WASH; Food Assistance and Nutrition; Camp Management/Shelter and Non-Food Items; Health) and crosscutting issues such as Gender-Based Violence, Psychosocial Support and Cash Programming. It gives guidance at different levels, from disaster preparedness to post-emergency closeout.
A light touch review of the current DRM manuals was conducted in 2017 with users in Country Offices, Regional Offices, National Organisations and International Headquarters. Key issues identified were:
· The need for a single tool rather than three books plus a communications manual.
· The need for a user-friendly manual aimed at staff with limited or no emergency response experience.
· The manual to be available on a variety of platforms.
· Increased inclusion of standard tools and formats.
Respondents reported that Book 3 is the most used book, followed by books 1 and 2 equally. The majority of respondents know where to access the books on Plan International’s intranet, PLANET.
Key feedback is summarised below:
Positive Feedback on the Manuals
· Clarity around roles and responsibilities of staff
· Clear information regarding emergency response alert levels
Negative Feedback on the Manuals
· Manuals are very general and don’t provide the level of support required
· Format is not user friendly
· Too complicated to use
· Too many manuals – combine into one
· Lack of templates and key tools
· No links to quality standards
· No focus on gender
A new Global Strategy is now in place and the DRM tools and manuals need to reflect this change in organisation focus and approach, as well as wider changes within the humanitarian sector as a whole. In particular the inclusion of gender across the DRM manual is required.
A consultant writer is required to work closely with technical specialists to revise the manual. This is not a major overhaul of the manuals, but a refresh to ensure alignment with the new Global Strategy, inclusion of missing areas and to consider the user experience.
A small reference group will be established, comprised of representatives from across the organisation, to provide input and support to the process:
· 1 Regional DRM
· 1 National Organisation representative
· 1 Country Office representative
· 1 operational support representative
· 1 International Programmes Department representative
· 1 Global Influence and Partnership representative
· 1 IH DRM representative
· 1 Gender Specialist
· Head of Disaster Response
The reference group will provide a sounding board to the consultant as well as strategic guidance on new content, ensuring that all material is kept to the core essentials. Technical specialists and networks may be co-opted to revise their sectoral chapters or provide specific input (for example sponsorship, finance etc). With an increased focus on policy and influencing work, together with an increase in partnership, it is important that the revised manual reflects this and provides the guidance needed to Country Offices.
A write shop is envisaged in late September 2018 to provide focused writing and to bring together appropriate specialists from across the organisation to drive the revision process forwards.
The aim of the revision process is to ensure that a simplified and accessible printed manual is developed and in use by May 2019.
Peer agencies have all invested significantly in developing manuals and we will seek to learn from their experience and to incorporate best practice into our revised DRM manual. The manual will also link to external resources to support Country Offices with quality programming and the most up to date tools / guidance, for example the MHM in emergencies toolkit, SPHERE, CHS, INEE and CPMS.
Activities:
· Conduct a 5 day write-shop with key staff from different functions developing initial drafts of key chapters. The write-shop is tentatively planned for 24 -28 September 2018 in Bangkok.
· Develop a standard format for chapters where possible.
· Support development of chapter contents with technical specialists.
· Streamline content developed ensuring repetition is removed and information is kept to the essentials.
· Rework draft chapters developed during the write-shop into final versions, suitable for piloting with Country Offices and roster members.
· Support the revision of current tools as well as development of new tools to be used in a Country Office’s emergency response activities.
· Incorporate feedback from piloting the material into revised chapters.
Chapter Outline
The chapters identified to date are:
· Human Resources
· Safety and Security
· Programme Implementation and Management
· Logistics, Procurement and IT
· Finance
· Resource Mobilisation
· Information Management
· PESA/Safeguarding
· Communications and Media
· Commitment to the Children
· Gender transformative approaches
· DRM Policy, Strategy and Vision
· International Standards—CHS, INEE, CPiE etc.
· Decision Making
· Key Organizational Roles
· Advocacy and Influencing
· Nexus
· Multi-Country Models
· Partnerships
Experience
A consultant is required to provide technical support and input on the revision process, and to support the writing process.
Essential
· Significant emergency response experience in a number of different contexts (e.g. rapid onset, refugee, conflict, slow onset).
· Previous experience in the development of emergency response guidance notes and tools.
· Previous experience of writing emergency response manuals.
· Strong understanding of integrating gender into emergency response.
· High-level skills in writing in plain English for non-technical specialists.
· Ability to integrate good practice into the revised manuals.
· Experience of coordinating a diverse group of specialists to meet tight deadlines.
· Experience in designing write shops, and co-facilitating these.
Desirable
· Familiarity with Plan International’s current emergency response tools and guidance notes.
Launch of Tender 10th July 2018
Supplier opportunity for any Questions & Answers surrounding this Tender 27th July 2018
Deadline for submission of offers 10th August 12 noon (BST) 2018
Supplier Short-List Notification 13th August 2018
Supplier Presentations 15/16th August 2018
The organisation should establish environmental standards and good practices that follow the principles of ISO 14001 Environmental Management Systems, and in particular to ensure compliance with environmental legislation
The organisation should seek to set reduction targets in areas where the organisation’s activities lead to significant environmental impacts
The offer must be sent to the address specified on page 1. It must be via registered post with acknowledgement of receipt or hand-delivered against receipt signed by a Plan International representative.
Offers must be received before the deadline specified in the “Tender Main Facts Table” above.
The offer and all correspondence and documents related to the tender must be written in English or native language
All offers must be submitted in one signed original, marked “original”. As well as the paper responses there should also be one copy in the form of a USB. In case of discrepancies, information in the “original” shall prevail on “copy”.
All offers, inclusive of any annexes or supporting documents, must be submitted in one sealed envelope bearing only:
a) The address;
b) The tender reference number/name stated in the “Tender Main Facts Table”;
d) The words “Not to be opened before the tender opening session”;
e) The name and address of the Tenderer.
Each Tenderer or member of consortium or sub-contractor may submit only one offer. The offer can be for one entire lot or more entire lots.
Offers are to remain fixed for a two year period after the deadline for submission date. There is the potential for a one year extension if prices quoted remain the same as during the first two years.
Plan International, at its sole discretion, will select the winner of this tender.
Plan international shall be free to:
· Accept the whole, or part only, of any tender
· Accept none of the proposals tenders
· Republish this request for Tenders
Plan International will not be liable for any costs or expenses incurred in the preparation of the tender.
Plan International reserves the right to keep confidential the circumstances that have been considered for the selection of the offers.
Part of the evaluation process may include a presentation from the Tenderer and a site visit by Plan International staff.
Value for money is very important to Plan International, as every additional £ saved is money that we can use on our humanitarian and development work throughout the world.
Plan International reserves the right to alter the schedule of tender and contract awarding.
Plan International reserves the right to cancel this tender process at any time and not to award any contract.
Plan International reserves the right not to enter into or award a contract as a result of this invitation to tender.
Plan International does not bind itself to accept the lowest or any tender.
Plan International shall not be liable in respect of any costs incurred by the Tenderer in the preparation of the offer nor any associated work effort, including the production of presentation materials, brochures, product specifications or manuals for evaluation.
Plan will evaluate the responses based on a number of criteria that will include, but not be limited to:
· Understanding of requirements.
· Adherence to this ITT process
· Quality and relevance of proposal documentation
· Price
· Ability to deliver expected outcomes against timeline
· Evidence of delivering innovative solution
· Flexibility
The exact criteria of selection will not be published to Suppliers.
The onus is on the Tenderer to ensure that its offer is complete and meets Plan International’s requirements. Failure to comply may lead to the offer being rejected without any reason being given. Please therefore ensure that you read this document carefully and answer fully all questions asked.
For the full Tender Dossier Pack or for any questions please contact procurement@plan-international.org
Please quote "FY19 - 080 Emergency Response Manual review" in the subject line of all correspondence.
The role: DEC Appeal Phase II Final Evaluation Consultant
Qualifications and experience
Contract length: 30 days
Email: Please apply with a covering letter and up-to-date CV to: A.Aloqaily.51695.3830@savethechildrenint.aplitrak.com
I understand why this is hard to find and why this might not be the best course of action; however, I need a way to close certain pages if they are left "idle" for 20 minutes. I thought some JavaScript like

    function loaded() {
        setTimeout(function () { window.close(); }, 10000);
    }

would work, but it does not. I can throw an alert into the function and the alert pops on the page, but the page will not close.
Most of my users utilize Chrome or Safari. If a user accesses Page.aspx (for example), I need to somehow notice they are idle for 20 minutes and kill the page. If detecting "idle" is not feasible, then I can just set the page to a 20-minute timer and kill it at 20 minutes. If they refresh the page, the timer should restart. That's fine. I don't necessarily want to kill the browser, just that window.
Any ideas on how I can do this in a browser-neutral way without programming it into a button? (I'm not going to get a user to say, "Hey, I've been idle for 20 minutes, so let me close this page, log back in, and come right back to where I was." As if users are that honest :).)
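One hedged sketch of an answer, assuming plain browser JavaScript. Note that browsers generally only honor window.close() for windows that a script opened via window.open(), so on a normally navigated tab this may silently do nothing. The idea is to record the time of the last user event and check it periodically; the names below are mine.

```javascript
// Close the page after 20 minutes without user activity.
const IDLE_LIMIT_MS = 20 * 60 * 1000;

// Pure idle-tracking logic: `activity()` is called on any user event,
// `check()` fires `onIdle` once the limit has elapsed since the last event.
// The clock is injectable so the logic can be tested without waiting.
function makeIdleWatcher(limitMs, onIdle, now = Date.now) {
  let last = now();
  return {
    activity() { last = now(); },
    check() {
      if (now() - last >= limitMs) onIdle();
    },
  };
}

// Browser wiring (skipped when there is no `window`, e.g. under Node):
if (typeof window !== "undefined") {
  const watcher = makeIdleWatcher(IDLE_LIMIT_MS, () => window.close());
  ["mousemove", "keydown", "scroll", "touchstart"].forEach((ev) =>
    window.addEventListener(ev, () => watcher.activity(), { passive: true })
  );
  setInterval(() => watcher.check(), 60 * 1000); // re-check once a minute
}
```

Keeping the timing logic separate from the browser wiring (no globals, injectable clock) makes it easy to verify the reset behavior directly.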
The nearly empty 20-lane road stretching from across the National Parliament building in Nay Pyi Taw, the capital city of Myanmar. Photo: Romeo Gacad/AFP
World Bank Vice President for East Asia and Pacific, Ms. Victoria Kwakwa, concluded a three-day visit to Myanmar to assess progress and discuss future support for peace, social inclusion and economic transition, according to the devdiscourse website. […]
The post Value Proposition Guidance for Recovering Programming Generalists appeared first on DaedTech.
Please pick a number between 1 and 10, and don’t tell anybody what you
picked.
Got it? Great.
Now, I know you chose a number with deep personal meaning to you. Maybe
the number you chose could cause you some trouble if other people knew
it? If it helps you get into the spirit of this exercise, you can
pretend your number represents your salary or the number of hours you
sleep per day or the number of showers you take per week.
At this point, everyone reading this post has picked a number, and none
of you want to share it. You especially don’t want to give it to
Facebook or Cambridge Analytica or any of these big companies that make
their money by selling what they know about you.
But you’re probably curious. How many other people picked the same
number? How popular were the numbers you didn’t pick? (So long as you
can keep your privacy, there are lots of good reasons to contribute to
aggregate research, such as medical studies.)
We’ll answer those questions with a histogram. But I’ll walk you
through techniques that ensure that you don’t have to reveal your secret
number to anyone in the process of computing the histogram, and
furthermore that someone who learns the information in the histogram
still can’t find out what your secret number was.
There will be nothing new here for experts in differential privacy or
secure multiparty computation; instead I’m aiming for a tutorial for
programmers who aren’t familiar with these fields. Also, don’t rely on
my explanations to implement anything where you really care about
security. (This falls under the general advice, “don’t roll your own
crypto.”) Rather I just want to spread awareness of several decades of
research that are very relevant to us today.
You might expect that a histogram is pretty good for keeping individual
numbers secret. If a hundred people are counted in the same histogram,
how could I prove which bin you’re in?
However: there’s a long history of people publishing data that seemed
like it shouldn’t breach anyone’s privacy, only for someone to discover
that in conjunction with other information, it wasn’t nearly as private
as it seemed. An extreme case here would be if I know the secret number
of everyone except you: I can then tell which bin you’re in by
elimination. By the same token, if you and I are the only participants
in this project, then I can subtract my contribution from the histogram
and yours will be all that’s left. Privacy compromises can get much more
complicated than either of these two cases, of course.
One well-known example was when Netflix published anonymized movie
rental histories for their Netflix Prize competition in 2006. By
2007, researchers had demonstrated how to correlate the anonymized
Netflix ratings with public movie ratings from IMDb, successfully
re-identifying private movie rating records in the Netflix dataset, with
implications for the privacy of people’s sexuality and their political
and religious views1.
If you want to know whether a piece of information is going to deprive
someone of their privacy, the lesson here is you can’t just reason about
what an adversary can learn from the information you’re publishing. You
have to consider everything they could possibly have known already.
So now we’ve gone from “this seems fine” to “this seems impossible!”
It turns out however that there are general techniques which can provably
preserve privacy! The field is called “differential privacy”. Research in this
area kicked off with Dwork, McSherry, Nissim, and Smith publishing
“Calibrating Noise to Sensitivity in Private Data Analysis” in
2006, although that paper did not yet use the “differential privacy”
terminology.
Differential privacy promises that the data generated from a dataset
won’t be significantly different whether your data is in that dataset or
not, so you might as well participate. Because a dataset that includes
you is essentially indistinguishable from one you aren’t in, it doesn’t
matter what the adversary knows!
There is a trade-off though between privacy and utility. Because the
definition of differential privacy assumes the adversary could already
know anything and everything, differentially private results must always
have random noise added to them to obscure individual contributions to
the results. Papers in this area are largely about deciding exactly how
little noise you can get away with adding, because the more noise there
is, the less useful the result is.
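To make that trade-off concrete, here is a tiny sketch of the classic Laplace mechanism from the Dwork et al. line of work. This is my own illustrative code, not part of the histogram algorithm this post builds up to: a count query changes by at most 1 when one person is added or removed (sensitivity 1), so Laplace noise with scale 1/ε hides any individual's contribution, and a smaller ε buys more privacy at the cost of more noise.

```javascript
// Draw a sample from Laplace(0, scale) by inverse-CDF sampling.
// `u` is uniform on (-0.5, 0.5); it defaults to a fresh random draw
// but can be pinned for deterministic testing.
function laplaceNoise(scale, u = Math.random() - 0.5) {
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// A differentially private count: true count plus Laplace(1/epsilon) noise.
// Smaller epsilon = stronger privacy = noisier (less useful) answer.
function noisyCount(trueCount, epsilon, u) {
  return trueCount + laplaceNoise(1 / epsilon, u);
}
```

The histogram algorithm described later in this post takes a different route, subsampling participants rather than adding noise, but the tension is the same: the stronger the privacy guarantee, the less faithful the published answer.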
An alternative definition called “Distributional Differential Privacy”
(DDP), introduced in Bassily, Groce, Katz, Smith: “Coupled-Worlds
Privacy: Exploiting Adversarial Uncertainty in Statistical Data
Privacy”, relaxes the assumption about an adversary’s abilities.
Instead of assuming they could know everything, we define our
assumptions about exactly what the adversary could know in terms of
probability distributions. This requires some caution because if we omit
the distribution that turns out to reflect some adversary’s actual
knowledge, then all our privacy claims are invalid. But as long as we’re
careful we can get more useful results by recognizing that real
adversaries are not actually all-powerful.
One algorithm given in the original DDP papers is for privacy-preserving
histograms, which is exactly what we need here [2]!
For now, let’s assume you believe I’m a trustworthy person, so you’re
willing to give me your secret number and I’ll promise not to reveal it
to anyone else. (We’ll improve on this assumption later.)
Once everyone has handed me their secret numbers, I can take the
following steps:
1. I’ll pick a probability p and a whole number k based on how many
people are participating and on how much privacy loss I’m willing to
accept.

2. For each participant, I decide at random whether to use their number
at all. Every participant needs to have the same probability, p, of
being included. It’s very important that nobody be able to predict
these decisions.

3. Count how many of the selected participants fall in each bin.

4. If any bin has a count less than k, report that bin as 0 instead.
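Assuming a trusted curator (me), those steps can be sketched in a few lines of Python. This is an illustrative toy, not Groce's analyzed implementation; the function name and interface are mine:

```python
import random

def private_histogram(values, num_bins, p, k, rng=None):
    """Trusted-curator sketch of the sample-and-threshold procedure.

    values -- each participant's bin index (their secret number's bin)
    p      -- independent, unpredictable inclusion probability
    k      -- bins with a count below k are reported as 0"""
    rng = rng or random.Random()
    counts = [0] * num_bins
    for v in values:
        if rng.random() < p:            # step 2: random inclusion
            counts[v] += 1              # step 3: count per bin
    return [c if c >= k else 0 for c in counts]   # step 4: threshold
```

Note that both the subsampling and the thresholding matter: the random inclusion hides individual contributions, and the k cutoff keeps nearly empty bins from singling anyone out.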
Section 6.2 of Adam Groce’s PhD thesis, “New Notions and Mechanisms for
Statistical Privacy”, contains the proof of the following
statement, given that I follow the above procedure:
An adversary cannot learn about an individual, even if the attacker
knows that the individual was in the sample. Consequently, the
adversary cannot determine if a given individual was in the sample to
begin with.
This should be a fairly comforting result. Sadly there are several
details in the proof I haven’t been able to follow, so I have to take
Groce’s word for it.
I’ll also refer you to the thesis for alternate choices of sampling
distributions, as well as how to pick p and k.
What if you don’t trust me with your secret number? (I’m hurt, but I’ll
get over it.)
“Secure Multi-Party Computation” (often abbreviated “MPC”) is a
sub-field of cryptography where people who have secrets they aren’t
willing to share with each other can nonetheless work together to
compute some function combining all those secrets.
In this case we’re going to use MPC techniques so you don’t have to
reveal your secret number to anyone. Instead of relying on me to combine
everyone’s secrets, you’ll work directly with the other participants to
run the above algorithm, and they won’t learn your secret number either.
Researchers have proposed a variety of MPC techniques that are
remarkably generic: they can compute anything that could be computed by
an arbitrary combination of either boolean logic gates or addition and
multiplication. It turns out you can express those in terms of each
other, but some algorithms are simpler in boolean logic and others are
simpler in arithmetic.
The above histogram algorithm can be expressed fairly clearly in either
style, but I think I can better explain a boolean circuit solution than
the alternatives based on number theory, so let’s focus on that.
The boolean-circuit style of MPC originates in a 1987 paper called “How
to Solve any Multi-Party Protocol Problem” (by Goldreich, Micali,
and Wigderson). That paper focused on presenting a constructive proof
that all polytime-computable functions can be computed privately.
Which is super cool, but they didn’t address questions of efficiency, or
a variety of other implementation details one might care about.
Instead let’s look at a paper that follows the above construction but
makes specific implementation choices and evaluates performance on a
real implementation. Choi et al published “Secure Multi-Party
Computation of Boolean Circuits with Applications to Privacy in On-Line
Marketplaces” in 2012. As a non-expert in this
field, I found their paper relatively easy to follow, which is somewhat
unusual in cryptography papers! The paper provides a nice overview
together with plenty of citations to papers describing the fundamental
building blocks. The authors also published the source code of their
implementation, if you want something to play with, although I haven’t
tested whether it still builds.
In this framework, every computation is expressed as a circuit composed
of AND gates and XOR gates [3]. If you’re used to writing
software, at first glance this feels super limiting. Where are the loops
and if-statements and function calls? How do I do math or manipulate
text or construct fancy data structures?
Hardware designers, on the other hand, deal with this all the time.
And so on. A task that takes just a few lines of code in your favorite
programming language might blow up into a giant circuit, but still, many
more things are possible than one might expect.
Once you’ve described your program as a boolean circuit, I encourage you
to read either of the papers I cited above to learn how to turn that
into a secure computation [4]. But I’ll note a couple
of important details now:
As a warm-up, let’s devise three kinds of circuits, which we’ll then
rearrange and combine in various ways to solve the histogram problem.
The first circuit we need adds two 1-bit numbers and a 1-bit carry
input, producing their 1-bit sum and a 1-bit carry output. Hardware
designers call this circuit a full adder, and if Wikipedia doesn’t
satisfy your curiosity you should be able to find plenty of details in
any introductory digital circuits textbook. I learned about them from a
children’s book, personally, but I may have had a strange
childhood [5].
A full adder needs 3 XOR gates and 2 AND gates [6]. Remember, in this style of multi-party computation, XOR is fast while AND is slow.
To add numbers that are N bits wide, the simplest option is to chain
together N copies of the full adder circuit. Connect the carry-output
for the least-significant bit to the carry-input of the next bit, to
form what’s called a ripple-carry adder. This means we need 3·N
XOR gates and 2·N AND gates.
This is a complete boolean circuit in its own right, and at this point
we could use Choi et al’s implementation of multi-party computation to
securely find the sum of any number of secret inputs.
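To make the gate counts concrete, here is a Python sketch of the full adder and ripple-carry chain using only XOR (`^`) and AND (`&`) on cleartext bits. In a real MPC these operators would be evaluated on shares, but the circuit shape is identical; the function names are mine:

```python
def full_adder(a, b, cin):
    """One-bit full adder built from 3 XOR gates and 2 AND gates.

    The two AND outputs can never both be 1, so combining them with
    XOR gives the same carry as the usual OR (see footnote [6])."""
    axb = a ^ b
    s = axb ^ cin
    cout = (a & b) ^ (axb & cin)
    return s, cout

def ripple_add(a_bits, b_bits):
    """Add two little-endian bit vectors with a chain of full adders."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # the final carry becomes the top bit
    return out
```

For N-bit inputs this uses exactly 3·N XOR gates and 2·N AND gates, matching the counts above.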
The next circuit we’ll build compares two N-bit numbers A and B,
producing a 1-bit output that is 1 if A<B and 0 otherwise.
We could modify a full adder to perform subtraction instead, but a
specialized digital comparator is easier to think through.
This needs 4 XOR gates and 2 AND gates per bit.
At this point we can solve variants of Yao’s Millionaires’
Problem, a thought experiment in cryptography where two
people want to determine which of them has more money without actually
revealing how much they have to each other.
However, every time we’re going to use this comparison circuit in the
final histogram circuit, it turns out that either A or B is a public
constant, so some of the inputs to these gates are known in advance.
That allows us to simplify the circuit to 1 AND gate, and between 0 and
2 XOR gates, per bit [7].
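Here is a cleartext Python sketch of the digital comparator, again restricted to XOR and AND. The merge of per-bit results uses XOR in place of OR, which is safe because at most one bit position can ever trigger it (the same trick as the adder's carry). The function name and structure are my own illustration, and I make no claim that it hits the exact per-bit gate counts quoted above:

```python
def less_than(a_bits, b_bits):
    """Return 1 if A < B, else 0, for little-endian bit vectors.

    Scan from the most significant bit, tracking "equal so far".
    At the first bit where A has 0 and B has 1, A < B; any later
    difference is masked out because eq drops to 0."""
    lt, eq = 0, 1
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        lt = lt ^ (eq & ((a ^ 1) & b))   # a=0, b=1 at first differing bit
        eq = eq & (a ^ b ^ 1)            # XNOR: bits still equal
    return lt
```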
The third basic circuit we need is a little different, because it
doesn’t correspond to any physical digital logic circuit. Instead, this
one relies on a specific property of Choi et al’s approach to
multi-party computation.
Recall that Groce’s private histogram algorithm requires us to decide at
random whether to include any given person’s private data in the final
output, in such a way that the adversary can’t predict our decision.
So to start with, let’s figure out how to generate a single random bit,
equally likely to be a 1 or a 0. None of the participants are allowed to
know the value of this bit, but everyone needs to be able to use it in
further computation.
Remember that in Choi et al’s approach, a participant’s secret input
bit is represented by giving every participant a random bit, constrained
such that the XOR of all those bits produces the true value.
So to generate a secret random bit, instead of distributing shares of
somebody’s secret input, each participant should just generate their own
random bit, and together all those bits will represent a truly random
bit [8].
We can extend this to generate a random bit that is 1 with probability
p. If we concatenate N uniform random bits, we’ll get a uniform
random N-bit number. With probability p, that number will be less
than p times 2^N, so we can use our earlier less-than circuit to get
the desired output.
We just have to pick a large enough N that it can represent p in
fixed point with enough precision. To put that another way, p needs to
be close to an integer multiple of 1/2^N. On the other hand, since the
comparison circuit uses N AND gates and those are expensive, it’s
worth picking the smallest possible value of N that lets us represent
p precisely enough for our needs.
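Putting those pieces together, here is a cleartext Python sketch of generating a bit that is 1 with probability p: draw N uniform bits, interpret them as a number, and compare against round(p·2^N). Inside the MPC the uniform bits would be XOR-shared and the comparison done with the less-than circuit; this toy only simulates the logic:

```python
import random

def biased_bit(p, n_bits, rng):
    """Return 1 with probability ~p, built from uniform random bits.

    p is rounded to the nearest multiple of 1/2**n_bits, so pick
    n_bits just large enough to represent p precisely."""
    threshold = round(p * 2 ** n_bits)
    x = sum(rng.getrandbits(1) << i for i in range(n_bits))
    return 1 if x < threshold else 0
```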
With that background in mind, let’s sketch a circuit for
privacy-preserving histograms!
First we have to think about how each participant’s input should be
encoded. I can think of two reasonable ways to represent your input:
either as the binary encoding of your chosen secret number; or as one
bit per histogram bin, where exactly one of the bits is 1. Using one bit
per bin simplifies the circuit a little, but means you have to take
extra steps to ensure that nobody sneakily puts themselves in multiple
bins. And you can do that with “zero-knowledge proofs”, but it’s
probably better to use an encoding with no redundancy or illegal states
in it.
In advance, everyone agrees on a probability p, a whole number k,
and the shape of the circuit we’re about to evaluate. These are
public parameters to the algorithm. If you believe the parameters
aren’t good enough to protect your privacy, you can refuse to
participate without revealing anything about your secret number.
If everyone’s secret number is encoded in binary, then we need to
convert to a bit-per-bin representation. Electrical engineers have
lots of designs for converting between binary and so-called
“one-hot” encoding, but for simplicity just assume we’ll use the
less-than circuit a bunch of times.
Next you all need to decide whose inputs are actually going to get
included in this histogram. For each participant, generate a random
bit that is 1 with probability p. Then AND that bit with each bin
of that participant’s input.
The level of paranoia in our random number circuit may have seemed
excessive. But to make the correctness proof hold, we’re relying on
the fact that none of the participants ever learn what any of the
random numbers actually were, and also that none of them can
influence whether somebody gets counted or not, so long as even one
participant is honest.
Use a bunch of full-adder circuits to add up the number of 1-bits in
each bin.
For each bin, use the less-than circuit to check if k-1 is less
than the count in that bin; the result will be 1 if the count is big
enough and 0 otherwise. The final result for that bin is the AND of
this bit with each of the bits of the count.
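To check that the stages fit together, here is a cleartext Python walk-through of the whole recipe. This is emphatically not an MPC — everything is in the clear, and the names are mine — it only mirrors the circuit's structure: one-hot conversion, random inclusion via AND, per-bin addition, then masking any bin whose count fails the k−1 < count test.

```python
import random

def histogram_circuit_cleartext(secret_numbers, num_bins, p, k, rng):
    """Cleartext simulation of the circuit stages described above."""
    # Stage: binary -> one-hot encoding, one bit per bin.
    onehot = [[1 if v == b else 0 for b in range(num_bins)]
              for v in secret_numbers]
    # Stage: AND each participant's bits with a secret random bit.
    included = []
    for row in onehot:
        keep = 1 if rng.random() < p else 0
        included.append([bit & keep for bit in row])
    # Stage: adder trees count the 1-bits in each bin.
    counts = [sum(col) for col in zip(*included)]
    # Stage: report a bin only if k-1 < count, by masking its bits.
    return [c if k - 1 < c else 0 for c in counts]
```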
Ta-da! Using distributional differential privacy, we saw how to make
sure that the histogram output doesn’t reveal more than we intended
to. Now we’ve seen how to ensure that the inputs are never revealed to
anybody either.
Many functions are inconvenient to express as boolean circuits. I mean,
you can do it, but the circuits get huge. So I’d like to give a brief
sense of what this would look like in an arithmetic setting instead.
Instead of taking AND and XOR as our basic operations, we can choose
multiplication and addition over integers. However, like machine
arithmetic on CPUs, these operations wrap if the result is bigger than
some implementation-specified constant.
Some parts of the histogram algorithm are a lot simpler in this setting.
Adding up the count of participants in each bin takes one addition per
participant, instead of 2 AND gates per bit per participant. Even
better, addition is the cheap operation while multiplication is
expensive, so that stage becomes basically free.
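The reason addition is nearly free is easy to see in a Python sketch of additive secret sharing: each party holds one share, and shares of a sum are just the sums of shares, computed locally with no interaction. The modulus and function names here are illustrative choices, not any particular protocol's:

```python
import random

MOD = 2 ** 32   # arithmetic wraps at an implementation-chosen modulus

def share(secret, n_parties, rng):
    """Split a secret into n random shares that sum to it mod MOD."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

def add_shared(xs, ys):
    """Each party adds its own two shares -- no communication needed."""
    return [(x + y) % MOD for x, y in zip(xs, ys)]
```

Multiplication of shared values, by contrast, requires an interactive protocol, which is why it plays the role AND gates play in the boolean setting.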
Other parts are more complicated, because in general you can’t turn
addition and multiplication into “return 1 if A<B and 0
otherwise”. Fortunately in this setting we aren’t actually working with
arbitrary-precision integers, so there are tricks you can play, such as
detecting that a computation overflowed. I’m not going to go into detail
here, but there are specific methods for implementing the less-than and
random-number circuits we relied on in the previous section. If you’re
interested, see “Multiparty Computation for Interval, Equality, and
Comparison without Bit-Decomposition Protocol” by
Nishide and Ohta for one approach.
Since all the primitives we need exist in this multiparty arithmetic
setting, clearly we can implement private histograms this way too! If
you’re looking for a challenge, you could go implement this algorithm
both ways and find out which one is more efficient, because I honestly
don’t know…
Well, the number you chose has remained your secret this whole time,
even as you let people learn some aggregate information about a crowd of
people that included you.
Recent news reports about businesses like Cambridge Analytica vacuuming
up huge amounts of personal information to create creepy profiles of
people have been scary wake-up calls regarding our privacy in the era of
big data and algorithms. However, there are many positive uses for data
collection, ranging from serious medical studies to helping people
discover some entertaining thing they’ll enjoy.
With this post, I hope I’ve conveyed that it’s theoretically possible to
get the personal and societal benefits of data collection, without the
harms due to our personal data being used in ways we didn’t intend.
I’ve only touched on a few specific examples of research from the past
several decades, and there’s a lot of work to be done on making these
approaches efficient, scalable, and usable. But the foundations are
there, so we should reject the corporate story that we have to give up
all our privacy to get these benefits.
[1] Narayanan and Shmatikov. “Robust De-anonymization of Large Sparse Datasets” ↩
[2] Much of the differential privacy literature uses sums or counts as
examples, because those are easy to analyze—and a histogram is
just a collection of counts. So finding a histogram algorithm in a
paper in this field is not actually very difficult. I’ve chosen to
write about this particular algorithm for two reasons. First, it’s
pretty straightforward to adapt to the secure multiparty computation
setting, but it has some interesting twists that let me discuss a
bit more about that setting. Second, I started off trying to solve a
different problem, and this algorithm has nice properties for my use
case, though I won’t discuss that further right now because I don’t
have a proof yet that my modifications to the algorithm are actually
safe.
If you’d like to compare this with an algorithm that was designed
from the start to satisfy the original differential privacy
definition in a multiparty computation setting, I recommend
Bindschaedler et al, “Achieving Differential Privacy in Secure
Multiparty Data Aggregation Protocols on Star Networks”.
That algorithm is also fault-tolerant against some number of
participants going offline partway through. ↩
[3] The original proposal by Goldreich et al used NOT gates rather than
XOR, but if you provide an always-true input, “NOT b” can be
rewritten as “1 XOR b”. So Choi et al can compute everything
Goldreich et al can, with the same efficiency. ↩
[4] If you really get into the implementation details of the boolean
circuit approach, you may also be interested in papers on “oblivious
transfer”, such as Donald Beaver, “Precomputing Oblivious
Transfer”, and Li et al, “Efficient reduction of
1 out of n oblivious transfers in random oracle
model”. ↩
[5] I learned about boolean circuits for addition from David Macaulay’s
1988 children’s book, “The Way Things Work”, which
illustrates everything with the help of a herd of friendly woolly
mammoths. Honestly I think it’s a fantastic book for curious people
of all ages. Apparently there have been two newer editions since the
version I read as a child, if you’re concerned about the book’s
relevance thirty years later. ↩
[6] There are several combinations of gates that can work to implement a
full adder. Wikipedia shows the two AND gates being combined with an
OR gate to produce the carry output. However, one of the AND gates
computes A AND B, while the other has an input from A XOR B. Since
those can’t both be 1 at the same time, the inputs to the OR gate
also can’t both be 1 at the same time, so in this case OR and XOR
will produce the same result. ↩
[7] In a digital comparator where one operand is constant, the total
number of XOR gates depends on several factors. If the left-hand
side is constant, then the number of XOR gates is twice the number
of 0 bits. If the right-hand side is constant, the number of XOR
gates is twice the number of 1 bits, plus the number of 0 bits.
If you wanted to micro-optimize this circuit, you could note that on
integers, A<B is equivalent to either NOT (B-1<A) or NOT
(B<A+1). That means that if the constant is the left-hand operand
you can swap it to the right, or vice versa, for the cost of an
additional XOR gate.
Of course this is not worth doing, because in this system, XOR gates
are basically free compared to AND gates. But I spent so much time
thinking about this useless micro-optimization that, gosh darn it,
I’m at least allowed to put it in a footnote. ↩
[8] This protocol for generating a random bit struck me as the obvious
thing to do, and simultaneously as way too easy for it to possibly
be correct. It wasn’t at all obvious to me that the XOR of a bunch
of uniform random bits would still be uniform random, or that a
malicious participant couldn’t influence the result one way or the
other. Fortunately my friend Joe Ranweiler dug up the perfect
Mathematics Stack Exchange answer for this question: “How to prove
uniform distribution of m⊕k if k is uniformly
distributed?” That proof shows that as long as at
least one participant generates their bit uniformly at random, it
doesn’t matter what anyone else does: the result will still be a
uniform random number. ↩
by gjjaros
Note from the publisher: You have managed to find some of our old content and it may be outdated and/or incorrect. Try searching in our docs for current information.
As anyone who has spent much time on the command line of a UNIX-based system knows,
sudo is an incredibly powerful tool that allows you to temporarily perform actions as the “root” user, making a wide range of privileged actions possible.
You can actually do quite a lot on a Linux system without
sudo, and much of our user base has been happy without it. There are certain things, however, that simply require root privileges. There were also instances where it was annoying for our users to do things one way on their dev and prod machines where they had root access, and do things another way on CircleCI where they didn’t… until now!
What exactly is new?
Quite simply, any custom build steps that use the
sudo command will just work. Additionally, if you SSH into a CircleCI build container, you will no longer be prompted for a password when using
sudo, which makes SSH an even more powerful tool for troubleshooting build issues.
What can sudo do?
There are a lot of things that you can do with
sudo, but here is a handful of common use cases:
Install packages: If you need a custom package that isn’t already on CircleCI, just run e.g.
sudo apt-get update; sudo apt-get install gnu-smalltalk as usual. See our docs for more details.
Install a custom version of a service: For example, if you want to use an older version of cassandra, you can run
apt-get update; apt-get remove cassandra; apt-get install cassandra=1.1.9
Edit system files: For example, if you want to tweak some global MySQL options, you can just edit
/etc/mysql/my.cnf
Bind low ports: If you have integration tests that expect a web server to be accessible on port 80, you can now start processes that listen on that port.
Manually start services: If you want to use Docker, but you need the Docker daemon to listen on a TCP port instead of a socket, you can run
sudo docker -d -e lxc -s btrfs -H 0.0.0.0:5555 instead of starting it from the “services” section of your
circle.yml
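As an illustration of the package-install use case, a circle.yml fragment might look like the following. The dependencies/pre section names follow the CircleCI 1.0 configuration format this post describes; treat the exact layout as a sketch rather than copy-paste-ready config:

```yaml
# Illustrative circle.yml fragment (CircleCI 1.0-style sections assumed)
dependencies:
  pre:
    - sudo apt-get update
    - sudo apt-get install -y gnu-smalltalk
```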
A few constraints
Even with the
sudo command, there are still a few constraints to what you can do inside of a build container. You won’t be able to do things like mount a filesystem, reformat a disk, or install a kernel module. In general, if you are the kind of person who is trying to do these things, you probably won’t be surprised to see that we prevent you from doing it. That said, none of these restrictions should be very limiting for most application-level needs.
A note on security
We use a number of tools, including AppArmor, unprivileged lxc containers, and user namespaces to constrain what our users can do (hence the limitations above). We also carried out a thorough third-party security review before granting
sudo capabilities to any users. Please contact us if you have any further security-related questions.
This sudo thing sounds amazing! I’ll take 10!
You can get all the sudos you want by signing up here. Or if you’re already a CircleCI user, just add some commands that use
sudo to your
circle.yml file to experience the awesome power. If you’re an existing user who has asked us to apply custom privileged commands to your project, you can now do them yourself in your
circle.yml and ask us to remove the commands from our end. Remember, you can also always SSH into build containers to experiment!
Using the numbers 1, 3, 4 and 6, create an algebraic expression that equals 24. All four numbers must be used and each number may only be used once. Those four numbers are the only numbers permitted in the expression. The expression is restricted to using addition, subtraction, multiplication and division. A particular mathematical operator may be applied more than once. The expression may contain parentheses to define the order of operations.
Here are some valid expressions that do not equal 24:
((6 * 4) - 3) - 1
(3 + 4) * (1 + 6)
4 * (1 + (6 / 3))
Here are some invalid expressions:
((6 * 4) + (-3)) + 1
34 * (1 + 6)
3.4 * (1 + 6)
4 * (1 + (6 ^ 3))
6 * (3 + 1)
1 + 4 + (6 * 4)
The first expression used the negation unary operator. The symbol for negation is the same symbol used in subtraction, but it’s an entirely different mathematical function. The second expression attempted to form 34 by putting the numbers 3 and 4 adjacent to each other. The third expression attempted to form 3.4 by introducing a decimal point between 3 and 4. Numbers can only be combined using the four operators stated in the problem. The fourth expression used 6^3. Exponents are not permitted even though it is possible to express them without an explicit operator like ^. The fifth expression does not include 4. The sixth expression includes 4 twice.
1, 3, 4 and 6 are base-10, real numbers. The expression must evaluate to 24 in base-10. During evaluation, each sub-expression evaluates to a real number. There is no implicit conversion to integers via the floor, ceiling or round functions at any stage during evaluation.
A simple solution exists. The answer equals exactly 24 and it doesn’t apply some sort of trick or loophole that breaks the rules explained above. But, I bet you can’t find it.
As far as I can determine, the only way to find the solution is via a brute-force search. Is it feasible to do that by hand? How large is the problem space? How many expressions exist with the constraints specified in the problem?
The number of possible expressions can be determined by multiplying together the following 3 values:
The number of permutations of 1, 3, 4 and 6 is 4! = 24. That follows from the fact that we have four choices for the first number in the expression. Once that number is in place, we can select among one of the three remaining numbers for the second number in the expression. And so on: 4 * 3 * 2 * 1 = 24.
Next, we need to compute the number of operator permutations where these permutations may contain repeats. But, how many operators are in the expression? Does it vary? Any expression containing parentheses can be expressed in Reverse Polish notation (RPN, a.k.a. Postfix notation) using no parentheses. An RPN expression is easy to evaluate using a stack. The expression is evaluated from left-to-right. When a number is encountered, it is pushed onto the stack and the stack size increases by 1. When one of the binary operators above is encountered, 2 values are popped from the stack, the operator is applied and the result is pushed back onto the stack. In that case, the stack size decreases by 1. The final result is left on the stack. Since the expression contains 4 numbers and after evaluation the stack size is 1, the expression must always contain exactly 3 operators, each of which reduces the stack size by 1.
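The stack-based RPN evaluation just described is easy to sketch. The article's own code is C#, but for brevity here is the same idea in Python (the function name is mine):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish expression left-to-right with a stack."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for t in tokens:
        if t in ops:
            b = stack.pop()              # an operator pops two values...
            a = stack.pop()
            stack.append(ops[t](a, b))   # ...and pushes one result
        else:
            stack.append(float(t))       # a number just gets pushed
    return stack[0]                      # one value left: the answer
```

For example, the infix expression ((6 * 4) - 3) - 1 becomes the RPN token list 6 4 * 3 - 1 -.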
We have four choices for each of the three operators in the expression. 4 * 4 * 4 = 4^3 = 64.
Finally, how many expression forms are there? Meaning, how many RPN expressions exist containing 4 numbers and 3 binary operators? Let N denote a number and B a binary operator. Consider this RPN expression:
N N N N
We need to insert 3 B’s. Let’s number the insertion points:
1 N 2 N 3 N 4 N 5
A binary operator consumes the top 2 values on the stack. Hence, we can’t put a B at point 1 or 2 because it needs to follow at least 2 N’s. If we insert a B at point 3, we can’t put a second B there because the first reduced the stack size to 1. As described above, the stack size at any point is the number of N’s minus the number of B’s. Analyzing the remaining possibilities yields these 5 expressions:
N N B N B N B
N N B N N B B
N N N B B N B
N N N B N B B
N N N N B B B
The infix version of those expressions is:
((N B N) B N) B N
(N B N) B (N B N)
(N B (N B N)) B N
N B ((N B N) B N)
N B (N B (N B N))
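You can double-check by brute force that these are the only five legal skeletons. The sketch below (Python, for brevity; names are mine) tries every placement of the three B's among the seven slots and keeps the ones where the stack never underflows:

```python
from itertools import combinations

def valid_skeletons(n):
    """All arrangements of n N's and n-1 B's that evaluate legally.

    Track the stack depth: an N pushes one value (+1), a B pops two
    and pushes one (-1). Every prefix must leave depth >= 1."""
    out = []
    for b_slots in combinations(range(2 * n - 1), n - 1):
        word = ''.join('B' if i in b_slots else 'N'
                       for i in range(2 * n - 1))
        depth, ok = 0, True
        for c in word:
            depth += 1 if c == 'N' else -1
            if depth < 1:        # an operator lacked two operands
                ok = False
                break
        if ok:
            out.append(word)
    return out
```

For 4 numbers this yields exactly the 5 forms listed above (the count is the Catalan number C₃).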
If we plug in the permutations of the four numbers and the permutations (with repeats) of the operators, we get 24 * 64 * 5 = 7680 expressions. That’s way too many to do by hand. Instead, I used a simple C# program.
First, I manually listed out the permutations of the 4 numbers:
1: private double[,] permutations = {
2: { 1, 3, 4, 6, },
3: { 1, 3, 6, 4, },
4: { 1, 4, 3, 6, },
5: { 1, 4, 6, 3, },
6: { 1, 6, 3, 4, },
7: { 1, 6, 4, 3, },
8: { 3, 1, 4, 6, },
9: { 3, 1, 6, 4, },
10: { 3, 4, 1, 6, },
11: { 3, 4, 6, 1, },
12: { 3, 6, 1, 4, },
13: { 3, 6, 4, 1, },
14: { 4, 1, 3, 6, },
15: { 4, 1, 6, 3, },
16: { 4, 3, 1, 6, },
17: { 4, 3, 6, 1, },
18: { 4, 6, 1, 3, },
19: { 4, 6, 3, 1, },
20: { 6, 1, 3, 4, },
21: { 6, 1, 4, 3, },
22: { 6, 3, 1, 4, },
23: { 6, 3, 4, 1, },
24: { 6, 4, 1, 3, },
25: { 6, 4, 3, 1, },
26: };
Next, I created an array of the 4 operators and I defined a function that can apply them:
1: private char[] operators = { '+', '-', '*', '/' };
2:
3: private double F(double a, char op, double b) {
4: switch (op) {
5: case '+':
6: return a + b;
7: case '-':
8: return a - b;
9: case '*':
10: return a * b;
11: default:
12: return a / b;
13: }
14: }
Finally, I created a function that contains 4 nested loops. The outer 3 loops iterate over the operator permutations. The inner-most loop iterates over the number permutation array. The body of that loop evaluates the 5 expression forms and tests for equality against 24.
1: public void Solve() {
2: foreach (char p in operators) {
3: foreach (char q in operators) {
4: foreach (char r in operators) {
5: for (int i = 0; i < 24; i++) {
6: double a = permutations[i, 0];
7: double b = permutations[i, 1];
8: double c = permutations[i, 2];
9: double d = permutations[i, 3];
10:
11: if (F(F(a, p, b), r, F(c, q, d)) == 24) {
12: Console.WriteLine("({0} {1} {2}) {3} ({4} {5} {6}) = 24",
13: a, p, b, r, c, q, d);
14: return;
15: }
16: if (F(F(F(a, p, b), q, c), r, d) == 24) {
17: Console.WriteLine("(({0} {1} {2}) {3} {4}) {5} {6} == 24",
18: a, p, b, q, c, r, d);
19: return;
20: }
21: if (F(F(a, p, F(b, q, c)), r, d) == 24) {
22: Console.WriteLine("({0} {1} ({2} {3} {4})) {5} {6} == 24",
23: a, p, b, q, c, r, d);
24: return;
25: }
26: if (F(a, p, F(b, q, F(c, r, d))) == 24) {
27: Console.WriteLine("{0} {1} ({2} {3} ({4} {5} {6})) == 24",
28: a, p, b, q, c, r, d);
29: return;
30: }
31: if (F(a, p, F(F(b, q, c), r, d)) == 24) {
32: Console.WriteLine("{0} {1} (({2} {3} {4}) {5} {6}) == 24",
33: a, p, b, q, c, r, d);
34: return;
35: }
36: }
37: }
38: }
39: }
40: }
One of those if-statements discovers and prints the answer. I omit the answer from this article just in case you’re crazy enough to attempt to solve this by hand.
Why stop there? What about a computational solution to a generalized version of the problem? Consider expressions containing N numbers, N-1 binary operators and any number of unary operators. The goal is to discover an expression that equals some specified target value.
Unary operators, such as square-root and negation, are kind of a problem. In an RPN expression, a unary operator pops a single value off the stack, applies its associated unary function and pushes the result back onto the stack. Unary operators don’t alter the stack size; they can be applied indefinitely. To limit the search space, we’re forced to specify the number of unary operators that can be in the expression.
First, let’s consider ways of generating permutations. Generating permutations of the operators is easier than generating permutations of the numbers because repeats are allowed. It can be done with a simple recursive function:
1: public void PrintPermutations(char[] operators) {
2: PrintPermutations(operators, 0, new char[operators.Length]);
3: }
4:
5: private void PrintPermutations(char[] operators, int index,
char[] permutation) {
6: if (index == permutation.Length) {
7: PrintArray(permutation);
8: } else {
9: for (int i = 0; i < operators.Length; i++) {
10: permutation[index] = operators[i];
11: PrintPermutations(operators, index + 1, permutation);
12: }
13: }
14: }
The bootstrap function on line 1 accepts the array of operators. It creates a second array on line 2 to store an individual permutation and it makes a call to the recursive function on line 5. The recursive function acts like the set of nested loops in the prior example. The index variable is an index into the permutation array, but it's essentially the nesting level. The loop on line 9 plugs every possible operator into that position of the array and then it recursively calls the function again to do the same to the rest of the array. Note that index + 1 is passed in the call. If the index equals the length of the array, then it’s full and it’s printed. The output looks like this:
+ + + +
+ + + -
+ + + *
+ + + /
+ + - +
+ + - -
+ + - *
+ + - /
+ + * +
+ + * -
+ + * *
+ + * /
+ + / +
+ + / -
+ + / *
+ + / /
+ - + +
+ - + -
+ - + *
+ - + /
...
/ / * *
/ / * /
/ / / +
/ / / -
/ / / *
/ / / /
Permuting the numbers can be done using a similar technique; however, additional logic is required to prevent repeats:
1: public void PrintPermutations(double[] numbers) {
2: Array.Sort(numbers);
3: bool[] available = new bool[numbers.Length];
4: for (int i = 0; i < available.Length; i++) {
5: available[i] = true;
6: }
7: PrintPermutations(numbers, 0, new double[numbers.Length], available);
8: }
9:
10: private void PrintPermutations(double[] numbers, int index,
11: double[] permutation, bool[] available) {
12: if (index == numbers.Length) {
13: PrintArray(permutation);
14: } else {
15: double lastNumber = Double.NaN;
16: for (int i = 0; i < available.Length; i++) {
17: if (available[i] && numbers[i] != lastNumber) {
18: available[i] = false;
19: permutation[index] = lastNumber = numbers[i];
20: PrintPermutations(numbers, index + 1, permutation, available);
21: available[i] = true;
22: }
23: }
24: }
25: }
The bootstrap function on line 1 accepts the numbers to permute. That array may contain duplicate numbers. To make it easier to discover duplicates, the array is sorted on line 2. On line 3, an array of boolean flags is allocated to the same length as the numbers array. Lines 4—6 initialize the elements to true. As each number is placed into the permutation, the code will note that it can’t be used again by setting the corresponding index of the available array to false. The arrays are passed to the recursive function on line 10. The loop on lines 16—24 scans the available array for the next unused number. Since the sorted numbers array may contain duplicates, before the loop plugs an available number into the permutation array, it checks that it doesn’t match the last number it plugged into the same position. After the recursive call on line 20, it renders the number available again.
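The same duplicate-skipping technique can be sketched in Python (a hedged translation of the C# routine, with my own names):

```python
def unique_permutations(numbers):
    # sort so that equal numbers sit next to each other, as in the C# version
    numbers = sorted(numbers)
    available = [True] * len(numbers)
    perm = [None] * len(numbers)
    results = []

    def recurse(index):
        if index == len(numbers):
            results.append(tuple(perm))
            return
        last = None
        for i, n in enumerate(numbers):
            # skip used slots and repeats of the number just tried at this position
            if available[i] and n != last:
                available[i] = False
                perm[index] = last = n
                recurse(index + 1)
                available[i] = True

    recurse(0)
    return results

print(unique_permutations([1, 1, 2]))  # [(1, 1, 2), (1, 2, 1), (2, 1, 1)]
```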
My generalized solution uses those recursive techniques of producing permutations as part of building RPN expressions. An RPN expression is a list of numbers and operators. I abstracted the elements of the list into this interface:
1: public interface IElement {
2: void Evaluate(Stack<double> stack);
3: void Print(Stack<string> stack);
4: }
Each element is able to evaluate itself and to print itself. For evaluation, it pops 0, 1, or 2 values off the stack, applies a function to those values, and pushes a new value back onto the stack. It prints itself in infix notation the same way, pulling strings off a stack and pushing a new one on.
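To make the stack discipline concrete, here is a hedged Python sketch of evaluating a plain RPN token list; the article's C# achieves the same thing through IElement.Evaluate:

```python
# Minimal RPN evaluator: numbers push, binary operators pop two and push one
def eval_rpn(tokens):
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand is popped first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn(['4', '6', '1', '-', '*', '3', '+']))  # 4 * (6 - 1) + 3 = 23.0
```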
Here’s the implementation of the number element:
1: public class Number : IElement {
2:
3: public double value;
4:
5: public Number(double value) {
6: this.value = value;
7: }
8:
9: public void Evaluate(Stack<double> stack) {
10: stack.Push(value);
11: }
12:
13: public void Print(Stack<string> stack) {
14: stack.Push(value.ToString());
15: }
16: }
Number is evaluated and printed by simply pushing its contained value onto the stack.
The binary operators are modeled as follows:
1: public delegate double BinaryFunction(double a, double b);
2:
3: public class BinaryOperator : IElement {
4:
5: private BinaryFunction binaryFunction;
6: private string name;
7:
8: public BinaryOperator(BinaryFunction binaryFunction, string name) {
9: this.binaryFunction = binaryFunction;
10: this.name = name;
11: }
12:
13: public void Evaluate(Stack<double> stack) {
14: double b = stack.Pop();
15: double a = stack.Pop();
16: stack.Push(binaryFunction(a, b));
17: }
18:
19: public void Print(Stack<string> stack) {
20: string b = stack.Pop();
21: string a = stack.Pop();
22: stack.Push(string.Format("({0} {1} {2})", a, name, b));
23: }
24: }
The constructor accepts a binary function, defined by the delegate, and a name. The Evaluate() and Print() functions are straightforward: they pop two values off the stack, process them, and push a new value back onto the stack.
The same idea is used for unary operators. The UnaryFunction delegate is declared analogously to BinaryFunction: public delegate double UnaryFunction(double a);
1: public class UnaryOperator : IElement {
2:
3: private UnaryFunction unaryFunction;
4: private string name;
5: private bool before;
6:
7: public UnaryOperator(
8: UnaryFunction unaryFunction, string name, bool before) {
9: this.unaryFunction = unaryFunction;
10: this.name = name;
11: this.before = before;
12: }
13:
14: public void Evaluate(Stack<double> stack) {
15: stack.Push(unaryFunction(stack.Pop()));
16: }
17:
18: public void Print(Stack<string> stack) {
19: stack.Push(string.Format(
20: before ? "{0}({1})" : "({1}){0}", name, stack.Pop()));
21: }
22: }
Unary operators may be printed prefix or postfix. For instance, negation is denoted by a minus-sign to the left of the value. Factorial, on the other hand, is denoted by an exclamation mark to the right of the value. The before parameter passed into the constructor enables Print() to show the operator before or after the value.
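The prefix/postfix choice amounts to a one-line formatting rule; here it is as a hedged Python sketch (my names, not the article's):

```python
def fmt_unary(name, operand, before):
    # before=True prints prefix (e.g. negation), False prints postfix (factorial)
    return f"{name}({operand})" if before else f"({operand}){name}"

print(fmt_unary('-', '4', True))    # -(4)
print(fmt_unary('!', '4', False))   # (4)!
```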
A list of IElement objects represents an RPN expression in deferred execution form. To evaluate and to print the expression, the following methods are used:
1: private double EvalutateExpression(List<IElement> elements,
2: Stack<double> stack) {
3: foreach (IElement element in elements) {
4: element.Evaluate(stack);
5: }
6: return stack.Pop();
7: }
8:
9: private void PrintExpression(List<IElement> elements, double target,
10: Stack<string> stack) {
11: foreach (IElement element in elements) {
12: element.Print(stack);
13: }
14: string expression = stack.Pop();
15: Console.WriteLine("{0} = {1}", expression, target);
16: }
The stack passed into the methods starts out empty. The methods iterate over the list using the stack to hold intermediate values. The final value in the stack is returned.
Finally, to find an expression that equals a specified target value, a large recursive method is used. I’ll break it down for you:
1: private void Solve(
2: Number[] numbers,
3: UnaryOperator[] unaryOperators,
4: BinaryOperator[] binaryOperators,
5: double target,
6: List<IElement> elements,
7: bool[] numbersAvailable,
8: bool[] unaryOperatorsAvailable,
9: int stackSize,
10: Stack<double> numberStack,
11: Stack<string> stringStack) {
The numbers, unaryOperators, binaryOperators and target are the parameters that specify the problem. The elements list is the RPN expression under construction. The numbersAvailable boolean array is used as shown in a prior example for computing the permutations of the numbers. Since unary operators can be applied indefinitely, this recursive function prevents duplicates using the unaryOperatorsAvailable array. If you want the expression to contain a specific unary operator a certain number of times, list the operator that number of times within the unaryOperators array. The stackSize represents the size of the stack if the partially formed RPN expression in elements were to be evaluated. Finally, numberStack and stringStack are used for evaluating and printing the RPN expression once it is fully formed.
The method expands the partial RPN expression by plugging in every possible number:
1: bool remainingNumbers = false;
2: double lastNumber = Double.NaN;
3: for (int i = 0; i < numbersAvailable.Length; i++) {
4: if (numbersAvailable[i] && numbers[i].value != lastNumber) {
5: remainingNumbers = true;
6: elements.Add(numbers[i]);
7: numbersAvailable[i] = false;
8: Solve(numbers, unaryOperators, binaryOperators, target, elements,
9: numbersAvailable, unaryOperatorsAvailable, stackSize + 1,
10: numberStack, stringStack);
11: numbersAvailable[i] = true;
12: elements.RemoveAt(elements.Count - 1);
13: }
14: }
The loop is analogous to the loop used for computing the number permutations seen in a prior example. Each iteration checks that the number is available and that it wasn't already plugged into this position in a previous iteration (the numbers array is pre-sorted). If a number is available, it is appended to the RPN expression, the recursive method is called again, and finally the number is removed from the expression. The remainingNumbers flag is set if an available number was discovered. Note that appending a number to the expression increases the stack size by 1; see the value passed into the recursive call.
Next, the method does virtually the same thing for the unary operators. The only difference is that it also has to check that the stack size is at least 1. When the recursive call is made, the stackSize variable is left unchanged.
1: if (stackSize >= 1) {
2: for (int i = 0; i < unaryOperatorsAvailable.Length; i++) {
3: if (unaryOperatorsAvailable[i]) {
4: elements.Add(unaryOperators[i]);
5: unaryOperatorsAvailable[i] = false;
6: Solve(numbers, unaryOperators, binaryOperators, target, elements,
7: numbersAvailable, unaryOperatorsAvailable, stackSize,
8: numberStack, stringStack);
9: unaryOperatorsAvailable[i] = true;
10: elements.RemoveAt(elements.Count - 1);
11: }
12: }
13: }
Then, the method plugs in every possible binary operator. This code is simpler because it doesn't have to check for duplicates:
1: if (stackSize >= 2) {
2: for (int i = 0; i < binaryOperators.Length; i++) {
3: elements.Add(binaryOperators[i]);
4: Solve(numbers, unaryOperators, binaryOperators, target, elements,
5: numbersAvailable, unaryOperatorsAvailable, stackSize - 1,
6: numberStack, stringStack);
7: elements.RemoveAt(elements.Count - 1);
8: }
9: }
Finally, here’s the remainder of the recursive method:
1: if (stackSize == 1 && !remainingNumbers) {
2: if (target == EvalutateExpression(elements, numberStack)) {
3: PrintExpression(elements, target, stringStack);
4: }
5: }
6: }
It checks that there is exactly one element on the stack and that there are no numbers left to append to the RPN expression. If so, it evaluates the expression and compares the result to the target. If it's the expression we're looking for, it gets printed.
Here’s the bootstrap method that calls the recursive function:
1: public void Solve(double[] numbers, UnaryOperator[] unaryOperators,
2: BinaryOperator[] binaryOperators, double target) {
3: Array.Sort(numbers);
4: Number[] nums = new Number[numbers.Length];
5: for (int i = 0; i < numbers.Length; i++) {
6: nums[i] = new Number(numbers[i]);
7: }
8: List<IElement> elements = new List<IElement>();
9: bool[] numbersAvailable = new bool[numbers.Length];
10: for (int i = 0; i < numbersAvailable.Length; i++) {
11: numbersAvailable[i] = true;
12: }
13: bool[] unaryOperatorsAvailable = new bool[unaryOperators.Length];
14: for (int i = 0; i < unaryOperatorsAvailable.Length; i++) {
15: unaryOperatorsAvailable[i] = true;
16: }
17: Stack<double> numberStack = new Stack<double>();
18: Stack<string> stringStack = new Stack<string>();
19: Solve(nums, unaryOperators, binaryOperators, target, elements,
20: numbersAvailable, unaryOperatorsAvailable, 0, numberStack,
21: stringStack);
22: }
It’s pretty straightforward. Note that numberStack and stringStack are allocated here and they are only used for evaluating and printing fully formed RPN expressions.
To use the bootstrap method for solving The 24 Puzzle, the following call is made:
1: public static void Solve24Puzzle() {
2: Example4 example4 = new Example4();
3: example4.Solve(
4: new double[] { 1, 3, 4, 6 },
5: new UnaryOperator[0],
6: new BinaryOperator[] {
7: new BinaryOperator((a, b) => a + b, "+"),
8: new BinaryOperator((a, b) => a - b, "-"),
9: new BinaryOperator((a, b) => a * b, "*"),
10: new BinaryOperator((a, b) => a / b, "/")
11: },
12: 24);
13: }
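As a cross-check outside C#, here is a hedged Python brute force for the same puzzle. It combines numbers pairwise instead of building RPN lists, so it covers the same search space by a different route; all the names are mine, not the article's:

```python
def solve(numbers, target=24, eps=1e-6):
    ops = (('+', lambda a, b: a + b),
           ('-', lambda a, b: a - b),
           ('*', lambda a, b: a * b),
           ('/', lambda a, b: a / b))

    def search(vals, exprs):
        if len(vals) == 1:
            return exprs[0] if abs(vals[0] - target) < eps else None
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                rest_v = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                rest_e = [exprs[k] for k in range(len(vals)) if k not in (i, j)]
                for sym, fn in ops:
                    if sym == '/' and abs(vals[j]) < eps:
                        continue  # avoid division by zero
                    found = search(rest_v + [fn(vals[i], vals[j])],
                                   rest_e + ['(%s %s %s)' % (exprs[i], sym, exprs[j])])
                    if found:
                        return found
        return None

    return search([float(n) for n in numbers], [str(n) for n in numbers])

print(solve([1, 3, 4, 6]))
```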
How many integers can you generate using only four 4’s and any set of functions? The recursive method used in The 24 Puzzle solution can be slightly modified to solve The Four 4’s puzzle. Instead of hunting for a target value, all expressions that evaluate to an integer are stored in a SortedDictionary<int, List<IElement>>. To make it more interesting, only the shortest expressions are kept. Here’s the modified segment of code:
1: if (stackSize == 1 && !remainingNumbers) {
2: double value = EvalutateExpression(elements, numberStack);
3: int floored = (int)Math.Floor(value);
4: if (value == floored) {
5: if (expressions.ContainsKey(floored)) {
6: if (elements.Count < expressions[floored].Count) {
7: expressions[floored] = new List<IElement>(elements);
8: }
9: } else {
10: expressions[floored] = new List<IElement>(elements);
11: }
12: }
13: }
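The "keep only the shortest expression per integer" bookkeeping can be sketched in Python with toy data (hedged; the real dictionary holds IElement lists, and these RPN token lists are illustrative):

```python
import math

expressions = {}

def record(value, elements):
    floored = math.floor(value)
    if value == floored:  # only keep integer-valued expressions
        if floored not in expressions or len(elements) < len(expressions[floored]):
            expressions[floored] = list(elements)

record(8.0, ['4', '4', '+', '4', '+', '4', '-'])  # 4 + 4 + 4 - 4 = 8
record(8.0, ['4', '4', '+'])                      # 4 + 4 = 8; shorter, replaces it
record(0.5, ['4', '4', '4', '+', '/'])            # 4 / (4 + 4); non-integer, ignored
print(expressions[8])  # ['4', '4', '+']
```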
For binary operators, I’ll use addition, subtraction, multiplication, division and exponentiation. For unary operators I’ll use negation, square-root and factorial. However, to reduce the computation time, I’ll request that each unary operator be used at most once. If you want to use a particular unary operator more than once, simply add it to the UnaryOperator array as many times as desired.
1: public static void SolveFour4sPuzzle() {
2: Example5 example5 = new Example5();
3: UnaryOperator negate = new UnaryOperator(a => -a, "-", true);
4: UnaryOperator squareRoot
5: = new UnaryOperator(a => Math.Sqrt(a), "sqrt", true);
6: UnaryOperator factorial = new UnaryOperator(a => {
7: if (a == Math.Floor(a) && a >= 0 && a <= 12) {
8: return factorials[(int)a];
9: } else {
10: return Double.NaN;
11: }
12: }, "!", false);
13: example5.FindAll(
14: new double[] { 4, 4, 4, 4 },
15: new UnaryOperator[] { negate, squareRoot, factorial },
16: new BinaryOperator[] {
17: new BinaryOperator((a, b) => a + b, "+"),
18: new BinaryOperator((a, b) => a - b, "-"),
19: new BinaryOperator((a, b) => a * b, "*"),
20: new BinaryOperator((a, b) => a / b, "/"),
21: new BinaryOperator((a, b) => Math.Pow(a, b), "^")
22: });
23: }
For factorial, I defined an unsigned integer array holding 0! to 12!. Factorial can only be applied to integer intermediate values in that range. The if-statement returns NaN if it can't evaluate the input value; consequently, the entire RPN expression evaluates to NaN in that case.
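A hedged Python rendering of the same guard (using math.factorial instead of a lookup table):

```python
import math

def safe_factorial(a):
    # only defined for integer values 0..12, mirroring the guarded C# version
    if a == math.floor(a) and 0 <= a <= 12:
        return float(math.factorial(int(a)))
    return float('nan')  # NaN poisons the whole expression, as in the article

print(safe_factorial(4))    # 24.0
print(safe_factorial(2.5))  # nan
```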
Here’s a sample of the output:
10 = (4 + (4 + (4 - sqrt(4))))
11 = (4 + ((4 + (4)!) / 4))
12 = (4 * (4 - (4 / 4)))
13 = ((4 + (sqrt(4) * (4)!)) / 4)
14 = (4 + (4 + (4 + sqrt(4))))
15 = ((4 * 4) - (4 / 4))
16 = (4 + (4 + (4 + 4)))
17 = ((4 * 4) + (4 / 4))
18 = (4 * (4 + (sqrt(4) / 4)))
19 = ((4)! - (4 + (4 / 4)))
20 = (4 * (4 + (4 / 4)))
21 = ((4 / 4) - (4 - (4)!))
22 = (4 + ((4 * 4) + sqrt(4)))
23 = (((4 * (4)!) - 4) / 4)
24 = (4 + (4 + (4 * 4)))
25 = ((4 + (4 * (4)!)) / 4)
26 = (((4 + 4) / 4) + (4)!)
27 = (4 - ((4 / 4) - (4)!))
28 = ((4 * (4 + 4)) - 4)
29 = (4 + ((4 / 4) + (4)!))
30 = ((4 * (4 + 4)) - sqrt(4))
This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)
Michael Birken wrote: Can you present a non-brute-force approach that you can do by hand?
The Raspberry Pi has some great add-on hardware, such as Pi Tops that fit directly on top of the Pi module and wired components.
A good number of the wired Arduino-designed parts can now also be used with Raspberry Pis. Examples include the HT16K33 and TM1637 seven-segment displays.
Nothing beats using real hardware to show Pi values and status, but if you’re missing the hardware or you’d like to duplicate a displayed value remotely, then a soft version of the hardware can be very useful.
In this blog we’ll look at three Python soft display examples: a seven-segment display, an LCD keypad top, and a gauge.
Seven Segment Display
The tk_tools module is based on the Python tkinter module and has some cool components such as LEDs, charts, gauges and seven-segment displays. The module is installed by:
pip install tk_tools
The tk_tools Seven Segment component can function like an Arduino TM1637 or HT16K33 display component. It supports a height, a digit_color and a background color.
Below is some example code that shows the Pi’s CPU temperature in the soft seven-segment display.
import tkinter as tk
import tk_tools

root = tk.Tk()
root.title("CPU Temp")

ss = tk_tools.SevenSegmentDigits(root, digits=5, background='black',
                                 digit_color='yellow', height=100)
ss.grid(row=0, column=1, sticky='news')

# Update the Pi CPU temperature every 1 second
def update_gauge():
    # Get the Raspberry Pi CPU temperature
    tFile = open('/sys/class/thermal/thermal_zone0/temp')
    # Scale the temperature from millidegrees C to degrees C
    thetemp = int(float(tFile.read()) / 1000)
    tFile.close()
    ss.set_value(str(thetemp))
    root.after(1000, update_gauge)

root.after(500, update_gauge)
root.mainloop()
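The temperature-reading step can be factored into a small helper. This is a hedged sketch that accepts any file-like source, so it can be tested off-Pi; on a real Pi you would pass open('/sys/class/thermal/thermal_zone0/temp'):

```python
import io

def read_cpu_temp(source):
    # source: any file-like object yielding the temperature in millidegrees C
    with source as f:
        return int(float(f.read()) / 1000)

# Simulated sysfs reading of 47234 millidegrees C
print(read_cpu_temp(io.StringIO('47234\n')))  # 47
```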
LCD Keypad
I’ve used the LCD keypad on a lot of my Pi projects (below is a Pi FM radio example). It supports 2 lines of text and has 5 (or 6) buttons that can be used in your Python app.
The standard Python Tkinter library can be used to create a custom LCD keypad display. For my example I tried to replicate the look-and-feel of the Pi Top that I had, but you could enhance or change it to meet your requirements.
Below is an example that writes the button pushed to the 2 line label.
import tkinter as tk

def myfunc(action):
    print("Requested action: ", action)
    Line1.config(text="Requested action: \n" + action)

root = tk.Tk()
root.title("LCD Keypad Shield")
root.configure(background='black')

Line1 = tk.Label(root, text="ADC key testing \nRight Key OK ",
                 fg="white", bg="blue", font="Courier 45",
                 borderwidth=4, relief="raised")
Line1.grid(row=0, column=0, columnspan=15, rowspan=2)

selectB = tk.Button(root, width=10, text="SELECT", bg='silver',
                    command=lambda: myfunc("SELECT"), relief="raised")
selectB.grid(row=3, column=0)
leftB = tk.Button(root, width=10, text="LEFT", bg='silver',
                  command=lambda: myfunc("LEFT"), relief="raised")
leftB.grid(row=3, column=1)
rootB = tk.Button(root, width=10, text="UP", bg='silver',
                  command=lambda: myfunc("UP"), relief="raised")
rootB.grid(row=2, column=2)
rightB = tk.Button(root, width=10, text="DOWN", bg='silver',
                   command=lambda: myfunc("DOWN"), relief="raised")
rightB.grid(row=3, column=3)
bottomB = tk.Button(root, width=10, text="RIGHT", bg='silver',
                    command=lambda: myfunc("RIGHT"), relief="raised")
bottomB.grid(row=4, column=2)
rstB = tk.Button(root, width=10, text="RST", bg='silver',
                 command=lambda: myfunc("RESET"), relief="raised")
rstB.grid(row=3, column=4)

root.mainloop()
Gauge and Rotary Scale
There aren’t any mainstream low cost gauges that are available for the Rasp Pi, but I wanted to show how to setup a soft gauge.
The tk_tools gauge component is very similar to a speedometer. The rotary scale is more like a 180° circular meter. Both components support digital values, units and color scales.
Below is a gauge example that reads the Pi CPU temperature every second.
import tkinter as tk
import tk_tools

root = tk.Tk()
root.title("CPU Temp")

my_gauge = tk_tools.Gauge(root, height=200, width=400, max_value=70,
                          label='CPU Temp', unit='°C', bg='grey')
my_gauge.grid(row=0, column=0, sticky='news')

def update_gauge():
    # Get the Raspberry Pi CPU temperature
    tFile = open('/sys/class/thermal/thermal_zone0/temp')
    # Scale the temperature from millidegrees C to degrees C
    thetemp = int(float(tFile.read()) / 1000)
    tFile.close()
    # Update the gauge according to the value
    my_gauge.set_value(thetemp)
    root.after(1000, update_gauge)

root.after(500, update_gauge)
root.mainloop()
Final Thoughts
There are a lot of soft hardware components that could be created.
I found myself getting tripped up thinking: “What would be a good tkinter component, and what should be a web component?” This is especially true when looking at charting examples, or when I was looking at remote connections.
2 thoughts on “Simulate Raspberry Pi Hardware”
I used the 7 segment display to create a scoreboard.
Nice
How can I remove a retweet from a user who has blocked me via the Twitter API?
I'm using the POST method on the following API call:
I've set Authorization Type to OAuth 1.0 and "Add params to header".
How can I get my Consumer Key, Consumer Secret, Token and Token Secret for my user account?
I added my phone number and tried creating an App at, using those keys to send the request.
But I still get this error when sending the request:
{ "errors": [ { "code": 32, "message": "Could not authenticate you." } ] }
What am I doing wrong?
See also questions close to this topic
- Python REST API not working properly in Docker container
I created a simple ML API using python with flask. It gets data from sample.csv and trains a logistic regression model based on that. I also have a '/predict' endpoint where I can input parameters in for the model to predict.
Example: requesting
localhost:80/predict?weight1=1.2&weight2=0.00123&weight3=0.45
will output
{ "predicted": 1 }
main.py:
from sklearn.linear_model import LogisticRegression
from flask import Flask, request
import numpy as np

# Create Flask object to run
app = Flask(__name__)

@app.route('/')
def home():
    return "Predicting status from other features"

@app.route('/predict')
def predict():
    # Get values from the server
    weight1 = request.args['weight1']
    weight2 = request.args['weight2']
    weight3 = request.args['weight3']
    testData = np.array([weight1, weight2, weight3]).reshape(1, -1)
    class_predicted = logisticRegr.predict(testData.astype(float))
    output = "{ \"predicted\": " + "\"" + class_predicted[0] + "\"" + "}"
    return output

# Train and load the model based on the MetroPCS_Sample
def load_model():
    global logisticRegr
    label_y = []
    label_x = []
    with open('sample.csv') as f:
        lines = f.readlines()
        for line in lines[1:]:
            # Adding labels to label_y
            label_y.append(int(line[0]))
            line = line.strip().split(",")
            x_data = []
            for e in line[1:]:
                # Adding other features to label_x
                x_data.append(float(e))
            label_x.append(x_data)
    train_x = label_x[:700]
    train_y = label_y[:700]
    test_x = label_x[700:1000]
    test_y = label_y[700:1000]
    logisticRegr = LogisticRegression()
    logisticRegr.fit(train_x, train_y)
    predictions = logisticRegr.predict(test_x)
    score = logisticRegr.score(test_x, test_y)
    # print score

if __name__ == "__main__":
    print("Starting Server...")
    # Call function that loads Model
    load_model()
    # Run Server
    app.run(host="127.0.0.1", debug=True, port=80)
Everything works well when I run this script without a container.
However, when I placed it in a container and run it, I get the following error:
NameError: global name 'logisticRegr' is not defined
Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python2.7 # copy over our requirements.txt file COPY requirements.txt /tmp/ # upgrade pip and install required python packages RUN pip install -U pip RUN pip install -r /tmp/requirements.txt COPY ./app /app ENV MESSAGE "hello"
requirements.txt
Flask numpy sklearn scipy
Do you know what can cause the NameError when the script is inside a container?
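A likely cause (an educated guess, not stated in the question): the tiangolo/uwsgi-nginx-flask base image serves the app through uWSGI, which imports main.py as a module, so the `if __name__ == "__main__"` block, and therefore load_model(), never runs. A minimal hedged sketch of the fix is to initialize at import time:

```python
# Sketch (assumed structure): initialize the model at module import time
# instead of under the __main__ guard, so a WSGI server that merely imports
# the module still triggers training.
logisticRegr = None

def load_model():
    global logisticRegr
    # Placeholder standing in for the LogisticRegression().fit(...) code above
    logisticRegr = "trained-model"

load_model()  # runs at import time, under both "python main.py" and uWSGI

print(logisticRegr)  # trained-model
```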
- Secure way to implement a deferred authentication
I am implementing a system where the user can invoke authorization through a third party system. The authentication can take a long time to finish. What happens is that a user has to open an app on his phone and then enter a code.
From the API perspective, right now, the flow looks like this:
Request:
POST /login
body: { "user": "alice" }
Response: (after 10-20 seconds)
body: { "status": "success", "token": "valid.jwt.token" }
The thing is, the front-end client has to keep the connection open for a long time, and it seems like an anti-pattern to me. The connection stays open for 10-20 seconds.
What I wanted to do is change the way it works and do it like this instead:
Request:
POST /login
body: { "user": "alice" }
Response: (instant)
body: { "uuid": "auth_request_id", }
And then the user would poll a GET /auth/{auth_request_id} endpoint that would respond with the CURRENT state of the authentication.
So for example, this is how the response would look like before the user opened the phone auth app.
Response:
GET /auth/{auth_request_id}
body: { "ready": false, "state": "user_opened_app" }
And this when they finished the phone flow successfully:
Response:
GET /auth/{auth_request_id}
body: { "ready": true, "state": "finished", "token": "valid.jwt"token" }
I have researched the topic online and came across OAuth 2 and all the other standards but they do not fit my use case.
The reason is that the user will stay on the page where the log-in form is and perform no redirects at all. Then he will open another device (phone) where he will perform 2 factor authentication.
I want to dynamically show the user the state of his authorization. For example, if the user didn't open the app yet, I want to tell him to do so. Additionally, I want to increase the resiliency of the log-in system by getting rid of the long lasting HTTPS connection. With this polling solution, if one api server goes down, another one will respond and take over based on the request UUID.
The only problem I have is knowing whether a set-up like that is secure. I understand that if someone stole the uuid and started polling the GET /login/{uuid} endpoint, they could in theory steal the final JWT token.
How would I design this flow instead so it is safe? Or is this design safe for authentication?
In the end, if someone steals your cookies or JWT, they'd have your authentication details anyway - so maybe this cannot be protected against.
Note: Assume that the traffic will ALWAYS happen over HTTPS so the UUID cannot be stolen by network packet capture.
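The polling design described above can be sketched as a toy in-memory state store (hedged: all names and shapes are illustrative, not from any real framework):

```python
import uuid

auth_requests = {}  # request_id -> current authentication state

def start_login(user):
    request_id = str(uuid.uuid4())
    auth_requests[request_id] = {'ready': False, 'state': 'pending', 'token': None}
    return request_id

def poll(request_id):
    # what GET /auth/{auth_request_id} would return
    return auth_requests[request_id]

def complete(request_id, token):
    # called when the phone app finishes the second factor
    auth_requests[request_id] = {'ready': True, 'state': 'finished', 'token': token}

rid = start_login('alice')
print(poll(rid)['ready'])    # False
complete(rid, 'valid.jwt.token')
print(poll(rid)['state'])    # finished
```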
- Create new url to push api data dynamically with jQuery
I have an API, and would like the user to be able to click on one of many titles [ex. Title 1], which loads a custom url. If my site is example.com, clicking Title 1 would bring you to example.com/title-1. The new link would include the title, description, date publication, etc.
The API is called with ajax, and I'm using jquery.
//...
let byMonth = Object.entries(groupByMonth)
  .map(([k, v]) => ({[k]: v}));

var blogPosts = $('#blog-posts');

$.each(byMonth, function(key, value) {
  var outer = byMonth[key]
  $.each(outer, function(k, v) {
    var inner = outer[k]
    var monthBlogPosts = $('<div class = "month"> </div>').appendTo(blogPosts);
    $.each(inner, function(i, obj) {
      var title = inner[i].Title
      var description = inner[i].Description
      var date = inner[i].DatePublished
      $('<div class = "post-title"><h3>' + title + '</h3>').appendTo(monthBlogPosts)
    })
  })
});
This is some of my code so far, which sorts JSON data into a new array groupByMonth, and outputs post titles to their respective div by month. The titles are temporarily in <h3> tags, rather than <a> tags.
- How do I authenticate Twitter user with oauth without reloading page?
I am using Twitter's API through twitter-lite to log in and post as a user.
Right now I redirect the user to twitter, which sends back the necessary oauth token for me to post as them.
But I want to be able to do this from a form with other data on it, without leaving that form page and losing the data. Is this possible? Or does the user have to be logged in first?
- Android twitter sdk integration failed
I want to authorize my app's users, but following the Twitter Kit provided by Twitter, it's not working: the app crashes. Help me if anyone has experience with it. Thanks in advance.
- HTTP 403 error when using lookupUsers for a list of twitter handles
I have a list of Twitter handles in a CSV, and I am trying to extract data for all these handles. My CSV contains around 200 Twitter handles.
users <- read.csv("Twitter.csv")
users1 <- lookupUsers(users[1:nrow(users), 1])
however, I am getting the following error:
Error in twInterfaceObj$doAPICall(paste("users", "lookup", sep = "/"), : Forbidden (HTTP 403).
Anybody knows why am I getting this error and how can I fix it?
- Postman Chrome Plugin returns correct response but not the Postman App
I am trying to implement token based authentication in web api and going through this tutorial
When using the browser or the postman chrome plugin, I am getting correct response - current server time. However when using postman app, I am not getting the same
Can you please explain why this could be happening? I would like to use the app.
- Header param with underscore in http requests not available at server side when requesting via postman
Following is the curl export of the API call which is failing -
curl -X GET \
  '' \
  -H 'Content-Type: application/json' \
  -H 'auth_token: iubsaicbsaicbasiucbsa'
The header param auth_token is not available at all on the server side, as checked from logs.
The same curl however works when directly issued as a command. I have the latest postman version v6.2.3 installed. Also, the same API end point works when requested via other tools like Advanced REST client of chrome.
Previously, I had also read this thread.
Many servers, like nginx, have a config which if set, discards headers with underscore in name.
However, I could not verify this because I could not find out exactly how the server is deployed. It is a Node application, and we run this command to start it:
nohup /bin/forever start -o logs/out.log -e logs/err.log app.js
ps -ef | grep node shows the following:
root 5981 1 0 Jul19 ? 00:00:00 /root/.nvm/v7.2.1/bin/node /usr/lib/node_modules/forever/bin/monitor app.js
root 5991 5981 0 Jul19 ? 00:00:04 /root/.nvm/v7.2.1/bin/node /usr/local/another/path/to/app.js
- How to check the post method in postman from an array
How can I send this data using the POST method in Postman?
["name": "fenna", "question": [["answer": "Yes, always", "question": "45"], ["answer": "Very satisfied", "question": "46"], ["answer": "Very easy", "question": "47"]]]
This is the data I need to post. But how do I write it as JSON in Postman?
- Premium Developer Account Usage in from C# to the Twitter Api Without Any Library Like Tweetinvi/Tweetsharp
First of all, I tried really hard to solve this myself, but I can't find a solution because my main language is not English and it is complicated. First, using MVC in Visual Studio with Tweetinvi, I got the data about who tweeted what on a topic, but that was a standard search; now I have a premium developer account. In the standard search some data came back null, and Tweetinvi doesn't support premium. So I plan to access the Twitter API without any library like Tweetinvi, with my developer account, and get all the data within a website made with MVC. In short, I want to develop a website that analyzes Twitter data, but I am at the starting line.
How can I contact Twitter from Visual Studio without libraries like Tweetinvi? Do I have to use REST or RestSharp? Or do I have to use twurl? I am really confused and will be glad if you can help.
- How to change Twitter OAuth permissions
I'm using twitter-kit-ios. I want to change the Twitter OAuth permissions from Read to Read and Write. The documentation says:
If a permission level is changed, any user tokens already issued to that Twitter app must be discarded and users must re-authorize the app in order for the token to inherit the updated permissions.
What does this mean? Do I have to somehow log users out? Or does it manage the permission change automatically?
- Django - Social Auth (Request URL is showing http instead of https)
I have integrated an SSL certificate in my application and I have HTTPS on my site, but with django-social-auth the request URL goes out only with http. I know that Twitter does not allow requests from http. Link:
My site has https but the request goes with http. Why, and how do I resolve it? Thanks.
Transform one string into another, to a given length
#include <string.h>

size_t strxfrm( char* dst, const char* src, size_t n );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The strxfrm() function transforms, for no more than n characters, the string pointed to by src to the buffer pointed to by dst. The transformation uses the collating sequence selected by setlocale() so that two transformed strings compare identically (using the strncmp() function) to a comparison of the original two strings using strcoll().
If the collating sequence is selected from the "C" locale, strxfrm() is equivalent to strncpy(), except that strxfrm() doesn't pad the dst argument with null characters when the argument src is shorter than n characters.
The length of the transformed string. If this length is more than n, the contents of the array pointed to by dst are indeterminate.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <locale.h>

char src[] = { "A sample STRING" };
char dst[20];

int main( void )
{
    size_t len;

    setlocale( LC_ALL, "C" );
    printf( "%s\n", src );

    len = strxfrm( dst, src, 20 );
    printf( "%s (%u)\n", dst, len );

    return EXIT_SUCCESS;
}
produces the output:
A sample STRING
A sample STRING (15)
ANSI, POSIX 1003.1
setlocale(), strcoll(), wcsxfrm()
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/s/strxfrm.html
How to import JSON files in Rollup plugin JSON
Importing JSON files is pretty important. That’s how most of us store config settings about our app or even store mock data.
That’s what I do most of the time. Store mock data for local testing purposes.
I’m using Rollup with my Svelte project, and as I tried to import my mock data. I got this error message:
Error: 'default' is not exported by src/__mock__/networks.json
It yelled at me that I couldn’t export the data from my networks.json file.
The problem is that I need to convert my JSON files into ES6 modules.
The solution: Rollup plugin json module
We can solve this problem by using the rollup-plugin-json module.
npm i -D @rollup/plugin-json
Now you must update your rollup.config.js file.
import json from "@rollup/plugin-json";

export default {
  // ... other configs
  plugins: [
    // ... other rollup plugins
    json()
  ]
};
Another cool thing about this plugin is that you can compact the bundle size.
For example, your JSON file can get huge and you might not need everything. You can pass the parameter compact as
true, and it will generate the smallest amount of code.
json({ compact: true, })
I like to tweet about Rollup and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/rollup-plugin-json-import
Synchronization Primitives¶
Source code: Lib/asyncio/locks.py
asyncio synchronization primitives are designed to be similar to those of the threading module with two important caveats:

- asyncio primitives are not thread-safe, therefore they should not be used for OS thread synchronization (use threading for that);
- methods of these synchronization primitives do not accept the timeout argument; use the asyncio.wait_for() function to perform operations with timeouts.

asyncio has the following basic synchronization primitives:
Lock¶
- class
asyncio.
Lock¶
Implements a mutex lock for asyncio tasks. Not thread-safe.
An asyncio lock can be used to guarantee exclusive access to a shared resource.
The preferred way to use a Lock is an
async withstatement:
lock = asyncio.Lock() # ... later async with lock: # access shared state
which is equivalent to:
lock = asyncio.Lock() # ... later await lock.acquire() try: # access shared state finally: lock.release()
Changed in version 3.10: Removed the loop parameter.
coroutine acquire()¶

Acquire the lock.

This method waits until the lock is unlocked, sets it to locked and returns True.

When more than one coroutine is blocked in acquire() waiting for the lock to be unlocked, only one coroutine eventually proceeds.

Acquiring a lock is fair: the coroutine that proceeds will be the first coroutine that started waiting on the lock.

release()¶

Release the lock.

When the lock is locked, reset it to unlocked and return.

If the lock is unlocked, a RuntimeError is raised.
Event¶

class asyncio.Event¶

An event object. Not thread-safe.

An asyncio event can be used to notify multiple asyncio tasks that some event has happened.

An Event object manages an internal flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is set to true. The flag is set to false initially.
Changed in version 3.10: Removed the loop parameter.
Example:
async def waiter(event):
    print('waiting for it ...')
    await event.wait()
    print('... got it!')

async def main():
    # Create an Event object.
    event = asyncio.Event()

    # Spawn a Task to wait until 'event' is set.
    waiter_task = asyncio.create_task(waiter(event))

    # Sleep for 1 second and set the event.
    await asyncio.sleep(1)
    event.set()

    # Wait until the waiter task is finished.
    await waiter_task

asyncio.run(main())
coroutine wait()¶

Wait until the event is set.

If the event is set, return True immediately. Otherwise block until another task calls set().

clear()¶

Clear (unset) the event.

Tasks awaiting on wait() will now block until the set() method is called again.
Condition¶

class asyncio.Condition(lock=None)¶

A Condition object. Not thread-safe.

An asyncio condition primitive can be used by a task to wait for some event to happen and then get exclusive access to a shared resource.

In essence, a Condition object combines the functionality of an Event and a Lock. It is possible to have multiple Condition objects share one Lock, which allows coordinating exclusive access to a shared resource between different tasks interested in particular states of that shared resource.

The optional lock argument must be a Lock object or None. In the latter case a new Lock object is created automatically.

Changed in version 3.10: Removed the loop parameter.

The preferred way to use a Condition is an async with statement:

cond = asyncio.Condition()

# ... later
async with cond:
    await cond.wait()

which is equivalent to:

cond = asyncio.Condition()

# ... later
await cond.acquire()
try:
    await cond.wait()
finally:
    cond.release()
coroutine acquire()¶

Acquire the underlying lock.

This method waits until the underlying lock is unlocked, sets it to locked and returns True.

notify(n=1)¶

Wake up at most n tasks (1 by default) waiting on this condition. The method is a no-op if no tasks are waiting.

The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a RuntimeError is raised.

notify_all()¶

Wake up all tasks waiting on this condition.

This method acts like notify(), but wakes up all waiting tasks.

The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a RuntimeError is raised.

release()¶

Release the underlying lock.

When invoked on an unlocked lock, a RuntimeError is raised.

coroutine wait()¶

Wait until notified.

If the calling task has not acquired the lock when this method is called, a RuntimeError is raised.

This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call. Once awakened, the Condition re-acquires its lock and this method returns True.
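The reference above does not include a complete Condition example; here is a minimal producer/consumer sketch showing wait() and notify() working together (the "work" payload and function names are illustrative):

```python
import asyncio

async def consumer(cond, items):
    async with cond:
        while not items:
            await cond.wait()      # releases the lock while sleeping
        return items.pop()

async def producer(cond, items):
    async with cond:               # notify() requires the lock to be held
        items.append("work")
        cond.notify()              # wake one waiting task

async def main():
    cond = asyncio.Condition()
    items = []
    consumer_task = asyncio.create_task(consumer(cond, items))
    await asyncio.sleep(0)         # let the consumer start waiting
    await producer(cond, items)
    return await consumer_task

result = asyncio.run(main())
print(result)  # -> work
```

Note the `while not items` loop around wait(): waking up only means a notification arrived, so the predicate should always be re-checked under the lock.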
Semaphore¶

class asyncio.Semaphore(value=1)¶

A Semaphore object. Not thread-safe.

A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some task calls release().

The optional value argument gives the initial value for the internal counter (1 by default). If the given value is less than 0 a ValueError is raised.

Changed in version 3.10: Removed the loop parameter.

The preferred way to use a Semaphore is an async with statement:

sem = asyncio.Semaphore(10)

# ... later
async with sem:
    # work with shared resource

which is equivalent to:

sem = asyncio.Semaphore(10)

# ... later
await sem.acquire()
try:
    # work with shared resource
finally:
    sem.release()
coroutine acquire()¶

Acquire a semaphore.

If the internal counter is greater than zero, decrement it by one and return True immediately. If it is zero, wait until a release() is called and return True.

release()¶

Release a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore.

Unlike BoundedSemaphore, Semaphore allows making more release() calls than acquire() calls.
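A common use of a Semaphore is capping how many tasks touch a resource at once. A minimal sketch (the shared counters exist only to observe the limit; the limit of 2 and the worker body are illustrative):

```python
import asyncio

async def worker(sem, running, peaks):
    async with sem:                # at most 2 workers hold the semaphore
        running[0] += 1
        peaks.append(running[0])   # record observed concurrency
        await asyncio.sleep(0.01)  # simulate some work
        running[0] -= 1

async def main():
    sem = asyncio.Semaphore(2)
    running, peaks = [0], []
    # launch 6 workers; the semaphore lets only 2 run their body at a time
    await asyncio.gather(*(worker(sem, running, peaks) for _ in range(6)))
    return max(peaks)

peak = asyncio.run(main())
print(peak)  # -> 2
```

Even with six tasks scheduled, the recorded peak concurrency never exceeds the semaphore's initial value.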
BoundedSemaphore¶

class asyncio.BoundedSemaphore(value=1)¶

A bounded semaphore object. Not thread-safe.

Bounded Semaphore is a version of Semaphore that raises a ValueError in release() if it increases the internal counter above the initial value.
Changed in version 3.10: Removed the loop parameter.
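The difference from a plain Semaphore can be shown in a few lines (a sketch; the returned strings are just markers):

```python
import asyncio

async def main():
    # A plain Semaphore happily goes above its initial value...
    sem = asyncio.Semaphore(1)
    sem.release()                  # counter is now 2; no error raised

    # ...while a BoundedSemaphore treats the extra release() as a bug.
    bounded = asyncio.BoundedSemaphore(1)
    try:
        bounded.release()          # would exceed the initial value of 1
        return "no error"
    except ValueError:
        return "ValueError"

outcome = asyncio.run(main())
print(outcome)  # -> ValueError
```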
https://docs.python.org/3/library/asyncio-sync.html?highlight=asyncio%20event
Hi, first post here
I'm writing a program in C on Linux (Ubuntu) for a school project. One module of the program takes two arguments (patt and repl), reads lines from stdin (using getline()), and must replace every occurrence of patt with repl.
Two problems:
A. I get a segmentation fault when I try to free the pointer allocated by getline().
B. Even if I comment out that free() statement, the string I get after running the replace function is printed as if no replacement occurred! I'm passing a pointer to that string to the replacing function.
here's the short version of the code:
Thanx

Code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

int strreplace(char *str, char *patt, char *repl);

int main(int argc, char** argv)
{
    char *str;             //replace pattern
    size_t rsize;          //num of read files by getline()
    int nbytes;
    char *patt = argv[1];  //pattern to look for
    char *repl = argv[2];  //replace found pattern with this

    str = NULL;  //getline() needs NULL and 0 in order to allocate new mem
    rsize = 0;
    //getline() allocates the need memory for the line in to *str
    //it is user's responsibility to free() the allocated mem
    if((nbytes = getline(&str, &rsize, stdin)) < 0) {
        //end of file
        exit(1);
    }
    if( argc == 3 && (strstr(str, patt) != NULL) ) {
        strreplace(str, patt, repl);  //this function replaces patt with repl
        printf("str: %s", str);
    }
    return (EXIT_SUCCESS);
}

int strreplace(char *str, char *patt, char *repl)
{
    char *occ = NULL;  //ptr to first char of first occurence of the pattern in str
    char *temp;        //temp string to hold str
    int poss, pose;    //poss - pos of the first char of patt in str, pose - after last char of patt
    int diff = (strlen(repl) - strlen(patt));
    int tempsize;

    while((occ = strstr(str, patt)) != NULL) {
        tempsize = strlen(str) + diff + 1;
        temp = (char*)malloc(sizeof(char)*(tempsize));
        poss = strlen(str) - strlen(occ);  //getting the pos of the first char of patt in str
        pose = poss + strlen(patt);        //getting pos of the char after last char of patt in str
        memcpy(temp, str, poss);
        memcpy(temp + poss, repl, strlen(repl));
        memcpy(temp + poss + strlen(repl), str + pose, strlen(&(str[pose])));
        temp[tempsize] = '\0';
        free(str);  // here I get the seg.fault if i don't comment this out
        str = (char*)malloc(sizeof(char)*tempsize);
        memcpy(str, temp, tempsize);
        free(temp);
        //printf("-----\nstr: %s strlen: %d\n------\n", str, strlen(str));
    }
    return 0;  //success
}
ps. I've also posted this question on the Ubuntu forums (no answer yet).
http://cboard.cprogramming.com/c-programming/110586-problems-pointer-segfault-very-basic-prog.html
A JavaScript Exif info parser.
ExifReader is a JavaScript library that parses image files and extracts the metadata. It can also extract an embedded thumbnail. It can be used either in a browser or from Node. Supports JPEG, TIFF, PNG, HEIC, and WebP files with Exif, IPTC, XMP, ICC, and MPF metadata (depending on file type).
ExifReader is highly and easily configurable and the resulting bundle can be as small as 3 KiB (gzipped) if you're only interested in a few tags (e.g. date and/or GPS values). See section below on making a custom build.
ExifReader supports module formats ESM, AMD, CommonJS, and globals and can therefore easily be used from Webpack, RequireJS, Browserify, Node etc.
You can try it out on the examples site.
Support table
| File type | Exif | IPTC | XMP | ICC | MPF | Thumbnail |
|-----------|------|------|-----|-----|-----|-----------|
| JPEG      | yes  | yes  | yes | yes | yes | yes       |
| TIFF      | yes  | yes  | yes | yes | ??? | no        |
| PNG       | no   | no   | yes | no  | no  | no        |
| HEIC/HEIF | yes  | no   | no  | yes | ??? | no        |
| WebP      | yes  | no   | yes | yes | ??? | yes       |
??? = MPF may be supported in any file type using Exif since it's an Exif extension, but it has only been tested on JPEGs.
If you're missing something that you think should be supported, file an issue with an attached example image and I'll see what I can do.
Notes for exif-js users
If you come here from the popular but now dead exif-js package, please let me know if you're missing anything from it and I will try to help you. Some notes:
Monetary support is not necessary for me to continue working on this, but in case you like this library and want to support its development you are very welcome to click the button below.
Easiest is through npm or Bower:
npm install exifreader --save
bower install exifreader --save
If you want to clone the git repository instead:
git clone [email protected]:mattiasw/ExifReader.git cd ExifReader npm install
After that, the transpiled, concatenated and minified ES5 file will be in the dist folder together with a sourcemap file.
Type definitions for TypeScript are included in the package. If you're missing any definitions for tags or something else, a pull-request would be very much welcome since I'm not using TypeScript myself.
ES module syntax:
import ExifReader from 'exifreader';
NOTE: TypeScript/Angular seems to sometimes have problems when using the default export. If you're seeing issues, use this syntax instead:
import * as ExifReader from 'exifreader';
CommonJS/Node modules:
const ExifReader = require('exifreader');
AMD modules:
requirejs(['/path/to/exif-reader.js'], function (ExifReader) { ... });
script tag:
There are two ways to load the tags. Either have ExifReader do the loading of the image file, or load the file yourself first and pass in the file buffer. The main difference is that the first one is asynchronous and the second one is synchronous.
const tags = await ExifReader.load(file);
const imageDate = tags['DateTimeOriginal'].description;
const unprocessedTagValue = tags['DateTimeOriginal'].value;

Where file is one of

const tags = ExifReader.load(fileBuffer);

Where fileBuffer is one of

- ArrayBuffer or SharedArrayBuffer (browser)
- Buffer (Node.js)
See the examples site for more directions on how to use the library.
By default, Exif, IPTC and XMP tags are grouped together. This means that if e.g. Orientation exists in both Exif and XMP, the first value (Exif) will be overwritten by the second (XMP). If you need to separate between these values, pass in an options object with the property expanded set to true:

const tags = ExifReader.load(fileBuffer, {expanded: true});
Tags that are unknown, either because they have been excluded by making a custom build or they are yet to be added into ExifReader, are by default not included in the output. If you need to see them there is an option that can be passed in:
const tags = ExifReader.load(fileBuffer, {includeUnknown: true});
If you discover an unknown tag that should be handled by ExifReader, please reach out by filing an issue.
If expanded: true is specified in the options, there will be a gps group. This group currently contains Latitude, Longitude, and Altitude which will be negative for values that are south of the equator, west of the IRM, or below sea level. These are often more convenient values for regular use. For some elaboration or if you need the original values, see Notes below.
The thumbnail and its details will be accessible through tags['Thumbnail']. There is information about e.g. width and height, and the thumbnail image data is stored in tags['Thumbnail'].image.

How you use it is going to depend on your environment. For a web browser you can either use the raw byte data in tags['Thumbnail'].image and use it the way you want, or you can use the helper property tags['Thumbnail'].base64 that is a base64 representation of the image. It can be used for a data URI like this:

const tags = ExifReader.load(fileBuffer);
imageElement.src = 'data:image/jpeg;base64,' + tags['Thumbnail'].base64;
If you're using node, you can store it as a new file like this:
const fs = require('fs');
const tags = ExifReader.load(fileBuffer);
fs.writeFileSync('/path/to/new/thumbnail.jpg', Buffer.from(tags['Thumbnail'].image));
See the examples site for more details.
The most important step will be to use a custom build so please do that.
If you are using Webpack 4 or lower and are only targeting web browsers, make sure to add this to your Webpack config (probably the webpack.config.js file):

node: {
  Buffer: false
}

Buffer is only used in Node.js but if Webpack sees a reference to it it will include a Buffer shim for browsers. This configuration will stop Webpack from doing that. Webpack 5 does this automatically.
Configuring a custom build can reduce the bundle size significantly.
NOTE 1: This functionality is in beta but should work fine. Please file an issue if you're having problems or ideas on how to make it better.
NOTE 2: This only changes the built file (exifreader/dist/exif-reader.js), not the source code. That means it's not possible to use the ES module (from the src folder) or any tree shaking to get the benefit of a custom build. Tree shaking will actually have close to no effect at all here so don't rely on it.
This is for npm users that use the built file. To specify what functionality you want you can either use include pattern (start with an empty set and include) or exclude pattern (start with full functionality and exclude). If an include pattern is set, excludes will not be used.
For Exif and IPTC it's also possible to specify which tags you're interested in. Those tag groups have huge dictionaries of tags and you may not be interested in all of them. (Note that it's not possible to specify tags to exclude.)
The configuration is added to your project's package.json file.
Example 1: Only include JPEG files and Exif tags (this makes the bundle almost half the size of the full one (non-gzipped)):
"exifreader": { "include": { "jpeg": true, "exif": true } }
Example 2: Only include TIFF files, and the Exif DateTime tag and the GPS tags (resulting bundle will be ~16 % of a full build):

"exifreader": {
  "include": {
    "tiff": true,
    "exif": [
      "DateTime",
      "GPSLatitude",
      "GPSLatitudeRef",
      "GPSLongitude",
      "GPSLongitudeRef",
      "GPSAltitude",
      "GPSAltitudeRef"
    ]
  }
}
Example 3: Exclude XMP tags:
"exifreader": { "exclude": { "xmp": true } }
Then, if you didn't install ExifReader yet, just run npm install exifreader. Otherwise you have to re-build the library:

npm rebuild exifreader

If you use yarn, simply run yarn add exifreader to rebuild the library.
After that the new bundle is here:
node_modules/exifreader/dist/exif-reader.js
If you are using vite, you will need to clear the dependency cache after a rebuild.

If you're using the include pattern config, remember to include everything you want to use. If you want xmp and don't specify any file types, you will get "Invalid image format", and if you specify jpeg but don't mention any tag types no tags will be found.
Possible modules to include or exclude:
| Module    | Description |
|-----------|-------------|
| jpeg      | JPEG images. |
| tiff      | TIFF images. |
| png       | PNG images. |
| heic      | HEIC/HEIF images. |
| webp      | WebP images. |
| file      | JPEG file details: image width, height etc. |
| png_file  | PNG file details: image width, height etc. |
| exif      | Regular Exif tags. If excluded, will also exclude mpf and thumbnail. For TIFF files, excluding this will also exclude IPTC, XMP, and ICC. |
| iptc      | IPTC tags. |
| xmp       | XMP tags. |
| icc       | ICC color profile tags. |
| mpf       | Multi-picture Format tags. |
| thumbnail | Thumbnail. Needs exif. |
- The GPS coordinate values are split into the coordinate itself (GPSLatitude, GPSLongitude) and the reference value (GPSLatitudeRef, GPSLongitudeRef). Use the references to know whether the coordinate is north/south and east/west. Often you will see north and east represented as positive values, and south and west represented as negative values (e.g. in Google Maps). This setup is also used for the altitude using GPSAltitude and GPSAltitudeRef where the latter specifies if it's above sea level (positive) or below sea level (negative). If you don't want to calculate the final values yourself, see the section on GPS for pre-calculated ones.
- Some XMP tags have processed descriptions, e.g. an Orientation value of 3 will have Rotate 180 in the description property. If you would like more XMP tags to have a processed description, please file an issue or create a pull request.
- The description property of tags can change in a minor update. If you want to process a tag's value somehow, use the value property to be sure nothing breaks between updates.
The library makes use of the DataView API which is supported in Chrome 9+, Firefox 15+, Internet Explorer 10+, Edge, Safari 5.1+, Opera 12.1+. For Node.js at least version 10 is required if you want to parse XMP tags, otherwise earlier versions will also work.
Full HTML example pages and a Node.js example are located on the examples site.
Testing is done with Mocha and Chai. Run with:
npm test
Test coverage can be generated like this:
npm run coverage
See CONTRIBUTING.md.
This project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
- Regions (see issue #129 for more details)
- Rational values are now kept in their original numerator/denominator pair instead of being calculated into a float (this affects .value on rational tags). In addition, some descriptions have also changed into better ones, e.g. ExposureTime now looks like 1/200 instead of 0.005.
https://xscode.com/mattiasw/ExifReader
Tracy R Reed wrote:
> Questions/problems/TODO's:
>
> This is a fairly simple structured programming implementation. No OO.
> Should I be using some classes somewhere?

There doesn't seem to be any need for it.

> The config file is just a module which I import which causes all of my
> configs to become globals. Globals are bad. Is there a better way or
> would just about anything else be overkill? A singleton config class or
> something? Overkill?

Instead of

  from lollerskates_config import *

try

  import lollerskates_config as config

or some other abbreviated name. Then instead of using e.g. 'macros' in your program use 'config.macros'. This has the advantage that it is clear where macros is defined and the config variables are not global to your module. IMO singletons are greatly overused in general; in Python a module is a unique namespace.

> I have several loops in this code for processing the logfiles. I once
> tried to convert these for loops to list comprehensions and totally
> confused myself and backed out the change (yeay svn!). Is there any
> benefit to list comprehensions in this case?

I don't see any loops that lend themselves to list comprehensions. A list comp is a shorthand way to build a new list from an existing list. You don't do that.

> I would kinda like to play with unit tests. Not sure how I would
> construct unit tests for this. And again, perhaps overkill. But some
> people tell me you should write the unit tests before you even begin
> coding and code until the tests are satisfied. So even on a small
> project you would have tests. I run into a couple nasty bugs I created
> which caused the script to never return anything from the logfile (so
> you don't immediately realize something is broken) where I thought "It
> sure would be nice to have a test to catch that if it ever happens again."

Yep. Have you looked at the unittest module?
Many of your functions are unit-testable, though it will be easier if you write them so they don't rely on external state (i.e. globals) (this is one reason globals are evil). For example replace_tokens() could be tested; it would be easier if macros was a parameter instead of taken from the global state. process_line() would be easier to test if it didn't rely on the global events list, but either took the list as a parameter or returned the line or None and let the caller deal with it. Or maybe you do want a class that can hold the shared state...

One more note:

  if re.match("^$",line):

could just be

  if not line:

Overall it looks pretty clean and well-written.

Kent
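To make the unit-testing advice concrete: once macros is a parameter rather than a global, the function can be exercised directly. A hypothetical sketch (this replace_tokens is illustrative, not Tracy's actual code):

```python
import unittest

def replace_tokens(line, macros):
    # hypothetical stand-in for the OP's function, with macros passed in
    for token, value in macros.items():
        line = line.replace(token, value)
    return line

class ReplaceTokensTest(unittest.TestCase):
    def test_replaces_known_token(self):
        self.assertEqual(replace_tokens("hi %NAME%", {"%NAME%": "Tracy"}),
                         "hi Tracy")

    def test_leaves_other_lines_alone(self):
        self.assertEqual(replace_tokens("plain line", {"%NAME%": "Tracy"}),
                         "plain line")

# run the tests programmatically so the sketch is self-contained
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReplaceTokensTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

This kind of test would have caught the "never returns anything from the logfile" bug the OP mentions.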
http://mail.python.org/pipermail/tutor/2006-October/050283.html
It’s amazing how time flies, isn’t it..? Spotting today’s date I realised that there’s only a week left before the closing date of the UK Discovery Developer Competition, which is making available several UK “cultural metadata” datasets from library catalogue and activity data, EDINA OpenUrl resolver data, National Archives images and English Heritage places metadata, as well as ArchivesHub project related Linked Data and Tyne and Wear Museums Collections metadata.
I was intending to have a look at how easy it was to engage with the datasets (e.g. by blogging initial explorations for each dataset along the lines of the text processing tricks I posted around the EDINA data in Postcards from a Text Processing Excursion, Playing With Large (ish) CSV Files, and Using Them as a Database from the Command Line: EDINA OpenURL Logs and Visualising OpenURL Referrals Using Gource), but I seem to have left it a bit late considering other work I need to get done this week… (i.e. marking:-(
..except for posting this old bit of code I don’t think I’ve posted before that demonstrates how to use the Python scripting language to parse an XML file, such as the Huddersfield University library MOSAIC activity data, and create a CSV/text file that we can then run simple text processing tools against.
If you download the Huddersfield or Lincoln data (e.g. from) and have a look at a few lines from the XML files (e.g. using the Unix command line tool head, as in head -n 150 filename.xml to show the first 150 lines of the file), you will notice records of the form:
<useRecordCollection>
  <useRecord>
    <from>
      <institution>University of Huddersfield</institution>
      <academicYear>2008</academicYear>
      <extractedOn>
        <year>2009</year>
        <month>6</month>
        <day>4</day>
      </extractedOn>
      <source>LMS</source>
    </from>
    <resource>
      <media>book</media>
      <globalID type="ISBN">1903365430</globalID>
      <author>Elizabeth I, Queen of England, 1533-1603.</author>
      <title>Elizabeth I : the golden reign of Gloriana /</title>
      <localID>585543</localID>
      <catalogueURL></catalogueURL>
      <publisher>National Archives</publisher>
      <published>2003</published>
    </resource>
    <context>
      <courseCode type="ucas">LQV0</courseCode>
      <courseName>BA(H) Humanities</courseName>
      <progression>UG2</progression>
    </context>
  </useRecord>
</useRecordCollection>
Suppose we want to extract the data showing which courses each resource was borrowed against. That is, for each use record, we want to extract a localID and the courseCode. The following script achieves that:
from lxml import etree
import csv

#Inspired by
#----------------------------------------------------------------------
def parseMOSAIC_Level1_XML(xmlFile,writer):
    context = etree.iterparse(xmlFile)
    record = {}
    # we are going to use record to create a record containing UCAS codes
    # record={ucasCode:[],localID:''}
    records = []
    print 'starting...'
    for action, elem in context:
        if elem.tag=='useRecord' and action=='end':
            #we have parsed the end of a useRecord, so output course data
            if 'ucasCode' in record and 'localID' in record:
                for cc in record['ucasCode']:
                    writer.writerow([record['localID'],cc])
            record={}
        if elem.tag=='localID':
            record['localID']=elem.text
        elif elem.tag=='courseCode' and 'type' in elem.attrib and elem.attrib['type']=='ucas':
            if 'ucasCode' not in record:
                record['ucasCode']=[]
            record['ucasCode'].append(elem.text)
        elif elem.tag=='progression' and elem.text=='staff':
            record['staff']='staff'
    #return records

writer = csv.writer(open("test.csv", "wb"))
f='mosaic.2008.level1.1265378452.0000001.xml'
s='mosaic.2008.sampledata.xml'
parseMOSAIC_Level1_XML(f,writer)
Usage (if you save the code to the file mosaicXML2csv.py): python mosaicXML2csv.py
Note: this minimal example uses the file specified by f=, in the above case mosaic.2008.level1.1265378452.0000001.xml and writes the CSV out to test.csv
(You can also find the code as a gist on Github: simple Python XML2CSV converter)
Running the script gives data of the form:
185215,L500
231109,L500
180965,W400
181384,W400
180554,W400
201002,W400
...
Note that we might add an additional column for progression. Add in something like:
if elem.tag=='progression': record['progression']=elem.text
and modify the write command to something like writer.writerow([record['localID'],cc,record['progression']])
We can now generate quick reports over the simplified test.csv data file.
For example, how many records did we extract:
wc -l test.csv
If we sort the records (and by so doing, group duplicated rows) [sort test.csv], we can then pull out unique rows and count the number of times they repeat [uniq -c], then sort them by the number of reoccurrences [sort -k 1 -n -r] and pull out the top 20 [head -n 20] using the combined, piped Unix commandline command:
sort test.csv | uniq -c | sort -k 1 -n -r | head -n 20
This gives and output of the form:
186 220759,L500
134 176259,L500
130 176895,L500
showing that resource with localID 220759 was taken out on course L500 186 times.
If we just want to count the number of books taken out on a course as a whole, we can just pull out the coursecode column using the cut command, setting the delimiter to be a comma:
cut -f 2 -d ',' test.csv
Having extracted the course code column, we can sort, find repeat counts, sort again and show the courses with the most borrowings against them:
cut -f 2 -d ',' test.csv | sort | uniq -c | sort -k 1 -n -r | head -n 10
This gives a result of the form:
13476 L500
8799 M931
7499 P301
In other words, we can create very quick and dirty reports over the data using simple commandline tools once we generate a row based, simply delimited text file version of the original XML data report.
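For comparison, the same quick report can also be produced in Python itself with collections.Counter; a sketch using a small inline stand-in for the test.csv file generated above:

```python
import csv
from collections import Counter
from io import StringIO

# Inline stand-in for test.csv (localID,courseCode rows as produced above)
sample = StringIO("185215,L500\n231109,L500\n180965,W400\n181384,W400\n180554,W400\n")

# Equivalent of: cut -f 2 -d ',' test.csv | sort | uniq -c | sort -k 1 -n -r
course_counts = Counter(row[1] for row in csv.reader(sample))
for course, n in course_counts.most_common():
    print(n, course)
```

Running this prints "3 W400" then "2 L500", mirroring the uniq -c output, and most_common() saves the explicit sort step.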
Having got the data in a simple test.csv text file, we can also load it directly into the graph plotting tool Gephi, where the two columns (localID and courseCode) are both interpreted as nodes, with an edge going from the localID to the courseCode. (That is, we can treat the two column CSV file as defining a bipartite graph structure.)
Running a clustering statistic and a statistic that allows us to size nodes according to degree, we can generate a view over the data that shows the relative activity against courses:
Here’s another view, using a different layout:
Also note that by choosing an appropriate layout algorithm, the network structure visually identifies courses that are “similar” by virtue of being connected to similar resources. The thickness of the edges is proportional to the number of times a resource was borrowed against a particular course, so we can also visually identify such items at a glance.
https://blog.ouseful.info/2011/07/25/quick-command-line-reports-from-csv-data-parsed-out-of-xml-data-files/
Using import maps
Deno supports import maps which allow you to supply Deno with information about how to resolve modules that overrides the default behavior. Import maps are a web platform standard that is increasingly being included natively in browsers. They are specifically useful with adapting Node code to work well with Deno, as you can use import maps to map "bare" specifiers to a specific module.
When coupled with Deno friendly CDNs import maps can be a powerful tool in managing code and dependencies without need of a package management tool.
Bare and extension-less specifiers
Deno will only load a fully qualified module, including the extension. The import specifier needs to be either relative or absolute. Specifiers that are neither relative nor absolute are often called "bare" specifiers. For example "./lodash/index.js" is a relative specifier and a full URL would be an absolute specifier, whereas "lodash" would be a bare specifier.
Deno also requires that, for local modules, the module to load is fully resolvable. When an extension is not present, Deno would have to "guess" what the author intended to be loaded. For example, does "./lodash" mean ./lodash.js, ./lodash.ts, ./lodash.tsx, ./lodash.jsx, ./lodash/index.js, ./lodash/index.ts, ./lodash/index.jsx, or ./lodash/index.tsx?
When dealing with remote modules, Deno allows the CDN/web server define whatever semantics around resolution the server wants to define. It just treats a URL, including its query string, as a "unique" module that can be loaded. It expects the CDN/web server to provide it with a valid media/content type to instruct Deno how to handle the file. More information on how media types impact how Deno handles modules can be found in the Determining the type of file section of the manual.
Node does have defined semantics for resolving specifiers, but they are complex and assume unfettered access to the local file system to query it. Deno has chosen not to go down that path.
But import maps can be used to provide some of that ease of developer experience if you wish to use bare specifiers. For example, if we want to do the following in our code:
import lodash from "lodash";
We can accomplish this using an import map, and we don't even have to install the lodash package locally. We would want to create a JSON file (for example import_map.json) with the following:
{ "imports": { "lodash": " } }
And we would run our program like:
> deno run --import-map ./import_map.json example.ts
If you wanted to manage the versions in the import map, you could do that as well. For example, if you were using the Skypack CDN, you could use a pinned URL for the dependency in your import map. To pin to lodash version 4.17.21 (the minified, production-ready build), you would do this:
{ "imports": { "lodash": " } }
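For concreteness, a complete pinned import map of the kind just described might look like this (the exact Skypack URL is an assumption for illustration, not taken from the original):

```json
{
  "imports": {
    "lodash": "https://cdn.skypack.dev/lodash@4.17.21?min"
  }
}
```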
Overriding imports
The other situation where import maps can be very useful is the situation where you have tried your best to make something work, but have failed. For example you are using an npm package which has a dependency on some code that just doesn't work under Deno, and you want to substitute another module that "polyfills" the incompatible APIs.
For example, let's say most of the code we are loading should use the standard library replacement module for "fs", but one package we are loading should get a local patched module when it tries to import "fs". We would want to create an import map that looked something like this:
{ "imports": { "fs": " }, "scopes": { " { "fs": "./patched/fs.ts" } } }
Import maps can be very powerful, check out the official standards README for more information.
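For concreteness, a scoped map of the kind described above might look something like this (both remote URLs here are placeholder assumptions: a std "fs" replacement module and the package's base URL):

```json
{
  "imports": {
    "fs": "https://deno.land/std@0.140.0/node/fs.ts"
  },
  "scopes": {
    "https://deno.land/x/example-package/": {
      "fs": "./patched/fs.ts"
    }
  }
}
```

Modules loaded from under the scoped URL resolve "fs" to the local patched file; everything else gets the std replacement.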
Source: https://deno.land/manual/node/import_maps
Hello folks, I've come across this problem where my bitmap will not become initialized. I've double-checked my code and I can't see anything wrong, but it might be because I'm pretty tired atm.
Here is the relevant code. Main.cpp isn't really relevant code; just another question here: is this a proper way of approaching what I am trying to accomplish? A small 2D game:
#include "initialize.h"
int main()
{
const float FPS = 60.0;
const float frameFPS = 10.0;
initialize *init = new initialize();
init->initSystem(FPS, frameFPS);
return 0;
}
Inside initialize.cpp:
al_install_keyboard();
al_init_image_addon();
c = new Character("player.png", "Hero", 40, 40, 3);
std::cout << c->getName() << ": " << c->getHealth() << " HP." << std::endl;
Character.cpp:
Character::Character(std::string filePath, std::string n, int var, int var2, int speed)
{
image = al_load_bitmap(filePath.c_str());
if(image == NULL)
{
std::cout << filePath.c_str() << " could not be loaded." << std::endl;
al_destroy_bitmap(image);
}
name = n;
maxHealth = 10, health = maxHealth;
x = var, y = var2, this->speed = speed;
}
Post the full code for the initialize constructor and for initSystem.
The loaded png may be null because it was not found.
See for more help.
No, the image is certainly there.. I will post the rest of the code in 2-3 minutes.
initialize::initialize()
{
display = NULL;
event_queue = NULL;
timer = NULL;
frameTimer = NULL;
loop = true, redraw = false;
}
No, the image is certainly there.
Stick an al_filename_exists("player.png") call somewhere and make sure it returns true. The only other real reason it wouldn't work is if the image addon was compiled without png support (unlikely unless you compiled it yourself) or the png file itself is corrupted.
I am pretty sure that the image file is NOT corrupted; that being said, I've tried both jpg and bmp too, which I just saved in Paint, which usually works. I'm using pre-compiled Allegro 5.0.10 bins.
Edit: I tried al_filename_exists("player.png") and it returns true.
Edit edit: The program works now ALL OF A SUDDEN!?!? I haven't changed a thing. I still do appreciate everyone's help, thanks
That's the Allegro way.
Hm, random things happening tend to mean you have a memory corruption issue; change a line of code, even adding a printf, and it can "hide" them.
Source: https://www.allegro.cc/forums/thread/615086
WIFI Pumpkin – Framework for Rogue Wi-Fi Access Point Attack
WiFi-Pumpkin is a security tool that provides a rogue access point for man-in-the-middle and network attacks.
Installation
Kali 2.0/WifiSlax 4.11.1/Parrot 2.0.5
- Python 2.7
git clone
cd WiFi-Pumpkin
chmod +x installer.sh
./installer.sh --install
refer to the wiki for Installation
Features
- Rogue Wi-Fi Access Point
- Deauth Attack Clients AP
- Probe Request Monitor
- DHCP Starvation Attack
- Credentials Monitor
- Transparent Proxy
- Windows Update Attack
- Phishing Manager
- Partial Bypass HSTS protocol
- Support beef hook
- Mac Changer
- ARP Poison
- DNS Spoof
Plugins
Transparent Proxy
Transparent proxies that you can use to intercept and manipulate HTTP/HTTPS traffic, modifying requests and responses, and that allow you to inject JavaScript into the pages the targets visit. You can easily implement a module to inject data into pages by creating a Python file in the "Proxy" directory; it will automatically be listed on the PumpProxy tab.
Plugins Example
The following is a sample module that injects some content into the <head> tag to set a blur filter on the body of the HTML page:

from Plugin import PluginProxy

class blurpage(PluginProxy):
    ''' this module proxy set blur into body page html response'''
    _name = 'blur_page'
    _activated = False
    _instance = None
    _requiresArgs = False

    @staticmethod
    def getInstance():
        if blurpage._instance is None:
            blurpage._instance = blurpage()
        return blurpage._instance

    def __init__(self):
        self.LoggerInjector()
        self.injection_code = []

    def setInjectionCode(self, code):
        self.injection_code.append(code)

    def inject(self, data, url):
        injection_code = '''<head>
            <style type="text/css">
            body{ filter: blur(2px); -webkit-filter: blur(2px);}
            </style>'''
        self.logging.info("Injected: %s" % (url))
        return data.replace('<head>', injection_code)
I can’t install it
have a look at the Installation
I have this message warning Error Network Card
You system does not support Wifi-Pumpkin. Run it with a Wireless network adapter
Does it work on X Wireless Adapters ?
I can’t install package X
Try installing the package via pip, Google is your friend!
Is it Windows supported?
No
8 thoughts on “WIFI Pumpkin – Framework for Rogue Wi-Fi Access Point Attack”
Would it be possible to install all this on a Raspberry Pi 2?
I have not tested it on a Raspberry Pi2 before, but I strongly doubt that it will work. I might be wrong….
Going to be my project for the day, downloading the Kali image now, will keep you updated.
Cool. Let me know if it works! 🙂
Okay so now I’m running in to some issues. after installing Kali image to micro sd, I got on and run the command apt-get update && apt-get upgrade -y then everything went smoothly. Rebooted and then followed the install from this site and Wi-Fi Pumpkin installed with no issues or errors. I cd to the directory and ./wifi-pumpkin.py and it gives me an error saying “ImportError: No module named netifaces” I tried apt-get upgrade -f to install the unfinished upgrades then rebooted still the same error. Also tried $ sudo apt-get install python-dev and then $ sudo pip install netifaces. still not working. Any possible ideas and im using ALFA AWUS036NHA.
@MrSwordan
Thanks for the feedback on your project.
Regarding the error. I saw on the installation page that they do recommend your WIFI adapter but can you still test the following and whether it picks it up
Your USB WiFi adapter must support AP/monitor mode. How to check this? Execute this command in a terminal:
iw list
If there is ‘AP’ in the list of “Supported interface modes” your device will support. The adapter needs to have drivers for GNU/Linux.
Here’s a link for your WIFI adapter to make it work on Kali –
Let me know if you come right. 🙂
i have problem after installin wifi-pupmkin
root@madhav-PC:~/WiFi-Pumpkin# sudo wifi-pumpkin
[✘] hostapd is not installed.
Traceback (most recent call last):
File “wifi-pumpkin.py”, line 50, in
from core.main import Initialize
File “/usr/share/WiFi-Pumpkin/core/main.py”, line 55, in
from netfilterqueue import NetfilterQueue
ImportError: No module named netfilterqueue
How can i solve this problem? Pls rply
Source: https://haxf4rall.com/2016/05/18/wifi-pumpkin-framework-for-rogue-wi-fi-access-point-attack/
Microsoft’s VC++ compiler has an option to generate instructions for new instruction sets such as AVX and AVX2, which can lead to more efficient code when running on compatible CPUs. So, an obvious tactic is to compile critical math-heavy functions twice, once with and once without /arch:AVX (or whatever instruction set you want to optionally support).
It seems like a good idea, and it’s been used in various forms for years, but it’s devilishly difficult to do safely. It usually works, but guaranteeing that is trickier than I had realized.
Let’s say that we have a function called NonAVXMath. This function works great but we know that it would be faster if compiled with /arch:AVX. So we copy our function to another source file (or use the pre-processor to give the same effect), rename the copy to AVXMath, compile the new source file with /arch:AVX, and then at runtime choose the appropriate function to call.
This seems simple enough, but it isn’t. Let’s imagine that NonAVXMath calls some helper functions. Those functions are probably in header files so we don’t need to copy them – they will be pulled in as needed by the preprocessor. They will be compiled once with /arch:AVX and once without, and will be inlined into the functions, giving ideal code. And indeed, this is what happens most of the time.
But what happens if the inline functions aren’t inlined? For each translation unit the compiler will generate a copy of the inline functions that were not inlined. It is then the linker’s job to discard all but one copy of these functions. This is supposed to be safe because the function bodies are supposed to be identical. But they’re not because some use AVX instructions and some don’t.
It’s an ODR violation, essentially.
This cannot end well
If the linker chooses the copy of the inline function that was compiled without AVX then your code will run everywhere but will run more slowly, because it is switching back and forth between AVX and SSE math.
If the linker chooses the copy that was compiled with AVX then your code will crash on machines that don’t support AVX! This includes older CPUs that don’t support the AVX instruction set, older operating systems that don’t support AVX, or computers that have had AVX support disabled (on Windows you can do this with “bcdedit /set xsavedisable 1” and doing this used to be the recommended way of working around an old Windows 7 bug). In short, your program will crash for some customers.
Oops.
I created a sample project that demonstrates this. While the issue can happen in fully optimized LTCG builds (and indeed it did happen recently to Chrome) it is easier to demonstrate in a debug build. My test project contains two source files which both call floorf, one of which is compiled with /arch:AVX. The build.bat file compiles both and links them twice, once with the AVX file first and once with the AVX file last. Then it disassembles the floorf function in both executables to demonstrate that it varies. Here are the results when the AVX source file is linked last:
avx_last!floorf:
push ebp
mov ebp,esp
push ecx
cvtss2sd xmm0,dword ptr [ebp+8]
sub esp,8
movsd mmword ptr [esp],xmm0
call avx_last!floor (001b3b60)
add esp,8
fstp dword ptr [ebp-4]
fld dword ptr [ebp-4]
mov esp,ebp
pop ebp
ret
And here are the results when the AVX source file is linked first:
avx_first!floorf:
push ebp
mov ebp,esp
push ecx
vcvtss2sd xmm0,xmm0,dword ptr [ebp+8]
sub esp,8
vmovsd qword ptr [esp],xmm0
call avx_first!floor (00bb3b60)
add esp,8
fstp dword ptr [ebp-4]
fld dword ptr [ebp-4]
mov esp,ebp
pop ebp
ret
The difference is subtle but important – instead of cvtss2sd the second version uses vcvtss2sd – the AVX variant of this instruction. In both cases the same floorf function will be called by both the AVX and non-AVX functions.
Now the problem is clear – but what is the solution?
Careful link ordering
If you are careful to link the AVX files last then the compiler should grab the non-AVX versions. This seems like a terrible solution to me. It relies on undefined behavior in the linker, it won’t work reliably with code that is in static link libraries, it is probably flaky in the face of LTCG, and it guarantees that your AVX code will be a mixture of SSE and AVX code that then runs slower than it should.
__forceinline
If you mark all of the relevant functions as __forceinline then the compiler is more likely to inline the functions. Your debug builds will probably still be broken, but maybe that’s okay. However even __forceinline doesn’t guarantee inlining (some functions cannot be inlined) and it feels a bit sketchy to use __forceinline for correctness.
Namespaces
If you include all of the inline function definitions from an anonymous namespace or AVX-specific namespace then the functions are no longer considered the same and the linker will not collapse them. This technique has the advantage of actually guaranteeing correctness. You can either use an anonymous namespace or an AVX specific namespace. Using an AVX specific namespace is probably a better idea because it avoids the risk of ending up with multiple copies of functions that aren’t inlined – one per translation unit. The problem with this solution is that many header files don’t like being added to an unexpected namespace – C/C++ standard headers are particularly unlikely to tolerate this.
static
Marking all of your inline functions as static works similarly to using an anonymous namespace. This means that it comes with the risk of getting multiple copies of non-inlined inline functions. However most linkers can automatically discard duplicate functions if the code bytes are identical – the /OPT:ICF option in the Visual C++ linker does this. Using static also guarantees correctness, as long as you tag every inline function in this manner.
math.h
But what about system header files such as math.h? This is the file that I used in my example and it is the one that has twice caused problems for Google’s Chrome web browser. The current VC++ version of this file includes 49 __inline functions, including floorf which is our culprit today. Well, when there aren’t any elegant solutions you have to go with inelegant. The solution that Chrome went with when we hit this problem was essentially:
#define __inline static __inline
#include <math.h>
#undef __inline
Look, we’re not proud of this solution, but it works. The ideal solution would be for Microsoft to modify math.h – and other header files – to mark inline functions as static. This is what gcc does. Otherwise /arch:AVX cannot be used safely without extraordinary measures. I’ve filed a bug to request this.
A separate DLL
There actually is one way to use /arch:AVX without gross hackery and that is to put all of the AVX code into a separate DLL, compiled entirely with /arch:AVX. Whether this works for you depends on your build system and method of distribution.
Toolchain fixes
Having VC++ tag the inline functions that it ships with static, like gcc/clang do, would avoid the specific problem of floorf and friends. But what about template functions such as std::min, or inline functions written by random developers? A toolchain fix that defuses this landmine once and for all would be much better. A tempting option was suggested on twitter: if all non-inlined inline functions had their name mangling altered to include a /arch: prefix then this problem would be resolved. My test binary would end up with _floorf and _floorf:avx and the linker would trivially resolve the correct functions. The programmer's intent would be preserved, without the linker inefficiencies of marking every inline function as static (which isn't even possible for template member functions).
Insert credits here
This problem was previously encountered a while ago by some other developers who use Chromium. They reported their internal bug here, and filed a VC++ bug here. They also contacted me to share their findings, which I appreciate.
Thanks to those on the Chrome team who came up with the (ugly but effective) static __inline solution, thus fixing Chrome’s canary builds for non-AVX capable customers, without having to disable /arch:AVX.
Reddit discussion is here, announcement tweet is here, hacker news discussion is here.
If I remember well, the Intel compiler does automatic dispatch at runtime, which would make it useful to enable ALL the arch options. To me, it would be nice to have a declspec which enables the runtime dispatch (and maybe for which runtimes) only for specific functions, to avoid code bloat. The cost of this would be paid only once per function.
“your code will run everywhere but will run more slowly, because it is switching back and forth between AVX and SSE math”
Pictures or it didn’t happen…
Are you saying here, you don’t get all code you wanted as AVX? Or are you saying there is a cost to switching between SSE and AVX. Or to be pendantic, switching between 256 bit wide and 128 bit wide operands – for on some Intel microarchitectures there is a switchover cost. And, further, I believe this cost does not exist on AMD (citation or it didn’t happen, ok ok).
Anyway we are men of science and I hereby demand you make this sentence more accurate.
I did not measure a switching cost, but there will definitely be switching as the AVX code calls a non-AVX floorf function, and my understanding is that this switching between AVX and SSE has a cost, on some CPUs.
The switching can be seen by stepping through the AVX function into the non-AVX floorf, in one of the variants in my sample.
I am happy with this clarification, it meets your own exacting standards. You may proceed.
SSE/AVX transition penalties are unlikely. When compilers don’t inline a function, they normally use vzeroupper before calling it, if any ymm uppers are potentially dirty. (And before returning). i.e. AVX usage is kept “private” to the scope of a function.
I could *imagine* that problem if the compiler was expecting a call between two AVX-using functions and doing inter-procedural optimization, it might skip vzeroupper. Just like it might make some assumptions about calling convention, or even use a *custom* calling convention.
GCC would make a `foo.clone123` version of a function if it wanted to propagate one constant arg into the function, or do other kinds of IPO, but I don’t know about MSVC.
QbProg yes, but does Intel compiler still only dispatch fully optimized code for GenuineIntel (TM) processors?
>older machines
Ha ha, I wish. All the latest Pentiums, Celerons, and, of course, Atoms, do not support AVX at all.
I’m curious why you feel separate dlls are too complex? That’s the solution I prefer. It’s often nice to have all your important performance critical math bits in a separate project/dll for other reasons as well (namely, it’s easier for the perf-minded people to work on it while making it more difficult for the non-perf-minded people to break it).
Also interested; suspect combinatorial explosion may feature in the answer.
Separate DLLs are a perfect solution in many cases. One extreme version would be recompiling the entire project with /arch:AVX and distributing two or more versions. That is ‘easy’ except for the distribution changes.
Having key functions in separate DLLs is a nice compromise but it can get a bit messy because those separate DLLs cannot share *any* libraries with the rest of the code. How tricky this is really depends on the build system used. If it works then go for it. I know that in Chrome we don’t want to lightly add more DLLs.
Maybe “too much work in some circumstances” would be a better statement. I would be interested to hear reports from developers who have used this technique.
This is slightly offtopic but it made me think of separate images, for example the way Process Explorer packages both 32 and 64 bit images and decides which one to run at launch. Why does Chrome still default to 32 bit?
Chrome defaults to 64 bit now, if you install it freshly on a 64-bit OS. However existing 32-bit installs are not (yet) being upgraded to 64-bit. Sysinternals does indeed do this very well, but downloading both versions of Chrome to all consumer machines (which is effectively what sysinternals does) would be too much of an increase in download size, I think.
Ok, but you could make it a function of the installer. It would download the build best usable for that machine. And put a check in Chrome that can detect if the current CPU does support all the features it expects. (People may switch to new CPUs when changing computers and restore from a backup.)
Isn’t an inline namespace be a better solution?
An anonymous namespace could work, but is less targeted than changing inline functions to static inline. If you try an anonymous namespace (perhaps starting with my test project) then let me know how it goes.
No, I mean inline namespace
namespace math {
#ifdef AVX
inline
#endif
namespace AVX {
float floorf(float);
}
#ifdef SSE2
inline
#endif
namespace SSE2 {
float floorf(float);
}
}
Ah – gotcha. Well, just gotta try it with math.h I guess and see if it works.
Shouldn’t the AVX version of floor be calling vroundps?
Er vroundss as this is scalar in the example.
vroundss only works if the rounding mode is set to a non-default value, and changing the rounding mode is frequently expensive. floorf ends up calling floor and I didn’t bother looking at its implementation – it presumably comes with a whole host of new problems. I used floorf because that was what caused the problem in Chrome and because it made for an easy demo repro. The issue could occur with any inline math function.
I don’t think roundss requires you to change the default rounding mode, if you look at immintrin.h, the rounding mode is passed in as a compile time constant.
#define _mm256_floor_ps(val) _mm256_round_ps((val), _MM_FROUND_FLOOR)
Ya this is more of an aside about MS’s implementation of floor..(and trunc/ceil which have the same problem).
Is this something MS could fix with a linker enhancement? If the linker was /arch:AVX aware, then it could choose from an AVX and a non AVX version of the function, depending on which translation unit it was called from.
On twitter it was suggested that name mangling should include modifiers for /arch: options, so /arch:AVX would produce a floorf:AVX symbol instead of floorf. If this was done for all non-inlined inline functions then the entire problem would disappear. No linker changes would be needed, just a simple compiler change.
thaaaanx
Thank goodness the C pre-processor still exists, despite the last 20 years of C++ changes attempting to kill it. This is what it’s there for – the kind of meta-language hack that you shouldn’t need to do in a perfect world, but do in reality.
A colleague of mine encountered a similar problem a couple of years ago. At that time we could not figure out why this happened but was able to quickly work around it by moving some math function calls to another source file for which we did not use /arch:AVX. Having read this article, we now understand the problem more clearly. Thanks.
Given that we have potentially the same problem of ODR violation for any inline function or function template such as std::min(), we cannot but conclude that there is no perfect solution other than building a separate DLL for each architecture, though this seems to be a little cumbersome to me because dispatching inside a DLL is just so handy.
Finally, I would like to point out that this problem is definitely not restricted to floating-point arithmetic but also affects some integer arithmetic because, since Visual Studio 2015, the /arch:AVX2 option allows the compiler to emit bit manipulation instructions called BMI1/BMI2 such as SHLX (shift logical left without affecting flags), rendering virtually any code unsafe.
When floorf is NOT inlined why does the linker consider the resulting functions as duplicates? Is it just going by name? The generated assembly isn’t identical after all as is evidenced by that one instruction?
When floorf is not inlined the linker is *required* to treat the resulting functions as duplicates, and those duplicates are required to be identical. A failure of those functions to be identical is an ODR violation. The big question is whether this ODR violation is the fault of the compiler, or of the author of the inline function.
A number of ABIs could benefit from name mangling. On the ARM, for example, it would be helpful to have a naming convention for functions that accept floating-point values in FPU registers and for those which accept them in integer registers; code which defines either form could generate a strong symbol for the form it's expecting and a weak symbol with a stub that would translate the arguments and chain to the other. Code which expects to call a function with one convention could then define a weak symbol with a stub for the form it's calling which would chain to the other.
Using such a pattern, if caller and callee expect the same convention, both would weakly define stubs which never get called and could thus be omitted; if they expect different conventions, the weakly-defined stubs would bridge between the caller and the called function.
One difficulty with using name mangling to bridge such issues, however, is that C allows function pointers to be passed between modules, and there's no nice way that code with a pointer to a function of one style could convert it into a pointer to a function of the other style. Perhaps that could be resolved by having the linker produce tables of the entry points of all functions whose addresses are taken, using a separate table for each calling convention, and then have "function pointers" actually be offsets into the table. Such a design would require support from linkers and compilers, but would facilitate interoperation among code which is compiled with different "preferred CPU" options.
I think that it’s okay for function pointers to not be handled. The real problem (as far as I can tell) is when there is cross-talk between two domains, as discussed and demonstrated here. When a developer is using a function pointer they are presumed to be in control and have understanding of what they are doing, so they need to take responsibility for when they cross architecture domains. I think that’s better than having compiler magic to make arbitrarily ‘fat’ function pointers.
But name mangling for non-inlined inline functions seems crucial.
Do you like my suggested approach for using name mangling to allow clean inter-operation between domains? The performance of code using that approach wouldn't be quite as good as that of code which was all compiled the same way (e.g. if foo() is compiled to use FPU registers and bar() isn't, having foo load values into FPU registers and then call a stub that reads them out would be less efficient than having foo put the values where they're expected), but the code would work *correctly* in any case. Code which uses function pointers will be a problem, but otherwise I don't see why things shouldn't be able to interact a lot more smoothly than they do.
The idea of using name mangling and thunks for allowing cleaner inter-operation between domains with different calling conventions could work nicely. Although, in many cases it would be better to generate two versions of the functions, or else use the name mangling to indicate the calling convention such that calling the wrong one becomes a linker error.
I’ve implemented dynamic codepaths between AVX and SSE including the appropriate CPU checks, but I’ve found that AVX actually hurts performance unless you’ve got significant intensive bursts of very heavily vectorised code.
The problem is that when the upper half of the YMM registers and ALU/FPU are not used, the chip powers them down, and in fact the "base clock" and "turbo clock" speeds quoted for your CPU are with these units powered down.
When the chip sees them used, it powers up the circuitry (takes about 60microseconds to do so, during which it actually implements AVX ops by doing 2 128 bits ops internally). Now this extra circuitry consumes electricity, increasing the power consumption and heat produced by the CPU, so the base & turbo clock speeds are actually reduced. If you’re using “low power” AVX instructions this base clock reduction is only about 10%, but for “high power” AVX instructions it’s more like 20%.
Only when the wider registers haven’t been used for 700 microseconds (0.7ms) are these circuits powered down again.
So if you’re going to be intensively crunching numbers for a few milliseconds, all well and good – the 256 bits registers will more than make up for a 10 or 20% reduction in clock speed, but if (as happens with our maths library) the use of wide registers is in short bursts of a few microseconds at a time, but maybe 4 or 5 times a millisecond, then the 90% of the code that is NOT benefitting from the use of wider registers runs 10% slower, and the net overall result is a slowdown in performance.
See for a longer explanation including links to Intel’s notes about this base clock reset.
Source: https://randomascii.wordpress.com/2016/12/05/vc-archavx-option-unsafe-at-any-speed/
From: David Abrahams (abrahams_at_[hidden])
Date: 2001-05-18 11:51:17
----- Original Message -----
From: "Gary Powell" <Gary.Powell_at_[hidden]>
> I have mixed thoughts about this next suggestion, and that is to
> implement a member function "swap" and a specialization in the
> namespace "std" of swap. Partly because I'm not sure that for the
> types, float, etc it really makes much difference, and partly because
> for a user defined type there are already a host of issues with
> std::swap etc. (there is plenty of previous discussion and I don't
> really want to restart that.)
And there are active issues in the core and library working groups of the
standards committee on this topic. See (lib 225) (lib 229) (core 229)
-Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/05/11960.php
To compile a program, you don't really need a main method in your program. But at execution time the JVM searches for the main method. In Java, the main method is the entry point: whenever you execute a program, the JVM searches for the main method and starts executing from it.
The main method must be public, static, with return type void, and a String array as argument.
public static void main(String[] args){ }
You can write a program without defining a main method and it compiles without errors. But when you execute it, a runtime error is generated saying "Main method not found".
In the following Java program, we have two methods with the same name (overloading), addition, and no main method. You can compile this program without compilation errors.
public class Calculator {
   int addition(int a, int b){
      int result = a + b;
      return result;
   }
   int addition(int a, int b, int c){
      int result = a + b + c;
      return result;
   }
}
But, when you try to execute this program, the following error will be generated.
D:\>javac Calculator.java
D:\>java Calculator
Error: Main method not found in class Calculator, please define the main method as:
   public static void main(String[] args)
or a JavaFX application class must extend javafx.application.Application
To resolve this you need to define a main method in the program and call the methods of the class.
public class Calculator {
   int addition(int a, int b){
      int result = a + b;
      return result;
   }
   int addition(int a, int b, int c){
      int result = a + b + c;
      return result;
   }
   public static void main(String args[]){
      Calculator obj = new Calculator();
      System.out.println(obj.addition(12, 13));
      System.out.println(obj.addition(12, 13, 15));
   }
}
25
40
https://www.tutorialspoint.com/is-main-method-compulsory-in-java
COREBlog and pygments
This is a little recipe for all of you, COREBlog users out there.
In this very site, I've posted some interesting things about (for example) Django, or some shell scripting examples. As I'm using reStructuredText as my markup language, I always marked source code snippets with the usual :: (which in rst is translated into <pre></pre> in HTML). This ends up with the source code appearing like any other preformatted text, which isn't as cool as seeing the code as in my favourite development editor.
In Emacs, I use a special mode for each language I usually work with, so the code gets (among other things) colorized. Wouldn't it be cool to have my code colorized online too?
After searching a little bit, I found Pygments:
a generic syntax highlighter for general use in all kinds of software such as forum systems, wikis or other applications that need to prettify source code.
Of course, it is Python code, so I thought that it should be pretty easy to integrate it with some other Python-based apps like COREBlog (in fact I've created a little Zope Product to create paste sites on top of Pygments, but that's another story).
To integrate Pygments and COREBlog all you need is to get the file rst-directive.py from Pygments and save it inside your COREBlog Product folder:
[Frey] /usr/local/www/Zope210/e-shell/Products/COREBlog> ls -l rst_directive.py
-rw-r--r--  1 wu  wheel  2490 Nov 21  2007 rst_directive.py
[Frey] /usr/local/www/Zope210/e-shell/Products/COREBlog>
(I've renamed the file to rst_directive.py to avoid problems importing the file later)
Then, all you have to do is import this module into COREBlog. To do that, add the following line on top of the COREBlog.py file inside your Product folder (where you put the rst-directive file):
import rst_directive
(You will see more imports there, put that line wherever you think is a good place for it)
Now restart your Zope instance and you will be able to use the sourcecode directive in rst, something similar to:
.. sourcecode:: python

   import rst_directive
(which I've used some lines above)
That will generate all the necessary HTML code to highlight properly the source code snippet.
Finally, you will have to add some CSS styles to your site stylesheet. You can get all the necessary stuff to put into your css files using the get_style_defs() method from Pygments:
>>> from pygments.formatters import HtmlFormatter
>>> print HtmlFormatter().get_style_defs()
Just copy the CSS style definitions you will see on-screen to your COREBlog skin CSS (probably located, inside the ZMI in yoursite -> contents tab -> skins -> yourskin -> style_css dtml method) and you are done!
Note: of course you can colorize much more source code than only Python, just take a look at Pygments Lexers to know how to call the proper lexer for your programming language.
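Outside of Zope/COREBlog, the same highlighting can be driven directly from Python. This is a minimal sketch using Pygments' standard highlight()/lexer/formatter API (written in Python 3 syntax, unlike the Python 2 snippets above; the snippet being highlighted is just an arbitrary example):

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Turn a small Python snippet into the same kind of HTML that the
# rst "sourcecode" directive produces for the blog posts.
code = "import rst_directive\n"
html = highlight(code, PythonLexer(), HtmlFormatter())
print(html)
```

The result is a <div class="highlight"> block whose <span> elements carry the CSS classes that get_style_defs() provides styles for.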
http://blog.e-shell.org/71
The itertools.accumulate function was added (in Python 3.2); it provides a fast way to build an accumulated list and can be used to approach this problem efficiently. See the comments below for more information.
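As a sketch of that note (the function name is mine), the running totals can be built with itertools.accumulate and searched with bisect:

```python
import bisect
import itertools
import random

def weighted_choice_accum(weights):
    # accumulate() yields the running totals: [2, 3, 5] -> [2, 5, 10]
    totals = list(itertools.accumulate(weights))
    rnd = random.random() * totals[-1]
    # Find the "slice" of the total that rnd falls into.
    return bisect.bisect_right(totals, rnd)
```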
A problem I frequently run into is to randomly select an element from some kind of container, with the chances of each element to be selected not being equal, but defined by relative "weights" (or probabilities). This is called weighted random selection [1].
Simple "linear" approach
The following is a simple function to implement weighted random selection in Python. Given a list of weights, it returns an index randomly, according to these weights [2].
For example, given [2, 3, 5] it returns 0 (the index of the first element) with probability 0.2, 1 with probability 0.3 and 2 with probability 0.5. The weights need not sum up to anything in particular, and can actually be arbitrary Python floating point numbers.
import random

def weighted_choice(weights):
    totals = []
    running_total = 0

    for w in weights:
        running_total += w
        totals.append(running_total)

    rnd = random.random() * running_total
    for i, total in enumerate(totals):
        if rnd < total:
            return i
Binary search
Note that the loop in the end of the function is simply looking for a place to insert rnd in a sorted list. Therefore, it can be sped up by employing binary search. Python comes with one built-in, just use the bisect module:
import random
import bisect

def weighted_choice_b(weights):
    totals = []
    running_total = 0

    for w in weights:
        running_total += w
        totals.append(running_total)

    rnd = random.random() * running_total
    return bisect.bisect_right(totals, rnd)
Functionally, the two are similar, but the second version is faster. For short lists (2-element long) the difference is ~10%, and for long lists (1000 elements) it gets to ~30%.
Giving up the temporary list
It turns out that the temporary totals list isn't required at all. Employing some ingenuity, we can keep track of where we are in the list of weights by subtracting the current weight from the total:
def weighted_choice_sub(weights):
    rnd = random.random() * sum(weights)
    for i, w in enumerate(weights):
        rnd -= w
        if rnd < 0:
            return i
This method is about twice as fast as the binary-search technique, although it has the same complexity overall. Building the temporary list of totals turns out to be a major part of the function's runtime.
This approach has another interesting property. If we manage to sort the weights in descending order before passing them to weighted_choice_sub, it will run even faster, since the random call returns a uniformly distributed value and larger chunks of the total weight will be skipped in the beginning.
Indeed, when pre-sorted the runtime is further reduced by about 20%.
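One wrinkle with pre-sorting: the returned index must still refer to the caller's original ordering. A hedged sketch of one way to handle that (the name and the index-order trick are mine, not from the article) is to sort the visit order rather than the list itself:

```python
import random

def weighted_choice_sub_sorted(weights):
    # Visit indices in descending-weight order so large weights are hit
    # early, but return the index in the caller's original ordering.
    order = sorted(range(len(weights)), key=weights.__getitem__, reverse=True)
    rnd = random.random() * sum(weights)
    for i in order:
        rnd -= weights[i]
        if rnd < 0:
            return i
```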
King of the hill
All the methods shown so far use the same technique - generate a random number between 0 and the sum of the weights, and find out into which "slice" it falls. Sometimes it's also called the "roulette method".
There is a different approach however:
def weighted_choice_king(weights):
    total = 0
    winner = 0
    for i, w in enumerate(weights):
        total += w
        if random.random() * total < w:
            winner = i
    return winner
An interesting property of this algorithm is that you don't need to know the amount of weights in advance in order to use it - so it could be used on some kind of stream.
Cool as this method is, it's much slower than the others. I suspect this has something to do with Python's random module not being too speedy, but it's a fact. Even the simple linear approach is ~25% faster on long lists.
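To illustrate the streaming property mentioned above, here is a sketch that consumes (item, weight) pairs from any iterable; the pairs-based signature is my own variation, not from the article:

```python
import random

def weighted_choice_stream(pairs):
    # pairs: any iterable of (item, weight); neither the total weight nor
    # the number of items needs to be known up front.
    total = 0.0
    winner = None
    for item, w in pairs:
        total += w
        if random.random() * total < w:
            winner = item
    return winner
```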
Preprocessing
In some cases you may want to select many random objects from the same weight distribution. In these cases, the totals list of weights can be precomputed and the binary-search approach will be very fast for just selecting the numbers. Something like this can be used:
class WeightedRandomGenerator(object):
    def __init__(self, weights):
        self.totals = []
        running_total = 0

        for w in weights:
            running_total += w
            self.totals.append(running_total)

    def next(self):
        rnd = random.random() * self.totals[-1]
        return bisect.bisect_right(self.totals, rnd)

    def __call__(self):
        return self.next()
As expected, for long lists this approach is about 100 times faster than calling weighted_choice all the time. This is hardly a fair comparison, of course, but still a useful one to keep in mind when programming.
Conclusion
Here's a graph comparing the performance of the various methods presented:
The subtraction method is the fastest when you need one-off selections with different weights. If you can manage to supply a pre-sorted weights list, all the better - you'll have a nice performance gain.
However, if you need to generate many numbers from the same set of weights, it definitely pays to pre-compute the table of cumulative totals and then only use binary search for each call (the wrg method). This is by far the fastest approach.
Note: after the initial posting on 22.01, this article went through a major revision on 25.01, incorporating information provided in the comments as well as other methods and comparisons.
http://eli.thegreenplace.net/2010/01/22/weighted-random-generation-in-python/
Styles.
What Are Styles and Themes?
Styles and themes are essentially the same thing: a collection of properties. These properties can be anything from button color to the "wrap content" attribute or the size of your text. The crucial difference is how they’re applied to your project:
- A style is applied to a View.
- A theme is applied to individual activities or an entire application.
Why Should I Use Themes and Styles?
Implementing styles and themes has several benefits.
- Efficiency: If you’re going to be using the same collection of attributes throughout your application, defining them in advance turns this:

<TextView android:

Into this:

<TextView style="@style/disclaimerFont" android:

- Consistency: Defining a style or theme helps to ensure a consistent look and feel.
- Flexibility: When you inevitably come to tweak your UI, you only need to touch the code in one location. This change is then automatically replicated, potentially across your entire application. Styles and themes give you the freedom to quickly and easily update your UI, without touching previously-tested code.
Step 1: Create a Custom Style
Defining and referencing custom styles in Android is similar to using string resources, so we’ll dive straight in and implement a custom style.
Open the res/values/styles.xml file. This is where you’ll define your styles.
Ensure the styles.xml file has opening and closing "resource" tags:
<resources> </resources>
Give your style a unique identifier. In this tutorial, we’ll use "headline":
<style name="headline">
Add your attributes and their values as a list of items.
<item name="android:textStyle">bold</item>
Once you’ve finished adding items, remember the closing tag:
</style>
This is the custom style we’ll use in this tutorial:
<resources> <style name="headline"> <item name="android:layout_width">fill_parent</item> <item name="android:layout_height">wrap_content</item> <item name="android:typeface">monospace</item> <item name="android:textColor">#ff0000ff</item> <item name="android:textStyle">bold</item> <item name="android:textSize">20dp</item> </style> </resources>
Step 2: Apply Your Style
Applying a style to a View is easy. Open your layout file, and then locate the View and add:
<TextView style="@style/headline" android:
Tip: Note the missing ‘android:’ XML prefix. This prefix is omitted because the "headline" style isn’t defined in the android namespace.
Boot up the emulator and take a look at your custom style in action.
Step 3: Examine the Predefined Styles
You’ve seen how easy it is to define and apply a custom style, but the Android platform features plenty of predefined styles, too. You can access these by examining the Android source code.
- Locate the Android SDK installed on your hard drive and follow the path: platforms/android/data/res/values
- Locate the ‘styles.xml’ file inside this folder. This file contains the code for all of Android’s predefined styles.
Step 4: Apply a Default Style
Pick a style to apply. In this example, we’ll use the following:
Return to your layout file, and add the following to your View:
<TextView android:
Experiment with other default styles to see what different effects can be achieved.
Step 5: Create a themes.xml File
Now that you’re familiar with custom and default styles, we’ll move onto themes. Themes are very similar to styles, but with a few important differences.
Before we apply and define some themes, it’s a good idea to create a dedicated themes.xml file in your project’s "Values" folder. This is especially important if you’re going to be using both styles and themes in your project.
To create this file:
- Right-click on the "Values" folder.
- Select "New", followed by "Other".
- In the subsequent dialog, select the "Android XML Values File" option and click "Next".
- Enter the filename "themes" and select "Finish".
Step 6: Apply a Default Theme
Unlike styles, themes are applied in the Android Manifest, not the layout file.
Pick a theme to work with by opening the ‘themes’ file in platforms/android/data/res/values and scrolling through the XML code. In this example, we’ll use the following:
Open the Android Manifest file. Depending on the desired scope of your theme, either apply it to:
A single activity:
<activity android:theme="@android:style/Theme.NoTitleBar"
Or the entire application:
<application android:theme="@android:style/Theme.NoTitleBar"
Finally, check how this looks in the emulator:
Step 7: Cut Corners Using Inheritance
Although you can define custom themes from scratch, it’s usually more efficient to extend one of the Android platform’s predefined themes using inheritance. By setting a “parent” theme, you implement all of the attributes of the predefined theme, but with the option of re-defining and adding attributes to quickly create a tailor-made theme.
- In values/themes.xml, set the unique identifier as normal, but specify a “parent” theme to inherit from:

<style name="PinkTheme" parent="android:style/Theme"> </style>

Tip: Check platforms/android/data/res/values/themes for parent themes to extend.

- Define each attribute that’s being added or modified. In this example we’re implementing the default ‘Theme’ but with a new font colour:

<style name="PinkTheme" parent="android:style/Theme"> <item name="android:textColorPrimary">#FF69B4</item> </style>

- Open the Android Manifest and locate either the application or activity tag.
- Apply your theme:

<application android:

- As always, check your work in the emulator.
Conclusion
In this tutorial, we covered using the Android platform’s predefined styles and themes as well as creating your own. If you want to learn more about what’s possible with themes and styles, the easiest way is to spend some time browsing through the corresponding files in the platforms/android/data/res/values folder. Not only will this give you an idea of the different effects that can be achieved, but it’s also a good way to find parents that can be extended using inheritance.
http://code.tutsplus.com/tutorials/android-sdk-exploring-styles-and-themes--mobile-15087
deploy rest web services on weblogic
Hi all, I'm new to WebLogic and EJB 3; I've published a RESTful web service using EJB3:

package rest;

import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Stateless
@Path(value="/fattura")
public class FatturaServiceBean {

    @GET
    @Produces(value="text/plain")
    public String generaProssimoNumero(){
        return "1234";
    }
}
As you can see it's very easy and it works, but I can call it
Now I'm asking why WebLogic has added the word "resources" to the URI; TomEE doesn't add the word "resources" to the URI. Is that a feature I can customize?
And also, is there a way to tell WebLogic to print in the log every web service installed on it when it starts up?
Thanks very much.
https://community.oracle.com/tech/apps-infra/discussion/4486970/deploy-rest-web-services-on-weblogic
FAQ: TeleUSE GUI Builder
From: mahesh@jaguNET.com (B.G. Mahesh)
Newsgroups: comp.windows.ui-builders.teleuse, comp.windows.x, comp.windows.x.motif
Subject: FAQ: TeleUSE GUI Builder
Date: 18 Mar 1997 11:16:28 -0500
Message-ID: <5gmf4s$occ@shado.jaguNET.com>
Reply-To: mahesh@mahesh.com
Summary: Frequently Asked questions for TeleUSE GUI Builder
Keywords: FAQ TeleUSE Motif GUI Builder
Archive-name: ui-builders/TeleUSE
Last-Modified: March 18, 1997
CVS Version: $Id: TeleUSE.FAQ,v 1.2 1997/03/18 16:15:45 mahesh Exp $

Welcome to TeleUSE GUI Builder FAQ :-)

Currently the FAQ is being maintained by B.G. Mahesh [mahesh@mahesh.com]

Frequency: This will be posted on the first of every month on comp.windows.x, comp.windows.x.motif, comp.windows.ui-builders.teleuse and the TeleUSE mailing list.

Individuals are encouraged to submit corrections, questions and answers to mahesh@mahesh.com directly. In many answers below, submitters are noted in parentheses at the beginning of comments. (Comments may be slightly edited.) Myself and all contributors claim no responsibility for the accuracy of the information in the FAQ. The FAQ is not meant to be specific technical advice. It is only a starting point. I am not responsible for the material you read on all the links on this page.

Many FAQs, including this one, are available via FTP on the archive site rtfm.mit.edu in the directory pub/usenet/news.answers/ui-builders. To get this FAQ by E-mail, you should send a message to mail-server@rtfm.mit.edu with "send usenet/news.answers/ui-builders/TeleUSE" in the body of the message. Make sure you don't have a signature in the body of the message.

The ASCII version of the FAQ can be downloaded from rtfm.mit.edu:pub/usenet/news.answers/ui-builders/TeleUSE

The HTML version of the FAQ can be accessed from

This article includes answers to the following questions.
Questions marked with a + indicate questions new to this issue; those with significant changes of content since the last issue are marked by *:

Table of Contents
-----------------
Q: What is TeleUSE ?
Q: Where can I get the product information from?
*Q: Is there a TeleUSE mailing list? If so, how do I subscribe to this list?
Q: Is there a TeleUSE Usenet newsgroup?
Q: What other FAQs should I refer to ?
Q: On what platforms is TeleUSE available?
Q: To what other platforms is Thomson planning to port TeleUSE?
Q: Does TeleUSE generate C and/or C++ ?
Q: Does TeleUSE generate Ada?
Q: Do I need the TeleUSE library to execute an application?
Q: Is there a D-mode for emacs/lemacs/xemacs ?
Q: What versions of Motif do I need to use with TeleUSE?
Q: When I call "create widget" in D using the optional parameters for another display and screen, the widgets appear on my default display/screen. How can I get "create widget" to create widgets on another display?
Q: In an attribute window, what is the meaning of a gray setting marker ?
Q: Does VIP allow you to add comments to a particular template ? If so, how?
Q: What is the purpose of a user-defined attribute ?
Q: How do you use the Drag-On box? What is the purpose of the Drag-On box in the value windows (colors, pixmaps, fonts) ?
Q: What is a D event field? How can it be accessed in D ?
Q: What is the difference between a Devent and a rule?
Q: Why is it that the Devents attached to my "createCallback" attributes aren't always sent?
Q: Where can you look up the widget class specific pre-defined event fields ?
Q: What is the scope of a D event field for a given D event?
Q: If there is no callback available for a particular event of interest, how can we attach a D event to it?
Q: What are the valid UDA types that I can define?
Q: Is there a way in d-code to check for the existence of a user-defined attribute before accessing it ?
Q: How to access an UDA in C?
Q: How do I send a D event from C ?
Q: Can any C routine be called from D ? What are the limitations?
Q: What is the purpose of the 'opaque' type?
Q: Are type conversions in D the same as type casting in C ?
Q: Is it possible to declare a D event that is global across more than one D modules ?
Q: Is it possible to declare a D variable that is global across more than one D module?
Q: What 3 things must be true in order for a widget to be visible ?
Q: What does AIM stand for?
Q: How can the UI builder generate AIM file?
Q: What does uxb guess do?
Q: If I have a string in the uxb.conf file that matches my platform name such as 'sun', uxb does not accept it. How can I get around this?
Q: What is the difference between 'c' and 'pure-c' mode ?
Q: What is the difference between LANGUAGE C and LANGUAGE KRC ?
Q: How can I allow for the ehdb file to be in another directory besides the current working directory?
Q: If I have a string in D that represents the name of a D event, how can I send the D event?
Q: Can I use the string_list operators directly on a C global variable of type 'char **'?
Q: How can I set resources on the eht help windows?
Q: When I try to turn off the file list in the XmFileSelectionBox by setting 'showList' to false, I get an empty square window in the upper left corner of the FileSelectionBox. How can I remove?
Q: How can I specify the geometry of an application from the command-line? When I use the -geometry option with a TeleUSE generated application (e.g., <app> -geometry 600x800+50+50), the option is ignored.
Q: Which 3rd party widgets have been integrated with TeleUSE?
Q: Can I use the xm_string_list operations directly on the resources of the XmList ?

Questions & Answers
-------------------
Q: What is TeleUSE ?
A: [From B.G. Mahesh, mahesh@mahesh.com]
TeleUSE is a true UIMS (User Interface Management System). A UIMS is designed to manage the graphical user interface efficiently during the life cycle of an application.
TeleUSE is one of the leading Motif GUI builders in the industry today.

Q: Where can I get the product information from?
A: [From Marianne Worley, mworley@thomsoft.com]
Marianne Worley
Aonix (formerly Thomson Software Products)
10251 Vista Sorrento Parkway, Suite 300
San Diego, CA 92121
Tel: 619-457-2700 x244
Fax: 619-452-2117
E-mail: guiinfo@thomsoft.com
Web:

*Q: Is there a TeleUSE mailing list? If so, how do I subscribe to this list?
A: [From B.G. Mahesh, mahesh@mahesh.com]
Yes, there is a TeleUSE mailing list. Send an email to TeleUSErs-request@sd.aonix.com with the Subject set to "subscribe your_email_address". Contributions to the mailing list should be sent to newtu@sd.aonix.com

Q: Is there a TeleUSE Usenet newsgroup?
A: [From B.G. Mahesh, mahesh@mahesh.com]
Yes. The name of the group is comp.windows.ui-builders.teleuse

Q: What other FAQs should I refer to ?
A: [From B.G. Mahesh, mahesh@mahesh.com]
Subject: X windows
FTP : rtfm.mit.edu:pub/usenet/news.answers/x-faq
URL :
Subject: Motif
FTP :
URL : MW3: Motif on the World Wide Web

Q: On what platforms is TeleUSE available?
A: [From Rhoda, rhoda@thomsoft.com]
----------------------------------------------------------------
Platform                   OS version          TeleUSE version
----------------------------------------------------------------
DecAlpha AXP 3000/500      OSF1 v3.2           3.0.2
DecAlpha AXP 3000/500      OpenVMS 6.1         3.0.2
DecAlpha VAX station 3100  OpenVMS 6.1         3.0.2
HP                         HP/UX 9.0.1/10.0    3.0.2
HP Softbench               HP/UX 9.03          3.0.2
IBM RS/6000                AIX 3.2.5           3.0.2
IBM RS/6000                AIX 4.1.4           3.0.2
SGI                        Irix 5.3            3.0.2
Sun                        SunOS 4.1.3         3.0.2
Sun                        Solaris 2.3         3.0.2
Sun                        Solaris 2.4/2.5     3.0.2
DG AviiON                  GX/UX 5.4           3.0.2
Intel x86 SVr3             SCO ODT 5.0         3.0.2
Intel x86 SVr4             NCR 2.02            3.0.2
Intel x86                  NT 3.51/Win 95      TU/Win 3.0

Q: To what other platforms is Thomson planning to port TeleUSE?
A: [From Paul, paul@thomsoft.com]
HP  HP/UX 10.10/10.20  3.0.2

Q: Does TeleUSE generate C and/or C++ ?
A: [From Rhoda, rhoda@thomsoft.com]
Yes, TeleUSE generates code in C and C++.
You can tell the TeleUSE code generator to generate code in either KRC, ANSI-style C, or C++. If you are interested in C++ code, you can tell the code generator to create C++ classes for any specified widget hierarchies (templates) that you have developed. You can also specify that widget descendants in the hierarchy should be available in the private/protected/public section of the C++ class generated. In addition, you can specify what resources for any widget in the hierarchy will be available via get/set member functions in the generated C++ class.

Q: Does TeleUSE generate Ada?
A: [From Rhoda, rhoda@thomsoft.com]
Ada code can also be generated using the TeleUSE/Ada product. This product will generate Ada code for the presentation layer (pcd files), the dialog layer (D modules) and all other intermediate source files (e.g. the main program). The TeleUSE/Ada product, like the TeleUSE product, allows you to call Ada subprograms from D. TeleUSE/Ada is available for the following platforms/compilers:

  Solaris 2.3  (AdaWorld/Verdix/RISCAda)
  SunOS 4.1.3  (AdaWorld)
  HPUX 9.03    (AdaWorld)
  SCO ODT 3.0  (AdaWorld/Verdix)

Q: Do I need the TeleUSE library to execute an application?
A: [From Rhoda, rhoda@thomsoft.com]
If system A and system B are the same platform/same OS, then if you link the application statically, you can move the executable to the other machine that does NOT have TeleUSE libraries and execute it there. By default, the TeleUSE libraries will be required to re-build the application. So if you need to move to another platform, you need to have the TeleUSE libraries available on that target. You can purchase them from Thomson Software Products if the target is a platform that we already support, or you can build them yourself, since the source code to the runtime libraries is included free with TeleUSE.
Alternatively, you can tell the TeleUSE code generator to generate code that is NOT dependent on any runtime libraries, but in order to use this 'pure-c' mode you will not be able to take advantage of some of the TeleUSE features, such as D, C++, and UserDefinedAttributes, in your application.

Q: Is there a D-mode for emacs/lemacs/xemacs ?
A: [From B.G. Mahesh, mahesh@mahesh.com]
You should be able to find one in $TeleUSE/TeleUSE/lib/emacs [where $TeleUSE is the directory in which you have installed TeleUSE].

[From TJ Phan, phan@aur.alcatel.com]
I've set up emacs/xemacs syntax highlighting for d-mode. This might be of use to other people. You can download this file from

Q: What versions of Motif do I need to use with TeleUSE?
A: [From B.G. Mahesh, mahesh@mahesh.com]
TeleUSE 3.0.x requires Motif 1.2. TeleUSE 2.1.x requires Motif 1.1. The appropriate version of Motif and X is provided with TeleUSE on most platforms as a convenience to the customer.

Q: When I call "create widget" in D using the optional parameters for another display and screen, the widgets appear on my default display/screen. How can I get "create widget" to create widgets on another display?
A: [From John Goodsen, jgoodsen@radsoft.com]
There is missing information in the 3.01 documentation set (page 4-19 of the "Developing Dialog Components" manual). Before attempting to create any widgets on the display, you must open the display using "ux_define_display(...)". For example, we would first in C define the display "spot:0" with the logical name "spot" for the application "test":

  ux_define_display("test", "Test", "spot", "spot:0", NULL, 0, &argc, argv, &status);

Then we could in D create a window on this display:

  remote := create widget("spotview", nil, nil, "spot", 0);

This statement creates a widget named "spotview" on the default logical screen (0) of the remote display "spot".

Q: In an attribute window, what is the meaning of a gray setting marker ?
A: [From Thomson Software Products]
A gray setting marker indicates that the value shown is inherited.

Q: Does VIP allow you to add comments to a particular template ? If so, how?
A: [From Thomson Software Products]
If the widget is selected in the work area, you can enter text as comments in the right hand side of the status area (above the work area). These comments are saved in the pcd file as a 'comments' attribute.

Q: What is the purpose of a user-defined attribute ?
A: [From Thomson Software Products]
A User-Defined Attribute allows a widget to carry data with it. They are useful to use instead of having to maintain so many local variables in a D module.

Q: How do you use the Drag-On box? What is the purpose of the Drag-On box in the value windows (colors, pixmaps, fonts) ?
A: [From Thomson Software Products]
You can drag icon templates OR nodes from a tree OR widgets in the work area into a Drag-On box. In the Value Window, the Drag-On box can be used to add to the list of values on the left hand side of the Value Window. For example, if a template is using colors that are defined into VIP, you can drag the template into the Drag-On box for the Color Value Window and the new colors will be added to the list of colors on the left side of the Value Window.

Q: What is a D event field? How can it be accessed in D ?
A: [From Thomson Software Products]
Data carried with a D event. It is accessed in D using the form: <D event name>.<D event field>

Q: What is the difference between a Devent and a rule?
A: [From Larry Young, lyoung@dalmatian.com]
A Devent is a "signal" (not in the UNIX sense!) that something has occurred, which contains certain contextual data. It can also be thought of as a notification object (instance) that has certain data associated with it, and that when it is "sent" it will notify all "rules" that have registered with it. A Rule is simply a body of code that will be run when its condition (i.e. the part before the keyword "does") is true.
Normally, the condition contains the name of a Devent, so that when that Devent is sent, it will notify the rule and the rule will execute its body of code. Note that if multiple rules are registered to the same Devent, their order of execution is undefined.

Q: Why is it that the Devents attached to my "createCallback" attributes aren't always sent?
A: [From Larry Young, lyoung@dalmatian.com]
The problem isn't that the Devent isn't sent, it's that your rule hasn't been registered for that Devent yet. In TeleUSE, each Dmodule is initialized separately, and it is during this "init" process that each rule is registered with its specific Devent, local data is created, and the dmodule's INITIALLY rule is run. Thus, if the first Dmodule initialized is the one that creates the widgets, then the other Dmodules won't have had a chance to register their rules yet, therefore the createCallbacks appear to get lost. The simple solution is to not create widgets in your INITIALLY rule (creating and initializing local variables is ok!). Instead, do the following in one of your Dmodules (I usually create a separate Dmodule called "Main" for this and other generic Devent handlers):

  devents:
    MainInit :local [];
  locals:
    top_w : widget;
  rules:
    INITIALLY does
      send (MainInit); -- allow all INITIALLY's to run first !!
    end does;

    MainInit does
      top_w := create widget ("MyApplShell", nil, nil);
      top_w.show;
      --
      -- ... whatever else needs to be done ...
      --
    end does;
  end dmodule;

Q: Where can you look up the widget class specific pre-defined event fields ?
A: [From Thomson Software Products]
Each of the members of the callback structure is available as a D event field. These callback structures can be found in the Motif Programmer's Reference for any given widget class, under 'Callback Information', or you can use 'man' on the widget class or 'tuqref <widget class>'.

Q: What is the scope of a D event field for a given D event?
A: [From Thomson Software Products] The scope of a D event field is limited to the associated rule. It is possible to assign to another D event's D event field before sending the D event, but it is erroneous to read another D event's event field.

Q: If there is no callback available for a particular event of interest, how can we attach a D event to it?

A: [From Thomson Software Products] A translation can be used. Also an event handler can be used, which ties a C function to an X event. The event handler (C function) could 'send' a D event.

Q: What are the valid UDA types that I can define?

A: [From Larry Young, lyoung@dalmatian.com] The list provided in the Vip "Define User Attributes" window is only a subset of the possible choices ... not exactly intuitive! Basically, you can use any valid X resource type that has been registered with the XtConvert mechanism (at the time 'vip' was built!). So you can use types like:

    Alignment       (XmRAlignment)
    SiblingWidget   (TuRSiblingWidget)
    WidgetChild     (TuRWidgetChild)
    Cursor          (XtRCursor)
    XRectangleList  (TuRXRectangleList)
    IntTable        (TuRIntTable)
    StringTable     (XtRStringTable)
    Pointer         (XtRPointer)
    ... etc ...

When you define UDAs in D code (e.g. "w.define("Name", "XType");"), you may use any valid D datatype, including any "User Datatypes" defined in your AIM files, as well as all the X resource types. Note that X types don't currently work from D in TU3.0.2beta :( Also, don't use the "Callback" type listed in the UDA Type popup menu in Vip, it doesn't have a converter registered for it!

Q: Is there a way in D code to check for the existence of a user-defined attribute before accessing it?

A: [From Larry Young, lyoung@dalmatian.com] If you are using TeleUSE 3.0.2 (the final release, not the beta), you can use the predefined operation "is_defined()" on the widget datatype. Actually, it returns the type of the attribute, but if it hasn't been defined it returns "nil". And this works for normal widget attributes as well as UDAs.
So basically you can test for existence like this:

    if (my_w.is_defined("MyUDA") != nil) then
      -- the UDA exists
    end if;

If you are using an older version of TeleUSE, you'll have to call "tk_user_attr_type(widget, uda_name)" which basically does the same thing as "is_defined" does but only works for UDAs. In fact, "is_defined" uses this "tk" function internally. Also, if you want to use the "tk" function from D, you'll have to put its function definition into an AIM file. You can find the function definition in $TeleUSE/include/teleuse/tk_widops.h.

Q: How do I access and set UDAs in C?

A: [From Paul Thornton, paul@thomsoft.com] In $TeleUSE/include/teleuse/tk_widops.h you will find 2 undocumented routines. These routines allow you to set/get UDAs from C code:

    extern void tk_get_widget_attr();
    /* Widget widget;        */
    /* tu_string attr;       */
    /* tu_string * prtype;   */
    /* tu_pointer * pvalue;  */
    /* tu_status_t * status; */

    extern void tk_set_widget_attr();
    /* Widget widget;        */
    /* tu_string attr;       */
    /* tu_string rtype;      */
    /* tu_pointer value;     */
    /* tu_status_t * status; */

Example:

    tu_string uda_type;
    tu_pointer uda_value;
    tu_status_t status;

    tk_get_widget_attr(wid, "My_UDA", &uda_type, &uda_value, &status);

    /* If My_UDA is a string: */
    char *result_str;
    result_str = (char *)uda_value;

To set UDAs:

    tk_set_widget_attr(wid, "Other_UDA", XtRString, uda_value, &status);
                                         ^^^^^^^^^
                          Found in $TeleUSE/X11R5/include/StringDefs.h

Q: How do I send a D event from C?

A: [From Tony Giaccone, tgia@radix.net] Caution: The following example does NO ERROR CHECKING. This is not a wise practice and will without doubt get you into trouble. However, it is meant as an example, and as an example it has no real context. Without a context to work in, error processing is pretty difficult to manage. You on the other hand have a context. So do the error checking.

There are two steps to sending a devent from C:

1. Creating the devent instance.
2. Dispatching the event.
Here's a general purpose C function which handles dispatching any devent.

    void send_d_event(
        char *devent_name,
        /* char *dfield_name,    0 or more repetitions of these    */
        /* char *dtype_name,     3 fields                          */
        /* XtPointer data_addr,                                    */
        /* NULL)        NULL must be present, terminates the list  */
        ...)
    {
        tu_status_t status;
        ux_devent_instance devent;
        va_list arg_ptr;
        char *dfield_name;
        char *dtype_name;
        XtPointer data_addr;

        devent = ux_get_devent(devent_name, NULL, 0, &status);
        if (status.all != tu_status_ok) {
            /* Handle the error condition */
        }

        /*
        ** First parse the arguments which might have been passed into
        ** our devent. This routine handles the general case; if you wanted
        ** to handle only one D event, you could simplify this code.
        */
        va_start(arg_ptr, devent_name);
        while ((dfield_name = va_arg(arg_ptr, char *)) != NULL) {
            dtype_name = va_arg(arg_ptr, char *);
            data_addr = va_arg(arg_ptr, XtPointer);

            ux_assign_devent_field(devent, dfield_name, dtype_name,
                                   (tu_pointer)data_addr, &status);
            if (status.all != tu_status_ok) {
                /* Handle the error condition */
            }
        }
        va_end(arg_ptr);

        /* CALL ONLY ONE OF THE FOLLOWING ROUTINES */

        /*
        ** This routine queues the devent with the appropriate fields
        ** set. The event is placed into the devent queue and will be
        ** handled when it reaches the top of the queue. If you call
        ** tu_queue_event, don't forget to check the status of the call.
        */
        tu_queue_event(devent, &status);

        /*
        ** Or call this routine: it dispatches the devent immediately.
        ** There is no delay; it's like making a send(devent, 0) call in D.
        */
        tu_dispatch_event(devent);
    }  /* send_d_event */

So, given a D event called MyDEvent which has the following definition:

    MyDEvent [
      AnInt   : integer;
      AString : string;
      AStruct : Our_C_Struct;
    ];

where Our_C_Struct is a C structure defined in an AIM file.
In the calling code you would make the following call:

    send_d_event("MyDEvent",
                 "AnInt",   XtRInt,     c_int_variable,
                 "AString", XtRString,  c_char_ptr,
                 "AStruct", XtRPointer, a_c_structure,
                 NULL);

Q: Can any C routine be called from D? What are the limitations?

A: [From Thomson Software Products] If and only if the parameters and the return value can be accurately represented in D.

Q: What is the purpose of the 'opaque' type?

A: [From Thomson Software Products] The opaque type is PRIMARILY intended for holding pointers returned by C function calls. But it can hold anything that is 32 bits. If it holds an address, there is no way, in D, to reference the object that it points to.

Q: Are type conversions in D the same as type casting in C?

A: [From Thomson Software Products] Type conversion in D actually changes the underlying bit representation of the object, where casting does not.

Q: Is it possible to declare a D event that is global across more than one D module?

A: [From Thomson Software Products] Yes. Specify it in a D events file (.de file). The D event can also be declared in both D modules, but both declarations must be identical and they must not be declared as 'local'. The latter method is not recommended.

Q: Is it possible to declare a D variable that is global across more than one D module?

A: [From David Quin-Conroy] A local variable in a D module instance can be declared "exported", which makes it accessible from other D module instances. This is described in "Developing Dialog Components" page 4-58. The same is true for D events. By using the "exported" feature, there is probably no need to use .de files. The ability to 'export' local variables was introduced with TeleUSE 3.0. Although D variables are still not directly share-able, you CAN access another D module's local variables IF 1) they are exported and IF 2) you have a handle to the D module instance.

Q: What 3 things must be true in order for a widget to be visible?
A: [From Thomson Software Products] It must be realized, managed, and mapped. If a widget has no parent, though, managing it is meaningless. A widget needs to be explicitly realized ONLY if none of its ancestors have been realized or it has no ancestors. Non-shell widgets are managed automatically when they are created, so no explicit managing is necessary. TopLevelShells and ApplicationShells should be opened/closed, after they have been realized, using:

    top_shell.do_popup();   <-- opens shell
    top_shell.do_popdown;   <-- closes shell

Q: What does AIM stand for?

A: [From Thomson Software Products] Application Interface Mapping. Used for defining the C functions that are to be called from D.

Q: How can the UI builder generate an AIM file?

A: [From Thomson Software Products] Using the AIMEXTR entry in the configuration file. You must have access to the C source code in order for uxb to be able to generate the AIM file.

Q: What does uxb guess do?

A: [From Thomson Software Products] Creates a file called uxb.conf based on the files it finds in the local directory.

Q: If I have a string in the uxb.conf file that matches my platform name such as 'sun', uxb does not accept it. How can I get around this?

A: [From Rhoda, rhoda@thomsoft.com] You can put the following in uxb.conf to undefine the word 'sun' in case users need that word in the config file. Example: if a directory has xxx/xxx/sun/xxx/xxx:

    #ifdef sun
    #undef sun
    #define sun sun
    #endif

Q: What is the difference between 'c' and 'pure-c' mode?

A: [From Thomson Software Products] In 'pure-c' mode, no TeleUSE runtime calls are made from the generated C file. In this mode, no D can be used.

Q: What is the difference between LANGUAGE C and LANGUAGE KRC?

A: [From Thomson Software Products] With C, the INCLUDE section is used; with KRC the INCLUDE section is ignored. With C, all entries must be prototyped in a header file that is included, and all structure definitions must be in header files.
If you are using any new features, such as structures or globals, you need to use LANGUAGE C.

Q: How can I allow for the ehdb file to be in another directory besides the current working directory?

A: [From Rhoda, rhoda@thomsoft.com] In order to allow for the ehdb file to be in another directory you can use the DESTDIR option in the configuration file. Then run:

    > uxb            -- to build
    > uxb install    -- to move the file to the DESTDIR

The DESTDIR configuration option changes the automatically generated uxb_mainc.c so that the ehdb file is looked for in the destination directory. The user can rely on an environment variable to find the file in the directory, or the path can be fixed.

Q: If I have a string in D that represents the name of a D event, how can I send the D event?

A: [From Rhoda, rhoda@thomsoft.com] Here is an example:

    devents:
      foo :local[];
    locals:
      x: string;
      d: devent;
    rules:
      SomeCallback does
        x := "foo";
        d := self.(x);
        send(d);
      end does;

      foo does
        printf("In foo\n");
      end does;

If you have a UDA of type string that represents the name of a D event, you can use:

    x := SomeCallback.source_widget.StringUDA;
    d := self.(x);
    send(d);

    -- or simply:
    d := self.(SomeCallback.source_widget.StringUDA);
    send(d);

Q: Can I use the string_list operators directly on a C global variable of type 'char **'?

A: [From Rhoda, rhoda@thomsoft.com] No. Globals can only be modified by direct assignment. Dot operators will not work on globals directly. If you want to use the string_list operations, you must use a temporary local variable and then assign it to the global.
Example: If 'global_string_list' is declared in an AIM file as:

    TYPE "char **" <--> string_list;
    ENTRY global_string_list : "char **";

In D use:

    -- build the string_list:
    local_string_list := create string_list();
    local_string_list.insert("item1",0);
    local_string_list.insert("item2",0);
    local_string_list.insert("item3",0);
    local_string_list.insert("item4",0);

    -- assign to the global:
    global_string_list := local_string_list;

Q: How can I set resources on the eht help windows?

A: [From Rhoda, rhoda@thomsoft.com] You can set resources for the Help Windows in a resource file. The name of the eht help window TopLevelShell widget is 'eht'. To set the iconPixmap of the help windows in a resource file use:

    *eht*iconPixmap: <bitmap file name>

To set the fontList for the buttons in the help window use:

    *eht*fontlist: <font specification>

To set the fontlist for a help window frame use:

    *eht*help_text.fontlist: <font specification>

Q: When I try to turn off the file list in the XmFileSelectionBox by setting 'showList' to false, I get an empty square window in the upper left corner of the FileSelectionBox. How can I remove it?

A: [From Rhoda, rhoda@thomsoft.com] The problem is occurring because the XmScrolledWindow parent of the file list needs to be unmapped. To remove the problem, use:

    dmodule main
    #include <teleuse/teleuse.h>
    ...
    INITIALLY does
      top := create widget ...
      list_wid : widget := XmFileSelectionBoxGetChild(top->FSB, XmDIALOG_LIST);
      list_wid.parent.mapped := false;
      --         ^^^^^^ set the XmScrolledWindow, not just the XmList
    end does;

Also use in uxb.conf (so that teleuse/teleuse.h can be found):

    DINCLUDEDIR $TeleUSE/include

Q: How can I set the background of a scrolled window?
A: [From Rhoda, rhoda@thomsoft.com] You can set the background of the WHOLE scrolledWindow in D using:

    top->ScrolledWindowClipWindow.background := top->scrolledWindow.background;
    top->VertScrollBar.background := top->scrolledWindow.background;
    top->HorScrollBar.background := top->scrolledWindow.background;

The internal widget names for a scrolledWindow are:

    ScrolledWindowClipWindow
    VertScrollBar
    HorScrollBar

Q: How can I specify the geometry of an application from the command-line? When I use the -geometry option with a TeleUSE generated application (e.g., <app> -geometry 600x800+50+50), the option is ignored.

A: [From Larry Young (lyoung@dalmatian.com) and Rhoda Quate (rhoda@thomsoft.com)] The "-geometry" command-line option is defined by Xt as ".geometry", not "*geometry". When TeleUSE builds an application, it places an "invisible" shell above all your shells to provide a consistent way of naming resources; therefore, your "-geometry" specification is being applied to this invisible shell instead of to the one you had intended. Instead, use the -xrm option (e.g., <app> -xrm "*geometry: 600x800+50+50"). The downside of this approach is that it will affect ALL TopLevelShells in your application for which the size/position is not hard-coded! You can limit this problem if you know the name of the shell at the top of the widget tree. In that case, replace the "*" with "*widget_name." (e.g., <app> -xrm "*AppShell.geometry: 600x800+50+50").

Q: Which 3rd party widgets have been integrated with TeleUSE?

A: [From Rhoda Quate, rhoda@thomsoft.com]

Name        : XRT widgets
Vendor      : KL Group
Description : Integrations are maintained by Thomson Software for:
              XRT/3d    - The easiest way to build informative and dynamic
                          3-D charts and graphs into Motif applications.
              XRT/field - The easiest way to build professional data-entry
                          fields into Motif applications.
              XRT/gear  - (integration promised soon) The essential
                          collection of add-on widgets and utilities for
                          Motif.
              XRT/graph - The easiest way to build powerful 2-D charts and
                          graphs into Motif applications.
              XRT/table - The essential multi-purpose widget for displaying
                          and editing lists, tables and forms.
Email       : info@klg.com
Home Page   :
Download Integration:

Name        : Xbae
Vendor      : Public Domain
Description : XbaeMatrix is a Motif widget which presents an editable array
              of string data to the user in a scrollable table similar to a
              spreadsheet. XbaeCaption is a simple Motif manager widget used
              to associate an XmLabel (caption) with its single child.
Source      :
Download Integration:

Name        : Tuw Widgets
Description : TuwItemBox  - Container that manages special kinds of objects
                            called Tuw items. A Tuw item can consist of an
                            icon and an optional name.
              TuwItemMenu - Designed to handle the common situation requiring
                            a dynamic menu, one in which the number of items
                            varies as the application executes.
              TuwSource   - Displays a text file with line numbers in front
                            of each line. The TuwSource widget was developed
                            to support source display in a debugger.
              TuwTree     - Displays a tree structure of nodes.
              TuwTable    - Allows you to build tables with different kinds
                            of objects.
Source      : Included with TeleUSE in $TeleUSE/conf/examples/tuw

Name        : DT Widgets
Description : SpinButton - A widget that allows you to cycle through a list
                           of values in the forward or reverse direction.
              ComboBox   - A widget that contains a text and an arrow which
                           posts a list of options for selection upon button
                           press.
Source      : Included with TeleUSE in $TeleUSE/conf/examples/cde

Name        : EnhancementPak Widgets
Vendor      : ICS
Description : A collection of general purpose widgets, consisting of
              controls, geometry managers, and resource editors.
Email       : info@ics.com
Home Page   :

Q: Can I use the xm_string_list operations directly on the resources of the XmList?

A: [From B.G. Mahesh, mahesh@mahesh.com and Rhoda, rhoda@thomsoft.com] Is the following code valid?
    top->nameScrolledList.selectedItems.rewind;
    if (top->nameScrolledList.selectedItems.more) then
      sv := top->nameScrolledList.selectedItems.next;
    end if;

Syntactically it is correct, but you should not do such operations (like rewind) directly on the scrolled list. Load top->nameScrolledList.selectedItems into a local variable before operating on it. This is because this expression invokes a function call and stores the result in a temporary variable. Operating on the temporary variable is useless.

------------------------------------------------------------------
B.G. Mahesh                 | Home Page:
Internet Consultant         | Maintainer of the TeleUSE GUI Builder FAQ
                            | Email: mahesh@mahesh.com
Send corrections/additions to the FAQ Maintainer: mahesh@mahesh.com
Last Update July 06 2009 @ 00:07 AM
After fiddling with Project Euler, I started to wonder how to compute the sum of divisors in the most efficient way (albeit, most of the problems in PE do not require any super-sophisticated solution). The trivial approach is doing division and adding up the divisors sequentially. This may be good enough for small N, but as N grows this approach gives an infeasible time complexity.

The next thing one may think of is to generate a list of prime numbers (using a sieve) and, for each number, try all the primes in the list. This is faster than the trivial approach, but still costs more than the sieving itself. Since the sieve requires time O(N log log N), we do not need to do better than that. But can we match it? Turns out we can.
First, we need to establish a simple number theoretical fact: the divisor-sum function σ has the properties σ(p^k) = 1 + p + p² + ⋯ + p^k for a prime p (and, thus, σ(p^k) = (p^(k+1) − 1)/(p − 1)) and σ(m·n) = σ(m)·σ(n) whenever gcd(m, n) = 1, i.e., σ is multiplicative.
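These two properties are easy to sanity-check by brute force. The helper below (sigma_naive is a name introduced here, not part of the post's code) computes σ directly from the definition:

```python
def sigma_naive(n):
    # sigma(n): add every d in 1..n that divides n
    return sum(d for d in range(1, n + 1) if n % d == 0)

# prime power: sigma(2^3) = 1 + 2 + 4 + 8
assert sigma_naive(8) == 1 + 2 + 4 + 8

# multiplicativity: sigma(m*n) = sigma(m) * sigma(n) when gcd(m, n) = 1
assert sigma_naive(8 * 9) == sigma_naive(8) * sigma_naive(9)
```

The multiplicativity check uses 8 and 9, which are coprime; for non-coprime arguments (e.g. 2 and 4) the product rule does not hold.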
The solution is quite simple; consider the first element in the prime set P. That is q = 2. We split the range into even numbers, and for each even i we compute the partial sum 1 + 2. Then for q² = 4, we add 4 to every multiple of 4 (giving 1 + 2 + 4), and so forth up to the largest power q^j ≤ N. This requires N/q + N/q² + N/q³ + ⋯ operations.
Running it for all powers and primes, we use in total Σ_{q ∈ P} (N/q + N/q² + ⋯) operations. Now, we have N/q + N/q² + ⋯ ≤ N/(q − 1). So the complexity is essentially Σ_{q ∈ P, q ≤ N} N/q. This is a harmonic series over the primes and, treating it as an integral, we have Σ_{q ≤ N, q prime} 1/q = O(log log N). Of course, we do only generate primes up to N, so the final complexity is O(N log log N). So we are done 🙂
As always, let us look at a Python representation:
def sum_of_divisors(N):
    N += 1  # extend limit to inclusive
    # primetools is the author's sieve helper; any list of primes <= N works
    primes = primetools.sieve(N)
    numbers = [1]*N
    for q in primes:
        tmp = {}
        mult = q
        while mult <= N:
            # add the power mult = q^j to every multiple of it
            for i in range(mult, N, mult):
                if i in tmp:
                    tmp[i] += mult
                else:
                    tmp[i] = 1 + mult
            mult *= q
        # multiply in the factor sigma(q^j) for each number
        for j in tmp:
            numbers[j] *= tmp[j]
    return numbers  # numbers[i] holds sigma(i), so callers can index by i
Below is a plot of the timing of the code. Since the code stores data in memory, some undesired effects come into play for larger values (due to the need of shuffling data back and forth), so those values are not present in the graph.
Bounded by the sieving complexity, the algorithm cannot be any faster using the sieve. Can we do better? Note that the algorithm returns a list of sums. If we are really interested in the sum of the sums, we may instead compute Σ_{i=1}^{N} i·⌊N/i⌋, which is O(N) and uses basically no memory. This can be written as

def sum_of_sum_of_divisors(N):
    div_sum = 0
    for i in range(1, N):
        div_sum += i * (N / i)  # Python 2: / is integer division here
    return div_sum
How does it work? For each i, it computes the cardinality of the set of numbers divisible by i, i.e., |{ j ≤ N : i divides j }|, which is exactly ⌊N/i⌋. This contributes with i for every element, so the total contribution is i·⌊N/i⌋. Doing it for all possible divisors, we end up with the sum of sums of divisors.
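The counting argument can be checked against brute force for small N. The sketch below uses inclusive bounds and Python 3 style integer division (//) for clarity; the function names are mine, not from the post:

```python
def sigma_naive(n):
    # direct divisor sum
    return sum(d for d in range(1, n + 1) if n % d == 0)

def total_divisor_sum(N):
    # each divisor i appears in floor(N/i) multiples <= N,
    # contributing i every time it appears
    return sum(i * (N // i) for i in range(1, N + 1))

N = 100
assert total_divisor_sum(N) == sum(sigma_naive(k) for k in range(1, N + 1))
```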
Turns out, we can actually do it in O(√N). Here is how:

import math

dcached = {}  # memoization cache for repeated calls

def square_root_time_sum_of_sum_of_divisors(N):
    global dcached
    if N in dcached:
        return dcached[N]
    div_sum, i = 0, 1
    q = int(math.sqrt(N))
    # first part: i up to sqrt(N), term by term
    while i <= q:
        div_sum += (i * (N / i))
        i += 1
    # second part: group the remaining terms by the value of N / i,
    # summing each group as an arithmetic series
    i = 1
    while i <= N / (q + 1):
        m = N / i
        k = N / (i + 1)
        div_sum += (i * (m * (m + 1) - k * (k + 1)) / 2)
        i += 1
    dcached[N] = div_sum
    return div_sum
Consider the following picture. In the left table, row i contains a contribution of i for every multiple of i up to N, so the row sums to i·⌊N/i⌋; in the right table, row j contains every divisor of j, so the roles of divisor and multiple are swapped. Clearly, it is the same table/matrix but transposed, so the sums of the elements are the same. As we notice, the rows become sparser for increasing i. Think about it! What if we were to compute only the dense rows directly and sum the sparse remainder in closed form as arithmetic series? The point where we stop computing left and start to compute right can be chosen arbitrarily, but the optimal choice is √N. Then the total running time is O(√N).
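The transposition argument boils down to the standard quotient-grouping trick: ⌊N/i⌋ takes only O(√N) distinct values, and for each value the matching i form a contiguous block that can be summed in closed form. A self-contained sketch, equivalent in spirit to the post's function (the name is mine):

```python
def total_divisor_sum_sqrt(N):
    # computes sum_{i=1..N} i * floor(N/i) in O(sqrt(N)) steps by
    # grouping all i that share the same quotient q = floor(N/i)
    total, i = 0, 1
    while i <= N:
        q = N // i              # current quotient value
        j = N // q              # largest i with the same quotient
        # arithmetic series: sum of i..j, each weighted by q
        total += q * (i + j) * (j - i + 1) // 2
        i = j + 1
    return total

# sigma(1) + sigma(2) + ... + sigma(10) = 87
assert total_divisor_sum_sqrt(10) == 87
```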
Example: Project Euler 23
This is a simple problem, where we are supposed to compute the sum of all the numbers which cannot be expressed as a sum of two abundant numbers. Using the above code, we can solve the problem in about 100 ms using pypy:
def abundant_numbers(N):
    numbers = sum_of_divisors(N)
    abundants = []
    for i in range(0, N):
        if 2*i < numbers[i]:
            abundants.append(i)
    return abundants[1:]

N = 28123
abundants = abundant_numbers(N)
s = [0]*N
for x in abundants:
    for y in abundants:
        ls = x + y
        if ls < N:
            s[ls] = 1
        else:
            break
print sum(i for i in range(0, N) if not s[i])
Example: Project Euler 439
This problem is quite nice, because it requires a very well-thoughtout approach for it to be feasible solving it (it is also among the hardest problems on PE). Here is the question (notation slightly altered):
Let d(p) be the sum of all divisors of p. We define the function

    S(N) = Σ_{1 ≤ i, j ≤ N} d(i · j).

For example, S(3) = d(1·1) + d(1·2) + d(1·3) + d(2·1) + d(2·2) + d(2·3) + d(3·1) + d(3·2) + d(3·3) = 59. You are given the values of S(10³) and S(10⁵) mod 10⁹. Find S(10¹¹) mod 10⁹.

So, running our algorithm would be fine for lower values, but if we would try to store 10¹¹ values in memory, we will run into a wall very fast (this is about 0.8 TB assuming that each number is a 64-bit integer – note that the modular reduction can be done in every step). Clearly, we need to use the memoryless approach.
Let us first disregard the fact that σ(a)·σ(b) = σ(a·b) only holds when gcd(a, b) = 1. Then we would simply have S(N) = (Σ_{k ≤ N} σ(k))². In the case gcd(a, b) = 1, the term σ(a·b) is counted correctly, so no correction is needed there.

OK, so what about gcd(a, b) > 1? We can estimate how many pairs in the product there are for which gcd(i, j) = g holds. Take, for instance, g = 2. We have i = 2i′ and j = 2j′ with i′, j′ ≤ ⌊N/2⌋, so there are at most ⌊N/2⌋² such pairs. In particular, these are exactly the pairs counted by S(⌊N/2⌋). Let us consider the pairs with common factor g. Using the identity σ(i)·σ(j) = Σ_{d | gcd(i,j)} d·σ(i·j/d²), the (negative) contribution from this set is g·S(⌊N/g⌋).

Running over all possible gcds (all numbers 2 ≤ g ≤ N), we get

    S(N) = (Σ_{k ≤ N} σ(k))² − Σ_{g=2}^{N} g·S(⌊N/g⌋).
The following code computes the smaller results in a matter of milliseconds, but for N = 10¹¹ it takes too long (as we are aiming for the sub-minute mark):

cached = {}
dcached = {}
mod = 10**9

def S(N):
    global cached
    if N in cached:
        return cached[N]
    dv = square_root_time_sum_of_sum_of_divisors(N)
    S_sum, i = 0, 2
    while i <= N:  # max recursion depth log2(10^11) ~ 33
        S_sum += (i * S(N / i)) % mod
        i += 1
    cached[N] = (dv * dv - S_sum) % mod
    return cached[N]
OK. Further optimization needed… note the identity

    σ(i·j) = Σ_{d | gcd(i,j)} μ(d)·d·σ(i/d)·σ(j/d).

Using the same approach as before, we can derive that

    S(N) = Σ_{d=1}^{N} μ(d)·d·(Σ_{k ≤ ⌊N/d⌋} σ(k))²,

where μ is the Möbius function, defined by

    μ(n) =  1  if n is squarefree with an even number of prime factors,
    μ(n) = −1  if n is squarefree with an odd number of prime factors,
    μ(n) =  0  if n is divisible by a square greater than 1.

We can compute this sum quickly: the inner sums only need to be evaluated at the O(√N) distinct quotient values ⌊N/d⌋, which the memoized square-root-time function above handles. The Möbius function can be pre-computed with a sieve, like so:
def mobius(N):
    L = [1]*(N+1)
    i = 2
    while i <= N:
        # if i is not sq. free, then no multiple is
        if L[i] == 1 or L[i] == -1:
            L[i] = -L[i]
            sq = i * i
            j = i + i
            while j <= N:
                # we use 2 to mark non-primes
                if abs(L[j]) == 2:
                    L[j] = -L[j]
                else:
                    L[j] = -L[j]*2
                j += i
            j = sq
            while j <= N:
                L[j] = 0
                j += sq
        i += 1
    i = 2
    while i <= N:
        if abs(L[i]) == 2:
            L[i] = L[i] / 2
        i += 1
    return L
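The sieve can be sanity-checked against a naive Möbius function computed by trial factorization (mobius_naive is a name introduced here, not from the post):

```python
def mobius_naive(n):
    # mu(n) = 0 if a squared prime divides n,
    # else (-1)^(number of distinct prime factors)
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0        # squared prime factor
            result = -result    # one more distinct prime factor
        d += 1
    if n > 1:
        result = -result        # one leftover prime factor
    return result

# mu over 1..10: 1, -1, -1, 0, -1, 1, -1, 0, 0, 1
assert [mobius_naive(n) for n in range(1, 11)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```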
You haven’t heard from me for a while because I’ve been taking a bit of a break for the past month to help out at home with our new daughter. I returned to work on Monday. As I’m getting back into the swing of things, I thought it would be a great time to write a new blog post. I apologize for the length – there’s a lot to say and I didn’t really have the time to break it into chunks and dribble it out.
Expanding our Dogfooding
As you know the VSTS team has been using TFS internally for well over a year. John Lawrence has been blogging various statistics from our dogfood server for quite some time. I think I’ve also mentioned that after we released V1, our plan was to roll out TFS to the rest of the Developer Division. Well, we’ve been working on that for the past couple of months – getting internal tools to work with TFS, updating scripts, porting data, migrating teams, etc. While we’re not nearly done, we’ve made some great progress and I thought I’d share it with you.
Getting all of DevDiv to use TFS is a significant challenge. A single branch of our source code is over 600,000 files today. We are in the process of adding all of our test source to our branches (historically they have been in a separate repository). This will add more than 1,000,000 more files to each branch. By the time all is said and done, considering all of the branches we use, the TFS server will contain 100,000,000 files or more (no, that’s not a typo :-)). This will make it one of the (if not the) largest source code databases on the planet. There are other systems in the world with more source (for example the Windows NT source base is bigger, however it is broken up across approximately 9 different version control databases).
This is an ambitious goal to take on within a month of shipping TFS. Today we have about 1,000 users and over 13,000,000 files in our TFS installation. We’ve learned a great deal from the exercise and have plans to continue growing both the number of users and number of branches over the next 6 months or so. Over that time, I’m sure we’ll learn even more. We simulated very large databases in our labs before we shipped V1 but there’s nothing quite like seeing what happens in a production environment where the unbridled masses are unleashed :-).
Current stats
Here’s a snapshot of the overall server stats that John has been publishing for a while so that you can see how it has grown in all of the dimensions.
Users
Recent users: 806
Users with assigned work items: 1,023
Version control users: 1,229
Work items
Work items: 94,695
Areas & Iterations: 5,897
Work item versions: 721,531
Attached files: 24,569
Stored Queries: 9,160
Version control
Files/Folders: 13,137,178/1,677,668
LocalVersion: 82,217,411
Total compressed file sizes: 171.8GB
Workspaces: 2,376
Shelvesets: 3,711
Total checkins: 72,223
Pending changes: 106,113
Commands (last 7 days)
Work Item queries: 22,085
Work Item updates: 5,909
Work Item opens: 35,072
Gets: 21,088
Downloads: 10,705,779
Checkins: 7,216
Uploads: 192,832
Shelves: 749
Our experience
As I said, we’ve learned a lot from the exercise so far and have made several product fixes as a result. We are rolling all of the fixes we’ve made into a service pack that we will make available publicly later this year (please don’t ask me to be more concrete than that as we are still in the midst of planning the release :-)).
The good news is that with relatively minor changes the system is performing well under impressive load and scale. Most of the growth pains we’ve had have been around version control data. As you can see above the number of files under version control has grown by almost a factor of 10 since John last reported it in February. I’d also like to point out that although I’m going to describe some issues we’ve dealt with in the roll out to DevDiv, nowhere in the process have we experienced a single data corruption of any type.
As we started growing the amount of data in the server we ran into some service issues. These service issues were caused by operations taking way longer than they should and blocking other people from using the server for periods of time. With the patches and operational changes we’ve made over the past month, we’ve restored the server’s level of performance and availability.
This is what dogfooding at Microsoft is all about. It allows us to really push the system under real production conditions with massive amounts of data and significant load. We can examine carefully how the server is behaving. We can experiment (albeit carefully) with alternative approaches to really push the scale of the system. In the spirit of transparency (and hopefully to share some interesting challenges we’ve faced), I’m going to describe many of the issues we’ve hit, what we’ve done and what further we are planning on doing. I hope your take away from this is not simply that the server has had problems – every system has problems at some level of scale or load. What I hope you take away is that we are in this with our customers and are pushing the system as hard as we can and addressing the issues. All of our customers (including those internally at Microsoft) will benefit from the effort we put in as we continually use and improve the system.
What we’ve learned
Most of the issues can ultimately be traced back to the size of data in some form. There are two dimensions of size that affect the server – how much data is in the server (making tables and indexes larger, etc) and how big individual operations are (consuming memory, CPU, locks, etc). As I mentioned above – a single branch in the VS database is over 600,000 files. As we started to manipulate single branches this large – checkin all 600,000 files, merge 100,000’s of changes, etc we ran into some issues. We found that having 600,000 pending changes in a single workspace didn’t work well. As the warehouse data grew to millions and millions of files, it had some issues. Etc. Our initial approach to work around some of these problems was to break up these really large operations and do them in smaller chunks (for example – checkin 50,000 files at a time rather than 600,000). In many cases (but not all) this is a fine solution and only a minor annoyance.
We’ve completed our analysis of the underlying constraints that we were hitting and have fixed, or are working on fixes to address all of them. Here’s a summary of some of the things we have learned.
Sprocs & Query plans
We’ve made a variety of tweaks to our stored procedures to induce better query plans when we find ones that aren’t working very well. As the amount of data grows, any query plan that is not optimal can quickly go from a few milliseconds to minutes. Changes we’ve made include:
Checkin, Undo, Rename – Changed the sprocs because the performance degraded when there were 100,000’s of pending changes in a single workspace
SetPermission, SetPermissionInheritance – Fixed a performance problem that appeared when the number of files in the system got really large and the depth of the tree grew.
Get – We discovered that many of the get operations were for individual files or for very small groups (often done by automated tools). Our V1 implementation took approximately the same amount of time whether you were doing a get of a single file or an entire workspace of 10,000’s of files (ignoring any file download time). We have enhanced the get sproc now to optimize for what is actually asked for. The result is that single file gets have dropped from about 5 seconds to a few hundred milliseconds.
Merge – We found that a big part of the time spent in merge was computing what changes needed to be made to update the client. This was causing blocking of other operations on the server. We moved the client calculation outside the merge transaction and reduced concurrency problems.
Locking
We’ve run into an issue with database locking. The biggest effect has been on the pending change table when doing things like really large checkins. In these scenarios we need to take write locks for every pending change being affected. SQL’s row level locking does a great job. However, to limit the amount of memory a transaction can use for locks, at a point (about 5,000 row locks) SQL stops locking each individual row and “escalates” to a table lock. This means that the transaction has every row in the table locked. While, for the vast majority of applications, this is not a consideration (either tables aren’t that big, not that many rows need to be locked or concurrency is low) – it is an issue for our application. It means that for the duration of a checkin transaction of over 5,000 items no one else can pend any changes (checkout) because the table is locked. For checkins of small numbers of 1,000’s of items (5,000, 10,000, 20,000) it’s not too noticeable because the checkin is fast enough. However checkins of 100,000’s of files can take many minutes and everyone screams when they can’t checkout for minutes at a time.
The pending change table is not the only place we can hit this. We can also hit it on the LocalVersion table, the Version table, etc but it tends to be the one we’ve hit the most. The problem, however, is general and we are looking for a general solution.
We have worked with the SQL team to understand all of our options. The first option we are trying is to disable row locks on the tables/indexes where this affects us. This will cause SQL to use page locks instead. Based on the way our tables and indexes are clustered, we don’t believe this lower granularity of locking is going to be a problem for us. Another approach we will try, either together with the first or separately, is to disable lock escalation, causing SQL to continue to take finer grained locks no matter how many are needed (never escalating to table locks). Of course, this means that the server can use more memory for locks but we are planning for that. We’re in the process of upgrading to our final production hardware which is a 64 bit server with 32GB of RAM. We won’t know for sure that this fully addresses the problem until we’ve finished testing it.
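To make those first two options concrete, here is a rough T-SQL sketch. The object names (tbl_PendingChange, PK_tbl_PendingChange) are hypothetical stand-ins, not the actual TFS schema, and the trace-flag approach is the SQL Server 2005-era way to disable escalation server-wide.

```sql
-- Option 1: force page locks instead of row locks on a hot index, so
-- large transactions take far fewer locks and never hit the ~5,000-lock
-- escalation threshold. (Hypothetical object names.)
ALTER INDEX PK_tbl_PendingChange ON tbl_PendingChange
    SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = ON);

-- Option 2: disable lock escalation entirely. On SQL Server 2005 this is
-- server-wide via trace flags: 1211 disables escalation outright, while
-- 1224 disables only the lock-count-based trigger.
DBCC TRACEON (1211, -1);

-- (On SQL Server 2008 and later, the same effect is available per table:
--  ALTER TABLE tbl_PendingChange SET (LOCK_ESCALATION = DISABLE);)
```

Either change keeps checkouts flowing during a huge checkin at the cost of more lock memory, which is why the 64-bit, 32GB hardware mentioned above matters.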
In the meantime we’ve chunked most of the really large operations up (as I described above) and moved them more to off hours, and this is not creating a problem for us at the moment.
Read Committed Snapshot Isolation (RCSI)
RCSI is a really cool new feature in SQL server that allows readers to maintain read committed isolation semantics without taking read locks. It does this by proactively making copies of data that is changed and allowing readers to access these “old versions” of the rows when needed. RCSI is a database level setting so when enabled, it applies to all tables in the database. There are some tables (most notably the LocalVersion table) that benefit substantially from RCSI. RCSI gives us substantially better concurrency and performance.
However, we have discovered a side effect. It’s not huge but it aggravates the locking issues a bit. RCSI is pretty smart about how it works and it actually only copies the changed portions of a row to minimize the amount of data copied. However, some of our really large operations delete large numbers of rows from some tables. For example, in the 600,000 file checkin case, we need to delete 600,000 pending change rows when the checkin is complete. RCSI has to make a copy of the “changed” data but because the operation is a delete, all of the data in the row “changes”. This means that RCSI has to copy all 600,000 rows. We have found that under load this alone can take several minutes and is sometimes the single most expensive part of an operation. When we disable RCSI, the same deletes complete in less than half the time.
We are investigating a variety of options. The first is to change the pattern so that rather than deleting the rows in the transaction, we instead change the value of a non-indexed column to indicate that the row is “deleted”. We could then delete those rows in a separate transaction with an isolation level below read committed (read uncommitted) and thereby avoid the row copying that happens with RCSI. If we’re not happy with that, our alternative is to change RCSI from “on” for the entire database to “enabled” and then go through all of the sprocs and specifically state which transactions need that semantic and which don’t. This would be a much more impactful change requiring much more testing.
AT memory
When we started trying the huge operations (like checkins of 600,000 files), we saw a variety of out of memory problems on the application tier server. After investigation, we discovered that some of these operations were requiring up to 1.5GB of memory on the AT to hold the results before sending them back to the client. Because our AT is a 32 bit application we started running out of virtual memory and the web service would recycle. Recall, a normal 32-bit process has only 2GB of virtual address space. Although there is a configuration option to increase this to 3GB, it is generally not recommended for IIS worker processes.
The biggest occurrence of this was operations that return what we call “GetOperations”. This is data that tells the client what needs to be updated. For example, if I call the “Get” web service, it returns an array of “GetOperations” to tell the client what to do to bring itself up to date. This also happens with delete, rename, merge and others. When doing these operations on 100,000’s of files the AT would run out of memory. We also saw the Warehouse hit this problem when it tried to process code churn information for a single checkin of 100,000’s of files for the same reason.
We’ve taken a variety of approaches to help with this problem. First, the single biggest portion of a GetOperation is the “download ticket” that allows you to download the file. Some of the operations getting these large numbers of GetOperations had no intention of downloading the files, so we modified them to request that the tickets not be generated. Some operations were changed to request the download tickets in chunks after the initial list of GetOperations is returned. We also made some memory consumption optimizations to reduce how much memory is used during processing.
Ultimately, there will always be some level at which the AT will run out of memory. We can trim down and optimize so maybe it will take a single operation of 10,000,000 files instead of 500,000 but there will always be a limit. Our future approach will be to make the AT work as a 64-bit process so that virtual memory will no longer be a meaningful limitation.
Warehouse
The warehouse too has had issues as a result of the flood of data. I mentioned the problem with AT memory above. In addition we have had an issue with the warehouse consuming too many resources (CPU and RAM) on the live server. With all of that data, it got to the point that the hourly cube processing could consume high CPU load for as much as 20 minutes. We are investigating the causes for this high load with the Analysis Services team. As a short term fix we made a change to our server topology to move the warehouse off of the live server and onto a separate warehouse server. In general, the large scale data warehouse best practice is to put your warehouse on different hardware than your operational server. Unfortunately, this is not a supported production configuration in TFS V1 so I can not recommend it. Doing this WILL create problems with the serviceability of a TFS installation. We are investigating making this topology supported in the future.
When looking hard at the warehouse performance in light of all of the data we were pumping in, we noticed that there was quite a lot of network traffic between AS and the SQL server (I don’t remember the details). As a result we updated all of the server-to-server network connections to Gigabit Ethernet.
We hit another problem (that required a sproc change) when we rapidly (with a tool) added 2,000 to 3,000 Areas. The work item tracking system was unable to consume this much change so rapidly and stopped working until we could fix it. We now have a patch available that can remedy the issue. If anyone reading this hits the issue, Customer Support should be able to help you.
Summary
As you can see we’ve hit a wide range of issues. I expect that as we continue the rollout and increase the data size substantially we’ll uncover new problems. However, the system is already holding up under a stunningly large amount of data. We’ll continue to fix issues as we find them and will make sure to deliver those fixes in the commercially available product. Many of the fixes we’ve done so far actually help performance even when the data sizes are not nearly so big. It’s just that the fixes go from being “nice to have” with smaller data set sizes to “necessary” at REALLY large data set sizes.
We set out to build TFS as a product that could scale from small teams to the largest enterprises. I’m immensely proud of what we have accomplished with the product so far. I really do hope this information doesn’t scare people. As I’ve said, every system has limits and will run into problems at some level of scale or load – we’re just being up front about what ours are. Those of you with hundreds of thousands or even small numbers of millions of files can use TFS V1 just fine with no additional fixes or patches and it will serve you well. As you grow, we’ll be ready for you. We won’t stop until TFS has more headroom than any team development system in existence.
Thanks for listening – if you’ve actually read this far I’m impressed
Brian
Cool!
I really like what’s been explained in your post. It shows me a real case study, especially when you said that MS is currently using it internally for the developer division. Is it almost all parts of the division?
I absolutely agree with you that the next version of TFS should provide an option to move data warehouse server (the analysis service server, furthermore the reporting service sever as well) to servers other than the transactional database server for maximum scalability.
Also, I think TFS should support installation on 64-bit machine to scale well for the next version, not just the SQL Server part, which is currently the only supported scenario.
I think it’s really a good work for a V1 product.
Thanks and keep up the good work!
It’s great to see you guys following through on pushing TFS into DevDiv. It’s this kind of "extreme" dogfooding that reassures me that our paltry sized (in comparison) development teams won’t hit these issues because you will have found & fixed them first. I’m looking forward to SP1. BTW is there any chance that we will get a "move team project" utility to move a project from one TF server to another in SP1?
Brian,
Thanks for sharing – I have been part of a TFS pilot program here at Intel and am trying to evangelize its use and having this information is very valuable to back up my efforts. Could you also share the queries that you are running to get the statistics you listed? I would like to do similar things for my system.
Thanks for the comments so far. I’m going to roll up some responses to the first 3 comments in one.
No, we’re not nearly to all parts of the division yet. I’d guess we’re in the ballpark of a 3rd of the way – maybe a little less on file count.
No, I’m afraid "move team project" is much more work than we can do in a service pack. However, we are already working hard on it for our next release.
I’ll post some of the queries I use in my next blog post. Yesterday I spent a couple of hours tuning them because it was starting to take quite a few minutes to gather all of the statistics. I’ll try to get that out by the end of next week.
Thanks again and keep the comments and questions coming…
Brian
Brian Harry gives an excellent post on Deploying TFS in the Developer Division at Microsoft.
James…
DevDiv is short for Developer Division, the division I belong to…
Great article, Brian! How’s that next blog post coming – you know, the one with the queries you use to get TFS statistics?
Sorry about that, I just posted them
TFS Server Error on Branching (about 5GB of data in main branch)…
SQL Error..>See below…
————
String or binary data would be truncated.
MRPTEST2.TfsVersionControl..prc_PendBranch: Database Update Failure – Error 8152 executing INSERT statement for tbl_Lock
The statement has been terminated. (type SqlException)
————
An error report has been generated & sent off etc.
Spewing! Just at the last part of our evaluation process!!!
Is there anywhere I can get patches for this bug by any chance?
The Branch operation worked on the main branch in the past, and still works for small parts, but the whole of the trunk can no longer be branched!
TF53010: An unexpected condition has occurred in a Team Foundation component. The information contained here should be made available to your site administrative staff.
Technical Information (for the administrative staff):
Date (UTC): 15/06/2006 3:07:13 AM
Machine: MRPTEST2
Application Domain: /LM/W3SVC/3/Root/VersionControl-3-127948074529668170
Assembly: Microsoft.TeamFoundation.Common, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v2.0.50727
Process Details:
Process Name: w3wp
Process Id: 2672
Thread Id: 3992
Account name: NSWPSn1115635
Detailed Message: TF14105: An exception occurred in the Team Foundation Source Control System.
Web Request Details
Url: [method: POST]
User Agent: Team Foundation (devenv.exe, 8.0.50727.147)
Headers: Content-Length=585&Content-Type=application%2fsoap%2bxml%3b+charset%3dutf-8&Accept-Encoding=gzip&Accept-Language=en-AU&Expect=100-continue&Host=mrptest2%3a8080&User-Agent=Team+Foundation+(devenv.exe%2c+8.0.50727.147)&X-TFS-Version=1.0.0.0&X-VersionControl-Instance=f755ccec-a571-4ddd-92bc-b5cbdb71e9b8
Path: /VersionControl/v1.0/repository.asmx
Local Request: True
Host Address: 10.12.112.60
User: NSWPSn1115635 [authentication type: NTLM]
Exception Message: String or binary data would be truncated.
MRPTEST2.TfsVersionControl..prc_PendBranch: Database Update Failure – Error 8152 executing INSERT statement for tbl_Lock
The statement has been terminated. (type SqlException)
SQL Exception Class: 16
SQL Exception Number: 8152
SQL Exception Procedure: prc_PendBranch
SQL Exception Line Number: 570
SQL Exception Server: MRPTEST2
SQL Exception State: 13
SQL Error(s):
SQL Error[1]: System.Data.SqlClient.SqlError: MRPTEST2.TfsVersionControl..prc_PendBranch: Database Update Failure – Error 8152 executing INSERT statement for tbl_Lock
Class: 16
Number: 500004
Server: MRPTEST2
Source: .Net SqlClient Data Provider
State: 1
Procedure: prc_PendBranch
Line Number: 597
SQL Error[2]: System.Data.SqlClient.SqlError: The statement has been terminated.
Class: 0
Number: 3621
Server: MRPTEST2
Source: .Net SqlClient Data Provider
State: 0
Procedure: prc_PendBranch
Line Number: 570
Exception Data Dictionary:
HelpLink.ProdName = Microsoft SQL Server
HelpLink.ProdVer = 09.00.2047
HelpLink.EvtSrc = MSSQLServer
HelpLink.EvtID = 8152
HelpLink.BaseHelpUrl =
HelpLink.LinkId = 20476
Exception Stack Trace: at Microsoft.TeamFoundation.Server.SqlResourceComponent.execute(ExecuteType executeType, CommandBehavior behavior)
at Microsoft.TeamFoundation.VersionControl.Server.VersionControlSqlResourceComponent.execute(ExecuteType executeType, CommandBehavior behavior)
at Microsoft.TeamFoundation.Server.SqlResourceComponent.ExecuteReader()
at Microsoft.TeamFoundation.VersionControl.Server.VersionedItemComponent.PendBranch(Workspace workspace, String sourceServerItem, String targetServerItem, List`1 dontBranch, IList gets, IList failures)
at Microsoft.TeamFoundation.VersionControl.Server.ChangeRequest.PendChange(ExpandedChange expandedChange, VersionedItemComponent db, Workspace workspace, IPrincipal userPrincipal, Set`1 attempts, IList successes, IList warnings, IList failures)
at Microsoft.TeamFoundation.VersionControl.Server.ChangeRequest.PendChanges(IPrincipal userPrincipal, Workspace workspace, ChangeRequest[] requests, ArrayList& failures)
at Microsoft.TeamFoundation.VersionControl.Server.Repository.PendChanges(String workspaceName, String ownerName, ChangeRequest[] changes, ArrayList& failures)
For more information, see Help and Support Center at.
I just sent it to the dev team and I’ll let you know what I hear.
Here’s what I got back:
It looks like the most likely cause of this is one of the items in the target of the branch exceeds 260 characters. There is an error check which checks for it – but due to a faulty join, it requires the target of the branch to be mapped in the working folders. Our assumption is that the target is not mapped – skipping the error check and failing when it actually inserts into tbl_Lock. If the customer maps the target of the branch – it will give them the item which is too long. This has been fixed in SP1.
So it sounds like fundamentally the problem is that you are trying to branch a tree that is resulting in a path that is greater than 260 characters. You need to branch it higher in the tree and/or use a shorter foldername. TFS can only store paths up to 260 characters long.
Let me know if this doesn’t seem like the problem.
Brian
Interesting.
Can you tell us more about the size/physical configuration of the servers you used?
Some info about :
– number of servers
– CPUs
– RAM
– HDs
Thanx,
Messaoud
I’m very proud of Visual Studio Team System and what Microsoft have done here – I’ve blogged about it
In case you have become bored due to us not releasing anything since Saturday ( Visual Studio 2005 SP1
Configuration and Management of Team Foundation Server
Replying to an old post, but "TFS can only store paths up to 260 characters long" – that’s kind of rubbish, isn’t it?
I don’t particularly think so. There’s no doubt it’s a limitation and it will affect some people but not very many. The reason for this limit is that the Win32 API only supports 259 character paths using the normal syntax. To support paths longer than 259 characters, they have to be prefixed with \\?\. It’s not that it’s amazingly hard to do but there is some work in it and it is relatively poorly understood. The result is that very few apps support more than 260 character paths.
Let’s think a bit about what a 260 character path means. With an 80 character wide console window, it wraps around the screen more than 3 times. Not particularly legible.
That said, it’s on our radar to get to removing the limitation because there are sometimes legitimate reasons for such long paths.
Brian
Longer paths (260+) are usually a pain for me since there are many times that I reach that point. And it is not because I create a very deep hierarchy; it is because MS products do it. Two examples are the datasource files that are inserted in web references (which concatenate the whole path of the namespaces in the filename), or the Visual Studio Database project, which also creates a really deep file hierarchy…
I appreciate the feedback.
Brian
A while back, we started using the Nestable jQuery Plugin for H2O. It provides interactive hierarchical list functionality – or the ability to sort and nest items.
Diagram from Nestable jQuery Plugin representing interactive hierarchical sort and list functionality.
I touched on H2O's data model in this post, but it essentially mimics the diagram above; a user can build sortable and nestable lists. Nesting is visible at up to 4 levels. Each list is accessible and editable as its own resource, owned by a single user. The plugin is ideal for working with the data model; however, I needed a bit of customization that I'll describe in this post.
Limiting Nestability to Specific List Items
The feature I was asked to develop was to limit nesting to items owned by the current authorized (logged in) user. Users can add items to their list that are owned by other users, but they cannot modify the list elements for that list item. In visual form, it might look something like the diagram below, where green represents the items owned by the user, which allow modified nesting, and red represents items that are not owned by the user, which cannot be modified. In the diagram below, I would not be able to add to or reorder the contents of Item 5 (including Items 6 - 8), and I would not be able to add any nested elements to Item 10. I can, however, reorder elements inside Item 2, which means e.g. I can move Item 5 above Item 3.
Example of nesting where nesting is prohibited among some items (red), but allowed under others (green).
There are a couple of tricky bits to note in developing this functionality:
- The plugin doesn't support this type of functionality, nor is it currently maintained, so there are absolutely no expectations of this being an included feature.
- These pages are fully cached for performance optimization, so there is no per-user logic that can be run to generate modified HTML. The solution here was implemented using JavaScript and CSS.
Background Notes on the Plugin
There are a couple of background notes on the plugin before I go into the implemented solution:
- The plugin uses <ol> tags to represent lists. Only items in <ol> elements are nestable and sortable.
- The plugin recognizes .dd-handle as the draggable handle on a list item. If an item doesn't have a .dd-handle element, no part of it can be dragged.
- The plugin creates a <div> with a class of dd-placeholder to represent the placeholder where an item is about to be dropped. The default appearance for this is a white box with dashed outline.
- The plugin has an on change event which is triggered whenever any item is dropped in the list or any part of the list is reordered.
Step 1: JavaScript Update for Limiting Nestability
After the content loads, as well as after additional list items are added, a method called set_nestability is run to modify the HTML of the content, represented by the pseudocode below:
set_nestability: function() {
  // if user is not logged in
  //   nestability is never enabled
  // if user is logged in and user is not a superadmin (superadmins can edit all)
  //   loop through each list item
  //     if the list item data('user_id') != $.cookie('user_id')
  //       remove dd-handle class from all list item children .dd-handle elements
  //       replace all <ol> tags with <ul> tags inside that list item
}
The simple bit of pseudocode does two things: It removes the .dd-handle class for elements that can't be reordered, and it replaces <ol> tags with <ul> tags to enable CSS. The only thing to take note of here is that in theory, a "hacker" can change their cookie to enable nestability of certain items, but there is additional server-side logic that would prohibit an update.
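The ownership test at the core of set_nestability can be isolated as a small pure function. The names below (isNestable, the user_id cookie and data attribute) mirror the pseudocode but are illustrative, not the plugin's API:

```javascript
// Sketch of the ownership check used when deciding whether a list item
// stays nestable. itemUserId comes from the item's data('user_id');
// currentUserId comes from the user_id cookie; superadmins can edit all.
function isNestable(itemUserId, currentUserId, isSuperadmin) {
  if (currentUserId == null) return false; // not logged in: never nestable
  if (isSuperadmin) return true;           // superadmins bypass the check
  // Cookie values are strings while data() values may be numbers,
  // so normalize both sides before comparing.
  return String(itemUserId) === String(currentUserId);
}
```

set_nestability would call this per item and, on a false result, strip the .dd-handle class and swap the item's <ol> children to <ul>.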
Step 2: CSS Updates
ul .dd-placeholder { display: none; }
Next up, the simple CSS change above was made. This results in the placeholder div being hidden in any non-editable list. I made several small CSS modifications so that the ul and ol elements would look the same otherwise.
Step 3: JavaScript On Change Updates
Finally, I modified the on change event:
$('div#list').on('change', function(el) {
  // if item is dropped
  //   if dropped item is not inside <ol> tag, return (do nothing)
  //   else continue with dropped logic
  // else if position is changed
  //   trigger position change logic
});
The on change event does nothing when the dropped item is not inside an editable list. Otherwise, the dropped logic continues as the user has permissions to edit the list.
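That guard can likewise be factored out as a pure predicate (shouldProcessDrop is an illustrative name; the real handler inspects the dropped item's ancestor element):

```javascript
// Sketch of the guard at the top of the on-change handler: a drop only
// proceeds when the drop target sits inside an editable list. After the
// set_nestability pass, editable lists are <ol> and locked lists are <ul>.
function shouldProcessDrop(parentTagName) {
  return String(parentTagName).toUpperCase() === 'OL';
}
```

With this split, the handler stays a thin wrapper: look up the dropped item's parent list tag, bail out unless the predicate passes, then run the existing dropped logic.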
Conclusion
The functionality described here has worked well. What may become a bit trickier is when advanced rights and roles will allow non-owners to edit specific content, which I haven't gotten to yet. I haven't found additional resources that offer sortable and nestable functionality in jQuery, but it'd be great to see a new well-supported plugin in the future.
Frontity Tutorial: The React framework for WordPress
In this tutorial, we'll learn about Frontity, The React framework for WordPress.
WordPress is the most popular content management system, built on top of PHP, while React is the most popular front-end library for building user interfaces. Combining these two tools will allow you to build amazing modern apps.
You can also migrate your WordPress apps to use a modern React front-end while keeping WordPress as the content management system, modernizing your apps with the latest web technologies.
Use Frontity for Building WordPress/React Apps
Let's learn about Frontity, a tool that allows you to create websites using WordPress and React in the easiest way.
According to the official website:
Frontity is the easiest way to create lightning fast websites using WordPress and React. Open source and free to use.
How Frontity Works
When using WordPress and React, WordPress works as a headless CMS – just for creating and managing your content – thanks to the WordPress REST API, which enables you to retrieve your content from the JavaScript/React interface.
Frontity apps built with React serve your content and run separately on a Node.js server.
Frontity connects seamlessly with WordPress so you can focus on building your website or blog. You don't need to deal with complex configuration.
Using Frontity by Example
Let's now see how to use Frontity by example. First, as a prerequisite, you need to have Node.js installed on your local development machine.
You can run Frontity using the npx command as follows:
$ npx frontity create wpreactapp
Wait for the command to install the dependencies and generate your app then run the following commands to start a local development server:
$ cd wpreactapp
$ npx frontity dev
Thanks to Frontity, you’ll be able to connect your React app to the WordPress REST API, style it, and deploy your app in no time, with bundling and server-side rendering for performance and SEO!
Configuring your Frontity App
You can configure your Frontity app in the frontity.settings.js file.
For example, you can simply set the state.source.api attribute to the URL of your WordPress REST API to connect your CMS content and get all your posts.
You can configure routing, navigation, packages, and head tags, etc. You can find more options in the docs.
Connecting WordPress REST API to your Frontity App
You can easily connect your WordPress REST API URL to Frontity, which works with both WordPress.org and WordPress.com websites.
Simply open frontity.settings.js and add your WordPress REST endpoint to the state.source.api attribute as follows:
// frontity.settings.js
export const settings = {
  packages: [
    {
      name: "@frontity/wp-source",
      state: {
        source: {
          api: ""
        },
      },
    },
  ],
};
Learn more about connecting WordPress.
Styling your Frontity Theme with CSS-in-JS
After connecting your WordPress REST API, you'll want to customize your UI using CSS-in-JS, which is a popular approach among React developers.
These are the available options for styling your app.
Styling with Styled
import { styled } from "frontity";

const StyledDiv = styled.div`
  text-align: center;
  background: white;
`;
The CSS prop
import { css } from "frontity";

const Component = () => (
  <div css={css`background: red`}>
    React with WordPress App
  </div>
);
React’s style prop
const Page = () => (
  <div style={…}>
    React with WordPress App
  </div>
);
Rather than using React’s style prop, you’re probably better off using the first two methods listed above.
Using <Global>
You can also add global styles in the Global component as follows:
import { Global, css } from "frontity";

const Page = () => (
  <>
    <Global
      styles={css`
        body {
          margin: 0;
          color: red;
        }
      `}
    />
    <OtherContent />
  </>
);
Deploying your Frontity App
After connecting your web app to WordPress and styling the React UI with CSS, you'll be ready to deploy it to any Node.js or serverless host.
Head back to your command-line interface and run the following commands:
$ npx frontity build
You can host the content of the build folder on any Node.js hosting.
See more information in the docs.
Check out the official guide on how to deploy Frontity with Vercel.
Frontity recommends that you use two domains or subdomains: one for the WordPress backend, and a main domain for the Frontity/React frontend.
You can also serve a production ready version of your app locally using the following command:
$ npx frontity serve
Conclusion
In this tutorial, we've learned about Frontity, a tool for building modern web apps with WordPress REST API and a React front-end.
By using both the most popular CMS (WordPress) and the most popular front-end library (React), you'll be able to take the benefits of both worlds: WordPress with its easy-to-use content management features, and React for modern user interfaces.
Update
PR #2021 is merged. Please follow OwlCarousel2.
This plugin is a fork of the original OwlCarousel2 plugin, with a fix that keeps the events bound to the slides when cloned slides are created with `loop: true`.

Until PR #2021 is merged, feel free to use this repo in your projects.
Owl Carousel 2
Touch enabled jQuery plugin that lets you create a beautiful, responsive carousel slider.
Quick start
Or download the latest release.
Load

Webpack
Load the required stylesheet and JS:
import 'owl.carousel2/dist/assets/owl.carousel.css';
import $ from 'jquery';
import 'imports?jQuery=jquery!owl.carousel';
Static HTML
Put the required stylesheet at the top of your markup:
<link rel="stylesheet" href="/node_modules/owl.carousel2/dist/assets/owl.carousel.min.css">

And put the required script at the bottom of your markup:

<script src="/node_modules/owl.carousel2/dist/owl.carousel.min.js"></script>
Usage

$(document).ready(function(){
  $(".owl-carousel").owlCarousel();
});
Building

Contributing
Please read CONTRIBUTING.md.
License
The code and the documentation are released under the MIT License.
Article 1767. By the contract of partnership two or more persons bind themselves to contribute money, property, or industry to a common fund, with the intention of dividing the profits among themselves. Two or more persons may also form a partnership for the exercise of a profession. (1665a)
- NOMINATE: There is a name given to the contract by the law.
- Contract of Partnership: CONSENSUAL (perfected by the mere consent of the parties).
- PERSONS: Includes not only natural persons but also JURIDICAL persons. A corporation may NOT be a partner, but it may engage in JOINT VENTURES.
- BIND THEMSELVES: The partners must be capable and competent, meaning the following are excluded:
1. Minors
2. Emancipated minors
3. Those under civil interdiction – an accessory penalty for those convicted of crimes
4. Insane persons
5. Incompetent persons (see oblicon notes)
HOWEVER, if the person is only a SUSPECT, he may still bind himself into a contract, since there is no final verdict yet.
- TO CONTRIBUTE MONEY, PROPERTY OR INDUSTRY: Makes the contract onerous, since this is MUTUAL and ALL must give one of the above.
Examples:
1. A and B create a partnership with a promise of contributing P10,000 each in cash. A gave his share while B gave a check worth P10,000. Is the issuance of the check a contribution of money? No, unless the check is encashed.
2. Consider the same facts, but with B contributing P10,000 in equivalent dollars. No – the contribution must be made in legal tender, in this case Philippine pesos.
Property contributed may be movable, immovable, or intangible property (e.g., equipment, land, patents). If a partner did not contribute money or property, then industry was contributed.
Note: Contributions may differ for each of the partners.
- TO A COMMON FUND TO DIVIDE PROFITS AMONG EACH OTHER: The primary objective of a partnership is to make profits. Sharing of profits need not be equal. Sharing ratios are determined by the partners’ agreement; if there is no agreement, the ratios are based on the ratio of the partners’ contributions.
- Sharing ratios for losses are the same as the sharing ratios for profits. The industrial partner shall NOT share in losses. This exemption holds only as among the partners, not as to 3rd persons, without prejudice to his right to reimbursement (Article 1816).
- CONSENT (DELECTUS PERSONAE): you cannot join a partnership without the consent of ALL partners. Why? Because the existing partnership must be dissolved before you are admitted, and a new partnership is formed in its place.
Article 1768
The partnership has a juridical personality separate and distinct from that of each of the partners, even in case of failure to comply with the requirements of article 1772, first paragraph. (n)

- Example: If A and B form the partnership X & Co., the property of X & Co. is not A and B's property, and likewise A and B's property is not X & Co.'s. Since X & Co. is a juridical entity, it can acquire property in its own name; the partners are merely its agents. Thus the obligations of X & Co. are not those of A and B. X & Co. can file suit against A and B and be sued by them; likewise, if a third party sues X & Co., A and B are not themselves affected. The partnership is a juridical entity even without compliance with Article 1772. If X & Co. is exempted from certain obligations, it does not follow that A and B are included in the exemption.
- Consequences of being a juridical person:
  - It can sue and be sued
  - It can acquire any kind of property
  - Insolvency of the partnership does not mean that the partners themselves are insolvent

Article 1769
In determining whether a partnership exists, these rules shall apply:
(1) Except as provided by article 1825, persons who are not partners as to each other are not partners as to third persons.
(2) Co-ownership or co-possession does not of itself establish a partnership, whether such co-owners or co-possessors do or do not share any profits made by the use of the property.
(3) The sharing of gross returns does not of itself establish a partnership, whether or not the persons sharing them have a joint or common right or interest in any property from which the returns are derived.
(4) The receipt by a person of a share of the profits of a business is prima facie evidence that he is a partner in the business, but no such inference shall be drawn if such profits were received in payment:
  (a) As a debt by installments or otherwise;
  (b) As wages of an employee or rent to a landlord;
  (c) As an annuity to a widow or representative of a deceased partner;
  (d) As interest on a loan, though the amounts of payment vary with the profits of the business;
  (e) As the consideration for the sale of a goodwill of a business or other property by installments or otherwise. (n)

- Provides the rules for determining whether a partnership exists.
- Example for (1): If A and B say PUBLICLY that they are not partners, but tell C that they are, and C enters into a contract with them on that basis, then under Article 1825 A and B are PARTNERS BY ESTOPPEL.
- Example for (2): If A and B inherited land from their parents and subsequently leased it out for P50,000/month, then they share profits; but are they partners? No, they are merely co-owners.
The P50,000 profit is merely incidental and besides, it was not derived from BUSINESS OPERATIONS.
  If they bought the land for P1,000,000 each to build a house, but instead opted to sell it for P2,500,000, they have a profit of P500,000. Are they partners? No, because the profit is merely incidental to the sale and not from business operations of A and B.
  What if the land was instead used to build an apartment that is rented out? Yes, because A and B share profits from RENTING, which can be considered ordinary business operations.
- Example for (3): A person owns a big tract of land for planting rice and agrees with a farmer that they will divide the harvest. Is the farmer a partner of the landowner? No, for the following reasons:
  (1) The farmer made no contribution
  (2) The farmer has no say in the disposition of the land
  (3) The farmer has no say in management
  (4) In case of loss, the owner carries the entire burden and the farmer need not pay anything
- Example for (4): A partnership borrowed P50,000, and instead of agreeing on a fixed repayment, the creditor will receive 1% of the partnership's annual gross profit. Is the creditor a partner? No, because the receipt of a share in income is in payment of an existing debt.
- To determine whether a person is a partner, look at:
  (1) Required contribution
  (2) Say in management
  (3) Share in losses

Article 1770
A partnership must have a lawful object or purpose, and must be established for the common interest or benefit of the partners. When an unlawful partnership is dissolved by a judicial decree, the profits shall be confiscated in favor of the State, without prejudice to the provisions of the Penal Code governing the confiscation of the instruments and effects of a crime. (1666a)

- The partnership must have a lawful object or purpose:
  - Lawful object refers to the CAPITAL
  - Lawful purpose refers to the BUSINESS itself
- There must be a common interest and benefit.
- Unlawfulness of the partnership will cause it to be dissolved, and the profits shall be confiscated.
- Example of unlawful purpose, GAMBLING: A and B are partners; A contributed P100,000 in cash and B contributed gambling paraphernalia. They were raided and the gambling paraphernalia was confiscated. Can the P100,000 also be confiscated? No, because the P100,000 was not connected with the crime in any way.
  The State is therefore required to return this amount to A.
- Legal effects of a judicial dissolution:
  - The partnership is considered void from the beginning
  - The profits and the instruments of the crime are confiscated
  - The only returnable items are those never related to or connected with the crime committed

Article 1771
A partnership may be constituted in any form, except where immovable property or real rights are contributed thereto, in which case a public instrument shall be necessary. (1667a)
- Can a partnership be created orally? Yes. A partnership may be constituted in any form (Article 1771).
- Partnerships are not covered by the Statute of Frauds, since they are not required to be in writing (the contract of partnership can be in any form).
- If immovable property and/or real rights are contributed to the partnership, the contract must be in a public instrument (a notarized document).
- To bind 3rd persons, the transfer of OWNERSHIP of immovable property MUST BE REGISTERED with the REGISTRY OF PROPERTY of the province or city where the property is located.
- The article shows that a partnership can be perfected by MERE CONSENT.

Article 1772
Every contract of partnership having a capital of three thousand pesos or more, in money or property, shall appear in a public instrument, which must be recorded in the Office of the Securities and Exchange Commission. Failure to comply with the requirements of the preceding paragraph shall not affect the liability of the partnership and the members thereof to third persons. (n)

- If the partnership's capital is P3,000.00 or more (in any form), the contract must be in a public instrument recorded with the SEC. Note that the property referred to here is MOVABLE, since immovable property is covered by Article 1771.
- Failure to comply with the requirements of Article 1772 will not affect the liability of the partnership to 3rd persons. Isn't this inconsistent with Article 1358? No. Under Article 1358, contracts whose terms exceed P500.00 must be in writing, but merely for convenience, not for validity or enforceability. Note also that under Article 1768 the partnership is still valid and has a juridical personality. How do we reconcile this with Articles 1358 and 1357? Article 1358 is for purposes of convenience, not validity or enforceability, and Article 1357 gives the contracting parties the right to compel each other to put the contract into writing.
- Purposes of registration:
  (1) It is a condition for obtaining a license to engage in business or trade
  (2) 3rd persons want proof that the partnership exists, who the partners are, and what the capitalization is before they enter into contracts or engage in business
  (3) The government (BIR) requires it so that tax liabilities may not be avoided
- Failure to comply with the Article's requirements will not prevent the formation of the partnership.
- The Statute of Frauds applies only where the partners' agreement is not to be performed within one year from its making.
- Example: A and B promise to contribute P10,000.00 each to their partnership within one year of their agreement. A contributes early, but when the time comes for B to contribute his share, he refuses. Can A compel B to give his contribution? No. The agreement was purely ORAL and never put in writing, and one year has already passed since they agreed to its terms.
Article 1773
A contract of partnership is void, whenever immovable property is contributed thereto, if an inventory of said property is not made, signed by the parties, and attached to the public instrument. (1668a)

- Applies specifically where one or both of the parties contribute immovable property. The requirements are:
  (1) The contract must be in a public instrument
  (2) An inventory of the immovable property must be made, signed by BOTH parties and attached to the public instrument; otherwise the partnership is VOID
- Actual case applying Article 1773: A and B agree to form a partnership engaged in a fishpond business, where both partners contribute cash; the cash is later used to buy land that is converted into a fishpond. C comes along and points out that the partnership is void because no inventory of the land was made. Is the partnership really void? No. According to the Supreme Court, Article 1773 does not apply, since the land was BOUGHT with the CASH CONTRIBUTIONS.
- Suppose a partner contributes immovable property, no inventory is conducted, and the "partnership" enters into a contract with A. The partnership does not fulfill its obligation to A, and A sues the partnership. Was A right in suing the partnership? No, since the partnership was void from the beginning. A should instead file against the "partners" themselves, who will be sued as partners by estoppel under Article 1825.
- If A wishes to be in a partnership with B and promises to contribute land, but subsequently sells the same plot to C, who immediately registers the transfer, who owns the land? C owns the land, because A never registered the earlier transfer.
- Estafa: committed when the owner of a property sells the same property to two or more different persons.

Article 1774
Any immovable property or an interest therein may be acquired in the partnership name. Title so acquired can be conveyed only in the partnership name.
(n)

- Being a juridical entity, a partnership can acquire property and become its owner.

Article 1775
Associations and societies whose articles are kept secret among the members, and wherein any one of the members may contract in his own name with third persons, shall have no juridical personality, and shall be governed by the provisions relating to co-ownership. (1669)

- There is no juridical personality, since the members can contract with 3rd persons in their own names without binding the others.
- In a partnership, by contrast:
  (1) The partners are merely agents who cannot act alone
  (2) The Articles of Partnership are known to ALL partners AND to the GENERAL PUBLIC

Article 1776
As to its object, a partnership is either universal or particular. As regards the liability of the partners, a partnership may be general or limited. (1671a)

- Classifications of partnerships:
(1) As to object:
  (a) Universal partnership of all present property – defined in Article 1778
  (b) Universal partnership of all profits – defined in Article 1780
  (c) Particular partnership – defined in Article 1783
(2) As to liability:
  (a) General – general partners are liable PRO RATA and subsidiarily, sometimes solidarily, with their own property/assets if the partnership is insolvent (may include industrial partners)
  (b) Limited – limited partners are liable only up to the extent of their contributions
(3) As to duration:
  (a) At will – no particular undertaking; can be dissolved at any time
  (b) With a fixed term – may be dissolved only upon the end of its term, unless continued by the partners
(4) As to legality of existence:
  (a) De jure – complied with ALL requirements
  (b) De facto – failed to comply with ALL requirements
(5) As to representation to others:
  (a) Ordinary/real – actually exists
  (b) Ostensible/by estoppel – not an actual partnership; exists only by estoppel as to third persons
(6) As to publicity:
  (a) Secret – some partners are not known to the public
  (b) Open/notorious – all partners are known to the public
(7) As to purpose:
  (a) Commercial/trading – business transactions
  (b) Professional/non-trading – exercise of a profession

- Kinds of partners:
(1) Under the Civil Code:
  (a) Capitalist – contributes money/property
  (b) Industrial – contributes industry
  (c) General – liability extends to personal assets
  (d) Limited – liable up to his contribution only
  (e) Managing – manages the partnership
  (f) Liquidating – responsible during dissolution
  (g) By estoppel – not really a partner
  (h) Continuing – continues the business after dissolution
  (i) Surviving – remains after a partner's death
  (j) Sub-partner – contracts with a partner (Article 1804)
(2) Other classifications:
  (a) Ostensible – active, known to the public
  (b) Secret – active, unknown to the public
  (c) Silent – inactive, known to the public
  (d) Dormant – inactive, unknown to the public
  (e) Original – member at the time of organization
  (f) Incoming – about to become a
member
  (g) Retiring – about to withdraw

Article 1777
A universal partnership may refer to all the present property or to all the profits. (1672)

Article 1778
A partnership of all present property is that in which the partners contribute all the property which actually belongs to them to a common fund, with the intention of dividing the same among themselves, as well as all the profits which they may acquire therewith. (1673)

Article 1779
In a universal partnership of all present property, the property which belonged to each of the partners at the time of the constitution of the partnership becomes the common property of all the partners, as well as all the profits which they may acquire therewith. A stipulation for the common enjoyment of any other profits may also be made; but the property which the partners may acquire subsequently by inheritance, legacy, or donation cannot be included in such stipulation, except the fruits thereof.

- Why is the universal partnership of all present property not popular in the Philippines? Property owned at the time of contribution becomes common property of the partnership, but only the profits acquired through the contributed property become common property, unless there is a stipulation to the contrary.
- Example: A and B form a universal partnership of all present property and stipulate that property and profits acquired during business operations become common property even if not due to their contributions, and that inherited property becomes common property as well. A acquires land as part of his compensation package from AyalaLand, and B inherits land from his parents. Whose property becomes common property? Only A's land, because it was essentially PAYMENT, while B's was inherited. The article prohibits donations from becoming common property; only the fruits thereof can be stipulated as common.
- In a partnership, contributions must be determinate/certain, and the partners are akin to donors. Donations cannot comprehend future property, but future profits can be stipulated.

Article 1780
A universal partnership of profits comprises all that the partners may acquire by their industry or work during the existence of the partnership. Movable or immovable property which each of the partners may possess at the time of the celebration of the contract shall continue to pertain exclusively to each, only the usufruct passing to the partnership.

- Example: A and B form a universal partnership of all profits, and A wins P100,000.00 in the lotto. B claims 50%, citing the existence of their partnership and the fact that A used the partnership's money to buy the ticket. Can B really share in the lotto winnings? No, because the winnings came from CHANCE, not WORK.
- If the P100,000.00 instead came from A's work at DLSU, can B share in it? Yes, because it came from WORK.
- As long as it is PROFIT from work or industry, it becomes common property of the partners, UNLESS there is a stipulation in their agreement.
- If A and B form a universal partnership of all profits for a taxi-cab business and both contribute vehicles to serve as taxis, what they actually contribute is the USE, or the RIGHT TO USE, their vehicles. Upon dissolution the vehicles are returned to them, since there was never a transfer of ownership.
- Unique feature of the universal partnership of all profits: the partners retain title of ownership over the property they contribute.

Article 1781
Articles of universal partnership, entered into without specification of its nature, only constitute a universal partnership of profits. (1676)
- If the articles of universal partnership are doubtful or unclear, the presumption is that it is a universal partnership of all profits, because a universal partnership of all profits imposes fewer obligations and is less onerous: the partners retain ownership over the property they contribute.

Article 1782
Persons who are prohibited from giving each other any donation or advantage cannot enter into a universal partnership. (1677)

- A husband and wife cannot enter into a universal partnership. They are not allowed to donate to each other, and a universal partnership essentially requires the partners to donate to each other. They may, however, join a particular partnership.
- A partnership formed in violation of this article is null and void and acquires no legal personality.
- Illustrative case: A, B and C form a partnership to engage in the importation, marketing and operation of automatic phonographs, radios, television sets, amusement machines and their parts and accessories, with B and C as limited partners. Subsequently, A and B got married, and thereafter C sold his share to A and B for a nominal amount. Was the partnership dissolved by the marriage of A and B and C's sale of his share to them? No. The firm was not a universal partnership but a particular one.
- Pertinent legal provisions:
  (1) Article 87: Every donation or grant of gratuitous advantage, direct or indirect, between spouses during their marriage, valid or not, shall be void, except moderate gifts which the spouses may give each other on the occasion of any family rejoicing.
  (2) Article 739: The following donations shall be void:
    (a) Those made between persons who were guilty of adultery or concubinage at the time of the donation
    (b) Those made between persons found guilty of the same criminal offense, in consideration thereof
    (c) Those made to a public officer or his wife, descendants and ascendants, by reason of his office

Article 1783
A particular partnership has for its object determinate things, their use or fruits, or a specific undertaking, or the exercise of a profession or vocation. (1678)

- Defines what a particular partnership is.
- Particular partnerships are those that are neither a universal partnership of all present property nor a universal partnership of all profits.
  Examples: partnerships formed for the acquisition and sale of property, accounting firms, law firms, etc.
- Popular because it is easy to join.

Chapter 2 – Obligations of the Partners
Section 1 – Obligations of the Partners Among Themselves

- Relations created by a contract of partnership:
  (1) Relations among the partners themselves
  (2) Relations of the partners with the partnership
  (3) Relations of the partnership with third persons
  (4) Relations of the partners with third persons
Article 1784
A partnership begins from the moment of the execution of the contract, unless it is otherwise stipulated. (1679)

- A partnership is perfected by mere consent, once ALL the requisites are met, notwithstanding the fact that the partners have not yet given their contributions.
- Example: A and B agree to form a partnership to begin on December 1, upon the arrival of certain machinery needed by the business. Are A and B already in a partnership? As long as the agreement remains executory, A and B are NOT partners; there is no partnership yet.
- Partners may agree to form a partnership to take effect in the future.
- Example: A and B agree to form a partnership 1.5 years later, with contributions of P100,000.00 each. A contributes his share early, but when the time comes for B to contribute, he refuses and says he no longer wants to take part. Can A compel B to contribute his share? NO. The contract was only oral, and since it was not to be performed within one year from its making (1.5 years is greater than one year), it should have been put in writing to be enforceable.
- The Statute of Frauds does not usually apply to partnerships, but it does in particular cases such as the example above.
- If the contribution is immovable property, comply with Article 1773; otherwise the partnership will be void.

Article 1785
When a partnership for a fixed term or particular undertaking is continued after the termination of such term or particular undertaking without any express agreement, the rights and duties of the partners remain the same as they were at such termination, so far as is consistent with a partnership at will. A continuation of the business by the partners, or such of them as habitually acted therein during the term, without any settlement or liquidation of the partnership affairs, is prima facie evidence of a continuation of the partnership.

- Example: A and B form a partnership to last until December 30, 2011; A is the manager and they share profits 50-50. After December 30, 2011 they continue with their partnership. What happens? A and B retain their rights: A is still the manager and they still share profits 50-50.
- If there was an express agreement on the term of existence, then when the term expires the partnership is dissolved; if continued, it becomes a partnership at will.
- Continuation means there is NO settlement or liquidation. It is prima facie evidence, meaning it must be apparent at first glance.

Article 1786
Every partner is a debtor of the partnership for whatever he may have promised to contribute thereto. He shall also be bound for warranty in case of eviction with regard to specific and determinate things which he may have contributed to the partnership, in the same cases and in the same manner as the vendor is bound with respect to the vendee. He shall also be liable for the fruits thereof from the time they should have been delivered, without the need of any demand.

Article 1787
When the capital or a part thereof which a partner is bound to contribute consists of goods, their appraisal must be made in the manner prescribed in the contract of partnership, and in the absence of stipulation, it shall be made by experts chosen by the partners, and according to current prices, the subsequent changes thereof being for the account of the partnership. (n)

Article 1788
A partner who has undertaken to contribute a sum of money and fails to do so becomes a debtor for the interest and damages from the time he should have complied with his obligation. The same rule applies to any amount he may have taken from the partnership coffers, and his liability shall begin from the time he converted the amount to his own use.

- Suppose A, B and C are partners. A promises to contribute a RED CAR, B promises to contribute GOODS WORTH P50,000.00, and C promises to contribute P50,000.00 IN CASH in October 2011. In October 2011, none of them comply. What happens? A, B and C become debtors of the partnership.
- Suppose B and C contribute their parts but A does not. Can B and C ask for rescission or annulment of the contract? NO. If one of the partners fails to comply with his obligations, the others may demand specific performance with damages from the defaulting partner A.
- What are the obligations of A before October 2011?
  (1) To contribute what he promised
  (2) To answer for eviction if the partnership is deprived of his contribution
  (3) To take care of the contribution with the diligence of a good father of a family
- Suppose A leased the car out and gets it back by December 2011. A must deliver the car AND its fruits (the profits from the lease) to the partnership, because there was a delay.
- Suppose that after A contributes the car, a 3rd person, D, claims to be the real owner of the car and is able to prove it. A is liable for eviction, because the partnership is deprived of a specific thing. A is also liable for damages to BOTH the partnership and D.
- What about B? Can the partnership determine the value of the goods he contributed? Article 1787 states that the goods should be appraised in the manner prescribed in the contract of partnership; in the absence of stipulation, the appraisal shall be made by experts chosen by the partners.
- What if the goods appreciate/depreciate?
  The changes will be for the account of the partnership.
- What happens if C fails to comply with his obligation? C becomes liable for his contribution plus interest and damages from the date he was supposed to contribute. The same rule applies if a partner takes money from the partnership's funds without everyone's consent. He will not, however, be charged with theft or estafa, and his obligation will only be to
return the money he took, plus interest and damages from the time he took it.
- When will a partner be held criminally liable? Suppose the partners set aside P10,000.00 for payment to one of their creditors, and A takes this amount from the fund and is subsequently discovered. A can be charged with estafa, since he misappropriated money ALREADY SET ASIDE for a specific purpose.

Article 1789
An industrial partner cannot engage in business for himself, unless the partnership expressly permits him to do so; and if he should do so, the capitalist partners may either exclude him from the firm or avail themselves of the benefits which he may have obtained in violation of this provision, with a right to damages in either case.

- An industrial partner contributes his industry; the partnership has the EXCLUSIVE RIGHT to it. He is prohibited from engaging in business of ANY kind unless the partnership has expressly permitted him to do so.
- Example: A partnership operates an automobile repair shop. A is the industrial partner (chief mechanic) and works only until 5 PM every working day. Can he go home and work on the partnership's customers' autos, even if he tells the capitalist partners EVERY DAY before he leaves? The law requires EXPRESS permission; here, all A has is IMPLIED permission. The capitalist partners' remedy is therefore either: (only one)
  (1) Avail themselves of the benefits from A's "business", or
  (2) Exclude A from the partnership and demand damages
- Capitalist partners are prohibited from engaging in SIMILAR businesses only.
- Industrial partners have the same remedies as capitalist partners.

Article 1790
Unless there is a stipulation to the contrary, the partners shall contribute equal shares to the capital of the partnership. (n)

- The partners shall contribute to the capital of the partnership as per their agreement; if there was no agreement, they shall contribute equally.
- Example: A and B form a partnership and agree to contribute to the capital in the ratio of 60:40. How much should each partner contribute?
  They contribute in the ratio of 60:40, so if their combined partnership capital is P10,000.00, A contributed P6,000.00 and B contributed P4,000.00.
  If A and B did not say how much each should contribute, we assume equal proportions: with a combined capital of P10,000.00, each partner contributed P5,000.00.

Article 1791
If there is no agreement to the contrary, in case of an imminent loss of the business of the partnership, any partner who refuses to contribute an additional share to
the capital, except an industrial partner, to save the venture, shall be obliged to sell his interest to the other partners. (n)

- If there is an imminent loss in the partnership, a partner who refuses to contribute additional funds, IF HE IS FINANCIALLY CAPABLE of doing so, shall sell his share TO THE OTHER PARTNERS, unless he is an industrial partner.
- Imminent loss:
  - There is a need for the capitalist partners to contribute additional funds to save the partnership
  - The industrial partner need not do so, because he has already given 100% of his efforts
  - If a capitalist partner is WILLING but NOT FINANCIALLY CAPABLE, the article does NOT apply to him, because he is already insolvent
- Selling of interest:
  - Refusal to contribute additional funds to save the partnership means the partner no longer has any interest in it
  - He should not be allowed to reap the benefits the other partners have worked hard for, since he did nothing to help
  - He cannot complain of being removed from the partnership, because he will be paid what is due to him for his interest in the partnership
- Agreement that a partner need not contribute additional funds in case of loss:
  - That capitalist partner will not be required to contribute, since it was in their agreement in the first place. Note that a larger contribution to the partnership capital would mean a larger share in the profits, but this should be voluntary
- Things to consider:
  (1) There must be an IMMINENT LOSS
  (2) The partner who is unwilling to contribute must be SOLVENT/FINANCIALLY CAPABLE
  (3) There was no agreement exempting the partners from contributing additional funds in case of loss
- If the purpose of the additional contribution is simply to raise capital, this article does not apply.

Article 1792
If a partner authorized to manage collects a demandable sum, which was owed to him in his own name, from a person who owed the partnership another sum also demandable, the sum thus collected shall be applied to the two credits in proportion to their amounts, even though he may have given a receipt for his own credit only; but should he have given it for the account of the partnership credit, the amount shall be fully applied to the latter. The provisions of this article are understood to be without prejudice to the right granted to the other debtor by article 1252, but only if the personal credit of the partner should be more onerous to him. (1684)

- A and B are in a partnership where A is the managing partner. C owes A P5,000.00 and owes the partnership P10,000.00.
  The credit to A is due on September 1, while the partnership's is due on September 15; both debts are due and demandable. A collects only P3,000.00 from C and issues a receipt in his own name. Is the partnership entitled to share in the P3,000.00? Yes, in proportion to the respective debts: A gets P1,000.00 and the partnership gets P2,000.00.
- Supposing there was no mention of who the managing partner is, will the requisites of Article 1792 still be present?
  Yes. In the absence of information on the identity of the managing partner, the assumption is that ALL the partners are managing partners.
- If A instead issues the receipt in the name of the partnership, to whose credit will the P3,000.00 go? The entire P3,000.00 goes to the partnership.
- Supposing A's credit carries 18% interest while the partnership's carries only 10%, and C pays A the P3,000.00 saying it shall be applied to A's credit. Is the partnership still entitled to share in the P3,000.00? No; the debtor has the right to apply payment to whichever debt is more onerous to him.
- Things to remember: both conditions must be present for the Article to apply; otherwise, the entire amount goes to whoever collects payment from the debtor:
  (1) There are 2 debts, and both are due and demandable
  (2) The one collecting is the managing partner

Article 1793
A partner who has received, in whole or in part, his share of a partnership credit, when the other partners have not collected theirs, shall be obliged, if the debtor should thereafter become insolvent, to bring to the partnership capital what he received, even though he may have given receipt for his share only.

- In this case, there is only ONE debt, but the creditors are 2 or more partners.
- Example: A and B are partners, and C owes the partnership P10,000.00. B is the managing partner, but A collects his share in the P10,000.00, and C pays A P5,000.00, for which A issues a receipt in his own name. When B's turn to collect comes, C is already insolvent. What should A do? A shall return his P5,000.00 to the partnership and split it with B, because C has become insolvent.
- Take note that it does not matter who collects.
- If you get your share early and the other partners cannot get theirs because the debtor has become insolvent, you must return YOUR share to the partnership so that no one gets more than he should.

Article 1794
Every partner is responsible to the partnership for damages suffered by it through his fault, and he cannot compensate them with the profits and benefits which he may have earned for the partnership by his industry. However, the courts may equitably lessen this responsibility if through the partner's extraordinary efforts in other activities of the partnership, unusual profits have been realized. (1686a)

- Why compensation will not apply: in compensation, you must be both a debtor and a creditor at the same time.
  However, the partner here is only a DEBTOR for damages; he cannot compensate using the profits and benefits he earned for the partnership, because it IS HIS DUTY to earn them in the first place.
- The responsibility may be equitably mitigated by the courts if, through the extraordinary efforts of the partner, unusual profits are realized.
- Example: A partnership between A and B operates an autoshop. A customer brings his car in to be painted YELLOW, but A buys RED paint instead and the car is
painted RED. The partnership suffers damages of P30,000.00 due to the repainting. Can A offset this loss with the profits he earned for the partnership? No, because it is his obligation to bring in profits in the first place. The responsibility for the P30,000.00 may, however, be mitigated by the court if, through other activities, A brings about unusual or extraordinary profits; for example, he may be allowed to pay back just P15,000.00 instead.
- It follows that if the partner is guilty of fraud, he shall be liable for the damages caused.

Article 1795
The risk of specific and determinate things, which are not fungible, contributed to the partnership so that only their use and fruits may be for the common benefit, shall be borne by the partner who owns them. If the things contributed are fungible, or cannot be kept without deteriorating, or if they were contributed to be sold, the risk shall be borne by the partnership. In the absence of stipulation, the risk of things brought and appraised in the inventory shall also be borne by the partnership, and in such case the claim shall be limited to the value at which they were appraised. (1687)

- States the rules on who bears the risk of loss of contributed things:
  - If the contribution is determinate and non-fungible and only its use is contributed, the contributing partner bears the loss.
  - If fungible things are contributed, the partnership bears the risk.
  - The partnership also bears the risk for things brought and appraised in the inventory, the claim being limited to the value at which they were appraised.
Article 1796
The partnership shall be responsible to every partner for the amounts he may have disbursed on behalf of the partnership and for the corresponding interest, from the time the expenses are made; it shall also answer to each partner for the obligations he may have contracted in good faith in the interest of the partnership business, and for risks in consequence of its management. (1688a)

- States the obligations of the partnership to the partners.
- The partners are merely agents, so they are not personally liable except when they are at fault or have exceeded their express authority.
- Obligations of the partnership:
  (1) To reimburse any amount disbursed by a partner on behalf of the partnership.
    Example: The partnership borrows P10,000.00 from the bank for additional funds but cannot pay it back when due. A pays back the P10,000.00 from his personal funds. Should he be reimbursed by the partnership? Yes; the partnership must reimburse A the P10,000.00 PLUS legal interest from the date A disbursed it.
  (2) To answer for any obligation contracted in good faith.
    Example: The partnership needs office supplies, so B contracts for P10,000.00 worth of supplies. Who will pay the contract price of P10,000.00? The partnership, since the contract was made in good faith and B did not overstep his authority.
If it was stipulated that the partners cannot contract for more than P5,000.00 worth of supplies and B still contracts for P10,000.00, how much will the partnership pay? The partnership will only pay what was allowed, that is, P5,000.00, and B will pay the remaining balance, since B overstepped his authority.
(3) To answer for risks in management
Example: A partnership is engaged in selling goods. A customer, C, keeps asking for discounts and an argument ensues between C and the partner A. A gets injured and is brought to the hospital. Who shall shoulder the hospital bills? The partnership shall shoulder the hospital bills, as A was injured in the course of managing the business.
Article 1797 The losses and profits shall be distributed in conformity with the agreement. If only the share of each partner in the profits has been agreed upon, the share of each in the losses shall be in the same proportion. In the absence of stipulation, the share of each partner in the profits and losses shall be in proportion to what he may have contributed, but the industrial partner shall not be liable for the losses. As for the profits, the industrial partner shall receive such share as may be just and equitable under the circumstances. If besides his services he has contributed capital, he shall also receive a share in the profits in proportion to his capital. (1689a)
Article 1798 If the partners have agreed to intrust to a third person the designation of the share of each one in the profits and losses, such designation may be impugned only when it is manifestly inequitable. In no case may a partner who has begun to execute the decision of the third person, or who has not impugned the same within a period of three months from the time he had knowledge thereof, complain of such decision. The designation of losses and profits cannot be intrusted to one of the partners. (1690)
Article 1799 A stipulation which excludes one or more partners from any share in the profits or losses is void. (1691)
y Lays out the rules in the distribution of profits and losses.
y A, B and C are partners with capital contributions of P30,000.00, P20,000.00 and P10,000.00 respectively, where C is a capitalist-industrial partner. For one year of operations, the partnership earned net profits of P17,000.00. How shall these profits be divided among the partners? (C is entitled to receive a P2,000.00 bonus out of the entire P17,000.00.)
(1) In accordance with any existing agreement between the partners as to how they shall share.
(2) If there was no agreement, then the partners shall share on a pro-rata basis.
(3) The industrial partner shall get what is JUST and EQUITABLE in the circumstances. (BONUS TO PARTNER)
Partner   Capital Contribution   Ratio   Share in Distributable Profit   Bonus        Total Share in Profits
A         P 30,000.00            3/6     P  7,500.00                     -            P  7,500.00
B         P 20,000.00            2/6     P  5,000.00                     -            P  5,000.00
C         P 10,000.00            1/6     P  2,500.00                     P 2,000.00   P  4,500.00
TOTAL     P 60,000.00            6/6     P 15,000.00                     P 2,000.00   P 17,000.00
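The division above can be checked with a short computation. This is only an illustrative sketch of the notes' example, not part of the Civil Code text; the function name `divide_profits` and its signature are hypothetical, and it assumes the industrial partner's "just and equitable" bonus is taken off the top before the remainder is divided pro rata by capital contribution.

```python
# Hypothetical helper: the industrial partner's bonus is deducted first,
# then the distributable balance is shared pro rata by capital contribution.
def divide_profits(capital, profit, industrial_bonus):
    """capital: dict partner -> contribution; industrial_bonus: dict partner -> bonus."""
    distributable = profit - sum(industrial_bonus.values())
    total_capital = sum(capital.values())
    shares = {}
    for partner, contribution in capital.items():
        pro_rata = distributable * contribution / total_capital
        shares[partner] = pro_rata + industrial_bonus.get(partner, 0)
    return shares

# A, B and C contribute P30,000, P20,000 and P10,000; net profit is P17,000
# and C, the capitalist-industrial partner, receives a P2,000 bonus.
shares = divide_profits(
    {"A": 30_000, "B": 20_000, "C": 10_000},
    17_000,
    {"C": 2_000},
)
print(shares)  # {'A': 7500.0, 'B': 5000.0, 'C': 4500.0}
```

The printed shares match the table: P7,500.00 for A, P5,000.00 for B, and P4,500.00 (P2,500.00 pro rata plus the P2,000.00 bonus) for C.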
y The same rules shall apply to losses in the partnership's operations; however, the industrial partner shall not share in the losses, as there is no way for him to retract his industry. In the event of losses his efforts would have been in vain, so it can be said that he has already shared in them.
y What is the legal effect of a stipulation that excludes a partner from sharing in the profits or losses? Under Article 1799, the stipulation shall be void, because there must be mutual sharing of profits and losses.
y Can the partners appoint a 3rd person to designate the division of their profits and losses? Yes, and they will not be allowed to question his decisions unless the designation of shares is manifestly inequitable.
y 2 cases where partners ABSOLUTELY cannot question shares designated by the 3rd person:
(1) When a partner begins to execute the 3rd person's decision
(2) When complaints are raised AFTER three months from the time of knowledge of the designation
y Can the partners designate one of themselves to distribute profits or losses? No, the law prohibits this because there may be disparities when it comes to the distribution of net profits.
Article 1800 The partner who has been appointed manager in the articles of partnership may execute all acts of administration despite the opposition of his partners, unless he should act in bad faith; and his power is irrevocable without just and lawful cause. The vote of the partners representing the controlling interest shall be necessary for such revocation of power. A power granted after the partnership has been constituted may be revoked at any time. (1692a)
y 2 Kinds of Managing Partners:
(1) Appointed DURING the Constitution of the Partnership
May execute all administrative acts unless he acted in bad faith.
His power may not be revoked unless there is a JUST and LAWFUL cause and the vote of the partners with controlling interest.
Even if there are objections to his decisions coming from the partners, his authority will prevail UNLESS he has acted in bad faith.
Acts of administration: ordinary business and administrative transactions.
Why can he not be revoked for no reason? Because if you revoke his power, you are in effect changing the terms of the contract of partnership.
(2) Appointed AFTER the Constitution of the Partnership
May have his power revoked with or without cause
Decided upon by those partners who own the controlling interest in the partnership
Article 1801 If two or more partners have been entrusted with the management of the partnership without specification of their respective duties, or without a stipulation that one of them shall not act without the consent of the others, each one may separately execute all acts of
administration, but if any of them should oppose the acts of the others, the decision of the majority shall prevail. In case of a tie, the matter shall be decided by the partners owning the controlling interest. (1693a)
y Assume that A, B, C and D are all managing partners. A appoints E as a secretary but B objects to this. Is the appointment of E valid? Yes, since majority votes are first counted by head. If C and D were the ones to object, and they owned a combined total of 51% of partnership interest, then the appointment would not be valid. However, if B was still the one who objected and he owns 51% of partnership interest, the appointment will still be valid, because majority votes are first counted by head.
y If the partnership cannot make a decision and ends up in a tie (head count and interest), then the partnership is to be dissolved. This will be the only remedy, unless one of the other partners relents.
Article 1802 In case it should have been stipulated that none of the managing partners shall act without the consent of the others, the concurrence of all shall be necessary for the validity of the acts, and the absence or disability of any one of them cannot be alleged, unless there is imminent danger of grave or irreparable injury to the partnership. (1694)
y This is a case wherein two partners, A and B, stipulate that one cannot act without the consent of the other. Thus, there must always be concurrence between the two before any transaction may be entered into; the absence or disability of the other cannot be used as an excuse to dispense with his consent.
y Illustrative Case: A sold to B, one of the managing partners of Partnership X (the other being C), a certain number of mining claims without the consent of C. In an action by A to recover the unpaid balance of the purchase price against Partnership X, C claims that the contract is not binding upon the partnership for the reason that under the articles of partnership, there is a stipulation that one of the partners cannot bind the firm by a written contract without the consent of the others. Is the transaction made by B binding upon the partnership? According to the Supreme Court, the stipulation applies only to B and C. A has the right to assume that B was authorized to complete the transaction.
Therefore, the partnership is liable, and since B violated the terms of the agreement between himself and C, he is required to reimburse C for the amount C will be paying A on behalf of the partnership, the reason being that it would be unfair for C, who had no knowledge of B's transaction, to have to pay when he never agreed to it.
y The only instance in which a partner may transact without the other's concurrence is when there is imminent danger of grave or irreparable damage to the partnership if he does not do so. However, the partner involved must be able to prove this, or else he shall be liable for what he has done.
y Example: A and B are in a partnership where they sell fruits. B notices that the fruits in the warehouse are starting to rot so, without the consent of A, he sells them. This is alright, because if the fruits rot, it would have been to the detriment of the partnership.
Article 1803 When the manner of management has not been agreed upon, the following rules shall be observed:
(1) All of the partners shall be considered agents and whatever any one of them may do alone
shall bind the partnership, without prejudice to the provisions of article 1801.
(2) None of the partners may, without the consent of the others, make any important alteration in the immovable property of the partnership, even if it may be useful to the partnership. But if the refusal of consent by the other partners is manifestly prejudicial to the interest of the partnership, the court's intervention may be sought. (1695a)
y If there is no agreement as to who the managing partners are, during or after the constitution of the partnership, then the assumption shall be that ALL the partners are managing partners, without prejudice to Article 1801, meaning Article 1801 will then apply to their case.
y The second paragraph of this article provides that the partners cannot simply alter immovable property owned by the partnership without the consent of the other partners, because this is NOT an act of administration but of OWNERSHIP.
y Note that consent here is not qualified, so it may be express or it may be implied.
y Example: Suppose A, B, C and D are in a partnership where the managing partner is not specified, and A decides to put up a warehouse on a piece of land owned by the partnership without the consent of the other partners because he believes it to be useful and beneficial to the partnership. His partners come over, once the warehouse is finished, to look at it and do not object to its existence. Was this valid? Yes; since the partners did not object, there is IMPLIED consent. Since consent was never qualified in the article, it is assumed that implied consent is enough.
Suppose before A builds the warehouse, he asks for the consent of the other partners, who refuse to give it. When A tries to convince them and asks why they refuse, they simply say that they do not want the warehouse there, making their objection manifestly prejudicial, meaning there is really no reason for it. What, then, is the remedy of A in this situation? A may bring the matter to court. If the court finds that the other partners have no solid reason to object, it may compel them to give their consent.
Article 1804 Every partner may associate another person with him in his share, but the associate shall not be admitted into the partnership without the consent of all the other partners, even if the partner having an associate should be a manager. (1696)
y Refers to a SUBPARTNERSHIP.
y A, B and C are in a partnership wherein A is the managing partner. A enters into a contract with D that states D will receive 50% of A's share in partnership profits. Can A do this even without the consent of the other partners? Yes, because a sub-partnership will not affect the composition of the partnership, and D will not be able to interfere with the partnership's management anyway.
y When are you required to share your partnership profits with 3rd persons? When you contract with 3rd persons, because perhaps in some past event you needed money and they provided it, and in your contract it was agreed that they will share in the partnership profits. The 3rd person can also opt to receive ALL profits.
y Can D become a partner without the consent of the other partners, if he associates with the managing partner?
No, D would need to get the consent of all the partners, because this would change the partnership composition.
Article 1805 The partnership books shall be kept, subject to any agreement between the partners, at the principal place of business of the partnership, and every partner shall at any reasonable hour have access to and may inspect and copy any of them. (n)
y The partnership books shall be kept in the following places, in order:
(1) In accordance with the partnership agreement
(2) If there is no agreement, then the partnership books shall be kept at the principal place of business of the partnership (e.g., the headquarters)
y Each partner shall have access to ALL partnership books.
y When will a partner be allowed to access the partnership books? The partner is allowed to access the partnership books during REASONABLE HOURS of business (8am-5pm), according to the law. The one who keeps the partnership books cannot dictate when they may be inspected.
Article 1806 Partners shall render on demand true and full information of all things affecting the partnership to any partner or the legal representative of any deceased partner or of any partner under legal disability. (n)
y The article does not mean that the partners need to wait for a demand before disclosing information; when they get hold of the information, they should disclose it immediately, although additional details may be demanded.
y If information is not disclosed and it is found out later on, the partner/s who did not disclose it will be held liable and charged with misrepresentation.
y Suppose A, B and C are in a partnership wherein A is sent to inspect partnership property in Mindanao. A realizes that the property contains oil deposits and does not disclose this information to B and C. He also lies and says that the property is completely useless for their business, and offers to buy B and C's interests in the partnership.
When A is the only one holding the business, he develops the land and gains substantial profits from the oil deposits. B and C later on learn about the information A kept hidden from them and demand that they be given their shares in the oil profits. The question now is: can B and C, after having sold their interests in the partnership, still share in the profits? Yes, they will be allowed to share in the profits, because the information regarding the oil deposits already existed when they sold their shares to A; it was simply hidden from them.
Article 1807 Every partner must account to the partnership for any benefit, and hold as trustee for it any profits derived by him without the consent of the other partners from any transaction connected with the formation, conduct, or liquidation of the partnership or from any use by him of its property. (n)
y A partner who receives benefits or profits derived without the consent of the others shall account for them as the partnership's.
y If partnership property is mortgaged and foreclosed, the partner who uses personal funds to get the property back does not become its new owner; he will only be its trustee.
y If the partner gets the property back after ONE year from the 3rd party involved, then it shall become his, as it was a private transaction, so long as he used his own funds.
y Example: A and B are partners engaged in the operation of a cinema business. The theater was mortgaged to C, who foreclosed the mortgage. A, on his own behalf, redeemed the property with his own private funds. Subsequently, A files a petition for the cancellation of the old title of the partnership and the issuance of a new title in HIS name alone. Did A become the absolute owner of the property? No; the law says that he only holds the property as trustee and is entitled to reimbursement plus interest from the time he redeemed the property.
Article 1808 The capitalist partners cannot engage for their own account in any operation which is of the kind of business in which the partnership is engaged, unless there is a stipulation to the contrary. Any capitalist partner violating this prohibition shall bring to the common fund any profits accruing to him from his transactions, and shall personally bear all the losses. (n)
y The article is with regard to a capitalist partner engaging in other businesses.
y Is the capitalist partner allowed to engage in other businesses aside from the one he has with the partnership? Yes, as long as the business he engages in is dissimilar or different from that of the partnership.
y What will happen if the capitalist partner violates the rule regarding his ability to engage in other businesses? He shall have to bring the profits he gained from the other business to the partnership and be liable for the losses suffered by the partnership.
y Why is the capitalist partner not allowed to engage in a similar line of business? Because he might take advantage of the information in the partnership or of its clients, resulting in a conflict of interest between himself and the other partners.
y The capitalist partner can engage in a business similar to the partnership's if there is a stipulation to that effect in the contract of partnership, or if the business he operates exists in a different area or place.
Article 1809 Any partner shall have the right to a formal account as to partnership affairs:
(1) If he is wrongfully excluded from the partnership business or possession of its property by his co-partners;
(2) If the right exists under the terms of any agreement;
(3) As provided by Article 1807;
(4) Whenever other circumstances render it just and reasonable. (n)
y General Rule: During the existence of the partnership, a partner is not required to demand an accounting because his interest is already protected by two articles of the law, Article 1805 and Article 1806. But for specific cases, the law provides that he can DEMAND an accounting of the partnership books.
y 4 Cases where a partner can demand an accounting:
(1) When he is wrongfully excluded from the partnership operations (business and property possession)
(2) If the right exists under their agreement
(3) Under Article 1807
(4) Other circumstances which render it just and reasonable
Section 2 – Property Rights of a Partner
Article 1810 The property rights of a partner are:
(1) His rights in specific partnership property
(2) His interest in the partnership
(3) His right to participate in the management. (n)
y The partner has the following rights:
(1) Right to the ownership of partnership property
(2) Right to his interest in the partnership
(3) Right to participate in partnership management
Article 1811 A partner is co-owner with his partners of specific partnership property. The incidents of this co-ownership are such that:
(1) A partner, subject to the provisions of this Title and to any agreement between the partners, has an equal right with his partners to possess specific partnership property for partnership purposes; but he has no right to possess such property for any other purpose without the consent of his partners;
(2) A partner's right in specific partnership property is not assignable except in connection with the assignment of rights of all the partners in the same property;
(3) A partner's right in specific partnership property is not subject to attachment or execution, except on a claim against the partnership;
(4) A partner's right in specific partnership property is not subject to legal support under Article 291. (n)
y The partners are considered co-owners of specific partnership property.
y If A, B and C are partners who own specific property under the partnership's name, what are their rights?
(1) They can use it for partnership business purposes
(2) They cannot use it for personal purposes WITHOUT the consent of the others
y Why can't A simply assign his right with respect to the partnership's property?
(1) It doesn't belong to him
(2) The extent of his interest in the property cannot be determined before dissolution
y The partnership can altogether assign to a 3rd party the right to use the property for partnership business purposes.
y The right of the partners to the property is not subject to attachment unless on a claim against the partnership, for the reason that no one partner is its owner.
y Under Article 291, specific partnership property cannot be made the subject of legal support because it does not belong to any one of the partners.
Article 1812 A partner's interest in the partnership is his share of the profits and surplus. (n)
y The article defines what the partner's interest in the partnership is.
y What is the partner's interest in the partnership?
(1) DURING operations, the partner's interest is his share in the profits and losses
(2) AFTER operations/LIQUIDATION/DISSOLUTION, his interest is in the surplus of partnership assets after all debts have been cleared
y Interest can be subject to attachment or execution because it belongs to the partner, not the partnership.
Article 1813 A conveyance by a partner of his whole interest in the partnership does not of itself dissolve the partnership, or, as against the other partners in the absence of agreement, entitle the assignee, during the continuance of the partnership, to interfere in the management or administration of the partnership business or affairs, or to require any information or account of partnership transactions, or to inspect the partnership books; but it merely entitles the assignee to receive in accordance with his contract the profits to which the assigning partner would otherwise be entitled. However, in case of fraud in the management of the partnership, the assignee may avail himself of the usual remedies. In case of a dissolution of the partnership, the assignee is entitled to receive his assignor's interest and may require an account from the date only of the last account agreed to by all the partners. (n)
y How can a partner convey his interest in the partnership without getting the partnership dissolved?
(1) By selling it to a 3rd person
(2) By donating it to a 3rd person
(3) By using it as security on a loan from a 3rd person
y Example: D offers to buy A's interest of P50,000.00 for P1,000,000.00 and A agrees to sell his interest. What happens now? D becomes the assignee and A becomes the assignor, but the partnership will not be dissolved because it is only A's interest in the profits and surplus that is being sold. A will also continue to be the partner, but D will be the one to receive his profits.
y This is similar to sub-partnerships, so the consent of the others is not required for the interest to be conveyed.
y The assignee does not have any say in the management.
y Rights of the Assignee:
(1) He shall get the assignor's share in the profits/surplus
(2) He may avail of the legal remedies of the partners in case of fraud by the assignor
(3) He can demand an accounting upon dissolution, but only starting from the date of the last accounting undertaken by the partnership
(4) He can ask for the dissolution of the partnership if it has reached the end of its term, or at any time if the partnership is one at will, because he is interested in the surplus
y The assignee, however, cannot become a partner without the consent of the other partners because it would entail a change in the partnership's composition.
Article 1814 Without prejudice to the preferred rights of partnership creditors under Article 1827, on due application to a competent court by any judgment creditor of a partner, the court which entered the judgment, or any other court, may charge the interest of the debtor partner with payment of the unsatisfied amount of such judgment debt with interest thereon; and may then or later appoint a receiver of his share of the profits, and of any other money due or to fall due to him in respect of the partnership, and make all other orders, directions, accounts and inquiries which the debtor partner might have made, or which the circumstances of the case may require. The interest charged may be redeemed at any time before foreclosure, or in case of a sale being directed by the court, may be purchased without thereby causing a dissolution:
(1) With separate property, by any one or more of the partners;
(2) With partnership property, by any one or more of the partners with the consent of all the partners whose interests are not so charged or sold. (n)
y Refers to a partner who obtained a loan from a 3rd person and was unable to repay it.
y For example, PARTNER A failed to pay CREDITOR C a sum of P50,000.00, so C files a case against A, knowing that A, being a partner, will receive his interest. C wins the case but A is still unable to pay, so C asks that A's interest be charged so that it goes to C and cancels out A's debt.
Done to protect C's interest
The charged interest can be redeemed using the separate property of the partners or the partnership's property, as long as all the partners consent to this, and they are reimbursed by the defaulting partner
The amount charged must be sufficient to pay the loan plus legal interest
SECTION 3 – Obligations of the Partners with Regard to Third Persons
Article 1815 Every partnership shall operate under a firm name, which may or may not include the name of one or more of the partners. Those who, not being members of the partnership, include their names in the firm name, shall be subject to the liability of a partner. (n)
y Firm names are required for partnerships because they are juridical persons in need of separate names, so that they are distinguishable from the partners and from other partnerships. The name can come from any of the partners or from 3rd persons.
y If a 3rd person's name is used with his consent, then he shall be liable as a partner, without the rights of a partner, because the partnership uses his name.
y The partnership name must be registered with the DEPARTMENT OF TRADE AND INDUSTRY (DTI) because, if such a name already exists, there might be cases of duplication.
y You cannot keep the name of a deceased partner, as his death caused the partnership's dissolution.
y Sample General and Limited Partnership Names:
(1) GENERAL – A & Company
(2) LIMITED – A, Ltd.
Article 1816 All partners, including industrial ones, shall be liable pro rata with all their property and after all the partnership assets have been exhausted, for the contracts which may be entered into in the name and for the account of the partnership, under its signature and by a person authorized to act for the partnership. However, any partner may enter into a separate obligation to perform a partnership contract. (n)
Article 1817
Any stipulation against the liability laid down in the preceding article shall be void, except as among the partners. (n)
y As to 3rd persons, ALL partners are liable pro rata and subsidiarily, but as to each other, they are liable in proportion to their capital contributions.
y Examples:
(1) A, B and C are in a partnership where C is the industrial partner, and a sum of P26,000.00 is owed to D. A and B contributed P15,000.00 and P5,000.00 respectively. How shall the debt be shared?
As to D, the partners will share equally in the debt left after exhausting all partnership assets (P6,000.00), so they will each have to pay P2,000.00, regardless of C being an industrial partner. If C is insolvent, or if B died, or if A has left the country, the liability of the remaining partners cannot be increased. As to each other, they are liable in proportion to their capital contributions, so B and C will be reimbursed by A.
(2) A, B, C, D and E are sued in court but E is later cleared of his charges. The court orders A, B, C and D to pay their creditor, but C moves to reconsider so that all should be charged; this motion is denied. Can A, B, C and D alone be liable for the debt? According to the Supreme Court, the 4 partners cannot alone be made liable for the debt because, in excluding E, the court increased the other partners' liability, and this is prohibited by the law. The law states that the liability of a partner cannot be increased such that he shoulders the liability of another partner.
(3) What if there was an agreement that stated B is only liable up to P5,000.00? How will A, B and C share in their liability? The stipulation shall be void as to 3rd persons, so they will still share pro rata. However, B and C will be reimbursed by A because, as among themselves, the stipulation is valid and C is an industrial partner.
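The Article 1816 arithmetic in example (1) above can be sketched as follows. This is an illustrative sketch only, not statutory text; the function name `unpaid_share_per_partner` is hypothetical, and the P20,000.00 figure for exhausted partnership assets is an assumption chosen so that P6,000.00 remains, as in the example.

```python
# Hypothetical sketch of the Article 1816 rule described in the notes:
# after partnership assets are exhausted, the unpaid balance is borne
# equally (per head) by ALL partners, industrial partners included,
# as far as the creditor is concerned.
def unpaid_share_per_partner(debt, partnership_assets, num_partners):
    balance = max(debt - partnership_assets, 0)  # what the assets could not cover
    return balance / num_partners

# Debt of P26,000; assumed assets of P20,000 leave P6,000 unpaid,
# so A, B and C each answer to D for P2,000.
share = unpaid_share_per_partner(26_000, 20_000, 3)
print(share)  # 2000.0
```

As among the partners themselves, reimbursement then follows capital contributions, so the industrial partner C (and here B as well) recovers from A what each advanced beyond his internal share.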
Article 1818 Every partner is an agent of the partnership for the purpose of its business, and the act of every partner, including the execution in the partnership name of any instrument, for apparently carrying on in the usual way the business of the partnership of which he is a member, binds the partnership, unless the partner so acting has in fact no authority to act for the partnership in the particular matter, and the person with whom he is dealing has knowledge of the fact that he has no such authority. An act of a partner which is not apparently for the carrying on of the business of the partnership in the usual way does not bind the partnership unless authorized by the other partners. Except when authorized by the other partners or unless they have abandoned the business, one or more but less than all the partners have no authority to:
(1) Assign the partnership property in trust for creditors or on the assignee's promise to pay the debts of the partnership;
(2) Dispose of the goodwill of the business;
(3) Do any other act which would make it impossible to carry on the ordinary business of the partnership;
(4) Confess a judgment;
(5) Enter into a compromise concerning a partnership claim or liability;
(6) Submit a partnership claim or liability to arbitration;
(7) Renounce a claim of the partnership.
No act of a partner in contravention of a restriction on authority shall bind the partnership to persons having knowledge of the restriction. (n)
y Qualifies the authority of the partners.
y Authority must be exercised in the usual course of business.
y A transaction beyond a partner's authority is binding if it is in the usual course of business, because the 3rd person is assumed to have no knowledge of his lack of authority.
y When are transactions not binding?
(1) When a transaction is not in the usual course of business and has no consent from all the other partners
(2) When the 3rd person had knowledge of the lack of authority of the acting partner
Article 1819 Where title to real property is in the partnership name, any partner may convey title to such property by a conveyance executed in the partnership name; but the partnership may recover such property unless the partner's act binds the partnership under the provisions of the first paragraph of Article 1818, or unless such property has been conveyed by the grantee or a person claiming through such grantee to a holder for value without knowledge that the partner, in making the conveyance, has exceeded his authority. Where title to real property is in the name of one or more but not all the partners, and the record does not disclose the right of the partnership, the partners in whose name the title stands may convey title to such property, but the partnership may recover such property if the partners' act does not bind the partnership under the provisions of the first paragraph of Article 1818, unless the purchaser or his assignee is a holder for value, without knowledge. Where title to real property is in the name of one or more or all the partners, or in a third person in trust for the partnership, a conveyance executed by a partner in the partnership name, or in his own name, passes the equitable interest of the partnership, provided the act is one within the authority of the partner under the first paragraph of Article 1818. Where title to real property is in the names of all the partners, a conveyance executed by all the partners passes all their rights in such property. (n)
y Refers to the conveyance of immovable property.
y Suppose A, B and C are partners engaged in the buying and selling of property, and the following situations occur:
(1) A, without authority, sells land to D in the partnership's name but D immediately sells it to E. The land title was originally under the partnership's name. Can the partnership recover the land? Title passes to D, then to E.
The partnership cannot recover the land once it has been transferred to E; but if the land was still with D, it could have been recovered if the contract was not binding.
(2) What if A sells the property under his own name? Only the equitable title passes to D.
(3) What if A sells the property and the land title is registered under his name? Title passes to D because the land is registered under the partner's name. This will hold true if A, B and C are co-owners of the land, even if only A sold it to D.
(4) The land title belongs to one or more or all of the partners, or to a 3rd person in trust for the partnership. Only the equitable title will pass to D if the seller had no authority to sell it to D.
(5) A, B and C ALL sell the land to D, with the land title belonging to ALL of them. Title passes to D because ALL the partners sold to him.
Article 1820 An admission or representation made by any partner concerning partnership affairs within the scope of his authority in accordance with this Title is evidence against the partnership. (n)
y Anything a partner says or admits, as long as it concerns partnership affairs and is within the scope of his authority, is sufficient evidence against the partnership.
y This article is a rule of evidence.
y In order that an admission/representation made can be used as evidence, the existence of the partnership must first be established and proved.
y Examples:
(1) Partner A borrows money from the bank and declares that the money borrowed is for the partnership. This statement, made by A, is enough evidence against the partnership, and the bank may use it in case the partnership does not pay back the money borrowed.
(2) A, B, and C are partners. A told D, a 3rd person, that the debtor had already paid his obligation to the partnership. Is this enough evidence against the partnership? YES, since it concerns partnership affairs and the partner has authority to say so.
Article 1821 Notice to any partner of any matter relating to partnership affairs, and the knowledge of the partner acting in the particular matter, acquired while a partner or then present to his mind, and the knowledge of any other partner who reasonably could and should have communicated it to the acting partner, operate as notice to or knowledge of the partnership, except in the case of a fraud on the partnership, committed by or with the consent of that partner. (n)
y IN SHORT, notice to ANY of the partners is notice to the partnership. (You don't have to notify EVERY partner in relation to partnership affairs.)
y Knowledge of a partner acting in the particular matter (meaning the partner is a managing partner), or knowledge of any partner who SHOULD HAVE communicated it to the managing partner, is knowledge of the partnership.
y This is so EVEN IF the non-managerial partner does not communicate the information he knows regarding partnership affairs. The partner SHOULD have communicated it; non-knowledge by the other partners is not a reason to evade obligations.
y If notice is delivered to a partner, that is effective communication to the partnership, notwithstanding the failure of the partner to communicate such notice or knowledge to the other partners.
y Examples:
(1) A, B, and C are partners where B is the managing partner. D, a 3rd person, filed a case against the partners AND the partnership for some unknown reason. Does D need to notify all of them? D just needs to notify either A, B, or C, but doesn't have to notify ALL OF THEM (imagine if there are 100 partners; it would be burdensome and costly to notify all 100). So if A is notified about the case, that is considered notice to EVERYONE, even if A is not a managerial partner (since A should communicate this to all the partners).
(2) Suppose D wants to sell a piece of land to the partnership and notifies B (the managing partner) about it, but warns him that the land is under litigation and there is a possibility of the land being claimed by E. B took the risk and purchased the land. Later on, E still claimed the land. Can the partners reclaim it? Even though not ALL the partners were informed about the litigation, the partnership cannot get the land anymore since B was informed about it. Notice to B, the acting partner, is already notice to the partnership.
(3) Suppose before B became a partner, D was able to talk to him about the piece of land under litigation. Later on, B became a managing partner and purchased the land D told him about a long time ago. E won the litigation and was able to claim the land. Can the partnership reclaim the land? The partnership cannot get it anymore. Even if B was not informed WHILE he was a partner, the information was still present in his mind. The issue here would be whether B can still recall the conversation he had with D before he became a managing partner.
(4) Suppose D informed C (who is not a managing partner) about the land under litigation. Later on, D sold the land to B, the managing partner, without informing him that the land was under litigation (take note: the information was given to C). Is notice to C notice to B? YES, because C should have communicated the information.
y In cases (2), (3) and (4), the partnership cannot file an action for damages against D, since the partnership "had knowledge" of the litigation but the partners still took the risk of buying the land.
Article 1822 Where, by any wrongful act or omission of any partner acting in the ordinary course of the business of the partnership or with the authority of his co-partners, loss or injury is caused to any person, not being a partner in the partnership, or any penalty is incurred, the partnership is liable therefor to the same extent as the partner so acting or omitting to act. (n)
y In the following cases, the obligation is not a pro-rata or equal obligation but a solidary one; any partner MAY be made to pay for the whole obligation (unlike in Article 1816, where each partner pays only his SHARE):
(1) When, by a wrongful act or omission, loss or injury is caused to a 3rd person.
Example:
(a) A, B, and C are partners. A committed a wrongful act or omission with D as the victim, causing P50,000 worth of injury to D. What can D do? D can go after A for the full amount of P50,000, OR after B OR C.
(b) Can D go after B for the whole P50,000 since B is the richest among the partners? This is allowable since the partners are solidarily liable for A's wrongful act or omission. B will be entitled to reimbursement from the one responsible, A. Any one of A, B, or C, or all the partners including the partnership, can be made to pay, without prejudice to the right of the partners to get reimbursement from the one responsible for the wrong.
(2) When a partner, within the scope of his authority, receives money or property from a third person and misapplies it.
Example: A partnership is engaged in a pawnshop business. D, a 3rd person, pawned his watch to A and A sells it. Who is liable for the watch? All the partners are solidarily liable to D since A misapplied the watch received from D.
(3) When the partnership, in its ordinary course of business, receives money or property from a 3rd person and a partner misapplies it while it is in the custody of the partnership.
Example: The partnership is engaged in a pawnshop business where it received a watch from D to be pawned. The watch is placed in the partnership VAULT. B, a partner, gets the watch from the vault and sells it. Who is liable for the watch? All the partners are solidarily liable to D.
Article 1825 When a person, by words spoken or written or by conduct, represents himself, or consents to another representing him to anyone, as a partner in an existing partnership or with one or more persons not actual partners, he is liable to any such persons to whom such representation has been made, who has, on the faith of such representation, given credit to the actual or apparent partnership, and if he has made such representation or consented to its being made in a public manner he is liable to such person, whether the representation has or has not been made or communicated to such person so giving credit by or with the knowledge of the apparent partner making the representation or consenting to its being made:
(1) When a partnership liability results, he is liable as though he were an actual member of the partnership;
(2) When no partnership liability results, he is liable pro rata with the other persons, if any, so consenting to the contract or representation as to incur liability, otherwise separately. (n)
• 2 things are being mentioned:
(1) PARTNERSHIP by estoppel: there is an existing partnership, and the partners misrepresent a stranger as a partner. EXAMPLE:
(a) Suppose there is a partnership, X, with partners A, B, and C. D told E that he is a partner of A, B, and C. E verified from the actual partners of X partnership whether D is really a partner, and A, B, and C consented. E entered into a contract with D, believing he was a partner. This is a partnership by estoppel, since A, B, and C held D out as a partner. In this case, E can go after A, B, and C.
(b) Suppose only A and B consented. Is there a partnership by estoppel? There will be no partnership by estoppel, since only A and B, not all the partners, consented to D's misrepresentation.
(2) PARTNERS by estoppel: 2 or more persons pretend to be partners in the eyes of 3rd persons. Example: A, B, and C said they were partners to D and entered into a contract with the "partners". When it was time for them to pay D their obligation, they could not, for the reason that they are not partners. What is their obligation to D? Their obligation to D will be pro rata, as if they were partners (since they are partners by estoppel).
Article 1826 A person admitted as a partner into an existing partnership is liable for all the obligations of the partnership arising before his admission as though he had been a partner when such obligations were incurred, except that this liability shall be satisfied only out of partnership property, unless there is a stipulation to the contrary. (n)
• A new partner admitted to an existing partnership is also liable for the obligations existing before he was admitted, but his liability extends only to his contribution to the partnership, UNLESS otherwise stipulated.
• A new partner is liable up to his separate property for obligations incurred while he was already a partner.
• Example: A, B, and C are the original partners of the partnership X, with contributions of P10,000.00 each.
X partnership owes D P40,000.00. Later on, E entered the partnership and contributed P4,000.00. How shall the debt be paid? P34,000.00 will be paid to D out of the partnership assets (the three original contributions of P10,000.00 each plus E's P4,000.00), and the remaining P6,000.00 will be paid out of A, B, and C's personal assets, divided among the 3 original partners pro rata. E's separate property does not answer for this pre-admission debt unless stipulated.
Article 1827 The creditors of the partnership shall be preferred to those of each partner as regards the partnership property. Without prejudice to this right, the private creditors of each partner may ask for the attachment and public sale of the share of the latter in the partnership assets. (n)
• Partnership creditors have a BETTER RIGHT than the partners' personal creditors WITH REGARD TO PARTNERSHIP PROPERTY.
• Personal creditors of a partner have a BETTER RIGHT than a partnership creditor with regard to the PERSONAL PROPERTY of the partner.
• EXAMPLE:
(1) A, B, and C are partners. A OWES E P6,000.00. The PARTNERSHIP OWES D P28,000.00. The total partnership assets amount to P40,000.00. Who has the better right to the partnership property? In this case, D, the partnership creditor, has the better right to the partnership property. When the obligation to D is paid, what is left for the partners to share is P4,000.00 each (P12,000.00 divided among the three). If E, the personal creditor of A, demands to be paid out of partnership property, he can only get A's P4,000.00 share from it, since the priority there is the partnership creditor. The remaining P2,000.00 will be paid out of A's personal property.
(2) If the total partnership assets are only P28,000.00 and the liability of the partnership is P40,000.00, how shall the debt be paid? A, B, and C will each have to pay D P4,000.00 out of their personal property to cover the P12,000.00 shortfall.
(3) If A only has P6,000.00 of personal property, who has the better right to it? A's personal creditor, E, has priority, so D cannot collect A's P4,000.00 share from it. Neither can D increase the obligation of the other partners in order to collect the deficiency.
Chapter 3 – Dissolution and Winding Up
Article 1828 The dissolution of a partnership is the change in the relation of the partners caused by any partner ceasing to be associated in the carrying on, as distinguished from the winding up, of the business. (n)
Article 1829 On dissolution, the partnership is not terminated, but continues until the winding up of partnership affairs is completed. (n)
Article 1830 Dissolution is caused: ... acquire the ownership thereof; (5) By the death of any partner; (6) By the insolvency of any partner or of the partnership; (7) By the civil interdiction of any partner; (8) By decree of court under the following article. (1700a and 1701a)
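The Article 1827 priority examples above reduce to simple arithmetic: partnership creditors are paid out of partnership assets first, and any shortfall falls on the partners pro rata. A minimal sketch, with the function name and structure my own and the figures taken from the examples:

```python
# Marshaling sketch for the Article 1827 examples: the partnership creditor
# is paid from partnership assets first; any shortfall is borne equally by
# the partners, and any surplus is shared equally. Illustrative only.

def settle(assets, partnership_debt, n_partners):
    paid_from_assets = min(assets, partnership_debt)
    shortfall = partnership_debt - paid_from_assets  # borne by partners pro rata
    surplus = assets - paid_from_assets              # left for the partners
    return paid_from_assets, shortfall / n_partners, surplus / n_partners

# Example (1): assets P40,000, partnership owes D P28,000, three partners.
paid, per_partner_shortfall, per_partner_surplus = settle(40_000, 28_000, 3)
print(paid, per_partner_shortfall, per_partner_surplus)  # 28000 0.0 4000.0
```

Running `settle(28_000, 40_000, 3)` reproduces example (2): the whole P28,000.00 goes to D and each partner still owes P4,000.00 out of his personal property.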
• Dissolution is usually caused by a change in the relation between the partners.
• If there is dissolution, no new partnership business may be undertaken.
• Upon dissolution, the partnership continues until the winding up and liquidation are completed.
• CAUSES OF DISSOLUTION:
(1) WITHOUT VIOLATION OF THE AGREEMENT
(a) Termination/expiration of the term or specific undertaking
(b) Upon the express will of any partner, if there is no term or specific undertaking, AS LONG AS THE PARTNERS ACT IN GOOD FAITH
(c) Upon the will of the partners whose interest is not assigned or charged. Example: A sold his interest to E, and B's interest is charged to F because he borrowed P50,000.00 from him. C and D are the only ones who can ask for dissolution, since their interests are not assigned or charged.
(d) Expulsion bona fide of a partner (a partner is expelled in good faith in accordance with the agreement). Expulsion has the effect of decreasing the number of partners.
(2) IN VIOLATION OF THE AGREEMENT. Example: A, B, and C agreed that the term of their partnership is only until Dec. 31, 2011. A resigns prematurely (resigns early from the partnership). No one can prevent A from resigning, but the partners can ask for damages for his not abiding by the agreement.
(3) When it becomes unlawful for the partnership to carry on the business, or for a partner to carry on his role in it
(4) When a specific thing is contributed and it is lost before delivery. If it is lost after delivery, the partnership is not dissolved. If only the use of the thing is contributed and the thing is lost, whether before or after delivery (it does not matter when it was lost), the partnership is dissolved. If what is to be contributed is generic and it is lost, there is no dissolution.
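The loss-of-contribution rules in cause (4) form a small decision table: dissolution turns on what was contributed (a specific thing, its use only, or a generic thing) and on when the loss occurred. A sketch encoding my reading of the rules above, illustrative only:

```python
# Decision table for cause (4): is the partnership dissolved when a
# contribution is lost? Encodes the rules stated above; illustrative only.

def dissolved_by_loss(contribution, lost_before_delivery):
    if contribution == "specific thing":
        return lost_before_delivery          # lost after delivery: no dissolution
    if contribution == "use only":
        return True                          # lost before or after: dissolved
    if contribution == "generic":
        return False                         # genus is replaceable: no dissolution
    raise ValueError("unknown contribution type")

print(dissolved_by_loss("specific thing", lost_before_delivery=True))   # True
print(dissolved_by_loss("specific thing", lost_before_delivery=False))  # False
print(dissolved_by_loss("generic", lost_before_delivery=True))          # False
```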
Article 1831 On application by or for a partner, the court shall decree a dissolution whenever: ... (4) A partner willfully or persistently commits a breach of the partnership agreement, or otherwise so conducts himself in matters relating to the partnership business that it is not reasonably practicable to carry on the business in partnership with him; (5) The business of the partnership can only be carried on at a loss; (6) Other circumstances render a dissolution equitable.
On the application of the purchaser of a partner's interest under Article 1813 or 1814:
(1) After the termination of the specified term or particular undertaking;
(2) At any time, if the partnership was a partnership at will when the interest was assigned or when the charging order was issued. (n)
• When can a partnership be dissolved judicially?
(1) When a partner is DECLARED insane
(2) When he becomes incapable of performing his part in the partnership
(3) Misconduct of a partner prejudicial to the business
(4) Persistent breach of the partnership agreement
(5) The business can only be carried on at a loss
(6) Other circumstances:
(a) Abandonment of the business
(b) Fraud
(c) Refusal to render an accounting
(7) On the application of 3rd parties (those who purchased or have charged a partner's interest) under Articles 1813 and 1814
Article 1832 Except so far as may be necessary to wind up partnership affairs or to complete transactions begun but not then finished, dissolution terminates all authority of any partner to act for the partnership:
(1) With respect to the partners,
(a) When the dissolution is not by the act, insolvency or death of a partner;
(b) When the dissolution is by such act, insolvency or death of a partner, in cases where Article 1833 so requires;
(2) With respect to persons not partners, as declared in Article 1834.
• General Rule: When a partnership is dissolved, the partners cannot engage in new business transactions, because their authority to do so terminates upon the occurrence of dissolution.
• 2 Cases which are Contrary to the General Rule:
(1) During the WINDING UP of the business. Transactions relating to the winding up of the business, such as the liquidation of partnership assets, can be entered into, because the partners' authority to do so continues.
(2) To complete transactions unfinished at dissolution. Example: A and B are in a partnership where they have contracted with C to deliver goods in two installments. B resigns after the first delivery is made, thus dissolving the partnership.
Can A and B cease to continue with their obligation? NO. A and B must continue with their obligation in order to complete the unfinished transaction.
• If the dissolution is not by an act, insolvency or death, the authority of the partners as among themselves is terminated. Example: A partnership was dissolved due to the expiration of the term. If C transacts with D after this and defaults, he will be the only one liable AS TO THE PARTNERS. If A and B are made to pay D, C shall reimburse them.
Article 1833 Where the dissolution is caused by the act, death or insolvency of a partner, each partner is liable to his co-partners for his share of any liability created by any partner acting for the partnership as if the partnership had not been dissolved, unless:
(1) The dissolution being by the act of any partner, the partner acting for the partnership had knowledge of the dissolution; or
(2) The dissolution being by the death or insolvency of a partner, the partner acting for the partnership had knowledge or notice of the death or insolvency.
• If the dissolution is caused by an act, insolvency or death, then each partner shall share in the liability of the partnership arising from the act of a partner, unless the acting partner had knowledge of the act, or knowledge or notice of the insolvency or death.
• Example:
(1) B told A that he is resigning TODAY. The partnership is thus dissolved. Should A enter into a contract with D, who shall be liable? As among themselves, only A, because he had knowledge of B's resignation and thus knew that they were no longer in a partnership.
(2) If B texts his resignation to A because A is in Mindanao, and A contracts with D, was A's authority terminated when the text arrived? No, A's authority was not terminated, as he had only received a NOTICE. Mere notice cannot terminate the authority of the partners when the ground of dissolution is AN ACT of a partner; the act must be PERSONALLY KNOWN by the acting partner.
(3) If C texts A that B has died, does their authority terminate once A gets the text message? Yes, their authority is terminated, because in this case the cause of dissolution is death. Mere notice is sufficient to terminate authority when the ground is the insolvency or death of a partner.
Article 1834 After dissolution, a partner can bind the partnership, except as provided in the third paragraph of this article:
(1) By any act appropriate for winding up partnership affairs or completing transactions unfinished at dissolution;
(2) By any transaction which would bind the partnership if dissolution had not taken place, provided the other party to the transaction:
(a) Had extended credit to the partnership prior to dissolution and had no knowledge or notice of the dissolution; or
(b) Though he had not so extended credit, had nevertheless known of the partnership prior to dissolution, and, having no knowledge or notice of the dissolution, the fact of dissolution had not been advertised in a newspaper of general circulation in the place (or in each place if more than one) at which the partnership business was regularly carried on.
The liability of a partner under the first paragraph, No.
2, shall be satisfied out of partnership assets alone when such partner had been, prior to dissolution, unknown as a partner to the person with whom the contract is made, and so far unknown and inactive in partnership affairs that the business reputation of the partnership could not be said to have been in any degree due to his connection with it.
The partnership is in no case bound by any act of a partner after dissolution: ... (3) Where the partner has no authority to wind up partnership affairs; except by a transaction with one who:
(a) Had extended credit to the partnership prior to dissolution and had no knowledge or notice of his want of authority; ...
Nothing in this article shall affect the liability under Article 1825 of any person who, after dissolution, represents himself or consents to another representing him as a partner in a partnership engaged in carrying on business. (n)
• Partners may still bind the partnership to transactions even after dissolution if the transactions are with respect to the winding up or the completion of unfinished transactions.
• The transaction will also be binding if:
(1) Credit was extended before the dissolution and without knowledge or notice of the dissolution; or
(2) No credit was extended, but there was knowledge of the partnership's existence and none of the dissolution.
• The partnership is required to have the dissolution announced in newspapers of general circulation in its place(s) of operation. As long as it does this, the announcement is sufficient notice to all third persons. (If you don't read broadsheets, that's your fault, not the partnership's.)
• Liabilities shall be satisfied out of partnership assets alone if the partner being dealt with is a DORMANT partner.
• Upon dissolution, the partnership is no longer bound by transactions:
(1) When it becomes unlawful to carry on the business
(2) Upon the insolvency of a partner
(3) Upon an unauthorized winding up, except when:
(a) Credit was extended and there was no knowledge of the lack of authority
(b) No credit was extended and there was no knowledge of the dissolution because there was no advertisement of it
• In the case wherein "A" still represents himself as a partner even if the partnership has already been dissolved, he is a PARTNER BY ESTOPPEL.
Article 1835 ...
• Dissolution does not discharge the partnership and/or the partners from existing liabilities.
• EXAMPLE: Suppose A, B and C are in a partnership (X & Co.) and owe D a sum of P26,000.00. The total partnership assets amount to P20,000.00.
(1) What if C dies and his total separate assets are worth P2,000.00? The law says that C's individual property shall be used to clear the liabilities he incurred while he was still alive. In all cases, the PERSONAL CREDITOR has priority over that property.
(2) What if A resigns? Can he ask to be discharged from his obligation to pay D? A can only be discharged from his obligation to pay D his P2,000.00 share if it is agreed upon by all concerned parties. The agreement can be EXPRESS or IMPLIED, based on our interpretation of the law.
Article 1836 ...
• Who can wind up partnership affairs?
(1) Whoever is so assigned by the agreement
(2) The partners who did not wrongfully cause the dissolution
(3) The legal representative of the last surviving partner (who is not insolvent)
(4) The court, in a judicial winding up of partnership affairs
Article 1837 ... When dissolution is caused in contravention of the partnership agreement, the rights of the partners shall be as follows:
(1) Each partner who has not caused dissolution wrongfully shall have:
(a) All the rights specified in the first paragraph of this article, and
(b) The right, as against each partner who caused the dissolution wrongfully, to damages for breach of the agreement.
... but in ascertaining the value of the partner's interest, the value of the goodwill of the business shall not be considered. (n)
• Suppose there is a situation wherein A, B and C are in a partnership, X & Co., with total assets of P26,000.00 and liabilities to D amounting to P20,000.00. If the partnership is dissolved WITHOUT VIOLATION OF ANY AGREEMENT, the liability will naturally be cleared, because the partnership assets are more than enough, and the surplus will be given to each of the partners in proportion to their interest in the partnership or as per their agreement.
• What if the partnership was dissolved due to EXPULSION? Suppose that A was the one expelled from the partnership; then he can only get his share in the NET PROCEEDS of the surplus that would have originally been his.
• What if the partnership was dissolved due to a VIOLATION OF THE AGREEMENT? Determine the rights of the INNOCENT and GUILTY parties. Suppose that in this situation, A was the one guilty of violating the agreement. Then B and C will be allowed the following rights:
(1) To apply partnership assets to partnership liabilities and distribute the cash surplus among themselves
(2) To be indemnified for the damages that A has caused
(3) To continue the business up to the agreed term
(4) To possess partnership property
While A will have the following rights:
(1) If the partners decide not to continue the business:
(a) The right to claim his share in the cash surplus, but only the net proceeds of such, meaning the cash surplus less damages
(2) If they continue the business:
(a) To have his interest in the business ascertained
(b) Freedom from existing and future liabilities of the partnership
Article 1838 ...
• Considers the case wherein a partner was induced to join the partnership by means of fraud or misrepresentation.
• The victim can ask for the rescission of the contract of partnership and restitution (the return of all his contributions).
• He has a right of lien on, or retention of, the surplus for certain purposes.
• After the liabilities have been paid, he has the rights of a subrogated creditor, standing in place of the partnership creditors he paid off, to recover what he paid when he entered into the partnership.
• He is entitled to be indemnified for all debts and liabilities that he paid for during his time in the partnership.
Article 1839 ... Where a partner has become insolvent or his estate is insolvent, the claims against his separate property shall rank in the following order:
(a) Those owing to separate creditors
(b) Those owing to partnership creditors
(c) Those owing to partners by way of contribution (n)
• Considers the case of liquidation and the distribution of partnership assets.
• Liquidation is when all the assets of the partnership are converted to cash.
• Total assets include GOODWILL as well as the original CONTRIBUTIONS of the partners.
• Order of payment during liquidation:
(1) 3rd persons/outside creditors
(2) Partner creditors (partners who have claims against the partnership)
(3) The partners themselves (all partners):
(a) In accordance with the agreement, or
(b) In proportion to their contributions
Article 1840 In the following cases, creditors of the dissolved partnership are also creditors of the person or partnership continuing the business: ...
The liability of a third person becoming a partner in the partnership continuing the business, under this article, to the creditors of the dissolved partnership shall be satisfied out of partnership property only, unless there is a stipulation to the contrary.
• Explains the rights of the creditors in case of a partnership dissolution caused by membership changes, where the business is continued without liquidation.
• The membership changes include RETIREMENT, EXPULSION, DEATH or the ADMISSION of a new partner.
• Note that a creditor of the OLD partnership will still be a creditor of the NEW partnership if there is still an old/original partner in the NEW partnership (the debt will not be cleared or discharged).
• The creditor will continue to be a creditor of the remaining/new partnership in all cases, except when the rights are assigned to other people (no old partners remain), unless there is a promise by the new partners to pay the debt, or the creditor can set aside the rights of the new partners on the ground of fraud.
Article 1841 ... as provided by Article 1840, third paragraph. (n)
• Suppose that A retires but B and C continue the business without liquidation. What are the rights of A?
The rights of A are as follows:
(1) To have his interest ascertained as of the date of dissolution
(2) To collect his interest in the partnership as a creditor, plus interest, or the profits attributable to the use of his right
If A dies (instead of retiring) and the same situation occurs, then his legal representatives have the same rights as mentioned above.
Article 1842 ...
• Who can demand to know how much his interest in the partnership is, and from whom? Any partner concerned can demand an accounting of his interest. He can demand it from the SURVIVING, CONTINUING and WINDING-UP partners.
CHAPTER 4 – LIMITED PARTNERSHIP
Article 1843 ...
• Defines what a limited partnership is.
• It is sufficient that there is 1 general and 1 limited partner in a limited partnership.
• The reason for the existence of the limited partnership is to address the needs of those who wish to join a partnership without the risk of losing any personal property.
• Characteristics:
(1) It is formed by compliance with the statutory requirements of Article 1844
(2) The general partners control the partnership and are personally liable for partnership debts
(3) The limited partners contribute capital and are not personally liable for partnership debts
Article 1844 Two or more persons desiring to form a limited partnership shall:
(1) Sign and swear to a certificate, which shall state:
(a) The name of the partnership, adding thereto the word "Limited" ...
(l) The right, if given, of one or more of the limited partners to priority over other limited partners, as to contributions or as to compensation by way of income, and the nature of such priority
(m) The right, if given, of the remaining general partner or partners to continue the business on the death, retirement, civil interdiction, insanity or insolvency of a general partner ...
• Two requirements in forming a limited partnership:
(1) Sign and swear to a certificate containing the data mentioned in the article, (a) to (n)
(2) Have the certificate recorded with the SEC
• Can a limited partnership be formed orally? No. A limited partnership contract is not perfected by mere agreement, as it requires these formal proceedings.
• The partnership must SUBSTANTIALLY comply with the requirements.
• What if the partnership does not comply with the requirements? Will it be void? No, it will simply be treated as a GENERAL PARTNERSHIP.
• Why must the certificate be registered? Registration is the notice, to all 3rd persons who will be dealing with or are dealing with the partnership, that there are partners with limited liability.
• The presumption is that when a partnership deals with a 3rd person, the partnership is a GENERAL partnership.
Article 1845 The contributions of a limited partner may be cash or other property, but not services.
• Limited partners can only contribute cash or other property, not services, because one who contributes services becomes a GENERAL-INDUSTRIAL PARTNER.
• The contribution must be given immediately. If he has promised an additional contribution, it should be given on the date promised or agreed upon.
Article 1846 The surname of a limited partner shall not appear in the partnership name unless:
(1) It is also the surname of a general partner; or
(2) Prior to the time when the limited partner became such, the business had been carried on under a name in which his surname appeared.
• The surname of the limited partner should not appear in the partnership name, except if it is also the surname of a general partner or if, at the time of his admission, it was already being used.
• If the limited partner allows his surname to be used, then he shall be held liable as a general partner as to 3rd persons who extended credit without knowing that he was a limited partner.
• If the creditor has knowledge that he is a limited partner, then this rule shall not apply.
Article 1847
If the certificate contains a false statement, one who suffers loss by reliance on such statement may hold liable any party to the certificate who knew the statement to be false:
(1) At the time he signed the certificate, or
(2) Subsequently, but within a sufficient time before the statement was relied upon to enable him to cancel or amend the certificate, or to file a petition for its cancellation or amendment as provided in Article 1865.
• If there are false statements in the certificate and a 3rd person suffers loss due to them, he can hold liable all those who had knowledge of the false statement at the time the certificate was signed.
• The same applies if the partners concerned had sufficient time to have the certificate cancelled or amended but did not do so.
Article 1848 A limited partner shall not become liable as a general partner unless, in addition to the exercise of his rights and powers as a limited partner, he takes part in the control of the business.
• The limited partner who, aside from exercising his rights, participates in the management of the partnership becomes liable as a general partner.
Article 1849 After the formation of a limited partnership, additional limited partners may be admitted upon filing an amendment to the original certificate in accordance with the requirements of Article 1865.
• Suppose that in a limited partnership there are only 2 general partners and 1 limited partner. Can you add another limited partner? Yes: amend the certificate under Article 1865 and do so.
Article 1850 ... (1) Do any act in contravention of the certificate ...
• Refers to the powers, liabilities and limitations of general partners in a limited partnership.
• A general partner has the same rights, powers and limitations in a limited partnership as he would have in a general partnership.
• A general partner, without the written consent of ALL the limited partners, cannot:
(1) Do any act in contravention of the certificate
(2) Do any act which would make it impossible to carry on the ordinary business of the partnership
(3) Confess a judgment against the partnership
(4) Possess partnership property, or assign his rights in specific partnership property, for other than a partnership purpose
• If there are 100 general partners and 1 dies, the partnership is dissolved. However, this rule does not apply to limited partners: if there are 5 limited partners and 1 dies, the partnership still continues.
• A limited partnership will continue (not dissolve) even upon the death of a limited partner, as long as there is still ONE surviving limited partner in the partnership.
Article 1851 ... (3) Have dissolution and winding up by decree of court.
A limited partner shall have the right to receive a share of the profits or other compensation by way of income, and to the return of his contribution as provided in Articles 1856 and 1857.
• This Article is important as far as limited partners are concerned, as it shows them what rights they have.
• A limited partner is given the same basic rights as a general partner, that is:
(1) To require that the partnership books be kept at the principal place of business
(2) To inspect and copy the partnership books
(3) To demand true and full information regarding all matters concerning the partnership
(4) To demand a legal winding up or dissolution
(5) To share in the profits or other compensation by way of income, and to the return of his contribution
Article 1852 ...
• Refers to a failure to create a limited partnership.
• Suppose A, B and C form a limited partnership, with C being the limited partner with a contribution of P20,000.00. The certificate that they sign says that C is a general partner. What, then, if C, believing himself to be a limited partner, exercises his rights as such? C cannot be held liable as a general partner if, upon his realization of the error, he promptly renounces his interest in the profits of the business, except:
(1) If he participates in the management of the partnership
(2) If his surname is used in the partnership name
• Consider the situation above, but this time, C's name is not mentioned at all. What happens then? In that case, there is no limited partnership, because there is no limited partner mentioned.
• The law anticipates a situation wherein a person is a limited partner but he is not named as such, or not named at all, in the certificate.
Article 1853 ...
• A partner can be a limited and a general partner at the same time, provided that this fact is STATED IN THE CERTIFICATE that he signs.
• Who is he as to 3rd persons, then? He is a general partner as to 3rd persons, but as among the partners themselves he is treated as a limited partner with regard to his contribution.
Article 1854 A limited partner also may loan money to and transact other business with the partnership and, unless he is also a general partner, receive on account of resulting claims against the partnership, with general creditors, a pro rata share of the assets. No limited partner shall in respect to any such claim:
(1) Receive or hold as collateral security any partnership property, or
(2) Receive from a general partner or the partnership any payment, conveyance, or release from liability, if at the time the assets of the partnership are not sufficient to discharge partnership liabilities to persons not claiming as general or limited partners.
The receiving of collateral security, or a payment, conveyance or release in violation of the foregoing provisions, is a fraud on the creditors of the partnership.
• Provides that a limited partner can extend credit to, or transact other business with, the partnership that he is part of.
• On such claims he shares in the partnership assets pro rata with the general creditors, but he cannot take partnership property as collateral.
• Suppose X & Co. owes D a sum of P20,000.00 and owes C, a limited partner, P20,000.00. The total assets of the partnership are P50,000.00. How shall these be settled? Both C and D can simultaneously collect from the partnership, as the partnership assets are sufficient to cover BOTH. However, if the partnership assets are only P20,000.00, C cannot share in them, because doing so would prejudice D's claim.
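The Article 1854 example above is again simple arithmetic: the limited partner may collect on his claim alongside the general creditors only when the assets cover the outside claims. A minimal sketch with the example's figures; the function name is my own:

```python
# Article 1854 sketch: C (a limited partner who is also a creditor) may share
# in partnership assets only if outside creditors are not prejudiced.
# Illustrative only; figures from the example above.

def limited_partner_recovery(assets, outside_claims, lp_claim):
    if assets >= outside_claims + lp_claim:
        return lp_claim                      # both D and C can collect in full
    # outside creditors (D) are paid first; C takes only what is left
    return max(0, assets - outside_claims)

print(limited_partner_recovery(50_000, 20_000, 20_000))  # 20000 (both paid)
print(limited_partner_recovery(20_000, 20_000, 20_000))  # 0 (C cannot share)
```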
Article 1855 ...
• Suppose that there are three limited partners. These partners can agree (because there is more than one of them) that one of them shall have priority over the others, provided that this IS STATED IN THE CERTIFICATE.
Article 1856 ...
• The limited partner is entitled to receive a share of the profits or other compensation by way of income, provided that the partnership assets remain sufficient to cover the liabilities after such payment.
• To determine the total liabilities, do not deduct contributed capital.
• Liabilities owed to general partners are not considered part of the partnership's total liabilities for this purpose.
• The limited partner's right to share is measured against the total liabilities, which must therefore be known.
• Suppose that A, B and C are in a partnership wherein C is the limited partner, and the total assets are P50,000.00. They owe D a sum of P10,000.00, C P15,000.00 and A P50,000.00. Is C still entitled to share in the surplus after clearing the liabilities? Yes, because the total liabilities in this case are only P25,000.00 (A's claim, being that of a general partner, is excluded) and the assets are still sufficient.
Article 1857 A limited partner shall not receive from a general partner or out of partnership property any part of his contribution until: ...
(3) The certificate is cancelled or so amended as to set forth the withdrawal or reduction.
Subject to the provisions of the first paragraph, a limited partner may rightfully demand the return of his contribution:
(1) On the dissolution of the partnership; or
(2) When the date specified in the certificate for its return has arrived; or
(3) After he has given six months' notice in writing to all other members, if no time is specified in the certificate either for the return of the contribution or for the dissolution of the partnership. ...
A limited partner may have the partnership dissolved and its affairs wound up when:
(1) He rightfully but unsuccessfully demands the return of his contribution; or
(2) The other liabilities of the partnership have not been paid, or the partnership property is insufficient for their payment as required by the first paragraph, No. 1, and the limited partner would otherwise be entitled to the return of his contribution.
• What are the requisites for the limited partner to be entitled to the return of his contribution?
(1) When, after deducting partnership liabilities, the partnership assets are sufficient for the return
(2) If he has the consent of all the partners, unless the return can rightfully be demanded
(3) The certificate is cancelled or amended to reflect the return of his contribution
• When may a limited partner rightfully demand the return of his contribution?
(1) Upon dissolution
(2) Upon arrival of the date specified for the return of his contribution
(3) After he has given 6 months' notice, IN WRITING, when no date of return or dissolution was specified
• The limited partner is only entitled to the return of his contribution IN CASH, except:
(1) If something else was agreed upon
(2) If he has the consent of all the partners
• When can a limited partner ask for dissolution?
(1) When he rightfully but unsuccessfully demanded the return of his contribution
(2) When he is entitled to the return of his contribution and the certificate has been amended accordingly, but the partnership assets are insufficient to pay off the partnership creditors
Article 1858 A limited partner is liable to the partnership:
(1) For the difference between his contribution as actually made and that stated in the certificate as having been made
(2) For any unpaid contribution which he agreed in the certificate to make in the future, at the time and on the conditions stated in the certificate
A limited partner holds as trustee for the partnership:
(1) Specific property, stated in the certificate as contributed by him, but which was not contributed or which has been wrongfully returned
(2) Money or other property wrongfully paid or conveyed to him on account of his contribution
The liabilities of a limited partner as set forth in this article can be waived or compromised only by the consent of all the members ...
- Suppose A promises to contribute P20,000.00 but only pays P15,000.00. What is his obligation to the partnership? A must pay the P5,000.00 difference NOW.
- Suppose C, the limited partner, promises to contribute P20,000.00 more. What should be done? It should be paid on the date he promised to pay it.
- When can a limited partner be held as trustee?
(1) When he promises specific things but does not follow through with the promised delivery;
(2) In circumstances of wrongful returns;
(3) In cases of money and/or property that is wrongfully conveyed.
- Can the partnership waive the difference in contributions (e.g., the first situation)? Yes, as long as it will not affect creditors who had extended credit before the waiver.
- Can the partnership reclaim a return if it is needed (e.g., C's contribution was already returned, but the partnership needs it to finish paying off D, a creditor)? Yes, as long as the claim came into existence before the return of the contribution.

Article 1859
A limited partner's interest is assignable.
- The interest of a limited partner can be assigned. His interest is his share in the profits, other compensation by way of income, or the return of his contribution.
- A substituted limited partner is a person admitted to all the rights of a limited partner who has died or has assigned his interest.
- What if the person is not qualified to be a substituted limited partner? Then he shall remain an assignee, with the following rights and limitations:
(1) He may receive the share in profits, other compensation by way of income, or return of contribution;
(2) He cannot demand information on partnership activities nor inspect the partnership books.
- When will the assignee become a substituted limited partner?
(1) If consent from all other partners was given; or
(2) If the limited partner is empowered by the certificate to constitute a substituted limited partner, and the certificate is amended under Article 1865.
- What are the rights of a substituted limited partner? He has all the powers, limitations and liabilities of his assignor, except those liabilities of which he was ignorant at the time he became a limited partner and which could not be ascertained from the certificate.
- What about the assignor? The assignor is still liable for false statements and for the claims under Articles 1847 and 1858 arising before the admission of the substituted limited partner.

Article 1860
The retirement, death, insolvency, insanity or civil interdiction of a general partner dissolves the partnership, unless the business is continued by the remaining general partners:
(1) Under a right so to do stated in the certificate; or
(2) With the consent of all the members.
- Again, this does not apply to limited partners: as long as there is ONE limited partner still living, the partnership may be continued.
- General partners can only continue the business if:
(1) The right was stated in the certificate; or
(2) All members consent to it.

Article 1861
On the death of a limited partner, his executor or administrator shall have the rights of a limited partner for the purpose of settling his estate, and such power as the deceased had to constitute his assignee a substituted limited partner. The estate of a deceased limited partner shall be liable for all his liabilities as a limited partner.
- The executor/administrator has the power to settle the deceased partner's estate and to constitute his assignee a substituted limited partner, if the limited partner originally had, or was allowed, such power.
- The estate of a limited partner will pay for all his liabilities as a limited partner.

Article 1862
- Similar to Article 1814 for general partnerships.
- If a 3rd person files a case against the limited partners for non-payment of or non-compliance with their contract, he can ask for the partners' interests to be attached (charged).
- The attached interest may be redeemed using the separate property of the general partners, but not partnership property, UNLESS all partners have consented to such.

Article 1863
In settling accounts after dissolution, the liabilities of the partnership shall be entitled to payment in the following order:
(1) Those to creditors, in the order of priority as provided by law, except those to limited partners on account of their contributions, and to general partners;
(2) Those to limited partners in respect to their share of the profits and other compensation by way of income on their contributions;
(3) Those to limited partners in respect to the capital of their contributions;
(4) Those to general partners other than for capital and profits;
(5) Those to general partners in respect to profits;
(6) Those to general partners in respect to capital.

- Who has priority over the distribution of assets in a limited partnership?
(1) Creditors, including limited partners who have a claim against the partnership;
(2) Limited partners' share in profits;
(3) Limited partners' return of capital contribution;
(4) General partners who have claims against the partnership;
(5) General partners' share in profits;
(6) General partners' return of capital contribution.
- The difference between this and general partnerships is that in a general partnership, capital contributions are returned BEFORE profits from the surplus are shared.

Article 1864
The certificate shall be cancelled when the partnership is dissolved or all limited partners cease to be such. A certificate shall be amended when, among other grounds:
(10) The members desire to make a change in any other statement in the certificate in order that it shall accurately represent the agreement among them.

- When should a certificate be cancelled?
(1) Upon DISSOLUTION; or
(2) When ALL limited partners cease to be such.
- When should the certificate be amended? In all cases other than those that cause the certificate to be cancelled.

Article 1865
The writing to amend a certificate shall:
(1) Conform to the requirements of Article 1844 as far as necessary to set forth clearly the change in the certificate which it is desired to make; and
(2) Be signed and sworn to by all members, and an amendment substituting a limited partner or adding a limited or general partner shall be signed also by the member to be substituted or added.

A certificate is amended or cancelled when there is filed for record in the Office of the Securities and Exchange Commission, where the certificate is recorded:
(1) A writing in accordance with the provisions of the first or second paragraph; or
(2) A certified copy of the order of court in accordance with the provisions of the fourth paragraph.

After the certificate is duly amended in accordance with this article, the amended certificate shall thereafter be for all purposes the certificate provided for in this Chapter.

- What are the requisites for certificates to be amended or cancelled?
(1) It must be in writing;
(2) It must be signed AND sworn to by ALL concerned parties;
(3) It must be registered with the SEC.

Article 1866
A contributor, unless he is a general partner, is not a proper party to proceedings by or against a partnership, except where the object is to enforce a limited partner's right against, or liability to, the partnership.
- A limited partner is a mere contributor; practically, he is a stranger to the partnership. This is because he has no participation in management and control and is liable only to the partnership, not to 3rd persons; and if he is sued as a general partner, he can file a counterclaim for wrongful inclusion.
- 2 exceptions to this rule:
(1) To enforce his right against the partnership;
(2) If he refuses to restore his contribution when the partnership assets are not sufficient to pay creditors.

Article 1867
- This is a transitory law.
- Articles 145 to 150 of the Code of Commerce used to govern limited partnerships.
- What happens to a limited partnership existing before the Civil Code? The partnership must first comply with the following requirements before it can become a limited partnership under the Civil Code:
(1) State the amount of each contribution and the time it was contributed; and
(2) After paying off all liabilities, the total assets of the partnership must be greater than the contributions of all limited partners.
Otherwise, it will continue to be governed by the Code of Commerce.
https://www.scribd.com/doc/98397043/78909484-PARTCOR
Map (11:32) with Pasan Premaratne
The map function allows us to apply a transformation to each element in an array. In this video let's explore how the map function works and how we can build it ourselves.
Code Snippets
import Foundation

struct Formatter {
    static let formatter = NSDateFormatter()

    static func dateFromString(dateString: String) -> NSDate? {
        formatter.dateFormat = "d MMM yyyy"
        return formatter.dateFromString(dateString)
    }
}

let dateStrings = ["10 Oct 1988", "11 Jan 1947", "28 Mar 2002"]
- 0:00
[MUSIC]
- 0:04
Over the next series of videos, we're going to build up some of the standard
- 0:08
library functions from scratch and
- 0:10
really see how closures play an important role in building these functions.
- 0:14
Like before I've added a new playground page.
- 0:17
So if you download the starter files, just open up the project navigator and
- 0:21
go to standard library functions.
- 0:24
With all the code we've written so far across different courses,
- 0:28
we've iterated over quite a few arrays using for loops.
- 0:32
Broadly speaking,
- 0:33
there are two primary reasons you iterate over a collection using a loop.
- 0:37
One, to use the elements in the array for some computation.
- 0:41
Let’s say you have an array of names.
- 0:43
And for every name in the array.
- 0:45
You want to use it to print out a string of some sort.
- 0:49
The second case is when you want to transform the elements in the array.
- 0:53
Let’s say you have an array of numbers like you just did and
- 0:57
we want to double each value in this array.
- 0:59
So we could do it like this.
- 1:01
So say, let values = we'll add some numbers in here.
- 1:10
Now, we want another array containing the values in this array doubled,
- 1:15
so I can create a newArray, that's a variable array.
- 1:21
Initialize it to an empty array.
- 1:25
Then using a for loop, we can loop over all the values in the original array.
- 1:31
And with each value we can say newArray.append(number * 2).
- 1:39
And, if we look at the newArray,
- 1:41
well now we have another array with all the values doubled.
- 1:45
This however, is an imperative approach.
- 1:48
What do I mean by imperative?
- 1:50
With imperative style programming, you tell the compiler
- 1:54
what you want to execute line by line for each step as we just did.
- 1:59
The alternative to this is known as the declarative approach.
- 2:03
in declarative programming you write code that describes your end result.
- 2:09
You don't write the intermediate steps for how you want to get there.
- 2:13
The code we just wrote applying a transformation to each member in the array
- 2:18
is exactly what Map does.
- 2:20
The difference between the two however is that Map is a declarative approach.
- 2:25
So in the following example.
- 2:26
Let me get rid of this.
- 2:28
I'll say, let tripledValues =
- 2:34
values.map { $0 * 3 }.
- 2:40
Now, all I'm saying here is that I want to multiply each value in the array by 3,
- 2:46
and I get a newArray back.
- 2:49
This is the declarative style.
- 2:50
I don't write out what I need to do, how I need to iterate or loop over the array.
- 2:55
None of that.
- 2:56
Using this approach, the two main differences
- 2:59
that are glaringly obvious in the imperative approach up over here.
- 3:04
My final array, newArray is a mutable array.
- 3:07
And remember we don't like mutability.
- 3:09
It can lead to unexpected results.
- 3:12
The declarative approach on the other hand leads to an immutable value.
- 3:17
You may have often heard of functional programming.
- 3:20
Functional programming is a declarative style of programming.
- 3:24
Where we use many small functions that return immutable data types.
- 3:28
We abstract away the code that indicates how we want to do things and
- 3:32
instead we say what we want to do.
- 3:35
Map is a function that is very common in functional programming
- 3:39
regardless of language.
- 3:41
Wherever you see a Map function it should be immediately clear that this means
- 3:45
one thing.
- 3:47
You're applying a transformation function to an array of values and
- 3:51
getting an immutable newArray back that does not mutate your original data set,
- 3:57
so what does this have to do with closures?
- 3:59
Well as you can see the Map function takes a transformation function
- 4:03
in the form of a closure expression.
- 4:06
And then applies that closure body to each element.
- 4:10
Along with Map there are a few other functions in swift if that allow you to
- 4:14
program in a more declarative style with less mutation.
- 4:19
To truly understand how they work let's build up each function from the ground up.
- 4:24
In doing this you'll gain a much better understanding of how closures are used.
- 4:29
So, let's add a comment here, and I'll say Map.
- 4:33
We're going to declare the Map function as part of the Array type, so
- 4:37
let's do this in an extension.
- 4:41
When we built the apply function earlier on,
- 4:44
we had a parameter that was essentially the transformation function except that
- 4:49
it was hard-coded to accept only integer values.
- 4:53
The Map function doesn't work that way.
- 4:55
Instead, the signature of the transformation function depends on
- 4:59
the array that you call map on. If you call Map on an array of integers
- 5:05
it expects a transformation function that takes an integer as a parameter.
- 5:10
Same thing for strings.
- 5:12
The return type well that depends on the return type of the actual
- 5:15
transformation function.
- 5:17
So if your transformation function returns a boolean value
- 5:21
that's what the return type of Map is.
- 5:24
So how do we do this?
- 5:25
How do we write this function?
- 5:27
We do that through the power of generics.
- 5:30
Earlier in a previous course, you learned how you can create generic types and
- 5:34
functions that work with a range of values.
- 5:38
Map is no different.
- 5:39
So we'll say func map and the Map function is going to be generic over type T.
- 5:46
It takes an argument, a transformation function.
- 5:51
So we'll call it transform.
- 5:54
And the signature or the type of this function is element to T.
- 6:01
Element is a placeholder type for the type contained inside the array.
- 6:05
So if the Array is an Array of ints element over here is replaced by int.
- 6:12
Now T is the return type of the transformation function.
- 6:17
If we pass in a transformation function that takes an integer value and
- 6:21
returns a string, T is a string.
- 6:24
Finally, the return type of the map function is an array of T.
- 6:32
Inside the body of the map function is the code that we're used to.
- 6:35
So first, we create an empty array that is the same type as our return type.
- 6:40
So we'll say var result is an empty Array of T.
- 6:47
Next, we're going to iterate over the elements in the array
- 6:50
that we're calling map on and will apply a transformation function to each value.
- 6:55
So say for x in self,
- 6:58
we're going to append to result,
- 7:03
the result of calling transform on x.
- 7:09
So here we iterate over the original Array for every value inside the Array,
- 7:13
we call transform and pass in that value.
- 7:17
This transform function returns a type T, so
- 7:21
we append that value to result, which is the array of type T that we created.
- 7:28
And then after the for loop, we can say return result.
- 7:33
Remember, the goal of a declarative approach is to abstract away the how.
- 7:39
Here our map function is taking care of iterating over the array.
- 7:43
And applying the transformation function to each value, the how.
- 7:48
Our goal in using map is to provide the what, that is the transformation function.
- 7:54
So now, let's try using this.
- 7:56
So right below here, We'll say let integerValues = and
- 8:01
I'm going to create an array strings.
- 8:05
So will say ["1", "2",
- 8:10
"3", "a", "4"] and
- 8:15
on this I'm going to call map.
- 8:26
That is $0.
- 8:29
In this example I'm Iterating over an Array of strings.
- 8:33
The transformation function accepts a string and
- 8:36
tries to convert that to an integer.
- 8:39
Now why does it accept a string?
- 8:41
Well, the transform function's type is Element to T.
- 8:46
Element refers to the type of the element inside the array.
- 8:51
So over here, these are strings.
- 8:53
So Element is a string.
- 8:55
Now, the function that we're passing in in form of a closure expression,
- 9:00
all the body does is it attempts to convert a string to an integer.
- 9:04
Now the return type of this operation is an optional integer.
- 9:08
So T now is an optional int.
- 9:11
Because the return type of this transformation function is an optional
- 9:15
int, the return type of map is also now an array of optional ints,
- 9:21
as you can see here.
- 9:23
Since the map function is generic the return type is determined
- 9:27
by the return type of the transformation.
- 9:30
So let's look at one more example.
- 9:32
Now I'm going to copy paste some helper code in here.
- 9:37
All I've done here is create a tiny struct with a static method,
- 9:40
that accepts a string containing a date in a particular format, and
- 9:44
returns an optional instance of NSDate.
- 9:48
NSDate is a foundation class to work with date objects.
- 9:52
If we can convert the string to a date, we get an instance of an NSDate.
- 9:56
Otherwise we get nil. So as a next step, I'm going to copy-paste another line of code.
- 10:02
Here we have an array containing some strings that are date objects or
- 10:07
they're just date strings.
- 10:09
So what we can do is map over this array.
- 10:11
So it will say let dates.
- 10:14
And we can transform each string value or
- 10:16
attempt to transform each string value to an instance of NSDate.
- 10:20
So we'll say dateStrings.map.
- 10:22
And in the closure we'll use the static function from my formatter struct.
- 10:29
So we'll say Formatter.dateFromString($0).
- 10:36
And now if I look at dates, you'll see that we have actual date objects.
- 10:45
Map is a super useful function that combines the power of generics and
- 10:49
closures to allow us to transform arrays.
- 10:53
You might be thinking, wow.
- 10:54
Now that I know how to use map, why should I ever use a for loop?
- 10:59
These are two distinct tools, and you shouldn't be confused.
- 11:03
You only use map when you want to transform an array
- 11:06
by applying a transformation function to the individual elements.
- 11:10
For loops, on the other hand, are used when you want to perform side effects.
- 11:15
What does this mean?
- 11:17
Well if the code you want to execute on looping over the array mutates an object
- 11:21
or modifies the state elsewhere in your code, you are performing side effects, and
- 11:27
a for loop is the tool you need.
- 11:29
Okay, enough about map.
- 11:30
On to the next thing.
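The generic map built in the video is not Swift-specific. As a rough cross-language sketch (my own illustration, not part of the course code), the same shape in Python looks like this:

```python
def my_map(values, transform):
    """Return a new list built by applying `transform` to each element."""
    result = []
    for x in values:
        result.append(transform(x))
    return result

# The transformation decides the output type, just like the generic T above:
tripled = my_map([1, 2, 3], lambda n: n * 3)
print(tripled)  # [3, 6, 9]
```

As in the Swift version, the caller supplies only the "what" (the transformation function), while the "how" (the iteration) stays hidden inside the function.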
https://teamtreehouse.com/library/closures-in-swift-2/building-standard-library-functions/map
Manoj Srivastava wrote:
>.

namespace.txt in debconf-doc gives some general rules and documents a few of them. I will be glad to add more. Or this could be moved into a file included in policy and maintained that way.

>?

Shared templates should be identical and must be duplicated in all packages that use them. The most recent text debconf sees will be used.

>.

There is no need to ask questions in the postinst. It works like this:

- preinst a asks shared/foo: unseen, so displayed
- preinst b asks shared/foo: seen, so question skipped
...
- postinst a acts on shared/foo
- postinst b acts on shared/foo
...

--
see shy jo
Attachment:
pgpEBCTHQr0f2.pgp
Description: PGP signature
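The preinst/postinst ordering described above can be sketched as a toy simulation of the "seen" flag (illustration only; this is not how debconf is implemented internally):

```python
# Toy model of debconf's "seen" flag for a shared template.
seen = {}
answers = {}

def ask(question, default):
    """Display the question only the first time any package's preinst asks it."""
    if not seen.get(question):
        seen[question] = True
        answers[question] = default  # pretend the user accepted the default
        return "displayed"
    return "skipped"

print(ask("shared/foo", "yes"))  # preinst a: unseen, so displayed
print(ask("shared/foo", "yes"))  # preinst b: seen, so skipped
print(answers["shared/foo"])     # both postinsts act on the same stored answer
```

Both packages end up acting on one shared answer, which is why the duplicated templates must be identical.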
https://lists.debian.org/debian-devel/2003/07/msg00782.html
Aug 24, 2007 09:00 AM
Threading in Ruby has been a topic of discussion for a long time. Whether future Ruby versions (1.9 and beyond) will use kernel threads instead of userspace threads is still to be decided. Recently, another approach to this set of problems has arrived in Ruby. David Flanagan points out a new feature in the Ruby 1.9 branch called Fibers:
Fiber.yield
This looks like a slightly overcomplicated version of yield/generators in Python. Not really sure what the difference is on first look.
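For readers unfamiliar with the Python feature being referenced, a generator-based Fibonacci stream (my own sketch, not code from the article) looks like this:

```python
def fib():
    """Yield Fibonacci numbers one at a time, suspending between values."""
    x, y = 0, 1
    while True:
        x, y = y, x + y
        yield x

gen = fib()
print([next(gen) for _ in range(5)])  # [1, 1, 2, 3, 5]
```

Like a Fiber, the generator keeps its local state (x, y) alive across yields, and the consumer pulls values on demand.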
Can someone explain why (except for a bit of syntax sugar) this is different from just creating and using an object with persistent member values. For example, something like:
class Fibber
  def initialize
    @x, @y = 0, 1
  end

  def fib
    @x, @y = @y, @x + @y
    @x
  end
end

fib = Fibber.new
20.times { puts fib.fib }
This feels much more like Object-Oriented "business as usual" than any form of concurrency.
http://www.infoq.com/news/2007/08/ruby-1-9-fibers
Release Notes: May 2021
2021-05-13
New Protocol Options for Scheduled Tests
We have added a "Prefer TCP" option to Endpoint Agent scheduled tests that will ensure the test will fall back to ICMP + TCP Connect if no driver is detected, so that tests are still performed. This option is to avoid situations where Endpoint Agents are not included in TCP tests due to either the Windows driver missing, or agents running versions earlier than 1.75.0.
In addition, the
TCP Connect
checkbox has been removed, and replaced by additional
Protocol
options:
ICMP
ICMP + TCP Connect
Prefer TCP (fallback to ICMP)
TCP (Driver required)
New Onboarding for Catalyst Switching Customers
Starting this release, all Catalyst switching customers entitled to ThousandEyes will see a new onboarding workflow when visiting their newly created ThousandEyes organization. The workflow will guide them through creating agents, tests, and surface common use cases.
API Updates
The alert rules write API now supports the following alert rule metrics: certificate validity, TLS version, and weak cipher metrics.
New Cloud Agent
A new Cloud Agent has been added to Johannesburg, South Africa (Azure southafricanorth). For a full list of Cloud Agents, see
ThousandEyes Cloud Agent Locations
.
2021-05-11 Bug Fixes
An issue was found where some users were receiving emails on cleared alert events even when they deselected the option to receive cleared alert emails in the UI. This has been resolved.
An issue was found where gaps in data were incorrectly displayed on the timeline. This has been resolved.
An issue was found in the v6 reports API where incorrect results were returned for certain 'groupValues' and 'groupProperty'. This has been corrected.
An issue was fixed where changes to a filter on a report or dashboard widget weren't persisting, and reverted back to the previous filter. This has been corrected.
API requests to
/v6/endpoint-tests/
with the
groupId
missing now return a 400 rather than a 500 error. This is more explanatory, since the error is with the input rather than the server.
An issue was found when an Endpoint Agent was uninstalled and later re-installed, where the
Last Contact
field would not accurately reflect the new Endpoint Agent installation date/last contact time. This has been corrected.
An issue was found where newly created organizations were not able to use the AppDynamics Dash Studio integration due to a 0 API rate limit. This has been corrected and fixed for all customers.
2021-05-18
Crypto Node.js Module in Transaction Tests
As of browserbot version 1.163.0 and ThousandEyes Recorder IDE version 1.1.0, the transaction test scripting sandbox now supports importing functionality from
the Node.js crypto module
.
One way to leverage this new functionality is to compute an HMAC:
import { credentials } from 'thousandeyes';
import { createHmac } from 'crypto';

const dataToSign = 'hello, world!';
const hmac = createHmac('sha256', credentials.get('some-secret-key')).update(dataToSign).digest('base64');
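For cross-checking a signature produced this way outside the agent, the same HMAC construction is available in Python's standard library (the key below is a placeholder, not a real credential):

```python
import base64
import hashlib
import hmac

data_to_sign = b"hello, world!"
secret_key = b"some-secret-key"  # placeholder; a real value would come from the credential store

# HMAC-SHA256 over the payload, base64-encoded, matching the Node.js snippet above
digest = hmac.new(secret_key, data_to_sign, hashlib.sha256).digest()
signature = base64.b64encode(digest).decode()
print(signature)  # a 44-character base64 string (32-byte SHA-256 digest)
```

Both sides must agree on the exact key bytes and payload for the signatures to match.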
2021-05-26
New Cloud Agent
New IPv4 and IPv6 Cloud Agents have been added to Eindhoven, Netherlands (Ziggo). For a full list of Cloud Agents, see
ThousandEyes Cloud Agent Locations
.
Minor Enhancements
Endpoint Agent
Previously, the VPN node in a path visualization merged with the first hop of the overlay. This caused some confusion with customers. We've updated the view so that the VPN node will stand alone in the path, and the first hop of the overlay will appear as a regular node.
2021-05-26 Bug Fixes
An issue was found where, for some disabled alerts, the alert would still fire if the conditions were met. This has been resolved.
An issue was found where location alerts were failing to persist due to the metrics being reported as empty. This led to location alerts not being cleared when they should have been, and has been resolved.
An issue was found where the SSL certificate expiry alert rule was interpreted incorrectly, and caused an alert to fire when no certificates were expiring. This has been fixed.
An issue was found where the API response to
/v6/tests/<test id>.json
did not return all account IDs and their associated account group names if the user was associated with multiple organizations. This has been fixed.
An Internet Insights issue was found where it appeared that some outages shown in the
Views
timeline were not being displayed in the
Overview
dashboard. This was because the
Overview
and
Views
timeline did not previously refresh at the same interval, resulting in windows where they would be briefly out of sync. The default refresh interval for the
Overview
has been lowered to match the
Views
timeline, and this resolves the issue.
An issue was found where updating the test interval to a higher frequency (for example, changing from 5 minutes to 2 minutes) would cause the timeline to show gaps in data. This has been resolved.
An issue was found where the alerting system was incorrectly interpreting the alert configuration, resulting in alerts firing when the conditions were not met. Alert configuration validation has been improved to resolve this issue.
An issue was found where the
Availability
metric in reports and dashboards was showing as milliseconds instead of percentage. This has been fixed.
An issue was found where, in some cases, the system was not properly appending agent details to alerts. This resulted in the UI incorrectly showing 0 triggered agents against an alert, and has been fixed.
An issue was found where the global
Alert Start Time
for some alerts was showing as significantly before the
Alert Start Time
for the location alerts. This was caused by logic in the platform that carried the first instance of a condition violation for an agent, rather than the most recent violation. This logic has been updated, and the issue is now fixed.
Endpoint Agent
An issue was found where, if ICMP was blocked and preventing end-to-end metrics from being measured, a 100% packet loss alert would be triggered. This issue has been resolved.
An issue was found where the number of agents shown to be running a scheduled test could differ between the
Timeline
and
Table
views. This occurred in instances where there were multiple data points for the same machine in the same round. The way the agent count is done has been adjusted, and only unique machines in a round will now be counted.
https://docs.thousandeyes.com/archived-release-notes/2021/2021-05-release-notes
The following works fro me as an external procedure call:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <ctype.h>
int mount_space(char *fs)
{ FILE *c = NULL;
char bfr[200];
char dv[100];
char dmy[6][80];
int pv = 0;
long int rv = -2;
sprintf(dv, "/usr/bin/bdf %s\n", fs);
c = popen(dv, "r");
if(!c) return(-1);
while(fgets(bfr, sizeof(bfr),c))
{ sscanf(bfr, "%s %s %s %s %s %s",
&dmy[0], &dmy[1], &dmy[2], &dmy[3], &dmy[4], &dmy[5]); if(!strncmp(dmy[5], fs, min(strlen(dmy[5]), strlen(fs)))) rv = atol(dmy[3]); if(!strncmp(dmy[4], fs, min(strlen(dmy[4]), strlen(fs)))) rv = atol(dmy[2]);
It returns the available disk space, in KBYTES. Add to that the bytes from dba_data_files for the same mount point & you've got total available disk space. To build it run:
make -f $ORACLE_HOME/rdbms/demo/demo_rdbms.mk extproc_no_callback SHARED_LIBNAME=<whatever you want>.so OBJS=<same as before>.o
Dick Goulet
Senior Oracle DBA
Oracle Certified 8i DBA
-----Original Message-----
From: Kline.Michael [mailto:Michael.Kline_at_SunTrust.com] Sent: Thursday, June 03, 2004 11:34 AM
To: oracle-l_at_freelists.org
Subject: Getting disk info into oracle
OS = HP/UX
Oracle = 8.1.3.4
Anyone got a working script that may allow me to strip off from a "bdf" the disk info?
What I'd really like to be able to do is capture the disk size that normally won't change, and then the used and free amount of disk. I'd like to bring this info in with a date stamp.
I'll take that and make a work table out of it.
Then I'd like to take that and merge the "auto extend" stuff with it and keep it for historical purposes.
There is a small chance that perhaps we've got someone already doing this or something very close.
They *MIGHT* have Perl here, but I'll have to check.
Some of these databases are growing by 1-2TB per year, and it would help with the keeping up of which disks are getting full.
Thanks.
Michael Kline
Database Administration
Outside 804.261.9446
Cell 804.744.1545
michael.kline_at_suntrust.com
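As a rough sketch of a script-based alternative (my own illustration with a made-up field layout; on HP-UX, bdf prints roughly the same columns as df -k), a Python script could capture the per-mount numbers with a date stamp before loading them into a work table:

```python
from datetime import datetime

def parse_df(output):
    """Parse bdf / `df -k` style output into per-mount usage rows with a timestamp."""
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header line
        fields = line.split()
        if len(fields) < 6:
            continue  # real bdf wraps long device names; a fuller parser would join lines
        fs, size_kb, used_kb, avail_kb, _pct, mount = fields[:6]
        rows.append({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "filesystem": fs,
            "mount": mount,
            "size_kb": int(size_kb),
            "used_kb": int(used_kb),
            "avail_kb": int(avail_kb),
        })
    return rows

# Hypothetical sample output, for demonstration only:
sample = """Filesystem 1K-blocks Used Available Use% Mounted_on
/dev/vg00/lvol3 1048576 524288 524288 50% /
/dev/vg00/lvol4 2097152 1048576 1048576 50% /oracle"""

for row in parse_df(sample):
    print(row["mount"], row["avail_kb"])
```

Each row carries a date stamp, so appending one run per day to a history table gives the growth trend the original poster is after.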
http://www.orafaq.com/maillist/oracle-l/2004/06/03/0222.htm
When working on a classification problem, we need to use various metrics like precision, recall, f1-score, and support to check how efficiently our model is working.
For this we need to compute their scores via a classification report and a confusion matrix. So in this recipe we will learn how to generate a classification report and confusion matrix in Python.
This data science python source code does the following:
1. Imports necessary libraries and dataset from sklearn
2. performs train test split on the dataset
3. Applies DecisionTreeClassifier model for prediction
4. Prepares classification report for the output
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
We have imported datasets to use an inbuilt dataset, along with DecisionTreeClassifier, train_test_split, classification_report and confusion_matrix.
Here we have used datasets to load the inbuilt wine dataset and we have created objects X and y to store the data and the target value respectively.
wine = datasets.load_wine()
X = wine.data
y = wine.target
We are creating a list of target names, and we are using train_test_split to split the data and target into training and test sets, holding out 30% of the samples for testing.
class_names = wine.target_names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
Here we are using DecisionTreeClassifier to predict as a classification model and training it on the train data. After that predicting the output of test data.
classifier_tree = DecisionTreeClassifier()
y_predict = classifier_tree.fit(X_train, y_train).predict(X_test)
Let us first have a look at the parameters of classification_report: it takes the true values, the predicted values, and (optionally) target_names to label the classes.
For confusion_matrix there are two parameters: the true (test) and predicted values of the data.
print(classification_report(y_test, y_predict, target_names=class_names))
print(confusion_matrix(y_test, y_predict))
So the output comes as
              precision    recall  f1-score   support

     class_0       0.95      0.95      0.95        19
     class_1       0.95      0.95      0.95        21
     class_2       0.95      0.95      0.95        19

   micro avg       0.95      0.95      0.95        59
   macro avg       0.95      0.95      0.95        59
weighted avg       0.95      0.95      0.95        59

[[18  1  0]
 [ 0 20  1]
 [ 1  0 18]]
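The diagonal of the confusion matrix holds the correctly classified counts, so overall accuracy can be recovered from it directly. A minimal sketch in plain Python, using the matrix values shown above:

```python
# Recover overall accuracy from the confusion matrix shown above.
# Rows are true classes, columns are predicted classes, so the
# diagonal holds the correctly classified counts.
matrix = [[18, 1, 0],
          [0, 20, 1],
          [1, 0, 18]]

correct = sum(matrix[i][i] for i in range(len(matrix)))  # 18 + 20 + 18
total = sum(sum(row) for row in matrix)                  # 59 test samples
accuracy = correct / total
print(round(accuracy, 3))  # 0.949
```

Here 56 of the 59 test samples fall on the diagonal, which matches the ~0.95 scores in the report.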
https://www.projectpro.io/recipes/generate-classification-report-and-confusion-matrix-in-python
I am trying to do some raytracing on the GPU via the compute shader in OpenGL and I came across a very strange behaviour.
For every pixel in the screen I launch a compute shader invocation and this is how the compute shader looks like:
#version 430

struct Camera{
    vec4 pos, dir, up, xAxis ;
    float focalLength;
    float pW, pH;
};

struct Sphere{
    vec4 position;
    float radius;
};

struct Ray{
    vec3 origin;
    vec3 dir;
};

uniform Camera camera;
uniform uint width;
uniform uint height;
uniform image2D outputTexture;

float hitSphere(Ray r, Sphere s){
    float s_ov = dot(r.origin, r.dir);
    float s_mv = dot(s.position.xyz, r.dir);
    float s_mm = dot(s.position.xyz, s.position.xyz);
    float s_mo = dot(s.position.xyz, r.origin);
    float s_oo = dot(r.origin, r.origin);
    float d = s_ov*s_ov-2.0f*s_ov*s_mv+s_mv*s_mv-s_mm+2.0f*s_mo*s_oo+s.radius*s.radius;
    if(d < 0){
        return -1.0f;
    } else if(d == 0){
        return (s_mv-s_ov);
    } else {
        float t1 = 0, t2 = 0;
        t1 = s_mv-s_ov;
        t2 = (t1-sqrt(d));
        t1 = (t1+sqrt(d));
        return t1>t2? t2 : t1 ;
    }
}

Ray initRay(uint x, uint y, Camera cam){
    Ray ray;
    ray.origin = cam.pos.xyz;
    ray.dir = cam.dir.xyz * cam.focalLength
            + vec3(1, 0, 0)*( float(x-(width/2)))*cam.pW
            + cam.up.xyz * (float(y-(height/2))*cam.pH);
    ray.dir = normalize(ray.dir);
    return ray;
}

layout (local_size_x = 16, local_size_y = 16, local_size_z = 1) in;

void main(){
    uint x = gl_GlobalInvocationID.x;
    uint y = gl_GlobalInvocationID.y;

    if(x < 1024 && y < 768){
        float t = 0.0f;
        Ray r = initRay(x, y, camera);
        Sphere sp = {vec4(0.0f, 0.0f, 20.0f, 0.0f), 2.0f};
        t = hitSphere(r, sp);

        if(t <= -0.001f){
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 0.0, 1.0, 1.0));
        } else {
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 1.0, 0.0, 1.0));
        }
    }
}
Rendering on the GPU yields the following broken image:
Rendering on the CPU with the same algorithm yields this image:
I can't figure out the problem since I just copied and pasted the "hitSphere()" and "initRay()" functions into my compute shader. First I thought I haven't dispatched enough work groups, but then the background wouldn't be blue, so this can't be the case. This is how I dispatch my compute shader:
#define WORK_GROUP_SIZE 16

// width = 1024, height = 768
void OpenGLRaytracer::renderScene(int width, int height){
    glUseProgram(_progID);
    glDispatchCompute(width/WORK_GROUP_SIZE, height/WORK_GROUP_SIZE, 1);
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
}
Then I changed the position of the sphere in x direction to the right:
In y direction to the top:
And in both directions (right and top):
When I change the position far enough in both directions to the left and to the bottom, then the sphere actually disappears. It seems that all the calculations on the GPU only work in one quarter of the image (top-right) and happen to yield false results in the other three quarters.
I am totally clueless at the moment and don't even know how to start fixing this.
http://www.gamedev.net/user/196676-stanlee/?tab=topics
Top Ten Tags
2008-02-19 • Algorithms, C++, Python, Shell
Reworking this website reminded me of another classic sorting algorithm. The sidebar on the front page now has a Top Tags node which lists, in order, the 10 most frequently used tags for articles on this site. What’s the best way to find these?
More generally:
How do you select the N largest items, in order, from a collection?
Maximum
When N is 1, the standard maximum function does the job. That would be std::max_element in C++ or simply max in Python. Python’s max has an optional key parameter, allowing you to supply your own comparison function; C++ similarly has an overload of max_element which accepts a comparison predicate.
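For instance, to pick the longest word rather than the alphabetically largest one, supply a key function (the word list here is just an illustration):

```python
words = ["heap", "sort", "partial", "max"]

longest = max(words, key=len)   # compare by length, not alphabetically
print(longest)                  # partial
```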
Sort and Slice
If N isn’t 1, you could sort the whole collection then slice the N largest elements from the end. In Python:
nlargest = sorted(collection)[-N:]
And in shell:
$ sort FILE | tail -$N
Partial Sort
A full sort isn’t required if you just need the top 10, though. For large collections and small N, gains can be had from partially sorting the collection. C++ provides an algorithm, std::partial_sort, which does just that, shuffling the collection in place until the first N elements are ordered and at the front of that collection.
Here’s a complete program based on the C++ partial sort algorithm. It reads integers from standard input into memory then writes the first 10 of them to standard output.
// Program reads integer values from standard input and
// writes the N largest of these values, largest first,
// to standard output.
#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    typedef std::vector<long> numbers;
    typedef numbers::iterator iter;
    typedef std::istream_iterator<numbers::value_type> in;
    typedef std::ostream_iterator<numbers::value_type> out;

    // Read numbers from standard input
    numbers results;
    copy(in(std::cin), in(), back_inserter(results));

    // Make sure we cope with N > size of results.
    numbers::size_type const N = 10u;
    numbers::size_type const n = std::min(results.size(), N);

    // Find the N largest (hence the "greater" predicate)
    iter const first = results.begin();
    iter const middle = first + n;
    iter const last = results.end();
    partial_sort(first, middle, last, std::greater<long>());

    // Copy these to standard out
    copy(first, middle, out(std::cout, " "));

    return 0;
}
The C++ standard guarantees the complexity of partial_sort: it takes approximately (last - first) * log(middle - first) comparisons. The corresponding complexity for a full sort is approximately (last - first) * log(last - first). So the speed up is theoretically of the order of log(S)/log(N). Logarithms grow slowly so the gains aren’t spectacular, but they may well be worth having. I ran some tests on collections of 31 bit numbers generated by the standard C random() function, with the collection size, S, ranging between 2 million and 10 million.
As you can see, for these test cases the partial sort runs around 50 times more quickly than the full sort; better than expected or predicted!
It’s also worth noting that we can find the top N elements without altering or (fully) copying the original collection: see std::partial_sort_copy() for details.
C++’s in-place partial sort works well with a paging model. To sketch the idea, partial_sort(first, first + N, last) yields the first page of results, then, if required, partial_sort(first + N, first + 2 * N, last) yields the second page, and so on. Of course, if we anticipate paging through a large portion of the entire collection, a full sort gets the job done up front.

The complexity guarantee for partial_sort is the same as for sort in the limiting case, when middle equals last. So an implementation could, I think, claim conformance by implementing sort on top of partial_sort.
Partial Sorting with Heaps
In fact the partial and full sort functions use quite different algorithms. Partial sort is based on the heap data structure and, on my present platform, is implemented largely in terms of the standard heap functions. Here’s the important part of the code, which I’ve reformatted for the purposes of this article. Please, compare against the same function in your own implementation.
template<typename RanIt>
void partial_sort(RanIt first, RanIt middle, RanIt last)
{
    typedef typename iterator_traits<RanIt>::value_type V;
    std::make_heap(first, middle);
    for (RanIt i = middle; i < last; ++i)
        if (*i < *first)
            std::__pop_heap(first, middle, i, V(*i));
    std::sort_heap(first, middle);
}
This function starts by making the half open range [first, middle) into a heap, which has the result that *first is the largest element in this range.

It then iterates through the elements [middle, last). Each time an element is smaller than *first — that is, smaller than the largest element of [first, middle) — it calls the implementation’s private std::__pop_heap() function. This in turn swaps the values at positions *first and i and adjusts the range [first, middle) to once more be a heap. Again, look in your standard library for details.
Loosely speaking, every time we see an element in the tail of the collection which is smaller than the largest element in the head of the collection, we swap these elements.
More precisely, the loop invariant is that [first, middle) is a heap, and all the elements in the range [middle, i] are greater than all the elements in [first, middle). It’s subtle, efficient, and dazzlingly clever!

Once the iterator i gets to the end of the range (last, that is), the container has been partitioned so the smallest N elements are at its front. All that remains is to sort these elements; and since the front of the container has already been heapified, we can just heap_sort it.
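The same idea can be sketched in a few lines of Python with the standard heapq module: keep a min-heap of the N best candidates seen so far, and replace its smallest member whenever a larger item turns up. (This mirrors the C++ code above with the comparison direction flipped, since here we want the largest N.)

```python
import heapq

def top_n(iterable, n):
    """Return the n largest items, largest first, via a size-n min-heap."""
    it = iter(iterable)
    heap = [x for _, x in zip(range(n), it)]  # take the first n items
    heapq.heapify(heap)                       # heap[0] is the smallest kept
    for x in it:
        if x > heap[0]:                       # beats the worst survivor?
            heapq.heapreplace(heap, x)        # swap it in, re-heapify
    return sorted(heap, reverse=True)         # order the survivors

print(top_n([5, 1, 9, 3, 7, 8, 2], 3))  # [9, 8, 7]
```

Every element after the first n costs at most one O(log n) heap operation, giving the same S * log(N) flavour of bound as partial_sort.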
Partitioning with Heaps?
Note the distinction between finding the ordered top ten items in a collection and finding the ten largest items in a collection: the ten largest elements needn’t be ordered.
You may have spotted that if we pull out of the partial_sort() implementation shown above before applying the final sort_heap(), then we’ve partitioned the collection so that items in the range [first, middle) are larger than items in the range [middle, last).
In fact, there’s a better way of partitioning the collection to put the N largest elements at the front. It doesn’t use heaps, and, amazingly, can be achieved with a linear algorithm. The C++ standard library provides just such an algorithm filed under the slightly misleading name of std::nth_element.
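At its heart nth_element is an average-case linear quickselect. A rough functional sketch of the idea in Python follows; unlike the in-place library algorithm it builds new lists, but it shows why the expected cost is linear (each recursive call works on roughly half the remaining data):

```python
import random

def partition_largest(items, n):
    """Return a permutation of items whose first n entries are the n
    largest, in no particular order (like nth_element with greater<>).
    Average-case linear: each recursive call discards a chunk of data."""
    if n <= 0 or n >= len(items):
        return list(items)
    pivot = random.choice(items)
    bigger = [x for x in items if x > pivot]
    equal = [x for x in items if x == pivot]
    smaller = [x for x in items if x < pivot]
    if n <= len(bigger):
        return partition_largest(bigger, n) + equal + smaller
    if n <= len(bigger) + len(equal):
        return bigger + equal + smaller      # ties at position n settle it
    rest = partition_largest(smaller, n - len(bigger) - len(equal))
    return bigger + equal + rest

values = [3, 1, 4, 1, 5, 9, 2, 6]
top3 = partition_largest(values, 3)[:3]
print(sorted(top3, reverse=True))  # [9, 6, 5]
```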
Sorting with Heaps
I claimed earlier that C++ sort implementers could reuse a special case of partial sort and still meet the C++ Standard’s complexity guarantee. It would be a hard trick to pull off though, since the constant factors differ. Sort is likely to be based on quicksort, acknowledged the most efficient general purpose sorting algorithm. Partial sort, as already mentioned, is a heap sort.
On my platform, std::sort() in fact delegates to an introsort — a hybrid algorithm which starts with a quicksort and bottoms out to std::partial_sort() once a heuristically determined recursion depth is exceeded.
I ran a full partial sort head to head against standard sort on my machine, feeding both algorithms large-ish (size up to 10 million) arrays of 31 bit numbers generated using the standard C random() function. The results indicate sort runs around four times faster than partial sort; someone’s probably got a theoretical proof of the exact multiplier.
N Largest in Python
Python makes no complexity guarantees, but the location of the nlargest function in the heapq module gives a pretty big hint about its implementation! Note that nlargest returns its results in order; it’s more than just a partitioning. Note too that it’s generous enough to handle the case when N is larger than the size of the collection.
Here’s a Python script which imitates our earlier C++ program:
from sys import stdin
from heapq import nlargest

numbers = map(int, stdin.read().split())
top_ten = nlargest(10, numbers)
print "\n".join(map(repr, top_ten))
For the purpose of comparison, I timed the nlargest() part of this function. I also timed a full (Python) sort of the numbers. Again, I ran on random collections of size S ranging from 2 to 10 million.
This time, the partial sort ran about 30 times more quickly than the full sort. C++ proved about 13 times quicker than Python for the full sort, and 24 times quicker for partial sort.
N Largest in Shell
The Python script shown relies on being able to read the entire file into memory (that’s not a limitation of Python, just of the rather simplistic approach taken by the script). The C++ solution only needs space for the numbers, the input buffering being nicely handled by the iostreams framework. For sizable inputs — of the order of a GB, say, on a modern computer — we’d need to use secondary storage.
The Unix shell pipeline shown earlier has no such limitation. Given enough time and secondary storage, the following command finds the 10 largest numbers in BIGFILE, even if we can’t hold all these numbers in RAM.
$ sort -r -n BIGFILE | head
Executing this command on a ~9GB input file holding one billion 31 bit random numbers took over an hour and a half on my machine.
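The memory limitation is easy to lift in Python too, since heapq.nlargest accepts any iterable: stream the tokens through a generator and only the n-element heap is ever held in RAM. A sketch, assuming the same whitespace-separated-numbers format:

```python
import io
from heapq import nlargest

def top_n_from_file(f, n=10):
    """Stream whitespace-separated integers from a file object; only
    the n-element heap inside nlargest is ever held in memory."""
    numbers = (int(token) for line in f for token in line.split())
    return nlargest(n, numbers)

# Demonstration with an in-memory stand-in for a file:
sample = io.StringIO("5 17 3\n42 8\n1 99 23\n")
print(top_n_from_file(sample, 3))  # [99, 42, 23]
```

With a real file, pass the result of open(path) and the file is consumed line by line rather than slurped whole.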
Parallel Algorithm Analysis
The language used in this article for discussing algorithm analysis works best for a single process running a single uninterrupted thread of execution. If we want to budget time for an algorithm which makes N * log(N) comparisons we plug in N, divide by the processor speed, and multiply by the number of cycles required for each comparison.
I wonder how well this language will survive in a world where processors have multiple cores. Will a new family of algorithms evolve, ones better equipped to use the new hardware?
This evolution is underway already. In a sequence of articles published in Dr. Dobbs Journal, Herb Sutter teaches programmers the traditional C++ way of doing things; a low-level, platform-dependent approach based on forking threads and locking resources. I’ve come to regard these techniques as a sure route to subtle bugs. On the systems I’ve worked on, a more C-style approach has worked well. At its simplest, a Unix pipeline distributes the load; this archetype generalises to a multi-process architecture, where we develop and prove each (single-threaded!) component in isolation.
There are higher level languages though. Why limit ourselves to a single machine if we can devise a language which blurs the distinction between multiple processors on a single machine and multiple processors on a network? And why not build in some regulation of low level failures? When a task is distributed between workers, it’s natural to ask what should happen if a worker fails, or simply lags behind.
Functional programming turns out to have much to offer in this new, parallel world — Google’s Map-Reduce framework, for example — and it’s nice to know the fundamental ideas are far from being new: rather, their time has come.
Choosing an algorithm
When discussing algorithms it’s all too easy to fret about what happens when inputs grow massive. If we’ve used the standard libraries then resource use for sorting — both memory and CPU cycles — may not be a concern. The code in this article demonstrates highly efficient general purpose sorting routines; and in any final system it’s likely we could use full- and partial- sorting interchangeably without noticeably affecting overall performance.
What is always a concern, though, is readability. If it’s the largest few elements of a collection we want, calling std::partial_sort() in C++ or heapq.nlargest() in Python nicely expresses that desire.
http://wordaligned.org/articles/top-ten-tags
In the introduction of this series I mentioned that for the part 1 of this series we shall look at how to upload an image in Angular 2 and then send that image via HTTP request to an API for processing.
This article assumes that you already have angular 2 framework setup.
Based on the scenario defined in the introduction we need a profile page that enables the user to upload an image.
So first let's create a simple HTML template
<div> <h2>Upload Image</h2> <input type="file" accept="image/*" (change)="handleInputChange($event)"/> </div>
In the template above the only thing going on is the input tag of type file; of course, in the real world you would have other form inputs in your profile page, but for the purpose of this article let's keep it simple. You will notice that we are listening to the change event of the input tag, which triggers the handleInputChange function. Now let's implement that function:
import {Component} from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Http, Headers, Request, Response, RequestOptions } from '@angular/http';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/map';

@Component({
  selector: 'my-app',
  templateUrl: 'app.component.html',
})
export class AppComponent {
  private apiBaseUrl = ''; // this is a fake url. Put in your own API url.
  headers: Headers = new Headers();

  constructor(private _http: Http) {}

  /**
   * Handles the change event of the input tag,
   * extracts the image file uploaded and
   * makes an Http request with the image file.
   */
  handleInputChange (event) {
    var image = event.target.files[0];
    var pattern = /image-*/;
    var reader = new FileReader();
    if (!image.type.match(pattern)) {
      console.error('File is not an image'); // of course you can show an alert message here
      return;
    }
    let endPoint = '/upload/profileImage'; // use your own API endpoint
    let headers = new Headers();
    headers.set('Content-Type', 'application/octet-stream');
    headers.set('Upload-Content-Type', image.type);
    this.makeRequest(endPoint, 'POST', image, headers).subscribe(
      response => { this.handleSuccess(response); },
      error => { this.handleError(error); }
    );
  }

  /**
   * Makes the HTTP request and returns an Observable
   */
  private makeRequest (endPoint: string, method: string, body = null,
                       headers: Headers = new Headers()): Observable<any> {
    let url = this.apiBaseUrl + endPoint;
    this.headers = headers;
    if (method == 'GET') {
      let options = new RequestOptions({ headers: this.headers });
      return this._http.get(url, options)
        .map(this.extractData)
        .catch(this.extractError);
    } else if (method == 'POST') {
      let options = new RequestOptions({ headers: this.headers });
      return this._http.post(url, body, options)
        .map(this.extractData)
        .catch(this.extractError);
    }
  }

  /**
   * Extracts the response from the API response.
   */
  private extractData (res: Response) {
    let body = res.json();
    return body.response || { };
  }

  private extractError (res: Response) {
    let errMsg = 'Error received from the API';
    return errMsg;
  }

  private handleSuccess(response) {
    console.log('Successfully uploaded image');
    // provide your own implementation of handling the response from API
  }

  private handleError(error) {
    console.error('Error uploading image');
    // provide your own implementation of displaying the error message
  }
}
It may seem like a lot is going on in the app.component.ts file, but if you take a closer look you will realize that it is actually quite straightforward.
The handleInputChange function, the only public function there, is called after a new file is selected for upload from the file input tag.
The interesting part is where it calls the makeRequest function: notice the two headers that are passed, Content-Type and Upload-Content-Type; there is a good reason for that. In Angular 2, when posting a File or Blob, the Http request class expects the content type to be application/octet-stream; otherwise it will either fail or cast the body to a string, which will cause the API to be unable to recognize the content of the body as a File or Blob.
The other header, Upload-Content-Type, is a custom header used specifically for the API. The API expects it (as you will see in the next article) and uses its value to determine the extension of the image.
And there you have it. You can view the entire code in plunker.
It is worth noting that in the real world, when implementing this, it is recommended to abstract the HTTP call into an injectable module. So, for instance, you would have an HttpService class which handles the HTTP requests, and you would then inject the HttpService class into the AppComponent class.
Let's go on to the next article.
http://www.sibenye.com/2017/02/17/handling-image-upload-in-a-multitiered-web-application-part-1/
10.5.1. Reading Data
In this chapter, we will use a data set developed by NASA to test the wing noise from different aircraft to compare these optimization algorithms. We will use the first 1500 examples of the data set, 5 features, and a normalization method to preprocess the data.
%matplotlib inline
import d2l
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import nn
import numpy as np

# Save to the d2l package.
def get_data_ch10(batch_size=10, n=1500):
    data = np.genfromtxt('../data/airfoil_self_noise.dat', delimiter='\t')
    data = nd.array((data - data.mean(axis=0)) / data.std(axis=0))
    data_iter = d2l.load_array((data[:n, :-1], data[:n, -1]),
                               batch_size, is_train=True)
    return data_iter, data.shape[1]-1
10.5.2. Implementation from Scratch
We have already implemented the mini-batch SGD algorithm in Section 3.2.
# Save to the d2l package.
def train_ch10(trainer_fn, states, hyperparams, data_iter,
               feature_dim, num_epochs=2):
    # Initialization
    w = nd.random.normal(scale=0.01, shape=(feature_dim, 1))
    b = nd.zeros(1)
    w.attach_grad()
    b.attach_grad()
    net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss
    # Train
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[0, num_epochs], ylim=[0.22, 0.35])
    n, timer = 0, d2l.Timer()
    for _ in range(num_epochs):
        for X, y in data_iter:
            with autograd.record():
                l = loss(net(X), y).mean()
            l.backward()
            trainer_fn([w, b], states, hyperparams)
            n += X.shape[0]
            if n % 200 == 0:
                timer.stop()
                animator.add(n/X.shape[0]/len(data_iter),
                             d2l.evaluate_loss(net, data_iter, loss))
                timer.start()
    print('loss: %.3f, %.3f sec/epoch' % (animator.Y[0][-1], timer.avg()))
    return timer.cumsum(), animator.Y[0]
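The train_sgd function below passes a sgd trainer function that is defined earlier in the book but missing from this excerpt; in the book it is simply the update p ← p − lr · p.grad over the parameter list. A framework-free sketch of that rule (plain Python lists stand in for NDArrays, with gradients passed explicitly):

```python
# In the book, with MXNet NDArrays, the trainer function is:
#
#     def sgd(params, states, hyperparams):
#         for p in params:
#             p[:] -= hyperparams['lr'] * p.grad
#
# The same vanilla update, framework-free (plain lists, explicit grads):
def sgd_step(params, grads, lr):
    """In-place SGD step: p <- p - lr * grad for each parameter."""
    for p, g in zip(params, grads):
        for i in range(len(p)):
            p[i] -= lr * g[i]

w, b = [2.0, 1.0], [4.0]
sgd_step([w, b], [[1.0, -2.0], [2.0]], lr=0.5)
print(w, b)  # [1.5, 2.0] [3.0]
```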
def train_sgd(lr, batch_size, num_epochs=2):
    data_iter, feature_dim = get_data_ch10(batch_size)
    return train_ch10(
        sgd, None, {'lr': lr}, data_iter, feature_dim, num_epochs)

gd_res = train_sgd(1, 1500, 6)
loss: 0.246, 0.051 sec/epoch
sgd_res = train_sgd(0.005, 1)
loss: 0.242, 0.260 sec/epoch
When the batch size equals 100, we use mini-batch SGD for optimization. The time required for one epoch is between the time needed for gradient descent and SGD to complete the same epoch.
mini1_res = train_sgd(.4, 100)
loss: 0.243, 0.007 sec/epoch
When we reduce the batch size to 10, the time for each epoch increases because the workload for each batch is less efficient to execute.
mini2_res = train_sgd(.05, 10)
loss: 0.245, 0.026 sec/epoch
d2l.set_figsize([6, 3])
d2l.plot(*list(map(list, zip(gd_res, sgd_res, mini1_res, mini2_res))),
         'time (sec)', 'loss', xlim=[1e-2, 10],
         legend=['gd', 'sgd', 'batch size=100', 'batch size=10'])
d2l.plt.gca().set_xscale('log')
10.5.3. Concise Implementation
In Gluon, we can use the Trainer class to call optimization algorithms. Next, we are going to implement a generic training function that uses the optimization name trainer_name and the hyperparameters trainer_hyperparams to create the instance Trainer.
# Save to the d2l package.
def train_gluon_ch10(trainer_name, trainer_hyperparams,
                     data_iter, num_epochs=2):
    # Initialization
    net = nn.Sequential()
    net.add(nn.Dense(1))
    net.initialize(init.Normal(sigma=0.01))
    trainer = gluon.Trainer(
        net.collect_params(), trainer_name, trainer_hyperparams)
    loss = gluon.loss.L2Loss()
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[0, num_epochs], ylim=[0.22, 0.35])
    n, timer = 0, d2l.Timer()
    for _ in range(num_epochs):
        for X, y in data_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            trainer.step(X.shape[0])
            n += X.shape[0]
            if n % 200 == 0:
                timer.stop()
                animator.add(n/X.shape[0]/len(data_iter),
                             d2l.evaluate_loss(net, data_iter, loss))
                timer.start()
    print('loss: %.3f, %.3f sec/epoch' % (animator.Y[0][-1], timer.avg()))
Use Gluon to repeat the last experiment.
data_iter, _ = get_data_ch10(10)
train_gluon_ch10('sgd', {'learning_rate': 0.05}, data_iter)
loss: 0.244, 0.030 sec/epoch
http://classic.d2l.ai/chapter_optimization/minibatch-sgd.html
Opened 8 years ago
Closed 8 years ago
Last modified 5 years ago
#8453 closed (duplicate)
Error in filter "timesince"
Description
def timesince(value, arg=None):
    """Formats a date as the time since that date (i.e. "4 days, 6 hours")."""
    from django.utils.timesince import timesince
    if not value:
        return u''
    if arg:
        return timesince(arg, value)
    return timesince(value)
timesince.is_safe = False
It should be "return timesince(value, arg)"
Change History (8)
comment:1 Changed 8 years ago by mir
- Needs documentation unset
- Needs tests set
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
comment:2 Changed 8 years ago by jheasly
- Owner changed from nobody to jheasly
- Status changed from new to assigned
comment:3 Changed 8 years ago by kevin
- Resolution set to invalid
- Status changed from assigned to closed
comment:4 Changed 8 years ago by ashearer
- Resolution invalid deleted
- Status changed from closed to reopened
I'd agree that the current behavior is surprising, but I'd also say that it contradicts the documentation. Or two out of its three relevant sentences. See #7443:comment:5 for details.
Am I reading incorrectly? I can't believe I've misread it this many times in a row.
Here's the current behavior (where 'earlier' is a week behind 'now' and 'later' is a week ahead):
- earlier|timesince: 1 week
- earlier|timesince:now: 0 minutes
- earlier|timeuntil: 0 minutes
- earlier|timeuntil:now: 0 minutes
- later|timesince: 0 minutes
- later|timesince:now: 1 week
- later|timeuntil: 1 week
- later|timeuntil:now: 1 week
The first full sentence of the documentation says timesince "[t]akes an optional argument that is a variable containing the date to use as the comparison point (without the argument, the comparison point is now)." However, passing now explicitly as the optional argument leads to results opposite to allowing it to be the default.
The last sentence says '“0 minutes” will be returned for any date that is in the future relative to the comparison point'. According to the first sentence, that corresponds to the 'later|timesince:now' test, which returns '1 week' instead.
The middle sentence, giving an example, seems to contradict the other two.
If this apparent conflict is resolved in favor of having mydate|timesince:now act like mydate|timesince (obviously the approach I favor), 2/3 of the documentation and the analogy with timeuntil already agree. The only hangup would be existing code. But I'd wonder how widespread that is for the two-argument version. Anyone surprised by the behavior or who realized the documentation didn't entirely match the results could have steered clear by swapping arguments and using two-argument timeuntil instead, which is currently an exact synonym of timesince, but matches its own English meaning and its documentation. In any case, if it's going to change to a less surprising behavior someday, the backward compatibility argument points to getting it in by 1.0.
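The table of behaviors can be reproduced with a toy model of the filter. The helper below is hypothetical (weeks-only formatting; the real django.utils.timesince is far more complete), but it clamps future dates to "0 minutes" the same way, which makes the argument swap visible:

```python
from datetime import datetime, timedelta

def toy_timesince(d, now=None):
    """Weeks-only stand-in for django.utils.timesince; like the real
    thing it returns '0 minutes' when d is not earlier than now."""
    now = now or datetime(2008, 8, 22)      # fixed "now" for the examples
    delta = now - d
    if delta <= timedelta(0):
        return '0 minutes'
    return '%d week' % (delta.days // 7)

def timesince_filter(value, arg=None):
    # Mirrors the filter under discussion; note the reversed argument
    # order when arg is given, which is the source of the surprise.
    if arg:
        return toy_timesince(arg, value)
    return toy_timesince(value)

now = datetime(2008, 8, 22)
earlier = now - timedelta(weeks=1)
later = now + timedelta(weeks=1)

print(timesince_filter(earlier))        # 1 week
print(timesince_filter(earlier, now))   # 0 minutes (arguments reversed)
print(timesince_filter(later, now))     # 1 week
```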
More discussion, a patch, and test cases in #7443.
comment:5 Changed 8 years ago by ubernostrum
- Resolution set to duplicate
- Status changed from reopened to closed
Reopening a ticket to point out that you're arguing about the same thing in another ticket is bad, mmkay?
comment:6 Changed 8 years ago by ashearer
Sorry, my mistake. I had reopened it in order to reset its resolution to 'duplicate', but for some reason thought the option didn't become available. Turns out I was mistaken. Thanks for doing it for me.
comment:7 Changed 8 years ago by carlou
- Cc ouyhui@… added
comment:8 Changed 5 years ago by jacob
- milestone 1.0 deleted
Milestone 1.0 deleted
I discussed this ticket with Malcolm. The current behavior could indeed be considered to be somewhat surprising, in that value is typically expected to be older than now() in the default case, yet is expected to be the newer of the two times where a reference point other than now() is supplied. In effect, the meaning of arg and value are reversed when a reference point is supplied.
That said, the current situation works correctly as documented, and existing code may depend on its behavior. It's not considered to be a bug.
https://code.djangoproject.com/ticket/8453
DataWeave Scripts
DataWeave is the primary data transformation language for use in Mule flows.
You can write standalone DataWeave scripts in Transform Message components, or you can write inline DataWeave expressions to transform data in-place and dynamically set the value of various properties, such as configuration fields in an event processor or global configuration element. Inline DataWeave expressions are enclosed in #[ ] code blocks. For example, you can use a DataWeave expression to set conditions in a Choice router or to set the value of a Set Payload or Set Variable component.
The DataWeave code in this example sets a timestamp variable to the current time using the DataWeave now() function:
You can also store DataWeave code in external files and read them into other DataWeave scripts, or you can factor DataWeave code into modules (libraries) of reusable DataWeave functions that can be shared by all the components in a Mule app.
The Structure of DataWeave Scripts
DataWeave scripts and files are divided into two main sections:
The Header, which defines directives that apply to the body expression (optional).
The Body, which contains the expression to generate the output structure.
When you include a header, the header appears above the body, separated by a delimiter consisting of three dashes (---).
Here is an example of a DataWeave file with an output directive declared in the header, followed by a DataWeave expression to create a user object that contains two child key/value pairs:
%dw 2.0
output application/xml
---
{
  user: {
    firstName: payload.user_firstname,
    lastName: payload.user_lastName
  }
}
DataWeave Header
This example shows keywords (such as import and var) you can use for header directives.
%dw 2.0
import * from dw::core::Arrays
var myVar=13.15
fun toUser(obj) = {
  firstName: obj.field1,
  lastName: obj.field2
}
type Currency = String { format: "##"}
ns ns0
output application/xml
---
/*
 * Body here.
 */
%dw: DataWeave version is optional. Default is 2.0, which is for all 2.x versions of DataWeave.
Example: %dw 2.0
output: Commonly used directive that specifies the mime type that the script outputs.
Example: output application/xml
Valid values: Supported Data Formats.
Default: If no output is specified, the default output is determined by an algorithm that examines the inputs (payload, variables, and so on) used in the script:
If there is no input, the default is output application/java.
If all inputs are the same mime type, the script outputs to the same mime type. For example, if all input is application/json, then it outputs output application/json.
If the mime types of the inputs differ, and no output is specified, the script throws an exception so that you know to specify an output mime type.
Note that only one output type can be specified.
import: For importing a DataWeave function module. See DataWeave Functions.
var: Global variables for defining constants that you can reference throughout the body of the DataWeave script. Example:
%dw 2.0
var conversionRate=13.15
output application/json
---
{
  price_dollars: payload.price,
  price_localCurrency: payload.price * conversionRate
}
For details, see DataWeave Variables.
type: For specifying a custom type that you can use in the expression.
For a more complete example, see Type Coercion with DataWeave.
ns: Namespaces, used to import a namespace. Example:
%dw 2.0
output application/xml
ns ns0
ns ns1
---
{
  ns0#myroot: {
    ns1#secondroot: "hello world"
  }
}
fun: For creating custom functions that can be called from within the body of the script. Example:
%dw 2.0
output application/json
fun toUser(user) = {firstName: user.name, lastName: user.lastName}
---
{
  user: toUser(payload)
}
Including Headers in Inline DataWeave Scripts
You can include header directives when you write inline DataWeave scripts by flattening all the lines in the DataWeave script into a single line. For smaller DataWeave scripts, this allows you to quickly apply header directives (without having to add a separate Transform Message component to set a variable), then substitute the variable in the next Event processor.
For example, here is the Mule configuration XML to create the same valid XML output as the previous Transform Message component:
<set-payload
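Such a flattened inline script might look roughly like this (a hypothetical sketch: the script body and the doc:name value are illustrative, not taken from the original):

<set-payload value="#[%dw 2.0 output application/xml --- { myroot: payload }]" doc:name="Set Payload"/>

The entire DataWeave header and body are collapsed onto one line inside the #[...] expression.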
Note that the DataWeave documentation provides numerous transformation examples.
DataWeave Body
The DataWeave body contains an expression that generates the output structure. Note that MuleSoft provides a canonical way for you to work on data with the DataWeave model: a query, transform, build process.
Here is a simple example that provides JSON input for a DataWeave script:
{ "message": "Hello world!" }
This DataWeave script takes the entire payload of the JSON input above and transforms it to the
application/xml format.
%dw 2.0
output application/xml
---
payload
The next example shows the XML output produced from the DataWeave script:
<?xml version='1.0' encoding='UTF-8'?> <message>Hello world!</message>
The script above successfully transforms the JSON input to XML output.
Errors (Scripting versus Formatting Errors)
A DataWeave script can throw errors due to DataWeave coding errors and due to formatting errors, so when transforming one data format to another, it is important to keep in mind the constraints of both the language and the formats. For example, XML requires a single root node. If you use the DataWeave script above in an attempt to transform this JSON input to XML, you will receive an error (Unexpected internal error) because the JSON input lacks a single root:
{ "size" : 1, "person": { "name": "Yoda" } }
A good approach to the creation of a script is to normalize the input to the JSON-like application/dw format. In fact, if you get an error, you can simply transform your input to
application/dw. If the transformation is successful, then the error is likely a formatting error. If it is unsuccessful, then the error is a coding error.
This example changes the output format to
application/dw:
%dw 2.0
output application/dw
---
payload
You can see that the script successfully produces
application/dw output from the JSON input example above:
{ size: 1, person: { name: "Yoda" } }
So you know that the previous error (
Unexpected internal error) is specific to the format, not the coding. You can see that the
application/dw output above does not provide a single root element, as required by the XML format. So, to fix the script for XML output, you need to provide a single root element to your script, for example:
%dw 2.0
output application/xml
---
{
  "myroot" : payload
}
Now the output meets the requirements of XML, so when you change the output directive back to
application/xml, the result produces valid XML output.
<?xml version='1.0' encoding='UTF-8'?>
<myroot>
  <size>1</size>
  <person>
    <name>Yoda</name>
  </person>
</myroot>
DataWeave Comments
DataWeave also accepts comments that use a Java-like syntax.
%dw 2.0
output application/json
---
/* Multi-line
 * comment syntax */
payload // Single-line comment syntax
Escape Special Characters
In DataWeave, you use the backslash (
\) to escape special characters that you are using in an input string.
See Escaping Special Characters for more details.
The input strings in the DataWeave scripts escape special characters, while
the output escapes in accordance with the requirements of the output format,
which can vary depending on whether it is
application/json,
application/xml,
application/csv, or some other format.
This example escapes the internal double quotation mark that is surrounded by double quotation marks.
%dw 2.0
output application/json
---
{
  "a": "something",
  "b": "dollar sign (\$)",
  "c": 'single quote (\')',
  "d": "double quote (\")",
  "e": `backtick (\`)`
}
Notice that the JSON output also escapes the double quotation marks to make the output valid JSON but does not escape the other characters:
{
  "a": "something",
  "b": "dollar sign ($)",
  "c": "single quote (')",
  "d": "double quote (\")",
  "e": "backtick (`)"
}
The following example escapes the same characters but outputs to XML.
%dw 2.0
output application/xml
---
{
  xmlExample: {
    "a": "something",
    "b": "dollar sign (\$)",
    "c": 'single quote (\')',
    "d": "double quote (\")",
    "e": `backtick (\`)`
  }
}
The XML output (unlike JSON output) is valid without escaping the double quotation marks:
<?xml version='1.0' encoding='UTF-8'?>
<xmlExample>
  <a>something</a>
  <b>dollar sign ($)</b>
  <c>single quote (')</c>
  <d>double quote (")</d>
  <e>backtick (`)</e>
</xmlExample>
Rules for Declaring Valid Identifiers
To declare a valid identifier, its name must meet the following requirements:
It must begin with a letter of the alphabet (a-z), either lowercase or uppercase.
After the first letter, the name can contain any combination of letters, numbers, and underscores (_).
The name cannot match any DataWeave reserved keyword (see Reserved Keywords for a complete list).
Here are some examples of valid identifiers:
myType
abc123
a1_3BC_22
Z___4
F
The following are examples of invalid identifiers, with the reason each one is invalid:
123abc: begins with a number rather than a letter
_myType: begins with an underscore rather than a letter
my-type: contains a character other than letters, numbers, and underscores
and: matches a reserved keyword
Reserved Keywords
Valid identifiers cannot match any of the reserved DataWeave keywords:
and: Reserved as a logical operator
as: Reserved for type coercion
async: Reserved
case: Reserved for case statements, for example, to perform pattern matching or for use with the update operator
default: Reserved for assigning default values for parameters
do: Reserved for do statements
else: Reserved for use in if-else and else-if statements
enum: Reserved
false: Reserved as a Boolean value
for: Reserved
fun: Reserved for function definitions in the DataWeave header
if: Reserved for if-else and else-if statements
import: Reserved for the import directive in the DataWeave header
input: Reserved
is: Reserved
ns: Reserved for namespace definitions in the DataWeave header
null: Reserved as the value null. See Null (dw::Core Type).
or: Reserved as a logical operator
output: Reserved for the output directive in the DataWeave header
private: Reserved
throw: Reserved
true: Reserved as a Boolean value
type: Reserved for type definitions in the DataWeave header
unless: Reserved
using: Reserved for backward compatibility and replaced by do
var: Reserved for variable definitions in the DataWeave header
yield: Reserved
dwl File
In addition to specifying DataWeave scripts in the Transform and other components, you can also specify the scripts in a
.dwl file. In Studio projects, your script files are stored in
src/main/resources.
In the Mule app XML, you can use the ${file::filename} syntax to send a script in a .dwl file through any XML tag that expects an expression. For example, see the when expression="${file::someFile.dwl}" in the Choice router here:
<ee:transform doc:name="Transform Message">
  <ee:message>
    <ee:set-payload><![CDATA[#[${file::transform.dwl}]]]></ee:set-payload>
  </ee:message>
</ee:transform>
<choice doc:name="Choice">
  <when expression="${file::someFile.dwl}">
    <set-payload value="..."/>
  </when>
  <otherwise>
    <set-payload value="..."/>
  </otherwise>
</choice>
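A file such as transform.dwl referenced this way is just an ordinary DataWeave script. For example, it might contain something like the following (an illustrative sketch; the field name is assumed, not taken from the original):

%dw 2.0
output application/json
---
{ message: payload.message }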
Source: https://docs.mulesoft.com/dataweave/2.4/dataweave-language-introduction
#include <pslib.h>
void PS_symbol_name(PSDoc *psdoc, unsigned char c, int fontid, char *name, int size)
Retrieves the name of a glyph which is at position c in the font encoding vector. The font encoding for a font can be set when loading the font with PS_findfont().
fontid is optional and can be set to zero. In such a case the current font will be used.
The name will be copied into the memory area pointed to by name. No more than size bytes will be copied.
PS_findfont(3), PS_symbol(3), PS_symbol_width(3)
This manual page was written by Uwe Steinmann <uwe@steinmann.cx>.
Source: http://www.makelinux.net/man/3/P/PS_symbol_name
{- | Real @'Event' Char@ might represent a key press. ['RTSP'] The Real Time Stream Processor. A value of type @'RTSP' x y@ takes in events of type @x@TA' into an 'RTSP' using 'execRTA' or 'accumulateRTA' depending: 1. Impose an arbitrary but deterministic order on \"simultaneous\" events. 2. Collect the simultaneous events and pass them to the application, on the basis that the application programmer can then impose the appropriate semantics. 3.. -} module Control.RTSP ( -- ** Events Event (..), isBefore, -- ** Real Time Stream Processors EventStream, emitsBefore, nullStream, esFinished, esPeek, esFutures, esProcess, esMerge, RTSP (..), simulateRTSP, execRTSP, stream, accumulate, -- ** Manipulating event times repeatEvent, delay0, delay, -- ** Conditional event processing Cond, streamFilter, cond, cond1, ifThenElse, -- ** Real Time Actions with state RTA, get, put, modify, emit, pause, now, execRTA, accumulateRTA, ) where import Control.Category import Control.Concurrent import Control.Concurrent.STM import Control.Monad import Data.List import Data.Monoid import Data.Sequence import Data.Time import Prelude hiding ((.),id, repeat) infix 4 `isBefore`, `emitsBefore` -- | Real time events. data Event a = Event {eventTime :: UTCTime, eventValue :: a} deriving (Show, Eq) instance Functor Event where fmap f (Event t v) = Event t (f v) -- | True if the first event occurs strictly before the second. This makes @Event@ a poset (partially ordered set). -- Infix priority 4 (the same as other comparison operators). isBefore :: Event a -> Event b -> Bool isBefore ev1 ev2 = eventTime ev1 < eventTime ev2 {- | A real-time event stream cannot be described without reference to unknown future inputs. Hence @EventStream@ embodies two possible futures: * An @Event c@ will be emitted at some time in the future, with a new @EventStream@ representing the future after that event. 
* An incoming @Event b@ will arrive before the next @Event c@ is emitted, creating a new @EventStream@ representing the response to that event. The old @Event c@ may or may not be part of the new @EventStream@. There are also two degenerate cases: * Wait: no event is scheduled to be emitted, and the @EventStream@ just. -} data EventStream b c = Emit (Event c) (EventStream b c) (RTSP b c) -- ^ The next event to be emitted, the following EventStream, and the function to -- handle an incoming event before then. | Wait (RTSP b c) -- ^ Degenerate case: no event scheduled to be emitted. | Finish -- ^ Semantically equivalent to @Wait eventSink@, but allows completed streams to be GC'd. deriving (Show) -- | Peek at the events that will be emitted by this EventStream if no incoming event interrupts them. esPeek :: EventStream b c -> [Event c] esPeek (Emit ev es1 _) = ev : esPeek es1 esPeek _ = [] -- | True if the first argument is scheduled to emit an event before the second. This makes @EventStream@ a poset -- (partially ordered set). Infix priority 4. emitsBefore :: EventStream b1 c1 -> EventStream b2 c2 -> Bool emitsBefore Finish _ = False emitsBefore (Wait _) _ = False emitsBefore (Emit ev1 _ _) (Emit ev2 _ _) = ev1 `isBefore` ev2 emitsBefore (Emit _ _ _) _ = True -- Only events satisfying the predicate will be passed on. esFilter :: (b -> Bool) -> EventStream b b esFilter = Wait . rtspFilter -- | Only events satisfying the predicate will be passed on. rtspFilter :: (b -> Bool) -> RTSP b b rtspFilter p = RTSP $ \ev -> if p $ eventValue ev then Emit ev (esFilter p) (rtspFilter p) else esFilter p -- | All the possible futures of the event stream. esFutures :: EventStream b c -> [(Event c, EventStream b c)] esFutures (Emit e es1 _) = (e, es1) : esFutures es1 esFutures _ = [] -- | True if the event stream is guaranteed not to emit any future events, regardless of input. 
esFinished :: EventStream b c -> Bool esFinished Finish = True esFinished _ = False -- | Merge the outputs of two event streams. Input events are delivered -- to both streams. esMerge :: EventStream b c -> EventStream b c -> EventStream b c esMerge Finish es = es esMerge es Finish = es esMerge (Wait k1) (Wait k2) = Wait (splitRTSP k1 k2) esMerge (Emit e es1 k1) es2@(Wait k2) = Emit e (esMerge es1 es2) (splitRTSP k1 k2) esMerge es1@(Wait k1) (Emit e es2 k2) = Emit e (esMerge es1 es2) (splitRTSP k1 k2) esMerge es1@(Emit e1 es1a k1) es2@(Emit e2 es2a k2) = if e2 `isBefore` e1 then Emit e2 (esMerge es1 es2a) (splitRTSP k1 k2) else Emit e1 (esMerge es1a es2) (splitRTSP k1 k2) -- | `isBefore` ev1)@. This precondition is not checked. esProcess :: Event b -> EventStream b c -> EventStream b c esProcess _ Finish = Finish esProcess ev (Wait k) = runRTSP k ev esProcess eIn (Emit eOut rest k) = if eIn `isBefore` eOut then runRTSP k eIn else Emit eOut (esProcess eIn rest) k -- | An event stream that never generates anything. nullStream :: EventStream b c nullStream = Finish -- | Real Time Stream Processor (RTSP) -- -- An EventStream cannot exist independently of some event that caused it to start. Hence the only way to -- create an EventStream is through an RTSP. -- -- * "mempty" is the event sink: it never emits an event. -- -- * "mappend" runs its arguments in parallel and merges their outputs. -- -- * "id" is the null operation: events are passed through unchanged. -- -- * "(.)" is sequential composition: events emitted by the second argument are passed to the first argument. 
newtype RTSP b c = RTSP {runRTSP :: Event b -> EventStream b c} instance Show (RTSP b c) where show _ = "-RTSP-" instance Monoid (RTSP b c) where mempty = eventSink mappend = splitRTSP instance Monoid (EventStream b c) where mempty = nullStream mappend = esMerge instance Functor (EventStream b) where fmap f (Emit eOut rest k) = Emit (fmap f eOut) (fmap f rest) (fmap f k) fmap f (Wait k) = Wait (fmap f k) fmap _ Finish = Finish instance Functor (RTSP b) where fmap f (RTSP r) = RTSP $ \evt -> fmap f $ r evt {- The (.) operator for EventStream has to deal with several scenarios: 1: (Wait k2) . (Wait k1). This is simple because the only possible event is an input that is piped into k1. The result is the composition of the result with (Wait k2), achieived using the instance for RTSP. 2: (Wait k2) . es1@(Emit e1 es1a k1). In this case there are two timelines: a) e1 : (k2 e1) . es1a -- Event e1 is passed to k2 and the result composed with es1a. b) ev e1: (Wait k . es1a) (k2 . k1) b) e1 e2 : k2 e1 . es1a -- e2 never happens because it is overridden by e1 c) ev ... : es2 . (k1 ev) -- ev overrides e1, and the new output is fed to es2. d) e1 ev e2 : esProcess (es2b . es1a) ev -- e1 overrides e2, giving es1a and es2b to process ev. e) e2 ev e1 : Emit e2 (es2a . (k1 ev)) -- e2 is emitted, then ev overrides e1. -} instance Category EventStream where -- id :: EventStream b c id = Wait id -- (.) :: EventStream c d -> EventStream b c -> EventStream b d Finish . _ = Finish (Wait _) . Finish = Finish (Emit e es1 _). Finish = let future = Emit e (es1 . Finish) (RTSP $ \_ -> future) in future (Wait k2) . (Wait k1) = Wait $ k2 . k1 es2@(Wait k2) . Emit e1 es1a k1 = let k = RTSP $ \ev -> if ev `isBefore` e1 then es2 . runRTSP k1 ev -- Timeline 2b else esProcess ev es -- Timeline 2a es = (runRTSP k2 e1) . es1a -- Future if ev never happens in case es of Emit e2 es2b _ -> Emit e2 es2b k Wait _ -> Wait k Finish -> Wait k -- if ev `isBefore` e1 then es never happens. es2@(Emit e2 es2a _) . 
es1@(Wait k1) = Emit e2 (es2a . es1) (RTSP $ \ev -> es2 . runRTSP k1 ev) es2@(Emit e2 es2a k2) . es1@(Emit e1 es1a k1) = let es = (runRTSP k2 e1) . es1a in if e1 `isBefore` e2 then -- Timelines 3b, 3c and 3d. e2 never happens. let k = RTSP $ \ev -> if ev `isBefore` e1 then es2 . runRTSP k1 ev -- Timeline 3c else esProcess ev es -- Timeline 3d in case es of Emit e3 es3 _ -> Emit e3 es3 k Wait _ -> Wait k Finish -> Wait k -- As above. else -- Timelines 3a, 3c and 3e. let k = RTSP $ \ev -> es2 . runRTSP k1 ev in Emit e2 (es2a . es1) k instance Category RTSP where -- id :: RTSP b c id = RTSP $ \ev -> Emit ev id id -- (.) :: RTSP c d -> RTSP b c -> RTSP b d r2 . r1 = RTSP $ \e0 -> (Wait r2) . runRTSP r1 e0 -- | Execute an RTSP against a list of events. Useful for testing. simulateRTSP :: RTSP b c -- ^ The processor to execute. -> [Event b] -- ^ The events must be finite and in chronological order. This is unchecked. -> [Event c] simulateRTSP r = esPeek . foldl (flip esProcess) (Wait r) -- | Execute an RTSP in the IO monad. The function returns immediately with an action for pushing events into the RTSP. execRTSP :: RTSP b (IO ()) -- ^ The output of the RTSP is a series of action events that will be executed in a separate thread sequentially at the -- times given. The actions may, of course, fork their own threads as necessary. -- -- execRTSP uses 'atomically', so it cannot be called within 'unsafePerformIO'. -> IO (b -> IO ()) execRTSP r = do eventQ <- newTChanIO let putValue v = do t <- getCurrentTime atomically $ writeTChan eventQ $ Event t v execStream (Emit ev es r1) = do {- "c1" and "c2" are threads that race to put a value in "var". "c1" waits until the next event emission time and then puts "Nothing". "c2" waits for the next input on "eventQ" and puts "Just" the event. Once this happens "mev2" can get the result and both threads are killed. 
The tricky bit is avoiding a race condition in "c2" where it reads the channel and is then killed by the timeout, which would result in a dropped event. "var" is never emptied, so if "c1" wins the race then "c2" can never complete before being killed, so the event remains on the queue ready for the next race. -} var <- newEmptyTMVarIO c1 <- timeout (eventTime ev) (atomically $ putTMVar var Nothing) c2 <- forkIO $ atomically $ do ev1 <- readTChan eventQ putTMVar var $ Just ev1 mev2 <- atomically $ readTMVar var killThread c1 killThread c2 case mev2 of Just ev2 -> execStream $ runRTSP r1 ev2 Nothing -> do () <- eventValue ev execStream es execStream (Wait r1) = do ev2 <- atomically $ readTChan eventQ execStream $ runRTSP r1 ev2 execStream Finish = return () _ <- forkIO $ execStream $ Wait r return putValue -- Execute the given action at the given time. Returns immediately with the ThreadID that will execute the action. timeout :: UTCTime -> IO () -> IO ThreadId timeout t action = forkIO $ do t0 <- getCurrentTime longDelay (round ((t `diffUTCTime` t0) * 1000000)) action where -- threadDelay takes an Int, which may be as small as 2^29 (a bit over 5 minutes). longDelay :: Integer -> IO () longDelay dt = let (n,dt1) = if dt > 0 then dt `divMod` cycleT else (0,0) in do replicateM_ (fromIntegral n) $ threadDelay (fromIntegral cycleT) threadDelay (fromIntegral dt1) cycleT = 500000000 -- 500 seconds in uSec. Small enough to fit into an Int. -- | An RTSP that never emits events regardless of its inputs. eventSink :: RTSP b c eventSink = RTSP $ \_ -> nullStream -- | A pure function converted into a stream processor stream :: (b -> c) -> RTSP b c stream f = fmap f id -- | Deliver an event to two stream processors and merge the resulting event -- streams. splitRTSP :: RTSP b c -> RTSP b c -> RTSP b c splitRTSP (RTSP r1) (RTSP r2) = RTSP $ \evt -> esMerge (r1 evt) (r2 evt) where -- | Convert an list of events into an event stream. Events coming into this -- stream are ignored. 
The list must be in chronological order. streamFromList :: [Event c]-> EventStream b c streamFromList [] = Finish streamFromList (e:es) = Emit e (streamFromList es) (RTSP $ \_ -> streamFromList (e:es)) -- |. accumulate :: RTSP b c -> RTSP b c accumulate r = rAccum [] r where rAccum evs@(ev1:evs1) r1 = RTSP $ \ev2 -> if ev2 `isBefore` ev1 -- hpc says this is always true. Can this be proved? then sAccum evs $ runRTSP r1 ev2 else Emit ev1 (sAccum evs1 $ runRTSP r ev2) (rAccum evs r1) rAccum [] r1 = RTSP $ \ev -> sAccum [] $ runRTSP r1 ev sAccum evs1@(ev1:evs1a) es2@(Emit ev2 es2a r2a) = let future = if ev2 `isBefore` ev1 then Emit ev2 (sAccum evs1 es2a) (rAccum (esPeek future) r2a) else Emit ev1 (sAccum evs1a es2) (rAccum (esPeek future) r2a) in future sAccum [] es2@(Emit ev2 es2a r2) = Emit ev2 (sAccum [] es2a) (rAccum (esPeek es2) r2) sAccum evs1@(ev1:evs1a) k2@(Wait r2) = Emit ev1 (sAccum evs1a k2) (rAccum evs1 r2) sAccum [] (Wait r2) = Wait $ accumulate r2 sAccum evs Finish = streamFromList evs -- | repeatEvent :: [NominalDiffTime] -> RTSP b b repeatEvent dts1 = RTSP $ \(Event t0 v) -> let rStream dt0 (dt:dts2) | dt0 <= dt = Emit (Event (dt `addUTCTime` t0) v) (rStream dt dts2) (repeatEvent dts1) | otherwise = error "Control.Applicative.RTSP.streamRepeat: negative time increment." rStream _ [] = Wait (repeatEvent dts1) in rStream 0 dts1 -- | Delay input events by the specified time, but given an event stream @{ev1, ev2, ev3...}@, if ev2 arrives before -- ev1 has been emitted then ev1 will be lost. delay0 :: NominalDiffTime -> RTSP b b delay0 dt = repeatEvent [dt] -- | Delay input events by the specified time. -- -- Unfortunately this requires O(n) time when there are @n@ events queued up due to the use of "accumulate". delay :: NominalDiffTime -> RTSP b b delay = accumulate . delay0 -- | A conditional stream: events matching the predicate will be passed to the stream. 
type Cond a b = (a -> Bool, RTSP a b) -- | Conditional stream execution: only certain events will be accepted. streamFilter :: Cond a b -> RTSP a b streamFilter (p, r1) = rtspFilter p >>> r1 -- | Send each event to all the streams that accept it. cond :: [Cond a b] -> RTSP a b cond = mconcat . map streamFilter -- | Send each event to the first stream that accepts it, if any. cond1 :: [Cond a b] -> RTSP a b cond1 = foldr ifThenElse eventSink -- |. ifThenElse :: Cond a b -> RTSP a b -> RTSP a b ifThenElse (p,rThen) rElse = ifRTSP (Wait rThen) (Wait rElse) where ifRTSP es1 es2 = RTSP $ \ev -> if p $ eventValue ev then ifStream (esProcess ev es1) es2 else ifStream es1 (esProcess ev es2) ifStream Finish Finish = Finish ifStream Finish es@(Wait _) = Wait $ ifRTSP Finish es ifStream Finish es@(Emit e es1 _) = Emit e (ifStream Finish es1) (ifRTSP Finish es) ifStream es@(Wait _) Finish = Wait $ ifRTSP es Finish ifStream es@(Emit e es1 _) Finish = Emit e (ifStream es1 Finish) (ifRTSP es Finish) ifStream es1@(Wait _) es2@(Wait _) = Wait $ ifRTSP es1 es2 ifStream es1@(Emit e es1a _) es2@(Wait _) = Emit e (ifStream es1a es2) (ifRTSP es1 es2) ifStream es1@(Wait _) es2@(Emit e es2a _) = Emit e (ifStream es1 es2a) (ifRTSP es1 es2) ifStream es1@(Emit e1 es1a _) es2@(Emit e2 es2a _) = if e1 `isBefore` e2 then Emit e1 (ifStream es1a es2) (ifRTSP es1 es2) else Emit e2 (ifStream es1 es2a) (ifRTSP es1 es2) -- |. newtype RTA s c v = RTA {unRTA :: s -> Seq (Event c) -> UTCTime -> (v, s, Seq (Event c), UTCTime)} {- In the RTA definition code, the following initial variable letters are used: b - The type of input events c - The type of output events f - A function of whatever type. q - Queue of output events, of type Seq (Event c) s - Used for both the type and value of the state. t - Time, of type UTCTime v - Used for both the value returned by the current action and its type. z - Non-termination flag. True if the RTSP should respond to future events. 
-} instance Functor (RTA s c) where fmap f rv = RTA $ \s q t -> let (v1, s1, q1, t1) = unRTA rv s q t in (f v1, s1, q1, t1) instance Monad (RTA s c) where return v = RTA $ \ s q t -> (v, s, q, t) rv >>= f = RTA $ \s q t -> let (v1, s1, q1, t1) = unRTA rv s q t in unRTA (f v1) s1 q1 t1 -- | Get the current time. This is the event time plus any pauses. now :: RTA s c UTCTime now = RTA $ \s q t -> (t, s, q, t) -- | Get the current state. get :: RTA s c s get = RTA $ \s q t -> (s, s, q, t) -- | Put the current state. put :: s -> RTA s c () put v = RTA $ \_ q t -> ((), v, q, t) -- | Apply a function to the current state. modify :: (s -> s) -> RTA s c () modify f = fmap f get >>= put -- | Emit a value as an event. emit :: c -> RTA s c () emit v = RTA $ \s q t -> ((), s, q |> Event t v, t) -- | Pause before the next step. This does not actually delay processing; it merely increments the time of any emitted events. pause :: NominalDiffTime -> RTA s c () pause dt | dt >= 0 = RTA $ \s q t -> ((), s, q, addUTCTime dt t) | otherwise = error $ "pause: negative interval of " ++ show dt -- |. execRTA :: s -- ^ The initial state. State persists between input events. -> (b -> RTA s c Bool) -- ^ A function from the input value to an action. If the action returns @True@ then subsequent input events -- will run the action again. If it returns @False@ then the RTSP finishes and will not respond to further events. -> RTSP b c execRTA s f = RTSP $ \ev -> let t = eventTime ev v = eventValue ev (z, s1, q, _) = unRTA (f v) s empty t queueStream q1 = case viewl q1 of EmptyL -> if z then Wait $ execRTA s1 f else nullStream c :< q2 -> Emit c (queueStream q2) (if z then execRTA s1 f else eventSink) in queueStream q -- | Like "execRTA", except that output events are accumulated. accumulateRTA :: s -> (b -> RTA s c Bool) -> RTSP b c accumulateRTA s f = accumulate $ execRTA s f
Source: http://hackage.haskell.org/package/Dflow-0.0.1/docs/src/Control-RTSP.html
Breaking news from around the world
Get the Bing + MSN extension
I'm seeing some strange issues with SQLite in a Windows Phone 8.1 (universal app) running on the latest Windows Phone 10 builds. The SQLite library often fails to load the database and return any results.
I suspect that, as Windows Phone 10 has moved closer to the Windows 10/8 storage namespaces and file system, the dedicated WP81-winrt SQLite library is no longer valid and we may want to switch to the winrt81 library (previously used for tablets) instead, but I'm not sure. Or maybe this just won't work at all.
We're currently using sqlite-wp81-winrt-3080801 and, for the tablet, sqlite-winrt-3080801.
Any thoughts?
Source: https://social.msdn.microsoft.com/Forums/windowsapps/en-US/806d2868-2a28-4f6a-8650-6e1e3cf8acde/wp81-windows-phone-81-universal-app-with-sqlite-doesnt-work-correctly-on-windows-phone-10?forum=wpdevelop
Our DataContext method can look like this:
[Function(Name = "dbo.GetFirstName")]
public ISingleResult<DummyClass> GetFirstName(int id)
{
    IExecuteResult result = this.ExecuteMethodCall(this, (MethodInfo)MethodInfo.GetCurrentMethod(), id);
    return (ISingleResult<DummyClass>)result.ReturnValue;
}
UPDATE: The component controller was removed from the MVC framework before the RTM release. For an updated version of this post, click here.
At some point when creating a web app, you're going to want some reusable UI components. This might be because you want the same visual UI snippet repeated more than once on a single page or it might be because you want to use the same component on multiple pages. In a traditional ASP.NET web app, typically you would use a User Control for this type of thing. In MVC you still have the option of using a User Control but you also have the option of rendering your UI snippet with a ComponentController. So the question is, when do you use which? There are pros and cons to each.
The most common scenario for a user control is that you pass in the Model from the containing view page. For example, consider an AddressEditor user control that you use multiple times on a page to edit Contacts: once for the Home address and once for the Work address. It might look like this:
<%=Html.RenderUserControl("~/Views/Home/AddressEditor.ascx", new AddressViewData(ViewData.Model.Contact.HomeAddress))%>
This is fine for those simple cases, but what happens when you want your user control to have its own self-contained logic? For example, what if you're building a portal or a portal-like website where you want to have several self-contained "widgets" on your page? You don't want the parent controller having to keep track of the models for 10 different widgets and then have to pass them in to each one. A good answer to this is to use a ComponentController rather than a View user control. A ComponentController can render its own views, and the containing page can just look like this:
<div id="firstWidget" class="widget">
    <%=Html.RenderComponent<MvcWidget.Controllers.WidgetCompController>(c => c.Widget1()) %>
</div>
In this case, the controller class basically looks like most any other controller class with the exception that it inherits from ComponentController:
public class WidgetCompController : ComponentController
{
    public void Widget1()
    {
        IWidgetManager widgetManager = new Widget1Manager();
        List<Foo> list = widgetManager.GetFooData();
        RenderView("Widget1", list);
    }
}
However, there still are some uses for view user controls here. Let's say you want to render your widgets via an AJAX call. One feature of the "normal" MVC controllers is that the views rendered can be either aspx or ascx. Rendering an ascx user control from an MVC Controller is a good choice when you want to implement the AJAX HTML Message design pattern, where you return only an HTML snippet from the server rather than an entire page. In this case, we can render our user controls with an AJAX jQuery call. So if we had a couple of HTML divs (called "secondWidget" and "thirdWidget") and a WidgetController class with action methods Widget2() and Widget3(), we could implement this simple jQuery function:
<script type="text/javascript">
    $(function() {
        $('#secondWidget').load('/Widget/Widget2');
        $('#thirdWidget').load('/Widget/Widget3');
    });
</script>
These are some rough ideas to get you started. The complete code sample can be downloaded here.
Source: http://geekswithblogs.net/michelotti/archive/2008/07.aspx
We have been struggling quite a bit with a good approach for modularizing our ADF web applications through the use of (stand alone) Task Flows that are developed in independent projects and assembled into a single Web Application from ADF Libraries. In theory, this is a very structured, decoupled way of developing potentially complex ADF Web Applications – while allowing for reuse. The contextual events mechanism in combination with the task flow input parameters allow definition of a clear interface through which to reuse the task flow. So all seems well.
However, when you try to put this theoretical bliss into actual practice, there are some limitations that you run into. One of the tricky issues we had to deal with is: how can we debug our web application when part of the source of the application is reused from ADF Libraries? How can we put breakpoints in the sources that are part of the ADF Library?
On closer inspection, there seems to be a relatively easy way for doing this – using an additional library definition in JDeveloper that refers to the sources that form the foundation of the ADF Library.
Let’s take a quick look at how this would work:
Create TaskFlow TaskOne and ADF Library
Steps:
1. Create New Fusion Web Application TaskOne.
All subsequent steps are for the ViewController project:
2. Create a class NumberGenerator that returns a list of Integers.
package nl.amis.app.taskone.view;

import java.util.ArrayList;
import java.util.List;

public class NumberGenerator {
    public List<Integer> getNumbers() {
        List<Integer> numbers = new ArrayList<Integer>();
        for (int i = 0; i < 4; i++) {
            numbers.add(i);
        }
        return numbers;
    }
}
3. Create a new task flow TaskOne.
Configure a managed bean in this task flow based on the NumberGenerator class.
4. Add a View activity to the task flow; call it numberView.
5. Edit the JSFF file. Add a panelHeader and, inside it, a Table based on the list of integers returned by the managed bean based on NumberGenerator.
<af:panelHeader ...>
  <af:table ...>
    <af:column ...>
      <af:outputText .../>
    </af:column>
  </af:table>
</af:panelHeader>
6. Create a new deployment profile – of type ADF Library; pick some central directory (for example C:\ADF_LIBRARIES) as the deployment destination; set the name of the archive to TaskOneLibrary.
7. special step: create another deployment profile – of type JAR – with only the Project Source Path as Contributor (as we only need sources in this jar file); set the name of this archive to TaskOneViewControllerSources.
8. Deploy according to both deployment profiles; this should create two JAR files.
Expose ADF Library as reusable asset in JDeveloper
Steps:
1. Make the resource palette visible in JDeveloper (choose Resource Palette from the View menu)
2. Create a new File System connection in the Resource Palette; link this connection to the directory (for example c:\ADF_LIBRARIES) where you deployed the ADF Library to.
3. The Resource palette should now show the TaskOneLibrary and the TaskOne task flow in it
Create Fusion Web Application using the ADF Library
Steps:
1. Create New Fusion Web Application BigWebApp.
All subsequent steps are for the ViewController project:
2. Create a new JSF page; add a PanelHeader; set a title for the page
3. Drag the TaskOne task flow in the TaskOneLibrary to the PanelHeader; when you drop it, you will be asked by JDeveloper whether to add the ADF Library to the project. Press Yes to accept this.
You can make libraries visible in the project navigator by checking the option Show Libraries in the dropdown in the header of the navigator.
When that option has been enabled, the ADF Library node shows all resources available in the project from ADF Libraries, such as the one just added.
4. special step: open the project properties for the ViewController project; go to the Libraries and Classpath tab and add a new library (press add, then press new). Call the Library TaskOneLibrarySources. Click on the Source Path node; click on Add Entry and browse to the TaskOneViewControllerSources.jar.
Note that this library does not impact the compilation, build or deploy process in any way, as it only contains sources; it provides a means to debug the ADF Library code by making the sources available to JDeveloper, that is all.
5. When the option to show libraries in the project navigator is active, the new library TaskOneLibrarySources should be visible. You can expand it to inspect all sources that are part of this library. Open the NumberGenerator class – and create a debug breakpoint in the method that returns the list of numbers.
6. Debug the BigWebApp application. You should see the debugger hitting the breakpoint that you added in the NumberGenerator class that is used in the task flow that we added from the ADF Library.
After resuming processing from the breakpoint, at long last the utterly unimpressive BigPage is shown in the browser:
Conclusion
Clearly, in this way we can debug sources for objects used inside reusable task flows that are shipped in ADF Libraries, as long as those sources are made available to us alongside the ADF Libraries themselves in source jars. Note that without the source jars, we can still use those ADF Libraries and the task flows inside them; we will just not be able to properly debug the web application in its entirety.
Resources
Download sources for the applications discussed in this article: DebugADFTaskFlowsInADFLibraries.zip.
Excellent! Just what I was looking for. Tnx Lucas
https://technology.amis.nl/2010/04/03/adf-11g-debugging-task-flows-embedded-from-adf-libraries-using-source-code-jars/
Sorry this took so long – work’s been really busy lately.
Clearly I’m not enough of an egotist[1] to believe that my style is in any way the “one true style”, but over the course of writing the series on style, I realized that I’ve never written down my current coding style, and I think it would be vaguely humorous to write down “Larry’s Coding Conventions”.
So here goes. I’m using the keywords defined in RFC2119 within.
Larry’s Coding Conventions
1. Files – Global file information.
All source files (.C, .CXX, .CPP, etc) MUST be plain text files, with CR/LF (0x0D0x0A) at the end of each line. Each file MUST end with a CRLF. C++ code SHOULD be in files with an extension of .CPP, C code SHOULD be in files with an extension of .C. Header files SHOULD have an extension of .H.
Tab size MUST be set to 4, and tab characters MUST NOT appear in source files; this allows a user to use any editor and still have the same experience while viewing the code.
Every source file MUST start with the following header:
/*++
* <Copyright Notice (talk to your legal department for the format of the copyright notice)>
*
* Module-Name:
* <file name>
*
* Author:
* <author full name> (<author email address>) <date of creation>
*
* Abstract:
* <Brief abstract of the source file>
*
* Revision History:
*
*--*/
Filenames SHOULD be representative of the contents of the files. There SHOULD be one class (or set of functionality) per file. So the CFoo class should be located in a source file named “Foo.cpp”.
Global variables SHOULD begin with a prefix of g_. Care MUST be taken when declaring global variables, since they are likely sources of synchronization issues.
Source code lines SHOULD be no longer than 100 characters long.
2. Class definitions
All classes SHOULD start with C in the class name. Member variables of classes MUST start with an _ character. Member variables are PascalCased (so a member variable could be _MyMemberVariable).
Classes that are reference counted SHOULD follow the OLE conventions – an AddRef() and a Release() method should be used for reference counting. If a class is reference counted, then the destructor for that class SHOULD be private.
3. Functions and Methods – Info pertaining to various functions.
Each routine MUST have a function header, with the following format:
/*++
* Function Name
*
* <Description of the function>
*
* Inputs:
* <Description of the parameters to the function, or "None.">
*
* Returns:
* <Description of the return value, or "None.">
*
* Remarks:
* <Relevant information about the function, may be empty>
*
*--*/
Function names SHOULD be representative of their function. All function names MUST be in PascalCase as per the CLR coding standards. If the project is using an auto-doc tool, it’s acceptable to tag the inputs closer to their definition.
Parameters to functions are also in PascalCase (note that this is a difference from the CLR coding standard).
Local variables in functions should be camelCased. This allows for the reader to determine the difference between local variables, parameters and class members easily.
Parameter names SHOULD NOT match the names of methods in the class.
Code SHOULD have liberal use of vertical whitespace, with descriptive block comments every five or so lines of source code.
Descriptive comments SHOULD have the format:
:
:
//
//<space><space>Descriptive Comment Line 1
//<space><space>Descriptive Comment Line 2
//
:
:
Each descriptive comment starts 4 spaces from the left margin; there is a single empty comment line before and after the descriptive comment, and two spaces between the // and the start of the comment text.
Functions SHOULD occupy no more than one screen, or about 70 lines, including comments (not including headers). This means that each function SHOULD be at most about 40 lines of code.
4. Predefined Identifiers (Manifest Constants)
Manifest constants SHOULD be in all upper-case. Instead of using #define, enum’s or const’s SHOULD be used when possible, especially if the value being defined is unique, since it allows for better representation in the debugger (yeah, I know I’ve said that source level debuggers are a crutch, but…).
5. Code layout
Code is laid out using BSD-style – braces appear on their own line at the same indentation level as the conditional, code is indented 4 spaces on the line after the brace.
So an if/else statement is formatted as:
if (i < 0)
{
i += 1;
}
else
{
i += 1;
}
In general, unless semantically necessary, I use
<variable> += 1 instead of
<variable>++ or
++<variable>.
Variable declarations SHOULD appear each on their own line.
6. Code Example
The following is an example of code in “Larry’s Style”.
/*++
* ErrorPrint
*
* Print a formatted error string on the debug console.
*
* Inputs:
* Format - printf style format specifier
*
* Returns:
* None.
*
* Remarks:
* The total printf string should be less than DEBUG_STRING_BUFFER_SIZE bytes.
*
*--*/
static void ErrorPrint(LPCSTR Format, ...)
{
int result = 0;
static char outputBuffer[DEBUG_STRING_BUFFER_SIZE];
va_list marker;
va_start(marker, Format);
result = StringCchVPrintfA(outputBuffer, DEBUG_STRING_BUFFER_SIZE, Format, marker);
if (result == S_OK)
{
OutputDebugStringA(outputBuffer);
}
va_end(marker);
}
[1] Ok, I’ve got a blog, that makes me an egotist, but not enough of an egotist[2]
[2] Apologies to KC for stealing her footnoting style 🙂
Edit: pszFormat->Format.
What the…?!??!
Why aren’t you using XML comment format?
That’s redundant data that’s very prone to becoming stale. Also, and especially with long-lived projects, these fields are likely to grow very large if they’re kept up to date. Even after 10 or 20 revisions, the header is going to be huge.
The data is already stored in your source control system, or with respect to the module name in the name of the file itself. Why duplicate this data in an error prone, unguaranteed manner? If I need to know who did what and when to a file, I’ll look it up in the source control history and be assured that I’m getting accurate data.
G. Man: XML comment format doesn’t work for unmanaged C++ code, and that’s all I write these days. For C# code, I’d probably adopt to that.
Todd: Your point is valid, the simple reason is that it’s part of the Windows coding standard from way back when, and I’ve adopted it.
Hope your vacation was relaxing/exciting (delete as appropriate), and I’m glad to have you back — I was getting a bit worried for a moment that you’d done an Eric Lippert and disappeared…
Now, to the topic at hand — with the exception of the indent (I prefer 2 spaces purely for aesthetic reasons — it probably makes code harder to read, but big expanses of white space on the left hand side unnerves me) and the braces (I open on the same line as the conditional, probably as an unconscious analogy with the "Then" in Basic; 90% of my coding is in VB) your style seems to pretty much match mine.
Unfortunately where I work there’s only a few coders all working pretty much in isolation, and a lot of them seem oblivious to the benefits of consistency of style, meaningful variable names, etc. As it’s all a bit ad-hoc (and most of them are senior to me) there’s nothing I can do about it, but I always get a little nervous when I have to modify something someone else has worked on as I spend most of the time trying to fathom out what "OptionButton49" actually signifies; apparently even renaming widgets when you put them on a form is too much work for some people. (And one guy uses no indents at all which beggars belief.)
Gaah, I hate OptionButton49. It’s the first thing I do when I add a control to a windows forms (or MFC) application – rename the stupid control to something meaningful.
Interesting. I don’t work on the Windows platform at all — I’m purely Mac OS X, worked at Apple for a number of years on various versions of Mac OS — but I have adopted very similar coding conventions (function parameters should be in lowercase, though, and I use the Mac style for constant names). I wonder if this code style is an effect of having to maintain code written by other people.
For just one example, I ran across a bug in someone else’s code once that looked like this:
thing[i++] = i;
Here, the value of i is undefined per the C standard, so Microsoft VC was doing one thing with it, and two different versions of CodeWarrior did two different other things with it.
I had been previously warned away from the increment operators by more experienced hands, but that bug certainly put me off them for life.
Chris, code like that’s exactly why I avoid expressions with side effects.
Nick, that’s because you’ve not actually written code using true hungarian (not the hungarian you see in the MS examples).
Once you’ve spent a month or two writing hungarian code, it’s not NEARLY as bad as you’d think, and in many ways it can be quite nice.
I’ve enjoyed all your entries on coding style and agree with almost everything you say.
I used leading underscores for member variables in the past until I came across the advice of Herb Sutter (Exceptional C++, Item 20) to avoid them. To quote:
"Yes, popular books like Design Patterns do use leading underscores in variable names, but don’t do it. The standard reserves some leading-underscore identifiers for the implementation, and the rules are hard enough to remember—for you and for compiler writers—that you may as well avoid leading underscores entirely[1]. Instead, my own preference is to follow the convention of designating member variable names with a trailing underscore."
He goes on to give examples in which some implementations use #define macros that stomp all over member names.
Simon,
That’s a REALLY good point, there IS a conflict with reserved names. The good news is that it’s unlikely that the language specific features will be PascalCase – I know I’m playing with fire though.
On the other hand, I don’t write code for cross platform applications – the only person I write code for is Microsoft (and this blog).
pszFormat or Format? Maybe they’re not supposed to be the same variable, but it seems confusing to (partially) Hungarianize some string variable names but not others. Or would it be PszFormat to give your Pascal casing?
Thanks for not using on lpcwszFormat. Ick!
I know this is off-topic but why is everyone always ragging on the pre-processor directives (Macros)? I find #define, #if, etc etc very useful in my code and have never had a problem with them.
> (so a member variable could be
> _MyMemberVariable).
I used the mouse to copy that as soon as I saw it, preparing to paste it. Then another language lawyer beat me to it, but then you continued:
> The good news is that it’s unlikely that the
> language specific features will be PascalCase
What do you mean by "language specific features" and what is the relationship between them and the standard? When your program has undefined behaviour, an implementation is free to do anything including defining a meaning for your program and doing what you wanted. Meanwhile, not only are other implementations free to do whatever they wish, but your favourite implementation is free to change its mind tomorrow.
> I know I’m playing with fire though.
That’s only part of it. You’re also firing random bullets at your customers. And maintainers inside your company who have to clean up your code will hate you.
By the way, someone once taught me a clever trick, to name arguments of function-like MACROS with a single leading underscore, but of course followed by a lower-case letter not an upper-case letter. No other identifiers would start with an underscore except for those defined by the standard, and except for those defined (and documented) by an implementation when the application needed to use an implementation-defined feature. This really helped assist the readability of macros. Of course when reading the rest of the program it’s still not easy to look and guess whether an innocent-looking function call is really a macro containing a goto of some kind or other, but that problem exists regardless of underscore usage.
Norman: Language specific features: The _ is reserved for local extensions of the C standard – those are either language specific changes (new reserved identifiers like __gc in C++/CLR) or runtime library extensions (like _strnicmp).
Manip: The problem with Macros is twofold. First, they don’t appear in the symbol file, thus they don’t show up in symbolic debuggers, second that the hide complexity.
Drew: I missed that, thanks for pointing it out.
Oh, and Drew: the correct hungarian for a const wchar * is wsz – hungarian doesn’t differentiate between const and non const (I need to verify this when I get back to work). Either way, for 32bit flat code, lp isn’t part of hungarian, and the const can’t be a prefix of ‘c’ because the ‘c’ prefix is for count – lpcwsz is a long pointer to a count of wide character strings.
I’m kinda leery of the ‘_ for members’ for the same reasons as others. We use m_ for members, g_ for globals, s_ for statics, and that seems to work pretty well.
I think my faith in hungarian notation would be stronger if compilers offered you the ability to ‘strongly type’ it.
e.g. szApple = 4; // Does not compile.
Until that time, I’ll continue to live in the risky world of non-hungarian love.
Aha – Pascal case params and don’t use Hungarian on them? Ok. That clears it up for me.
You’re probably right about that lpcwsz not being proper Hungarian. You can find that notation used in some Windows sources, though. Srch for "lpcwszUrl". I think that one is especially funny.
"Tab size MUST be set to 4, and tab characters MUST NOT appear in source files, this allows for a user to use any editor and still have the same experience while viewing the code."
See, I don’t get this – I also use 4-space tabs, but I DO use tab characters – so that anybody who looks at my code sees indentation at a level they’re comfortable with. Using tabs instead of spaces means that this is one aspect of coding style which can be varied from person-to-person within a project according to preference without impacting anybody else.
Great series. I’ve enjoyed it.
I had a few questions about your coding style. Do other programmers who have to read or maintain your code like the header and block comments? Grow to like them? Keep them up-to-date? Do you think that they are worth the time it takes to maintain them?
Larry,
would you care to elaborate on the rationale of /*++ … –*/ header comment convention? Do the ++ and — play some significant role?
Larry: …or things like _Boolean, _Complex, _Imaginary, that appeared in (IIANM) C99.
Mo,
If every developer on the team also uses only tabs exclusively, this works. But if any developer uses an editor that saves spaces, then the code turns into an unmitigated mess really quickly.
Centaur: The ++ and — don’t play any part, they just look pretty.
David: It’s FAR easier if a team defines a coding standard and sticks with that standard – then everyone uses the same standards and there’s no issue with someone having to "adopt" your standards. Having said that, the post that started this whole "style" series was a rant against developers who try to mix and match programming styles in a single source file – I think it’s a horrible idea.
Larry: You said Macros work to hide the complexity and say so like it is a bad thing.. ? Excuse me but I thought that was the POINT of using a Macro..
Actually, the VC++ tool refresh (apply on top of VS.NET 2005 Beta 1) does have XML documentation support!
Ramsey
11/30/2004 7:39 PM Larry Osterman
> Norman: Language specific features: The _ is
> reserved for local extensions of the C
> standard – those are either language
> specific changes (new reserved identifiers
> like __gc in C++/CLR) or runtime library
> extensions (like _strnicmp).
RTFS. The _ by itself is not reserved (in most situations), for example a program’s identifier can begin with _ and a lower case letter (except in a few situations). The _ followed by another _, and the _ followed by an upper-case letter exactly as you are doing, are reserved for the implementation to use however it wishes (except for a few standard-defined identifiers). The implementation does not have to distinguish runtime library from extensions or from any other characteristics which the implementation wishes to play with however it wishes. It owns that part of the namespace (except for a few identifiers that are owned by the standard).
I still don’t know what you mean by language-specific. C is C, is that language-specific? C++ is C++, is that language-specific? Although there are tons of things (especially in the C standard) which are hard to interpret because they say different things from what the writers said they intended, this is not part of it. When you name identifiers like that, you are shooting random bullets at the customers of your program.
https://blogs.msdn.microsoft.com/larryosterman/2004/11/30/what-does-larrys-style-look-like/
|
ML8511 UV Sensor Hookup Guide
Introduction:
Using the ML8511
The ML8511 sensor is very easy to use. It outputs an analog voltage that is linearly related to the measured UV intensity (mW/cm2). If your microcontroller can do an analog-to-digital conversion, then you can detect the level of UV.
Grab the following example:
Load it onto the Arduino of your choice.
Next, connect the following ML8511 breakout board to Arduino:
- ML8511 / Arduino
- 3.3V = 3.3V
- OUT = A0
- GND = GND
- EN = 3.3V
- Arduino 3.3V = Arduino A1
These last two connections are a little different. Connect the EN pin on the breakout to 3.3V to enable the device. Also connect the 3.3V pin of the Arduino to Arduino analog pin 1.
This example uses a neat trick. Analog to digital conversions rely completely on VCC. We assume this is 5.0V, but if the board is powered from USB this may be as high as 5.25V or as low as 4.75V. Because of this unknown window, it makes the ADC on the Arduino fairly inaccurate. To fix this, we use the very accurate onboard 3.3V reference (accurate within 1%). So by doing an analog to digital conversion on the 3.3V pin (by connecting it to A1) and then comparing this reading against the reading from the sensor, we can extrapolate a true-to-life reading, no matter what VIN is (as long as it’s above 3.4V).
For example, we know the ADC on the Arduino will output 1023 when it reads VCC. If we read 669 from the connection to 3.3V, what is the voltage powering the Arduino? It’s a simple ratio!
VCC / 1023 = 3.3V / 669
Solving for VCC, we get 5.05V. If you’ve got a digital multimeter, you can verify the 5V pin on your Arduino.
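The same ratio can be sketched outside the Arduino, for example in Python (the ADC counts 1023 and 669 are the values from the worked example above):

```python
# Estimate the true VCC from an ADC reading of the accurate on-board 3.3V reference.
# The 10-bit ADC reports 1023 when the input equals VCC, so:
#   VCC / 1023 == 3.3 / ref_counts
ADC_MAX = 1023       # full-scale count of the Arduino's 10-bit ADC
VREF = 3.3           # accurate on-board 3.3V reference

def estimate_vcc(ref_counts):
    """Return the supply voltage implied by the 3.3V-reference reading."""
    return VREF * ADC_MAX / ref_counts

def uv_voltage(uv_counts, ref_counts):
    """Convert raw UV-sensor counts to a voltage, using the 3.3V reference."""
    return uv_counts * VREF / ref_counts

vcc = estimate_vcc(669)
print(round(vcc, 2))  # -> 5.05, matching the worked example
```

The second helper applies the UV_Voltage / uvLevel = 3.3 / refLevel relation described below, so the sensor reading stays accurate regardless of what VCC actually is.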
Now that we know precisely what VCC is, we can do a much more accurate ADC on the UV voltage:
UV_Voltage / uvLevel = 3.3 / refLevel
uvLevel is what we read from the OUT pin, and refLevel is what we read on the 3.3V pin. Solving for UV_Voltage, we can get an accurate reading.
The ML8511 intensity graph
Mapping the UV_Voltage to intensity is straightforward. With no UV light the output starts at about 1V, rising to around 2.8V at the maximum of 15 mW/cm2. Arduino has a built-in map() function, but map() does not work for floats. Thanks to users on the Arduino forum, we have a simple mapfloat() function:
//The Arduino Map function but for floats
//From:
float mapfloat(float x, float in_min, float in_max, float out_min, float out_max)
{
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
The following line converts the voltage read from the sensor to mW/cm2 intensity:
float uvIntensity = mapfloat(outputVoltage, 0.99, 2.8, 0.0, 15.0); //Convert the voltage to a UV intensity level
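As a quick sanity check of that mapping, the same linear interpolation can be run in Python to confirm the endpoints (0.99V maps to no UV, 2.8V to full scale):

```python
def mapfloat(x, in_min, in_max, out_min, out_max):
    # Same linear interpolation as the Arduino sketch's mapfloat()
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

print(round(mapfloat(0.99, 0.99, 2.8, 0.0, 15.0), 6))  # -> 0.0 (no UV)
print(round(mapfloat(2.8, 0.99, 2.8, 0.0, 15.0), 6))   # -> 15.0 (full scale)
```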
Test your sensor by shining daylight or a UV LED onto the sensor. We’ve also found that a bright LED flashlight will change the reading slightly. What other devices around your house might output UV?
UV Burn!
With the UV sensor up and running, what can we do with it? If we integrate the UV exposure over time, we can calculate the total UV load. But how much UV is good?
From the Pacific University of Oregon:
… 15- to 30-min weekly exposure to UV-B required for Vitamin D synthesis …
Monitoring could be good for basic levels. But how much is too much? From the Canadian Centre for Occupational Health and Safety:
For the UV-A or near ultraviolet spectral region (315 to 400 nm), exposure to the eye should not exceed 1 milliwatt per square centimeter (1.0 mW/cm2) for periods greater than 1000 seconds (approximately 16 minutes).
The UV sensor could be used on a pair of eyeglasses, to make sure you don’t sunburn your eyes.
Because skin types vary greatly, it gets a bit harder to predict sun burn and skin damage. Luckily NOAA gives us some direction.
But we have a unit issue. Luckily, the US Navy clears that up:
Irradiance (a dose rate used in photobiology) is described in watts (unit of power) per square meter (W m⁻²) or watts per square centimeter (W cm⁻²). Radiant exposure (H) is dose, and is described in joules (unit of energy) per square meter (J m⁻²) or joules per square centimeter (J cm⁻²). Note that a watt is a joule per second; thus the dose rate (W cm⁻²) multiplied by the exposure duration (seconds) equals dose (J cm⁻²).
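Following that note (dose rate multiplied by time equals dose), a quick check of the CCOHS eye-exposure figure quoted earlier might look like this in Python (the numbers come from the quotes; the calculation itself is just an illustration):

```python
def uv_dose(irradiance_w_cm2, seconds):
    """Dose (J/cm^2) = dose rate (W/cm^2) x exposure duration (s)."""
    return irradiance_w_cm2 * seconds

# CCOHS UV-A guidance: eye exposure should not exceed 1.0 mW/cm^2
# for periods greater than 1000 seconds (~16 minutes).
rate = 1.0e-3            # 1.0 mW/cm^2 expressed in W/cm^2
dose = uv_dose(rate, 1000)
print(dose)              # -> 1.0, i.e. 1.0 J/cm^2 accumulated at that limit
```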
Resources and Going Further
We hope this gives you the starting point for doing lots of fun things with UV. Now you might want to checkout these tutorials:
https://learn.sparkfun.com/tutorials/ml8511-uv-sensor-hookup-guide
|
Normally, a module alias is immediately available in your application. But if you declare your module like this:
The bar alias is not available until the module has been instantiated (Yii::app()->getModule('bar')). This limitation is not likely to be fixed.
When using namespaces, you might see an error about the application not being able to find "Yii.php". If you run into this, there are 2 solutions.
1) Instead of
Tell PHP you want it to look within the Yii namespace so it knows where to find any Yii objects...
2) Change all instances of
to
Though ideally you would use the first solution.
Namespaces in PHP are like directory paths in a console (bash, DOS, etc.).
When you use the namespace PHP keyword, it is like executing cd a\specific\directory, except that the namespace is created if it does not exist.
Now everything that follows belongs to that namespace. This means that if you want to instantiate, extend or call a static method from, e.g., the foo class in another namespace, you have to reference it with its fully qualified name (or import it with use).
Yii introduces a very intuitive convention here: the namespace structure (if implemented) should be reflected in the physical directory structure, and it additionally makes its Path Alias convenience available for that purpose.
Please be my guest to follow these steps:
1. Create a new web app
2. Go to protected\components and create a folder foo
3. Move Controller.php into the foo folder and open it with an editor
4. At line 6, at the Controller class declaration, import this:
5. Open protected\controllers\SiteController.php for editing
6. Replace the SiteController class declaration with this:
As you will see, your new web app is still working fine, and the application path alias will point properly at the protected folder.
You can find more about php namespaces here
Enjoy coding :>
http://www.yiiframework.com/doc/guide/1.1/en/basics.namespace
|
Conducting a capital budgeting analysis
The board of directors of Trinity Hospital is working on a five-year strategic plan for the facility. One of the strategic goals is to build a new $1 million cancer research wing in five years. The group is concerned that current economic conditions might reduce revenues over the next five years and they are uncertain about the fate of the construction project. You are part of team tasked with conducting a capital budgeting analysis.
The entire SLP component of the course will involve different aspects of the capital budgeting analysis. You are to perform the calculations and interpret the findings in your SLP papers.
Identify the relevant assets as well as labor and non-labor costs categories that you will want to include in the analysis.
Submit the list of categories and an explanation for why you selected these items in a WORD document.
Construct an EXCEL spreadsheet using these categorical labels
Solution Preview
The main goal is to determine how I will be able to maximize my NPV, and that maximum NPV can of course be 0. The important thing is that it is not negative.
How do we plan this?
Remember that higher risk brings a higher return (in this case a higher discount rate); thus, as you project incomes further down the road, you also add risk and uncertainty. The same goes for CAPEX, of course. $1,000 spent 5 years from now is worth less than that same $1,000 today. Technically yes, but you also have to factor in inflation for some of these entries; as I will show later, in real terms the price may be even higher.
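The time-value point above can be made concrete with a small sketch. The 5% discount rate and the cash-flow figures here are illustrative assumptions only, not part of the Trinity Hospital case:

```python
def present_value(amount, rate, years):
    """Discount a future cash flow back to today's dollars."""
    return amount / (1 + rate) ** years

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] is year 0."""
    return sum(present_value(cf, rate, t) for t, cf in enumerate(cash_flows))

# $1,000 spent (or received) five years from now, discounted at 5% per year:
pv = present_value(1000, 0.05, 5)
print(round(pv, 2))  # -> 783.53: noticeably less than $1,000 today

# A toy project: pay $1,000,000 now, receive $250,000 per year for five years.
flows = [-1_000_000] + [250_000] * 5
print(round(npv(0.05, flows), 2))  # positive here, so the toy project clears the hurdle
```

As the prose notes, a higher discount rate (more risk) shrinks the present value of distant cash flows, which is exactly why far-future revenues contribute less to NPV.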
I will refrain from making ...
https://brainmass.com/business/capital-budgeting/conducting-a-capital-budgeting-analysis-454303
|
I came across some code today where the full path and executable name of the application was needed. It was obtained like this:
String fullPath = Application.ExecutablePath;
The Application class is a Windows Forms class, so the project had a reference to the System.Windows.Forms assembly.
But this particular app was non-visual (it was a Windows service), so the above approach is a bit heavy since the Windows Forms assembly would get loaded into memory and remain there until the app terminated.
A leaner approach is to use the Assembly class found in the System.Reflection namespace. This is in the mscorlib assembly, which is part of every .NET program.
String fullPath2 = Assembly.GetEntryAssembly().Location;
It’s important to use the GetEntryAssembly method and not GetCallingAssembly or GetExecutingAssembly, since those will not necessarily correspond to the application’s executable.
The only difference I noticed between the value returned by the above two approaches is the casing of the executable’s extension. Application.ExecutablePath returned <info>.EXE, while Assembly.GetEntryAssembly().Location returned <info>.exe. Aside from that the casing was the same.
Hope this helps.
https://larryparkerdotnet.wordpress.com/2010/10/21/obtaining-the-applications-path-and-executable-name/
|
#include <qstring.h>
Unicode characters are (so far) 16-bit entities without any markup or structure. This class represents such an entity. It is lightweight, so it can be used everywhere. Most compilers treat it like a "short int". (In a few years it may be necessary to make QChar 32-bit when more than 65536 Unicode code points have been defined and come into use.)
QChar provides a full complement of testing/classification functions, converting to and from other formats, converting from composed to decomposed Unicode, and trying to compare and case-convert if you ask it to.
The classification functions include functions like those in ctype.h, but operating on the full range of Unicode characters. They all return TRUE if the character is a certain type of character; otherwise they return FALSE. These classification functions are isNull() (returns TRUE if the character is U+0000), isPrint() (TRUE if the character is any sort of printable character, including whitespace), isPunct() (any sort of punctuation), isMark() (Unicode Mark), isLetter() (a letter), isNumber() (any sort of numeric character), isLetterOrNumber(), and isDigit() (decimal digits). All of these are wrappers around category(), which returns the Unicode-defined category of each character.
QChar further provides direction(), which indicates the "natural" writing direction of this character. The joining() function indicates how the character joins with its neighbors (needed mostly for Arabic) and finally mirrored(), which indicates whether the character needs to be mirrored when it is printed in its "unnatural" writing direction.
Composed Unicode characters (like å) can be converted to decomposed Unicode ("a" followed by "ring above") by using decomposition().
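The same composed-to-decomposed idea can be illustrated outside Qt with Python's unicodedata module; the underlying Unicode character data is the same, even though the API differs:

```python
import unicodedata

s = "\u00E5"  # 'å' in its composed form
decomposed = unicodedata.normalize("NFD", s)  # canonical decomposition

# 'å' decomposes into 'a' (U+0061) followed by COMBINING RING ABOVE (U+030A)
print([hex(ord(c)) for c in decomposed])   # -> ['0x61', '0x30a']
print(unicodedata.category("\u00E5"))      # -> 'Ll' (letter, lowercase)
```

The 'Ll' category printed here is the same Unicode general category that QChar's category() exposes.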
In Unicode, comparison is not necessarily possible and case conversion is very difficult at best. Unicode, covering the "entire" world, also includes most of the world's case and sorting problems. Qt tries, but not very hard: operator==() and friends will do comparison based purely on the numeric Unicode value (code point) of the characters, and upper() and lower() will do case changes when the character has a well-defined upper/lower-case equivalent. There is no provision for locale-dependent case folding rules or comparison; these functions are meant to be fast so they can be used unambiguously in data structures. (See QString::localeAwareCompare() though.)
The conversion functions include unicode() (to a scalar), latin1() (to scalar, but converts all non-Latin-1 characters to 0), row() (gives the Unicode row), cell() (gives the Unicode cell), digitValue() (gives the integer value of any of the numerous digit characters), and a host of constructors.
More information can be found in the document About Unicode.
Definition at line 73 of file qstring.h.
http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQChar.html
“Write once, run anywhere” seems to be the mantra that finds favour with application developers nowadays. This reduces the need for developers to write a lot of redundant code. .NET Core, an open source offering from Microsoft, (yes, Microsoft!) is just the tool for writing code for a cross-platform application that will work on Windows, Linux and macOS systems.
.NET Core is a free and open source framework for developing cross-platform applications targeting Windows, Linux and macOS. It is capable of running applications on devices, the cloud and the IoT. It supports four cross-platform scenarios: ASP.NET Core Web apps, command line apps, libraries and Web APIs. The recently released .NET Core 3 (preview bits) supports Windows rendering forms like WinForms, WPF and UWP, but only on Windows.
Historically, the .NET Framework has worked only on the Windows platform. It was released in 2002, and millions of applications developed with it run on Windows servers and desktops. The stable version of the .NET Framework (Windows only) release is 4.7.2. With the help of the Mono project, developers were able to bring .NET code to mobile devices, macOS and Linux, but not the applications themselves, so they ended up maintaining two separate applications for cross-platform compatibility. Now, with the help of .NET Core, they can target any platform without any code changes; in short, they can now "write once, and run anywhere."
.NET Core is an open source development platform and Microsoft contributed it to the .NET Foundation in 2014. It is now one of the most active .NET Foundation projects. It can be freely adopted by individuals and companies for personal, academic or commercial purposes. Multiple companies use .NET Core as a part of apps, tools, new platforms and hosting services. Some of these companies make significant contributions to .NET Core on GitHub, and provide guidance on product direction as part of the .NET Foundation Technical Steering Group. This group of companies includes Microsoft, Google, Red Hat, Unity, Samsung and JetBrains.
With .NET Core, developers can build and run .NET code on more than one platform of choice. As of this November, .NET Core 2.1 is generally available (production ready) whereas .NET Core 3.0 is under development, having had a few preview releases. .NET 3.0 GA is expected by early 2019.
Why should developers care about .NET Core?
Until 2010, most organisations designed their applications using the SOA (service-oriented architecture) pattern with API interfaces. The server operating system was chosen first and then the client components, but once the server platform was confirmed, it was hard to switch between platforms. Java was well positioned and became a natural choice for creating cross-platform apps, with Node, Python, etc., now added to this list.
Key things to consider about .NET Core
1. Cross-platform: It runs on Windows, macOS and Linux operating systems. With .NET Core you can target any application type that’s running on any platform. Developers can reuse skills and code across all of them in a familiar environment. From mobile applications running on iOS, Android and Windows, to enterprise server applications running on Windows Server and Linux, or high scale microservices running in the cloud, .NET Core provides a solution for you.
2. Consistent across architectures: It runs your code with the same behaviour on multiple architectures, including x64, x86 and ARM.
3. Flexible deployment: .NET Core can be included in your app or installed side-by-side, user-wide or machine-wide. It can be used with Docker containers, which typically run Linux today, and it can host ASP.NET Core applications, allowing them to take advantage of the benefits of containers and microservices.
4. Modular: .NET Core supports the use of NuGet packages. Instead of assemblies, now developers can work with NuGet packages. In the .NET Framework, the framework updates are typically serviced through Windows updates, but .NET Core relies on its package manager to receive updates.
5. Microservices architecture: A microservices architecture allows a mix of technologies across a service boundary. This technology mix enables a gradual embrace of .NET Core for new microservices that work with other microservices or services. For example, you can mix microservices or services developed with the .NET Framework, Java, Ruby, or other monolithic technologies.
There are many infrastructure platforms available. Azure Service Fabric, for instance, is designed for large and complex microservice systems. Azure App Service is a good choice for stateless microservices. Other alternatives based on Docker fit any kind of microservices approach, as explained in the section on ‘Containers’. All these platforms support .NET Core and make them ideal for hosting your microservices.
6. Open source: .NET Core is open source and uses the MIT and Apache 2 licences. It is part of the .NET Foundation, which is an independent non-profit supporting the innovative, commercial yet open source-friendly .NET ecosystem. Over 25,000 developers from over 1700 companies outside Microsoft are currently contributing to the .NET open source code base. The .NET community is growing fast, and already has a good number of community projects and libraries available for free. In addition to the community and Microsoft, the Technical Steering Group members like Google, JetBrains, Red Hat, Samsung and Unity are guiding the future of the .NET platform.
7. Tools and productivity: .NET Core works with the editors and IDEs developers already use, including Visual Studio, Visual Studio Code, Sublime Text and Vim.
8. Command line tools: These include easy-to-use command line tools for local development and in continuous-integration scenarios.
9. Performance: .NET Core is fast. Yes, faster than Node.js and Go. Really fast! That means applications provide better response times and require less compute power. Stackoverflow serves 5.3 million page views a day on just nine servers. The popular TechEmpower benchmark compares Web application frameworks with tasks like JSON serialisation, database access, and server-side template rendering, and .NET Core performs faster than any other popular framework.
Microsoft recommends running .NET Core with ASP.NET Core for the best performance and scale. This becomes important when hundreds of microservices are used, in which case fewer servers and virtual machines are needed. The efficiency and scalability gained can translate to a better user experience in addition to cost savings. Bing.com runs on .NET Core 2.1 and the platform has seen a 34 per cent improvement in its performance, which means fewer machines are required. New Zealand headquartered Raygun has reported that it increased its throughput 2000 per cent by switching from Node.js to .NET Core.
10. Trusted and secure: .NET Core provides you with immediate security benefits through its managed run time. A collection of services prevents critical issues like bad pointer manipulation or malicious attempts to alter compiled code. Microsoft takes security very seriously and releases updates quickly when threats are discovered.
11. A vast ecosystem: You can leverage the vast .NET ecosystem by incorporating libraries from the NuGet package manager, the extensive partner network, and the Visual Studio Marketplace. You can also find answers to technical challenges from the community, MVPs, and large support network.
12. Support: .NET Core is supported by Microsoft, on Windows, macOS and Linux. It is updated for security and quality several times a year. .NET Core binary distributions are built and tested on Microsoft-maintained servers in Azure and supported just like any Microsoft product. Red Hat supports .NET Core on Red Hat Enterprise Linux (RHEL). It builds .NET Core from source and makes it available in the Red Hat Software Collections. Red Hat and Microsoft collaborate to ensure that .NET Core works well on RHEL.
Comparisons with Mono
Mono is the original cross-platform and open source .NET implementation, and it was first shipped in 2004. It can be thought of as a community clone of the .NET Framework. The Mono project team relied on the open .NET standards (notably ECMA 335) published by Microsoft to provide a compatible implementation.
The major differences between .NET Core and Mono include the following.
- App-models: Mono supports a subset of the .NET Framework app-models (for example, Windows Forms) and some additional ones (for example, Xamarin.iOS).
- Open source: both Mono and .NET Core use the MIT licence and are .NET Foundation projects.
- Focus: The primary focus of Mono in recent years has been mobile platforms, while .NET Core is focused on the cloud and desktop workloads.
- Mobile: Xamarin/Mono is a .NET implementation for running apps on all the major mobile operating systems.
Languages
C#, Visual Basic and F# languages can be used to write applications and libraries for .NET Core. These languages are or can be integrated into your favourite text editors and IDEs, including Visual Studio, Visual Studio Code, Sublime Text and Vim. This integration is provided, in part, by the good folks from the OmniSharp and Ionide projects.
Comparing .NET Core with .NET Framework
There are fundamental differences between the two and your choice depends on what you want to accomplish. Table 2 guides you on when to use each.
Creating a ‘Hello World’ application using .NET Core
Let us look at how to create our first Hello World application using .NET Core in Ubuntu. You can either use the Visual Studio code editor or the command line to write this code. For the sake of simplicity, I am using the command line here.
The prerequisites are:
- .NET Core SDK 2.1
- A text editor or code editor of your choice
I am assuming here that you have Ubuntu 18 for this hands-on tutorial. If you have something else, please refer to Microsoft's OS-specific SDK installation instructions.
Step 1: Register Microsoft key and feed: Before installing .NET, we’ll need to register the Microsoft key and the product repository, and install the required dependencies. This only needs to be done once, per machine. Open a command prompt and run the following commands:
$ wget -q <packages-microsoft-prod.deb URL for your Ubuntu version>
$ sudo dpkg -i packages-microsoft-prod.deb
Step 2: Install .NET SDK: Update the products available for installation, then install the .NET SDK. In your command prompt, run the following commands:
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1
Step 3: Hello, console app! Open a terminal window and create a folder named Hello. Navigate to the folder we’ve created and type the following command:
$ dotnet new console
$ dotnet run
Let’s do a quick walkthrough of these commands. The first command is:
$ dotnet new console
dotnet new creates an up-to-date Hello.csproj project file with the dependencies necessary to build a console app. It also creates Program.cs, a basic file containing the entry point for the application.
Hello.csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
</Project>
The project file specifies everything that's needed to restore dependencies and build the program. The OutputType tag specifies that we're building an executable, in other words a console application. The TargetFramework tag specifies what .NET implementation we're targeting. In an advanced scenario, we can specify multiple target frameworks and build for all of them in a single operation. In this tutorial, we'll stick to building only for .NET Core 2.1.
Program.cs:

using System;

namespace Hello
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World from DotNET Core Apps in Linux !");
        }
    }
}
Note: Starting with .NET Core 2.0, we don’t have to run ‘dotnet restore’ because it’s run implicitly by all commands that require a restore to occur, such as ‘dotnet new’, ‘dotnet build’ and ‘dotnet run’. It’s still a valid command in certain scenarios in which doing an explicit restore makes sense, such as continuous integration builds in Azure DevOps Services or in build systems that need to explicitly control the time at which the restore occurs.
dotnet new calls dotnet restore implicitly. dotnet restore calls into NuGet (.NET package manager) to restore the tree of dependencies. NuGet analyses the Hello.csproj file, downloads the dependencies defined in the file (or grabs them from a cache on your machine) and writes the obj/project.assets.json file, which is necessary to compile and run the sample.
Important: If you’re using a .NET Core 1.x version of the SDK, you’ll have to call dotnet restore yourself after calling dotnet new.
The second command is:
$ dotnet run
dotnet run calls dotnet build to ensure that the build targets have been built, and then calls dotnet <assembly.dll> to run the target application.
$ dotnet run
Hello World from DotNET Core Apps in Linux!!
Alternatively, you can also execute dotnet build to compile the code without running the build console applications. This results in a compiled application as a DLL file that can be run with dotnet bin\Debug\netcoreapp2.1\Hello.dll on Windows (use / for non-Windows systems). You may also specify arguments to the application as you’ll see later on.
$ dotnet bin\Debug\netcoreapp2.1\Hello.dll
Hello World!
As an advanced scenario, it’s possible to build the application as a self-contained set of platform-specific files that can be deployed and run to a machine that doesn’t necessarily have .NET Core installed.
Analyse your .NET apps for portability to .NET Core
Want to make your libraries multi-platform? Want to see how much work is required to make your application compatible with other .NET implementations and profiles, including .NET Core, .NET Standard, UWP, and Xamarin for iOS, Android and Mac? The .NET Portability Analyser is a tool that provides you with a detailed report on how flexible your program is across .NET implementations by analysing assemblies. The Portability Analyser is offered as a Visual Studio extension and as a console app. More details about it are available in its documentation.
In my next article, we will look at how to create an ASP.NET Core 2.1 Web application using Visual Studio Code in Ubuntu, and run it in Linux containers.
https://www.opensourceforu.com/2019/01/building-cross-platform-applications-with-net-core/?amp
Agenda
See also: IRC log
<shadi>
JK: may be able to use "test subject" class otherwise can use "file content" class.
SAZ: recalls Charles saying we could use "test subject" class which is well formed URI but is not unique
<JohannesK>
SAZ: could give "test subject" a unique ID
JK: can use IP number instead of "file:..." if IP is static
SAZ: although may not have an IP number at all
... is it important to store content of file?
JK: yes, especially if file is not public
... file content needs to go into report if file is not public
SAZ: proposal is to change "http body" in this content
JK: but "http body" does not have a domain yet and could be used anywhere if we change comment
SAZ: not good to change the vocabulary because it should be standalone
JK: we could create the property in a namespace
(e.g. "content") and extend the "http body" property so "http body" is derived
from it.
... the benefit is that we have the "http body" property in the proper vocabulary
SAZ: We have property to store URI and can put filename in there. But what are our needs?
JK: In RDF we need a unique identifier for a
resource but not for a property, and we need that identifier to be unique.
... Filename is a label for stored content.
... We need something for storing non-public content.
... We could use something like "body" property from HTTP. Could use as-is and then change describing stuff or could create another property and extend it.
... Could use in "test subject" but what to do when we test source code in various forms?
... May not need "file content" class.
SAZ: Wonders what the disadvantages of creating a "file content" class would be?
<JohannesK> http:body rdfs:subPropertyOf foo:base64Content
<JohannesK> TestSubject has property foo:base64Content
SAZ: If "file content" class we can recommend using properties like "file name" etc.
<shadi> earl:filesource rdfs:subPropertyOf foo:base64Content
<shadi> ...in earl:FileContent classes
SAZ: Can still use generic properties and still
use specific sub-properties.
... Thinks that the "file content" class has its own uses.
... If no other comments, will leave for now as we're a small group today.
... Not hearing strong feelings for or against the proposal, so let it stew.
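Pulling together the snippets pasted into IRC above, the modelling under discussion might look like this in Turtle. This is only an illustrative sketch: the foo: prefix is the placeholder used in the log, and the namespace URIs are assumptions, not the final EARL vocabulary.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix earl: <http://www.w3.org/ns/earl#> .      # assumed namespace URI
@prefix http: <http://www.w3.org/2006/http#> .    # assumed namespace URI
@prefix foo:  <http://example.org/content#> .     # placeholder from the log

# Both the HTTP body and a stored file's source specialize
# one generic base64-encoded content property:
http:body       rdfs:subPropertyOf foo:base64Content .
earl:filesource rdfs:subPropertyOf foo:base64Content .
```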
<shadi>
JK: Could use "node" property in RDF.
SAZ: We could still propose to use it our way.
JK: uri:uri is part of the HTTP request, so if it is not there as a property then something is missing.
SAZ: It's still there - as part of HTTP request although request is optional. So we have additional uri:uri property.
<shadi>
JK: I'm OK with just using RDF: about.
<shadi> "Term for URI"
JK and SAZ: Do not need uri:uri because we can use rdf: about.
<shadi> earl:filename rdfs:subPropertyOf uri:uri
<davidr> yes
SAZ: Everyone agree we don't need uri:uri in property class?
<davidr> far more intuitive
resolution: Drop uri:uri in webcontent class.
<shadi>
SAZ: Extensibility section not relevant to schema. Should be in EARL guide, not in schema.
<JohannesK> +1
<davidr> agree
Resolution: Move extensibility section from schema to EARL guide.
<shadi>
SAZ: RDF itself does not have it inside the doc but others (like FOAF) do have it inside.
JK: But they don't include the RDF schema in
the spec.
... Don't have strong feelings; it can be separate or included in the spec itself.
<shadi>
SAZ: This doc will be a "note" not a
"recommendation".
... Asks that everyone read it.
http://www.w3.org/2006/10/04-er-minutes
Matt Campbell wrote:
> Mike Pall wrote:
>> Oh, and you realize that converting doubles to/from unsigned
>> integers is _dead slow_ on many platforms (but not necessarily so
>> for signed integers).
>
> On which platforms?

$ cat conv.c
#include <stdint.h>
intmax_t d2i(double x) { return (intmax_t)x; }
double i2d(intmax_t x) { return (double)x; }
uintmax_t d2u(double x) { return (uintmax_t)x; }
double u2d(uintmax_t x) { return (double)x; }

Check the assembler output with:

cc -Os -fomit-frame-pointer -S -o - conv.c

On x86 (x87 FP):
d2i: basically fld + fistp, but rounding is set to truncation mode and back
i2d: fild qword [esp+4]  // One instruction!
d2u: call __fixunsdfdi   // Ouch!!
u2d: two cases for +/- and some bias tricks

On x64 (SSE FP):
d2i: cvttsd2si rax, xmm0 // One instruction!
i2d: cvtsi2sd xmm0, rax  // One instruction!
d2u: two cases for +/- with FP comparison and some bias tricks
u2d: two cases for +/- and some bit shifting tricks

Don't have a PPC host at the moment, but AFAIR the situation was
pretty similar.

--Mike
https://lua-users.org/lists/lua-l/2007-11/msg00179.html
Java program to find strong number in an array
This tutorial will guide you through finding strong numbers in an array in Java.
Java is a high-level programming language and is easy to learn.
An array in Java holds a fixed number of values of a single type; in source code, its initializer is a list of values separated by commas and enclosed in braces.
A number is a strong number if the sum of the factorials of its digits is the number itself.
E.g. 145: the digits are 1, 4 and 5; 1! is 1, 4! is 24 and 5! is 120, and adding them gives
1 + 24 + 120 = 145.
How to find strong number in an array in Java
public class strongNumber {
    public static void main(String[] args) {
        int[] array = {1, 2, 9, 145, 452};
        for (int j = 0; j < array.length; j++) {
            int sum = 0;
            int temp = array[j];
            // loop over every digit of the number, not just the last one
            while (temp > 0) {
                int remainder = temp % 10;
                int fact = 1;
                for (int i = 1; i <= remainder; i++) {
                    fact = fact * i;
                }
                sum = sum + fact;
                temp = temp / 10;
            }
            if (array[j] == sum) {
                System.out.println("\n " + array[j] + " is a Strong Number");
            }
        }
    }
}
OUTPUT:
 1 is a Strong Number
 2 is a Strong Number
 145 is a Strong Number
In the code, we loop over the array; for each element we extract its digits one by one and compute the factorial
of each digit. The factorials are added up and the sum is compared with the
original number. If the number equals the sum of the factorials of its digits, it is a strong number and we print it.
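The definition can be cross-checked quickly in another language; here is a short, self-contained Python sanity check (illustration only, independent of the tutorial's Java code):

```python
from math import factorial

def is_strong(n):
    # A number is "strong" if the sum of the factorials of its decimal digits equals the number.
    return n == sum(factorial(int(d)) for d in str(n))

print([n for n in range(1, 1000) if is_strong(n)])  # -> [1, 2, 145]
```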
You may also read:
https://www.codespeedy.com/java-program-to-find-strong-number-in-an-array/
Feature #6983
Add option to compress log files
Description
The <entry>.info.log files compress from 100MB down to about 5MB. We run on SAN based VM's where disk isn't free so it would be really nice if glideinwms would compress the log file when it rotates them. We'd like to keep a long period of log files for debugging. We have about a dozen entry points for various reasons so when everything gets multiplied out compression would be helpful.
History
#1 Updated by Joe Boyd almost 6 years ago
This request started from ticket INC000000445637 where Gerard logged:
I'd be happy to see how GWMS relies on standard logrotate daemon and
provides a /etc/logrotate.d/gwms* file where Ops teams can setup the
desired rotation policy.
It would simplify both GWMS code and it's management. After looking a bit
this morning I believe Condor allows to do that if one disables condor
rotation and uses copytruncate option in logrotate. Perhaps we could do the
same with GWMS.
Gerard
I don't care how it's done but I think compression of log files is a reasonable request and we shouldn't roll our own.
#2 Updated by Parag Mhashilkar almost 6 years ago
- Target version set to v3_2_x
#3 Updated by Parag Mhashilkar almost 6 years ago
- Assignee set to Parag Mhashilkar
- Target version changed from v3_2_x to v3_2_8
#4 Updated by Marco Mambelli almost 6 years ago
- Assignee changed from Parag Mhashilkar to Marco Mambelli
#5 Updated by Marco Mambelli over 5 years ago
Possible alternatives:
1. support an encoding for GlideinHandler. BaseRotatingHandler supports bz2 (encoding='bz2-codec' - only Python2, python3 has custom logrotators):
class GlideinHandler(BaseRotatingHandler):
    ...
    def __init__(self, filename, maxDays=1, minDays=0, maxMBytes=10,
                 backupCount=5, encoding=None):
        ...
        BaseRotatingHandler.__init__(self, filename, mode, encoding)
        ...

def add_processlog_handler(logger_name, log_dir, msg_types, extension,
                           maxDays, minDays, maxMBytes,
                           backupCount=5, encoding=None):
    ...
    handler = GlideinHandler(logfile, maxDays, minDays, maxMBytes,
                             backupCount, encoding)
2. add manual compression for the files importing gzip or zipfile, e.g.:
def doRollover(self):
    ...
    if self.doCompress:
        if os.path.exists(dfn + ".zip"):
            os.remove(dfn + ".zip")
        file = zipfile.ZipFile(dfn + ".zip", "wb")
        file.write(dfn, os.path.basename(dfn), zipfile.ZIP_DEFLATED)
        file.close()
        os.remove(dfn)
This would still require the same parameter passing as 1.
Then in both 1 and 2, calls to logSupport.add_processlog_handler should access the configuration to decide whether to use compression or not (similar to the other logging parameters)
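As a self-contained illustration of the manual compression in alternative 2, the following sketch compresses a rotated file and removes the original (gzip is used here instead of zipfile purely for brevity, and the file names are made up for the demo; this is not the actual GlideinHandler code):

```python
import gzip
import os
import shutil
import tempfile

def compress_rotated(path):
    """Gzip a rotated log file and delete the uncompressed original."""
    gz = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(path)
    return gz

# Demo on a throwaway file standing in for a rotated <entry>.info.log.1
tmpdir = tempfile.mkdtemp()
rotated = os.path.join(tmpdir, "entry.info.log.1")
with open(rotated, "w") as f:
    f.write("glidein log line\n" * 1000)

gz = compress_rotated(rotated)
with gzip.open(gz, "rt") as f:
    assert f.read().count("glidein log line") == 1000
```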
3. let logrotate handle log rotation. It is standard and will make RHEL/SL sysadmins happy. Not all systems will have it.
Check how the logging works to see if log files are opened and closed each time or some trigger command is needed.
Options to handle this: copytruncate, prescript, postscript, delaycompress
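If alternative 3 were chosen, a minimal drop-in for the system logrotate daemon might look like the following. The log path and rotation policy below are assumptions for illustration, not shipped GlideinWMS configuration; copytruncate and delaycompress are the options mentioned above:

```
# Hypothetical /etc/logrotate.d/gwms-factory (paths are assumptions)
/var/log/gwms-factory/server/entry_*/*.info.log {
    daily
    rotate 30
    compress
    delaycompress
    copytruncate
    missingok
    notifempty
}
```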
I need some feedback on which alternative would be better.
Thanks,
Marco
#6 Updated by Marco Mambelli over 5 years ago
Committed a first version (needs to be tested).
In the unit test compressed files are verified checking the file name (extension), not the actual content.
Marco
#7 Updated by Marco Mambelli over 5 years ago
For testing I created a big file:
And I added the following lines.
On a RH5 factory:
# to print a lot #MMDB
logSupport.log.info("################### Printing filling file:")
#with open('/opt/file-1m', 'r') as f:
f = open('/opt/file-1m', 'r')
s = f.read()
logSupport.log.info(s)
f.close()
Before the log update in the loop:
# Aggregate Monitoring data periodically
logSupport.log.info("Aggregate monitoring data")
aggregate_stats(factory_downtimes.checkDowntime())
And in a RH6 frontend:
# to print a lot #MMDB
logSupport.log.info("################### Printing filling file:")
with open('/opt/file-1m', 'r') as f:
s = f.read()
logSupport.log.info(s)
Before the log update in the loop:
logSupport.log.info("Aggregate monitoring data") # KEL - can we just call the monitor aggregator method directly? see above
aggregate_stats()
I added a note to the documentation that the max_byte value is truncated (whenever read). Any value <1 will cause no rotation.
Marco
#8 Updated by Marco Mambelli over 5 years ago
- Status changed from New to Feedback
- Assignee changed from Marco Mambelli to Parag Mhashilkar
ready for review
#9 Updated by Marco Mambelli over 5 years ago
I forgot to add that to make sure that no strange chars were used I generated the test files used to files the log files with:
< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 1M > /opt/file-1m
#10 Updated by Parag Mhashilkar over 5 years ago
- Assignee changed from Parag Mhashilkar to Marco Mambelli
sent feedback separately.
#11 Updated by Marco Mambelli over 5 years ago
- Status changed from Feedback to Resolved
merged
#12 Updated by Parag Mhashilkar over 5 years ago
- Status changed from Resolved to Closed
Also available in: Atom PDF
https://cdcvs.fnal.gov/redmine/issues/6983
Re: Can extra processing threads help in this case?
From:
Jerry Coffin <jerryvcoffin@yahoo.com>
Newsgroups:
microsoft.public.vc.mfc
Date:
Mon, 12 Apr 2010 19:10:30 -0600
Message-ID:
<MPG.262d7262979e242398986f@news.sunsite.dk>
In article <GtSdnVtJ8-u84F7WnZ2dnUVZ_rWdnZ2d@giganews.com>,
NoSpam@OCR4Screen.com says...
[ ... ]
That sure sounds screwy to me. Of the 40 different priority
levels available on Linux, a process with priority of 0
would starve a process with priority of 1? That sure sounds
screwy to me. Can you prove this?
With a few provisos, yes. First proviso: if you have (for example)
two threads and two cores, both threads will run concurrently.
Second, Linux has a "dynamic priority" range (sort of like Windows)
where it adjusts the base priority as it sees fit, and in this range,
the scheduler may override the base difference of 1.
Outside that range, yes, even the smallest possible difference in
priority will make the difference between getting essentially all the
processor time, and virtually none at all (though if you have two or
more at the same priority, processing time will normally be split
roughly evenly between them). Windows works roughly the same way.
Here's a simple demo program:
// warning: this is intended purely to demonstrate one simple point,
// not as particularly exemplary multithreaded code.
#include <windows.h>
#include <iostream>
#include <process.h>
CRITICAL_SECTION s;
unsigned int __stdcall threadproc(void *p) {
int line = (int)p;
COORD pos;
pos.X=0;
pos.Y=(short)p;
DWORD start = GetTickCount();
int count = 0;
HANDLE output = GetStdHandle(STD_OUTPUT_HANDLE);
// run test for 15 second:
while (GetTickCount()-start < 15000) {
// burn some CPU time so we're usually ready to run:
for (int i=0; i<10000000; i++)
;
// print out how often we've run:
EnterCriticalSection(&s);
SetConsoleCursorPosition(output, pos);
printf("%d", ++count);
LeaveCriticalSection(&s);
}
return 0;
}
int main() {
uintptr_t handles[2];
InitializeCriticalSection(&s);
// create a couple of threads:
for (int i=0; i<2; i++) {
handles[i] = _beginthreadex(NULL,
0,
threadproc,
(void *)(i+6),
CREATE_SUSPENDED,
NULL);
// and make sure they run on the same processor/core
SetThreadAffinityMask((HANDLE)handles[i], 1);
}
// reduce priority of one thread:
SetThreadPriority((HANDLE)handles[1], THREAD_PRIORITY_IDLE);
// Let them run:
ResumeThread((HANDLE)handles[0]);
ResumeThread((HANDLE)handles[1]);
// and wait 'til they're done:
WaitForMultipleObjects(2, (HANDLE *)handles, true, INFINITE);
std::cout << "\n\n";
return 0;
}
Running this, I get a count of ~500 for one thread, and 2 or 3 for
the other thread (I.e. the lower priority thread getting somewhere
around half a percent of the CPU time). Changing the exact difference
in priority such as setting to THREAD_PRIORITY_LOWEST or
THREAD_PRIORITY_BELOW_NORMAL has little (if any) real effect on the
amount of CPU time -- it might get it up to 1 whole percent of
processor time instead of a half, but I'm not sure it really makes
any difference at all -- showing that one thread gets two orders of
magnitude more processor time than the other is a lot simpler than
determining with certainty whether a possible difference between .6%
and .8% (for example) of the processor time is statistically
significant.
What I am saying is that telling me that it is bad without
telling me what is bad about it is far worse than useless.
In more than half of the cases now what was bad about my
design was not the design itself but the misconception of
it. Without explaining why you think it is bad, and only
saying that it is bad is really harassment and not helpful.
And what I'm saying is that if you won't bother doing some homework
to learn at least a *little* bit on your own, it's frankly rather
insulting that you constantly ask others to not only give you the
results of the work you should have done, but then turn around and
question their intelligence or honesty and demand proof of results
simply because they don't fit how you imagined things might be.
[DoS attacks]
So what else can be done, nothing?
Of course something can be done. I've already pointed out part of the
task -- get rid of the thread-per-connection model that's so
dangerous. That's roughly the equivalent of advising that when you're
going on a trip that 1) you lock the door before you leave, and 2)
refrain from going to the local hoodlum's hangout and announce to all
who will listen that you're going to be gone, and you're leaving the
door unlocked, and oh, yes, you've got just the most incredible
stereo equipment that'll just be theirs for the taking if they show
up next week!
--
Later,
Jerry.
http://preciseinfo.org/Convert/Articles_MFC/STL_Code/MFC-VC-STL-Code-100413041030.html
When I tell my project to run, I get a list of errors, one for almost every page: "Could not load type 'namespace.pagename'". After doing some research, I know it's got something to do with the assemblies loading, and I have double-checked that all the properties
in the <%@ Page ... %> directive match the location and the name of the code-behind files, but the knowledge-base article that covers this only provides a resolution for this error when you are coding in C+. I've compared the error pages to the ones that do not have
an error, and the <%@ Page ... %> declarations all look the same to me. Does anyone else know what it might be?
Thx ~ Jg
Even though the web pages look similar to each other, their underlying .CS files may have different content.
The error occurs if the CLR failed to find the assembly that contains the type you're loading. For this kind of error, you may use
fuslogvw.exe to get more detailed information;
give it a try and paste the fuslogvw log here.
https://social.msdn.microsoft.com/Forums/en-US/d2128721-953e-4944-b56c-cd5f3ba90c3b/could-not-load-type-namespacepagename-error-when-using-vbnet
one_hot
paddle.fluid.layers.one_hot(input, depth, allow_out_of_range=False) [source]
WARNING: This OP requires the last dimension of the input Tensor to be 1. This OP will be deprecated in a future release; it is recommended to use fluid.one_hot instead.
The operator converts each id in the input to a one-hot vector of length
depth. The value in the vector dimension corresponding to the id is 1, and the values in the remaining dimensions are 0.
The shape of the output Tensor or LoDTensor is generated by appending a
depth dimension after the last dimension of the input shape.
Example 1 (allow_out_of_range=False):
  input:
    X.shape = [4, 1]
    X.data = [[1], [1], [3], [0]]
    depth = 4
  output:
    Out.shape = [4, 4]
    Out.data = [[0., 1., 0., 0.],
                [0., 1., 0., 0.],
                [0., 0., 0., 1.],
                [1., 0., 0., 0.]]

Example 2 (allow_out_of_range=True):
  input:
    X.shape = [4, 1]
    X.data = [[1], [1], [5], [0]]
    depth = 4
    allow_out_of_range = True
  output:
    Out.shape = [4, 4]
    Out.data = [[0., 1., 0., 0.],
                [0., 1., 0., 0.],
                [0., 0., 0., 0.],  # id 5 exceeds depth, so its row is all zeros
                [1., 0., 0., 0.]]

Example 3 (allow_out_of_range=False):
  input:
    X.shape = [4, 1]
    X.data = [[1], [1], [5], [0]]
    depth = 4
    allow_out_of_range = False
  output:
    Throws an Illegal value exception: the id 5 in X is greater than depth,
    and allow_out_of_range=False does not allow ids to exceed depth.
- Parameters
input (Variable) – Tensor or LoDTensor with shape \([N_1, N_2, ..., N_k, 1]\) , which contains at least one dimension and the last dimension must be 1. The data type is int32 or int64.
depth (scalar) – An integer defining the
depth of the one-hot dimension. If the input is a word id, depth is generally the dictionary size.
allow_out_of_range (bool) – A bool value indicating whether the input indices may be out of the range \([0, depth)\) . When input indices are out of range, an
Illegal value exception is raised if
allow_out_of_range is False, or zero-filled representations are created if it is set to True. Default: False.
- Returns
The one-hot representations of input. A Tensor or LoDTensor with type float32.
- Return type
Variable
Examples
import paddle.fluid as fluid # Correspond to the first example above, where label.shape is [4, 1] and one_hot_label.shape is [4, 4]. label = fluid.data(name="label", shape=[4, 1], dtype="int64") one_hot_label = fluid.layers.one_hot(input=label, depth=4)
https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/one_hot.html
Tag:prison
2010 noip improvement group problem solution
MT: use a queue to simulate the process described in the question.

#include <iostream>
#include <cstdio>
#include <cstring>
using namespace std;
int n, m;
int head = 0, tail = 0;
int s[1100];
long long ans = 0;
bool book[1100];
int main() {
    cin >> m >> n;
    memset(book, 0, sizeof(book));
    for (int i = 1; i <= n; ++i) {
        int a; cin >> a;
        if (book[a] == 0) {
            s[++tail] = a;
            book[a] = 1;              // join the queue
            if (head + m <= tail)     // queue is full: the oldest entry leaves
                book[s[head++]] = 0;
            ++ans;
        }
    } […]
From murderer to high paid coder, he started programming from zero learning and was finally accepted by Silicon Valley!
Zachary has been in custody for 22 years × In a nine foot prison. Now he sits in an open-ended office in San Francisco studying code. At the age of 15, he was sentenced to life imprisonment for murder. The 38 year old, like his colleagues who graduated from Stanford University, has a full-time job […]
https://developpaper.com/tag/prison/
a side-by-side reference sheet
grammar and invocation | variables and expressions | arithmetic and logic | strings | regexes | dates and time | arrays
dictionaries | algebraic data types | functions | execution control | file handles | files | directories
processes and environment | libraries and namespaces | objects | generic types | reflection | net and web | unit tests
debugging and profiling | contact
General
version used
The compiler version used for this cheatsheet.
show version
How to get the compiler version.
implicit prologue
Code which examples in the sheet assume to have already been executed.
Grammar and Invocation
hello world
How to write, compile, and run a "Hello, World!" program.
file suffixes
For source files, header files, and compiled object files.
c++
The gcc compiler will treat a file with any of the following suffixes as C++ source:
.cc .cp .cxx .cpp .CPP .c++ .C
GNU make has built-in rules which treat the following suffixes as C++ source:
.cc .C .cpp
block delimiters
How blocks are delimited.
A block contains a sequence of statements. Blocks are used for function bodies in function definitions, to define the branches of if statements, and to define the bodies of while loops.
Class definition bodies are blocks, though the statements that appear in them are restricted to declarations and definitions.
Bare blocks can be used to limit the scope of variables which are declared inside them.
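A minimal C++ sketch of the point above (the `inner_sum` helper is hypothetical, used only to make the scoping visible):

```cpp
#include <cassert>

// A bare block limits the lifetime and visibility of variables declared in it.
int inner_sum() {
    int total = 0;
    {   // bare block: 'x' exists only inside these braces
        int x = 40;
        total += x;
    }
    {   // a new bare block may reuse the same name without conflict
        int x = 2;
        total += x;
    }
    // 'x' is not visible here; referring to it would be a compile error
    return total;
}
```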
statement terminator
How statements are terminated.
top level statements
Statements that can appear at the top level of a source file.
end-of-line comment
The syntax for a comment which is terminated by the end of the line.
multiple line comment
The syntax for a comment which can span multiple lines.
The /* */ delimiters do not nest. Using them to comment out code which already contains a /* */ comment usually results in a syntax error.
Variables and Expressions
local variable
How to declare a variable which is allocated on the stack.
uninitialized local variable
The value contained by a local variable that wasn't initialized.
global variable
How to declare a global variable.
constant
How to declare a constant variable.
allocate heap
How to allocate memory for a primitive type on the heap.
c++
new and delete can be used to manage the memory of both primitive types and objects.
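A minimal sketch of managing a primitive type on the heap with `new`/`delete` (the `heap_demo` helper is illustrative):

```cpp
#include <cassert>

// Allocate an int on the heap, read it back, then release it.
int heap_demo() {
    int *p = new int(7);   // allocate and initialize on the heap
    int value = *p;        // use the heap value
    delete p;              // every new must be paired with a delete
    return value;
}
```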
objective c
Objective C has different memory management schemes for primitive types and objects. Objects are allocated with alloc and freed by means of NSAutoreleasePool. For primitive types the same techniques are used as in C. However, idiomatic Objective C will declare primitive types as local variables or as part of the state of an object and avoid explicit calls to malloc.
Arrays of objects can be created with NSArray and NSMutableArray.
java
In Java, arrays are always stored on the heap and the JVM is responsible for garbage collection. The primitive types are stored (1) on the local frame, (2) as part of the state of an object, or (3) as part of the state of a class. The primitive types are never stored in the heap directly and when they are part of object state they are garbage collected with the object. Primitive types are passed by value unless they are encapsulated in an object.
Each of the primitive types has a wrapper class, and instantiating this class is the best approximation in Java to allocating the primitive type on the heap:
Integer i = new Integer(0);
The compiler may instantiate the wrapper class implicitly; this is called boxing. The compiler also permits use of a wrapper class in the place of the primitive type, or unboxing.
C#
C# behavior is like Java. Note that C# lacks specific wrapper classes for each primitive data type.
free heap
How to free the memory for a primitive type that was allocated on the heap.
null
C++
A typical definition:
#define NULL 0
coalesce
The equivalent of the COALESCE function from SQL.
C++, Objective C++:
The short circuit or operator || can be used as a coalesce operator. However, in C++ and Objective C, NULL is identical to zero, whereas in databases they are two distinct values.
Java:
The ternary operator provides the closest approximation to COALESCE, but it does not have the same behavior if the tested value has a side effect.
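In C and C++ specifically, `||` evaluates to a bool (0 or 1) rather than to its first operand, so the ternary operator is the portable way to spell a pointer COALESCE. A sketch (the `coalesce` helper name is illustrative):

```cpp
#include <cassert>

// Return the first non-null C string, mimicking SQL's COALESCE(a, b).
// gcc also offers the non-standard shorthand a ?: b.
const char *coalesce(const char *a, const char *b) {
    return a ? a : b;
}
```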
Arithmetic and Logic
boolean type
C
The following definitions are common:
typedef int BOOL; #define TRUE 1 #define FALSE 0
Objective C
From objc.h:
typedef signed char BOOL; #define YES (BOOL)1 #define NO (BOOL)0
C#
bool is an alias for System.Boolean
true and false
Literals for the boolean values true and false.
C
The following definitions are common:
typedef int BOOL; #define TRUE 1 #define FALSE 0
Objective C
From objc.h:
typedef signed char BOOL; #define YES (BOOL)1 #define NO (BOOL)0
falsehoods
Values which evaluate as false in the conditional expression of an if statement.
logical operators
The logical operators.
In all languages on this sheet the && and || operators short circuit: i.e. && will not evaluate the 2nd argument if the 1st argument is false, and || will not evaluate the 2nd argument if the 1st argument is true. If the 2nd argument is not evaluated, side-effects that it contains are not executed.
C++
C++ defines and, or, and not to be synonyms of &&, ||, and !, with the same semantics and precedence.
Java
The arguments of the logical operators must be of type boolean.
C#
The arguments of the logical operators must be of type bool.
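The short-circuit behavior described above can be observed by counting side effects; `bump` and `calls` are hypothetical helpers for the sketch:

```cpp
#include <cassert>

static int calls = 0;

// Returns its argument and counts how often it was evaluated.
bool bump(bool v) { ++calls; return v; }

int short_circuit_demo() {
    calls = 0;
    bool a = false && bump(true);  // bump never runs: && stops on a false 1st arg
    bool b = true  || bump(true);  // bump never runs: || stops on a true 1st arg
    (void)a; (void)b;
    return calls;                  // neither second argument was evaluated
}
```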
relational operators
Binary operators which return boolean values.
integer type
Signed integer types.
C
Whether char is a signed or unsigned type depends on the implementation.
C#
C# has the following aliases:
sbyte: System.SByte
short: System.Int16
int: System.Int32
long: System.Int64
unsigned type
Unsigned integer types.
C
Whether char is a signed or unsigned type depends on the implementation.
C#
C# has the following aliases:
byte: System.Byte
ushort: System.UInt16
uint: System.UInt32
ulong: System.UInt64
float type
Floating point types.
C#
C# has the following aliases:
float: System.Single
double: System.Double
fixed type
Fixed-point decimal types.
C#:
C# has the following alias:
decimal: System.Decimal
arithmetic operators
The arithmetic binary operators: addition, subtraction, multiplication, division, modulus.
integer division
How to get the quotient of two integers.
integer division by zero
The results of integer division by zero.
C++, Objective C
The behavior for division by zero is system dependent; the behavior described is nearly universal on Unix.
C#
It is a compilation error to divide by a zero constant. Division by a variable set to zero results in a runtime exception.
float division
How to perform floating point division on two integers.
float division by zero
The result of floating point division by zero.
Modern hardware, if it implements floating point instructions, will implement instructions which conform to the IEEE 754 standard. The standard requires values for positive infinity, negative infinity, and not-a-number (NaN).
The C and C++ standards do not assume that they are running on hardware which provides these values; code which assumes they exist is not strictly speaking portable.
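Assuming IEEE 754 hardware as discussed above, the special values can be produced and tested like this (strictly speaking this is not portable C++, per the preceding note):

```cpp
#include <cassert>
#include <cmath>

// On IEEE 754 hardware, dividing a nonzero float by zero yields an infinity,
// and 0.0/0.0 yields NaN; neither traps.
bool ieee_demo() {
    double pos = 1.0, zero = 0.0;
    double inf  = pos / zero;    // +infinity
    double qnan = zero / zero;   // not-a-number
    return std::isinf(inf) && std::isnan(qnan)
        && (qnan != qnan);       // NaN compares unequal even to itself
}
```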
power
How to perform exponentiation.
C++
powm1 is an abbreviation for "power minus one". Hence the need to add one to get the answer.
sqrt
The positive square root function.
sqrt -1
The result of taking the square root of a negative number.
Here is a list of the standard mathematical functions whose domains do not cover the entire real number line:
transcendental functions
The exponential and logarithm functions; the trigonometric functions; the inverse trigonometric functions.
The arguments of the trigonometric functions are in radians as are the return values of the inverse trigonometric functions.
transcendental constants
The transcendental constants e and pi.
float truncation
Functions for converting a float to a nearby integer value.
C:
The math.h library also provides floor and ceil which return double values.
Java:
Math.floor and Math.ceil return double values.
absolute value
The absolute value of a numeric quantity.
integer overflow
What happens when an integer expression results in a value larger than what can be stored in the integer type.
float overflow
What happens when a float expression results in a value larger than largest representable finite float value.
float limits
complex construction
complex decomposition
random number
Ways to generate random numbers. The distributions are a uniform integer from 0 to 99; a uniform float from 0.0 to 1.0; a standard normal float.
c++:
The standard library includes functions for generating random numbers from other distributions.
random seed
How to set the seed for the random number generator.
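A sketch of the three distributions mentioned above using the C++11 `<random>` library (the `random_demo` helper is illustrative):

```cpp
#include <cassert>
#include <random>

// Draw a uniform int in [0,99], a uniform double in [0,1), and a standard
// normal deviate from a seeded Mersenne Twister engine.
int random_demo(unsigned seed) {
    std::mt19937 gen(seed);                                  // seeded engine
    std::uniform_int_distribution<int> ints(0, 99);
    std::uniform_real_distribution<double> reals(0.0, 1.0);
    std::normal_distribution<double> gauss(0.0, 1.0);
    int i = ints(gen);
    double r = reals(gen);
    double g = gauss(gen);
    (void)g;                                                 // unbounded value
    return (i >= 0 && i < 100 && r >= 0.0 && r < 1.0) ? i : -1;
}
```

The same seed always produces the same sequence, which is what the "random seed" entry is for.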
bit operators
The bit operators: left shift, right shift, and, or, xor, and complement.
C++
bitand, bitor, and compl are synonyms of &, |, and ~ with identical precedence and semantics.
binary, octal, and hex literals
Binary, octal, and hex integer literals.
base conversion
How to convert integers to strings of digits of a given base. How to convert such strings into integers.
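One way to sketch both directions in C++ (`parse_base` and `to_base` are hypothetical helper names):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Parse a string of digits in the given base (2..36) into an integer.
long parse_base(const std::string &s, int base) {
    return std::strtol(s.c_str(), nullptr, base);
}

// Render a non-negative integer as a string of digits in the given base.
std::string to_base(long n, int base) {
    const char *digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    if (n == 0) return "0";
    std::string out;
    while (n > 0) {
        out.insert(out.begin(), digits[n % base]);  // prepend lowest digit
        n /= base;
    }
    return out;
}
```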
Strings
type
The type for strings.
literal
The syntax for string literals.
newline in literal?
Can a newline be used in a string literal? Does the newline appear in the resulting string object?
escapes
Escape sequences that can be used in string literals.
allocate string
How to allocate a string.
Java
The following code
String t = new String(s);
creates a copy of the string s. However, because Java strings are immutable, it would be safe to store the same string object in t as follows:
String t = s;
string length
string comparison
C
Returns 1, 0, or -1 depending upon whether the first string is lexicographically greater, equal, or less than the second. The variants strncmp, strcasecmp, and strncasecmp can perform comparisons on the first n characters of the strings or case insensitive comparisons.
C++
string::compare returns a positive value, 0, or a negative value depending upon whether the receiver is lexicographically greater, equal, or less than the argument. C++ overloads the comparison operators (<, >, ==, !=, <=, >=) so that they can be used for string comparison.
Objective C
compare will return -1, 0, or 1.
Java
compareTo will return a negative value, 0, or a positive value.
C#
CompareTo will return -1, 0, or 1.
to C string
string to number
C
strtoimax, strtol, strtoll, strtoumax, strtoul, and strtoull take three arguments:
intmax_t strtoimax(const char *str, char **endp, int base);
The 2nd argument, if not NULL, will be set to first character in the string that is not part of the number. The 3rd argument can specify a base between 2 and 36.
strtof, strtod, and strtold take three arguments:
double strtod(const char *str, char **endp);
Java
parseInt has an optional second argument for the base.
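The `endp` behavior of `strtol` described above can be sketched as follows (helper names are illustrative):

```cpp
#include <cassert>
#include <cstdlib>

// strtol parses a leading number; the 2nd argument receives a pointer to
// the first character that is not part of the number.
long parsed_value(const char *s) {
    return std::strtol(s, nullptr, 10);
}

char first_unparsed(const char *s) {
    char *endp = nullptr;
    std::strtol(s, &endp, 10);
    return *endp;   // '\0' if the whole string was a number
}
```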
number to string
split
join
java:
Use StringBuilder to implement join:
public static String join(String[] a, String sep) { StringBuilder sb = new StringBuilder(); for (int i=0; i<a.length; i++) { if (i > 0) { sb.append(sep); } sb.append(a[i]); } return sb.toString(); }
concatenate
substring
index
sprintf
uppercase
lowercase
trim
pad
Regular Expressions
regex match
C
regcomp returns a non-zero value if it fails. The value can be inspected for a precise error reason; see the regcomp man page.
REG_EXTENDED is a bit flag which indicates that modern regular expressions are being used. Other useful flags are
- REG_NOSUB: don't save string matched by regular expression
- REG_NEWLINE: make ^ and $ match newlines in string
- REG_ICASE: perform case insensitive matching
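A sketch of the compile/execute/free cycle on a POSIX system, using the REG_EXTENDED and REG_NOSUB flags described above (the `matches` wrapper is illustrative):

```cpp
#include <cassert>
#include <regex.h>

// Compile a POSIX extended regex, run it once against text, and free it.
// Returns true if the pattern matches somewhere in the text.
bool matches(const char *pattern, const char *text) {
    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return false;                      // bad pattern
    int rc = regexec(&re, text, 0, nullptr, 0);
    regfree(&re);                          // release the compiled pattern
    return rc == 0;
}
```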
regex substitute
Date and Time
date/time type
The data type used to store a combined date and time. The combination of date and time is also called a timestamp.
current date/time
How to get the current date and time.
to unix epoch, from unix epoch
How to convert a date/time object to the Unix epoch. How to convert the Unix epoch to a date/time object.
The Unix epoch is the number of seconds since 1 January 1970 UTC.
c#:
Windows file time is the number of nanoseconds since 1 January 1601 UTC divided by 100. The concept was introduced when journaling was added to NTFS with Windows 2000.
The magic constant (1164447360) used for the conversion can be calculated with the following code:
using System; using System.Globalization; CultureInfo enUS = new CultureInfo("en-US"); DateTime startEpoch = DateTime.ParseExact("1970-01-01 00:00:00 -00", "yyyy-MM-dd HH:mm:ss zz", enUS); Console.WriteLine(startEpoch.ToFileTimeUtc() / (100*1000*1000));
strftime
How to use a format string to display a date/time object.
The canonical example of doing this is the strftime function from the C standard library, which uses letters prefixed by percent signs as conversion specification characters, e.g. %Y-%m-%d.
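A sketch of those conversion characters in use (the `iso_date` helper is illustrative):

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Format a broken-down time with strftime's %-conversions.
std::string iso_date(int year, int month, int day) {
    std::tm t = {};              // zero all fields
    t.tm_year = year - 1900;     // tm_year counts years since 1900
    t.tm_mon  = month - 1;       // tm_mon is 0-based
    t.tm_mday = day;
    char buf[32];
    std::strftime(buf, sizeof buf, "%Y-%m-%d", &t);
    return buf;
}
```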
strptime
How to use a format string to parse date data from a string.
Arrays
allocate array on stack
How to allocate an array on the stack.
allocate array on heap
How to allocate an array on the heap.
free array on heap
How to free an array that was allocated on the heap.
array literal
Objective C
NSArray can only store instances of NSObject. For primitive types, use C arrays.
Java
Java permits arrays to be declared with C-style syntax:
int a[] = {1,2,3};
array element access
C
Arrays can be manipulated with pointer syntax. The following sets x and y to the same value:
int a[] = {3,7,4,8,5,9,6,10}; int x = a[4]; int y = *(a+4);
array out-of-bounds result
array iteration
C
C arrays do not store their size, so C developers normally store this information in a separate variable. Another option is to use a special value to mark the end of the array:
char *a[] = { "Bob", "Ned", "Amy", NULL }; int i; for (i=0; a[i]; i++) { printf("%s\n", a[i]); }
vector declaration
vector push
vector pop
vector size
vector access
vector out of bounds
vector iteration
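A C++ sketch covering the vector operations listed above (the `vector_demo` helper is illustrative):

```cpp
#include <cassert>
#include <vector>

// Declare, push, pop, size, index, and iterate over a std::vector.
int vector_demo() {
    std::vector<int> v;          // empty vector of int
    v.push_back(3);
    v.push_back(7);
    v.push_back(9);
    v.pop_back();                // removes the 9
    int sum = 0;
    for (std::vector<int>::size_type i = 0; i < v.size(); ++i)
        sum += v[i];             // unchecked access; v.at(i) bounds-checks
    return sum;                  // 3 + 7
}
```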
Dictionaries
pair
map declaration
C:
For those interested in an industrial strength hashtable implementation for C, here is the header file and the source file for the hashtable used by Ruby.
For those interested in a "Computer Science 101" implementation of a hashtable, here is a simpler source file and header file.
map access
map size
map remove
map element not found result
map iterator
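A C++ sketch covering the map operations listed above (the `map_demo` helper is illustrative):

```cpp
#include <cassert>
#include <map>
#include <string>

// Insert, look up, remove, and iterate over a std::map.
int map_demo() {
    std::map<std::string, int> days;
    days["mon"] = 1;                       // operator[] inserts if absent
    days["tue"] = 2;
    days["wed"] = 3;
    days.erase("tue");                     // remove by key
    if (days.find("fri") == days.end()) {  // element-not-found check
        int total = 0;
        for (std::map<std::string, int>::iterator it = days.begin();
             it != days.end(); ++it)
            total += it->second;
        return total;                      // 1 + 3
    }
    return -1;
}
```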
Algebraic Data Types
typedef
C
Because C integer types don't have well defined sizes, typedef is sometimes employed as an aid to writing portable code. One might include the following in a header file:
typedef int int32_t;
The rest of the code would declare integers that need to be 32 bits in size using int32_t and if the code needed to be ported to a platform with a 16 bit int, only a single place in the code requires change. In practice the typedef abstraction is leaky because functions in the standard library such as atoi, strtol, or the format strings used by printf depend on the underlying type used.
Java
Java has well defined integer sizes so typedef is not needed as a portability aid. In other situations where a C programmer would use a typedef for data abstraction, a Java programmer must either define a class or retain the raw primitive type throughout the code.
enum
C
Enums were added to the C standard when the language was standardized by ANSI in 1989.
An enum defines a family of integer constants. If an integer value is not explicitly provided for a constant, it is given a value one greater than the previous constant in the list. If the first constant in the list is not given an explicit value, it is assigned a value of zero. It is possible for constants in a list to share values. For example, in the following enum, a and c are both zero and b and d are both one.
enum { a=0, b, c=0, d };
A typedef can be used to make the enum keyword unnecessary in variable declarations:
typedef enum { mon, tue, wed, thu, fri, sat, sun } day_of_week; day_of_week d = tue;
From the point of view of the C compiler, an enum is an int. The C compiler does not prevent assigning values to an enum type that are not in the enumerated list. Thus, the following code compiles:
enum day_of_week { mon, tue, wed, thu, fri, sat, sun }; enum day_of_week d = 10; typedef enum { mon, tue, wed, thu, fri, sat, sun } day_of_week2; day_of_week2 d2 = 10;
C++
C++ enums are more strongly typed than C enums. The compiler rejects attempts to assign a value to an enum variable that is not in the enumerated list. The following code:
enum day_of_week { mon, tue, wed, thu, fri, sat, sun }; day_of_week d = 10;
produces an error like the following:
main.cpp: In function ‘int main()’: main.cpp:21: error: invalid conversion from ‘int’ to ‘main()::day_of_week’
Java
Java added enums in 1.5.
Java enums are strongly typed like C++ enums. Unlike C++ enums, it is an error to use an enum value in an integer context. The value has a method ordinal() which returns the integer value, however.
When used in a string context, an enum will evaluate as the string corresponding to its identifier: i.e. "TUE" for DayOfWeek.TUE. This string can be accessed explicitly with DayOfWeek.TUE.toString(). Conversely, DayOfWeek.valueOf("TUE") returns DayofWeek.TUE.
Java enums are subclasses of java.lang.Enum. In particular, an enum is a class, and if the last value of the enum definition is followed by a semicolon, what follows is a class body which can contain methods and constructors. An enum class is final and cannot be subclassed, but an enum can implement an interface.
C#
Like Java enums, C# enums will return the string corresponding to their identifier. Unlike Java enums, C# enums will evaluate as integers in a numeric context.
When used as an argument in a C# style format string, an enum value returns the string corresponding to its identifier.
struct definition
A struct provides names for elements in a predefined set of data and permits the data to be accessed directly without the intermediation of getters and setters. C++, Java, and C# classes can be used to define structs by making the data members public. However, public data members violate the uniform access principle.
C++:
From The C++ Programming Language: 3rd Edition:
by definition, a struct is a class in which members are by default public; that is, struct s { ... is simply shorthand for class s { public: ...
struct declaration
struct initialization
C
The literal format for a struct can only be used during initialization. If the member names are not provided, the values must occur in the order used in the definition.
struct member assignment
struct member access
C
The period operator used for member access has higher precedence than the pointer dereference operator. Thus parens must be used
to get at the member of a struct referenced by a pointer:
struct medal_count { char* country; int gold; int silver; int bronze; }; struct medal_count spain = { "Spain", 3, 7, 4 }; struct medal_count *winner = &spain; printf("The winner is %s with %d gold medals", (*winner).country, (*winner).gold);
ptr->mem is a shortcut for (*ptr).mem:
printf("The winner (%s) earned %d silver medals", winner->country, winner->silver);
Functions
pass by value
pass by address
pass by reference
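The three conventions can be sketched side by side in C++ (pass by reference is C++ only; helper names are illustrative):

```cpp
#include <cassert>

void by_value(int x)      { x = 99; }   // caller's variable is unchanged
void by_address(int *x)   { *x = 99; }  // caller must pass &variable
void by_reference(int &x) { x = 99; }   // C++ only; call site looks like by-value

int passing_demo() {
    int a = 1, b = 1, c = 1;
    by_value(a);
    by_address(&b);
    by_reference(c);
    return a * 100 + b + c;   // a is still 1; b and c are now 99
}
```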
default argument value
named parameters
objective C:
Named parameters must be invoked in the order in which they are defined in the method signature.
C#:
Named parameters do not need to be invoked in the order in which they are defined in the method signature. Additionally, their use in
invocation is optional: the arguments can be provided without names, in which case the definition order must be used.
function overloading
variable number of arguments
C
The stdarg.h library supports variable length functions, but provides no means for the callee to determine how many arguments were provided. Two techniques for communicating the number of arguments to the callee are (1) devoting one of the non-variable arguments to the purpose as illustrated in the table above, or (2) setting the last argument to a sentinel value as illustrated below. Both techniques permit the caller to make a mistake that can cause the program to segfault. printf uses
the first technique, because it infers the number of arguments from the number of format specifiers in the format string.
char* concat(char* first, ...) { int len; va_list ap; char *retval, *arg; va_start(ap, first); len = strlen(first); while (1) { arg = va_arg(ap, char*); if (!arg) { break; } len += strlen(arg); } va_end(ap); retval = calloc(len+1,sizeof(char)); va_start(ap, first); strcpy(retval, first); len = strlen(first); while (1) { arg = va_arg(ap, char*); if (!arg) { break; } printf("copying %s\n", arg); strcpy(retval+len, arg); len += strlen(arg); } va_end(ap); return retval; }
An example of use:
char *s = concat("Hello", ", ", "World", "!", NULL);
passing functions
anonymous function
operator overloading
Execution Control
for
if
For all five languages, the curly braces surrounding an if or else clause are optional if the clause contains a single statement. All five languages resolve the resulting dangling-else ambiguity by setting the value of c to 2 in the following code:
int a = 1; int b = -1; int c = 0; if (a > 0) if (b > 0) c=1; else c= 2;
while
If the body of a while loop consists of a single statement the curly braces are optional:
int i = 0; while (i<10) printf("%d\n", ++i);
switch
A switch statement branches based on the value of an integer or an integer expression. Each clause must be terminated by a break statement or execution will continue into the following clause.
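The fall-through behavior can be sketched as follows (`day_kind` is a hypothetical helper; it returns 2 for weekend, 1 for weekday, 0 for invalid):

```cpp
#include <cassert>

int day_kind(int day) {
    int kind;
    switch (day) {
        case 0:          // no break: execution falls through to the next label
        case 6:
            kind = 2;
            break;       // without this break, execution would continue below
        case 1: case 2: case 3: case 4: case 5:
            kind = 1;
            break;
        default:
            kind = 0;
    }
    return kind;
}
```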
throw exception
C++
C++ code can throw or catch any type of object or primitive data type. The C++ standard library throws subclasses of std::exception, which does not have a message member.
Objective C
Objective C can only throw an instance of NSException or one of its subclasses.
Java
Java can only throw an implementation of java.lang.Throwable.
C#
C# can only throw an instance of System.Exception or one of its subclasses.
catch exception
C++
Exceptions can be caught by value or by reference. If the exception is an object and it is caught by value, the copy constructor and the destructor will be invoked.
Objective C
Exceptions are thrown and caught by pointer value.
finally clause
C++
class Finally { void (*cleanup)(); public: Finally(void (*f)()) : cleanup(f) { } ~Finally() { cleanup(); } }; { Finally f(do_cleanup); risky(); }
methods must declare exceptions
Java
If a method throws a subclass of java.lang.Exception, it must declare the exception in its throws clause. This includes exceptions originating in code called by the method. On the other hand, if the method throws a subclass of java.lang.Error, no declaration in the throws clause is necessary.
File Handles
printf
How to print a formatted string to standard out.
read from file
C
If there is an error, the global variable errno will be set to a nonzero value, and strerror(errno) will return an error message for the error.
write to file
Files
Directories
Processes and Environment
signature of main
first argument
C
The first argument is the pathname to the executable. Whether the pathname is absolute or relative depends on how the executable was invoked. If the executable was invoked via a symlink, then the first argument is the pathname of the symlink, not the executable the symlink points to.
environment variable
iterate thru environment variables
Library and Namespaces
standard library name
The name of the standard library.
C++
Standard Template Library (STL)
The STL might not be installed by default.
Objective C
The Foundation Framework is the core of Cocoa, a set of libraries for Objective C development on Mac OS X and the iPhone. The Foundation Framework descends from NextStep, hence the NS prefix in the class names. NextStep was made available to operating systems other than Next as OpenStep and the GNU implementation is called GNUStep.
Java
C#
.NET Framework 4 Class Library
Mono Documentation
The core of the .NET framework is called the Base Class Library. Mono implements the BCL, but not all of the .NET framework.
Objects
define class
constructor
create object
destructor
C++
The C++ compiler will normally see to it that the destructor for a class and all its superclasses is called. The compiler may not be aware of the true class of the object if it was upcast to one of its base classes. If the destructor was not declared virtual, then the derived class destructor and any other base class destructors will not get called. Thus many developers declare all destructors virtual.
Java
Java does not chain finalize() methods, so the derived class should explicitly call the parent.
destroy object
Java
finalize() is called by the Java garbage collector.
define method
invoke method
dynamic dispatch
static dispatch
Method dispatch is static if the method is determined by the variable type, and dynamic if it is determined by the value type. These techniques of method dispatch yield different results when both the base class and the derived class have implementations for a method, and an instance of the derived class is being stored in a variable with type of the base class.
When dispatch is static, the compiler can determine the code that will be executed for the method call. When dispatch is dynamic, the code that will be executed is a runtime decision. C++ implementations usually achieve this by storing function pointers in the object: qv virtual method table.
The use of the keyword static in the declaration of a class method in C++, Java, and C# is perhaps unfortunate. Class methods are always statically dispatched, so the concepts are not unrelated.
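The distinction can be sketched in C++, where non-virtual calls dispatch on the variable type and virtual calls dispatch on the value type (class and helper names are illustrative):

```cpp
#include <cassert>

struct Base {
    virtual ~Base() {}
    int statically_dispatched() { return 1; }          // chosen by variable type
    virtual int dynamically_dispatched() { return 1; } // chosen by value type
};

struct Derived : Base {
    int statically_dispatched() { return 2; }          // hides, does not override
    int dynamically_dispatched() { return 2; }         // overrides via the vtable
};

int dispatch_demo() {
    Derived d;
    Base &b = d;   // derived object viewed through a base-typed reference
    // non-virtual call resolves to Base (1); virtual call to Derived (2)
    return b.statically_dispatched() * 10 + b.dynamically_dispatched();
}
```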
define class method
invoke class method
name of receiver
access control
objective c:
Access control only applies to members; all methods are public. gcc 4.0 does not enforce the access restrictions; it merely gives warnings.
anonymous class
subclass
superclass constructor
mark class underivable or method overrideable
root class
Name of the root class, if there is one.
objective c:
It is possible to define a root class other than NSObject.
root class methods
A selection of methods available on the root class.
Generic Types
define generic type
instantiate generic type
Reflection
get type class of object
get type class from string
get type class from type identifier
c++:
typeid returns a value of type type_info. The assignment method and copy constructor of type_info are private.
class name
c++:
The string returned by type_info.name() contains more than the class name. The code below displayed the string "Z4mainE3Foo" when run on my system.
class Foo { int i; }; puts(typeid(Foo).name());
get methods
has method
invoke method object
Net and Web
url encode/decode
How to URL encode and URL decode a string.
URL encoding is also called percent encoding. It is used to escape special characters in GET query string parameters.
Reserved characters according to RFC 3986 are replaced by a percent sign % followed by a two hex digit representation of the ASCII code. The reserved characters are:
! * ' ( ) ; : @ & = + $ , / ? # [ ]
Spaces can optionally be represented by a plus sign +.
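A C++ sketch of percent encoding, keeping only the unreserved characters of RFC 3986 (letters, digits, `-`, `_`, `.`, `~`) and escaping everything else; this is an illustration, not a full URI codec:

```cpp
#include <cassert>
#include <cctype>
#include <cstdio>
#include <string>

// Replace each non-unreserved byte with '%' and two hex digits.
std::string url_encode(const std::string &s) {
    std::string out;
    for (std::string::size_type i = 0; i < s.size(); ++i) {
        unsigned char c = s[i];
        if (std::isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
            out += static_cast<char>(c);
        } else {
            char buf[4];
            std::sprintf(buf, "%%%02X", c);  // e.g. ' ' -> "%20"
            out += buf;
        }
    }
    return out;
}
```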
Unit Tests
Debugging and Profiling
C
ANSI C Standard 1990
ANSI C Standard (pdf) 1999
C Standard Library
POSIX Library C Headers
Linux System Call Man Pages
Linux Subroutine Man Pages
C++
ISO C++ Standard 1998
C++11 Standard (pdf)
STL
Boost 1.42
Objective-C
Objective C 2.0 (pdf) Apple
GNUstep
Mac OS X Foundation Framework
Java
Java 1.6 API
Java 1.7 Project
JVM Specification 2nd Ed
The Java Language Specification 3rd Ed
C#
C# Standard: ECMA-334
Mono API
C# Programming Guide Microsoft
http://hyperpolyglot.org/cpp
Tree Shaking with CanJS
One of the most highly-voted on items from our March community survey was making CanJS tree-shakable, and it’s now available in CanJS 4.2!
The new
can/es module contains named exports which can be imported and used without bringing in everything made available by the module. When used in conjunction with tree shaking, you gain:
- Fewer packages to import in each of your modules.
- Bundles that exclude all of the parts of CanJS that you don’t use.
Get these benefits by importing the
can/es module like so:
import { Component } from "can/es"; Component.extend({ tag: "my-component", ViewModel: { message: "string" } });
The above code will only import the required modules, not everything in
can. To learn more, read the Experimental ES Module Usage docs.
We intend to ship this as the default
can module in CanJS 5 and make it the way we teach CanJS (instead of importing the individual packages). But before we do that, we need StealJS to support it…
Sneak Peek: Tree Shaking with StealJS
The next major version of StealJS will support tree shaking! To try it out, install a pre-release of steal-tools:
npm install steal-tools@pre
…and that’s it! It’ll be enabled by default in steal-tools 2.0, with a
--no-tree-shaking CLI argument or
treeShaking: false build option if you need to turn it off. Get a sneak peek of the docs in this pull request and let us know how much smaller your bundle sizes are.
Sneak Peek: DevTools for CanJS
One of the most highly-voted on items in the January survey was Create DevTools for CanJS. We’re not quite finished with it, but you can install it from the Chrome Web Store and try it out.
Right now, the extension allows you to view and edit your ViewModels, visualize dependency graphs for elements & ViewModels in your application, and debug changes to your observables using the CanJS queues system.
More documentation on the DevTools will be available soon in the debugging guide. You can help us make it even better by filing issues on GitHub or taking an existing issue and contributing a fix.
YouTube trainings
We’ve hosted a couple of live-streams on YouTube:
Find even more videos on the CanJS and DoneJS YouTube channels.
Community survey
We run a community survey every six weeks to get a feel for what everyone would like Bitovi’s open source team to prioritize (sign up here if you’re not on our list).
Here are the proposals that have been most voted for on our surveys; we’ve already started working on some of them, while others we plan on starting in the coming weeks:
- can-query / make it easier to configure and understand can-set (in progress for CanJS 5)
- Improve routing to components (in progress for CanJS 4.3; will serve as a foundation for adding a routing guide and testing guide)
- Easy state management for React with can-observe
- Improve the content of the CanJS documentation
Say hi in person or online
If you’re in Boston, Chicago, Los Angeles, or Silicon Valley, be sure to RSVP to our meetups in those locations:
- Chicago: Wednesday, May 23: Building a Tinder-like Swipe Carousel
- Los Angeles: Tuesday, May 22: Building a Video Player
Not in those cities? Chat with us on our forums, Gitter, or Twitter!
Contributors
Last but certainly not least, we’d like to recognize the following people for their contributions to our open source projects:
- Bianca’s contributions to CanJS
- Brad Momberger’s contributions to CanJS
- Colin Leong’s contributions to CanJS
- Gregg Roemhildt’s contributions to CanJS and DoneJS
- Manuel Mujica’s contributions to StealJS
- Oscar Pacheco Ortiz’s contributions to CanJS
- Ryan Wheale’s contributions to Can.
https://www.bitovi.com/blog/may-2018-donejs-community-update
Traditionally, I've known Intel AMT to be part of the toolkit of administrators. It has helped me out by either simplifying or enabling parts of my work as a system administrator. Usually, this meant being able to do things without users entering the picture. Whether it's power management, changing BIOS settings, booting into a CD, and so on.
This changed when I deployed a Microsoft Small Business Server 2011 Standard at a customer's site. This was a very small office, with about five client pc's. Suddenly, I had a use case where users themselves would benefit from being able to use AMT features.
Although their previous SBS (2003) also had a Remote Web Access website which including a remote desktop (RDP) proxy, it was never used much. However, since the new server was installed, users found it easier to connect to internal clients pc's using this new RDP proxy, and they enjoyed the option of being able to work from home using this feature.
This "connect to computer" feature of SBS is shown on the right side in the screenshot below. This Remote Web Access is a standard feature of SBS and is normally accessed using
This is fine during office hours, when a user working from home can ask someone currently still at the office to switch on the computer for them, but outside office hours, this presents a problem. As an administrator I am able to turn on computers remotely, but the tools I use for this (either the web interface, the AMT Commander or (usually, these days) the Powershell module) are more technical than what the users prefer to use, and -- most importantly -- require either the AMT password to work, or more advanced provisioning (e.g. Active Directory integration) than makes sense for such a small site.
One solution for this is to use the SBS itself, specifically the included web server, to do both the underlying work (performing AMT commands on the backoffice computers) as well as the authentication, using IIS's builtin user authentication.
To do this, we need to compile our AMT code into a DLL that IIS can use. So we need a compiler that can do this. You can download Visual Web Developer 2010 Express (a flavour of Visual Studio 2010) for free from Microsoft, which will do this for us. Alternatively, if you have Visual Studio 2010 Professional already installed somewhere, you can use that instead.
Also, you will need the Intel AMT Software Development Kit. This contains AMT functionality that we will use in our webpage.
Once Visual Studio (or Visual Web Developer, which I shall also call Visual Studio in this article) is installed, and the SDK is unzipped, we can begin. In this example, the SDK is unzipped to C:\VPRO_SDK.
First, we open Visual Studio, create a new Empty Web Application and give it a name, in this case "PowerOn". Then click "OK". I've used C# because I'm more familiar with it than Visual Basic.
With this new project, it's not technically necessary, but it's wise to set the Target .NET Framework to a slightly older version, for a little more compatibility. If we set it to 3.0 then this page should even run on SBS 2008 (or plain Windows Server 2008 running IIS, of course). If you set this too high, you might find you have to install new .NET Frameworks on your server before you can use the page. Right-click on "PowerOn" (or your own project name) in the top right corner and click "Properties".
Now, we need to reference the AMT SDK libraries that will enable us to do things like power on computers. Right click on "Reference" and choose "Add reference..."
Then under the tab "Browse" we can point to a file we want to add.
The four files that we need to add as references are (assuming the SDK is at C:\VPRO_SDK):
- C:\VPRO_SDK\Windows\Intel_AMT\Bin\CIMFramework.dll
- C:\VPRO_SDK\Windows\Intel_AMT\Bin\CIMFrameworkUntyped.dll
- C:\VPRO_SDK\Windows\Intel_AMT\Bin\DotNetWSManClient.dll
- C:\VPRO_SDK\Windows\Intel_AMT\Bin\IWSManClient.dll
Next, to make our code simpler and shorter, we're going to use a class from the SDK that already predefines some common tasks. Right-click the project name, in this example "PowerOn", then click "Add" and "Existing Item..."
The item we need is:
- C:\VPRO_SDK\Windows\Common\WS-Management\C#\common\AssociationTraversalTypedUtils.cs
Now we're ready to create code. The only actual webpage that we need is a Default.aspx; this will be the page that is displayed. Right-click the project name again, click "Add" again, but this time "New Item...". Next choose Web Form (the top option) and name it Default.aspx.
Now you have the option of whether you want to write the code for the table/text/buttons/etc or whether you want to drag-and-drop them in the designer view. Myself, I am clumsy with graphical editors, so I prefer to write code. The tags for writing a table are incredibly simple. This is some sample code for a table with two rows, and a power-on button on each. Of course, after the first row, it's a simple matter of copy-and-pasting until you have enough rows. However, you have to change the following each row:
- the computer name (in this example DESKTOP1, DESKTOP2, etc.)
- the ID of the button (changed automatically, when copy-and-pasting)
- the method that is called upon a click (in this example: Button1_Click, etc.)
Here is some sample code. It goes between the <div> and </div> in the middle of the page.
On this page, you can switch on computers using Intel vPro.
<asp:Table ID="Table1" runat="server">
  <asp:TableRow runat="server">
    <asp:TableCell runat="server">DESKTOP1</asp:TableCell>
    <asp:TableCell runat="server">
      <asp:Button ID="Button1" runat="server" Text="Power on" OnClick="Button1_Click" />
    </asp:TableCell>
  </asp:TableRow>
  <asp:TableRow runat="server">
    <asp:TableCell runat="server">DESKTOP2</asp:TableCell>
    <asp:TableCell runat="server">
      <asp:Button ID="Button2" runat="server" Text="Power on" OnClick="Button2_Click" />
    </asp:TableCell>
  </asp:TableRow>
</asp:Table>
Next, we need to create the "Button1_Click" method that we mention in this code. The easiest way to go to the source code is to right-click on the method we want to define and choose "View code" in the popup menu. Here, I've right-clicked on "Button1_Click".
This automatically brings us to the file that should contain the source code for Button1_Click, Button2_Click and whatever other methods you wish to define. First, however, you need to add a few "using" references at the very top of this file. The start of this page should read:
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Intel.Manageability.Cim;
using Intel.Manageability.Cim.Untyped;
using Intel.Manageability.Cim.Typed;
using Intel.Manageability.WSManagement;
using Intel.Manageability.Exceptions;
using Intel.Manageability.Utils;
The first of these are probably already there, but it's wise to check. You're going to have to add the bottom lines that start with "Intel".
Next, we define the actual method that does the AMT work. The code for this needs to go below the "Partial Class" line and its opening bracket. The code is:
protected void Button1_Click(object sender, EventArgs e)
{
String computerName = "desktop1.company.local";
String username = "admin";
String password = "P@ssw0rd";
CimReference outJob = null;
IWSManClient wsmanClient = new DotNetWSManClient( computerName, username, password, false, false, null, null);
CimReference managedSystemEPR = AssociationTraversalTypedUtils.DiscoverManagedHost(wsmanClient);
CIM_PowerManagementService powerStateService = (CIM_PowerManagementService)AssociationTraversalTypedUtils.GetAssociated(wsmanClient,
managedSystemEPR,
typeof(CIM_PowerManagementService),
typeof(CIM_HostedService));
ushort powerState = 2; // "On" is powerstate 2
uint response = powerStateService.RequestPowerStateChange(powerState, managedSystemEPR, null, null, out outJob);
if (response != 0)
{
throw new WSManException("Wrong response from AMT");
}
}
Explaining this code is beyond the scope of this document, but if this makes sense for you, then you're likely to expand on it as well, perhaps with the webpage showing the current powerstate of AMT machines, but you don't need to. If you use this code, the button will turn on a computer. Of this code, you need to change the first three lines to reflect the name of the computer you wish to power on, and the username and password needed to do it.
These credentials will be compiled and saved into the DLL, so it's important to keep the DLL safe, on the SBS server, and not in shared document folders that regular users can access. During normal use, the DLL and the password contained within will not be accessible for users on your network or to users accessing the website.
This page should now look something like this in Visual Studio:
That's all the coding we need to do, now we can deploy our website to the SBS. However, it might be a good idea to test our web page. To do this, change "Debug" at the top, to "Release," then click the green triangle to the left of it.
We should see a layout shaped like a table with (in this example) two buttons and a distinct lack of visual style. For people who are creatively inclined, it might be a good idea to put slightly more effort into the HTML or ASP code that we started with (where the table is). If you are using an underpowered VM (like me) to do this, it might take quite a while before the page appears. This is normal. Also, if you want to test the functioning of the buttons, you need to be able to access the computers. This usually means you have to be directly connected to the intranet, or have a VPN connection running. Also, be wary of any firewalls that might be in the way, blocking TCP port 16992 between the SBS Server and the AMT-enabled workstations.
Now that we're confident that all the visual elements are there and that the buttons work, it's time to publish our work to the SBS server. Just close the Internet Explorer window, so we're back in Visual Studio. At the top, it says "Create Publish Settings", which sounds like a good idea. Click on this and then on "<New...>".
Then, choose "File System" for the simplest method, and type in a temporary folder name where Visual Studio can copy the necessary files to. In this example, I used "C:\DEPLOY" which is easy to find afterwards.
Now, it's time to prepare the server. Log into the server with administrative credentials and open the "Internet Information Services (IIS) Manager" snap-in. This is found under "Administrative Tools". Next, under "Sites" find the "Default Web Site" and right-click on it. Then choose "Add Application..."
Next, enter an alias. This will be the last part of the URL for the page. If the alias is "poweron" then the final address will be "". For the physical path, it's good practice to use a location within C:\inetpub\wwwroot\ but you don't have to.
Now comes a very important step. Securing the webpage (preferably before we copy the actual files to it). Back in IIS Manager, select the new entry "poweron" and double-click "Authentication".
Two things are important here: to disable anonymous authentication (you don't want anonymous users turning on your computers), and to enable another form of authentication. The easiest to set up is Basic Authentication. Just right click on "Anonymous Authentication" and click "Disable". Then right click "Basic Authentication" and click "Enable". Next, right click "Basic Authentication" again and choose "Edit..."
In this window, type the domain that your SBS users are located in. Otherwise, your users will have to authenticate with "DOMAIN\user" every time, which is a hassle. The bottom option "Realm" just makes the login window a little bit prettier, but isn't technically needed.
Now we've made sure that anonymous users cannot access the /poweron application. But when users do login, we want it to be safe. So, on the left side, select "poweron" again, and this time double click on the large "SSL Settings" icon and select the option "Require SSL" followed by "Apply".
Optionally, if you only want to allow specific users or groups access to the page (instead of all authenticated SBS users), choose "poweron" again on the left, and this time double click the "Authorization Rules" in the middle. Here, you can specify which users to allow or deny access.
Now that IIS is aware of our application, and it's properly secured, it's time to make it work. For this, simply copy the contents of the "C:\DEPLOY" folder (or where you decided to deploy to, earlier) to the location we specified in IIS as the location for our application. In this example, this is "C:\inetpub\wwwroot\poweron". The result should look similar to this:
And, voilà, we're done. Now we can sit back and enjoy the fruits of our labour by going to the website, clicking the buttons, and marvelling at the computers spontaneously turning on.
In the scenario of a small company, where the goal is to enable users to switch on computers remotely, so they can connect to them and use all the applications that they're used to at the office, these few easy steps suffice. Of course it's also possible to customize this in various ways, it's possible to automatically generate the number of buttons, to use certificates for authentication, to use TLS connections, to also display the current power status and so on. But for an administrator with merely basic understanding of programming concepts and Visual Studio, it's entirely possible to create such a custom website for a small business customer.
The code you see in the example isn't created by me from scratch. The code relating to vPro comes from combining code from a few sources within the AMT examples in the SDK, mostly from a few of the classes in VPRO_SDK\Windows\Intel_AMT\Samples\WS-Management\RemoteControl
Thanks to Intel for providing the SDK.
- Default.aspx 1.4 K
- Default.aspx.cs 3.1 K
https://communities.intel.com/community/itpeernetwork/vproexpert/blog/2011/12/15/using-intel-amt-and-iis-on-microsoft-sbs2011-to-power-on-backoffice-computers
#include <wx/xrc/xh_sizer.h>
Constructor.
Initializes the attributes and adds the supported styles.
Returns true if the given node can be handled by this class.
If the node concerns a sizer object, the method IsSizerNode is called to know if the class is managed or not. If the node concerns a sizer item or a spacer, true is returned. Otherwise false is returned.
Implements wxXmlResourceHandler.
Creates a sizer, sizeritem or spacer object, depending on the current handled node.
Implements wxXmlResourceHandler.
Creates an object of type wxSizer from the XML node content.
This virtual method can be overridden to add support for custom sizer classes to the derived handler.
Notice that if you override this method you would typically overload IsSizerNode() as well.
Example of use of this method:
Used by CanHandle() to know if the given node contains a sizer supported by this class.
This method should be overridden to allow this handler to be used for the custom sizer types.
See the example in DoCreateSizer() description for how it can be used.
https://docs.wxwidgets.org/3.1.5/classwx_sizer_xml_handler.html
Span<T> Struct
Definition
Provides a type- and memory-safe representation of a contiguous region of arbitrary memory.
generic <typename T> public value class Span
public struct Span<T>
type Span<'T> = struct
Public Structure Span(Of T)
Remarks
Span<T> is a ref struct that is allocated on the stack rather than on the managed heap. Ref struct types have a number of restrictions to ensure that they cannot be promoted to the managed heap, including that they can't be boxed.
Important: For spans that represent immutable or read-only structures, use System.ReadOnlySpan<T>.
Span<T> and memory

The following example creates a Span<Byte> from an array:

// Create a span over an array.
var array = new byte[100];
var arraySpan = new Span<byte>(array);

The following example uses unsafe code to create a Span<Byte> from a block of native memory:

// Create a span from native memory.
var native = Marshal.AllocHGlobal(100);
Span<byte> nativeSpan;
unsafe
{
    nativeSpan = new Span<byte>(native.ToPointer(), 100);
}
The following example uses the C# stackalloc keyword to allocate 100 bytes of memory on the stack:
// Create a span on the stack.
byte data = 0;
Span<byte> stackSpan = stackalloc byte[100];
for (int ctr = 0; ctr < stackSpan.Length; ctr++)
    stackSpan[ctr] = data++;

int stackSum = 0;
foreach (var value in stackSpan)
    stackSum += value;

Console.WriteLine($"The sum is {stackSum}");
// Output: The sum is 4950
Because Span<T> is an abstraction over an arbitrary block of memory, methods of the Span<T> class and methods with Span<T> parameters operate on any Span<T> object regardless of the kind of memory it encapsulates. For example, each of the separate sections of code that initialize the span and calculate the sum of its elements can be changed into single initialization and calculation methods, as the following example illustrates:
public static void WorkWithSpans()
{
    // Create a span over an array.
    var array = new byte[100];
    var arraySpan = new Span<byte>(array);
    InitializeSpan(arraySpan);
    Console.WriteLine($"The sum is {ComputeSum(arraySpan):N0}");

    // Create a span from native memory.
    var native = Marshal.AllocHGlobal(100);
    Span<byte> nativeSpan;
    unsafe
    {
        nativeSpan = new Span<byte>(native.ToPointer(), 100);
    }
    InitializeSpan(nativeSpan);
    Console.WriteLine($"The sum is {ComputeSum(nativeSpan):N0}");
    Marshal.FreeHGlobal(native);

    // Create a span on the stack.
    Span<byte> stackSpan = stackalloc byte[100];
    InitializeSpan(stackSpan);
    Console.WriteLine($"The sum is {ComputeSum(stackSpan):N0}");
}

public static void InitializeSpan(Span<byte> span)
{
    byte value = 0;
    for (int ctr = 0; ctr < span.Length; ctr++)
        span[ctr] = value++;
}

public static int ComputeSum(Span<byte> span)
{
    int sum = 0;
    foreach (var value in span)
        sum += value;
    return sum;
}

// The example displays the following output:
//    The sum is 4,950
//    The sum is 4,950
//    The sum is 4,950
Span<T> and arrays
When it wraps an array, Span<T> can wrap an entire array, as it did in the examples in the Span<T> and memory section. Because it supports slicing, Span<T> can also point to any contiguous range within the array.
The following example creates a slice of the middle five elements of a 10-element integer array. Note that the code doubles the values of each integer in the slice. As the output shows, the changes made by the span are reflected in the values of the array.
using System;

namespace span
{
    class Program
    {
        static void Main(string[] args)
        {
            var array = new int[] { 2, 4, 6, 8, 10, 12, 14, 16, 18, 20 };
            var slice = new Span<int>(array, 2, 5);
            for (int ctr = 0; ctr < slice.Length; ctr++)
                slice[ctr] *= 2;

            // Examine the original array values.
            foreach (var value in array)
                Console.Write($"{value} ");
            Console.WriteLine();
        }
    }
}
// The example displays the following output:
//    2 4 12 16 20 24 28 16 18 20
Span<T> and slices
Span<T> includes two overloads of the Slice method that form a slice out of the current span starting at a specified index. This makes it possible to treat the data in a Span<T> as a set of logical chunks that can be processed as needed by portions of a data processing pipeline with minimal performance impact. For example, since modern server protocols are often text-based, manipulation of strings and substrings is particularly important. In the String class, the major method for extracting substrings is Substring. For data pipelines that rely on extensive string manipulation, its use offers some performance penalties, since it:
Creates a new string to hold the substring.
Copies a subset of the characters from the original string to the new string.
This allocation and copy operation can be eliminated by using either Span<T> or ReadOnlySpan<T>, as the following example shows:
using System;

class Program
{
    static void Main()
    {
        string contentLength = "Content-Length: 132";
        var length = GetContentLength(contentLength.ToCharArray());
        Console.WriteLine($"Content length: {length}");
    }

    private static int GetContentLength(ReadOnlySpan<char> span)
    {
        var slice = span.Slice(16);
        return int.Parse(slice);
    }
}
// Output:
//    Content length: 132
https://docs.microsoft.com/en-gb/dotnet/api/system.span-1?view=netstandard-2.1
Type: Posts; User: frfields276

Ok... How do I declare it in the TestCheckup.java class? What does the code look like?
Ultimately what am I going to have to do to get this program to compile?
--- Update ---
I declared a class Checkup and added it to the TestCheckup class, and it gave the following error on...
I am very new to this. How would I go about adding the check up class?
TestCheckup.java:29: error: cannot find symbol
public static void getData(Checkup check)
^
symbol: class Checkup
location: class TestCheckup
I have this assignment and I am lost on it. I think I have it written, but it continuously gives a "cannot find symbol" error on (Checkup check). I am lost and in desperate need of help.
Here is my...
How would I begin the source file? public class Integer?
I recognize a couple of obvious issues... the semicolon at the end, and I'm sure it needs some input statements for the integer statements. Also, body brackets, a class header and a method header. ...
I am taking a java programming class to achieve my bachelors degree in IT. Not good at the software and for sure not the programming. I was given this code:
resultOut = (integerOne + integerTwo) *...
http://www.javaprogrammingforums.com/search.php?s=97bcbe69367f9ba1a2d239fff1b97685&searchid=1365096
Hello
When I try to open the SQL2005 Surface Area Configuration I always get the following error:
"A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) (Microsoft SQL Server, Error: 10054)"
This is a big problem since I can no longer change SQL-server communication settings.
Thanks
Thomas
Folks,
I have an ASP.NET project which is pretty n-tier, by namespace, but I need to separate into three projects: Data Layer, Middle Tier and Front End.
I am doing this because...
A) It seems the right thing to do, and
B) I am having all sorts of problems running unit tests for ASP.NET hosted assemblies.
Anyway, my question is, where do you keep your config info?
Right now, for example, my middle tier classes (which uses Linq to SQL) automatically pull their connection string information from the web.config when instantiating a new data context.
If my data layer is in another project can/should it be using the web.config for configuration info?
If so, how will a unit test, (typically in a separate assembly) provide soch configuration info?
Thank you for your time!

All, I am trying to configure the surface area of SQL Server 2005 for remote connections, but when I try to do so I encounter the following error:
Computer localhost does not exist on the network, or the computer cannot be configured remotely. Verify that the remote computer has the required windows management instrumentation
components and then try again. (SQLSAC)
Additional Information:
1) An exception occurred in SMO while trying to manage a service.(Microsoft.SqlServer.Smo)
2) Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum).
I tried by installing SP2 package but nothing worked out.
The SQL Server is installed on my machine i.e on localhost.
Please help me. I have a two-node active/active dev cluster with 10 instances installed (SQL 2005 Dev Edition, 64-bit, SP3). Each node hosts 5 instances. Every time I either move the group or take it offline I get the following in my event log: "The configuration of the AdminConnection\TCP protocol in the SQL instance instancename is not valid." I checked the TCP/IP properties; all instances are using dynamic ports and all of them are different. When I restarted the SQLBrowser service I got the same alert for 9 instances. Any suggestions on what to look at next.
T.
http://www.dotnetspark.com/links/63363-surface-area-configuration-for-features.aspx
Difference between revisions of "RPi Low-level peripherals"
Latest revision as of 10:25, 8 May 2015
CSI (camera serial interface) can be used to connect the 5 MP camera that is available. The flex cable connectors are not yet software-enabled.

#define GET_GPIO(g) (*(gpio+13)&(1<<g)) // 0 if LOW, (1<<g) if HIGH
#define GPIO_PULL *(gpio+37) // Pull up/pull down
#define GPIO_PULLCLK0 *(gpio+38) // Pull up/pull down clock

void setup_io();

The following sysfs example toggles an output pin and reads an input pin:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define IN  0
#define OUT 1

#define LOW  0
#define HIGH 1

#define PIN  24 /* P1-18 */
#define POUT  4 /* P1-07 */

static int GPIOExport(int pin)
{
    char buffer[4];
    int fd = open("/sys/class/gpio/export", O_WRONLY);
    if (-1 == fd)
        return(-1);
    write(fd, buffer, snprintf(buffer, sizeof(buffer), "%d", pin));
    close(fd);
    return(0);
}

static int GPIOUnexport(int pin)
{
    char buffer[4];
    int fd = open("/sys/class/gpio/unexport", O_WRONLY);
    if (-1 == fd)
        return(-1);
    write(fd, buffer, snprintf(buffer, sizeof(buffer), "%d", pin));
    close(fd);
    return(0);
}

static int GPIODirection(int pin, int dir)
{
    char path[40];
    int fd;

    snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/direction", pin);
    fd = open(path, O_WRONLY);
    if (-1 == fd)
        return(-1);
    write(fd, OUT == dir ? "out" : "in", OUT == dir ? 3 : 2);
    close(fd);
    return(0);
}

static int GPIORead(int pin)
{
    char path[40], value_str[3];
    int fd;

    snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", pin);
    fd = open(path, O_RDONLY);
    if (-1 == fd)
        return(-1);
    if (-1 == read(fd, value_str, 3))
        return(-1);
    close(fd);
    return(atoi(value_str));
}

static int GPIOWrite(int pin, int value)
{
    char path[40];
    int fd;

    snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", pin);
    fd = open(path, O_WRONLY);
    if (-1 == fd)
        return(-1);
    write(fd, LOW == value ? "0" : "1", 1);
    close(fd);
    return(0);
}

int main(int argc, char *argv[])
{
    int repeat = 10;

    /*
     * Enable GPIO pins
     */
    if (-1 == GPIOExport(POUT) || -1 == GPIOExport(PIN))
        return(1);

    /*
     * Set GPIO directions
     */
    if (-1 == GPIODirection(POUT, OUT) || -1 == GPIODirection(PIN, IN))
        return(2);

    do {
        /*
         * Write GPIO value
         */
        if (-1 == GPIOWrite(POUT, repeat % 2))
            return(3);

        /*
         * Read GPIO value
         */
        printf("I'm reading %d in GPIO %d\n", GPIORead(PIN), PIN);

        usleep(500 * 1000);
    } while (repeat--);

    /*
     * Disable GPIO pins
     */
    if (-1 == GPIOUnexport(POUT) || -1 == GPIOUnexport(PIN))
        return(4);

    return(0);
}
bcm2835
This must be done as root. To change to the root user:

sudo -i

// After installing the library, compile and run the blink example with:
// gcc -o blink blink.c -l rt -l bcm2835
// sudo ./blink
Python
RPi.GPIO
The RPi.GPIO module is installed by default in Raspbian. Any RPi.GPIO script must be run as root.
import RPi.GPIO as GPIO # use P1 header pin numbering convention
Pridopia Scratch Rs-Pi-GPIO driver
wiringPi - gpio utility
You need the wiringPi library from. Once installed, there is a new command, gpio, which can be used by a non-root user to control the GPIO pins. The man page (man gpio) has
http://elinux.org/index.php?title=RPi_Low-level_peripherals&diff=cur&oldid=174956
Using GrabCut with OpenCv 3.1.0
I'm using the OpenCV 3 grabCut function with an initial mask guess (cv2.GC_INIT_WITH_MASK), but for some reason I'm always getting the same output as the original mask. I'm running Python 3.5 and OpenCV 3.1.0.
I've attached some sample code below. The results are attached as image.
Instead of getting the full shape as grabcut output, I'm getting just the original mask, no matter which number of iterations or if I'm using real images instead of synthetic ones.
What can be the cause for those strange results?
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = np.zeros((200, 200, 3), 'uint8')
mask = np.zeros((200, 200), 'uint8')
cv2.circle(img, (100, 100), 50, (255, 255, 255), -1)
cv2.circle(mask, (100, 100), 5, 1, -1)

bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

out_mask, out_bgdModel, out_fgdModel = cv2.grabCut(
    img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)

print('opencv', cv2.__version__)  # opencv 3.1.0

fig, ax = plt.subplots(1, 3)
ax[0].imshow(img)
ax[1].imshow(mask)
ax[2].imshow(out_mask)
ax[0].set_title('original')
ax[1].set_title('mask')
ax[2].set_title('output')
plt.show()
Use the GrabCut constants. What does the 1 mean in cv2.circle(mask, (100,100), 5, 1, -1)? I think 1 is cv2.GC_FGD. You must fill the mask with cv2.GC_PR_BGD or cv2.GC_PR_FGD for unknown pixels, and cv2.GC_BGD for pixels that belong to the background (it is not necessary, but it helps). The algorithm will process all pixels marked with the GC_PR_* values.
You can find an example here
Please update your code to master or opencv 3.4.1
Thanks, LBerger. The problem was indeed with the initialization of the mask. The code below fixes that.
https://answers.opencv.org/question/194130/using-grabcut-with-opencv-310/
Introduction
Ever since I got to know the charming side of JAXB (Java Architecture for XML Binding), I have wanted to blog about it, as I feel that this easy-to-use technique of converting Java Beans into XML (and back) belongs in the Swiss-army knife of every serious Java Enterprise Edition developer. Unfortunately, this functionality seems to be barely documented within the NetWeaver Developer Studio, and due to all the MDA hype, people tend to forget that NW CE is based on a full-blown Java 5 EE web application server (which is already a powerhouse by itself!)
Consequently, I decided to dash out my 2nd installment of my “Hidden Gems” series…
The Basics
As always I’ll just briefly summarize the key concepts and refer you to other web sites for a more detailed information of the basics. In a nutshell, JAXB allows you to convert a Java Bean into its XML expression, which is called marshaling, and to instantiate Java Bean (hierarchies) from XML – unmarshaling. For this purpose JAXB provides two classes:
javax.xml.bind.Marshaller
javax.xml.bind.Unmarshaller
A set of Java annotations [3] in the classes is used to derive all required information about this conversion, such as an object's associations and attributes and the data types involved. But enough about theory… let's use a simple example and have a look at the source code of a simple JAXB utility class:
Listing 1 – JAXBUtils
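The listing itself did not survive extraction; based on the description, a minimal utility along these lines (the method names and the demo Customer bean are my own, not the original listing) might look like:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

/** Minimal marshal/unmarshal helper in the spirit of the original listing. */
final class JAXBUtils {

    private JAXBUtils() { }

    /** Converts a JavaBean into its XML representation (marshaling). */
    static String marshal(Object bean) throws Exception {
        Marshaller m = JAXBContext.newInstance(bean.getClass()).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter out = new StringWriter();
        m.marshal(bean, out);
        return out.toString();
    }

    /** Instantiates a JavaBean hierarchy from XML (unmarshaling). */
    static <T> T unmarshal(Class<T> type, String xml) throws Exception {
        Unmarshaller u = JAXBContext.newInstance(type).createUnmarshaller();
        return type.cast(u.unmarshal(new StringReader(xml)));
    }
}

/** Demo bean (hypothetical) used to exercise the helper. */
@XmlRootElement
class Customer {
    public String name;
}
```

Note this sketch uses the `javax.xml.bind` packages current at the time of the post; on JDK 11 and later these moved out of the JDK and require the JAXB artifacts on the classpath.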
Pretty much self-explanatory, right? So, there are only a few additional things to mention before you go ahead and get your hands dirty with this.
@XmlRootElement
In order for JAXB to work you need a root node for the XML content. You can declare a POJO to be a root object by annotating the class with the @XmlRootElement annotation. While that is straightforward for some objects like service operation request structures (modeled adhering to Enterprise Service conventions, see [4]), it may not be possible for objects or DTOs you re-use as nested entities in various places. A simple, elegant way of overcoming that challenge is to introduce a container object (e.g. called RootElement), which is annotated accordingly as shown in listing 2.
Listing 2 – RootElement
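Listing 2 is missing from the extracted page; a plausible sketch of such a generic container (the field name is hypothetical) is:

```java
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlRootElement;

/**
 * Generic container that supplies the root node JAXB requires,
 * so that reusable DTOs don't need @XmlRootElement themselves.
 */
@XmlRootElement(name = "root")
class RootElement {

    /** The wrapped payload; lax=true lets JAXB resolve it via its annotations. */
    @XmlAnyElement(lax = true)
    public Object payload;

    public RootElement() { }

    public RootElement(Object payload) {
        this.payload = payload;
    }
}
```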
If you’re using the Composite Application Framwork (CAF) to model your business objects (BOs) all the generated structures are already annoated with XML declarations by default – yet, none of it is flagged explicitly as a @XmlRootElement (and you do’t want to touch the generated coding by all means!) So, in order to use CAF BOs in conjunction with JAXB, you may just adopt the approach I just outlined.
ObjectFactory
The ObjectFactories (there can be multiple) are in charge of instantiating the Java Beans based on the class names derived from the namespaces defined in the XML content and the XML annotations. This is illustrated in listing 3.
Listing 3 – ObjectFactory
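Listing 3 was also lost in extraction; a minimal ObjectFactory of the kind the post describes (the bean name is hypothetical) might look like:

```java
import javax.xml.bind.annotation.XmlRegistry;

/** JAXB locates this class by name when resolving beans for its package. */
@XmlRegistry
class ObjectFactory {

    /** No-argument factory method; JAXB calls one of these per bean type. */
    public CustomerData createCustomerData() {
        return new CustomerData();
    }
}

/** A plain bean the factory above knows how to create (hypothetical). */
class CustomerData {
    public String name;
}
```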
Note: You can also use a specific class called jaxb.index which may contain names of JavaBeans with a no-argument constructor and which reside in the same package as the jaxb.index file. This may just be an option for the RootElement we already talked about. (For more details about the jaxb.index file please consult the JavaDoc of the JAXBContext class [5])
The JAXB path
As shown in listing 1, we need to pass a path parameter containing a colon-separated (!) list of all package names that point to packages which include an ObjectFactory, as shown in listing 4.
Listing 4 – Usage of JAXB
To better illustrate how the JAXB path maps to the corresponding package structure, I have illustrated the packaging of the Java Beans used in the above example in Figure 1. For the first token of the JAXB path, "com.sap.demo.common.model", JAXB would find the ObjectFactory, while for the second token, "com.sap.demo.common.xml", it would find the jaxb.index file in order to instantiate the RootElement as a generic and reusable @XmlRootElement. So, depending on how you structure your project you may need multiple ObjectFactories, as stated earlier.
Figure 1 – Package overview
Summary
Yep… that's all there is to it. Simple, but effective. Please note that I just described one of many ways to leverage JAXB, and the API is much more powerful than I was able to highlight with this blog post. In fact, I was aiming to give you the most "bang for the buck" by showing an easy way to use JAXB effectively (especially if you're working with CAF.)
Obviously the use cases are many-fold… my personal favorite is to model query service interfaces and use JAXB to store the serialized XML content as CLOBs in the database. In fact, we have built an entire DisplayVariant framework, as seen every day in Object Worklist (OWL) UI patterns, using the outlined scenario. But that's another blog or article in its own right – so STAY TUNED!
References
[1] JAXB :
[2] JAXB Tutorial :
[3]
[4] The Anatomy of Java-based Enterprise Services
[5]
Thanks for your interest and help in improving the content in order to reach a broader audience Lakshminarayanan.
Regards,
S. Gokhan Topcu
Has anyone gotten JAXB to work inside of PI?
Source: https://blogs.sap.com/2010/04/27/hidden-gems-of-functionality-jaxb/
As near as I can tell, everyone who's doing Agile is writing requirements in the user story format of "As <role> I need to <do something> so that I can <achieve some goal>." For example, "As a customer I need to be able to search the inventory so that I can find the products I want to buy."
It's worth remembering that this is just one format for user stories (and a very good one) -- you shouldn't be trying to force everything into that format (infrastructure or regulatory requirements often sound silly in this format: "As the VP of Finance I need to produce the list of transferred securities every month so that I don't go to jail").
There are some common extensions to this user story format. Popular ones are a "best before" date (for time constrained requirements) and acceptance criteria. The primary purpose of acceptance criteria is to participate in the "definition of done": When these tests are passed, this story is done.
That means, of course, it should always be possible to imagine a test that would prove the criteria has been achieved (preferably an automated test, but that's not essential). Personally, an acceptance criteria of "I can do my job" fails on that "testability" basis alone: How could I tell if you can do your job? Perhaps you were never capable of doing your job.
Also personally, I think you can use acceptance criteria for more than that by leveraging those criteria to create better stories.
One thing you can do with acceptance criteria is use them to provide detail for the user story. This allows you to keep the user story short (focusing on the main goal) but still record detail that matters to the user. For example, this acceptance criteria helps make it clear what the story means by "search":
User Story: As a customer I need to be able to search the inventory so that I can find the products I want to buy.
Acceptance Criteria: Customers can limit the items returned by the search using criteria that are valuable to them (price, delivery date, location).
The other thing you can use acceptance criteria for is to cross-check the user story to see if it's consistent with its criteria. An acceptance test that isn't consistent with the user story can be an indication that the story is incomplete ... or is an example of scope creep (an attempt to extend the user story beyond its mandate). Something like this list of criteria indicates there's probably a problem with the user story:
User Story: As a customer I need to be able to search the inventory so that I can find the products I want to buy.
Acceptance Criteria:
Customers can limit the items returned by the search using criteria that are valuable to them (price, delivery date, location).
Customers earn loyalty points when purchasing "loyalty" products.
It seems to me that criteria 2 doesn't have much to do with the user story. Either the story needs to be extended (" ... including criteria that are important to them, like loyalty points") or there's a need for another story ("As a customer, I want to accumulate loyalty points").
Posted by Peter Vogel on 11/15/2019 at 10:27 AM
I've done a couple of recent columns about securing Blazor Components and using claims-based policies declaratively in ASP.NET Core generally. While working with security, I'm always interested in doing end-to-end testing: Starting up the application and seeing what happens when I try to navigate to a page.
However, while that matters to me, I'm less interested in setting up users with a variety of different security configurations (so many names! so many passwords!). Inevitably while thinking I'm testing one authorization scenario, I pick a user that actually represents a different scenario.
So I created a MockAuthenticatedUser class that, once added to my application's middleware, creates an authenticated user for my application. I find it easier to configure my mock user's authorization claims in code before running a test than it is to maintain (and remember) a variety of users.
If you think you might find it useful, you can add it to your processing pipeline with code like this in your Startup class' ConfigureServices method:
services.AddAuthentication("BasicAuthentication")
.AddScheme<AuthenticationSchemeOptions,
MockAuthenticatedUser>("BasicAuthentication", null);
To use this class, you'll also need this line in your Startup class' Configure method:
app.UseAuthentication();
I should be clear that I've only used this to test Controllers so it might behave differently with Razor Pages.
Here's the code for my MockAuthenticatedUser class that configures a user with a name, an Id, a role, and some random claims:
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace SampleBlazor.Models
{
public class MockAuthenticatedUser : AuthenticationHandler<AuthenticationSchemeOptions>
{
const string userId = "phv";
const string userName = "Jean Irvine";
const string userRole = "ProductManager";
public MockAuthenticatedUser(
IOptionsMonitor<AuthenticationSchemeOptions> options,
ILoggerFactory logger,
UrlEncoder encoder,
ISystemClock clock)
: base(options, logger, encoder, clock){ }
protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
{
var claims = new[]
{
new Claim(ClaimTypes.NameIdentifier, userId),
new Claim(ClaimTypes.Name, userName),
new Claim(ClaimTypes.Role, userRole),
new Claim(ClaimTypes.Email, "[email protected]"),
};
var identity = new ClaimsIdentity(claims, Scheme.Name);
var principal = new ClaimsPrincipal(identity);
var ticket = new AuthenticationTicket(principal, Scheme.Name);
return await Task.FromResult(AuthenticateResult.Success(ticket));
}
}
}
Posted by Peter Vogel on 11/14/2019 at 9:11 AM
Admittedly, the tool window I use most in Visual Studio is the Error List (I probably use it even more than I use Solution Explorer). By and large it meets my needs but it is customizable for those occasions when it does not.
For example, the default Error List display includes a Suppression State column that I hardly ever use. If you don't use it either, you can get rid of it, making more room for the columns you do want (to be more specific: the Description column). All you have to do is right-click on any of the column headers in the Error List and pick Show Columns from the pop-up menu. That will give you a menu of available columns with the currently displayed columns checked off. Clicking on any column in the menu will add the column to the display (if the column isn't currently checked) or remove the column (if it is checked). I don't find the Code column all that useful, either, so I got rid of it also, but that might just be crazy talk as far as you're concerned.
The Grouping option on the menu is also sort of interesting: It inserts headings into the error list. I've experimented with adding a heading at the file level so that all the errors and warnings for any file appear together in the Error list, right under the file name. In the end, however, I've always decided that I wasn't willing to give up the space that the heading takes up; I'd rather have more unorganized errors than fewer organized errors, apparently.
Instead, I've counted on sorting to put all of my "related" errors together. I typically sort by Project, File, and Line number. To get that order (or any order you want), first click on the column header for the column you want as your highest sort level (in my case, that's the Project column). Then hold down the Shift key and click on the other columns you want in the sort, moving from the highest level to the lowest level (for me, that's the File column and then the Line column). If you're not happy with a column's order (ascending or descending) just click the column header again to reverse the order. Visual Studio will remember your sort order.
Posted by Peter Vogel on 10/30/2019 at 3:09 PM
If you're looking for some interesting reading, try this article by Paulo Gomes on hacking ASP.NET (actually, try googling “Hacking ASP.NET” for a bunch of interesting articles). Paulo's article specifically discusses how an innocent Web application can be used to turn your organization's server into some hacker's puppet/zombie.
One part of the article talks about how creating a zombie requires that a malicious payload be uploaded to the ASP.NET site. As Paulo points out, there is a way to avoid this: “General advice is to reject any malformed input” ... which is where the ApiController attribute comes in.
When you create a Web service in ASP.NET Core, you have the option of applying the ApiController attribute to your service controllers. With that attribute in place, when model binding finds mismatches between the data sent to your service and the parameters passed to your service methods, ASP.NET automatically returns a 400 (Bad Request) status code and doesn't invoke your method. Therefore, there's no point inside a Web Service method in checking the ModelState.IsValid property, because if the code inside your method is executing then IsValid will be true.
You can turn that feature off by omitting the ApiController attribute. But, as Paulo points out, you don't want to: The ApiController method is doing exactly what you want by ensuring that you only accept data that is, at least, well-formed. This won't protect you against every hack, of course, but it's a very good start.
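As a sketch of what that looks like in practice (the controller, route, and model names are hypothetical, not from the post), a service method under [ApiController] can simply assume its input bound correctly:

```csharp
using Microsoft.AspNetCore.Mvc;

// With [ApiController] in place, a request whose body can't be bound to
// CustomerUpdate gets a 400 response before PutCustomer ever runs.
[ApiController]
[Route("api/[controller]")]
public class CustomersController : ControllerBase
{
    [HttpPut("{id}")]
    public IActionResult PutCustomer(string id, CustomerUpdate update)
    {
        // No need to check ModelState.IsValid here -- if we got this far,
        // model binding already succeeded.
        return Ok(update);
    }
}

public class CustomerUpdate
{
    public string Name { get; set; }
    public int CreditLimit { get; set; }
}
```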
Posted by Peter Vogel on 10/22/2019 at 11:03 AM
As I've noted in an earlier post, I don't use code snippets much (i.e. “at all”). One of the reasons that I don't is that I often have existing code that I want to integrate with whatever I'm getting from the code snippets library.
Some code snippets will integrate with existing code. If I first select some code before adding a code snippet, there are some snippets that will wrap that selected code in a useful way. For example, I might have code like this:
Customer cust;
cust = CustRepo.GetCustomerById("A123");
I could then select the second line of the code and pick the if code snippet. After adding my code snippet, I'd end up with:
if (true)
{
cust = CustRepo.GetCustomerById(custId);
}
That true in the if statement will already be selected, so I can just start typing to enter my test. That might mean ending up with this code:
if (custId != null)
{
cust = CustRepo.GetCustomerById(custId);
}
You have to be careful with this, though -- most snippets aren't so obliging. If you do this with the switch code snippet, for example, it will wipe out your code rather than wrap it. Maybe I should go back to that previous tip on code snippets -- it discussed how to customize existing snippets (hint: you specify where the currently selected text is to go in your snippet with ${TM_SELECTED_TEXT}).
Posted by Peter Vogel on 10/21/2019 at 12:06 PM
So you got excited about ASP.NET Core and started building an application in ASP.NET Core 2.0, 2.1, or 2.2. Now you're wondering how much work is involved in migrating that application to Version 3.0 which came out in late September.
If you've got a vanilla application the answer is ... it's not that painful. For example, to upgrade to Version 3.0, you just need to go into your csproj file and strip out almost everything to leave this:
<PropertyGroup>
<TargetFramework>netcoreapp3.0</TargetFramework>
</PropertyGroup>
And I do mean "almost everything." For example, any Package Reference elements that you have that reference Microsoft.AspNetCore packages can probably be deleted.
In ConfigureServices, you'll replace AddMvc with one or more of these method calls, depending on what technologies your application uses:
In the Startup.cs file's Configure method, you'll change the IHostingEvnvironment parameter to IWebHostingEnvironment. Inside the method, you'll replace your call to UseMvc with:
With UseMvc gone, you'll need to move any routes you specified in that method into UseEndpoints. That will look something like this:
app.UseEndpoints(endpoints =>
{
endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}/{id?}");
});
Those changes are all pretty benign because they all happen in one file. The other big change you'll probably have to make (especially if you've created a Web service) is more annoying: Newtonsoft.Json is no longer part of the base package. If you've been using Newtonsoft's JSON functionality, you can (if you're lucky) just switch to the System.Text.Json namespace. If you're unlucky, you'll have some code to track down and rewrite throughout your application.
Sorry about that.
There's more, of course, and there's a full guide from Microsoft. If you've got a relatively straightforward site, though, these changes may be all you need to do.
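For the lucky case, a minimal round trip with System.Text.Json looks like this sketch (the type and values are illustrative):

```csharp
using System;
using System.Text.Json;

public class Customer
{
    public string Name { get; set; }
    public int Id { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Serialize: roughly the System.Text.Json equivalent of
        // JsonConvert.SerializeObject
        var json = JsonSerializer.Serialize(new Customer { Name = "Ada", Id = 1 });

        // Deserialize: the equivalent of JsonConvert.DeserializeObject<T>
        var back = JsonSerializer.Deserialize<Customer>(json);

        Console.WriteLine(json);
        Console.WriteLine(back.Name);
    }
}
```

Watch for behavioral differences from Newtonsoft.Json, though: System.Text.Json is case-sensitive by default and has narrower support for things like reference loops and custom converters.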
Posted by Peter Vogel on 10/14/2019
Source: https://visualstudiomagazine.com/Blogs/Tool-Tracker/List/Blog-List.aspx?platform=378&m=1&Page=1
Smartling
A client implementation of the Smartling Translation API in Go.
It consists of a library for use in other projects, and a CLI tool.
Using the Library
You can find documentation at
import "github.com/99designs/smartling" client := smartling.NewClient(apiKey, projectId) client.List(smartling.ListRequest{ Limit: 20, })
CLI tool
The smartling CLI tool provides a familiar unix-like command interface to the Smartling API, as well as providing a project command to manage a project's local files.
Install it with
go get github.com/99designs/smartling/cli/smartling
or
run it as a docker container
docker run --rm -v MyProject:/work 99designs/smartling ls
COMMANDS: ls list remote files stat display the translation status of a remote file get downloads a remote file put uploads a local file rename renames a remote file rm removes a remote file lastmodified shows when a remote file was modified last locales list the locales for the project project manage local project files
The smartling project command
The smartling project commands are designed for some common use-cases in a dev or CI environment.
COMMANDS: files lists the local files status show the status of the project's remote files pull translate local project files using Smartling as a translation memory push upload local project files that contain untranslated strings
"Pushing" uploads files to a Smartling project using a prefix. By default it uses the git branch name, but you can also specify the desired prefix as an argument. A hash is also used in the prefix to prevent clobbering.
"Pulling" translates local project files using Smartling as a translation memory.
Other cool features:
- downloaded translation files are cached (default is 4 hours) in ~/.smartling/cache
- operations mostly happen concurrently
- filetypes get detected automatically
Configuration file
The CLI tool uses a project level config file called
smartling.yml for configuration.
Example config:
# Required config ApiKey: "11111111-2222-3333-4444-555555555555" # Smartling API Key ProjectId: "666666666" # Smartling Project Id Files: # Files in the project - translations/*.xlf # Globbing can be used, - foo/bar.xlf # as well as individual files # Optional config CacheMaxAge: "4h" # How long to cache translated files for FileType: "xliff" # Override the detected file type ParserConfig: # Add a custom configuration placeholder_format_custom: "%[^%]+%" PullFilePath: "{{ TrimSuffix .Path .Ext }}.{{.Locale}}{{.Ext}}" # The naming scheme when pulling files
Source: https://hub.docker.com/r/99designs/smartling/
You may come across issues in react where transitions and animations fire more than you'd like and in some cases it may be hard to control those renders, especially when dealing with libraries. In my case I had chart animations that would fire when the component was rendered and there wasn't any easy way to throttle that or prevent the duplicate transitions.
The docs point out that
This method only exists as a performance optimization. Do not rely on it to “prevent” a render, as this can lead to bugs.
Speaking of.
This is a memoization technique standing in for shouldComponentUpdate, since we're not using class components. To the react docs' point it can be buggy, but in my case it works wonders and prevents rampant animations.
import * as React from 'react';

export default React.memo(function MyComponent({ data, options }) {
  // Lazy initializer: the (placeholder) filtering runs only on the first render
  const [chartData] = React.useState(() => {
    // some advanced filtering for the chart
    return (data || []).filter(item => item !== item + 1);
  });

  return (
    <Chart {...options} data={chartData} />
  );
});
In this hypothetical chart component, if the parent re-renders, MyComponent would re-render too. Normally that isn't an issue, but in our case the chart has transitions that trigger on every render, and we cannot modify that API because it's a 3rd party. React.memo gives us an easy way to keep using hooks while having MyComponent render only once for the same props, which also means our filter logic on the data runs once — possibly a performance optimization as well.
Important note: hooks still work as you'd expect in a memoized component so you can use them and get renders on state change.
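If a memoized component still re-renders because a parent passes fresh object references each time, React.memo also accepts an optional second argument for custom props comparison (a sketch, reusing the MyComponent above):

```jsx
// Return true to tell React the props are "equal" and the render can be skipped.
// Here we only re-render (and re-animate) when the chart's data actually changes.
const MemoChart = React.memo(MyComponent, (prevProps, nextProps) => {
  return prevProps.data === nextProps.data;
});
```

Be careful that the comparator really covers every prop that affects output, or the component can get stuck showing stale UI.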
I think the majority use case is the one the react team intended, which is skipping unnecessary renders for performance, but this works perfectly for throttling renders in the case of UI side effects.
Source: https://practicaldev-herokuapp-com.global.ssl.fastly.net/localpathcomp/optimize-renders-in-reactjs-function-components-322h
0
Good evening all. I have a quick problem in my code. The build succeeds, but the debugger shows these errors:
First-chance exception at 0x013c43e6 in week2_project1_gamephysics.exe: 0xC0000005: Access violation writing location 0xcccccccc.
Unhandled exception at 0x013c43e6 in week2_project1_gamephysics.exe: 0xC0000005: Access violation writing location 0xcccccccc.
The program '[9888] week2_project1_gamephysics.exe: Native' has exited with code -1073741819 (0xc0000005).
Can anyone please help me solve this? I need it for my mathematical programming and physics teacher. She only teaches us the math, but does not know how to code it. Please and Thank You again.
#include <iostream>
#include <string>
#include <sstream>
#include <cmath>
#include <math.h>
#include <algorithm>
using namespace std;

int main()
{
    float sphere1[3], sphere2[3];
    string first_sphere[3], second_sphere[3];
    stringstream stream, stream2;

    cout << "What is the x,y,z for the 1st sphere? ";
    getline(cin, first_sphere[3]);
    stream << first_sphere[0];
    stream >> sphere1[0];
    stream << first_sphere[1];
    stream >> sphere1[1];
    stream << first_sphere[2];
    stream >> sphere1[2];

    cout << "What is the x, y, z for the 2nd sphere? ";
    getline(cin, second_sphere[3]);
    stream2 << second_sphere[0];
    stream2 >> sphere2[0];
    stream2 << second_sphere[1];
    stream2 >> sphere2[1];
    stream2 << second_sphere[2];
    stream2 >> sphere2[2];

    float RadiusofTwoSpheres(float *sphere1, float *sphere2);
    {
        cout << "The Radius for Sphere's 1 and 2 is... " << endl;
        cout << endl;
        cout << "R = " << (float)sqrt(pow(sphere2[0] - sphere1[0], 2) + pow(sphere2[1] - sphere1[1], 2) + pow(sphere2[2] - sphere1[2], 2)) << endl;
    };

    system("pause");
    return 0;
}
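The access violation comes from `getline(cin, first_sphere[3])`: index 3 is one past the end of a 3-element array, so the call writes into unowned memory (0xcccccccc is the uninitialized-stack fill pattern in a VC++ debug build). A sketch of a safer rewrite, reading the three coordinates directly as floats instead of going through a string array and stringstreams, could be:

```cpp
#include <array>
#include <cmath>
#include <iostream>

// Distance between the two centres -- the "R" the assignment asks for.
float distanceBetween(const std::array<float, 3>& a,
                      const std::array<float, 3>& b)
{
    return std::sqrt(std::pow(b[0] - a[0], 2.0f) +
                     std::pow(b[1] - a[1], 2.0f) +
                     std::pow(b[2] - a[2], 2.0f));
}

// Reads "x y z" for one sphere straight into floats -- no stringstream,
// and no out-of-bounds string array.
std::array<float, 3> readSphere(const char* label)
{
    std::array<float, 3> s{};
    std::cout << "What is the x, y, z for the " << label << " sphere? ";
    std::cin >> s[0] >> s[1] >> s[2];
    return s;
}
```

In main, `std::cout << "R = " << distanceBetween(readSphere("1st"), readSphere("2nd")) << '\n';` replaces the whole body of the original program.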
Source: https://www.daniweb.com/programming/software-development/threads/386953/x-y-z-for-a-sphere-array-problem
I'm trying to create an array and store values in it within a for loop, but I have failed so far. How can I do it with Twig?
I've read these, but being new to Twig makes it hard to adapt them to my case.
foreach ($array as &$value)
{
$new_array[] = $value;
}
foreach ($new_array as &$v)
{
echo $v;
}
{% for value in array %}
{% set new_array = new_array|merge([value]) %}
{% endfor %}
{% for v in new_array %}
{{ v }}
{% endfor %}
Solved by following Vision's suggestion:
{% set brands = [] %}
{% for car in cars %}
    {% if car not in brands %}
        {% set brands = brands|merge([car]) %}
    {% endif %}
{% endfor %}

{% for brand in brands %}
    {{ brand }}
{% endfor %}
Also I'll take bartek's comment into consideration next time. This was one off.
Source: https://codedump.io/share/7ALVwY7donec/1/creating-an-array-within-for-loop-with-twig
Docker.io have used their inaugural DockerCon conference to launch Docker 1.0, along with commercial support offerings and the rebranded Docker Hub registry.
Docker is used to ‘build, ship and run’ applications within Linux containers. Like a shipping container an application container provides the appropriate isolation so that things can be moved around without consideration for the contents. Docker provides the container, infrastructure (such as the latest Linux OS releases) provides the place to put containers, and developers put their code inside of containers. There are 3 key components to the environment:
- The Docker command line tool, which is used to manage the lifecycle of containers and the images that containers are built from.
- Dockerfile - a DevOps scripting language for creating Docker images.
- Image repositories. Docker.io runs the default public registry, now rebranded as Docker Hub. Users can also create their own private repositories or use hosted repositories such as Gandalf.
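To make the build–ship–run split concrete, here is a minimal sketch (the base image, paths, and application are illustrative, not from the article):

```dockerfile
# Build: developers describe the container contents in a Dockerfile
FROM ubuntu:14.04

# Bake the application into the image at build time
COPY ./app /opt/app
RUN apt-get update && apt-get install -y python

EXPOSE 8080
CMD ["python", "/opt/app/server.py"]
```

With the Docker command line tool, `docker build -t myorg/myapp .` produces an image ("build"), `docker push myorg/myapp` sends it to a registry such as Docker Hub ("ship"), and `docker run -p 8080:8080 myorg/myapp` starts a container from it on any host with a Docker daemon ("run").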
Docker Hub has had a facelift when compared to the old Docker Index, and now looks more like an application marketplace. At launch there is showcased content from CentOS, MongoDB, MySQL, Nginx, Node.js, PostgreSQL, Redis, Ubuntu and Wordpress. Private repositories, which have been in beta for some months, are now generally available. Users may get one private repo free, and there’s a tiered subscription scheme for larger numbers of private repos.
Docker.io are now offering support services for companies wanting to run Docker in production. ‘Long term support’ for Docker 1.0 has been committed to for 12 months from release, implying that there’s more change to come in what’s been a rapidly evolving project. Two tiers of support, standard and premium, are being offered. The pricing model for support hasn’t yet been advertised. Partnerships with systems integrators that can help with Docker projects were announced. The Docker team also have their own services offering with one day ‘Jumpstart’ for $4950 and three day ‘Bootstrap’ for $9990.
Container management systems like Docker are often compared and contrasted with virtualisation systems like VMware's ESX, Xen or KVM. The key difference is that containers share a Linux kernel, and resources managed by it, rather than having a separate operating system (and kernel) as happens with virtual machines. Docker was originally built on top of the LinuX Containers (LXC) project, but LXC was swapped out in favour of a native Go language libcontainer library with the March 2014 release. Docker makes use of cgroups within the kernel to provide isolation, network namespaces, and a union filesystem such as AUFS. In principle Docker can be run on any version of Linux that has cgroups. In practice a more recent kernel is usually desirable for security, stability and union filesystem support. Docker was included in the latest Ubuntu 14.04 release, and will also feature in Red Hat Enterprise Linux 7 and CentOS 7. There's also a trend towards new lightweight Linux distributions such as CoreOS and Red Hat's Project Atomic that are pared down to be minimal substrates for Docker.
Source: https://www.infoq.com/news/2014/06/docker_1.0
Added stricter rule on input for RSA private key operation (mathematically correct...
diff --git a/inc/bearssl_rand.h b/inc/bearssl_rand.h
index 0c3bc4d..37379d2 100644
--- a/inc/bearssl_rand.h
+++ b/inc/bearssl_rand.h
@@ -28,117 +28,224 @@
#include <stddef.h>
#include <stdint.h>
-/*
- * Pseudo-Random Generators
- * ------------------------
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** \file bearssl_rand.h
+ *
+ * # Pseudo-Random Generators
*
*).
- *
- * An object-oriented API is defined, with rules similar to that of
- * hash functions. The context structure for a PRNG must start with
- * a pointer to the vtable. The vtable contains the following fields:
- *
- * context_size size of the context structure for this PRNG
- * init initialize context with an initial seed
- * generate produce some pseudo-random bytes
- * update insert some additional seed
- *
- * Note that the init() method may accept additional parameters, provided
- * as a 'const void *' pointer at API level. These additional parameters
- * depend on the implemented PRNG.
+ * bytes can be added afterwards. Bytes produced depend on the seeds and
+ * also on the exact sequence of calls (including sizes requested for
+ * each call).
+ *
+ *
+ * ## Procedural and OOP API
+ *
+ *
+ * 2<sup>19</sup> bits (i.e. 64 kB of data); moreover, the context shall
+ * be reseeded at least once every 2<sup>48</sup> 2<sup>48</sup> requests
+ * without reseeding, is formally out of NIST specification. There is
+ * no currently known security penalty for exceeding the NIST limits,
+ * and, in any case, HMAC_DRBG usage in implementing SSL/TLS always
+ * stays much below these thresholds.
*/
+/**
+ * \brief Class type for PRNG implementations.
+ *
+ * A `br_prng_class` instance references the methods implementing a PRNG.
+ * Constant instances of this structure are defined for each implemented
+ * PRNG. Such instances are also called "vtables".
+ */
typedef struct br_prng_class_ br_prng_class;
struct br_prng_class_ {
+ /**
+ * \brief Size (in bytes) of the context structure appropriate for
+ * running this PRNG.
+ */
size_t context_size;
+
+ /**
+ * \brief Initialisation method.
+ *
+ * The context to initialise is provided as a pointer to its
+ * first field (the vtable pointer); this function sets that
+ * first field to a pointer to the vtable.
+ *
+ * The extra parameters depend on the implementation; each
+ * implementation defines what kind of extra parameters it
+ * expects (if any).
+ *
+ * Requirements on the initial seed depend on the implemented
+ * PRNG.
+ *
+ * \param ctx PRNG context to initialise.
+ * \param params extra parameters for the PRNG.
+ * \param seed initial seed.
+ * \param seed_len initial seed length (in bytes).
+ */
void (*init)(const br_prng_class **ctx, const void *params,
const void *seed, size_t seed_len);
+
+ /**
+ * \brief Random bytes generation.
+ *
+ * This method produces `len` pseudorandom bytes, in the `out`
+ * buffer. The context is updated accordingly.
+ *
+ * \param ctx PRNG context.
+ * \param out output buffer.
+ * \param len number of pseudorandom bytes to produce.
+ */
void (*generate)(const br_prng_class **ctx, void *out, size_t len);
+
+ /**
+ * \brief Inject additional seed bytes.
+ *
+ * The provided seed bytes are added into the PRNG internal
+ * entropy pool.
+ *
+ * \param ctx PRNG context.
+ * \param seed additional seed.
+ * \param seed_len additional seed length (in bytes).
+ */
void (*update)(const br_prng_class **ctx,
const void *seed, size_t seed_len);
};
-/*
- * HMAC_DRBG is a pseudo-random number generator based on HMAC (with
- * an underlying hash function). HMAC_DRBG is specified in NIST Special
- * Publication 800-90A. It works as a stateful machine:
- * -- It has an internal state.
- * -- The state can be updated with additional "entropy" (some bytes
- * provided from the outside).
- * -- Each request is for some bits (up to some limit). For each request,
- * an internal "reseed counter" is incremented.
- * -- When the reseed counter reaches a given threshold, a reseed is
- * necessary.
- *
- * Standard limits are quite high: each request can produce up to 2^19
- * bits (i.e. 64 kB of data), and the threshold for the reseed counter
- * is 2^48. In practice, we cannot really reach that reseed counter, so
- * the implementation simply omits the counter. Similarly, we consider
- * that it is up to callers NOT to ask for more than 64 kB of randomness
- * in one go. Under these conditions, this implementation cannot fail,
- * and thus functions need not return any status code.
- *
- * (Asking for more than 64 kB of data in one generate() call won't make
- * the implementation fail, and, as far as we know, it will not induce
- * any actual weakness; this is "merely" out of the formal usage range
- * defined for HMAC_DRBG.)
- *
- * A dedicated context structure (caller allocated, as usual) contains
- * the current PRNG state.
- *
- * For the OOP interface, the "additional parameters" are a pointer to
- * the class of the hash function to use.
+/**
+ * \brief Context for HMAC_DRBG.
+ *
+ * The context contents are opaque, except the first field, which
+ * supports OOP.
*/
-
typedef struct {
+ /**
+ * \brief Pointer to the vtable.
+ *
+ * This field is set with the initialisation method/function.
+ */
const br_prng_class *vtable;
+#ifndef BR_DOXYGEN_IGNORE
unsigned char K[64];
unsigned char V[64];
const br_hash_class *digest_class;
+#endif
} br_hmac_drbg_context;
+/**
+ * \brief Statically allocated, constant vtable for HMAC_DRBG.
+ */
extern const br_prng_class br_hmac_drbg_vtable;
-/*
- * Initialize a HMAC_DRBG instance, with the provided initial seed (of
- * 'len' bytes). The 'seed' used here is what is called, in SP 800-90A
- * terminology, the concatenation of the "seed", "nonce" and
- * "personalization string", in that order.
- *
- * Formally, the underlying digest can only be SHA-1 or one of the SHA-2
- * functions. This implementation also works with any other implemented
- * hash function (e.g. MD5), but such usage is non-standard and not
- * recommended.
+/**
+ * \brief Initialise a HMAC_DRBG instance, with the provided initial seed.
+ *
+ * \param ctx HMAC_DRBG context to initialise.
+ * \param digest_class vtable for the underlying hash function.
+ * \param seed initial seed.
+ * \param seed_len initial seed length (in bytes).
*/
 void br_hmac_drbg_init(br_hmac_drbg_context *ctx,
-	const br_hash_class *digest_class, const void *seed, size_t len);
+	const br_hash_class *digest_class, const void *seed, size_t seed_len);
-/*
- * Obtain some pseudorandom bits from HMAC_DRBG. The provided context
- * is updated. The output bits are written in 'out' ('len' bytes). The
- * size of the requested chunk of pseudorandom bits MUST NOT exceed
- * 64 kB (the function won't fail if more bytes are requested, but
- * the usage will be outside of the HMAC_DRBG specification limits).
+/**
+ * \brief Random bytes generation with HMAC_DRBG.
+ *
+ * This method produces `len` pseudorandom bytes, in the `out`
+ * buffer. The context is updated accordingly. Formally, requesting
+ * more than 65536 bytes in one request falls out of specification
+ * limits (but it won't fail).
+ *
+ * \param ctx HMAC_DRBG context.
+ * \param out output buffer.
+ * \param len number of pseudorandom bytes to produce.
*/
void br_hmac_drbg_generate(br_hmac_drbg_context *ctx, void *out, size_t len);
-/*
- * Update an HMAC_DRBG instance with some new entropy. The extra 'seed'
- * complements the current state but does not completely replace any
- * previous seed. The process is such that pushing new entropy, even of
- * questionable quality, will not make the output "less random" in any
- * practical way.
+/**
+ * \brief Update an HMAC_DRBG instance with some new entropy.
+ *
+ * \param ctx HMAC_DRBG context.
+ * \param seed additional seed.
+ * \param seed_len additional seed length (in bytes).
*/
 void br_hmac_drbg_update(br_hmac_drbg_context *ctx,
-	const void *seed, size_t len);
+	const void *seed, size_t seed_len);
-/*
- * Get the hash function implementation used by a given instance of
+/**
+ * \brief Get the hash function implementation used by a given instance of
  * HMAC_DRBG.
+ *
+ * This call MUST NOT be performed on a context which was not
+ * previously initialised.
+ *
+ * \param ctx   HMAC_DRBG context.
+ * \return  the hash function vtable.
  */
static inline const br_hash_class *
br_hmac_drbg_get_hash(const br_hmac_drbg_context *ctx)
@@ -146,4 +253,43 @@ br_hmac_drbg_get_hash(const br_hmac_drbg_context *ctx)
return ctx->digest_class;
}
+/**
+ * \brief Type of a PRNG seeder function.
+ *
+ * \param ctx PRNG context to seed.
+ * \return 1 on success, 0 on error.
+ */
+typedef int (*br_prng_seeder)(const br_prng_class **ctx);
+
+/**
+ * \brief Get the system seeder implementation. If no seeder is returned
+ * and `name` is not `NULL`, then `*name` is set to a pointer to the
+ * constant string `"none"`.
+ *
+ * \param name receiver for seeder name, or `NULL`.
+ * \return the system seeder, if available, or 0.
+ */
+br_prng_seeder br_prng_seeder_system(const char **name);
+
+#ifdef __cplusplus
+}
+#endif
+
#endif
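The HMAC_DRBG machinery documented in this header is simple enough to sketch outside of C. Below is a minimal, illustrative Python version of the SP 800-90A instantiate/update/generate loops with SHA-256, matching the header's description (no reseed counter, caller limits request sizes). The function names are mine, not BearSSL's; this is a sketch of the construction, not of the C API.

```python
import hmac
import hashlib

def drbg_update(K, V, seed=b""):
    # SP 800-90A HMAC_DRBG_Update: mix optional seed material into (K, V).
    K = hmac.new(K, V + b"\x00" + seed, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    if seed:
        K = hmac.new(K, V + b"\x01" + seed, hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
    return K, V

def drbg_init(seed):
    # Instantiate: K = 0x00..00, V = 0x01..01, then absorb the seed
    # (seed = "seed" || "nonce" || "personalization", as the header says).
    return drbg_update(b"\x00" * 32, b"\x01" * 32, seed)

def drbg_generate(K, V, n):
    # Produce n pseudorandom bytes, then refresh the state.
    out = b""
    while len(out) < n:
        V = hmac.new(K, V, hashlib.sha256).digest()
        out += V
    K, V = drbg_update(K, V)
    return out[:n], K, V

K, V = drbg_init(b"initial seed + nonce + personalization")
data, K, V = drbg_generate(K, V, 32)
```

The state-threading here mirrors the C context structure: `K` and `V` correspond to the `K[]` and `V[]` fields of `br_hmac_drbg_context`.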
The objects and classes in this appendix support reading and writing to the console, as well as to files and strings. To read and write to the console, include <iostream>:
#include <iostream>

cout << "Hello, world." << endl;
To write data to a string, include <sstream>. String streams support a member function (in addition to the ones listed here), str, which returns the data in string format.
#include <sstream>

stringstream s_out;
s_out << "The value of i is " << i << endl;
string s = s_out.str();
The objects listed in Table G.1 provide predeclared streams for reading from and writing to the console. Each of them supports the appropriate stream operator (<< or >>). For example:
cout ...
Roland Barcia is a Senior Technical Staff Member (STSM) and lead Web 2.0 architect for IBM Software Services for WebSphere.
How to kill REST? Add transactions.
Why do we do this to ourselves? As my friend Keys said, we go through endless cycles as an industry, solving the same problems again and again. Technology X is too complex, so we create technology Y because it is much simpler. Then technology Y lacks features, so let's add them. Now technology Y is too complex, so let's create technology Z, and so forth. I have gone through distributed technologies like ONC RPC, COM, CORBA, EJB, Web Services, SCA. I am a fan of REST because it exploited existing HTTP infrastructure.
So why the rant? Because I see people doing exactly what destroys simple technologies. Most recent is this link: TX support for JAX-RS, and REST and transactions. Adding transactional support to JAX-RS? Why would I want distributed transactions for REST? Really, don't we learn anything in this industry? Why on earth would we want to propagate transactions over REST? We already did that with SOAP. Or why would I want to expose a Tx API over multiple HTTP calls? Leaving transactions open across multiple HTTP calls has already proven to be a bad practice. REST is stateless, and that is what makes middleware scalable.
A Business process or facade itself can be a resource (/MyBusinessProcess), but the implementation of that service can use regular Tx API's and hide those details.
REST is about "Exploitation." Web Services and SCA are about "Abstraction." Let's put a stake in the ground and use the right technologies for the job rather than destroy something as beautiful as REST.
Situational Environments, Clouds in the Enterprise.
All these things add up. Cloudburst in my eyes provides a great opportunity to play a huge role. In Web 2.0, there is the notion of Situational Applications. These are applications that I want to build and deploy quickly, and for a short period of time. Usually, the application has a short shelf life, fulfilling an immediate need and then thrown away.
Cloudburst provides the opportunity to build "Situational Environments." Cloudburst can be used to quickly create Environments for specific needs in the development life cycle, automating tasks that often suck the life out of a project.
Imagine standing up a Quality Assurance Test environment, and when done, throwing it away and using the hardware for perhaps Quality Assurance of another project.
To me, this is an administrator's dream. I could see a new class of Cloud based admin applications asking development teams to deploy into the Admin's Cloud, generating environments to their liking, based on proven models and best practices.
RESTing at Impact !!!
Hi guys and gals, hope all is well. IBM Impact 2009 is on its way. I will be there. Somehow, I managed to end up presenting 7 lectures and 4 labs. I am not sure why I do this to myself. I don't even plan out to do this many sessions. Last year, I took great pleasure in delegating. This year, the delegation just did not happen. Though I do have great co-presenters and lab advocates, so I may be able to escape here and there to see some other great sessions. Even though I am quite busy, I plan to take a REST!!!
I am the track lead for Application Development and Web 2.0. One of the major themes for this track is REST. We have great sessions in the technical track around REST. Here is the list of sessions (Lectures only, will Blog on Labs soon), their speakers, and their time slots:
A sMash in the Clouds
In case you haven't heard, IBM and Amazon have partnered to provide you with the ability to use Amazon EC2 to build and run a range of IBM platform technologies..
One of the development platforms available is WebSphere sMash. This means that you can run WebSphere sMash applications inside an Amazon Cloud. You can think of a Cloud as "Infrastructure as a Service." Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
I authored a book back in 2004 called IBM WebSphere: Deployment and Advanced Configuration. The book focused on what we (my co-authors and I) thought was the hardest part of any application development effort: the deployment. I have always been one to love the application development side of the world, but I know that projects succeed or fail because of a lack of understanding of this process.
I will say that Clouds do not eliminate process, testing, and best practices. But it does reduce the cost, hardware planning phases, and hardware maintenance.
One Best Practice that would seem to apply heavily in a Cloud is RESTful Architecture. I have been on many engagements where scalability problems were due mainly to middleware state. sMash enforces a RESTful style of development, leveraging the browser for some of that Wizard state, and allowing you to write more stateless applications.
To get started with WebSphere sMash and Amazon's EC2, you can start here:
I needed some REST... Practices. 15 or so Practices for building RESTful Web Services.
Sorry for not blogging for a while. I have been quite busy getting some REST.
I have just published 2 new articles:
Implementing and testing server-driven content negotiation for your REST resources with WebSphere sMash
Scaling WebSphere sMash Web 2.0 applications: Part 1: Overview of WebSphere sMash topologies
But I have been spending a lot of time helping some of my customers with REST Architecture. I figured I would post some here. I will give some bullet points and will be publishing papers soon. Note, these are not in any order of importance. All are important. Note, I am just putting pointers in rough format here, and I will publish some more articles on the topics later.
* Identifiable resources (URIs)
* Uniform Interface
GET, PUT, POST, DELETE
* Stateless Communication
Scalable, loose coupling
* Resource Representations
Multiple ways to represent (PDF, HTML, XML) – via content types
HTTP has negotiation capabilities (e.g. ACCEPT)
* Hypermedia
Server provides links to resources
Allows for evolution
* What are the resource identifiers?
* What are the methods supported at each resource?
Do you support POST, GET, PUT, DELETE?
* What are the status codes returned?
HTTP defines standard return codes
200 = ok
301 = moved permanently
400 = bad request , 401 = unauthorized , 404 = not found
Example:
* GET / HEAD (No Side Effects)
* Optimistic concurrency via E-Tags, Timestamps
or
* Check-in/Check-out semantics if stricter policy is needed, but never hold database locks
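The optimistic path can be sketched in a few lines. The helper names below are illustrative (Python for brevity), not from any particular framework: the server derives an ETag from the stored representation and rejects a PUT whose If-Match header is stale, instead of holding database locks.

```python
import hashlib

def etag(representation: bytes) -> str:
    # Derive a strong ETag from the stored representation.
    return '"' + hashlib.sha256(representation).hexdigest()[:16] + '"'

def conditional_put(stored: bytes, if_match: str, new_body: bytes):
    # Reject the update if the client's If-Match no longer matches.
    if if_match != etag(stored):
        return 412, stored        # 412 Precondition Failed
    return 200, new_body          # accept the update

current = b'{"balance": 100}'
status, current = conditional_put(current, etag(current), b'{"balance": 90}')
```

A client that lost the race gets a 412 and must re-GET the resource before retrying; no lock is ever held between requests.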
REST defines no standards on query parameter names. For example, you may choose to provide standard params for filtering, Sorting, and Ranges.
/resource?filter="name startsWith b && age gt 30"
Even if you start out small, if your data grows, you will need to provide these.
Prefer Linking over tightly coupled data. Use ATOM as a payload for establishing relationships
Sometimes you need to do bulk loading of related data. Some Techniques:
* Special parameter
/Order?loadRelated=LineItems
Very quickly starts becoming RPC
* Headers and Schemas (Better)
Accept: application/eager+atom+xml
Accept: application/atom+xml
* Follow a Close Spec (slow moving)/Open Extension (Popular Extensions become the next wave of standards) Closed/Open
b.) Use Header (Cleaner looking)
Accept: application/myVer+atom+xml
Custom Header
With REST, I can easily secure URI's, but make sure to implement Instance Based security for User sensitive Data. If I log in successfully as /account/roland, I have to protect against /account/mike.
The service provider should make sure to access the identity from the Subject or Context and compare to the URI requested.
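That check can be sketched simply (hypothetical helper, Python for brevity): extract the account owner from the requested URI and compare it to the authenticated principal before serving the resource.

```python
def owns_resource(path: str, authenticated_user: str) -> bool:
    # /account/<owner> -- only <owner> may access it.
    parts = [p for p in path.split("/") if p]
    return (len(parts) >= 2
            and parts[0] == "account"
            and parts[1] == authenticated_user)

# Logged in as "roland":
allowed = owns_resource("/account/roland", "roland")   # the owner's own account
blocked = owns_resource("/account/mike", "roland")     # someone else's account
```

URI-level security rules alone cannot express this; the comparison against the authenticated identity has to happen in the service provider.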
Do not add semantics to headers beyond what they are meant to carry. Routers and firewalls make use of many headers in strange and curious ways.
Partial Updates
Batch Updates
Query params (Be careful, could become RPC Very Fast)
The HTTP Spec is looking at a new verb called PATCH
Atom can be a great way to document since you can link to the service URI Patterns.
What should REST Documentation have?
Documentation as a Service
/resource/formats
Catalog, Registry
What are my URI Patterns
What Headers do I support?
What Verbs are valid for my resource?
What Response codes do I support?
What do they mean for my service?
What formats do I support? (JSON, and ATOM)
Which header do I use? Accept, query param?
What schema describe it?
I demonstrate this here.
WebSphere sMash 1.1 is available
WebSphere sMash 1.1 is now available. Please go to the Project Zero Website to download the Developer's Edition.
The following Blog post contains an overview on 1.1 contents.
Google's Application Centric Browser, very similar to what WebSphere sMash does on the server.
A Beta Driver of the new Google Browser has been released for Windows (which leaves us Mac users waiting to try it). There have been a lot of blogs and comments on the new browser, so I am not going to discuss a laundry list of features, but focus in on the application-centric approach.
I love that each tab is its own process, and that application isolation can be achieved on the client. One thing I love about Web 2.0 and Ajax is the focus on applications. Having your desktop run different apps rather than one monolithic browser process obviously solves a lot of issues.
By the way, this is exactly the model that WebSphere sMash (Project Zero) has for applications on the server. Every application runs in its own process, and you do not deploy an app to a server, you just run the app, which is the server.
Today, I just released a new article (with Steve Ims as a co-author) on WebSphere sMash (this is actually a rewrite of last years article we did on P0, but with the latest driver and tools). We discuss application centric design of sMash in the article.
RESTful Facade Pattern - turning verbs to nouns.
POST /Transfer { accountFrom: '/Account/223', accountTo: '/Account/333' }
GET /Customer/custId/Transfer returns list of Transfers for customer
GET /Customer/custId/Account/accountId/Transfer returns a list of transfers involving this account
But the semantics can get tricky. What if I want only the transfers involving fromAccount?
I could use query parameters:
GET /Customer/custId/Transfer?fromAccount=accountId
Other discussions on this:
Social Web takes Social skills, so what about private people?
I have not written a blog entry in a couple months. To tell you the truth, I struggle with blogging. I consider myself a private person in many aspects of my life. It is funny that my role at work is forcing me to be more 'social' than I would normally be. I like people, don't get me wrong, but the social web takes social skills.
Anyone who knows me knows I can talk one on one. Once I get started, it is hard to shut me up. But in crowds I tend to like to move to the back and observe.
It is funny, when I go to my parents house with my wife and kids, my wife and parents dominate the conversation. I do try to get a word in edge wise, but I find myself out quickly. Part of that is because my Father and Wife just interrupt me. So I have become used to it. I actually leave the room and play with my kids and my sister's kids like a big kid myself.
I started to think about the reason, and I found that the conversation actually does not interest me. They talk about 'so and so' over here, and that person over there, typically called gossiping, which I do not like to do (and the Bible, since I am a Christian, actually discourages it).
So relating that to the 'social web', one can very quickly find that if you are in the wrong community, you may not have much to say. Now, I have a lot to say about Web 2.0, Middleware, etc... so there are other things.
I look at social websites like Facebook. I have an account, with friends. They all post a lot of pictures, send virtual gifts, and other types of applications. I do not like these things too much. First off, I would much rather receive a real gift. Also, I don't like posting pictures of my family. I seem to think the web has a lot of security problems and the pictures wind up in some sick-o site. The status feature is interesting, except, do I want the world to know what I am doing. Though I enjoy reading what others are doing. I have found the scrabble app as cool, and I play a lot with my wife, so that is fun.
I like to share, I try to teach my kids this all the time. But do I want to share what I am doing, or where I am flying?
I think it comes down to is I like to interact with people face to face, I like being around people and listening to them more than talking, and I like to contribute what I know, but not every single thing I am doing. I also like to work in smaller groups of people.
But what about more 'private' people. Maybe they are read only members of the social web.
This is a rant, I know, and I am going to post next on a technical topic. But isn't this what blogging is supposed to be about?
Anyway, it takes 'social' skills to be on the social web. This is something I have to work on I guess.
Dojo Explorer on Dojo Campus
I have found several cool tutorials on Dojo from DojoCampus.org
Specifically, the Dojo Explorer is my favorite Feature. You can navigate through various Dojo Libraries and Widgets and sample it and see the code.
Persistence in the Enterprise
I have just finished 5 straight weeks of conferences starting with IBM Impact and finishing last week at the WebSphere Services Technical Conference which is an internal conference. I am glad it finished. Last week, my book, Persistence in the Enterprise was released.
Besides myself, my coauthors are Geoff Hambrick, Kyle Brown, Robert Peterson, and Kulvir Bhogal. The book focuses on the persistence layer of an application. The first half of the book is primarily focused on things like requirements (persistence focused), design, best practices, and criteria for selecting a persistence layer. The second half of the book focuses on implementing those using 5 different technologies: JDBC, IBatis, Hibernate, OpenJPA, and pureQuery.
WebSphere sMash powered by Project Zero
Today, at IBM Impact, we announced a suite of Mashup Solutions, including WebSphere sMash. IBM WebSphere sMash is a development platform that supports dynamic scripting languages and aggregates data from various sources. It employs REST. The developer edition of sMash remains free, but a commercial platform will be licensed. WebSphere sMash is based on IBM's Project Zero. In addition to WebSphere sMash, we also announced the IBM Mashup Center.
IBM Mashup Center: The mashup center is a group of Web applications that account for IT requirements such as security and governance. Underpinning the Mashup Center is IBM’s Lotus Mashups and InfoSphere MashupHub. IBM has been experimenting with mashups internally. Among the components:
IBM IMPACT and Web 2.0
Hi everyone. Next week is IBM Impact. I plan to be there, God willing. (I am recovering from a bronchitis infection that had me in the hospital on Sunday). I am excited at all the Web 2.0 content. There is going to be a lot of sessions on Project Zero, the WebSphere Web 2.0 Feature Pack, Ajax, and others. Below are the list of Sessions I will be speaking at or part of:
I will also be participating in session: 1519: Showcase: Web 2.0 innovations: Mon, 7/Apr 03:45 PM - 05:00 PM MGM Grand: Room 118
There is also Web 2.0/App Dev Tech Zone. I will be host for the time slot below.
I will also be there during Jason and Kyle's 'What is Project Zero and how does it help you?'
Hopefully, my bronchitis goes away and I will be good to go.
You can look over the Web 2.0 track here:
Chat on Dojo Toolkit and WebSphere Web 2.0 FP
Several of my colleagues and I will be hosting a chat. We will be answering questions on new features in Dojo 1.1 as well as questions on the WAS Web 2.0 FP.
Coding Servlets is cool again
With the EJB 3 and Ajax Patterns, I am finding it much easier to code Service Endpoints in Servlets, rather than using frameworks sometimes. Traditionally, the use of MVC abstractions was favored because a lot of view logic was handled in the Container. But for Ajax apps coded in something like Dojo, I find myself just writing RESTful Services for Java EE apps in Servlets. The fact that I can inject Session Beans right into them makes it easy to hook up Ajax endpoints to business logic. Below is an example of a Servlet using the JSON4J API that is part of the WebSphere Web 2.0 Feature Pack.
public class Product extends javax.servlet.http.HttpServlet
{
	@EJB
	private ProductSearchService productSearchService;

	protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
		response.setHeader("Content-Type", "application/json");
		if (request.getPathInfo() != null)
		{
			try
			{
				int productId = Integer.parseInt(request.getPathInfo().substring(1));
				org.pwte.example.domain.Product product = productSearchService.loadProduct(productId);
				JSONObject productJson = ProductToJsonHelper.serializeProductToJson(product);
				productJson.serialize(response.getWriter());
			}
			catch (ProductDoesNotExistException e)
			{
				response.sendError(HttpServletResponse.SC_NOT_FOUND);
			}
			catch (Exception e)
			{
				response.sendError(HttpServletResponse.SC_BAD_REQUEST);
			}
		}
		else if (request.getParameter("category") != null)
		{
			try
			{
				int categoryId = Integer.parseInt(request.getParameter("category"));
				List<org.pwte.example.domain.Product> products = productSearchService.loadProductsByCategory(categoryId);
				JSONArray productArray = new JSONArray();
				for (org.pwte.example.domain.Product product : products)
				{
					JSONObject jsonProduct = ProductToJsonHelper.serializeProductToJson(product);
					productArray.add(jsonProduct);
				}
				productArray.serialize(response.getWriter());
			}
			catch (Exception e)
			{
				response.sendError(HttpServletResponse.SC_BAD_REQUEST);
			}
		}
		else response.sendError(HttpServletResponse.SC_BAD_REQUEST);
	}
}
Of course, writing logic to move between POJO's and JSON is not as fun. Forwarding to a JSP template to do this could be easier. Or using a higher level API like that in the RPC package may be preferred as well.
weaviate
How to query data?
How to query data in Weaviate?
Introduction
- Weaviate has RESTful API endpoints to query data, but Weaviate’s query language is GraphQL.
- You can query a Weaviate after you’ve created a schema and populated it with data.
Prerequisites
- Connect to a Weaviate instance.
If you haven’t set up a Weaviate instance yet, check the Getting started guide. In this guide we assume your instance is running with the text2vec-contextionary vectorization module.
- Upload a schema.
Learn how to create and upload a schema here. In this guide we assume a similar schema has been uploaded, with the classes Publication, Article and Author.
- Add data.
Make sure there is data available in your Weaviate instance; you can read how to do this in the previous guide. In this tutorial we assume there are data objects of Publications, Articles and Authors present.
How to get data
1. Define a query.
The easiest GraphQL queries to get data from Weaviate are Get{} queries. Let’s say we want to retrieve all the articles (their title, authors, url and wordCount) that are published by “Wired”; the GraphQL query looks as follows:
{ Get { Article ( where: { operator:Equal, valueString:"Wired", path: ["inPublication", "Publication", "name"] }) { title url wordCount hasAuthors{ ... on Author { name } } } } }
2. Send the query
There are multiple ways to connect to Weaviate’s (GraphQL) API to query data. Raw GraphQL queries can be posted directly in the GraphiQL interface in the Console, but they can also be sent via curl or the Python, JavaScript, Go and Java clients.
{ Get { Article (where: { operator:Equal, valueString:"Wired", path: ["inPublication", "Publication", "name"] }){ title url wordCount hasAuthors{ ... on Author { name } } } } }
import weaviate client = weaviate.Client("") get_articles_query = """ { Get { Article (where: { operator:Equal, valueString:"Wired", path: ["inPublication", "Publication", "name"] }){ title url wordCount hasAuthors{ ... on Author { name } } } } } """ query_result = client.query.raw(get_articles_query) print(query_result)
const weaviate = require("weaviate-client"); const client = weaviate.client({ scheme: 'http', host: 'localhost:8080', }); client.graphql .get() .withClassName('Article') .withFields('title url wordCount hasAuthors{ ... on Author { name }}') .withWhere({ operator: 'Equal', path: ['inPublication', 'Publication', 'name'], valueString: 'Wired' }) .do() .then(res => { console.log(res) }) .catch(err => { console.error(err) });
cfg := weaviate.Config{ Host: "localhost:8080", Scheme: "http", } client := weaviate.New(cfg) where := `{ path: ["inPublication", "Publication", "name"] operator: Equal, valueString: "Wired" }` ctx := context.Background() result, err := client.GraphQL().Get().Objects(). WithClassName("Article"). WithFields(`title url wordCount hasAuthors { ... on Author { name } }`). WithWhere(where). Do(ctx) if err != nil { panic(err) } fmt.Printf("%v", result)
Config config = new Config("http", "localhost:8080"); WeaviateClient client = new WeaviateClient(config); String where = "{ path: [\"inPublication\", \"Publication\", \"name\"], operator: Equal, valueString:\"Wired\" }"; Result<GraphQLResponse> result = client.graphQL().get() .withClassName("Article") .withFields("title url wordCount hasAuthors { ... on Author { name } }") .withWhere(where) .run(); if (result.hasErrors()) { System.out.println(result.getError()); return; } System.out.println(result.getResult());
$ echo '{ "query": "{ Get { Article (where: { operator: Equal, valueString:\"Wired\", path: [\"inPublication\", \"Publication\", \"name\"] }){ title url wordCount hasAuthors{ ... on Author { name } } } } }" }' | curl \ -X POST \ -H 'Content-Type: application/json' \ -d @- \
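Whichever client you use, the raw HTTP request is the same: the GraphQL document goes into a JSON object under a `query` key and is POSTed to the GraphQL endpoint, as the curl example shows. A small standard-library Python sketch of that wrapping (the `localhost:8080` host follows the JavaScript example above; the `/v1/graphql` path is the usual Weaviate endpoint):

```python
import json

# The raw GraphQL document, as in the examples above (fields trimmed).
graphql = """
{ Get { Article(where: {
    operator: Equal,
    valueString: "Wired",
    path: ["inPublication", "Publication", "name"]
  }) { title url wordCount } } }
"""

# Wrap it exactly as the curl example does: {"query": "..."}.
payload = json.dumps({"query": graphql})

# The request itself would then be something like:
# urllib.request.Request("http://localhost:8080/v1/graphql",
#                        data=payload.encode("utf-8"),
#                        headers={"Content-Type": "application/json"})
```

Note that the GraphQL string is sent as the value of a JSON field, so any quotes inside it must be escaped — which `json.dumps` handles for you.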
3. View the results
The results of the previous query look something like the following JSON:
{ "data": { "Get": { "Article": [ { "hasAuthors": [ { "name": "Condé Nast" }, { "name": "Vince Beise" }, { "name": "Vince Beiser" }, { "name": "Wired Staff" } ], "title": "The War Vet, the Dating Site, and the Phone Call From Hell", "url": "", "wordCount": 731 }, { "hasAuthors": [ { "name": "Matt Simon" }, { "name": "Matt Simo" } ], "title": "Not to Ruin the Super Bowl, but the Sea Is Consuming Miami", "url": "", "wordCount": 586 }, { "hasAuthors": [ { "name": "Condé Nast" }, { "name": "Gregory Barbe" } ], "title": "The Startup That Aims to Decrypt Blockchain for Business", "url": "", "wordCount": 636 }, { "hasAuthors": [ { "name": "Lauren Goode" }, { "name": "Lauren Good" } ], "title": "The Biggest Apple Maps Change Is One You Can't See", "url": "", "wordCount": 543 }, { "hasAuthors": [ { "name": "Will Knight" }, { "name": "Will Knigh" } ], "title": "Apple's Latest Deal Shows How AI Is Moving Right Onto Devices", "url": "", "wordCount": 680 }, { "hasAuthors": [ { "name": "Condé Nast" } ], "title": "Traveling for the Holidays? Here's How to Not Get Sick", "url": "", "wordCount": 608 }, { "hasAuthors": [ { "name": "Graeme Mcmillan" }, { "name": "Graeme Mcmilla" } ], "title": "Sanders and Warren's Big Debate Dust-Up Tops This Week's Internet News Roundup", "url": "", "wordCount": 364 }, { "hasAuthors": [ { "name": "Condé Nast" }, { "name": "Michael Hard" }, { "name": "Laura Mallonee" }, { "name": "Michael Hardy" } ], "title": "The Neveda Town Where Storm Area 51 Sort Of Happened", "url": "", "wordCount": 511 }, { "hasAuthors": [ { "name": "Laura Mallonee" }, { "name": "Laura Mallone" }, { "name": "Chris Colin" }, { "name": "Adam Rogers" } ], "title": "Homelessness in the Living Rooms of the Rich", "url": "", "wordCount": 502 } ] } }, "errors": null }
Next steps
- Make more advanced queries, for example to explore data with semantic search, in the next tutorial.
- View other GraphQL methods in the GraphQL documentation.
More resources
If you can’t find the answer to your question here, please look at the:
- Frequently Asked Questions. Or,
- Knowledge base of old issues. Or,
- For questions: Stackoverflow. Or,
- For issues: Github. Or,
- Ask your question in the Slack channel: Slack.
import "go.chromium.org/luci/server/auth/internal"
func RegisterClientFactory(f ClientFactory)
RegisterClientFactory allows external module to provide implementation of the ClientFactory.
This is needed to resolve module dependency cycle between server/auth and server/auth/internal.
See init() in server/auth/client.go.
If client factory is not set, Do(...) uses http.DefaultClient. This happens in unit tests for various auth/* subpackages.
WithTestTransport puts a testing transport in the context to use for fetches.
ClientFactory knows how to produce an http.Client that attaches proper OAuth headers.
If 'scopes' is empty, the factory should return a client that makes anonymous requests.
type Request struct { Method string // HTTP method to use URL string // URL to access Scopes []string // OAuth2 scopes to authenticate with or anonymous call if empty Headers map[string]string // optional map with request headers Body interface{} // object to convert to JSON and send as body or []byte with the body Out interface{} // where to deserialize the response to }
Request represents one JSON REST API request.
Do performs an HTTP request with retries on transient errors.
It can be used to make GET or DELETE requests (if Body is nil) or POST or PUT requests (if Body is not nil). In the latter case the body will be serialized to JSON.
Respects context's deadline and cancellation.
TestTransportCallback is used from unit tests.
Package internal imports 10 packages and is imported by 8 packages. Updated 2019-12-21.
https://godoc.org/go.chromium.org/luci/server/auth/internal
greater_equal
- paddle.greater_equal(x, y, name=None)
This OP returns the truth value of \(x >= y\) elementwise, which is equivalent to the overloaded operator >=.
NOTICE: The output of this OP has no gradient.
- Parameters
x (Tensor) – First input to compare which is N-D tensor. The input data type should be float32, float64, int32, int64.
y (Tensor) – Second input to compare which is N-D tensor. The input data type should be float32, float64, int32, int64.
name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.
- Returns
The tensor storing the output; the output shape is the same as the input x.
- Return type
Tensor, the output data type is bool
Examples
import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.greater_equal(x, y)
print(result1)  # result1 = [True False True]
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/greater_equal_en.html
Dubstep Remixer iPad app
Budget $1500-3000 CAD
I need someone to program an app I have designed the interface for. A library of dubstep sounds, synths and basslines is an asset.
features include:
import from iTunes
share on Facebook/ twitter
sound editing
inapp purchase
4 freelancers bid an average of CA$2,475 for this job
Our company can take your project. We'll be glad to cooperate with you and convert your idea into a product. If you are interested, please check your PMB and contact us via email
Hello Sir, Please check PMB for listing and confident to complete the project with quality services.. Best Regards
22 Mobile experts working in the team. Professional work Guaranteed. 150+ apps coded and live already.
https://www.fr.freelancer.com/projects/music-ipad/dubstep-remixer-ipad-app.2337568/
# Mapping
Prefect introduces a flexible map/reduce model for dynamically executing parallel tasks.
Classic "map/reduce" is a powerful two-stage programming model that can be used to distribute and parallelize work (the "map" phase) before collecting and processing all the results (the "reduce" phase).
A typical map/reduce setup requires three things:
- An iterable input
- A "map" function that operates on a single item at a time
- A "reduce" function that operates on a group of items at once
For example, we could use map/reduce to take a list of numbers, increment them all by one, and sum the result:
```python
numbers = [1, 2, 3]
map_fn = lambda x: x + 1
reduce_fn = lambda x: sum(x)

mapped_result = [map_fn(n) for n in numbers]
reduced_result = reduce_fn(mapped_result)

assert reduced_result == 9
```
# Prefect approach
Prefect's version of map/reduce is far more flexible than the classic implementation.
When a task is mapped, Prefect automatically creates a copy of the task for each element of its input data. The copy -- referred to as a "child" task -- is applied only to that element. This means that mapped tasks actually represent the computations of many individual children tasks.
If a "normal" (non-mapped) task depends on a mapped task, Prefect automatically applies a reduce operation to gather the mapped results and pass them to the downstream task.
However, if a mapped task relies on another mapped task, Prefect does not reduce the upstream result. Instead, it connects the nth upstream child to the nth downstream child, creating independent parallel pipelines.
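In plain Python terms, the two cases differ like this (a sketch of the semantics only, not Prefect code):

```python
numbers = [1, 2, 3]
inc = lambda x: x + 1

# mapped -> mapped: the nth child feeds the nth child, with no reduce in
# between, so each element flows through its own independent pipeline.
pipeline_results = [inc(inc(n)) for n in numbers]

# mapped -> non-mapped: the children are gathered (reduced) into a list first.
total = sum(inc(n) for n in numbers)

assert pipeline_results == [3, 4, 5]
assert total == 9
```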
Here's how the previous example would look as a Prefect flow:
```python
from prefect import Flow, task

numbers = [1, 2, 3]
map_fn = task(lambda x: x + 1)
reduce_fn = task(lambda x: sum(x))

with Flow('Map Reduce') as flow:
    mapped_result = map_fn.map(numbers)
    reduced_result = reduce_fn(mapped_result)

state = flow.run()
assert state.result[reduced_result].result == 9
```
Dynamically-generated children tasks are first-class tasks
Even though the user didn't create them explicitly, the children tasks of a mapped task are first-class Prefect tasks. They can do anything a "normal" task can do, including succeed, fail, retry, pause, or skip.
# Simple mapping
The simplest Prefect map takes a task and applies it to each element of its inputs.
For example, if we define a task for adding 10 to a number, we can trivially apply that task to each element of a list:
```python
from prefect import Flow, task

@task
def add_ten(x):
    return x + 10

with Flow('simple map') as flow:
    mapped_result = add_ten.map([1, 2, 3])
```
The result of the `mapped_result` task will be `[11, 12, 13]` when the flow is run.
# Iterated mapping
Since `mapped_result` is nothing more than a task with an iterable result, we can immediately use it as the input for another round of mapping:

```python
from prefect import Flow, task

@task
def add_ten(x):
    return x + 10

with Flow('iterated map') as flow:
    mapped_result = add_ten.map([1, 2, 3])
    mapped_result_2 = add_ten.map(mapped_result)
```
When this flow runs, the result of the `mapped_result_2` task will be `[21, 22, 23]`, which is the result of applying the mapped function twice.
No reduce required
Even though we observed that the result of `mapped_result` was a list, Prefect won't apply a reduce step to gather that list unless the user requires it. In this example, we never needed the entire list (we only needed each of its elements), so no reduce took place. The two mapped tasks generated three completely-independent pipelines, each one containing two tasks.
# Flat-mapping
In general, each layer of an iterated map has the same number of children: if you map over a list of N items, you produce N results. Sometimes, it's useful to produce a sequence of results for each mapped input. For example, you might map over a list of directories to load all the files in each directory, then want to map over each file. Prefect provides a `flatten` annotation to make this possible. When the input to a map is marked as `flatten`, the input is assumed to be a list-of-lists and is "un-nested" into a single list prior to applying the map.

Using `flatten()` is more efficient than adding a reduce step to an otherwise-iterated map, because Prefect will compute the flatmap without gathering all data to a single worker.
```python
from prefect import Flow, task, flatten

@task
def A():
    return [1, 2, 3]

@task
def B(x):
    return list(range(x))

@task
def C(y):
    return y + 100

with Flow('flat map') as f:
    a = A()                  # [1, 2, 3]
    b = B.map(x=a)           # [[0], [0, 1], [0, 1, 2]]
    c = C.map(y=flatten(b))  # [100, 100, 101, 100, 101, 102]
```
TIP
flatten() can be used on any task input, even if it isn't being mapped over.
# Reduce
Prefect automatically gathers mapped results into a list if they are needed by a non-mapped task. Therefore, all users need to do to "reduce" a mapped result is supply it to a task!
```python
from prefect import Flow, task

@task
def add_ten(x):
    return x + 10

@task
def sum_numbers(y):
    return sum(y)

with Flow('reduce') as flow:
    mapped_result = add_ten.map([1, 2, 3])
    mapped_result_2 = add_ten.map(mapped_result)
    reduced_result = sum_numbers(mapped_result_2)
```
In this example, `sum_numbers` received an automatically-reduced list of results from `mapped_result_2`. It appropriately computes the sum: 66.
# Unmapped inputs
When a task is mapped over its inputs, it retains the same call signature and arguments, but iterates over the inputs to generate its children tasks. Sometimes, we don't want to iterate over one of the inputs -- perhaps it's a constant value, or a list that's required in its entirety. Prefect supplies a convenient `unmapped()` annotation for this case.
```python
from prefect import Flow, task, unmapped

@task
def add(x, y):
    return x + y

with Flow('unmapped inputs') as flow:
    result = add.map(x=[1, 2, 3], y=unmapped(10))
```
This map will iterate over the `x` inputs but not over the `y` input. The result will be `[11, 12, 13]`.
The `unmapped` annotation can be applied to any number of input arguments. This means that a mapped task can depend on both mapped and reduced upstream tasks seamlessly.
TIP
Prefect also provides a `mapped()` annotation that can be used to indicate that an input should be mapped over when binding inputs without calling `.map()`.
# Complex mapped pipelines
Sometimes you want to encode a more complex structure in your mapped pipelines - for example, adding conditional tasks using `prefect.case`. This can be done using `prefect.apply_map`, which takes a function that adds multiple scalar tasks to a flow and converts those tasks to run as parallel mapped pipelines.
For example, here we create a function that encodes in Prefect tasks the following logic:
- If `x` is even, increment it
- If `x` is odd, negate it
Note that `inc_or_negate` is not a task itself - it's a function that creates several tasks. Just as we can map single tasks like `inc` using `inc.map`, we can map functions that create multiple tasks using `apply_map`.
```python
from prefect import Flow, task, case, apply_map
from prefect.tasks.control_flow import merge

@task
def inc(x):
    return x + 1

@task
def negate(x):
    return -x

@task
def is_even(x):
    return x % 2 == 0

def inc_or_negate(x):
    cond = is_even(x)
    # If x is even, increment it
    with case(cond, True):
        res1 = inc(x)
    # If x is odd, negate it
    with case(cond, False):
        res2 = negate(x)
    return merge(res1, res2)

with Flow("apply-map example") as flow:
    result = apply_map(inc_or_negate, range(4))
```
Running the above flow we get four parallel, conditional mapped pipelines. The computed value of `result` is `[1, -1, 3, -3]`.
Just as with `task.map`, arguments to `apply_map` can be wrapped with `unmapped`, allowing certain arguments to avoid being mapped. While not always necessary, `apply_map` can be quite useful when you want to create complex mapped pipelines, especially when using conditional logic within them.
# State behavior with mapped tasks
Whenever a mapped task is reduced by a downstream task, Prefect treats its children as the inputs to that task. This means, among other things, that trigger functions will be applied to all of the mapped children, not the mapped parent.
If a reducing task has an `all_successful` trigger, but one of the mapped children failed, then the reducing task's trigger will fail. This is the same behavior as if the mapped children had been created manually and passed to the reducing task. Similar behavior will take place for skips.
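As a plain-Python analogy (a sketch, not Prefect's implementation): the trigger examines each child state individually, so a single failed child is enough to fail an `all_successful` trigger on the reducing task.

```python
# Hypothetical child states for a mapped task: two succeeded, one failed.
child_states = ["Success", "Success", "Failed"]

# An all_successful trigger on the reducing task checks every child,
# not the mapped parent as a single unit.
all_successful = all(s == "Success" for s in child_states)

assert not all_successful  # the reducing task's trigger fails
```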
https://docs.prefect.io/core/concepts/mapping.html
Video Writer
Minimum Origin Version Required: Origin 9 SR0
Origin allows the user to create a video from a collection of graphs. Origin C provides access to this capability through the Video Writer: you can define the video codec for compression (please refer to the FourCC Table for more details), create a video writer project specifying the video name, path, speed and dimensions, and write graph pages as frames.
Note: To use the video writer, you must include its header file:
#include <..\OriginLab\VideoWriter.h>
The following example writes each graph in the project into the video as a frame; the video is uncompressed, with a speed of 2 frames/second and dimensions of 800 px * 600 px.
// Use the raw format without compression.
int codec = CV_FOURCC(0,0,0,0);

// Create a VideoWriter object.
VideoWriter vw;
int err = vw.Create("D:\\example.avi", codec, 2, 800, 600);
if (0 == err)
{
    foreach(GraphPage grPg in Project.GraphPages)
        // Write each graph page into the video.
        err = vw.WriteFrame(grPg);
}

// Release the video object when finished.
vw.Release();
return err;
The following example shows how to individually write a graph page into video and define the number of frames of this graph page in the video.
GraphPage gp("Graph1");
// The defined graph page will last for 10 frames.
int nNumFrames = 10;
vw.WriteFrame(gp, nNumFrames);
http://cloud.originlab.com/doc/OriginC/guide/Exporting-Videos
OS: Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
I observed subtle differences when summing a sparse matrix across different runs.
This test reproduces the issue (fails around 50% of the time it's run)
#include <Eigen/Sparse>
#include <gtest/gtest.h>
#include <stdlib.h>
#include <time.h>

TEST(Sparse, Reduction)
{
    srand(time(NULL));
    int nrows = 11300;
    int ncols = 600;
    int num_non_zeros = 100;
    std::vector<Eigen::Triplet<float, int> > triplets;
    for (int i = 0; i < num_non_zeros; i++) {
        int row = rand() % nrows;
        int col = rand() % ncols;
        float value = static_cast<float>(rand()) / static_cast<float>(RAND_MAX);
        triplets.push_back(Eigen::Triplet<float, int>(row, col, value));
    }
    Eigen::SparseMatrix<float, 0, int> mat = Eigen::SparseMatrix<float, 0, int>(nrows, ncols);
    mat.reserve(num_non_zeros);
    mat.setFromTriplets(triplets.begin(), triplets.end());
    int num_trials = 10000;
    for (int tr = 0; tr < num_trials; tr++) {
        Eigen::SparseMatrix<float, 0, int> mat2 = Eigen::SparseMatrix<float, 0, int>(nrows, ncols);
        mat2.reserve(num_non_zeros);
        mat2.setFromTriplets(triplets.begin(), triplets.end());
        EXPECT_TRUE(mat.sum() == mat2.sum());
    }
}
Can't reproduce your error -- I copied this locally into the sparse_basic unit test, replacing the `EXPECT_TRUE` by a `VERIFY_IS_EQUAL` and tried a few different clang and gcc versions. And I would actually be a bit surprised to see an error here.
Please tell:
* What compiler are you using?
* What compilation options? (e.g., any non-associative math optimizations could lead to different sums inside and outside the loop)
* With which seeds does the test fail? (A few examples would suffice)
* Are you on 3.3 head, or 3.3.7?
Compiler: g++ (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Options:-msse3 -mavx -fopenmp -march=native -funroll-loops -mfpmath=sse -fno-guess-branch-probability
Eigen Version: 3.3.7
I don't think the seeds are relevant. It fails more or less half of the times I run the test.
The culprit seems to be "-mavx"
What happens is that the sum-reduction does as many aligned memory-accesses as possible, so if the coefficients are aligned differently, slightly different sums are calculated (due to non-associativity).
You get the same behavior when summing non-aligned dense vectors.
I'd say this is a WONTFIX, since expecting exact results with floating-point math does not really make sense.
An alternative would be to let redux-operations always start at the beginning (if `EIGEN_UNALIGNED_VECTORIZE` is enabled).
Actually, another alternative would be to make SparseMatrix use an aligned allocator (perhaps optionally, depending on the Options), but that would introduce ABI incompatibilities.
And of course, disabling vectorization would be an option (which would cost performance, of course).
I haven't managed to reproduce the bug using dense matrices, nor have I noticed this non-deterministic behavior with the rest of the dense vectorized operations in the project. I guess for the time being we'll drop the use of the sparse module, as we need consistent results across runs.
This is a simple example with dense vectors which occasionally fails:
#include <Eigen/Core>
#include <cstdlib> // srand, rand
#include <ctime>   // time
#include <iostream>

int main() {
    srand(time(0));
    Eigen::VectorXf v0 = Eigen::VectorXf::Random(99), v1(100);
    v1.tail(99) = v0;
    std::cout << "Diff: " << v0.sum() - v1.tail(99).sum() << "\n";
}
Your test should be fine without AVX (on 64-bit systems), since memory will be 16-byte aligned automatically.
With some effort, it should actually be possible to get deterministic behavior, even with aligned loads (assuming the reduction is commutative, and there is a neutral element).
Something like:
// choose k, so that data + k is aligned
Packet sum = {-0.0, ..., data[0], ..., data[k-1]};
Index i;
for (i = k; i <= n - PacketSize; i += PacketSize)
    sum = padd(sum, pload<Aligned>(data + i));
Packet lastPacket = {data[i], ..., data[n-1], -0.0, ..., -0.0};
sum = padd(sum, lastPacket);
// Now content of sum will be the same, except for rotation, regardless of k
// predux must always reduce upper half + lower half, in remaining sub-vector
return predux(sum);
`Packet` should of course be two (or four) vectors to compensate for latency.
Generating the first and last Packet could cause some overhead, which may not really be worth it, though.
Changing this to DECISIONNEEDED.
Maybe, first benchmark if always using unaligned loads makes a difference (on modern hardware). If not, use that (at least if EIGEN_UNALIGNED_VECTORIZE is enabled).
So indeed dropping AVX seems to fix the issue. We'll see if we can go without it and we'll consider the alternatives otherwise. Thanks for the effort!
-- GitLab Migration Automatic Message --
This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.
You can subscribe and participate further through the new bug on our GitLab instance.
https://eigen.tuxfamily.org/bz/show_bug.cgi?id=1728
'and' and 'or' with strings
take for instance when I try to do this
#include <iostream>
#include <string>
#include <string>
#include <iostream>

int main()
{
    std::cout << "Pick 1 or 2: ";
    int choice;
    std::cin >> choice;
    std::cout << "Hello, you picked " << choice << '\n';
    return 0;
}
Hi Alex,
Thank you so much for posting this tutorial online, it is amazing and I appreciate your time effort to help coding noobs like me.
My question is in the statement: std::cin.ignore(32767, '\n');
what does the "32767" refer to?
Thanks, Shane
Hi, Shane M
32767 means the maximum number of characters to be discarded from the 'cin' stream until the delimiter character ('\n') is found.

'<iostream>' and '<string>' are missing in your reply to Mr. Ank above.
http://www.learncpp.com/cpp-tutorial/4-4b-an-introduction-to-stdstring/comment-page-1/
Canonical Voices: Quickly: Rebooted2012-11-16T09:00:48ZMichael Hallnospam@nospam.com<p><a href=""><img class="alignleft size-full wp-image-1208" title="quickly-logo" src="" alt="" width="192" height="192" /></a>Shortly after the <a href="" target="_blank">Ubuntu App Showdown</a> earlier this year, Didier Roche and Michael Terry kicked off a <a href="" target="_blank">series of discussions</a> about a ground-up re-write of Quickly. Not only would this fix many of the complications app developers experienced during the Showdown competition, but it would also make it easier to <a title="Quickly Gtk update" href="" target="_blank">write tools</a> around Quickly itself.</p> <p>Unfortunately, neither Didier nor Michael were going to have much time this cycle to work on the reboot. We had a <a href="" target="_blank">UDS session</a> to discuss the initiative, but we were going to need community contributions in order to get it done.<span id="more-1363"></span></p> <h2>JFDI</h2> <p>I was very excited about the prospects of a Quickly reboot, but knowing that the current maintainers weren’t going to have time to work on it was a bit of a concern. So much so, that during my 9+ hour flight from Orlando to Copenhagen, I decided to <a href="" target="_blank">have a go at it</a> myself. Between the flight, a layover in Frankfurt without wifi, and a few late nights in the Bella Sky hotel, I had the start of something <a href="" target="_blank">promising enough to present</a> during the UDS session. I was pleased that both Didier and Michael liked my approach, and gave me some very <a href="" target="_blank">good feedback</a> on where to take it next. Add another 9+ hour flight home, and I had a foundation on which a reboot can begin.</p> <h2>Where is stands now</h2> <p>My code branch is now a part of the <a href="" target="_blank">Quickly project</a> on Launchpad, you can grab a copy of it by running <em>bzr branch lp:quickly/reboot</em>. 
The code currently provides some basic command-line functionality (including shell completion), as well as base classes for Templates, Projects and Commands. I’ve begun porting the <a href="" target="_blank"><em>ubuntu-application</em></a> template, reusing the current project_root files, but built on the new foundation. Currently only the ‘create’ and ‘run’ commands have been converted to the new object-oriented command class.</p> <p>I also have examples showing how this new approach will allow template authors to easily sub-class Templates and Commands, by starting both a port of the <a href="" target="_blank"><em>ubuntu-cli</em></a> template, and also creating an <a href="" target="_blank"><em>ubuntu-git-application</em></a> template that uses git instead of bzr.</p> <h2>What comes next</h2> <p>This is only the very beginning of the reboot process, and there is still a massive amount of work to be done. For starters, the whole thing needs to be converted from Python 2 to Python 3, which should be relatively easy except for one area that does some import trickery (to keep Templates as python modules, without having to install them to PYTHON_PATH). The Command class also needs to gain argument parameters, so they can be easily introspected to see what arguments they can take on the command line. And the whole thing needs to gain a structured meta-data output mechanism so that non-Python application can still query it for information about available templates, a project’s commands and their arguments.</p> <h2>Where you come in</h2> <p>As I said at the beginning of the post, this reboot can only succeed if it has community contributions. The groundwork has been laid, but there’s a lot more work to be done than I can do myself. Our 13.04 goal is to have all of the existing functionality and templates (with the exception of the Flash template) ported to the reboot. 
I can use help with the inner-working of Quickly core, but I absolutely <strong>need</strong> help porting the existing templates.</p> <p>The new Template and Command classes make this much easier (in my opinion, anyway), so it will mostly be a matter of copy/paste/tweak from the old commands to the new ones. In many cases, it will make sense to sub-class and re-use parts of one Template or Command in another, further reducing the amount of work.</p> <h2>Getting started</h2> <p>If you are interested in helping with this effort, or if you simply want to take the current work for a spin, the first thing you should do is grab the code (<em>bzr branch lp:quickly/reboot</em>). You can call the quickly binary by running <em>./bin/quickly</em> from within the project’s root.</p> <p>Some things you can try are:</p> <blockquote><p>./bin/quickly create ubuntu-application /tmp/foo</p></blockquote> <p>This will create a new python-gtk project called ‘foo’ in /tmp/foo. You can then call:</p> <blockquote><p>./bin/quickly -p /tmp/foo run</p></blockquote> <p>This will run the applicaiton. Note that you can use -p /path/to/project to make the command run against a specific project, without having to actually be in that directory. If you are in that directory, you won’t need to use -p (but you will need to give the full path to the new quickly binary).</p> <p>If you are interested in the templates, they are in ./data/templates/, each folder name corresponds to a template name. The code will look for a class called Template in the base python namespace for the template (in ubuntu-application/__init__.py for example), which must be a subclass of the BaseTemplate class. You don’t have to define the class there, but you do need to import it there. Commands are added to the Template class definition, they can take arguments at the time you define them (see code for examples), and their .run() method will be called when invoked from the command line. 
Unlike Templates, Commands can be defined anywhere, with any name, as long as they subclass BaseCommand and are attached to a template.</p>Michael Hall: App Developer Q&A2012-08-01T17:26:10ZMichael Hallnospam@nospam.com<p>You can watch the App Developer Q&A live stream starting at 1700 UTC (or watch the recording of it afterwards):</p> <p></p> <p>Questions should be asked in the <a href="" target="_blank">#ubuntu-on-air</a> IRC channel on freenode.</p> <p></p> <p>You can ask me anything about app development on Ubuntu, getting things into the Software Center, or the recent Ubuntu App Showdown competition.<> (6pm London, 1pm US Eastern, 10am US Pacific). Because it will be an On-Air hangout, I won’t have a link until I start the session, but I will post it here on my blog before it starts. For IRC, I plan on using: Quickly Gtk update2012-07-30T16:52:32ZMichael Hallnospam@nospam.com<p>As part of the <a href="" target="_blank">Ubuntu App Showdown</a> I started on a small project to provide a nice <a title="My App Developer Showdown Entry" href="" target="_blank">GUI frontend to Quickly</a>..</p> <p></p> <p><span id="more-1230"></span></p> <h2>Project Management</h2> <p <a href="" target="_blank">Observer design pattern</a>.</p> <h2>Zeitgeist event monitoring</h2> <p>The other big development was integrating Quickly-Gtk with <a href="" target="_blank">Zeitgeist</a>..</p> <h2>The future of Quickly-Gtk</h2> <p>While I was able to get a lot done with Quickly-Gtk, the underlying Quickly API and command line really weren’t designed to support this kind of use. However, as a result of what we learned during the App Showdown, <a href="" target="_blank">Didier Roche</a> has begun planning a <a href="" target="_blank">reboot of Quickly</a>, which will improve both it’s command-line functionality, and it’s ability to be used as a callable library for apps like Quickly-Gtk. 
If you are interested in the direction of Quickly’s development, I urge you to join in those planning meetings.</p> <p> </p> <p>Launchpad Project: <a href=""></a></p> <p> </p>Michael Hall: My App Developer Showdown Entry2012-06-22T21:39:20ZMichael Hallnospam@nospam.com<p>As you’ve<a href="" target="_blank"> probably heard</a> already, Ubuntu is running an <a href="" target="_blank">App Developer Showdown</a> competition where contestants have three weeks to build an Ubuntu app from scratch. The rules are simple: It has to be new code, it has to run on Ubuntu, and it has to be submitted to the Software Center. The more you use Ubuntu’s <a href="" target="_blank">tools</a>, the better your chances of winning will be. This week we ran a series of <a href="" target="_blank">workshops</a> introducing these tools and how they can be used. It all seemed like so much fun, that I’ve decided to participate with my own submission!<span id="more-1199"></span></p> <p>Now 2 our of the 6 judges for this competition are my immediate co-workers, so let me just start off by saying that <strong>I will not be eligible</strong> for any of the prizes. But it’s still a fun and interesting challenge, so I’m going to participate anyway. But what is my entry going to be? Well in my typical fashion of building <a title="Charming Django with Naguine" href="" target="_blank">tools</a> for <a title="Simplified Unity Lens Development with Singlet" href="" target="_blank">tools</a>, I’ve decided to write a GUI wrapper on to of <a href="" target="_blank">Quickly</a>, using Quickly.</p> <p><a href=""><img class=" wp-image-1201 alignright" title="mockup_create" src="" alt="" width="396" height="306" /></a>Before I started on any code, I first wanted to brainstorm some ideas about the interface itself. 
For that I went back to my favorite mockup tool: <a title="Pencil for easy UI mockups" href="" target="_blank">Pencil</a>..</p> <p><a href=""><img class="alignleft size-medium wp-image-1203" title="Screenshot from 2012-06-22 15:09:59" src="" alt="" width="300" height="163" /></a>Now, I’ve never been a fan of GUI builders. Even back when I was writing Java/Swing apps, and GUI builders were all the rage, I never used them. I didn’t use one for <a title="Hello Unity" href="" target="_blank">Hello Unity</a>,.</p> <p><a href=""><img class="alignright size-medium wp-image-1206" title="Screenshot from 2012-06-22 16:11:00" src="" alt="" width="300" height="197" /></a.</p> <p.</p> <p><a href=""><img class="alignnone size-large wp-image-1207" title="Screenshot from 2012-06-22 16:11:52" src="" alt="" width="640" height="141" /></a></p> <p>And thanks to the developer tools available in Ubuntu, I was able to accomplish all of this in only a few hours of work.</p> <p <a href="" target="_blank">package in my PPA</a>.</p> <p>Building an app in 4 hours then accidentally building a proper package and uploading it to a PPA, who’d have thought we’d ever make it that easy? I hope you all are having as much fun and success in your showdown applications as I am.<: Goodbye And Thanks For All the Apps: Ubuntu App Developer Week – Day 5 And Wrap-Up2011-09-13T16:45:58ZDavidnospam@nospam.com<p><img class="aligncenter size-full wp-image-1308" title="Ubuntu App Developer Week" src="" alt="" /></p> <p>Another edition of the Ubuntu App Developer Week and another amazing knowledge sharing fest around everything related to application development in Ubuntu. Brought to you by a range of the best experts in the field, here’s just a sample of the topics they talked about: <em</em>… and more. 
Oh my!</p> <p>And a pick of what they had to say:</p> <blockquote><p>We believe that to get Ubuntu from 20 million to 200 million users, we need more and better apps on Ubuntu<br /> <a href="">Jonathan Lange</a> on making Ubuntu a target for app developers</p></blockquote> <blockquote><p>Bazaar is the world’s finest revision control system<br /> <a href="">Jonathan Riddell</a> on Bazaar</p></blockquote> <blockquote><p>So you’ve got your stuff, wherever you are, whichever device you’re on<br /> <a href="">Stuart Langridge</a> on Ubuntu One</p></blockquote> <blockquote><p>Oneiric’s EOG and Evince will be gesture-enabled out of the box<br /> <a href="">Jussi Pakkanen</a> on multitouch in Ubuntu 11.10</p></blockquote> <blockquote><p>I control the upper right corner of your screen <img src="" alt=";-)" class="wp-smiley" /><br /> <a href="">Ted Gould</a> on Indicators</p></blockquote> <p>If you happened to miss any of the sessions, you’ll find the logs for all of them on the <a href="">Ubuntu App Developer Week page</a>, and the summaries for each day on the links below:</p> <ul> <li><a href="">Day 1 Summary</a></li> <li><a href="">Day 2 Summary</a></li> <li><a href="">Day 3 Summary</a></li> <li><a href="">Day 4 Summary</a></li> <li>Day 5 Summary (this post)</li> </ul> <h2>Ubuntu App Developer Week – Day 5 Summary</h2> <p>The last day came with a surprise: an extra session for all of those who wanted to know more about Qt Quick and QML. Here are the summaries:</p> <h3>Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip</h3> <p><em>By <a title="LaunchpadHome" href="">Jussi Pakkanen</a></em></p> <p><img class="alignleft" title="Jussi Pakkanen" src="" alt="" width="64" height="64" />In his session, Jussi talked about one of the most interesting technologies where Ubuntu is leading the way in the open source world: multitouch. 
Walking the audience through the <a href="">Grip Tutorial</a>,.</p> <p>Check out the <a href="">session log</a>.<em> </em></p> <h3>Creating a Google Docs Lens</h3> <p><em>By <a title="LaunchpadHome" href="">Neil Patel</a></em></p> <p><img class="alignleft" title="Neil Patel" src="" alt="" width="64" height="64" /.</p> <p>Check out the <a href="">session log</a>.<em> </em></p> <h3>Practical Ubuntu One Files Integration</h3> <p><em>By <a title="LaunchpadHome" href="">Michael Terry</a><br /> </em></p> <p><a href=""><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" /></a!</p> <p>Check out the <a href="">session log</a> and Michael’s <a href="">awesome notes</a>.</p> <h3>Publishing Your Apps in the Software Center: The Business Side</h3> <p><em>By <a title="LaunchpadHome" href="">John Pugh</a></em></p> <p><a href=""><img class="alignleft" title="John Pugh" src="" alt="" width="64" height="64" /><.</p> <p>Check out the <a href="">session log</a>.</p> <h3>Writing an App with Go</h3> <p><em>By <a title="LaunchpadHome" href="">Gustavo Niemeyer</a></em></p> <p><a href=""><img class="alignleft" title="Gustavo Niemeyer" src="" alt="" width="64" height="64" /></a>Gustavo’s enthusiasm for <a href="">Go<.</p> <p>Check out the <a href="">session log</a>.</p> <h3>Qt Quick At A Pace</h3> <p><em>By <a title="LaunchpadHome" href="">Donald Carr</a></em></p> <p><a href=""><img class="alignleft" title="Donald Carr" src="" alt="" width="64" height="64" /></a <a href="">qtmediahub</a> and <a href="">Qt tutorial examples</a>, he explored QML’s capabilities and offered good practices for succesfully developing QML-based projects.</p> <p>Check out the <a href="">session log</a>.</p> <h2>Wrapping Up</h2> <p>Finally, if you’ve got any feedback on UADW, on how to make it better, things you enjoyed or things you believe should be improved, your comments will be very appreciated and useful to tailor this event to your needs.</p> <p>Thanks a lot for participating. 
I hope you enjoyed it as much as I did, and see you again in 6 months’ time for another week full of app development goodness!<br /> All Good Things Come To An End: Ubuntu App Developer Week – Day 4 2011-09-09T19:44:24Z David nospam@nospam.com<h2>Ubuntu App Developer Week – Day 4 Summary</h2> <p>Last day of UADW! While we’re watching the final sessions, here’s what happened yesterday:</p> <h3>Creating an App Developer Website: developer.ubuntu.com</h3> <p><em>By <a title="LaunchpadHome" href="">John Oxton</a> and <a title="LaunchpadHome" href="">David Planella</a></em></p> <p><img class="alignleft" title="John Oxton" src="" alt="" width="64" height="64" /><img class="alignleft" title="David Planella" src="" alt="" width="64" height="64" />.</p> <p>Check out the session log <a href="">here</a>.<em> </em></p> <h3>Rapid App Development with Quickly</h3> <p><em>By <a title="LaunchpadHome" href="">Michael Terry</a></em></p> <p><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" />.</p> <p>Check out the session log <a href="">here</a>.<em> </em></p> <h3>Developing with Freeform Design Surfaces: GooCanvas and PyGame</h3> <p><em>By <a title="LaunchpadHome" href="">Rick Spencer</a><br /> </em></p> <p><a href=""><img class="alignleft" title="Rick Spencer" src="" alt="" width="64" height="64" /></a>.</p> <p>Check out the session log <a href="">here</a>.</p> <h3>Making your app appear in the Indicators</h3> <p><em>By <a title="LaunchpadHome" href="">Ted Gould</a></em></p> <p><a href=""><img class="alignleft" title="Ted Gould" src="" alt="" width="64" height="64" /></a>!</p> <p>Check out the session log <a href="">here</a>.</p> <h3>Will it Blend? 
Python Libraries for Desktop Integration</h3> <p><em>By <a title="LaunchpadHome" href="">Marcelo Hashimoto</a></em></p> <p><a href=""><img class="alignleft" title="person-logo" src="" alt="" width="64" height="64" /></a>Marcelo shared his experience acquired with <a href="">Polly</a>.</p> <p>Check out the session log <a href="">here</a>.</p> <h2>The Day Ahead: Upcoming Sessions for Day 4</h2> <p>Check out the first-class lineup for the last day of UADW:</p> <p><a href="">16:00 UTC</a> – <strong>Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip </strong><strong><em></em></strong></p> <p><img class="alignleft size-full wp-image-1294" title="Jussi Pakkanen" src="" alt="" /> Multitouch is everywhere these days, and now on your desktop as well – brought to you by developers such as <a title="LaunchpadHome" href="">Jussi Pakkanen</a>, who’ll guide you through using libgrip to add touch support to your GTK+ apps. Learn how to use this cool new library in your own software!</p> <p><a href="">17:00 UTC</a> – <strong>Creating a Google Docs Lens<em></em></strong></p> <p><img class="alignleft size-full wp-image-1290" title="Neil Patel" src="" alt="" />Lenses are ways of presenting data coming from different sources in Unity. <a title="LaunchpadHome" href="">Neil Patel</a> knows Lenses inside out and will present a practical example of how to create a Google Docs one. Don’t miss this session on how to put two cool technologies together!</p> <p><a href="">18:00 UTC</a><strong> – <em></em>Practical Ubuntu One Files Integration</strong></p> <p><a href=""><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" /></a>Yet again the Deja-dup rockstar and UADW regular <a title="LaunchpadHome" href="">Michael Terry</a> will be sharing his deep knowledge on developing apps. 
This time it’s about adding cloud support to applications: integrating with the Ubuntu One files API.</p> <p><a href="">19:00 UTC</a> – <strong><em></em>Publishing Your Apps in the Software Center: The Business Side</strong></p> <p><a href=""><img class="size-full wp-image-1291 alignleft" title="John Pugh" src="" alt="" /></a>Closing the series of sessions around publishing apps in the Software Centre, we’ll have the luxury of having <a title="LaunchpadHome" href="">John Pugh</a>, from the team that brings you commercial apps into the Software Centre and who’ll be talking about the business side of things.</p> <p><a href="">20:00 UTC</a><strong><em></em> – Writing an App with Go</strong></p> <p><a href=""><img class="size-full wp-image-1292 alignleft" title="Gustavo Niemeyer" src="" alt="" /></a>Go is the coolest kid around in the world of programming languages. <a title="LaunchpadHome" href="">Gustavo Niemeyer</a> is very excited about it and will be showing you how to write an app using this language from Google. Be warned, his enthusiasm is contagious!<a title="LaunchpadHome" href=""><br /> </a></p> <p><a href="">20:00 UTC</a><strong><em></em> – Qt Quick At A Pace</strong></p> <p><a href=""><img class="size-full wp-image-1293 alignleft" title="Donald Carr" src="" alt="" /></a>A last minute and very welcome addition to the schedule. In his session <a title="LaunchpadHome" href="">Donald Carr </a>will introduce you to Qt Quick to create applications with Qt Creator and QML, the new declarative language that brings together designers and developers.<: Cha-ching!2011-08-17T12:36:53ZRick Spencernoreply@blogger.com<a href=""> <br /></a> <br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5641894786340952802" border="0" /></a>So, today I uploaded a special version of Photobomb to <a href="">my PPA</a>. It's special because I consider this my first *complete* release. There are some things that I would like to be better in it. 
For example: <br /><ul><li>I wish you could drag into it from the desktop or other apps.</li><li>I wish that the app didn't block the UI when you click on the Gwibber tab when Gwibber isn't working.</li><li>I wish the local files view had a watch on the current directory so that it refreshed automatically.</li><li>I wish inking was smoother.</li><li>I wish you could choose the size of the image that you are making.</li><li>I wish that you could multi-select in it.</li></ul>But alas, if I wait until it has everything and no bugs, I'll never release. <br /> <br />So, I am releasing Photobomb in my PPA. It is a free app. You can use it for free, and it's Free software. So, enjoy. <br /> <br />The code is GPL v3, so people can enhance it, or generally do whatever they think is useful for them (including giving it away, or using it to make money). <br /> <br />I found it remarkably easy to submit Photobomb to the Software Center. I just used the <a href="">myapps.ubuntu portal</a>, and it all went very smoothly. Really just a matter of filling in some forms. Of course, since I used Quickly to build Photobomb, Quickly took care of the packaging for me, so that simplified it loads. <br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5641895137899035186" border="0" /></a> <br />I'll keep everyone posted on how it goes! 
<br />David Planella: Ubuntu App Developer Week – Day 42011-04-15T19:12:00ZDavidnospam@nospam.com<h2>Ubuntu App Developer Week – Day 4 Summary</h2> <p>Ramping:</p> <h3>Qt Quick: Elements/Animations/States</h3> <p><em>By Jürgen Bocklage-Ryannel</em></p> <p.</p> <p><em><em>Check out the session log <a href="">here</a>.</em></em></p> <h3>Qt Quick: Rapid Prototyping</h3> <p>By Jürgen Bocklage-Ryannel</p> <p.<br /> <strong></strong></p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Rapid App Development with Quickly</h3> <p>By <a title="LaunchpadHome" href="">Michael Terry</a></p> <p ‘submitubuntu’ command to help getting applications into the Software Center. All that being set straight, he then showed how to use Quickly and what it can do: from creating the first example application, to modifying the UI with ‘quickly design’ and Glade, into debugging and finally packaging.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Getting Your App in the Distro: the Application Review Process</h3> <p>By <a title="LaunchpadHome" href="">Allison Randal</a></p> <p.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Adding Indicator Support to your Apps</h3> <p>By <a title="LaunchpadHome" href="">Ted Gould</a></p> .</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Using Launchpad to get your application translated -</h3> <p>By <a title="LaunchpadHome" href="">Henning Eggers</a></p> <p>As a follow up to the talk on how to <a href="">add native language support to your applications</a>.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h2>The Day Ahead: Upcoming Sessions for Day 5</h2> <p>The last day and the quality and variety of the sessions is still going strong. 
Check out the great content we’ve prepared for you today:</p> <p><a href="">16:00 UTC</a><br /> <strong>Qt Quick: Extend with C++</strong> – Jürgen Bocklage-Ryannel<br /> Sometimes you would like to extend Qt Quick with your own native extension. Jürgen will show you some ways how to do it.</p> <p><a href="">17:00 UTC</a><br /> <strong>Phonon: Multimedia in Qt -</strong> <a title="LaunchpadHome" href="">Harald Sitter</a><br />.</p> <p><a href="">18:00 UTC</a><br /> <strong>Integrating music applications with the Sound Menu -</strong> <a title="LaunchpadHome" href="">Conor Curran</a><br />.</p> <p><a href="">19:00 UTC</a><br /> <strong>pkgme: Automating The Packaging Of Your Project -</strong> <a title="LaunchpadHome" href="">James Westby</a><br />.</p> <p><a href="">20:00 UTC</a><br /> <strong>Unity Technical Q&A -</strong> <a title="LaunchpadHome" href="">Jason Smith</a> and <a title="LaunchpadHome" href="">Jorge Castro</a><br />.</p> <p><a href="">21:00 UTC</a><br /> <strong>Lightning Talks -</strong> <a title="LaunchpadHome" href="">Nigel Babu</a><br /> As the final treat to close the week, Nigel has organized a series of lightning talks to showcase a medley of cool applications: <em>CLI Companio</em>n, <em><a href="">Unity Book Lens</a></em>, <em>Bikeshed</em>, <em>circleoffriends</em>, <em><a href="">Algorithm School</a></em>, <em><a href="">Sunflower FM</a></em>, <em><a href="">Tomahawk Player</a></em>, <em>Classbot</em> – your app could be in this list next time, do check them out!<: Quickly Tutorial for Natty: DIY Media Player2011-02-04T14:57:49ZRick Spencernoreply@blogger.com<a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560348583979266722" border="0" /></a><span></span>I started working on a chapter for the <a href="">Ubuntu Developers' Manual</a>. The chapter will be on how to use media in your apps. 
That chapter will cover:<br /><ul><li>Playing a system sound</li><li>Showing a picture</li><li>Playing a sound file</li><li>Playing a video</li><li>Playing from a web cam</li><li>Composing media</li></ul>I created an app for demonstrating some of these things in that chapter. After I wrote the app, I realized that it shows a lot of different parts of app writing for Ubuntu:<br /><ul><li>Using Quickly to get it all started</li><li>Using Glade to get the UI laid out</li><li>Using quickly.prompts.choose_directory() to prompt the user</li><li>Using os.walk for iterating through a directory</li><li>Using a dictionary<br /></li><li>Using DictionaryGrid to display a list</li><li>Using MediaPlayerBox to play videos or sounds</li><li>Using GooCanvas to compose a single image out of images and text</li><li>Using some PyGtk trickery to push some UI around</li></ul>A pretty decent amount of overlap with the chapter, but not a subset or superset. So I am writing a fuller tutorial to post here, and then I can pull out the media-specific parts for the chapter later. Certain things will change as we progress with Natty, so I will make edits to this posting as those occur. So without further ado ...<br /><br /><span><span>Simple Player Tutorial</span></span><br /><span>Introduction</span><br />In this tutorial you will build a simple media player. It will introduce how to start projects, edit UI, and write the code necessary to play videos and songs in Ubuntu.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332293910206642" border="0" /></a><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332295518561490" border="0" /></a>The app works by letting the user choose a directory. Simple Player then puts all the media into a list. The user can choose media to play from that list.<br /><br />This tutorial uses Quickly, which is an easy and fun way to manage application creation, editing, packaging, and distribution using simple commands from the terminal. 
Don't worry if you are used to using an IDE for writing applications; Quickly is super easy to use.<br /><br /><span>Requirements</span><br /><br /><br />You also need Quickly. To install Quickly:<br /><br />$sudo apt-get install quickly python-quickly.widgets<br /><br />This tutorial also uses a yet to be merged branch of Quickly Widgets. In a few weeks, you can just install quickly-widgets, but for now, you'll need to get the branch:<br /><br />$bzr branch lp:~rick-rickspencer3/quidgets/natty-trunk<br /><br />Note that these are alpha versions, so there may be bugs.<br /><span><br />Caution About Copy and Pasting Code</span><br /><br /><br />If you're going to copy and paste, you might want to use the code for the tutorial project in Launchpad, from this link:<br /><a href="">Link to Code File in the Launchpad Project</a><br /><br />You can also look at the tutorial in text format here:<br /><a href="">Link to this tutorial in text form in Launchpad</a><br /><br /><span><span>Creating the Application</span></span><br />You get started by creating a Quickly project using the ubuntu-application template. Run this command in the terminal:<br />$quickly create ubuntu-application simple-player<br /><br />This will create and run your application for you.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332302342171954" border="0" /></a><br />Notice that the application knows it is called Simple Player, and the menus and everything work.<br /><br />To edit and run the application, you need to use the terminal from within the simple-player directory that was created. So, change into that directory for running commands:<br /><br />$cd simple-player<br /><br /><span><span>Edit the User Interface</span></span><br />We'll start by editing the user interface with the Glade UI editor. We'll be adding a lot of things to the UI from code, so we can't build it all in Glade. But we can do some key things. 
We can:<br /><ul><li>Layout the HPaned that separates the list from the media playing area</li><li>Set up the toolbar</li></ul><span>Get Started</span><br />To run Glade with a Quickly project, you have to use this command from within your project's directory:<br />$quickly design<br /><br />If you just try to run Glade directly, it won't work with your project.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332304247403250" border="0" /></a>Now that Glade is open, we'll start out by deleting some of the stuff that Quickly put in there automatically. Delete items by selecting them and hitting the delete key. So, delete:<br /><ul><li>label1</li><li>image1</li><li>label2</li></ul>This will leave you with a nice blank slate for your app:<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332310174272674" border="0" /></a>Now, we want to make sure the window doesn't open too small when the app runs. Scroll to the top of the TreeView in the upper right of Glade, and select simple_player_window. Then in the editor below, click the common tab, and set the Width Request and Height Request.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333521250560258" border="0" /></a>There's also a small bug in the quickly-application template, but it's easy to fix. Select statusbar1, then on the packing tab, set "Pack type" to "End".<br /><br />Save your changes or they won't show up when you try running the app! Then see how your changes worked by using the command:<br />$quickly run<br /><br />A nice blank window, ready for us to party on!<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333525273762210" border="0" /></a><span>Adding in Your Widgets</span><br /.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333531070608546" border="0" /></a><br />Make sure the HPaned starts out with an appropriate division of space. 
Do this by going to the General tab, and setting an appropriate number of pixels in the Position property.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333532514187650" border="0" /></a>The user should be able to scroll through the list, so click on ScrolledWindow in the toolbar, and then click in the left hand part of the HPaned to place it in there.<br /><br />Now add a toolbar. Find the toolbar icon in the toolbox, click on it, and click in the open space at the top. This will cause that space to collapse, because the toolbar is empty by default.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333543170120722" border="0" /></a>To add the open button, click the edit button (it looks like a pencil) in Glade's toolbar. This will bring up the toolbar editing dialog. Switch to the Hierarchy tab, and click "Add". This will add a default toolbar button.<br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334474103485074" border="0" /></a><br /><br />Now if you use $quickly run again, you'll see that your toolbar button is there.<br /><br /><span><span>Coding the Media List<br /><span>Making the Open Button Work</span><br /></span></span>The open button will have an important job. It will respond to a click from the user, offer a directory chooser, and then build a list of media in that directory. So, it's time to write some code.<br /><br />You can use:<br />$quickly edit &<br /><br />This will open your code in Gedit, the default text and code editor for Ubuntu.<br /><br />Switch to the file called "simple-player". This is the file for your main window, and the file that gets run when users run your app from Ubuntu.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334477613723234" border="0" /></a>First let's make sure that the open button is hooked up to the code. 
Create a function to handle the signal that looks like this (and don't forget about proper space indenting in Python!):<br /><pre><code><br /> def openbutton_clicked_event(self, widget, data=None):<br /> print "OPEN"<br /><br /><br /></code></pre>Put this function under "finish_initializing", but above "on_preferences_changed". Save the code, run the app, and when you click the button, you should see "OPEN" printed out to the terminal.<br /><br />How did this work? Your Quickly project used the auto-signals feature to connect the button to the event. To use auto-signals, simply follow this pattern when you create a signal handler:<br /><pre><code><br />def widgetname_eventname_event(self, widget, data=None):<br /><br /></code></pre>Sometimes a signal handler will require a different signature, but (self, widget, data=None) is the most common.<br /><br /><span><span><span>Getting the Directory from the User</span></span><br /></span>We'll use a convenience function built into Quickly Widgets to get the directory info from the user. First, go to the import section of the simple-player file, and around line 11 add an import statement:<br /><br /><pre><code><br />from quickly import prompts<br /><br /></code></pre><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334488926372962" border="0" /></a>Now when you run the app you can select a directory, and it will print a full path to each file encountered. Nice start, but what the function needs to do is build a list of files that are media files and display those to the user.<br /><br /><span>Defining Media Files</span><br />This app will use a simple system of looking at file extensions to determine if files are media files. Start by specifying what file types are supported. 
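This extension check is easy to illustrate in plain, GUI-free Python. The sketch below is mine, not code from Simple Player: the is_media_file helper and the module-level constants are assumptions for illustration, though the extension lists mirror the tutorial's.

```python
import os

# Sketch of the extension-based media check described above.
# is_media_file is a hypothetical helper (not part of Simple Player);
# the format lists mirror the ones the tutorial defines next.
SUPPORTED_VIDEO_FORMATS = [".ogv", ".avi"]
SUPPORTED_AUDIO_FORMATS = [".ogg", ".mp3"]

def is_media_file(filename):
    """Return True if the file's extension marks it as supported media."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in SUPPORTED_VIDEO_FORMATS + SUPPORTED_AUDIO_FORMATS

print(is_media_file("intro.ogv"))  # True
print(is_media_file("notes.txt"))  # False
```

Comparing on a lowercased extension also catches files named like SONG.MP3.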
Add this in finish_initializing to create 2 lists of supported media:<pre><code><br />self.supported_video_formats = [".ogv",".avi"]<br />self.supported_audio_formats = [".ogg",".mp3"]<br /><br /></code></pre>GStreamer supports a lot of media types so, of course, you can add more supported types, but this is fine to start with.<br /><br />Now change the openbutton handler to only look for these file types:<br /><pre><code><br /> #make a full path to the file<br /> print os.path.join(root,f)<br /><br /></code></pre>This will now only print out files of supported formats.<br /><br /><span>Build a List of Media Files</span><br /><span>Display the List to the User</span><br />A DictionaryGrid is the easiest way to display the files, and to allow the user to click on them. So import DictionaryGrid at line 12, like this:<br /><pre><code><br />from quickly.widgets.dictionary_grid import DictionaryGrid<br /></code></pre><br /><pre><code><br /> for c in self.ui.scrolledwindow1.get_children():<br /> self.ui.scrolledwindow1.remove(c)<br /></code></pre>Then create a new DictionaryGrid. We only want one column, to view the files, so we'll set up the grid accordingly. So now the whole function looks like this:<br /><pre><code><br /> #remove any children in scrolled window<br /> for c in self.ui.scrolledwindow1.get_children():<br /> self.ui.scrolledwindow1.remove(c)<br /></code></pre>Now the list is displayed when the user picks the directory.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334490114080370" border="0" /></a><br /><span><span>Playing the Media</span></span><br /><span>Adding the MediaPlayer</span><br /><pre><code><br />from quickly.widgets.media_player_box import MediaPlayerBox<br /><br /></code></pre>Then, we'll create and show a MediaPlayerBox in the finish_initializing function. By default, a MediaPlayerBox does not show its own controls, so pass in True to set the "controls_visible" property to True. 
You can also do things like this:<br /><br /><pre><code><br />player.controls_visible = False<br />player.controls_visible = True<br /><br /></code></pre>to control the visibility of the controls.<br /><br /><pre><code><br />self.player = MediaPlayerBox(True)<br />self.player.show()<br />self.ui.hpaned1.add2(self.player)<br /><br /></code></pre><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334495717813186" border="0" /></a><span>Connecting to the DictionaryGrid Signals</span><br /><pre><code><br /> #hook up to the selection_changed event<br /> media_grid.connect("selection_changed", self.play_file)<br /></code></pre>Now create that play_file function; it should look like this:<br /><pre><code><br /> def play_file(self, widget, selected_rows, data=None):<br /> print selected_rows[-1]["uri"]<br /><br /></code></pre><br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335754216317186" border="0" /></a><span>Setting the URI and calling play()</span><br />Now that we have the URI to play, it's a simple matter to play it. We simply set the uri property of our MediaPlayerBox, and then tell it to stop playing any file it may be playing, and then to play the selected file:<br /><pre><code><br />def play_file(self, widget, selected_rows, data=None):<br /> self.player.stop()<br /> self.player.uri = selected_rows[-1]["uri"]<br /> self.player.play()<br /><br /></code></pre><br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335757346830226" border="0" /></a><br /><span>Connecting to the "end-of-file" Signal</span><br />When a media file ends, users will expect the next file to play automatically. It's easy to find out when a media file ends using the MediaPlayerBox's "end-of-file" signal. 
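Before wiring the signal up, the continuous-play behavior it enables can be sketched in plain Python. The next_row helper below is hypothetical, not part of the tutorial's code (Simple Player advances the DictionaryGrid's selection instead of walking a separate list):

```python
# Sketch of the advance-to-next-track logic behind play_next_file.
# Rows are dictionaries like the DictionaryGrid's, each with a "uri" key;
# next_row is a hypothetical helper for illustration only.
def next_row(rows, finished_uri):
    """Return the row after the one whose uri just finished, or None at the end."""
    for i, row in enumerate(rows):
        if row["uri"] == finished_uri and i + 1 < len(rows):
            return rows[i + 1]
    return None
```

The real handler receives the finished file's uri as its file_uri argument, which is all the state this kind of lookup needs.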
Back in finish_initializing, after creating the MediaPlayerBox, connect to that signal:<br /><pre><code><br />self.player.connect("end-of-file",self.play_next_file)<br /><br /></code></pre><span>Changing the Selection of the DictionaryGrid</span><br />Create the play_next_file function in order to respond when a file is done playing:<br /><pre><code><br /> def play_next_file(self, widget, file_uri):<br /> print file_uri<br /><br /></code></pre><span>Making an Audio File Screen</span><br />Notice that when playing a song instead of a video, the media player is blank, or a black box, depending on whether a video has been played before.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335761983241298" border="0" /></a>It would be nicer to show the user some kind of visualization when a song is playing. The easiest thing to do would be to create a gtk.Image object, and swap it in for the MediaPlayerBox when an audio file is playing. However, there are more powerful tools at our disposal that we can use to create a richer user experience.<br /><br /><span>Create a Goo Canvas</span><br />Naturally, you need to import the goocanvas module:<br /><pre><code><br />import goocanvas<br /><br /></code></pre>Then, in the finish_initializing function, create and show a goocanvas.Canvas:<br /><pre><code><br />self.goocanvas = goocanvas.Canvas()<br />self.goocanvas.show()<br /><br /></code></pre><br /><br /><span>Add Pictures to the GooCanvas</span><br /><pre><code><br />logo_file = helpers.get_media_file("background.png")<br />logo_file = logo_file.replace("","")<br />logo_pb = gtk.gdk.pixbuf_new_from_file(logo_file)<br /><br /><br /></code></pre><br /><pre><code><br />root_item=self.goocanvas.get_root_item()<br />goocanvas.Image(parent=root_item, pixbuf=logo_pb,x=20,y=20)<br /><br /><br /></code></pre><span>Show the GooCanvas When a Song is Playing</span><br /><pre><code><br />format = selected_rows[0]["format"]<br /><br
/></code></pre>We can also get a reference to the visual that is currently in use:<br /><pre><code><br />current_visual = self.ui.hpaned1.get_child2()<br /><br /></code></pre>Knowing those two things, we can then figure out whether to put in the goocanvas.Canvas or the MediaPlayerBox. So the whole function will look like this:<br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335768420700978" border="0" /></a><br /><span>Add another Image to Canvas</span><br />We can add the note image to the goocanvas.Canvas in the same way we added the background image. However, this time we'll play with the scale a bit:<br /><br />Remember, for this to work, you have to put a note.png file in the data/media directory for your project. If your image is a different size, you'll need to tweak the x, y, and scale as well.<br /><br />(BTW, thanks to <a href="">Daniel Fore</a> for making the artwork used here. If you haven't had the pleasure of working with Dan, he is a really great guy, as well as a talented artist and designer. He's also the leader of the #elementary project.)<br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335776680799090" border="0" /></a><span>Add Text to the goocanvas.Canvas</span><br /><pre><code><br />self.song_text = goocanvas.Text(parent=root_item,<br /></code></pre><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560337116417990178" border="0" /></a>Then, back in finish_initializing, after creating the MediaPlayerBox, remove the controls:<br /><pre><code><br />self.player = MediaPlayerBox(True)<br />self.player.remove(self.player.controls)<br /><br /></code></pre>Then, create a new openbutton:<br /><pre><code><br />open_button = gtk.ToolButton()<br /><br /></code></pre>We still want the open button to be a stock button. 
For gtk.ToolButtons, use the set_stock_id function to set the right stock item.<br /><pre><code><br />open_button.set_stock_id(gtk.STOCK_OPEN)<br /><br /></code></pre>Then show the button, and connect it to the existing signal handler.<br /><pre><code><br />open_button.show()<br />open_button.connect("clicked",self.openbutton_clicked_event)<br /><br /></code></pre><br /><pre><code><br />self.player.controls.insert(open_button, 0)<br />self.ui.hbox1.pack_start(self.player.controls, True)<br /><br /></code></pre>Now users can use the controls even when audio is playing!<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560337124831239410" border="0" /></a><span><span>Conclusion</span></span><br /><span></span>This tutorial demonstrated how to use Quickly, Quickly Widgets, and PyGtk to build a functional and dynamic media player UI, and how to use a goocanvas.Canvas to add interesting visual effects to your program.<br /><br />The next tutorial will show different ways of implementing playlists: using text files, using pickling, or using desktopcouch for storing files.<br /><br /><span><span>API Reference</span></span><br /><span>PyGtk<br /></span><ul><li><a href="">PyGtk Reference Documentation</a></li><li><a href="">PyGtk FAQ</a></li></ul><span>Quickly Widgets</span><br /><span></span>Reference documentation for Quickly Widgets isn't currently hosted anywhere. However, the code is thoroughly documented, so until the docs are hosted, you can use pydoc to view them locally. To do this, first start pydoc on a local port, such as:<br />$pydoc -p 1234<br /><br />Then you can browse the docs by opening your web browser and going to <a href="">http://localhost:1234</a>. 
Search for quickly, then browse the widgets and prompts libraries.<br /><br />Since MediaPlayerBox is not installed yet, you can look at the doc comments in the code for the modules in natty-branch/quickly/widgets/media_player_box.py.<br /><span>GooCanvas</span><br /><ul><li><a href="">Python GooCanvas Reference</a></li></ul><span>GStreamer</span><br />MediaPlayerBox uses a GStreamer playbin to deliver media playing functionality. GStreamer is super powerful, so if you want to do more with it, you can read the docs.<br /><ul><li><a href="">Playbin Documentation</a> (you can use self.player.playbin to get a reference to the playbin).</li><li><a href="">Python GStreamer Reference</a></li></ul><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Pithos of Rain2011-01-23T11:28:31ZRick Spencernoreply@blogger.com<a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5565431340535216658" border="0" /></a>During my normal Sunday morning chill out with a cup of coffee this morning, I saw a tweet from <a href="">Ken VanDine</a> go by about <a href="">Pithos, a native Pandora client for Ubuntu</a>. I have a Pandora account, and love to use it on my phone, but on Ubuntu I had to go through the Pandora web interface, so I didn't use it as much.<br /><br />I'm using it right now, and I'm chuffed. I'd love to see this app go through the ARB process so maverick users can more easily access it. And <s>I'd love to see it</s> I'm psyched to hear that it is in Universe <s>or</s> and even Debian for Natty.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: New Quickly App: Daily Journal2010-06-20T12:15:01ZRick Spencernoreply@blogger.com<br /><br />Quickly has started to unlock productivity for me in unexpected ways. I've mentioned writing my own development tools, like <a href="">bughugger</a> and <a href="">slipcover</a>. 
<a href="">PPA</a>.<br /><br />In my next posting, I'll show how I used quickly.widgets.text_editor to create Daily Journal.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Go Here to Learn to Program from MIT2010-06-14T11:28:28ZRick Spencernoreply@blogger.com<a href=""><img src="" border="0" alt="" /></a>I run into folks who want to get started programming, but they "don't know a language". If you are in this camp, I highly recommend <a href="">the online course from MIT</a>. It's designed for people with no prior programming experience, and it's Python!<div><br /></div><div>After the first few lessons, you'll know enough Python to start a Quickly app!</div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Create Your Own Games with Quickly Pygame2010-06-06T13:18:44ZRick Spencernoreply@blogger.comA couple of months ago I created a Quickly template with the goal of making it easy and fun to get started making games. The template doesn't have any "add" or "design" commands, but it does have all the other commands. The template creates a functioning arcade-style game, and then you provide your own artwork and start hacking the code to make your own gameplay.<br /><div><br /></div><div>$quickly tutorial ubuntu-pygame is the best way to get a detailed introduction to getting started making your own game, but here's some video of hacking the code. 
I hope it inspires you to try your hand at creating your own games.

Part 1: Create the game, copy in your artwork, and make the guy work the way you want
Part 2: Program the enemies
Part 3: Create a power up sprite, and manage collisions

Rick Spencer: Quickly: 90 Seconds to Your PPA (2010-06-06)
Here's a quick video showing taking a finished Quickly app, setting the desktop and setup.py files, and then uploading to a PPA using $ quickly share

Rick Spencer: PyTask, written with Quickly and Quickly Widgets (2010-06-06)
On Saturday I received an email from a developer named Ryan.

Looking at the app, it was clear that there were a few more features that DictionaryGrid needs to really rock, though:
1. It needs a DateColumn to handle the "due" column. Users would want to set this with a gtk.Calendar widget.
2. …
3. Both of these will require new GridFilter functionality. In fact, I have been waiting for a reason to refactor this part of Quickly Widgets, as the GridFilterRows are hard coded to use specific widgets, and this should be flexible.

Anyway, as it is, I am using PyTask; I hope Ryan gets it into a PPA soon. I like the simplicity. Ryan and I are currently collaborating on creating the new functionality in DictionaryGrid that PyTask needs.
Open Source FTW!

Rick Spencer: Quickly and Quickly Widget Intro Videos (2010-04-24)
You probably saw that didrocks released another update for Quickly. It's chock full of bug fixes and tweaks based on the feedback from the last release.

Part 1: Create a project and use Glade to edit the UI
Here you see that you use "quickly create ubuntu-application" instead of "quickly create ubuntu-project" to create an app. You also use "quickly design" instead of "quickly glade" to design the UI.

Part 2: Using CouchGrid
One of the key differences here is that CouchGrid is now in the quickly.widgets module instead of the desktopcouch module. The CouchGrid moved into quickly.widgets because it now extends the DictionaryGrid class. This brings a lot of benefits:
1. Automatic column type inference
2. Ability to set column types so you get the correct renderers
3. Correct sorting (for instance, 11 > 9 in an IntegerColumn, but "11" < "9" in string columns)
And of course you get all the goodness of automatic desktopcouch persistence.

Part 3: Using GridFilter
GridFilter is a new class that provides automatic filtering UI for a DictionaryGrid or descendant such as CouchGrid.

Rick Spencer: Quickly For Lucid Release, Intro Video (2010-04-15)
didrocks released quickly 0.4 yesterday. What a great contribution! I suspect his work will help tons of people have a really fun time writing Ubuntu apps.
<a href="">Read about the release in his detailed blog post</a>.<div><br /></div><br /><br />In the meantime, I made a cheesy video last night, showing some of the changes in quickly, and how the new CouchGrid and GridFilter work. <br /><br />[Note that it takes blip.tv a bit of time to render out a high def video like this, so if the video is not yet working, you can check back later.]<br /><br />[D'oh .... stupid blip.tv bailed on encoding my video. I'll try again with smaller files. Stay tuned]<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>
|
http://voices.canonical.com/feed/atom/tag/quickly/
|
CC-MAIN-2014-52
|
refinedweb
| 8,983
| 52.6
|
Scripting - Language independent framework for enabling application scripting.
package MyApp;

use Scripting::Expose as => "Application";

sub println : Function {
    print @_, "\n";
}

package main;

use Scripting;

Scripting->init(
    with   => "scripts/",
    allow  => "js",
    signed => "/var/db/AppFoo.sign.db",
);

Scripting->invoke("Script-Foo");
Scripting is a framework for exposing bits and pieces of your Perl application to various scripting engines.

Although there are many languages that can be embedded in and called from Perl, they all have different features and different APIs. The Scripting module unifies them into a single, sleek yet powerful API that hides all the differences from the programmer.
- Simple attributes based API - Support for different script environments - Support for different languages - Signed scripts - Event system to invoke scripts
Scripting uses an attribute-based API; if you don't know what that is, read perlsub.

In some applications, there might be a need to provide different kinds of APIs to scripts. These are referred to as script environments in this documentation. They are optional to use, and there is a global environment that is used if the target environment is omitted.
To make a package accessible from scripts, import Scripting::Expose into your namespace by using the module. It's very important that this is done at compile time, so using
require Scripting::Expose and calling
Scripting::Expose->import is not recommended.
In its simplest form, it can look like this.
package MyApp; use Scripting::Expose;
Scripting::Expose will use the caller's package name by default. However, in some cases this does not work, because valid symbols in the Scripting module are only those that match /^[A-Za-z][A-Za-z0-9_]*$/. Therefore, MyApp::Document would not be supported. To provide another name, pass
as => 'Another_Name' on usage.
package MyApp::Document; use Scripting::Expose as => "Document";
This package will now become available to all script environments. If we only like to expose it to a single or multiple environments, we pass
to => "Foo" or
to => [qw(Bar Baz)].
package MyApp::Document; use Scripting::Expose as => "Document", to => "WordProc";
The four attribute handlers Function, Constructor, ClassMethod and InstanceMethod all use the name of the symbol (with the package removed) by default. If another name is preferred, one can be supplied with
as => "Another_Name"
use Scripting::Expose;

sub print_line : Function {
    print @_, "\n";
}
package MyApp::Document;

use Storable qw(store retrieve);
use Scripting::Expose as => "Document";

sub new : Constructor {
    my $self = bless {}, __PACKAGE__;
    return $self;
}

sub set_title : InstanceMethod(as => "setTitle") {
    my ($self, $title) = @_;
    $self->{title} = $title;
}

# more methods here

sub save : InstanceMethod(as => 'saveDocument', Secure => 'arguments') {
    my $is_signed = pop;
    my ($self) = @_;
    return 0 unless ($is_signed);
    store $self, $self->{path};
}

sub load : ClassMethod(as => 'loadDocument') {
    my ($pkg, $path) = @_;
    return retrieve($path);
}
Scripting supports a basic mechanism for signing scripts. Signing is optional and only affects functions and methods that are exposed with the Secure argument.
To "secure" a function or method, pass
Secure => "arguments" to the attribute handler. This will tell Scripting to pass an extra argument to the subroutine when it is invoked. The value of the argument will be 1 (true) if the calling script is signed, and 0 (false) if it's not.
Signing a script is done by using the tool sign_script.pl which is available in the tools directory in the module distribution.
./sign_script -d /path/to/sign.database -f /path/to/script.
All paths are stored as absolute paths, so if you move a signed script it must be resigned.
Issue
./sign_script --help for more options.
Upon initialization, all scripts found in the supplied directory paths are loaded. To initialize Scripting, call
Scripting->init.
The argument with is mandatory, the others are optional.
allow is either a scalar containing a whitespace-separated list of file extensions, for example "js tcl", or an array reference with each allowed file extension, for example [qw(js tcl)].

signfile is the path to the database with script signatures. If signfile is omitted, script signing is disabled.

with is a scalar containing a path to the directory from which it should load the scripts. It is also possible to load from several paths by passing an array reference.
Loading scripts works in a special way. If the name of the directory in which the script is found is an environment, the script will be loaded into that environment. Otherwise, the script will be loaded into the global environment. The name of the file without the extension becomes the name of the event that is used to invoke the script. This table should make it a bit clearer.
#                      Environment : Event name
scripts/test.js      # _Global     : test
scripts/Foo/bar.js   # Foo         : bar
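For illustration only, the directory-to-environment rule above can be sketched in a few lines of Python (this is not the module's Perl implementation; `classify` is a hypothetical helper name):

```python
import os

# The parent directory name selects the environment (falling back to the
# global environment), and the file basename minus its extension becomes
# the event name used to invoke the script.
def classify(path, environments):
    parent = os.path.basename(os.path.dirname(path))
    event = os.path.splitext(os.path.basename(path))[0]
    env = parent if parent in environments else "_Global"
    return env, event

print(classify("scripts/test.js", {"Foo"}))     # ('_Global', 'test')
print(classify("scripts/Foo/bar.js", {"Foo"}))  # ('Foo', 'bar')
```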
To invoke a script call
Scripting->invoke($name_of_event). If the event doesn't exist, nothing happens; there won't be an exception or error. Calling the invoke method with only one argument looks for the event in the global environment. To invoke a script in another environment, pass the name of the environment as the first argument, for example
Scripting->invoke(WordDoc => $name_of_event). Future versions of Scripting may support an attribute handler to automatically invoke events upon a subroutine call.
- More security options (don't run an unsigned script at all etc.)
- More languages (Tcl, Java, C)
- Function-name-based event invocation (have multiple event handlers in one script)
- Things I can't think of right now
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Claes Jacobsson, claesjac@cpan.org
|
http://search.cpan.org/~claesjac/Scripting/Scripting.pm
|
CC-MAIN-2016-26
|
refinedweb
| 912
| 55.54
|
colorama 0.2.6
Cross-platform colored terminal text.
- Download and docs:
- Development:
- Discussion group:
Description
Makes ANSI escape character sequences for producing colored terminal text and cursor positioning work under MS Windows.
ANSI escape character sequences have long been used to produce colored terminal text and cursor positioning on Unix and Macs. Colorama makes this work on Windows, too, by wrapping stdout, stripping ANSI sequences it finds (which otherwise show up as gobbledygook in your output), and converting them into the appropriate win32 calls to modify the state of the terminal. On other platforms, Colorama does nothing.
Colorama also provides some shortcuts to help generate ANSI sequences.
An alternative approach is to install (…)

Colored terminal text is produced by printing the formatting constants (e.g. Fore.RED + 'some red text' + Style.RESET_ALL). The available formatting constants are:

Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
Style: DIM, NORMAL, BRIGHT, RESET_ALL
Style.RESET_ALL resets foreground, background and brightness. Colorama will perform this reset automatically on program exit.
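As a rough illustration of what such constants contain, the following stdlib-only sketch builds the underlying escape sequences by hand. The helper `code_to_chars` and the constant names here are hypothetical; colorama's real Fore/Back/Style constants resolve to equivalent strings.

```python
CSI = "\033["  # Control Sequence Introducer: ESC [

def code_to_chars(code):
    # e.g. 31 -> "\x1b[31m", the SGR sequence for red foreground text
    return CSI + str(code) + "m"

FORE_RED = code_to_chars(31)        # foreground colours use codes 30-37
BACK_GREEN = code_to_chars(42)      # background colours use codes 40-47
STYLE_RESET_ALL = code_to_chars(0)  # 0 resets colours and brightness

message = FORE_RED + "red text" + STYLE_RESET_ALL
print(repr(message))  # prints '\x1b[31mred text\x1b[0m'
```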
Cursor Positioning
ANSI codes to reposition the cursor are supported. See demos/demo06.py for an example of how to generate them.
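For readers without the demo handy, here is a minimal stdlib-only sketch of the cursor-positioning and screen-clearing sequences described in this page. The function names are assumptions for illustration, not colorama's public API.

```python
CSI = "\033["

def set_cursor_position(x=1, y=1):
    # ESC [ y;x H : position the cursor at column x, row y (1-based)
    return "%s%d;%dH" % (CSI, y, x)

def clear_screen(mode=2):
    # ESC [ mode J : mode 2 clears the entire screen
    return "%s%dJ" % (CSI, mode)

print(repr(set_cursor_position(10, 5)))  # prints '\x1b[5;10H'
```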
import sys
from colorama import init, AnsiToWin32, Fore

init(wrap=False)
stream = AnsiToWin32(sys.stderr).stream

# Python 2
print >>stream, Fore.BLUE + 'blue text on stderr'

# Python 3
print(Fore.BLUE + 'blue text on stderr', file=stream)
Status & Known Problems
I’ve personally only tested it on WinXP (CMD, Console2), Ubuntu (gnome-terminal, xterm), and OSX.
Some presumably valid ANSI sequences aren’t recognised (see details below) but to my knowledge nobody has yet complained about this. Puzzling.
See outstanding issues and wishlist at:
If anything doesn't work for you, or doesn't do what you expected or hoped for, I'd love to hear about it on that issues list, would be delighted by patches, and would be happy to grant commit access to anyone who submits a working patch or two.

The ANSI sequences currently handled include:

    # cursor positioning
    ESC [ y;x H     # position cursor at x across, y down

    # clear the screen
    ESC [ mode J    # clear the screen. Only mode 2 (clear entire screen)
                    # is supported. It should be easy to add other modes,
                    # let me know if that would be useful.

It would be cool to add them though. Let me know if it would be useful for you, via the issues on google code.
Development
|
https://pypi.python.org/pypi/colorama/0.2.6
|
CC-MAIN-2017-30
|
refinedweb
| 376
| 56.66
|
This article is first in a series of articles, that will show you how to fully integrate Java and .NET.
The articles are as follows:
Java has been with us for more than ten years now and has made phenomenal inroads into the world of system, business, Internet and educational programming. In the last few years, a new and competing technology was introduced by Microsoft; this technology is known today as the .NET Framework. This article is not intended to deal with each of the platforms, the purpose of this article is to showcase my experience for making C# co-exist with Java and vice versa in a native manner.
This article is based on a project with the code name Espresso, which was developed by Reflective Software. On their Web site, you will be able to find updated information and build for this project.
As mentioned before, Java and .NET have been around for several years. During this time, there have been many articles and solutions attempting to solve the interoperability issues between these two frameworks. The major problem with these solutions was that they tried to integrate the two frameworks using external communication. This means that each framework simultaneously runs as a separate process on the same machine, or as a process on different machines. The communication uses several technologies, such as Web Services, .NET Remoting, etc. The cardinal problem with this type of communication is that it's very slow and network dependent.
The suggested solution will show how the two frameworks can live together in the same process and communicate seamlessly with each other.
This article describes a high-performing interoperability solution between the Java platform and the .NET Framework. The suggested solution does not replace the Java Virtual Machine or the .NET Framework runtime, instead, your JVM or .NET are each hosted within the opposing runtime environment, ensuring that vendor-specific VM optimizations are preserved.
"Espresso" objectives are:
The initial code was taken from the Caffeine open source project, which is a one-way API invocation from .NET to Java. This article and the following ones describe a full solution that will allow invocation of APIs from .NET to Java and vice versa, and seamless integration between the two platforms.
In one of the projects we developed for my company, Reflective Software, we had an interoperability requirement between Java and .NET. We tried using Web Services, but it was just not good enough. We used Caffeine and extended it, so we will be able to make Java call .NET and vice versa. These articles contain the phases to accomplish the project and the source code.
In this first article, I will explain the Espresso solution, which will be the base for the other articles. Most of the information in this first article is based on the Caffeine project, so if you are familiar with this problem, you can just jump to the next part, which is about extending the Caffeine project to new levels.
The Java Native Interface (JNI) is a public interface that all implementors of the Java Virtual Machine must provide. JNI is a set of native functions contained in the JVM native library (jvm.dll, or libjvm.so depending on the architecture) that allows calling native functions from Java as well as native code creating a JVM and invoking Java classes.
Espresso employs JNI technology to host a Java Virtual Machine (JVM) under a .NET runtime. As the JVM runs under the same OS process as the .NET runtime, there are no IPC costs associated with this solution. The next figure illustrates the runtime architecture of Espresso. The blocks in grey are provided by Espresso and the blocks in yellow are provided by the JVM.
The bridge.dll is C++ code that exposes the functions that will be used by JNI.NET.Bridge.dll. The JNI.NET.Bridge.dll is a group of classes that enable the .NET Framework to call Java APIs using P/Invoke. The .NET classes are classes that wrap the Caffeine API in order to give it a more attractive and OO look. In these articles, we will not cover this library, because I took a proxy-based approach, which is different from the Caffeine approach.
Caffeine
If you know JNI, you will know that the JNIEnv interface pointer is at its core. JNIEnv contains a table of JNI functions passed as an argument to each native method. The main advantage of the JNIEnv interface pointer design is that JNI implementations do not need to link to a particular version of the Java Virtual Machine. Binary compatibility is the main design criteria for JNI, and JNIEnv obeys this.
Accessing an interface pointer from C#, although possible, is very cumbersome. For this reason, Espresso provides the bridge library. Another reason for the bridge library is that JNIEnv uses thread-local storage, which means that each native thread must be attached to a Java thread in order to ensure correctness. The Espresso bridge library flattens the JNIEnv interface and ensures that each native thread is correctly attached to a Java thread.
jclass
FindClass(const char *name)
{
    JNIEnv *env = GetEnv();
    return (*env)->FindClass(env, name);
}
For those who know the C++ bindings of JNI, the construct above will be familiar. We are flattening the structure interface, removing JNIEnv from the function signature. Instead, the first line in the FindClass function calls the GetEnv() function. GetEnv() returns the JNIEnv interface pointer attached to the current thread, and if the current native thread is not attached to a thread-local storage, it attaches the thread and returns a JNIEnv interface pointer. The second line makes the actual call to the JNI FindClass function, via the JNIEnv interface pointer.
The bridge library contains 1:1 flattened versions of all the functions contained in the JNIEnv interface pointer structure. Additionally, the bridge library contains the GetEnv() function described above, as well as functions to create and destroy a Java VM.
The bridge library provides two convenience functions to create a Java VM:
int CreateJavaVMDLL(const char *dllName,
                    JavaVMOption *options,
                    int nOptions);

int CreateJavaVMAnon(JavaVMOption *options,
                     int nOptions);
CreateJavaVMDLL allows to specify the shared library containing the Java VM which should be loaded at runtime. The bridge library is not linked to any particular Java VM implementation. Instead, it uses dynamic library location and loading (LoadLibrary/GetProcAddress), based on the dllName passed to the CreateJavaVMDLL function. CreateJavaVMAnon does not take a dllName, and defaults dllName to jvm.dll on Win32. This seems to be a valid default for 90% of the setups. Most applications will not need to specify the JVM library, but in case the default does not work, JNI.NET.Bridge provides the ability to configure the name of the DLL which contains the JVM (see Section Configuration).
Most of the JNIEnv functions have three forms:
jchar (JNICALL *CallNonvirtualCharMethod)
(JNIEnv *env, jobject obj, jclass clazz, jmethodID methodID, ...);
jchar (JNICALL *CallNonvirtualCharMethodV)
(JNIEnv *env, jobject obj, jclass clazz, jmethodID methodID,
va_list args);
jchar (JNICALL *CallNonvirtualCharMethodA)
(JNIEnv *env, jobject obj, jclass clazz, jmethodID methodID,
jvalue *args);
JNIEnv functions can take either an ellipsis, a variable argument list, or a jvalue union. P/Invoke does not allow ellipsis nor variable argument lists, as type marshalling would not be possible, so the bridge library only exhibits the function form using the jvalue union.
This library is the .NET counterpart of bridge.dll. In this library, you will find a class that wraps the JNI calls in a more OO manner and enables the .NET developer to use the Java API in a much more elegant way than just calling JNI API.
All the JNI types have a .NET counterpart:
.NET type        JNI type
---------        --------
JObject          jobject
JClass           jclass
JMethod          jmethodID
JField           jfieldID
JArray           jarray
JBooleanArray    jbooleanarray
JByteArray       jbytearray
JCharArray       jchararray
JDoubleArray     jdoublearray
JFloatArray      jfloatarray
JIntArray        jintarray
JLongArray       jlongarray
JObjectArray     jobjectarray
JShortArray      jshortarray
JThrowable       jthrowable
JString          jstring
JNI.NET.Bridge provides a number of additional types, to simplify the usage of the bindings:
JMember
JConstructor
All JNIEnv functions have been encapsulated in the corresponding .NET type. For example, the bridge function:
jint MonitorExit(jobject obj);
is encapsulated by:
public class JObject
{
// ...
[DllImport(JNIEnv.DLL_JAVA)]
static extern int MonitorExit (IntPtr obj);
public void MonitorExit () { // ... }
// ...
}
As mentioned earlier, the bridge library only exposes the functions using a jvalue union, but not the ones using a variable argument list, nor ellipsed parameters. JValue is a C# struct that can be marshalled to a C union, by using the FieldOffset attribute. The original jvalue union, defined in jni.h, looks like:
typedef union jvalue {
jboolean z;
jbyte b;
jchar c;
jshort s;
jint i;
jlong j;
jfloat f;
jdouble d;
jobject l;
} jvalue;
The C# struct that simulates this union looks like:
[StructLayout (LayoutKind.Explicit)]
public struct JValue
{
[FieldOffset (0)] bool z;
[FieldOffset (0)] byte b;
[FieldOffset (0)] char c;
[FieldOffset (0)] short s;
[FieldOffset (0)] int i;
[FieldOffset (0)] long j;
[FieldOffset (0)] float f;
[FieldOffset (0)] double d;
[FieldOffset (0)] IntPtr l;
// ...
}
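For comparison only (this is not part of Espresso), the same overlapped-field layout that C# achieves with [StructLayout(LayoutKind.Explicit)] and [FieldOffset(0)] can be sketched in Python with ctypes.Union, which may help readers unfamiliar with explicit struct layout. The ctypes field types chosen here are assumptions mirroring the JNI typedefs.

```python
import ctypes

# All fields start at offset 0 and share the same storage, exactly like
# the jvalue C union shown above.
class JValue(ctypes.Union):
    _fields_ = [
        ("z", ctypes.c_bool),      # jboolean
        ("b", ctypes.c_byte),      # jbyte
        ("c", ctypes.c_wchar),     # jchar (wchar width is platform-dependent)
        ("s", ctypes.c_short),     # jshort
        ("i", ctypes.c_int),       # jint
        ("j", ctypes.c_longlong),  # jlong (64-bit)
        ("f", ctypes.c_float),     # jfloat
        ("d", ctypes.c_double),    # jdouble
        ("l", ctypes.c_void_p),    # jobject, as an opaque pointer
    ]

v = JValue()
v.i = 65
print(v.i)  # prints 65; writing any other member would overwrite it
```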
All Java objects are wrapped by JObject. JObject is a .NET type that keeps a reference (Handle property) to the Java object. In other words, to instantiate a Java object, we must create an instance of a JObject.
There are two ways to create an instance of a JObject: by calling the JObject constructor and providing a JConstructor, or by calling the instance method NewInstance in JClass. Let's concentrate on the first option, which is the generic way to create an instance. The steps we must follow to create an object are:
In order to find the class, we invoke the static method ForName in JClass, providing the fully qualified name of the Java type we require to be loaded. Under the scenes, ForName calls the FindClass function in the JNIEnv interface pointer.
Once we have a JClass object, which corresponds to a Java type, we must obtain a constructor for that type. We do this by calling the GetConstructor(string sig) method in JClass. Note that the signature follows the JNI naming convention (you can use the Java tool javap for obtaining signatures). An example of use would be:
JClass clazz = JClass.ForName ("Prog");
JConstructor ctr = clazz.GetConstructor ("()V");
In this example, we are obtaining the no args constructor for the Java class, Prog. For the no-args constructor, we could have also called the convenience method GetConstructor(), equivalent to calling GetConstructor("()V");.
Once we have a constructor, we can call the JObject constructor:
JObject obj = new JObject (ctr);
You should know that JObject exposes three constructors, none of them public. The first constructor is the one we have just shown, and is used by derived classes from JObject. For example, if we were writing Prog.cs, a C# wrapper to a Java class Prog.class, we would use that constructor. This constructor's signature is:
protected JObject (JConstructor ctr, params object[] args);
This constructor takes a variable number of System.Object arguments (this is what the C# params keyword does). This allows derived classes to obtain the constructor (JConstructor) during the static class constructor, and call the base constructor with the reference to JConstructor and the arguments passed to it.
JConstructor is just a special type of JMember. We can query JClass for any JMember, be it a field, a constructor or a method. Invoking a method or accessing a field is not very different from creating a new object instance. We still require a JClass, out of which we will obtain a JMethod or a JField using its signature.
Going back to our example, Prog, if we had a Java method:
public class Prog {
public int max (int a, int b) {
return (a > b) ? a : b;
}
}
We could obtain the method from JNI.NET.Bridge by calling the GetMethod() in JClass:
JClass clazz = JClass.ForName ("Prog");
JMethod _max = clazz.GetMethod ("max", "(II)I");
GetMethod takes two arguments: the name of the method, and the internal method signature. To obtain the internal method signature, you can either learn the specification, or use Sun's Java SDK javap tool.
Once we have obtained a method, we can invoke it. In our Prog example, we would call:
int result = _max.CallInt (obj, a, b);
CallInt is used for calling methods that return an int. Based on the return type, there are other variants in JMethod to invoke a Java method:
Method         Return type
------         -----------
CallBoolean    boolean
CallByte       byte
CallChar       char
CallShort      short
CallInt        int
CallLong       long
CallFloat      float
CallDouble     double
CallVoid       void
CallObject     JObject
All CallXXX methods take two versions: one that takes a JValue array, and one that takes a variable number of arguments. For example, for CallInt:
public int CallInt (JObject obj, JValue[] args);
public int CallInt (JObject obj, params object[] args);
Whenever possible, the JValue[] version should be preferred for performance reasons. When a variable argument list is used, boxing/unboxing overhead is incurred, as well as the overhead of a conversion routine.
Additionally, JMethod provides a convenience method, Invoke, that avoids having to know which version of CallXXX to use. The signatures of Invoke are:
public object Invoke (JObject obj, JValue[] args);
public object Invoke (JObject obj, params object[] args);
Invoke will determine the right CallXXX to call based on the method signature provided when the instance of JMethod was created. Invoke returns an object, exploiting .NET's boxing. Note therefore that although convenient, this method provides less performance than calling directly the right CallXXX version, where there is no boxing involved.
All versions of CallXXX, as well as Invoke, take as their first argument a reference to an instance of JObject. Obviously, an object reference is only required for instance methods, and for class (static) methods, the first argument is ignored. It is therefore legal to pass a null reference to class (static) methods.
All arrays in Java (and .NET) are objects. JNI exposes all arrays as a jarray, which is encapsulated in JNI.NET.Bridge as JArray. JNI also has specialized versions of jarray, such as jintarray, which JNI.NET.Bridge encapsulates as JIntArray.
The JNI.NET.Bridge JArray specializations, such as JIntArray or JObjectArray, provide three constructors:
public JObjectArray (JObject other);
public JObjectArray (int length, JClass clazz);
public JObjectArray (JObject[] source, JClass clazz);
An example will clarify when to use each constructor. If we were calling a Java method fooB:
public int fooB (Prog[] arg);
The method fooB needs to get an instance of a JObjectArray, so the first step in .NET will be to convert the .NET Prog[] array to a JObjectArray. We do this by calling the third constructor:
// in .NET
Prog[] args = ...
JObjectArray array = new JObjectArray (args, Prog.JClass);
int r = _fooB.CallInt (this, array);
The JObjectArray constructor first allocates a jobjectarray using the JNI function NewObjectArray, and providing the jclass referenced by Prog.JClass. The constructor of JObjectArray then copies the elements contained in Prog[] args into the newly created object using the JNI function SetObjectArrayElement.
We have seen how to call a Java method taking an array. A similar pattern is used for returned arrays. Let's say we wanted to call a Java method fooA:
public Prog[] fooA ();
As we know, the call to fooA is done as:
JObject o = _fooA.CallObject (this);
In this case, we must cast a JObject to a .NET array. We will use the copy constructor:
JObjectArray array = new JObjectArray (o);
And obtain the elements in the array as an array of JObject[] and copy one to one to a newly allocated Prog[]:
JObject[] oArray = (JObject[]) array.Elements;
int l = array.Length;
Prog[] r = new Prog[l];
for (int i = 0; i < l; i++) {
r[i] = new Prog (oArray[i]);
}
Note that ideally, we would have liked to write something like:
Prog[] r = (Prog[]) array.Elements;
The current implementation does not do this because of performance reasons. The same as with casting, commented in the previous section, we could have implemented array casting using reflection, as JMethod knows the type of o, and JObjectArray could have used the static JClass property in JObject to instantiate the right object array.
The configuration mechanism allows you to specify the library load path, the classpath, and the command line arguments you would like to pass to the Java runtime. This is especially interesting when either you have to setup a complex CLASSPATH (usually the case in J2EE applications), or because your JVM is not called jvm.dll or libjvm.so.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<sectionGroup name="jvm">
<section name="settings" type="Jni.Net.Bridge.JNISectionHandler, Jni.Net.Bridge"/>
</sectionGroup>
</configSections>
<jvm>
<settings>
<jvm.dll
<java.class.path
<java.class.path
<java.library.path
<java.option
</settings>
</jvm>
</configuration>
In this example, we are specifying that the JVM is contained in a DLL called jvm.dll; that the classpath should contain the minijavart.jar file and "." directory (in the current directory since no relative nor absolute path is given in this example); that native libraries (e.g. System.loadLibrary()) should be searched under C:\Program Files\Java\jre1.5.0_07\lib, and that class garbage collection should be traced at verbose level.
The jvm.dll needs to be specified in the settings section if it is not found in the PATH or in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment.
The project was developed on .NET Framework 2.0, using Visual C# 2005 Express and Visual C++ 2005 Express. The source zip file contains two Visual Express projects. The first one is the bridge project, which is the C++ project for bridge.dll; open this project and build it first. The second project is the JNI.NET.Bridge project, a C# project; it includes all the source code of JNI.NET.Bridge.dll and a sample test program. Open it and build it using Visual C# 2005 Express or Visual Studio 2005. Run the Test project to check that everything is working.
In the next part, I will explain how to implement .NET proxy in order to call the Java classes in a more OO manner, and how to call .NET from Java.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
The ramblings of Kristopher Makey.
I'm not going to go into detail about everything in the VS 2008 SP1 package, but I would like to point out some key deployment features that were added and why you would want to consider installing it even if you have a large user base using a base VS 2008 customization. In this discussion I will refer to things as being RTM (meaning VS 2008 and VSTOR 3.0 pre-SP1) and SP1.
Adding VSTOR 3.0 and .NET 3.5 SP1 Patch Redistributables:
We've added some additional functionality to the base VSTOR 3.0 runtime that you can actually take advantage of with RTM applications. As such, if you install the SP1 patch on your development machine, you should be able to include the SP1 patches for both .NET 3.5 and VSTO Runtime 3.0. Adding these patches to the bootstrapper will not upgrade your existing customers, but any new customers you have will be able to take advantage of the new functionality without any changes on your end. Any previously existing customers could re-run the Setup.exe file to receive the upgraded functionality.
One thing to keep in mind about the changes I'm describing is that the customization itself is not being upgraded, just the runtime it runs on. If you recompile your customization against the SP1 runtime to take advantage of a new feature, it will not work for customers who haven't installed the SP1 patches.
Additional Redistributable Package, the Office 2007 PIAs:
We've authored an Office 2007 PIA installer for general consumption. In general, Office 2007 will install the PIAs by default; specifically, it will install them if a .NET Framework is installed on your machine at the time you install Office. The problem with this is that it turns out a lot of you out there still use XP, and clean installs of XP do not come with .NET installed. As such we've included a PIA redist package that you should be able to include. This package checks for a previous instance of an Office PIA on the machine before it does anything that might require elevation.
Event Logging:
Specifically, any deployment errors that occur on your client machines will by default be logged directly into the Windows event log. The functionality here isn't particularly rich; we've basically taken the same information that we display in the "alert dialog" that turning off VSTO_SUPPRESSDISPLAYALERTS enables. The key is that you should now be able to excavate what the actual cause of an error was, even if the issue is resolved before you get a chance to investigate.
Update Cancellation:
In RTM the timeframe in which an update can be "successfully" canceled is very small. If the user cancels the update outside of that particular window, the VSTO runtime sees it as an error case and blocks further execution of the customization. Based on your feedback, we changed this behavior in SP1 such that you should now be able to cancel an update and continue running the previously installed version. There are still cases where an error will occur (for example, if you are installing the customization for the first time), but in general this should be a lot closer to what you told us you wanted.
Thank You for Reading.
Kris.
In my last post I talked about the first steps to building my custom prerequisite. In it, I built an MSI that would display a message box during the install phase. Here is what Setup1 looks like when you run it:
It's nothing too ingenious but it gets the job done.
So...here we are with an MSI, but I want to be able to somehow integrate this into the setup.exe file that gets created for my VSTO customization. The first step is to author some bootstrapper manifests. Specifically, a product.xml file and a package.xml (one for each supported language) need to be created. Thankfully there is a power tool that will help you do this.
David Guyer has produced an excellent, simple-to-use UI tool to help with creating the bootstrapper manifests. You can get the VS 2008 version of the tool here: . This tool is currently listed as being in beta, and I still had to do some manual work to get everything lined up. Here are the steps I took:
As you can see, I get this warning about the Product Code. I talked to David Guyer about this and here was his comment:
I think once you enter a display name, I try to autogenerate a product code. Just to help clarify, the bootstrapper’s product code is not the same as the MSI’s product code… so you don’t need to enter the MSI’s product code… a bootstrapper product code is something typically like “Microsoft.VisualBasic.Powertoys.1.0”
You can manually add this to the files. If you go to the build output folder you will see all of the files
So the Product.xml file should look something like this:
<?xml version="1.0" encoding="utf-8"?>
<Product ProductCode="" xmlns="">
<PackageFiles CopyAllPackageFiles="false">
<PackageFile Name="setup1.msi" Hash="DD8BF9497F8447128E4F6C72C8D30D8E2871E9BE" />
</PackageFiles>
<Commands Reboot="Defer">
<Command PackageFile="setup1.msi">
<ExitCodes>
<ExitCode Value="0" Result="Success" />
<DefaultExitCode Result="Fail" String="An unexpected exit code was returned" FormatMessageFromSystem="true" />
</ExitCodes>
</Command>
</Commands>
</Product>
The last steps are to add a Product code to this file and then copy the files in the Build Output folder to the prerequisites directory.
The path shown here would be: C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\My Custom Prerequisite\
Once you've done that you should be able to include your custom prerequisite step as if it was just another standard part of your setup.
I'm going to end with that; it should be enough to get you started on deploying even more powerful and interesting VSTO solutions.
It's been a while since I've posted here. I'm not quite ready to jump back into pushing deployment stuff again; I have had other things on my plate (vacation being one of them) and am finding less time to tackle the interesting questions that are coming through.
That said, I do want to keep up the habit of posting, so I'm going to switch gears a bit and talk about something a little more general than VSTO Deployment. Yesterday marks 2 years working for Microsoft, working in the software industry and working in the testing field. Here are some of the things I have learned in those two years that I think may interest you:
On Chocolate:
Having a candy dish full of chocolate is a great way to learn a lot of interesting things with very little effort. I maintain my candy dish religiously and it's turned out to be very fortuitous. Every time someone comes to "snatch" a piece I can often engage them long enough to learn something interesting from them. Sometimes it is programming tips and other times it is just basic information about what they're working on.
Another lesson with this: don't eat the chocolate yourself, it's just not as good if it's not provided by someone else.
On Testing in General:
When I started working, my belief was that a tester's goal was to find bugs. In some cases this is true, but here at Microsoft my job is to make the product better. With this in mind, a lot of what I focus on is not so much "finding bugs" as "understanding" the product (and filing bugs when that understanding reveals gaps between what the product does and what we say it does or want it to do). Part of this understanding is knowing what customers need, which is why I am here writing this, hoping that should you be reading you'll drop me feedback on how you're using "my product".
On Automation:
I believe software testing as an industry and automation in particular are still in an infant stage. What I have seen internally and what I have heard about what happens outside of Microsoft indicates to me there is still a lot of work to be done and learned in this space. With that in mind, I believe the goals of Automation are often counter to what automation actually provides. Automation doesn't find bugs during development but rather forms a sort of rigid documented (in code) expectation of what the product is that is easily evaluated and repeated. I'm not saying that Automation isn't important (quite the opposite) but rather often its value is not fully understood.
Automation is important especially if you want your product to work outside of the context in which it is initially written. (We will sometimes find bugs around things like localization/globalization, though automation is only one step.)
The more "customer scenario" focused your automation is, the more likely it is to move forward and be understandable. It is very easy for me to write automation that checks that the value within a file is the same as the value specified in a text box within the product, but the value of that test is much harder to express and the test is more likely to be broken as the product changes. However, if I tell you my test installs an add-in that was published to a web server, it's clear to see how the test is "relevant" to how the product works and it also is more likely to move forward as the product does.
Automation writers may know more about complex problems than product developers: Automation often must be product, context and language resilient. If your product is potentially going to have backwards and forwards version resiliency, you might ask your automation writers if they've tackled the problem before and what they've specifically learned about it.
On Development:
Be intentional. When you are writing something new: draw a picture or at least tell someone about your design. Designing something before writing it will always result in something better than something you simply "piece together".
Use intellisense comments when available.
Use comments to give context and meaning to your code.
Avoid TODO: comments; TODO is a form of procrastination that can often cost you more than simply doing the work initially. If you absolutely can't implement something at that moment, instead of just a "TODO:", leave a (large) block comment at the entry point describing what needs to be done and throw a not-implemented exception (preferably with something to help indicate what needs to be implemented).
Own your code but expect that you may be hit by a bus tomorrow. This means that when you check code into source control you expect that someone will be able to pick up where you left off and achieve your same (end) goals.
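As a concrete (hypothetical) illustration of the TODO advice above, in C++ the "fail loudly instead of TODO" pattern might look like this; the function and the missing "refund path" are invented for the example:

```cpp
#include <cassert>
#include <stdexcept>

// Instead of leaving "// TODO: handle refunds", document what is missing
// at the entry point and fail loudly so the gap cannot be silently forgotten.
int computeDiscount(int priceCents) {
    if (priceCents < 0) {
        // Not yet implemented: negative prices should follow the refund
        // path. Whoever picks this up: see the pricing spec for the rules.
        throw std::logic_error("computeDiscount: refund path not implemented");
    }
    return priceCents / 10;  // flat 10% discount for the normal path
}
```

The exception surfaces the missing case immediately during testing, where a bare TODO comment would let the wrong result slip through.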
On working in general:
Be open to other people's interests. It's a lot easier to get cooperation from someone who you've helped than to ask for help from someone with a promise of future help.
Never eat lunch alone.
Be fearless even in the face of failure: failing is an opportunity to learn and improve (don't forget to grasp it).
Thanks for reading.
[edited 5/27/08: Minor typos and sentence clarification]
I got ahead of myself and thought it was one week later than it really is (sort of good because I was beginning to think I was a lot further behind than I was). I will be out next week. Unfortunately this week has been super crazy. I'm going to skip a full post this week but I will point out some posts of interest for those who may be looking at an "All-user" administrative installation. Misha Shneerson has written some very good posts all about all-user installation in VSTO on his blog.
The posts are here:
You should pay specific attention to the third article as it covers VSTO 3.0 specifically.
With that, I'm going to go back to being super-busy until I wrap up this week and then take a week off (there definitely will be no post next week).
Last week I had the pleasure of getting some very interesting feedback from some of the Microsoft Office development audience (people who develop against office through VSTO/VSTA/VBA/Shared Add-ins/etc ). If you were at said event and are now reading here because of it, I welcome you and hope you enjoy the time you spend.
One of the more interesting things of note I heard was that VSTO just wasn't as easy and simple to deploy for those one-shot customizations that might be created to solve a small problem, save some time or simply just do something repetitious quicker.
Certainly this is entirely true under the older Security model, CASPOL is not something you just "simply do" and move on, it takes a fairly significant degree of initial investment and understanding of .NET to get to the "working" state. Click-Once can also seem like it's a little too heavy-weight for a similar reason, there's all this extra stuff to consider: prerequisites, updating, a publishing location, certificates.
This is why I'm going to talk about the "tricks" you can use for that simple once-off git 'er done solution. These methods are supported, though they are not considered our prime scenarios in this particular context.
First, let's talk about publishing versus building. If you are creating a customized document, you might consider bypassing publishing; if you're creating an application-level customization (something that will impact multiple documents), then I fully recommend publishing even for these quick and dirty solutions. There are still some things you can do to make things simple and quick with the publishing experience for add-ins. I will come back to add-ins and publishing after I cover documents.
What the Developer needs to do:
What your user needs to do:
Incremental improvements towards more "professional" level solutions that you can do:
Inherently there is more work to do, since you need to somehow "hook up" the add-in to the hosting (Office) application. Fortunately for you, we've thought through this scenario. Here are the steps to the quickest "simple" setup.
What the developer needs to do:
What the user needs to do:
These are the easiest 2 "throw-away" solution deployment methods that I can think of. There are so many advantages to using VSTO you shouldn't feel blocked by the belief that deployment must always be Click-Once with large amounts of infrastructure. There are ways to wrangle it to manage scenarios beyond the initial "deploy via server and provide constant updates" model that Click-Once specifically targets well.
With that I'll wrap this up with a final side-note. I wrote the bulk of this post a week early because next week I will be on a boat most likely not thinking about anything other than having fun and looking at scenery. I may or may not have a post due to this trip, but it's possible I might have just enough of a backlog to work something out.
[note: Edited 5-19-2008, Attempting to clarify what is done by user versus developer]
Before I go any further in this post I'm going to put a huge disclaimer:
Okay, disclaimer said, what I'm going to talk about is something you can do with a win-forms Click-Once application that you might try to do in VSTO (albeit with some large caveats). The basic scenario is this: you develop a customization that periodically checks for updates and contains a button the user can "force" an update check with. Click-Once provides an API for this that works very well, and you can use this same API with VSTO, though you travel from the realm of "it works and we've fully tested it" into the realm of "it might work but will very likely 'break' you anyway".
So let's get to the point. In previous posts I've mentioned that you can reference System.Deployment and with a little work get access to the "CurrentDeployment" object. I've used this in the past to demonstrate how to detect if your customization is running from local disk or from the Click-Once cache. We're going to use the same API, but now we're going to "force" an update check and install it. The second method is going to use our friend the "VSTO Installer" executable that ships as part of the VSTO Runtime 3.0.
Here's an example of a customized document I have created that demonstrates both methods:
I've dropped some Controls onto this document to demonstrate 2 basic scenarios and additional options for the first scenario.
The basic flow is this: the user clicks the button, and the 2 labels beneath it will give some information about the update/version. The first label will display the currently running version and the second will display the version after the update has run. As you can see in the example image the version is the same; I show this now to demonstrate the next point I'm about to make.
When the customer updates the solution in this manner (either through the API or via VSTO Installer), the currently running version will not change. To fully update to the new version, the running version of the customization would have to be shut down and the new version loaded up. Currently neither Office nor VSTO supports doing this. I imagine there are ways to do it, but I'll leave that for a later time.
So let's talk a little more about the API itself. I'm going to cover it first because I personally feel it's not the best solution, for several reasons, and then I'll move on to what I feel is a better direction.
The first issue that comes up is a matter of trust. In order to use the API to update the customization, the developer needs to grant specific permissions to the code in the Click-Once cache. My understanding, based on feedback from the developer who provided me with the trust code, is that this is a case where the Click-Once trust model and the VSTO trust model differ. What happens under the hood is this: ultimately, the VSTO runtime is what determines trust for the add-in. When it creates the add-in, it evaluates everything against the VSTO trust model and (given the add-in is actually trusted) executes the add-in code under full trust. Since trust is evaluated and stored by the VSTO runtime, the add-in itself must set the trust/permissions for the code in the Click-Once cache in order to execute the update, but it is able to do this since VSTO add-ins always run in the "full trust" context.
So here's the trust code for enabling the API:
//Create the appropriate Trust settings so the Application can do
//Click-Once Related updating
ApplicationDeployment);
The reason why I would not use the API for updating is that there are steps that the VSTO runtime must do as a result of the add-in existing within an Add-in Model versus a Stand-alone Executable Model. Essentially the add-in model is a little more complex than a "standard" Click-Once scenario since not only does the execution code have to be moved onto the user's machine, but also there is a chunk of meta-data that needs to be communicated to the Host-Application for the Customization so that the host process can properly plug-in the customization.
What this turns out to mean is: when the update is done using the API, not every part involved in the customization is correctly updated. The behavior I saw when I tried this scenario is that the ApplicationDeployment object behaves strangely after the update occurs. My initial investigation indicates that the solution information store is out of date after the update, which then causes the customization to run in an unexpected context.
I won't show you the code for updating because I can't personally recommend it for your application. What you might do though is use CurrentDeployment.CheckForDetailedUpdate() which will tell you if an update is available at least.
Anyway.
Let's move on to using VSTOInstaller...
In a previous post I talked about VSTOInstaller, but as a quick rehash: basically it's a thin executable that uses the same functionality in the runtime to update/install the customization. What it means in this scenario is that we can use System.Diagnostics.Process to call it and force an update. In the example code that I'm going to post I call it in 2 ways to demonstrate 2 alternative methods to consider.
In the first method you call VSTOInstaller directly (and silently). The advantage in using this method is that the user does not see any UI. The VSTOInstaller will return a negative value if the "update" failed. Unfortunately the return value of 0 is the only "positive" value you will get back and it is not informative of whether or not an update occurred or if the current version is up to date.
Here's the Code for using VSTOInstaller Directly:
label1.Text = "Current Version: " +
ApplicationDeployment.CurrentDeployment.CurrentVersion.ToString();
//Call VSTOInstaller explicitly in "Silent Mode"
string installerArgs = " /S /I \\\\GenericServer\\WordDocument2.vsto";
string installerPath = "C:\\Program Files\\Common Files\\microsoft shared\\VSTO\\9.0\\VSTOINSTALLER.exe";
System.Diagnostics.Process VstoInstallerProc = new System.Diagnostics.Process();
VstoInstallerProc.StartInfo.Arguments = installerArgs;
VstoInstallerProc.StartInfo.FileName = installerPath;
VstoInstallerProc.Start();
VstoInstallerProc.WaitForExit();
//Report Exit Code
MessageBox.Show("Exit Code: " + VstoInstallerProc.ExitCode.ToString());
//End "Silent Mode" Scenario
If you want UI, the easier method is to simply "shell out" the path to the .vsto file. Doing so invokes the MIME handler, which under the covers calls VSTOInstaller. This method is less likely to fall prey to issues that come from running on future versions of the runtime (not that I'm saying we support you doing this exactly, and I'm not going to go into specifics currently). The UI will inform the user whether the update actually happened or the version was already up to date, but I don't know what would happen if the path is not available, and there may be additional UI (I have seen specific cases in non-customization managed code where specific security settings on the OS would cause a blocking dialog when the shell handler was invoked this way). So your mileage may still vary.
And here's the code for installing using the Shell Handler:
//Call VSTOInstaller Via the Shell (will show UI) assumes the Mime handler works
string manifestPath = "\\\\GenericServer\\WordDocument2.vsto";
System.Diagnostics.Process ExplorerInstallProc = new System.Diagnostics.Process();
ExplorerInstallProc.StartInfo.Arguments = manifestPath;
ExplorerInstallProc.StartInfo.FileName = "explorer.exe";
ExplorerInstallProc.Start();
label2.Text = "\"NEW\" Current Version: " +
ApplicationDeployment.CurrentDeployment.CurrentVersion.ToString();
Here are some items to keep in mind if you are considering this at all:
If you do end up using any of this or have come up with a better solution, feel free to comment, I love open discussion about this stuff.
Kris Makey
I was planning to post something about 3 days ago, but things have been a little crazy on the home front (let's just say that 'the games' have been playing me these days) and a lot crazy on the work front (can't talk about it exactly, but it's been good busy mostly).
I have no VSTO tips today, sorry.
However I had a chance to receive some general advice on blogging, display styles etc. So this should almost be "See What I See Version 2-beta". On recommendation I've installed Windows Live Writer, and to be honest so far it has been a pretty okay experience. I had a lot of trouble connecting up the piping initially but it seems to be more of a factor of bits-in-the-piping issues (restrictive network settings) than anything wrong with the software. Firewalls can be tricky things sometimes.
Anyway, I got everything up and running and with a bit of customization I can now paste "nice" code instead of the ugly black boring notepad code I have been pasting in the past. An example: here's some "C++" "Code" that I _may_ have written recently to do a bit of a "refresher" since it's been almost 2 years since I've done anything serious with C++. Oh how the time flies and the tools rust.
// MyCppIsRusty.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "Widget.h"
int _tmain(int argc, _TCHAR* argv[])
{
int result = (int)*argv[1];
printf("The Result is: %i \n", result);
RustRemoval::Widget *Foo = new RustRemoval::Widget();
result = Foo->SomeFunction(result);
printf("The result after manipulation is: %i \n", result);
return 0;
}
Yes, that is real code. It doesn't do anything interesting and has bugs; I was exploring static library building, and the actual content didn't matter so long as I was getting results from code that wasn't in the project itself. I do have to say though, going back to C++ after being spoiled by C# for so long, there are some things that .NET just does better. Building multi-component software using DLLs is nicer: in .NET there are no header files I have to copy around; if you want to use my DLL, simply create a reference to it and everything is golden.
One last thing, Here's a random picture to see what Live Writer can do:
If you can see the image, then Live Writer gets 2 thumbs and a pinky up from me.
As always Thank You for reading,
K.
From time to time I will come across (via various channels) a question of why VSTO 3.0 requires the 3.5 framework. Usually these questions are prefaced or suffixed with a “in VSTO 2005 SE I was able to use 2.0, why are you making me use 3.5 now?”
Just to clarify, when I talk about VSTO 3.0 I mean only the add-ins that are built in VS 2008 and are labelled "Office 2007". Office 2003 add-ins in VS 2008 are essentially the same as add-ins built in VSTO 2005 SE (which can be loaded in both 2003 and 2007).
The answer is both short and long and simple and complicated. The short and simple answer is that VSTO 3.0 requires .Net 3.5. If I left it at that I imagine I would get a lot more questions so I’ll go into the long answer:
In the VS 2008 timeframe we rebuilt the loading model for VSTO. This new loading model consists of 2 major parts, the first being that the VSTO runtime now uses the System.Addin Framework. You can read about the System.Addin framework here:. I don’t know as much about this, but I’m told that it is a good thing all around.
The second major part, the one that I do know about is around “deployment” (ClickOnce). In VS 2008 we integrated the ClickOnce model of trust, installation and loading into the Runtime. The benefits have been that the model is much easier to use for the end user. The associated cost has been that in order to integrate this functionality properly, additional changes to the ClickOnce framework had to be made (some of which is only a part of the 3.5 framework). These changes not only include publishing and installation but it includes the basic security model that the VSTO runtime uses.
So even if you don’t use ClickOnce with VSTO 3.0 solutions, the runtime itself relies on functionality that exists only within the 3.5 framework. Hopefully that clarifies the reasoning behind why we require the 3.5 framework.
Until next time, thank you for reading.
Hi, I am using ROOT 6.02/10. I have two problems when I try to create a class with MakeClass from this ROOT file,
piedra.web.cern.ch/piedra/test.root
When I call MakeClass I get the following,
But even with those error messages the class is created. Unfortunately (as expected) it won’t compile when I do
To be able to compile it I have done two things:
- Remove the offending lines, which means I cannot use those variables.
- Add this line,
using namespace std;
The TestClass.h after those two naive patches is attached,
TestClass.h (81.3 KB)
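For what it's worth, the compile errors are likely because MakeClass-generated headers declare branch members with unqualified standard-library types such as `vector<float> *pt;`, which is exactly what adding `using namespace std;` papers over. A sketch of the alternative patch, qualifying the types explicitly (the member names here are illustrative, not from the attached header):

```cpp
#include <cassert>
#include <string>
#include <vector>

// What the generated header effectively contains (illustrative names):
//   vector<float> *jet_pt;   // fails: no "std::" and no using-directive
// Qualifying explicitly avoids injecting all of std into every file that
// includes the header:
std::vector<float>       *jet_pt   = nullptr;
std::vector<std::string> *jet_name = nullptr;
```

This keeps the header self-contained, which matters because `using namespace std;` in a header leaks into everything that includes it.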
Any help fixing these two issues would be very much appreciated.
Thanks,
Jonatan
I am currently looking into the performance impact of blend shapes compared to bone animations for a facial rig. We are targeting at least 40, and possibly up to 100, characters on screen, each showing facial expressions. We are targeting the Oculus platform. A character typically has between 15k and 25k vertices and 20-30k triangles. There are roughly 25 bones driving the face, and a character has around 100 bones in total.
I have a few test meshes with several setups. With bone-based animations for facial expressions (Humanoid Mecanim setup), merely having the bones present adds a slight constant performance cost compared to a mesh with the bones removed. But triggering a single blend shape has a huge performance impact (a 100% GPU-time increase per character). Triggering a bone-based animation in our setup has no considerable performance impact.
I then reduced the vertex count of the mesh that has the blend shape (to around 7k instead of 26k) by splitting the mesh in two, and this reduced the fixed overhead of the first blend shape to 50% GPU time. However, since this introduces an extra draw call per character it is not preferred - and a performance impact of 50% extra GPU time per character is still not acceptable to us.
I've tested in a synthetic environment without v-sync or oculus on target hardware (gtx1080). I am rendering 1920*1080 in forward rendering on directX 11, linear color space, msaa4x, gpu skinning, but no graphics jobs. (GJs make debugging harder and have only a slight benefit in our case)
What worries me is that I seem to be the only one getting these kinds of results, whereas the consensus on the internet seems to be that blend shapes are cheap and should only cost a slight increase in GPU memory, of which we still have plenty. I did some tests with GPU skinning disabled, but while the overhead introduced by enabling blend shapes is reduced, performance in general is much worse.
Can someone help me better understand what is going on? Here are my findings:
This leads me to conclude that, at least in the environment and on the hardware we are running, blend shapes are a very expensive solution for facial animation. All the performance tests I can find on the internet - albeit slightly aged - seem to suggest there is something wrong with my test data.
Either my test is wrong or there is something going on between gpu skinning and the newer versions of unity.
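As a rough model (my own sketch, not Unity's actual implementation), blend-shape evaluation is a per-vertex accumulation over every active shape, so the per-frame cost scales as O(vertices x active shapes) regardless of whether it runs on CPU or GPU:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Naive morph: out[v] = base[v] + sum_k weight[k] * delta[k][v].
// Halving the vertex count halves this work, which matches the observation
// that splitting the mesh reduced the blend-shape overhead.
struct V3 { float x, y, z; };

std::vector<V3> applyBlendShapes(const std::vector<V3>& base,
                                 const std::vector<std::vector<V3>>& deltas,
                                 const std::vector<float>& weights) {
    std::vector<V3> out = base;
    for (std::size_t k = 0; k < deltas.size(); ++k) {
        if (weights[k] == 0.0f) continue;      // inactive shapes cost nothing
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += weights[k] * deltas[k][v].x;
            out[v].y += weights[k] * deltas[k][v].y;
            out[v].z += weights[k] * deltas[k][v].z;
        }
    }
    return out;
}
```

With one active shape at weight 0.5 on a single-vertex mesh, the morphed x is base.x + 0.5 * delta.x; the point is simply that every additional active shape re-touches every vertex.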
Probably making LODs is the way to reduce both the vertex count and the number of draw calls.