text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Archives
Everyone is an expert, especially when it comes to Web Design
Jason Mauss has blogged about everyone having an opinion on web design. It gave me a chuckle cos it is so true. Got to disagree on one point though. He reckons people don't get so opinionated about interior design, etc. Sorry Jason, in my experience people will have an opinion about anything they are not qualified to discuss :O)
Oh Canada
Still trying to work out the legal and logistical nightmare that is getting my family and myself immigrated to Canada. There are many issues, my mum and dads status and whether they can return (after 30 years!), if I can sponsor my parents and wife from here in the UK without proof of Canadian income, school for Amy, Job for me, which would enable us to look for a house .. where to go, what to do ..
Accessibility Champions fail their own tests
"DRC, the Royal National Institute for the Blind (RNIB) and the Royal National Institute for the Deaf (RNID), the supposed standard bearers for website accessibility, continue to fail even the most basic A/AA requirements"
Spoiled by .NET, the pain of using PHP
This is not a PHP vs ASP.NET post, don't worry!
Cheap UK Internet Magazines Subscriptions
Don't you just love ASP and .NET? Because of Google Adwords recent TOS changes I have had to rapidly set up a site for my magazines campaign, thanks to these lovely web platforms I have been able to do this in double-quick time. I had scraped the merchants site and used regex to grab the content days before the official feed. Another campaigns feed is so limited I am going to use the same code and tweak to produce my other site. I know this could be done in PHP but I think the OS crowd would be hard pressed to produce a tool as productive as VS.NET :O)
New Web Browser beta from Opera
Opera 8 Beta | http://weblogs.asp.net/cgarrett/archive/2005/01 | CC-MAIN-2015-40 | refinedweb | 337 | 68.7 |
Content-type: text/html
#include <sys/socket.h>
ssize_t sendto (
int socket,
const void *message,
size_t length,
int flags,
const struct sockaddr *dest_addr,
size_t dest_len);
[POSIX] The definition of the sendto() function in POSIX.1g Draft 6.6 uses a socklen_t data type for the dest_len parameter instead of a size_t data type as specified in XNS4.0 (the previous definition).
[DIGITAL] The following definition of the sendto() function does not conform to current standards and is supported only for backward compatibility (see standards(5)):
#include <sys/socket.h>
int sendto (
int socket,
char *message_addr,
int length,
int flags,
struct sockaddr *dest_addr,
int dest_len );
Interfaces documented on this reference page conform to industry standards as follows:
sendto(): XNS4.0
The sendto function also supports POSIX.1g Draft 6.6.
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies the file descriptor for the socket. Points to the address containing the message to be sent. Specifies the size of the message in bytes. Allows the sender to control the message transmission. The flags value to send a call is formed by logically ORing the following values, defined in the sys/socket.h file: Processes out-of-band data on sockets that support out-of-band data. Sends without using routing tables. (Not recommended, for debugging purposes only.) sendto() function allows an application program to send messages through an unconnected socket by specifying a destination address.
To broadcast on a socket, issue a setsockopt() function using the SO_BROADCAST option to gain broadcast permissions.
Use the dest_addr parameter to provide the address of the target. Specify the length of the message with the length parameter.
If the sending socket has no space to hold the message to be transmitted, the sendto() function blocks unless the socket is in a nonblocking I/O mode.
Use the select() function to determine when it is possible to send more data.
Upon successful completion, the sendto() function returns the number of characters sent. Otherwise, a value of -1 is returned, and errno is set to indicate the error.
If the sendto() function fails, errno may be set to one of the following values: Search permission is denied for a component of the path prefix; or write access to the named socket is denied. Addresses in the specified address family cannot be used with this socket. The socket parameter is not valid. A connection was forcibly closed by a peer. The dest_addr parameter is not in a writable part of the user address space. [POSIX] The destination host is not reachable. A signal interrupted sendto before any data was transmitted. [POSIX] The dest_len parameter is not a valid size for the specified address family. An I/O error occurred while reading from or writing to the file system. Too many symbolic links were encountered in translating the pathname in the socket address. The message is too large to be sent all at once, as the socket requires. [POSIX] The local network connection is not operational. [POSIX] The destination network is unreachable. [POSIX] Insufficient resources are available in the system to complete the call. A component of the pathname does not name an existing file or the pathname is an empty string. The socket is connection-oriented but is not connected. A component of the path prefix of the pathname in address is not a directory.to() function.
Functions: getsockopt(2), recv(2), recvfrom(2), recvmsg(2), select(2), send(2), sendmsg(2), setsockopt(2), shutdown(2), socket(2).
Standards: standards(5). delim off | http://backdrift.org/man/tru64/man2/sendto.2.html | CC-MAIN-2017-04 | refinedweb | 593 | 50.12 |
I want to have my servo motor rotate a certain amount for each distance it gets on the ultrasonic sensor till it reaches 180 after a certain distance. So for example if I want the max to be 50 cm I want it to be at the halfway mark of 90 at 25 cm and in-between each degree for each distance till 50. I was told the best way to do this is the map function but don't really know how to start and get the map function to work
#include <Servo.h> const int TRIG_PIN = 6; const int ECHO_PIN = 7; const int SERVO_PIN = 9; const int DISTANCE_THRESHOLD = 50; Servo servo; float duration_us, distance_cm; void setup() { Serial.begin (9600); pinMode(TRIG_PIN, OUTPUT); pinMode(ECHO_PIN, INPUT); servo.attach(SERVO_PIN); servo.write(0); } void loop() { digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10); digitalWrite(TRIG_PIN, LOW); servowrite(val); delay(500); }
This is what i have so far but don't know where to go from here to get the map function setup and working | https://forum.arduino.cc/t/using-map-function-with-a-servo-motor-and-ultrasonic-sensor/922978 | CC-MAIN-2021-49 | refinedweb | 171 | 53.75 |
Introduction
When a return statement is called in a function, the execution of this function is stopped. If specified, a given value is returned to the function caller. If the expression is omitted,
undefined is returned instead.
return expression;
Functions can return:
- Primitive values (string, number, boolean, etc.)
- Object types (arrays, objects, functions, etc.)
Never return something on a new line without using parentheses. This is a JavaScript quirk and the result will be undefined. Try to always use parentheses when returning something on multiple lines.
function foo() { return 1; } function boo() { return ( 1 ); } foo(); --> undefined boo(); --> 1
Examples
The following function returns the square of its argument, x, where x is a number.
function square(x) { return x * x; }
The following function returns the product of its arguments, arg1 and arg2.
function myfunction(arg1, arg2){ var r; r = arg1 * arg2; return(r); }
When a function
returns a value, the value can be assigned to a variable using the assignment operator (
=). In the example below, the function returns the square of the argument. When the function resolves or ends, its value is the
returned value. The value is then assigned to the variable
squared2.
function square(x) { return x * x; } let squared2 = square(2); // 4
If there is no explicit return statement, meaning the function is missing the
return keyword, the function automatically returns
undefined.
In the following example, the
square function is missing the
return keyword. When the result of calling the function is assigned to a variable, the variable has a value of
undefined.
function square(x) { let y = x * x; } let squared2 = square(2); // undefined
| https://www.freecodecamp.org/news/javascript-return-statements/ | CC-MAIN-2022-05 | refinedweb | 268 | 56.76 |
in reply to Are there any major Perl issues with Mac OS X Lion?
I've not yet used Lion, but in all previous versions of OS X ... yes, you need to install the dev tools; there are no issues with the CPAN toolchain or with the vast majority of modules (you say you're coming from Windows, so while you will still run into some non-portable code, you will have fewer problems on OS X than you're used to); obviously most stuff with a platform-specific namespace won't work; in general you can assume that it's a normal Unixy system.
Some Mac::* stuff is for the obsolete pre-OS X rubbish, but it's normally obvious from the docs which is which. That said, I doubt you'll ever need to use any Mac::* stuff directly. I certainly haven't.
1. Keep it simple
2. Just remember to pull out 3 in the morning
3. A good puzzle will wake me up
Many. I like to torture myself
0. Socks just get in the way
Results (288 votes). Check out past polls. | http://www.perlmonks.org/?node_id=924341 | CC-MAIN-2016-44 | refinedweb | 187 | 81.93 |
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 19.3, “How to use 'Duck Typing' (Structural Types) in Scala.”
Problem
You’re used to “Duck Typing” (structural types) from another language like Python or Ruby, and want to use this feature in your Scala code.
Solution
Scala’s version of “Duck Typing” is known as using a structural type. As an example of this approach, the following code shows how a
callSpeak method can require that its
obj type parameter have a
speak() method:
def callSpeak[A <: { def speak(): Unit }](obj: A) { // code here ... obj.speak() }
Given that definition, an instance of any class that has a
speak() method that takes no parameters and returns nothing can be passed as a parameter to
callSpeak. For example, the following code demonstrates how to invoke
callSpeak on both a
Dog and a
Klingon:
class Dog { def speak() { println("woof") } } class Klingon { def speak() { println("Qapla!") } } object DuckTyping extends App { def callSpeak[A <: { def speak(): Unit }](obj: A) { obj.speak() } callSpeak(new Dog) callSpeak(new Klingon) }
Running this code prints the following output:
woof Qapla!
The class of the instance that’s passed in doesn’t matter at all. The only requirement for the parameter
obj is that it’s an instance of a class that has a
speak() method.
Discussion
The structural type syntax is necessary in this example because the
callSpeak method invokes a
speak method on the object that’s passed in. In a statically typed language, there must be some guarantee that the object that’s passed in will have this method, and this recipe shows the syntax for that situation.
Had the method been written as follows, it wouldn’t compile, because the compiler can’t guarantee that the type
A has a
speak method:
// won't compile def callSpeak[A](obj: A) { obj.speak() }
This is one of the great benefits of type safety in Scala.
It may help to break down the structural type syntax. First, here’s the entire method:
def callSpeak[A <: { def speak(): Unit }](obj: A) { obj.speak() }
The type parameter
A is defined as a structural type like this:
[A <: { def speak(): Unit }]
The
<: symbol in the code is used to define something called an upper bound. This is described in detail in Recipe 19.5, “Make Immutable Collections Covariant”. As shown in that recipe, an upper bound is usually defined like this:
class Stack[A <: Animal] (val elem: A)
This states that the type parameter
A must be a subtype of
Animal.
However, in this recipe, a variation of that syntax is used to state that
A must be a subtype of a type that has a
speak method. Specifically, this code can be read as, “
A must be a subtype of a type that has a
speak method. The
speak method (or function) can’t take any parameters and must not return anything.”
To demonstrate another example of the structural type signature, if you wanted to state that the
speak method must take a
String parameter and return a
Boolean, the structural type signature would look like this:
[A <: { def speak(s: String): Boolean }]
As a word of warning, this technique uses reflection, so you may not want to use it when performance is a concern.
The Scala Cookbook
This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly:
You can find the Scala Cookbook at these locations:
Add new comment | https://alvinalexander.com/scala/how-to-use-duck-typing-in-scala-structural-types | CC-MAIN-2017-39 | refinedweb | 583 | 67.59 |
Department of
International Trade and Marketing (ITM)
ITM News -
Spring 2009
News about
ITM and articles for classroom discussion
ITM's Senior Recognition Day
Tuesday, May 19, 2009, 3:00pm in the
Katie Murphy Amphitheater
Attention all graduating 2009 seniors, you are cordially invited
to join the ITM students on Senior Recognition Day with Keynote
Speaker Ms. Frances Boller, Vice President of Haddad
Organization Limited. Let us celebrate all our hard work!
All FIT Students and Faculty members are welcomed to come.
Details
Farewell
Posted:
May 7, 2009
FROM ITSA's PRESIDENT, LISSETTE
ROSARIO
Hi Everyone:
I know everyone is swamped with a massive workload, however,
International Trade and
Marketing Students are known to preserver through stressful
times.
So stay strong, the days full of sleepless nights and last minute papers
are slowly but surely
coming to an end, and so is my term as President of The
International Trade Student
Association here at FIT.
This year has been a wonderful year and I owe this all to the ITM
students, who believed in me
and elected me their President. I am eternally grateful and will
look back fondly on my year as
ITSA President.
All the fond memories, from our first trip to
Magic, in Las Vegas, our trip to the ICC in
Washington DC, and our final trip this semester to the Port of
LA in Los Angeles, will remain
with us and have served as real life experience to everything
our professors have taught us in
a classroom setting.
We would also like to personally acknowledge the people who have
made our group a major
success. A special Thanks to both past and present Advisors
Prof. Praveen Chaudhry and Prof.
Emre Ozzoz for their continued support and wisdom.
We would also like to thank Nicole Martin-Lewis, Professor
Pomeranz, Professor Yanez, Professor Musa, along with the
entire ITM staff, for their support, contributions, dedication
and willingness to always help.
To conclude I would like to thank all the people behind the
scenes responsible for elevating ITSA to exciting and new
levels.
A special thanks to Angellina Correra who served as Vice
President, Kirsten Chilstrom who served as Treasure, Afsana
Quayum, who served as Secretary, Marie Inui , who served as
Personal Relations Rep, and Donna Kleszczewski , who served as
Student Council Representative. Without your support, patience,
dedication, and passion none of this would have been possible!!!
We would also like to recognize all the student volunteers for
their time, commitment and contributions to ITSA without you
all, none of this would have been possible.
We would also like to wish
next year’s
administration all the best as they embark on an exciting
new year.
Yours sincerely,
Lissette
Rosario,
President of the International Trade Student Association
Gemini Shippers
Association Scholarships Awarded
Posted:
May 6, 2009
FOUR ITM MAJORS RECEIVE
SCHOLARSHIPS
We are pleased to announce that the
Scholarship Selection Committee have chosen Donna
Kleszczewski as the recipient of the 2009-2010
Gemini Shippers Association Scholarship. The same
committee also selected Brighid Cunningham,
Courtney Deacon, and Sheila Apolinario
to receive the Gemini Shippers ITM 2010 Practicum
Scholarship traveling to
Turkey in January
2010. I hope that you will all join us in
congratulating Donna, Brighid, Courtney and Sheila as
ITM’s new Gemini Shippers Association scholars.
Announcement
Position Available at JFK
Posted:
April 16. 2009
OCEAN IMPORT COORDINATOR/SPECIALIST
Delmar, an International Freight Forwarder, NVOCC,
Customs Broker and Logistics provider in the United
States in Canada is looking for an experienced person
for our Ocean Import Breakbulk department for JFK office
in Jamaica, NY. Candidates must be highly motivated and
organized. Requirements: Can-Do Attitude, Excellent
Written and Verbal Communication, a minimum of 3 years
of direct, import breakbulk experience.
Announcement
2009 Warnaco Scholarships
Posted:
April 12. 2009
TWO ITM MAJORS
RECEIVE SCHOLARSHIPS
ITM is proud to announce that the Warnaco Scholarship
Selection Committee has chosen Cheung "Cynthia" Shuk
Kwan and Mariko Fujii as recipients of the
2009-2010 The Warnaco Group, Inc. Scholarships. The
judges were impressed with the applicants, noting the
research and preparation conducted about the company
prior to the interview as well as deep knowledge of the
issues facing the fashion industry. We hope that you
will all join us in congratulating Cynthia and Mariko
for their well-deserved recognition.
Article
U.S. Consumers pay up to 67% on shoe
tariffs
Posted:
April 11. 2009
SENATE CONSIDERS
ANTI-TARIFF BILL.
More
Academic Advisement
Posted:
April 10. 2009
ITM INTRODUCES
'ADVISEMENT TRACKS"
The department of International Trade has introduced a
new advisement diagram to assist ITM majors in selecting
their elective courses according to career goals. The
ITM degree, a Bachelor of Science in International Trade
and Marketing, consists of 67 credits, that is, 12
liberal arts courses (37 credits) and 10 ITM courses (30
credits). The new “tracks” do not change the major in
any way; the tracks only suggest what elective courses
are more helpful for specific types of jobs. These
informal tracks were designed by observing the jobs
obtained by ITM graduates in the last ten years and they
were grouped in four categories: “international business
management”, “international trade & policy management”,
“international marketing management”, and “international
trade law & fashion law”. The first track,
“international business management”, is broad and
tailored to students who pursue a general international
business education; the other three tracks are more
focused on specific professional paths. ITM majors are
invited to discuss their career plans with their
academic advisors.
Diagram
Poll Confirms that
Posted:
April 10. 2009
SUPPORT FOR FREE
TRADE ON THE RISE
Sixty-six percent of Americans now think that, on
balance, trade with other countries is good for the U.S.
economy, according to a new CBS News/New York Times
poll. That's up eight points from a poll in March of
2008.
In the poll, twenty-three percent say that trade is bad
for the economy, and four percent say it has no effect.
The poll also finds that fewer Americans now think that
trade restrictions are necessary to protect domestic
industries, though 60 percent still think they are.
Sixty-eight percent said the same thing in the poll last
year. Meanwhile, twenty-eight percent said that free
trade must be allowed even if domestic industries are
hurt by foreign competition. That’s up from 24 percent
last year. Read complete
Show Your talents
Posted:
April 9. 2009
ITSA's 2009 TALENT
SHOW
ITSA is hosting its 2009 fundraiser
talent show on Friday, April 17th, 2009 in the
FIT Amphitheater at 6:30p.m. It is a MUST-SEE
show, consisting of dancers, singers, musicians,
comedians and a performance by our own President of ITSA!
There will also be a raffle. Tickets are $10 and the
show is open to FIT students, faculty and also the
public! Spread the word, bring your friends and come
support ITSA by attending the show. Tickets are
available through Nicole in the ITM office (B429). If
you have any questions or would like to email your
request for tickets, please email
Afsana_Quayum@fitnyc.edu or
Angellina_Correa@fitnyc.edu
ITSA members will receive 10 points for every talent act
and donation they contribute, 5 points for every 10
tickets sold (1 point for every 2 tickets), 15 points
for purchasing a ticket and attending, and 15 points for
volunteering. Join us on Friday, April 17!
NAFTA impasse
Posted:
April 9. 2009
US TRUCK DECISION
HURTS 26,000 JOBS
As the United States is suffering from
its worst economic crisis in recent times, a decision by
U.S. lawmakers
to stop a pilot program allowing Mexican trucks
restricted entry into the United States is putting
at risk 26,000 jobs, according to an open letter to
President Barack Obama from 141 companies and
organizations. Since the move by the U.S. lawmakers
violated the North American Free Trade Agreement,
Mexico slapped tariffs on 90 U.S. goods. "Over $1.5
billion in U.S. manufactured products and $900 million
in U.S. agriculture products are impacted by the
retaliatory tariffs," the letter says. "The retaliation
puts over 12,000 agricultural and 14,000 manufacturing
jobs at risk."
Article
Job Opportunities Posted:
April 2. 2009
CUSTOMS AND BORDER PROTECTION
ITM students interested in exploring careers at
U.S. Customs and Border Patrol
(CBP) are invited to attend a briefing by Mitchel
Landau, Chief for Trade Operations, and outstanding
ITM graduate, Yaniri DeLeon, Import Specialist, both of
based in Newark. This talk will take place on Wednesday,
April 15, from 2pm to 3pm in room
D211
Mr. Landau and Yaniri will talk about the agency,
types of jobs
available, how to apply, and what to expect from a job at
CBP.
American fashion revival
Posted:
March 29. 2009
MICHELLE OBAMA IN EUROPE
Ever.
Article
U.S. Association of Textiles and Apparel
Posted:
March 24. 2009
FOX TAKES HELM OF USA-ITA
Janet Fox, the newly elected chairwoman of the
U.S.
Association of Importers of Textiles & Apparel, aims
to keep the organization relevant in a quota-free world
and a global economic downturn. Fox, vice
president and director of strategic sourcing at J.C.
Penney Private Brands Inc., is taking the helm of the
importing trade and lobbying association at a critical
time for retailers and importers. USA-ITA has 200
members, including J.C. Penney Co. Inc., Kohl’s Corp.,
Target Corp., Pacific Sunwear of California Inc.,
AnnTaylor Stores Corp., Quiksilver Inc. and Macy’s Inc.
“In
the old days, the big reason everyone joined trade
associations was because of the fear of quotas and what
quotas could do to their business,” said Fox. “With the
elimination of quotas, people talk about the relevancy
of trade associations.” Fox began her career at
Penney’s as a merchandise trainee in a store and has
worked her way up to head of the company’s strategic
sourcing division. She has navigated the complex waters
of global sourcing for the company for 17 years and is
responsible for the development of the sourcing plans
and managing the assets to support global merchandise
procurement. “It really is one of those things
that once you get in it, it kind of becomes your passion
— you either love it or you don’t,” said Fox. “And it’s
something I love because it is so intertwined with
history and politics. I used to say I would wake up in
the morning and see what the Clinton administration had
done to mess up my shipments. It is all government
related.”
Global quotas were eliminated in 2005 as part of the
World Trade Organization’s Uruguay Round, but the U.S.
imposed quotas on 34 categories of Chinese apparel and
textile imports for three years because of big import
surges and an outcry from the domestic textile industry.
Those quotas expired on Jan. 1.
With bankruptcies, store closures and layoffs mounting
every day as the global recession deepens, Fox said her
biggest challenge will be educating Capitol Hill
lawmakers and the Obama administration about the
importance of maintaining retail jobs and warning them
about the devastating impact of punitive trade measures.
“I think one of the things that gets lost, especially in
Washington, is the impact of the retail industry on the
U.S. economy,” said Fox. “The retail industry as a whole
lost 589,000 jobs last year, and when you look at the
landscape it’s littered with bodies — Mervyns, Circuit
City, Gottschalks and Goody’s are all out of business.”
Fox plans to highlight the job losses as importers’
outline their opposition to the government taking any
action to impose punitive duties on imports through
antidumping or countervailing duty cases or legislation.
“Anything that brings uncertainty or potentially could
raise our prices is something that is damaging to our
industry,” said Fox. “U.S. consumers have enjoyed
deflationary pricing on apparel for the last 10 years
and this is not the time to raise prices.”
One of the association’s top priorities this year will
be building a case for legislation to eliminate tariffs
on apparel imports, which can run as high as 33 percent,
as in the case of nylon blouses, Fox said. “One of
the main things we would like to talk about is taking
away tariffs on apparel and textiles,” she said. “That’s
a tax that costs U.S. consumers, the average family,
$800 a year or so. If [Congress] is looking for a tax
rebate, there it is.”
The footwear industry has built a coalition around
legislation that would eliminate $800 million in duties
across several categories. The legislation was recently
reintroduced in Congress, but has not yet moved. Fox
said many retailers, including Penney’s, are part of
that coalition and monitoring the legislation as they
craft a case for eliminating duties on apparel. On
the defensive side, USA-ITA and other importing groups
also plan to continue pressing their case against a
Vietnam and China apparel-import monitoring program
included in the recently enacted $410 billion spending
bill. The Obama administration has the discretion to
decide whether to implement the program.
source:
Former Dallas Mayor Ron Kirk sworn in
Posted:
March 20. 2009
NEW U.S.
TRADE REPRESENTATIVE
Ron Kirk formally took the oath of office Friday from
Vice President Joe Biden, cementing the former Dallas
mayor's spot as the Obama Cabinet's trade ambassador.
The ceremony was held in an auditorium at the Eisenhower
Executive Office Building next to the White House.
Kirk's wife, Matrice Ellis-Kirk, held the Bible as their
daughters, Alex and Catherine, stood nearby. Among the
dozens of friends on hand: state Sens. Royce West of
Dallas and Rodney Ellis of Houston; U.S. Rep. Eddie
Bernice Johnson of Dallas; Dallas civic leaders David
Biegler and Rick Douglas; former City Council member
Craig Holcomb; political strategist Kathy Nealy; and
attorney DeMetris Sampson.
Kirk's 88-year-old mother, Willie Mae Kirk, a longtime
Austin civil-rights activist, was in the front row. Kirk
choked up slightly as he acknowledged her, and the
struggles that paved the way for President Barack
Obama's election and his own national service. "I'm so
honored that you and all those of your generation could
live to see the day our country would have the kind of
leadership that it has, and that I get a chance to play
a role in it," Kirk said. Kirk's father is deceased.
The Senate confirmed Kirk on Wednesday on a 92-5 vote,
opposing were Sens. Robert Byrd (D-WV), Jim Bunning
(R-KY), Kit Bond (R-MO), Johnny Isakson (R-GA), and
Bernie Sanders (I-VT). He was sworn into office that
afternoon privately at the trade office. His first full
day on the job, Thursday, he met with the European
Union's trade commissioner, Catherine Ashton, discussing
an impasse over beef and other issues. A flare-up with
Mexico is expected to absorb much of his attention at
the outset, though neither Kirk nor Biden mentioned it
Friday. Mexico, the third-biggest U.S. trading partner,
slapped tariffs Wednesday on $2.4 billion worth of U.S.
imports in retaliation for Congress' decision to block
its trucks from U.S. roads, in violation of the North
American Free Trade Agreement. Biden outlined Kirk's
assignment as he described the administration's trade
philosophy. "He's got to be an enforcer. He's got to be
a collaborator. He's got to be a negotiator," the vice
president said.
Details
USTR
source:
Promote Your Products
Worldwide for Just $399
Posted:
March 19. 2009
Advertise in the USA Product Showcase section of
Commercial News USA, the official export promotion
magazine of the U.S. Department of Commerce. This
catalog-style magazine reaches an estimated 400,000
readers in 176 countries worldwide. For the special low
price of $399, you will get a 35 word write-up, a color
photo or logo, and worldwide visibility. The deadline
for the next issue is May 8th. For more information,
contact your U.S. Commercial Service trade specialist,
call 1-800-581-8533, x 822 or sign-up online at.. If you have any questions about these
initiatives, please contact your local U.S. Commercial
Service trade specialist. To find the trade specialist
nearest you please visit.
ITSA & Prof.
Shireen Musa Promotes the
Posted:
March 19. 2009
Department of
International Trade & Marketing
On
March 12, 2009, ITSA students assisted Professor Shireen Musa in
promoting the International Trade and Marketing program to lower
division students. The event took place around noon in the A
building cafeteria.
Photos with caption
ITM Visit to Kingsborough
Community College
Posted:
March 19. 2009
March 10, 2009
A
group of ITM students including Michelle Weiss, Alba Tirado,
Amanda Joesph, Afsana Quayum, Alexis Feldman and Cynthia
Shukkwan Cheung accompanied Professor Shireen Musa to
Kingsborough Community College in Brooklyn to introduce
Associate degree students to the major in hopes of appealing to
prospective graduates. Professor Musa presented an informative
overview of the major then the accompanying students were able
to share stories about their experiences. The diverse and
dynamic group of ITM students who participated were able to
share personal stories about the different classes, study abroad
programs, research projects, field trips and internships. The
KCC students were able to ask questions directly to the current
ITM students to gain a better understanding for what it is like
to be in the ITM program. The audience responded well to the
presentation and many seemed eager to learn more about the
curriculum and apply!
Photos
with caption
ITSA's Magical
Experience in Vegas Posted:
March 17. 2009
February 17 - 20, 2009
Captions and photos
Our First seminar at Magic was held at the Hilton Conference
room, regarding Sourcing Options on The Global Landscape. Our
guest Speaker
Belinda Edmonds enlightened us with her knowledge on
sourcing opportunities in Africa. We learned that all the
countries in South Africa (e.g. Nambia, Botswana, Zimbabwe,
Mozambique and Swaziland) offer duty free and limitless raw
materials, to sourcing executives willing to have their products
produced in a significant number of Sub Saharan African
countries. The only limitations placed on exports from Africa
are Quotas. Each country in Africa wishing to benefit from Duty
free is required to be Democratic and offer Political stability
as well as free elections. Africa also offers sourcing
executives access to local, regional and International raw
materials, producing everything from tainted free basics to
luxury exclusives at a very competitive price. As a result of a
growing interest amongst consumers for Tainted Free products,
all factories in Africa are committed to improving working and
social conditions, (e.g. creating jobs, leading leads to
education) offering more transparency in Africa than in various
Asian countries.
Afterwards, we had the privilege of hearing Mr. Tom Travis, a
leading international trade lawyer in the United States,
addressing ways for sourcing specialist to save on import cost.
The method is known as The First Sale Rule. Under the rule,
importers are allowed to use the first sale, rather than a later
sale, in a multi-sale transaction as the rate for duty purposes.
First sale is applicable to a variety of products from Pecans to
Television sets.
Free to ITM majors
Posted:
March 15. 2009
FRANCISCO COSTA IN CONVERSATION WITH
VALERIE STEELE
Monday, March 30 - 7 pm -
Katie Murphy Amphitheatre
Born in Brazil and educated at FIT,
Francisco Costa is creative director of the women's
Calvin Klein Collection. He was named CFDA's Womenswear
Designer of the Year in 2006 and again in 2008. Costa is the second
youngest of five children. He grew up in.
RSVP required at
MuseumInfo@FITnyc.edu
ITM Students:
SUMMER 2009 WHITE HOUSE
INTERNSHIP
Take a look at the instructions for how to apply for a White
House internship, download the application, and learn more about
the White House departments you could work in. PLEASE READ THE
ANNOUNCEMENT FULLY PRIOR TO APPLYING AT THE FOLLOWING LINK: REQUIREMENTS: US Citizenship; Eighteen years of age on or before the first day of the
internship; Enrolled in an undergraduate or graduate program at a college,
community college, or university (2-4 year institution) or must
have graduated in the past two years from undergraduate or
graduate school..
Download application
Chinese Government Scholarships for
SUNY students
Posted:
March 9. 2009
ONE-YEAR SCHOLARSHIP IN CHINA
We are pleased to announce the Chinese
Government Scholarship Program to SUNY students. In May
2008, a devastating earthquake struck the area around
Chengdu in Sichuan Province, China, causing enormous
loss of life, injuries and damage. SUNY made an
extraordinary gesture of welcoming 150 university
undergraduate students from Sichuan Province, to study
as full-time students at 22 campuses of the University
with full scholarship for a year. Now, the Ministry of
Education of China has offered 10 full scholarships per
year between 2009 and 2012 for selected SUNY
undergraduate students who wish to study the Chinese
language or related disciplines in China. The
scholarship is for undergraduate students who have
completed 2 years of full-time studies at a SUNY campus.
The duration of the scholarship is for a year but the
option to study for one semester is possible. The host
institutions are Sichuan University and Nanjing
University. The scholarship will include tuition fees
and accommodations, living allowance and medical
insurance. International air travel to China is not
included. Deadline is March 15.
Details and application
Researching an Emerging Market
Posted:
March 9. 2009
LILLY PULITZER IN
BRAZIL
Wednesday, March 18, 1-2, A building, 8th
floor alcove
FIT's Master's Program in
Global Fashion Management,
School of Graduate Studies, invites you to join Claudia
Tredinick and Saul Lopez, recent graduates of the
program, when they present their capstone project
strategizing
Lilly Pulitzer’s brand entry into Brazil. After
describing the proposal, Claudia and Saul deconstruct
the presentation by describing where, within FIT’s
library, they located the sources for their research.
This fast-paced and entertaining treatment of the
research process won rave reviews during “Love Your
Library Week” and provides a great way for students to
better understand how to utilize FIT’s research
resources in the context of a graduate level assignment
with real-world application. Claudia and Saul interned
at Lilly Pulitzer as they worked on the project which,
at its completion, was reviewed by executive management.
Please join us!
Source: WWD, March 3, 2009
Posted:
March 5. 2009
Obama Sends Trade Plan to Congress.
Thursday, March 5
ITSA Elections
of 2009-2010
Officers
Leadership Positions
Posted:
March 3. 2009
ITSA ELECTIONS:
POSITIONS AVAILABLE
A message from ITSA's President,
Lissette Rosario: ITSA Elections are March 5,
2009, in Room A644, at 1pm. Attached are the three
essays written by the candidates running for the
following positions, President, Vice President, and
Treasurer including their backgrounds, and plans for
the club. Anyone still interested in running for office
the following positions are STILL AVAILABLE, Secretary,
Personal Relations and Student Council Representative.
No essay is necessary to fill these positions. However,
if you are interested in running for the position of
Secretary, Personal Relations and Student Council
Representative please email,
(LissetteRosario212@gmail.com,
Angellina_Correa@fitnyc.edu
KirstenChilstrom@gmail.com,
Marie_Inui@fitnyc.edu) expressing your interest.
Major Issue to be discussed at our next meeting:
Senior Recognition Day --Getting the ball rolling,
which includes obtaining alist of ITM majors Graduating
in May, sending out E-VITES, Buying a Cake, Decorating
the Amphitheater. Obtaining as many pictures possible to
put together a mini video about International Trade and
what it means to you (the students). Volunteers: Brighid
Cunningham, Dani Dohrer, Bin Shen, Lina Jaramillo, Donna
Kleszczewski, Jinju Kim, Afsana Quayum, Blessing Odunayo,
Adeoye, Gabrielle Seo, Melissa Ortiz, Dyne Kim, Cynthia
Cheung.
Read
complete announcement (pdf)
Global Sourcing
Posted:
March 2. 2009
LIZ
CLAIBORNE SHIFTS
SOURCING STRATEGY
Liz Claiborne, one
of the largest US clothing
companies, is to hand its
global sourcing operation
over to
Li & Fung, the
Hong Kong-based trading
company, in a deal that
marks the evolutionary
changes underway in the
global clothing supply
chain. The company, whose
brands include Mexx, Juicy
Couture and Kate Spade, was
one of the first US clothing
companies to shift
manufacturing orders abroad
in the late 1970s and 1980s,
and to set up its own
“direct sourcing” offices to
oversee orders. It is
now shifting to use of an
agent, at a time when it is
facing intense pressure in
the US from steep declines
in consumer confidence that
have hit discretionary
spending on women’s clothing
particularly hard.
Li & Fung already
handles supply chain
management for brands that
include American Eagle, the
youth fashion retailer,
Tommy Hilfiger, and
Kohl’s, the low cost
department store, as well as
KarstadtQuelle, the German
retailer.
Bill McComb, chief
executive of Liz Claiborne,
said the move was “a big
shift”, given Liz
Claiborne’s previous
reliance upon its a direct
sourcing operation that
employed around 475 people
in five offices in China,
Indonesia and India.
Article
International Branding
Posted:
Feb. 26. 2009
U.S. BRANDS' MANY ROUTES TO
EUROPE
The all-American brand Tommy Hilfiger produces two
distinct collections - one for the United States, one
for the rest of the world. "It could save us a lot of
expense producing just one collection - and the way the
economy is now, that has appeal," said Fred Gehring, the
company's chief executive. "But it would cost us a lot
in terms of sales. There are too many differences
between Europe and America. We would end up with a
compromise that isn't good for either territory."
The brand is a European success story. Revenues from the
company, headquartered in Amsterdam, outpace those of
the U.S. business. For the year that ended March 31,
European sales were €707 million, or about $940 million;
U.S. sales, €454 million. "And it's not just
product. There are differences mentally, culturally and
business-wise," Gehring added. "In the U.S., everything
is concentrated and consolidated. In Europe, you need 25
showrooms and your order book will be filled with
thousands of small customers. It takes six weeks to sell
the fall season in Europe, instead of five days in
America."
Article.
ANNOUNCEMENT
Posted:
Feb. 26. 2009
We are pleased to circulate the following
announcement received from our Advisory Board member, Dr.
Anastasia Xenias, Senior International Trade Specialist,
US Department of Commerce: The National Association of
Manufacturers (NAM) and The U.S. Commercial Service
present a Webinar on Export Financing on February 3rd entitled “Financing
your Exports: Current Outlook on the Credit Crunch” .
See details.
President Obama and Trade
Posted:
Feb. 25. 2009
Lawmakers Call for Wide-Ranging Reforms to Trade Policies
At least
35 members of Congress have apparently signed a letter to
President Obama urging him to take a number of specific steps to
reform U.S. trade policy. The letter indicates that the
lawmakers do not want to shut the U.S. off from the rest of the
world, asserting that they want to “ensure that Americans enjoy
the benefits of expanded trade.” A the same time, the letter
adds, there is now a “unique opportunity … to remedy the
negative consequences on the American economy, environment, and
public health and safety that have resulted from aspects of the
current trade and globalization model.” A total of 71 “new
fair-trade reformers” were elected to the House and Senate in
2006 and 2008, the letter states, and this “unprecedented
election focus on trade and globalization reform reflects the
public opinion that America’s trade and globalization model
needs a major overhaul.”
Article
Distinguished ITM Alumnus
Posted:
Feb. 24. 2009
MARGARITA YAKOVLEVA '04 SPEAKS
AT FIT
Margarita Yakovleva came back to FIT to
speak to Professor Yanez Global Marketing class on
February 27. She graduated from FIT with a BS in
International Trade and Marketing in 2004 and begun her
marketing career as a Project Analyst at
WebSurvey Research, a subsidiary of WPP, one of the
world's largest communications services groups. She has
developed and managed online data collection projects
for healthcare industry giants
Pfizer
and
Merck. She also worked at
The
Nielsen Company, the world's leading provider of
marketing information where she managed global
marketing research projects for
Yahoo,
Coke,
Samsung,
Unilever,
Craft Foods, etc. In 2008, she received a
MS in Integrated Marketing from New York University.
She also worked as an Online Marketing
Manager at
Spire Vision, an online marketing boutique and media
company where she was responsible for online product
development, implementation, optimization, and
marketing.
See album
US-Canada Trade
Posted:
Feb. 21. 2009
'BUY AMERICAN,' BUY CANADIAN TOO
As President Barack Obama landed in Canada today for his first official trip outside the country, he addressed concerns about a protectionist "Buy American" clause in the $787-billion stimulus act and how it could affect the two nations' critical trading relationship.
Signed into law earlier this week, the American Recovery and Reinvestment Act contains a watered-down version of a clause restricting federally-funded stimulus projects to using American-made steel, iron and manufactured goods. The U.S.-Canadian trading relationship is the largest in the world, with total trade in 2008 topping $596 billion. Article
"ITM 3.0"
Posted:
Feb. 20. 2009
CHRISTINE POMERANZ
RECEIVES TENURE
It is with great pleasure we
announce that
FIT’s Board of Trustees, voted on February 19, to
unanimously grant
tenure to Professor Christine Pomeranz. With an
impressive background in international finance, she was
a Senior Vice President at Australia & New Zealand
Banking Group, and also worked at The Honk Kong &
Shanghai Banking Corporation (HSBC) and Citibank, she
joined FIT in 2003 to teach the newly approved course
International Finance. She earned a MBA from NYU and a
Bachelors degree in Business Administration from
Assumption College. Since day one, Christine has shown a
remarkable commitment to support students and improve
academic services. She volunteered to become the faculty
liaison to our
Advisory Board, raise funds for scholarships, create
the
Talking Trade @ FIT guest speaker series, and
organize field trips to industry sites. Hundreds of ITM
majors have benefited from her initiatives, either by
getting tuition and practicum scholarships, or having
unique opportunities to visit companies and talk to
trade executives. In 2007, her colleagues unanimously
elected her as department chair and she is now taking
the program to new highs, "ITM 3.0": an established
department, with larger enrollment, more opportunities
to students, more courses on line, and stronger
connections to industry. Let’s applaud Christine for her
well deserved tenure and thank her for everything she is
doing for our department. Congratulations!
Armani in New York City
Posted:
Feb. 19. 2009
FOR ARMANI,
NEVER ENOUGH TROPPO
Too much. When encountering Mr. Armani, who, at 74, is
one of the wealthiest and most successful fashion
designers in the world, it is always advisable to take
into account that he is also a man who is very
particular about things like image, control, display,
the amount of sauce that is poured over his pasta. One
might be inclined to describe him as difficult, as when,
while being photographed in his new store on Fifth
Avenue on Monday, he complained that the bright lights
were too much, that he does not like to be photographed
from certain angles and never, never against a white
wall. But, as Mr. Armani can attest, one does not get
ahead in fashion by being easy. Last July, before the
seriousness of the global recession was being fully
realized, his company reported annual sales of $2.1
billion and profits of nearly $300 million. He had
already taken over the retail space, a former Hugo Boss
store at the corner of 56th Street, and, despite the
gray clouds on the horizon, went ahead anyway with plans
for a 43,000-square-foot megastore, encompassing three
collections, a restaurant and a chocolate shop. So when
Mr. Armani came to town this week for the opening, it
was an Italian designer who became the biggest news of
New York Fashion Week.
ARTICLE
Are Trade Wars Inevitable?
Posted:
Feb. 18. 2009
PROTECTIONISM
ANEW."
Which is all very well -- except
that there are many ways to pursue protectionist
policies, and rest assured that every single one
of them is being tried by someone, somewhere,
right now. New tariffs are already in force, for
example in Russia, where especially high ones
have destroyed the previously thriving used-car
import business (and thus inspired used-car
salesmen to stage unusually violent
protests). Rumors of more tariffs pending --
in Brazil, in the Philippines -- are haunting
the steel industry
trade press, too. Still, these are minor
infractions. The real story, over the next
several years, will be the spread of more
carefully camouflaged protectionism: measures,
some legal, some not, designed to help one
nation's workers or companies at the expense of
those next door.
Article
ITM TUTORS
AVAILABLE NOW
FIT's Academic Skills Tutoring Center
offers free tutoring to students taking ITM courses. A
tutor is assigned to work with you for one hour every
week; you must attend all scheduled sessions that you
agreed to attend when you requested the service. If you
miss a session, the tutor still has to be paid and that
is waste of the Center's resources. Tutors are ITM
seniors who have completed ITM courses with an A grade.
Most requests for tutorial assistance are for IN312,
IN313, and IN322 but there are tutors for all ITM
courses. For more effective support, apply early in the
semester, don't wait until the week before a quiz or
midterm to request a tutor because it might take two
week to set up the first session. The Center is located
in room A-608b.
New Trade War
Posted:
Feb. 15, 2009
INDIA BANS CHINESE
TOYS
India is being accused of raising trade
tensions between the world's two largest emerging
economies by imposing a temporary ban on imports of
Chinese-made toys. The six-month ban was announced Jan.
23 by India,."
TIME
BusinessWeek
ChinaView
Times/India
Testing her Strong Suit
Posted:
Feb. 13, 2009
ANNA SUI:
FASHION DESIGNERS V/S RECESSION
Born in Detroit and educated at Parsons School of
Design,
Anna
Sui made all the journeyman stations of the cross
before achieving overnight success when she was close to
40. Her company is privately held and remains
profitable, she says. (estimated annual sales: chill
that recession has cast on businesses. .
Article
ITSA's Next Meeting
Posted:
Feb. 11, 2009
THURSDAY, FEB. 12, 1PM
The next meeting of the
International Trade Student Association (ITSA) will be held
on Thursday, February 12 at 1:00p.m. in Room A644 (6th
floor between A and B buildings, the 1st door closer to the B
building). At our next meeting we would like to discuss your
opinions on ITSA, what you expect from us and what you think we
can do to make it better. Bring your comments and ideas! Also,
the majority of this meeting will be dedicated to the ITM
commencement ceremony. If you have already volunteered or are
interested in doing so, please attend this meeting, as necessary
information will be discussed. The list of volunteers who have
already signed up is: Alba Tirado, Marla McCormick, Avita Kumari,
Jorge Morales, Brighid Dohrer, Bin Shen, Lina Jaramillo, Anna
Cherry, Cynthia Cheung, Donna Kleszczewski, JinJu Kim, Amanda
Joseph, Afsana Quayum, and Analeesa Stieha.
Learn from a Pro
Posted:
Feb. 10, 2009
MARKETING YOURSELF IN A RECESSION
Please mark your calendars
for
Thursday, March 5th at 5pm in FIT's Katie
Murphy Amphitheatre.
Liz
Paley, the Senior Vice President for Advertising &
Wholesale Marketing for
Polo Ralph Lauren,
will share
tips for success, offer advice about taking advantage of
opportunities while you are in school at FIT and about
her perspective as an employer. In these hard economic
times, special strategies are needed for being noted
when you are about to start looking for a job. Prepare
in advance any questions you might have about best
positioning yourself to get the right job, women in the
workplace, being well prepared for an interview, and
anything you would like to know Polo Ralph Lauren
as a company. This presentation is part of the First
Year Experience (FYE) and sponsored by the Office of
Student Affairs.
What the U.S. could gain and lose
Posted:
Feb. 4, 2009
WHAT A TRADE WAR WITH CHINA WOULD
LOOK LIKE
The Obama administration will have its first opportunity
to name China as a "currency
manipulator" under U.S. law when it submits a
required report to Congress April 15. Secretary Geithner
has stopped short of saying the new administration will
do so.
But if it does, what next? Designating China as a
"currency manipulator" would, under current U.S. law,
trigger a requirement for negotiations. Nothing more.
What if those negotiations failed?
During his presidential campaign, Obama endorsed a bill
that would change the current law to define currency
manipulation by a trading partner as a subsidy subject
to the imposition of countervailing duties.
ARTICLE
INDUSTRY LEADERS TALK TO ITM MAJORS
Once again, the
ITM Advisory Board has planed an outstanding
speakers program for ITM majors to learn about trends and
career opportunities. This guest-speakers program runs in the fall and
spring semesters. The Spring 2009 "Talking Trade @ FIT"
is organized by members of the ITM Advisory Board, who
generously contribute their time, efforts, and resources to
educate the next generation of international trade
professionals. ITM majors are highly encouraged to attend
these events, talk to alumni and professionals, and improve
their networking skills.
See this semester's program.
Spring 2009 ITM Field Trip
Posted:
Jan.. 27, 2009
TRADE
CARD, Inc
ITM majors are invited to ITM's Spring 2009
Field Trip to
Trade
Card, Inc. on Tuesday, March 3. Tour starts at
12:15pm and ends at 1:45pm. TradeCard, Inc. is the leading
provider of on-demand supply chain management solutions. The
TradeCard Platform synchronizes financial transactions with
physical events in the global supply chain to help customers
automate trade transactions from purchase order to payment
and chargebacks. Buyers, suppliers and their trading
partners manage transactions through a web-based platform
with online financial services integrated into the workflow.
This turnkey transaction management enables customers to
improve margins and enhance growth, with
extra-organizational supply chain visibility. There is limited
space for this tour; please sign up by 3:00 p.m. on Friday,
February 6,
through Ms. Nicole Martin-Lewis, ITM Secretary, Room B429.
Spring 2009
Posted: Jan. 26, 2009
SCHOLARSHIPS FOR ITM MAJORS
Click on link for more details
-
2010 Gemini Shippers Association's Scholarships available to ITM
Majors. Deadline: Thursday,
March 5, 2009
-
Gemini Shippers
Association's ITM 2010 Practicum Scholarship.
Deadline: Thursday, March 5, 2009
-
2010 The Warnaco Inc. Group Scholarships. Deadline:
Tuesday, March 10, 2009
Obama's USTR: Ron Kirk
Posted:
Jan.. 27, 2009
TRADE
NOMINEE TO GIVE UP LUCRATIVE POSTS
Former Dallas Mayor
Ron Kirk will forfeit a lucrative legal partnership
and positions on several corporate boards if he is
confirmed as the next
U.S. Trade Representative (USTR), according to
financial disclosure forms released Tuesday. Kirk made
more than $1 million last year from his law firm
partnership and corporate board positions, the
disclosure forms show. He has been a partner at
Houston-based law firm Vinson & Elkins since February
2005. Vinson & Elkins will pay him a bonus of $150,000
and other benefits, when he resigns, the disclosure form
said.
President Barack Obama last month named Kirk to be the
nation's next lead trade negotiator. As
U.S. trade representative, he also would uphold U.S.
laws barring unfair trade practices by overseas
governments and companies. Kirk also reported assets
valued between $1.9 million and $4.6 million, including
blocks of common stock and stock options worth $500,000
to $1 million from Dean Foods Inc., where he serves as a
director. He joined Dallas-based Dean Foods' board in
September 2003.
Ron Kirk also owns shares and restricted stock worth
$200,000 to $500,000 in Phoenix-based PetSmart Inc.,
where he has served as a director since June 2003, and
between $50,000 and $100,000 in Brinker International
Inc., where he has been a director since 1996.
Dallas-based Brinker owns Chili's Grill & Bar,
Maggiano's Little Italy and other restaurant chains.
Kirk's appointment is subject to confirmation by the
U.S. Senate. Carol Guthrie, a spokeswoman for the Senate
Finance Committee, on Tuesday said the panel will
schedule a hearing on Kirk's nomination soon. Kirk
was elected the first black mayor of Dallas in 1995 and
re-elected easily in 1999. He ran for the U.S. Senate in
2001, but lost to Republican John Cornyn.
Source: Forbes.com
TUTORS
NEEDED IMMEDIATELY
FIT's Academic Skills Tutoring Center
is searching for ITM juniors and seniors who have
completed ITM courses with good grades to tutor new
students. Tutors are needed for IN312, IN313, IN322, and
other elective courses. Tutors meet with students on a one-to-one
basis and develop studying strategies to complete a
semester successfully. For more information, see
Professor Debby Levine in room A608b , or call (212)
217-4080.
ITSA
International Trade Student Association (ITSA)
Posted:
Jan.. 27, 2009
Lissette Rosario, ITSA President.
Fall 07
Summer 07
Spring 07
Fall 06
Spring 06
Fall 05 | http://sites.fitnyc.edu/depts/itm/ITM_News/Default_Sp09.htm | CC-MAIN-2015-18 | refinedweb | 6,901 | 53.41 |
Account Details using lightning-record-form. Check lightning-record-from in LWC to know how to use Lightning Data Service in Lightning Web Components.
Steps to add LWC in Flow
- To make LWC available in Flow is to add lightning__FlowScreen target in the metafile of the component.
- Add targetConfig with property for lightning__FlowScreen target to pass a parameter from Flow to LWC (Lightning Web Component).
- Annotate Lightning Web Component properties with @api which will get the data from Flow.
This is how our metafile will look:
showAccount.js-meta.xml
<?xml version="1.0" encoding="UTF-8"?> <LightningComponentBundle xmlns=""> <apiVersion>50.0</apiVersion> <isExposed>true</isExposed> <targets> <target>lightning__AppPage</target> <target>lightning__FlowScreen</target> </targets> <targetConfigs> <targetConfig targets="lightning__FlowScreen"> <property name="strRecordId" type="String" label="Current Account Id" description="Id of the current record"/> </targetConfig> </targetConfigs> </LightningComponentBundle>
Our Component will have a simple lightning-record-form to display the Account Record.
showAccount.html
<template> <lightning-card <div class="slds-p-horizontal_small"> <lightning-record-form object-api-name="Account" record-id={strRecordId} columns="2" mode="edit" fields={arrayFields}></lightning-record-form> </div> </lightning-card> </template>
Finally, the JavaScript file to pass the list of fields to display in component along with @api annotated strRecordId property to get the record Id from Flow.
showAccount.js
import { LightningElement, api } from 'lwc'; export default class ShowAccount extends LightningElement { @api strRecordId; arrayFields = ['Name', 'AccountNumber', 'Phone', 'Type', 'Website']; }
We are done with our Lightning Web Component.
Add LWC in Flow
Create a Screen type Flow and add a Screen Component. Check this post to know the basics of How to use Flow Builder and create Flow in Salesforce.
For this implementation, we will create a Quick Action on Account Object to display Flow. Check this post to know How to use Flow as a Quick Action. Create recordId variable which will have the Id of the current Account, that will be passed to our Lightning Web Component.
Once the Flow is created, search for the LWC in Search Components on Screen Component and drag it on the Screen canvas. Select recordId for the Current Record Id property that we added for the Component from metafile. This is how it should look:
That is all, add the Quick Action on the Record page of Account and we are ready to use our Flow which uses Lightning Web Component to display Account details using lightning-record-form.
This is how our output will look after clicking on a Quick Action button from Account details page:
That is all from this post. This is how we can add LWC in Flow Screen and pass parameters to Lightning Web Component in Flow.
Also Read:
If you don’t want to miss new implementations, please Subscribe here.
Useful Resources: | https://niksdeveloper.com/salesforce/how-to-use-lwc-in-flow/ | CC-MAIN-2022-27 | refinedweb | 457 | 51.78 |
I am fairly new to python, and noticed these posts:
Python __init__ and self what do they do? and
Python Classes without using def __init__(self)
After playing around with it, however, I noticed that these two classes give apparently equivalent results-
class A(object):
def __init__(self):
self.x = 'Hello'
def method_a(self, foo):
print self.x + ' ' + foo
class B(object):
x = 'Hello'
def method_b(self,foo):
print self.x + ' ' + foo
__init__
__init__
x
B
Yeah, check this out:
class A(object): def __init__(self): self.lst = [] class B(object): lst = []
and now try:
>>> x = B() >>> y = B() >>> x.lst.append(1) >>> y.lst.append(2) >>> x.lst [1, 2] >>> x.lst is y.lst True
and this:
>>> x = A() >>> y = A() >>> x.lst.append(1) >>> y.lst.append(2) >>> x.lst [1] >>> x.lst is y.lst False
Does this mean that x in class B is established before instantiation?
Yes, it's a class attribute (it is shared between instances). While in class A it's an instance attribute. It just happens that strings are immutable, thus there is no real difference in your scenario (except that class B uses less memory, because it defines only one string for all instances). But there is a huge one in my example. | https://codedump.io/share/PX1mLTP3RsoX/1/usefulness-of-def-initself | CC-MAIN-2018-09 | refinedweb | 213 | 77.03 |
Quartz - Job Scheduler
Quartz is a job schedule plugin for Grails, sort of like a plugin for cron job. To use it:
- Add dependency in build.gradle
- Add
autoStartup: truein application.yml under section quartz:
- In grails console, user
create-jobcommand to create the skeleton of the job file.
- Edit the job file to do your task.
We only cover enough to use information here, for more details, it is always good to visit the official site of Quartz plugin for Grails
Add Dependency
In build.gradle dependencies section, add quartz. Depending on the version of Grails, for Grails version 3.3.x, you can use quartz 2.0.13, otherwise you MUST stay with quartz 2.0.1
compile 'org.grails.plugins:quartz:2.0.1'
or
compile 'org.grails.plugins:quartz:2.0.13'
Having Quartz Auto Startup When the App Started
In application.yml, add a new root section quartz like:
quartz: autoStartup: true
Quartz will then auto start up when the app is being started.
Creating Job Skeleton
In Grails console:
grails> create-job printlnEveryFiveSecond | Rendered template Job.groovy to destination grails-app\jobs\testcsv\PrintlnEveryFiveSecondJob.groovy
Note that the the command will automatically add “Job” at the end of your class name.
Editing the Job
The job class contains two things, the
triggers and the
execute methods. When this class is being triggered, the execute method will be called. It is possible to inject service instance in this class so that you can call the service in the execute method.
class PrintlnEveryFiveSecondJob { static triggers = { simple repeatInterval: 5000l // execute job once in 5 seconds } def execute() { // execute job println("hi") } }
Play Around with the Trigger
Repeat it every 5 seconds
static triggers = { simple repeatInterval: 5000l // execute job once in 5 seconds }
Repeat it every 5 seconds in cron style
static triggers = { cron name: 'schedulingTasks', cronExpression: "/5 * * * * ?" }
Repeat it every day at 02:00:00 AM
static triggers = { cron name: 'schedulingTasks', cronExpression: "0 0 2 * * ?" }
For more details, go see the Documentation of Quartz plugin.
Additional Information
To make sure the same job does not concurrently running more than one time, you can add
static concurrent = false in the job class.
Note that quartz plugin does not support clustering by default. Additional work are needed to do so. Assume we have a task that done daily. One of the simple way is to have a database with a unique column value that store the date in format “YYYY-MM-DD”, and only the node on the cluster able to write this value is allowed to execute the job. Of course, you need to do this checking inside the execute function like:
def execute() { Date now = DateTimeTools.getSystemCurrentTime() try { CronRecord cr = new CronRecord() cr.date = (new SimpleDateFormat("yyyy-MM-dd")).format(now) cr.save(failOnError: true) } catch (Exception ignored) { //if it cannot be save, meaning that someone has already done it. return } //TODO: continue your job here . . . } | https://wiki.chongtin.com/grails/quartz_-_job_scheduler | CC-MAIN-2021-17 | refinedweb | 489 | 57.77 |
.
SHADOWTRADER SQUAWK BOX - FAQ
How do I setup the market internals “quad” screen?
Go to ShadowTrader.net Click on Archives Tab Select education from the drop down list, or click the little graduate icon on the page. Scroll down to Video Series and find “How to Set Up Market Internals.” If unfamiliar with how to “read” the quad, watch “How to Interpret Market Internals”, parts I and II.
How do I get the value areas on my /ES chart with the VAH-VAL-POC cloud?
1. On your thinkorswim by TD Ameritrade platform, with a chart open, click on Studies.
2. Choose Edit Studies from the drop down
3. In the lower left, click New
4. When the box pops up, delete any code that may already be there.
5. At the top where it says Study Name, give your study a name like “value areas” or whatever you like.
6. Cut and paste the value area code from
OR cut and paste the code below (note it goes down to the next page)
input VAHigh = 0.00; input PointofControl = 0.00; input VALow = 0.00; input marketOpenTime = 0930; input marketCloseTime = 1615; input showcloud = yes;
def closeByPeriod = close(period = "DAY")[-1]; def openbyperiod = open(period = "DAY")[-1]; def VArea = if close >= VALow and close <= VAHigh then 1 else 0; def secondsFromOpen = secondsFromTime(marketOpenTime); def secondsTillClose = secondsTillTime(marketCloseTime); def marketOpen = if secondsFromOpen >= 0 and secondsTillClose > 0 then yes else no; def newDay = if !IsNaN(closeByPeriod) then 0 else 1;
plot VAH;
plot POC;
plot VAL; if !IsNaN(close(period = "DAY")[-1]) then { VAH = Double.NaN; POC = Double.NaN; VAL = Double.NaN; } else {
VAH = if marketOpen and newDay then VAHigh else double.nan; POC = if marketOpen and newDay then pointofcontrol else double.nan; VAL = if marketOpen and newDay then VALow else double.nan;
}
VAH.SetPaintingStrategy(paintingStrategy.line); VAH.SetDefaultColor(color.darK_red);
VAH.SetLineWeight(2);
POC.SetPaintingStrategy(paintingStrategy.line); POC.SetDefaultColor(color.DARK_ORANGE);
POC.SetLineWeight(2);
VAL.SetPaintingStrategy(paintingStrategy.line); VAL.SetDefaultColor(color.darK_green);
VAL.SetLineWeight(2);
AddChartBubble (!IsNaN(VAH) and IsNaN(VAH[1]), VAH, "VAH", color.white, no); AddChartBubble (!IsNaN(POC) and IsNaN(POC[1]), poc, "POC", color.white, no); AddChartBubble (!IsNaN(VAL) and IsNaN(VAL[1]), val, "VAL", color.white, no);
plot cloudhigh = if marketOpen and newDay and showcloud then VAHigh else double.nan; plot cloudlow = if marketOpen and newDay and showcloud then VALow else double.nan; AddCloud (cloudhigh, cloudlow, color.yellow, color.yellow); AddChartLabel(VArea, "In Value Area", color.white);
6. Click “OK”
7. In the Edit Studies and Strategies Box, click on the study you just created to bring up the box below:
8. In the Inputs section in the middle, enter in your VAH, POC, and VAL prices. Change your market open
time to “800” in order to see the value area in the premarket.
9. Hit ‘ok’ or ‘apply’.
10. Click the little wrench at the top of your chart, its to the left of the timeframe box. 11. Select the scale & axis tab 12. In the Scale section, Uncheck the “Fit Studies” box. This is important so that your value area does not show up on other charts if you change the symbol in your /ES chart window.
How do I get rid of the white VAL, POC, and VAH bubbles in my value area? are blocking my view!
Select your study from the list on the left in the “Edit Studies and Strategies Box” and click “Edit”. Once your code appears in the window, look for these three lines. They will be next to each other.
They
Add a “#” (without the quotes) in front of each line. The line will turn a yellow/gold color when you do it. Click ‘ok’ If you want the bubbles back at some point, just go back into code and remove the “#”
I cannot copy the thinkscript from this .pdf. Please help!
Try getting them from the text file. The scripts are available here:
Where do I get the value areas and pivot points every morning?
Go to the Tools tab on your thinkorswim platform. Click on the MyTrade tab Log in with your thinkorswim username and password Click on the People tab If not already following ShadowTraderPro Swing Trader, click follow. Click on?
5. At the top where it says Study Name, give your study a name like “breadth bubbles” or whatever you
like.
6. Cut and paste the Breadth code from into the big box or use the code below.
input length = 2; #NYSE Breath ratio def NYSEratio = if (close("$UVOL") >= close("$DVOL")) then (close("$UVOL") / close("$DVOL")) else -(
close("$DVOL") / close("$UVOL")) ; plot NYratio = round(NYSEratio, length);
NYratio.DefineColor("NYSEup", color.UpTICK); NYratio.DefineColor("NYSEdown", color.DownTICK); NYratio.AssignValueColor(if NYSEratio >= 0 then NYratio.color("NYSEup") else NYratio.color("NYSEdown"));
AddChartLabel(yes, concat(NYratio, " :1 NYSE"), (if NYSEratio >= 0 then NYratio.color("NYSEup") else
NYratio.color("NYSEdown")));
#Nasdaq Breath ratio def NASDratio = if (close("$UVOL/Q") >= close("$DVOL/Q")) then (close("$UVOL/Q") / close("$DVOL/Q")) else -(
close("$DVOL/Q") / close("$UVOL/Q")) ; plot Qratio = round(NASDratio, length);
Qratio.DefineColor("NASup", color.UpTICK); Qratio.DefineColor("NASdown", color.DownTICK); Qratio.AssignValueColor(if NASDratio >= 0 then Qratio.color("NASup") else Qratio.color("NASdown"));
AddChartLabel(yes, concat(Qratio, " :1 NASD"), (if NASDratio >= 0 then Qratio.color("NASup") else Qratio.color("NASdown"))); #ZeroLine
plot zeroline = 0; zeroline.assignValueColor(if NYSEratio > NYSEratio[1] then color.GREEN else color.RED);
zeroline.setLineWeight(1);
zeroline.hideTitle();
zeroline.hideBubble();
7. In the Scale section, Uncheck the “Fit Studies” box. This is important so that your value area does not show up on other charts if you change the symbol in your /ES chart window.
What is the formula for ShadowTrader Pivot Points?
The formula is available in the glossary section at. Click the glossary tab right on the homepage
To receive alerts for ShadowTrader Weekend Update, text “follow st_weekend” (without the quotes) to
No twitter account is necessary to receive these alerts.
To stop the alerts, text “off st_swing” or “off st_fx” or “off st_fx”, or “off st_weekend” (without the quotes) to 40404.
What moving averages does ShadowTrader use on their charts?
ShadowTrader uses the simple 20, 50, and 200 period moving averages..
Where can I find the “Master Sector List”?
Go to the ShadowTraderPro Swing Trader which is free on MyTrade under the tools tab of your TOS platform OR, at. Inside of the newsletter, scroll down to the “Sector Trend Score Matrix”. The master sector list is comprised of all of those 25 sectors. The list is also under our glossary section at box posted anywhere?
No, due to compliance reasons they are not. You must simply pay close attention. update consistently on any trades that they are involved with.
The moderators will
How do I receive the WeekendUpdate and Peter’s video?
Go to and sign up in the box in the lower left on the homepage. You will then receive an email every Sunday with a link to the latest issue which contains the video. If for some reason you do not receive the email, remember that each and every issue is always archived immediately when its posted at
Where can I get the “10 Laws of Daytrading”?
This .pdf is posted here: Enjoy….
I don’t’ like the music that they play over the lunch break.
If listening through gadgets, click the little blue square which will shut off the audio. If listening through the Support/Chat tab, either close the watch window, or uncheck the listen. | https://it.scribd.com/document/69149681/Squawk-Faq | CC-MAIN-2020-40 | refinedweb | 1,248 | 58.99 |
This page contains information for SoapUI Pro that has been replaced with Ready! API.
To try the new functionality, feel free to download a SoapUI NG Pro trial from our website.
Let's dig into the processing and validation of CDATA sections in your XML, often used to embed blocks of XML as strings inside an existing XML structure. Specifically we are going to look at:
And in the end we're going to look at how a SoapUI Pro Event Handler can make all this much easier
CDATA sections are used in XML documents to escape longer blocks of text that could otherwise be interpreted as markup, for example:
<message><![CDATA[<data>some embedded xml</data>]]></message>
Here the string "<data>some embedded xml</data>" is just that; a string, and not XML. Another way of writing this could be:
<message><data>some embedded xml</data></message>
Which is 100% equivalent to the previous version using CDATA; parsing either of these with some parser would return the content as a string and not parsed out as XML.
What if the embedded XML contains a CDATA section? Wouldn't the embedded ]]> terminate the outer <![CDATA[ ? Yes it would! So, you can't embedded a CDATA straight off, but will need to temporarily terminate the outer CDATA to be able to pull this off. Let's say we have the following string:
<data>some embedded xml <![CDATA[<text>with xml</text>]]></data>
and want to put this in an XML document. The result could be either
<message><data>some embedded xml <![CDATA[<text>with xml</text>]]></data></message>
with standard XML entities, or (pay attention now..)
<message><![CDATA[<data>some embedded xml <![CDATA[<text>with xml</text>]]]]>><![CDATA[</data>]]></message>
Confused? The first CDATA section wraps the following characters: "<data>some embedded xml <![CDATA[<text>with xml</text>]]" (notice the missing terminating '>' which would have turned the last three characters into a CDATA terminator), then comes a single ">" (which doesn't need to be entitized into > since it can't be mistaken for any markup), and then another CDATA containing the string "</data>". Assembling these three strings gives us the original, and so will a parsing XML processor with either method.
It is (unfortunately) quite common that SOAP messages contain some part of the payload in a request or response as a string and not as XML, which has both advantages and disadvantages. In SoapUI these XML strings are not easily validated against a schema (scripting required!), they are not easily asserted with XPath, and using them as targets/sources for property transfers is difficult since they are strings, not XML. Also the extended message viewers in SoapUI Pro (Outline, Overview) show these as strings and not as markup, which can be confusing.
Let's say. In the SoapUI Pro Outline and Overview editors this shows up as:
and
Not very user-friendly!
Fortunately there are some workarounds available.
As you know, Property-Transfers are TestSteps for transferring property values between requests, responses, properties, etc (read more in the User Guide). A common scenario is the requirement to transfer a value from a response message to the following request (for example a session id). In the standard case this is straight-forward; set the source/target of the property-transfer to the desired message property and specify an XPath statement to select the desired source/target element (in SoapUI Pro all this is done with point-and-click wizards). But in our scenario, the XPath of the property-transfer can only point at the element containing the XML message string, and not "inside" it (since it is just a string), so what to do? The solution is to use temporary properties:
For Property-Transfer sources being "inside" a CDATA xml
For Property-Transfer targets being "inside" a CDATA block, a similar approach works out:
Let's combine both of these into an example. Let's say we want to transfer the embedded "isle" value in the example message above into the following search query, also containing embedded XML;
<soapenv:Envelope xmlns:soapenv=""
xmlns:
<soapenv:Header/>
<soapenv:Body>
<sam:search>
<sessionid>123</sessionid>
<searchstring><![CDATA[<isle>?</isle>]]></searchstring>
</sam:search>
</soapenv:Body>
</soapenv:Envelope>
We have our TestCase with the two requests, start by adding two temporary properties to the TestCase (one for each intermediate XML);
Now insert a Property-Transfer step between the requests and configure it as follows:
1) Create the first transfer that transfers the CDATA section in the response (in the description element) to the Temp1 property:
2) Create a second transfer that transfer the CDATA section of the request (in the searchstring element) to the Temp2 property
3) Now we have both CDATA sections as strings; create a transfer that transfers the isle value from the Temp1 property to the Temp2 property
4) So now we have the desired value of the searchstring in the Temp2 property; transfer that back to the request with our last transfer:
Mission accomplished! Running these four transfers will effectively extract the desired value from the embedded XML and write it into the embedded XML in the request.
Agreeably, this still seems a bit much work, couldn't we do it with a script instead? Sure, let's have a look what that script would look like (in groovy):
// create holder for source
def description = context.expand( '${Request 1#Response#//sam:searchResponse[1]' +
'/sam:searchResponse[1]/item[1]/description[1]}' )
def descHolder = new com.eviware.soapui.support.XmlHolder( description )
// create holder for target
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( "Request 2#Request" )
// transfer value and save
holder["//searchstring"] = descHolder["//isle"]
holder.updateProperty()
This doesn't require any temporary properties and we could do some assertions on the way, the choice is yours!
Ok, how about assertions? The standard XPath processor will just see the XML string as any old string and not parse it as XML so we can assert it using the standard XPath possibility. What to do? I can't come up with anything better than a script-assertion (except the Event Handler further down); fortunately SoapUI Pro has a wizard for creating these rather easily; right click on the desired node to assert (the one containing the XML string) in the Outline View and select "Add Assertion -> for Existence with Script";
SoapUI will generate the following script for you (if you don't have SoapUI Pro, just add a Script Assertion manually and enter the below script);
import com.eviware.soapui.support.XmlHolder
def holder = new XmlHolder( messageExchange.responseContentAsXml )
holder.namespaces["sam"] = ""
def node = holder.getDomNode( "//sam:searchResponse[1]/sam:searchResponse[1]" +
"/item[1]/description[1]" )
assert node != null
Let's modify this a bit and assert that the isle value starts with an A followed by two digits:
import com.eviware.soapui.support.XmlHolder
def holder = new XmlHolder( messageExchange.responseContentAsXml )
holder.namespaces["sam"] = ""
def node = holder["//sam:searchResponse[1]/sam:searchResponse[1]/item[1]/description[1]"]
def descHolder = new XmlHolder( node )
def isle = descHolder["//isle"]
assert isle.length() == 3
assert isle.charAt( 0 ) == 'A'
assert Character.isDigit( isle.charAt( 1 ))
assert Character.isDigit( isle.charAt( 2 ))
This still requires a bit of coding, but it at least makes it possible. The choice is yours!
Finally we'll look at validation; the schema of the message only defines the XML string as a string and not its complex content; a script will be our solution here as well. Use the same wizard/methodology as described above to extract the value in a script-assertion, then add the following which will load an XSD from the file system and validate the XML)))
The XSD being
< which don't have a formalized schema, and since groovy can validate by DTD and RelaxNG as well this could be performed equally (check out for examples).
Wouldn't it be nice if we could just remove those CDATA tags before SoapUI processes the response so it is seen as standard XML? Sure, it wouldn't be compliant with the original schema, but it would make transfers and assertions so much easier. Well, once again, Event Handlers in SoapUI Pro can do this for us; Open the Project window, select the "Events" tab and add a RequestFilter.afterRequest handler. Set its content to:
def content = context.httpResponse.responseContent
content = content.replaceAll( "<!\\[CDATA\\[", "" )
content = content.replaceAll( "]]>", "" )
//log.info( content )
context.httpResponse.responseContent = content
This effectively removes any "<![CDATA[" and "]]>" strings from the response XML, which will result in SoapUI processing the entire content as XML, allowing us to view/handle responses as standard XML. For example we now in the Overview view see the "nicer" formatting;
and the Property-Transfer and Assertion-wizards are in place in the Outline View allowing us to create this as usual:
Of course this has some severe limitations; it depends on the formatting of the response to be as we want (although an improved handler could deal with this), and schema-compliance assertions will (probably) fail, but it might be what we need to get the job done, and that's all we want by the end of the day, right? | http://www.soapui.org/Functional-Testing/working-with-cdata.html | CC-MAIN-2015-48 | refinedweb | 1,528 | 50.46 |
Tag and HTML Helpers Overview
The Telerik UI Tag and HTML Helpers for ASP.NET Core are server-side wrappers that enable you to use and configure the Kendo UI for jQuery widgets in an ASP.NET Core application. Both helper flavors offer the same functionality, and you can choose which one to use depending on your preferences.
You can add the desired Tag or HTML Helpers to your application and then configure them further by using predefined strongly typed attributes. The helpers also allow you to handle the events of the widgets in your ASP.NET Core projects.
Widgets vs. Helpers
The following list describes how the UI for ASP.NET Core helpers differ from the Kendo UI widgets.
The UI for ASP.NET Core helpers:
- Allow you to create widgets with no HTML and JavaScript coding.
- Provide for server-side data binding and, in some cases, server-side rendering.
- Allow you to use the
ToDataSourceResult()extension method for binding Kendo UI widgets to server-side collections and for performing data operations (paging, sorting, filtering, and grouping).
- Provide integration with some ASP.NET Core Core.
Known Issues
Tag Helpers might need to be disabled on pages where widgets render custom content—for example, the Button, Editor, Splitter, Tooltip, or Window. Some Tag Helpers, such as the
hrefone, are processed automatically and result in invalid HTML.
@removeTagHelper "*, Microsoft.AspNet.Mvc.Razor" @removeTagHelper "*, Microsoft.AspNetCore.Mvc.Razor"
The
TagModeenum of the MultiSelect is now renamed to
MultiSelectTagMode.
Deferred()can be invoked only as a last setting.
@(Html.Kendo().NumericTextBox() .Name("age") /* Other configuration. */ .Deferred() )
The Grid does not support server-side rendering as available in Telerik UI for ASP.NET MVC. The toolbar template, column header template, and column template are no longer rendered on the server.
Some changes were introduced with the Enum naming in Telerik UI for ASP.NET Core Charts:
- The Thumbnails view of the UI for ASP.NET Core Editor's ImageBrowser is not supported because the
System.Drawingnamespace is not part of ASP.NET Core. However, you can process images on the server side by using a third-party library. | https://docs.telerik.com/aspnet-core/html-helpers/helper-basics/overview | CC-MAIN-2022-27 | refinedweb | 354 | 59.7 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
Roy Eltham wrote: »
It's really easy to write WIN32 applications that don't handle system wide display scaling well (aka adjusting the DPI settings in the Display control panel). It takes some extra work to do it correctly. Using absolute sizes and offsets, doing "owner draw" things without accounting for it, etc.
That extra work is why a lot of Windows applications have font sizing or layout issues when scaling isn't at 100%.
...that may explain the vast amounts of empty lines trailing your posts now... :-Þ
...it is best practice to make applications that support versions of the OS that are at least 20 years old,
...to get the best market spread you would have to support Win32s on Win16 for your pure Win32 applications.
Win32 API has been pretty stable across these OS versions, and when it comes to the GUI stuff, if you code it correctly, then it will work correctly at any scale the users chooses in the Display Control panel.
Heater. wrote: »
Yeah, Unicode is a curse.
But what do you mean "Unicode". Have you accidentally thrown some UTF-8 in there or some such?
I don't think PropellerIDE ever got much testing on Linux.
MikeDYur wrote: »
@yeti, Here is the screenshot of generated dialog from PropellerIDE, after starting up my Linux computer.
yeti wrote: »
You are sure that there should be schematics included?
It looks like a template or example...
Roy Eltham wrote: »
davidsaunders,.
MikeDYur wrote: »
No there was not a schematic for that file. But that dialog was generated by the program.
Roy Eltham wrote: »
MikeDYur,
The issue you are seeing with pasting and unicode is likely because the code handling pasting isn't properly handling the unicode. Possibly converting to ascii along the way or something similar.
Unicode is evil?
As opposed to what? ASCII chauvinism?
Unicode is just the necessary consequence of recognizing that there are languages that do not use the Roman alphabet, but that still deserve nearly equal footing in a digital world.
Yes, it's messy and inconvenient for 8-bit dinosaurs. Get over it.
heater wrote:
Problem is it becomes computationally impossible.
Unicode is diversive.
...it's equivalent to the halting problem? Can you cite a mathematical proof?
...diversive...divisive...
Heater likes MS products because MS products also produce heat!
let ﻝ = {
ﺍ: function () {
return ("Hello world!");
}
}
let msg = ﻝ.ﺍ();
console.log(msg);
ف = (2 + 3) * (3 + 3)
console.log(ف);
Really now. So I can write a Win32 application that correctly supports display scaling, and have it used on any implementation of Win32?
That is news to me. If there is continuity of OS on a given CPU given compatibility of the OS it is best practice to make applications that support versions of the OS that are at least 20 years old, unless there is a very good reason not to (like making heavy use of newer GPU that is unsupported in the older system), this assures the widest possible market for a given target OS. For Windows that means going to support at least Windows NT 4.0 and Windows 95 OSR2. And there are still people that use Windows 3.11 with Win32s, so to get the best market spread you would have to support Win32s on Win16 for your pure Win32 applications.
Now how do I write a Win32 application that correctly supports display scaling, will run on Windows 95OSR2, and does not need redundant code for seperate UI features for supporting display scaling vs supporting older systems?.
Often times I have edited a post just to remove all those trailing blank lines.
You mean like this:
Nobody in their right mind does that. What would they benefit by it?
I'm all for cross-platform support. And often times one can get that for non-gui programs that use only decades old standards. Try telling that to all the developers of Android and iOS apps. Try telling that even to MicroSoft.
Nobody cares about the few people still alive that use Win32 from the Win 3.1 or Win 98 era.
Then, after plugging my Samsung monitor in and out, or whatever, everything goes to bonkers again.
Start messing with the settings again...
Nothing "sticks" the way I set it.
Of course, what works for one old app I need does not work for another.
Heck, even the desktop icons turn up on the Surface screen or my monitor at random. No matter how many times I drag them to the right place.
It's endless frustration in Win 10 on Surface land.
Oh, and did I mention WIFI does not work if I plug in the dock?
-Phil
I'm going by a warning on a widows machine, when saving my progress in Propeller tool that: Warning this document contains unicode characters.. .bla, bla, bla, A whole paragraph.
It comes up after pasting a MIT's license. I can't see any difference in the license that does, and the license that doesn’t create that message.
But PropellerIDE running under Linux totally obliterated any schematics that are created on the spin document. Scattering bits of code over the whole document, including messing with license format.
I will be making a formal bug report, as soon as I can come up with a concise explanation of the problem. In other words, as soon as I can put something together that doesn't sound dumb.
You have my sympathy. What you say is not dumb.
Unicode is a curse that is sent to confuse us all and drive us to madness. I have suffered enough from it. Look around the web and see how many documents are corrupted by it.
Unicode, in it's many binary formats is impossibly difficult to deal with. Especially if importing text from here and there or cutting and pasting this and that.
It's pretty much impossible for a program to even find the length, in characters, of a Unicode string.
Mike
It looks like a template or example...
Internally, it converts them all into PASCII (a 256 character set which is basically ascii + 128 special characters), it also converts all newlines (0x0A) into carriage returns (0x0D). This is what Chip's compiler code works with.
MikeDYur,
The issue you are seeing with pasting and unicode is likely because the code handling pasting isn't properly handling the unicode. Possibly converting to ascii along the way or something similar.
No there was not a schematic for that file. But that dialog was generated by the program.
Many systems that stay the ding off line (as I would expect most development systems do), are still running older versions of windows. So if you are writing a development tool it makes since to stick with the standard Win32 API that has remained unchanged since NT 3.5.
And you complain about the mention of Windows NT and yet state that you care about Win7 and newer, which are versions of Windows NT.
I see no problem with display scaling, so long as it works with applications that were written for older versions of the API that do not include display scaling. If they had it working correctly Heater would not be having the problem at hand, so it has everything to do with that issue.
If(not) then yeti.is_lost();
Thanks, Once I had seen some corruption of my file it spooked me, I have enough trouble keeping things strait.
The Display scaling thing I am talking about has been in Windows for a very long time, it was around in the 9x versions and ancient NT versions. It was called different things over the versions, but it's doing the same thing.
It changes the DPI, such that the size of buttons, title bars, fonts, etc. are larger or smaller. It makes it so that when you run a 4K monitor your icons and text are not microscopic.
Yes, technically, Windows 7 is a "version" of NT, but there have been some pretty significant overhauls and changes since it was last called NT. Vista was a huge difference. Native 64bit is another huge difference. Calling Windows 7 and newer just versions of NT, is being silly.
And I don't care if you run offline, you still should be concerned with security issues in your OS, and those older versions have giant gaping holes in them. You also can't run anything built with the last 3 or 4 version of Visual Studio, the majority dev tool for native Windows apps.
You're living in a pipe dream if you think new applications developed today should be made to work on anything older than Windows 7.
His father told me that Windows NT was a reimplementation of DECs VMS for Microsoft by Dave Cutler.
In this modern world the first thing you need to do is get your stuff running on Linux. If it's good it will end up running on Windows. See Visual Studio Code (based on Electron) or node.js with the MS Chakra JS engine.
Heater said Unicode is evil, I tend to agree.
-Phil
What bugs me about that is that we humans used to have all kind of weird schemes for numbers. Remember the Roman numerals and such? It all got rationalized to the Arabic numerals we have today.
That adoption of a common number system stemmed from the needs of trade. Arguably bringing people together.
So, yeah, adoption of ASCII would unite people. Unicode is diversive divisive. Messy and inconvenient would be OK. That's just more mess and inconvenience and "busy" work.
Problem is it becomes computationally impossible.
I like that! (as opposed to divisive)
-Phil
Not sure what you mean. Would it force everyone to comment in Esperanto?
I don't understand how forcing people to use ASCII would unite anybody. Say you had people from Mainland China using Hanyu Pinyin to express themselves in ASCII. Would people from say Taiwan be happy with it? No way. That's quite political even in Chinese language preschools here in America.
It's a nice bit of hardware. Very good for a lot of things. What I've done, for the people who choke on Post-Win 7 UX issues is load the classic shell and pin a few things they use all the time to the taskbar. They end up pretty happy. Interestingly, once I show them how the search / command bar works, and get their "Command Prompt" setup nicely, I don't hear much from them.
Recommended. Those things are very useful devices.
As for Windows itself, I'm staying on my "let someone else buy it" path. I'll gladly do the work, and often get left with a license for a variety of reasons. Usually, all I have to do is agree to some "just in case" support time, and I'm golden.
On that basis, it's a fine OS! I keep my Linux skills reasonably current too. Just hedging risks and costs with that. Mostly, since Vista, I've had very few troubles with Win OSes.
Re: 20 year old support
I suppose people can ask. I sure wouldn't expect much personally, not without paying. There are tools from that time frame too. Maybe if there is interest, people would pool money, or labor to get it done.
Rolling your own is a lot of work, but may pay off for a very long time too.
Comes down to what is worth what? I know some people who do support older era computing, various embedded, machinery. They've got some old hardware, libraries, things they've written to make whatever needs to happen, happen. They get paid well for it, or it's a hobby.
Very little grey area there. YMMV
I will tell you, I've got some stuff from the late 80's, and early 90's era archived myself. It's CAD / systems related code, some hardware, procedures, and the few times I've been called on that stuff, I made bank! Nobody else left! Worth it, but only because I was able to get those skills in that time frame and do so with an income that made sense.
Asking someone to look that far back? Very painful. It's not gonna happen. Not without a very good reason, IMHO. Last time for me took darn near a week to run back through it and get competent. Money was good, but I won't lie: Had to work for it too. That and go shopping on E-bay!
What I've found is people lock into a system, or have customized it, or have machinery they depend on, or that just is profitable for them. They save a ton this way too! Reasonable, but doing that does come with costs and risks.
They can bank that, and spend every so often to get a repair or maintenance, or invest some of it to make their own they can self manage too. Both very often make sense over a modern replacement, should better capability not be indicated. So much depends on the scenario.
Parallax colors simplified:<br>
(The real problem with Unicode is that there are bugs in the mapping, now difficult to fix.)
On the other hand I have yet to see any software that can handle all of Unicode. Oops, well spotted. Must remember not to post when I half asleep.
Yes, divisive. It encourages everyone to continue with their own language. Rather than uniting everyone.
@CoderKid, Ha! Actually, given it's performance this Surface Pro 4 runs pretty cool. Cooler than any laptop or PC around here.
@all
Unicode problems:
1) It's made an unintelligible mess of billions of web pages as text gets mangled when being moved from system to system. This data corruption is what computers should not be doing.
2) It allows me to write source code like this Javascript snippet: Which is basically unintelligible gibberish but is a valid working program.
3) Emoji
The are basically meaningless and have no place in text. Whatever language you are using.
If you look at the way emoji are rendered in different systems you see the same code point can look as if it means something totally different.
The height of the madness is of course this: "
Thanks to my trying to use Unicode a big chunk of my last post is missing. It was like so:
The height of the madness is of course this: "
Note: I had to replace the Unicode U+1F4A9 with an image to save my post from being corrupted.
4) Complexity. Now every program has to be handle Unicode. Which as I said earlier is all but impossible. Check this video to see why:
Testing: 💩
Hmmm...So we can.
I think that is a big pile...
I wonder what forum vulnerabilities we can exploit with HTML entities like that ? | http://forums.parallax.com/discussion/comment/1402931/ | CC-MAIN-2019-09 | refinedweb | 2,505 | 75.1 |
Almost 70 years ago, on a Sunday, October 30, 1938, we could hear on a radio:.
Recently on Monday, June 23, 2008, we could read on a radio site).
What’s common between the two? They created a big wave of reactions, comments and arguments: A war of the worlds.
microformats, RDFa and HTML 5
I would like to focus on two blog posts which I like in this flood of comments. There are many more interesting.
Ed Dumbill says in The BBC, microformats, RDFa and Res.
Not only jQuery, I met once, John Resig in Tokyo. He was giving a talk about new features of the future Ecmascript. It was complex, not necessary easy to understand, but he made it in a way that was enlightning. We could see he had pleasure talking about it. That was refreshing. I decided to put it on the side of good speakers who are worth to go see again.
Then not so far ago, John ported Processing vizualization language to Javascript. I love graphics and information processing. It was yet again another moment of pleasure thinking “Some people have talents and creativity in their hands, they do beautiful things with complex objects.”
The other blog post is in French and comment also about the affair. Damien Bonvillain is giving his take on RDFa and its simplicity:
In fact, RDFa defines only 5 new attributes (about, property, resource, datatype, typeof)
RDFa became a candidate recommendation last week. You can read the Primer or go to the RDFa wiki to learn a bit more about the technology. Yes, indeed, for some people it will need a bit of work to understand the concepts. But it took me time to learn HTML, and I don’t really master Javascript, but people like John gave me the opportunity to simplify things by developping tools, libraries or authoring tools.
And HTML 5 in all that? Here again there is the story behind the story. The first version of RDFa was using a lot elements like
meta and
link in the
body of a page. But browsers because of invalid markup found on the Web have to recover pages and put back the
link and the
meta in the
head of the document. RDFa community listened and learned. They modified their model to make a step toward HTML 5, to create an environment that will create less interoperability issues. They made a step in the right direction to be able to work together.
Next week, I will show why it is important and how that can work even if not perfectly. But remember, it is because there are people like John Resig, who creates, that complex things become easy. The war of the worlds was a fiction.
4 thoughts on “The War of the Worlds”
I think counting only 5 attributes as simplicity misses the main point of complexity: RDFa uses QNames in content (considered an anti-pattern by many—including me) and to resolve them, you need to know the namespace mapping context at each node.
It’s not only an issue of HTML not having a concept of namespace mapping context traditionally or in HTML5 as drafted. While tracking the namespace mapping context on the application-level is feasible when the document tree doesn’t change (e.g. when you compile an XSLT program), keeping track of the namespace mapping context becomes problematic in a browser environment where scripts can mutate the document tree over time.
For the problem at hand, HTML5 proposes the ‘time’ element as the solution. Unfortunately, the ‘time’ element is not part of HTML 4.01 and is, therefore, against microformat principles. But then, RDFa attributes weren’t in HTML 4.01, either.
Henri, the count of 5 attributes was only a reaction to the statement made by John Resig that RDFa introduced “many new attributes”, and citing 3 of them, giving the image that it was a small part of the overwhelming number of new attributes.
“keeping track of the namespace mapping context becomes problematic in a browser environment where scripts can mutate the document tree over time.”
The mutating tree problem is disconnected from the namespace mapping context problem. Right now, if you want to take that in account, you can throw away maybe 95% of the existing microformat parsers. The temporal model for interpreting inner metadata (µformat, RDFa, whatever…) is currently undefined. For instance, the Tails Export extension on Firefox is not refreshed automatically on tree mutation, and it doesn’t support the “include pattern” mandated by hReview. The Operator Firefox extension does not seem to support hReview or hResume at all, so it’s difficult to know how it would handle the “include pattern” in the case of a tree modification (other modifications are reflected on-the-fly).
My point is: so far, when there is scripting manipulation of the DOM, there are already problems for the existing in-browser microformat interpreters. As such, we can read in “RDFa in XHTML: Syntax and Processing” §5.5 : “In other words, XHTML processing rules must still be applied, even if document processing takes place in a non-HTML environment such as a search indexer.”, which shows that those kind of metadata must be usable without client-side scripting support (which does not mean that we should not have that kind of metadata targeted to a browser environment).
Now, how is the handling of the namespace mapping context on a mutating tree hard? It basically is a cascading problem, and DOM3 appendix B is frozen since more than four years ago.
You say in the pamphlet “namespaces considered harmful”: “I wonder how many hours in my life has been wasted looking up namespace URIs for copying and pasting”. I wonder how many hours of my life has been wasted looking up from where my CSS styles were coming from and why the selectors didn’t work as I expected. Meanwhile, I didn’t contribute any line to a text named “C in CSS considered harmful”. I don’t see how people writing CSS handling code could fail to tackle the namespace mapping problem, it is just beyond me.
“For the problem at hand […]. But then, RDFa attributes weren’t in HTML 4.01, either.”
So it’s fine, because the pages at hand, BBC/programmes, are not HTML 4.01. They are not XHTML 1.1 either, but a switch from XHTML 1.0 strict to it is not a huge step. But then again, the problems are: can I represent my metadata? is it accessible? The microformat’s way for the problem at hand is not accessible. Is the “time” element a solution, even as a hack? It could be, but by violating every known microformat parser implementation, it’s kind of defeating the purpose. Furthermore, it would put the constraint on having a mandatory “datetime” attribute on the time element, since we can not expect microformat parsers to talk to the DOM (maybe it’s so in HTML5?).
They are interrelated in a browser context.
Obviously, the tree mutation case is not applicable to microformat parsers that don’t run inside a browser and don’t have another means of executing scripts.
That the problem is inapplicable to RDFa consumers outside the browser is not the point. The point is that microformats and a metaformat positioned as a microformat replacement should work robustly inside a browser as well.
That’s bad. (As far as undefined things go, the main issue I take with microformats is that the microformats community doesn’t provide a document conformance spec and a processing spec on the HTML5 level of detail.)
That seems inconvenient especially for microformats that are particularly suited for in-browser consumption and applicable to ajaxy use cases, such as hCard and hCalendar that one would want to be UI-sensitive for transferring into an address book or calendar app.
hReview and hResume don’t make as much sense for in-browser support as hCard and hCalendar. hReview and hResume target content aggregators.
If script manipulation is already a problem, does it make sense to make the problem worse?
That more code than no code. And what benefit do you get from the layer of indirection that Namespaces is at the end of the day?
I wasn’t aware that I was being quoted on the microformats wiki. Thanks for letting me know.
Because the CSS cascade provides more value than the indirection Namespaces provide? Also, the people who implement the CSS cascade and the people who implement metadata scaping are not the same people.
Furthermore, citing another case where values propagate in the tree (CSS, xml:lang, base URI, etc.) doesn’t make QNames in content less brittle in the face of DOM manipulation.
HTML 4.01 vs. XHTML 1.0 vs. XHTML 1.1 is irrelevant as far as the validation point goes. Neither HTML5 ‘time’ nor RDFa is valid in any of them.
If you want to use something other than the abbr design pattern and you want the result to work with existing software that only works with the abbr design pattern, there’s nowhere you can go. (RDFa doesn’t work with every existing microformat parser, either.)
I don’t follow.
Sorry for the late answer…
For the sake of concision, I will name “in-browser” the use case where the metadata interpreter is executed inside a web browser, and supposed to be written in javascript talking to the DOM; “standalone” is the use case where the metadata interpreter does not use a web browser environnement and especially, does its own parsing of the document and does not interpret the script elements.
Basically everything it interrelated in a browser context. But for “standalone”, it has no meaning. And for “in-browser”, namespace mapping does not raise specific problems with relation to mutating tree: algorithms exist already.
From a strict robustness point of view, I don’t see how a XML namespace based solution is less robust than a “magical CSS class name” based solution. Now, current web browsers are indeed poor fits for anything labelled “robust” at that time regarding standards, and especially the XML related ones.
A microformat should work robustly inside a browser as well. hCard and hCalendar make sense in “in-browser” today because there are standard standalone formats to represent them, and for displaying a normalized representation. For hReview, since it expressely ignores HR-XML, only the second use case remains for “in-browser”, which is still very valid. Aside from that, the “include pattern” is the key to represent more complex graphs of data in µformat.
Worse compared to what? It sounds like XML namespaces is itself a fatality. I agree that the DOM Level 2 support in the current web browsers (and especially IE) is not up to the standard; but that does not explain why you dismiss the concept altogether. It’s related to the next point.
For “in-browser”, we could expect that the people who implement the CSS cascade and those working on the DOM interfaces work at least as a team. For “standalone”, I think there is no problem as of today finding an HTML normalizer + DOM Level 2 (if we want a brute force approach working on a non-standardized HTML 4 + RDFa).
Last time I checked, you used Java Namespaces in your code, willingly, and you seem to be alive from that “more code”. And you use “CSS Namespaces” as well in your pages (aka descendant selector applied to an ID selector). So, what are the benefits in qualifying a name? To me, it lies in the robustness aspect.
Once again, strictly speaking, I don’t see anything brittle in Dom Level 3 Appendix B. But maybe I miss some piece of information. And, again, you have the same propagation mechanism in µformat as well (after all, they form a hierarchy).
To conclude, it seems that your position is that XML Namespace are not a robust mechanism in face of tree manipulations, and there I disagree. | http://www.w3.org/blog/2008/06/war-of-the-worlds/ | CC-MAIN-2015-40 | refinedweb | 2,012 | 62.58 |
On 9/27/2010 5:23 PM, Michael Albinus wrote:
Ken Brown<address@hidden>)Likely it is sufficient to move the call of xd_read_queued_messages out of gobble_input: --8<---------------cut here---------------start------------->8--- *** ~/src/emacs-23/src/keyboard.c.~100064~ 2010-09-27 23:18:30.840864838 +0200 --- ~/src/emacs-23/src/keyboard.c 2010-09-27 23:18:01.942112064 +0200 *************** *** 4106,4111 **** --- 4106,4116 ---- /* One way or another, wait until input is available; then, if interrupt handlers have not read it, read it now. */ + #ifdef HAVE_DBUS + /* Read D-Bus messages. */ + xd_read_queued_messages (); + #endif /* HAVE_DBUS */ + /* Note SIGIO has been undef'd if FIONREAD is missing. */ #ifdef SIGIO gobble_input (0); *************** *** 7051,7061 **** gobble_input (expected) int expected; { - #ifdef HAVE_DBUS - /* Read D-Bus messages. */ - xd_read_queued_messages (); - #endif /* HAVE_DBUS */ - #ifdef SIGIO if (interrupt_input) { --- 7056,7061 ---- --8<---------------cut here---------------end--------------->8---
This works for me. Maybe you should test it too when you get a chance and then check it in if you're satisfied.This works for me. Maybe you should test it too when you get a chance and then check it in if you're satisfied.
Ken | https://lists.gnu.org/archive/html/emacs-devel/2010-09/msg01503.html | CC-MAIN-2015-32 | refinedweb | 183 | 68.06 |
Using a variable filename - Java Beginners
Using a variable filename Dear sir, my program a JFrame containes... the buttons global variable and using the code e.getSource()== or by associating two distinct action commands with the two buttons and using the code get Filename without Extension
Java get Filename without Extension
... the filename without extension.
For this, the file name or the directory name... returns the length of the file.
substring(0, index)- This method
Converting a Filename to a URL
Converting a Filename to a URL
A file object is used to a give a filename. Creating the
File... MalformedException. After
this we will convert this URL to a file object by using getFile
How to get filename in JTextArea in following case?
How to get filename in JTextArea in following case? Hi,
i'm trying to code a GUI in java,
the following code is working but the filenam... java.awt.*;
import javax.swing.*;
import java.awt.event.*;
class FileName extends
Java filename without extension
Java filename without extension
In his section, you will learn how to get...().length() - 2) {
System.out.println("Filename
without Extension: "
+ file.getName().substring(0, index));
}
Here
Java Substring
Java Substring
In this section we will read about Java Substring.
A part..., how they can find the substring. To find the substring in Java there
is method named substring(). So, a Java programmer should not to worry about how... to the url:
http:/
code for gettingSubstring - Java Beginners
://
Thanks... code for gettingSubstring Can anyone give me the code to get the substring from a code.for eg from a string en_zh i need to select a particular
find a substring by using I/O package
find a substring by using I/O package Write a java program to find a sub string from given string by using I/O package
SubString in Java
SubString(), a method of String class is used to get a part of original string... String.
Example of SubString in Java:
import java.io.*;
import....
This method has two types:
SubString(int beginIndex):
This method returns part
Taking Substring
are using the substring() method to get
certain characters from the big string... of substring() method.
The substring method is used to get certain part... for extracting the string.
With the help of substring() method of String class you get
Substring in java
Substring in java example:output 0f the code..
Input String:University of the Cordilleras
Input Substring:of
Output:1
...use non static methods,don't use (.Substring)..
..please help me..
..thank you..
deadline:November
String substring(int beginIndex)
substring(int beginIndex) method of String class in Java. The description of the code..., you will get to know about the substring(int beginIndex) method through...
String substring(int beginIndex)
Determining if a Filename path is a file or a directory
Determining if a Filename path is a file or a directory... or a directory in it.
We are using a following methods to solve this problem...;java FileOrDirectory
the name you have entered is a file : FileOrDirectory
String substring(int beginIndex, int endIndex)
the substring(int beginIndex, int
endIndex) method through the following java program...
String substring(int beginIndex, int endIndex)
In this section, you will get the detailed
JavaScript substring length
JavaScript substring length...;
This section illustrates you how to determine the length of substring in
JavaScript.
You... method substring() in order to extract the substring
from the defined
JavaScript indexOf substring
;
In this section, we are going to find the location of substring using...
JavaScript indexOf substring... the parameters substring is the
specified string and 0 is the index
C String Substring
;
In this section, you will learn how to get the substring
from a string...
the part of the string in order to get the substring. On calling the method
substring(6,19,ch), you will get the substring from the specified string. Here 6
and 19
get the value from another class - Java Beginners
get the value from another class Hello to all, I have stupid...("filename");
Element flnameEL= (Element)flnamelist.item(0);
NodeList flnameTEXT...().trim();
[/code]
How I can get String alsl = ((Node)flnameTEXT.item(0
Using HSSF 3.5 to READ XLS - Java Beginners
Using HSSF 3.5 to READ XLS I just dont seem to get this working. I... since they are built on and older relese i cant get them to work.
All i want... ) {
String fileName="C:\\excelFile.xls";
Vector dataHolder=ReadFile
Get and Display using JOptionPane
Get and Display using JOptionPane Hello everyone,
I have been... on in because this is my first time handling java. She ask to get some data from student, name and age, by using JOptionPane. Then display it.
Really
Java example program to get extension
Java example program to get extension
java get extension
To get the file name and file... "filename.ext"
is some filename with extension is provided. To get
Java Get Example
the execution path
We can get the execution path of the system in java by
using...
URLConnection.
Java get Filename without Extension...
Java Get Example
Java Get Example
using Java program.
Use of Get Method in Java: What... the execution path
We can get the execution path of the system in java by
using... username by using the System.getProperty().
Java get Version
how to get harddisk info using S.M.A.R.T using java
how to get harddisk info using S.M.A.R.T using java how to get harddisk info using S.M.A.R.T using java
How to Get Started with Java?
A Java Beginners Tutorial - How to Get Started with Java?
For a programmer Java offers some challenges that are uncommon with other environments and to get... using any piece of software. Here we go as to show you how to get started
using class and methods - Java Beginners
using class and methods Sir,Plz help me to write this below pgm... No " +s.getRollNo()+" get highest marks in Subject 1 i.e "+max1...("Roll No " +s.getRollNo()+" get highest marks in Subject 2 i.e "+max2
Program - Java Beginners
Java substring indexof example Java substring index of example
want to get job on java - Java Beginners
want to get job on java want to get job on java what should be prepared. To know java quickly. Just click the following links:
How to get month name from date(like-25/06/2012) using java?
How to get month name from date(like-25/06/2012) using java? How to get month name from date(like-25/06/2012) using java
JAVA - Java Beginners
index = f.getName().lastIndexOf('.');
String filename=f.getName().substring(0, index);
String arr[]=filename.split("_");
for(int i=0;i
Java get System Locale
Java get System Locale
In this section, you will learn how to obtain the locale. We... by using the
System.getProperty(). This will provide all the properties of the system.
How to get the output of JSP program using Bean
How to get the output of JSP program using Bean Hi
Kindly go... created in Java and compiled
<%@ page language="java" import="beans" %>... program for the above one by using Bean and i opened the Tomcat webserver
upload and download a file - Java Beginners
=\""+filename+"\"");
String name = f.getName().substring(f.getName... in eclipse and how to download the same using jsp
Hi Friend,
Try... = file.substring(file.indexOf("filename=\"") + 10);
saveFile = saveFile.substring(0
java a - Java Beginners
java a i will ask
iam using servlets ,in one program my...);
saveFile = file.substring(file.indexOf("filename=\"") + 10);
saveFile... + 1,contentType.length());
int pos;
pos = file.indexOf("filename=\"");
pos = file.indexOf
how to get the next option - Java Beginners
how to get the next option i was getting values from the database it was bulk so i want to keep the next
option how to do in the jsp Hi Friend,
Please visit the following link:
java - Java Beginners
java how to provide security for a folder using java Hi... FileSecurityDemo {
private String fileName = null;
public FileSecurityDemo...(fileName);
if (fileHandler.createNewFile
Java Get Memory
Java Get Memory
In this section of Java Example, you will see how to get the memory size
using Java programming language. This is a very simple example and you
Beginners Java Tutorial
an application using java
program.
Get Environment
Variable... Beginners Java Tutorial
... with the Java Programming language. This
tutorial is for beginners, who wants
java using Stack - Java Beginners
java using Stack How convert decimal to binary using stack in java
Using get method in preferences
Using get method in preferences
... a simple
way for using the get method in preferences. Get is the most useful... :
C:\anshu>javac get.java
C:\anshu>java get
rose
store the image in the oracle using blob and get it back - JDBC
store the image in the oracle using blob and get it back hi
i am... BlobTest {
public void insertBlob(String imageId, String fileName... fis = new FileInputStream(fileName);
ps.setBinaryStream(2, fis
Programming error - Java Beginners
function.js in servlet page using RequestDispatcher please reply me??
Here....")
return false
}
//get the zero-based index of the "@" character... the "@", and the substring after the "@"
// contained a "." char
return true
}
function
Using swing - Java Beginners
Using swing How can one use inheritance and polymophism while developing GUI of this question.
Develop an application that allows a student to open an account. The account may be a savings account or a current account
find and substring
find and substring **import java.io.BufferedReader;
import... String");
String findstring = keybord.readLine();
String substring = keybord.readLine();
findreplace(findstring,substring);
} catch(IOException io
Java get Next Day
Java get Next Day
In this section, you will study how to get the next day in java using
Calendar class.
In the given example, we have used the class
Using a JFileChooser - Java Beginners
Using a JFileChooser Dear Editor,
How to select an image from JFileChooser & add the image in mySQL5 database?
Thanks in advanced.
Regards,
Melvin. Hi Friend,
Try the following code:
import java.sql.
CORE JAVA get middle name
CORE JAVA get middle name hello sir...how to get middle name using string tokenizer....???
eg..like name ANKIT it will select only K...!!!!
The given code accepts the name from the console and find the middle
small java project - Java Beginners
small java project i've just started using java at work and i need to get my self up to speed with it, can you give me a small java for beginners project to work on.
your concern will be highly appreciated
Java get windows Username
Java get windows Username
In this section, you will learn how to obtain the window's... by using the System.getProperty(). This will provide all the properties
FTPClient : Get System Type
This section explain how to get system type by using method of FTPClient class in java using Stack - Java Beginners
java using Stack How convert decimal to binary using stack in java?
whats the java program or code to that.
Thank you rose india and java experts Hi Friend,
Try the following code:
import java.util.
OOP using Java - Java Beginners
OOP using Java Can you write a Java statement that creates the object mysteryClock of the Clock type, and initialize the instance variables hr,min...);
}
}
For more information on Java visit to :
OOP Using JAVA - Java Beginners
OOP Using JAVA OBJECT ORIENTED PROGRAMMING USING JAVA
(hope guys u will help me please i need your help,thank you so much)
Create a Java program...();
oops.OopType();
}
}
For more information on Java visit to :
http
retrive mails from user using java code - Java Beginners
retrive mails from user using java code how to retrive mails as user "username"???
using java
for ex:
class Mail{
private String subject... = "test";
// Get system properties
Properties properties
JSP Code - Java Beginners
", "attachment; filename=\""+filename+"\"");
String name = f.getName().substring...JSP Code can i show list of uploaded files in java then view one...(file.indexOf("filename=\"") + 10);
saveFile = saveFile.substring(0, saveFile.indexOf
Java using arrays - Java Beginners
Java using arrays Write a program to input a possibly integer n(n<=10);followed by n real values into an array a of size n and the program must perform the following:
Display back the values input
Sort the values
How to get Keys and Values from HashMap in Java?
How to get Keys and Values from HashMap in Java? Example program of
iterating... it on console.
Check Java for Beginners... through HashMap using
the entrySet() method of HashMap class. You will be able
JSP using java - Java Beginners
JSP using java I want to generate chart in JSP page.My bar chart code contains the following function
String series1 = "bars";
dataset.addValue(Integer.parseInt(values.get(i)),series1,(String)Values1.get(i
Java get User Home
Java get User Home
In this section, you will study how to get the user home. We...; that is used to store the properties of the system provided by
using the get
java using jsp - Java Beginners
java using jsp i already included those jar file into my library in eclipse.i previously send my JSP code.But it shows error like.
type Exception report
description The server encountered an internal error
Java Get Memory
Java swing - Java Beginners
Java swing Hi,
I want to copy the file from one directory to another directory,the same time i want to get the particular copying filename will be displayed in the page and progress bar also.
Example,
I have 10 files
java - Java Beginners
://
Here you will get lot of examples.
Thanks... of the array. The components of the array are referenced using integer indices from
Java Get File Name
Java Get File Name
....
In order to get the file name, we have provided the path of file 'Hello.txt...;
String fileName= file.getName();
How to get browser time zone?
How to get browser time zone? How to get browser Time Zone and browser date using java concept
java - Java Beginners
java how to get the details railway reservation details and passengers reserved for a particular train using java
string and substring
Java Get Host Name
Java Get Host Name
In this Example you will learn how to get host name in Java. Go through... of a computer
or an IP address using Java application language.
The given program
java swings - Java Beginners
java swings Hi ,
I need the JFileDialog simple example using jbutton.
I need to put the browse option for my tabbed pane screen.
Please send...();
String filename=file.getAbsolutePath();
text.setText(filename);
this.repaint
Comparing strings - Java Beginners
) a substring appears in a string? The code I've written can get it once....");
}
}
}
--------------------------------------------------------------
Read for more information........
import java.io.*;
class Substring
{
public static void main(String a[])throws
Java compilation error - Java Beginners
Java compilation error Hello I am having this problem while compiling
I create a file
the package is com.deitel.ch14;
the filename...\AccountRecord.java
I save AccountRecord.java in ch14
However I get an error saying
Java get middle character
Java get middle character
In this tutorial, you will learn how to get middle... of the word entered by the user
using Java. If you have a word string 'company' and you want to get the
character 'p' from the word, then you can
java code to get the internet explorer version
java code to get the internet explorer version Hi,
Could u plz help me in finding the internet explorer version using java code...i have the javascript but i want this in java code...
Thanks,
Pawan
how to get popup fro servelt to jsp by using ajax........
how to get popup fro servelt to jsp by using ajax........ how to get... between the database and servelet using java script:ConnectionString.java...());
}
}
}
connection between the database and servelet using java script
java swings - Java Beginners
java swings Hi,
I have array of values in the jcombobox.
I need...=");
substringofqueuetemp = substr
.substring(startpointnextlevel + 5..." });
} catch (Throwable t) {
t.printStackTrace();
}
String fileName
java swings - Java Beginners
java swings Hi,
I have array of values in the jcombobox.
I need... = substr.indexOf("name=");
substringofqueuetemp = substr
.substring(startpointnextlevel... fileName = "c:\\qmanager.txt";
try {
StringBuilder sb = new StringBuilder
java - Java Beginners
+2+4+7+14 )
Q13 - write a program in java using loops and switch statements...-conversion/how-to-get-primenumber.shtml
I hope this will help you
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/68563 | CC-MAIN-2015-22 | refinedweb | 2,763 | 57.27 |
many answers,
A psp is compiled into a python servlet so you can call its method direct=
ly:
htmlHtmlBody =3D self.callMethodOfServlet(servlet, 'writeHTML')
or you can define a method in PSP w/
<psp:method name=3D"htHelpBody">
and then call it it via:
htmlHtmlBody =3D self.callMethodOfServlet(servlet, 'htHelpBody')
But I would use Cheetah.
If you load the template via something like:
def __init__(self)
self.t =3D Template(file)
def writeContent(self):
t.title =3D "page title"
self.writeln(t)
I am pretty sure that the template will record the timestamp of the=20
source file and automatically recompile itself if the date changes.
But why not just have the first servlet just call itself. If hte form=20
is verified then throw a redirect back to the browser. via :
if formIsValid:
self.session().setValue['username'] =3D usernameFromFrom
self.response().sendRedirect('page2')
then in page2 just read the session data. This way page2 can be a pure=20
PSP or cheetah file.
-Aaron
Michel Thadeu wrote:
>Hi guys,
>
>I think I can=B4t but I won=B4t kill anyone asking :), is there a way to=
>import from a psp?
>
>See my problem, to implment forms I create 2 pages, a
>FormDoSomething.py and a DoSomething.py, the first show the form
>presentation, the second verifies the data, if the data is wrong, shows
>a error message on the top of the page and then show the
>Form.writeContent(self), else do something with the data and shows a ok
>
>I have an example in
>
st
>
>The code for the example is above:
>
># file FormTest.py
>from Modelo import Modelo
>class FormTest(Modelo):
> def title(self):
> return 'The test form'
> def writeContent(self):
> self.writeln('''\
><form method=3D"post" action=3D"Test">
><p>Name: <input type=3D"text" name=3D"name" value=3D"%s"></p>
><p><input type=3D"submit" value=3D"Send Data"></p>
></form>''' % self.request().value('name', ''))
>
># file Test.py
>from FormTest import FormTest
>class Test(FormTest):
> def validarRequest(self):
> if self.request().value('name', '')!=3D'michel':
> self.erro.append('The name must be michel!')
> def writeContent(self):
> if self.erro:
> FormTest.writeContent(self)
> else:
> self.writeln('<p class=3D"sucesso">Hi michel!</p>')
>
>The form Test only shows the form presentation, I could derivate the
>Test class from a TestSucess class where there is a sucess message, so
>I can show the FormTest.writeContent or TestSucess.writeContent, and
>these 2 files could be generated by a designer, I only will mantain the
> data control... The designer could do this with psp or cheetah (I
>don=B4t want to need to compile the cheetah code each change :) or even
>zpt...
>
>Sorry if I couldn=B4t be more especific, but my english is too bad.
>Thanks for any help...
>
>.
You'll either have to use script, or have "more" and "fewer" actually
post the form via different actions.
When you post, you can use python code to add/remove fields to/from your
form, seed the form again, and re-paint it.
If your page is light enough, it may not be a big deal to make the
server round-trip.
We've done similar things just with CSS and a tiny tiny bit of script.
You can put about 10 form elements in there, but set their style to
"display:none;" and then with script, you can set the relevant one in
the DOM to "display:block;" as needed. It works ok.
CLIFFORD ILKAY wrote:
>
> I am trying to design a search form that when the user clicks on a
> "More" button, the search fields will be duplicated and added. ...
Hi,
I am trying to design a search form that when the user clicks on a "More"
button, the search fields will be duplicated and added. Think of the search
interface in Eudora, for example. A little ASCII art might illustrate the
point.
FieldsList CriteriaList ValueField
---------- ------------ ----------
firstName contains
lastName is
city beginsWith
province endsWith
country is not
sounds like
___________ ____________ ____________
|moreButton| |fewerButton| |searchButton|
------------ ------------- --------------
If the user wants to search for all people with the last name that sounds
like Smith (e.g. Smythe, Smyth, Smith) in the city of Toronto, the user
would select lastName and sounds like from the select lists, type Smith in
the value field, click on the More button, select city and is from the
select lists, type Toronto in the value field and hit the Search button.
Once the Search button is clicked, it is relatively easy to capture the
values in the form to generate a SQLObject query string. Once I have this
working, I would be happy to share it with the world. Any ideas on how to
do this with FormKit without having to resort to JavaScript or would I have
to use JavaScript? Any sample code out there?
Regards,
Clifford Ilkay
Dinamis Corporation
3266 Yonge Street, Suite 1419
Toronto, Ontario
Canada M4N 3P6
Tel: 416-410-3326 | https://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200406&viewday=29 | CC-MAIN-2017-39 | refinedweb | 822 | 73.27 |
pi3d 1.9
pi3d OpenGLES2 3D graphics library
Introduction to Pi3D
Pi3D written by Tim Skillman, Paddy Gaunt, Tom Ritchford Copyright (c) 2014
There’s plenty of 3D code flying around at the moment for the Raspberry Pi, but much of it is rather complicated to understand and most of it can sit under the bonnet!.
v1.7 release of the pi3d module adds support for running on platforms other than the raspberry pi (X on linux) and runs with python 3 as well as 2 The OpenGLES2.0 functionality of the Raspberry Pi is used directly or via mesa included with this code and experiment with them ..
If you are reading this document as the ReadMe in the repository then you can find the full version of the documentation here
Demos for Pi3D are now stored at
N.B. Shaders are now integrated into programs differently. The syntax used to be:
myshader = pi3d.Shader('shaders/uv_flat')
and is now:
myshader = pi3d.Shader('uv_flat')
this will use the default shaders ‘bundled’ with the package. The old format will be interpreted as ‘look in a subdirectory of the directory where the demo is being run from.’ This is probably what you would do if you rolled your own special shaders for your own project.
Download, Extract and install
If you have pip installed you should be able to open a terminal and type:sudo pip install pi3d
Otherwise you can download from and extract the package then in a terminal:sudo python setup.py install
(or you may need to use python3) this will put the package into the relevant location on your device (for instance /usr/local/lib/python2.7/dist-packages/) allowing it to be imported by your applications.
The latest code can be obtained from where there is a Download ZIP link, or you can install git then clone using git clone this git method will give you the option to update the code by running, from the pi3d directory git pull origin master
Setup on the Raspberry Pi
Memory Split setup
Although most demos work on 64MB of memory, you are strongly advised to have a 128MB of graphics memory split, especially for full-screen 3D graphics. In the latest Raspbian build you need to either run sudo raspi-config or edit the config.txt file (in the boot directory) and set the variable gpu_mem=128 for 128MB of graphics memory.
Install Python Imaging
Before trying any of the demos or Pi3D, you must download the Python Imaging Library as this is needed for importing any graphics used by Pi3. The original Imaging library is no longer really maintained and doesn’t run on python_3. The better equivalent replacement is Pillow however a couple of issues relating to text vertical alignment will not be corrected unti the Oct2013 issue. To install Pillow you need to:
sudo apt-get install python-dev python-setuptools libjpeg-dev zlib1g-dev libpng12-dev libfreetype6-dev sudo apt-get install python-pip sudo pip install Pillow ...
If you miss any of the dependent libraries and need to add them later you will have to pip uninstall then re pip install
For python_3 support the first above will provide the required graphics libraries used by Pillow but you will need to swap to python3-dev and python3-setuptools also pip is different:
sudo apt-get install python3-pip sudo pip-3.2 install Pillow
If you do not intend to run python_3 and need nicely aligned text strings over the short term you can install the old PIL: on the terminal, type:
sudo apt-get install python-imaging
If you later switch to Pillow you will need to sudo remove python-imaging first
[To run on Arch linux you will need to install:
pacman -S python2 pacman -S python-imaging pacman -S python2-numpy
this worked for me. Presumably you would need the pacman equivalent of all the installations outlined above for Pillow and python_3]
Setup on alternative Linux platforms
The machine will need to have a gpu that runs OpenGL2+ and obviously it will need to have python installed. If the Linux is running in vmware you will need to ‘enable 3d acceleration’. You need to install libraries that emulate OpenGLES behaviour for the gpu:
sudo apt-get install mesa-utils-extra
This should install libEGL.so.1 and libGLESv2.so.2 if these change (which I suppose they could in time) then the references will need to be altered in pi3d/constants/__init__.py
The installation of PIL or Pillow should be the same as above but you are more likely to need to manually install python-numpy or python3-numpy
Editing scripts and running
Install Geany to run Pi3D
Although you can use any editor and run the scripts in a terminal using python, Geany is by far the easiest and most compatible application to use for creating and running Python scripts. Download and install it with:
sudo apt-get install geany xterm
Optionally, install tk.
Some of the demos require the tk graphics toolkit. To download and install it:
sudo apt-get install tk
Load and run
Either run from the terminal python3 ~/pi3d/demos/Minimal.py or load any of the demos into Geany and run (using the cogs icon). As a minimum, scripts need these elements in order to use the pi3d library:
import pi3d DISPLAY = pi3d.Display.create(w=128, h=128) shader = pi3d.Shader("2d_flat") sprite = pi3d.ImageSprite("textures/PATRN.PNG", shader) while DISPLAY.loop_running(): sprite.draw()
But.. a real application will need other code to actually do something, for instance to get user input in order to stop the program!). subdirectory but still run. If you write a program in the top directory then you will need then you need to prefix classes and functions with pi3d. And you are loading a large number of variable names which might cause a conflict, isn’t as explicit and is less tidy (in the non- superficial sense)! A third way to import the modules would be to use from pi3d import * this saves having to use the pi3d. prefix but is much harder to debug if there is a name conflict.
Documentation
see
Please note that Pi3D functions may change significantly during its development.
Bug reports, comments, feature requests and fixes are most welcome!
Please email on pi3d@googlegroups.com or contact us through the Raspberry Pi forums or on!
PLEASE READ LICENSING AND COPYRIGHT NOTICES ESPECIALLY IF USING FOR COMMERCIAL PURPOSES
- Downloads (All Versions):
- 0 downloads in the last day
- 37 downloads in the last week
- 942 downloads in the last month
- Categories
- Development Status :: 5 - Production/Stable
- Programming Language :: Python :: 2
- Programming Language :: Python :: 3
- Topic :: Education
- Topic :: Games/Entertainment :: First Person Shooters
- Topic :: Games/Entertainment :: Simulation
- Topic :: Multimedia :: Graphics :: 3D Modeling
- Topic :: Multimedia :: Graphics :: 3D Rendering
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: TomSwirly, paddywwoof
- DOAP record: pi3d-1.9.xml | https://pypi.python.org/pypi/pi3d/1.9 | CC-MAIN-2016-18 | refinedweb | 1,160 | 56.69 |
ActionScript 3.0 - Display Object Primer - Part 1
:: June 25, 2007:: June 25, 2007
One important change in Flash CS3 from a programming standpoint is about the usage of the MovieClip properties like attachMovie, createEmptyMovieClip et all. This was one topic in which I fumbled a bit to understand when I started with AS3 and thought would write a series of simple tutorials which will make things more clear for people starting with ActionScript 3.0 and specifically for those who use MovieClip methods a lot in their day to day work.
Some important facts to know of before we start with are:
- API's like attachMovie, createEmptyMovieClip, removeMovieClip, and duplicateMovieClip are either removed or replaced with new API's.
- Most of the MovieClip related API's are present with the flash.display.DisplayObject and flash.display.DisplayObjectContainer packages.
- All of the MovieClip related operations are based on a container based approach in which you have a DisplayObjectContainer and you can have DisplayObjects like MovieClips and Sprites in them.
- Depths are auto managed in ActionScript 3.0 and hence you need not manually specify and manage depths of the DisplayObjects yourselves but rather Flash Player does it for you.
- Still, if you want to have greater control of the depth yourself, you can use a method named addChildAt() method to manually specify the level in which you want the child to be added.
- addChild API is the equivalent of attachMovie or createEmptyMovieClip in AS 2.0.
Now, lets get to the basics of DisplayObject programming. I will try to illustrate the concept using a simple diagram shown below:
As I told before, everything in AS3 is based on a container approach. The main timeline (root) can be considered as the main container and it in-turn can contain several other containers within it. In this case we are going to create two containers within the main container and name them as "Container 1" and "Container 2".
To add the two sub containers to the main timeline, you create a container by using the code:
var container1:YellowContainer = new YellowContainer();
Note that YellowContainer is the linkage name and the class object of a yellow container movieclip which I have in the library.
Now you can specify some properties for this container, in this case it is the X and Y values:
container1.x = 200;
container1.y = 100;
To name this container (equivalent of giving an instance name to a MovieClip in AS 2.0):
container1.name = "Container 1";
Finally to add this container to the main timeline you use:
this.addChild(container1);
in this case the word "this" refers to the main timeline. The code above is equivalent to writing:
_root.attachMovie("container1");
Now we have added a container to the main timeline, the next step would be to add some elements to this container. We do this in the same way we did while creating the main container:
/);
In this case, GreyBall is a MovieClip which I have in my library. Note that, at this stage the Grey Ball gets attached to the registration point of the YellowContainer and not to the stage. This is accomplished by the script:
container1.addChild(gb);
which indicates that this has to be added to the container1.
In a similar fashion you can add other elements to each of the container. The complete code which I used is here:
import flash.display.*;
// Create two containers and position them
var container1:YellowContainer = new YellowContainer();
container1.x = 200;
container1.y = 100;
// Give a name to the container
container1.name = "Container 1";
// Add it to the stage
this.addChild(container1);
var container2:GreenContainer = new GreenContainer();
container2.x = 275;
container2.y = 100;
container2.name = "Container 2";
this.addChild(container2);
/);
// Add the next ball
var rb:RedBall = new RedBall();
rb.y = 40;
rb.name = "Red Ball";
container1.addChild(rb);
//Add some elements into container 2
var grb:GreenBall = new GreenBall();
grb.name = "Green Ball";
// Add the grey ball to Container 1 - note that the ball gets
// attached to the registration point of the YellowContainer and not on the stage
container2.addChild(grb);
// Add the next ball
var bb:BlueBall = new BlueBall();
bb.y = 40;
bb.name = "Blue Ball";
container2.addChild(bb);
// Now, moving a container will move the children attached with it too
container1.y = 150;
// Getting the child elements within a container
trace(container1.getChildAt(1).name);
trace(container1.getChildAt(2).name);
As you would see from the code and as you play around with it a bit, manipulating the container affects the elements container within it too.
You can download the source code of the file which I have used in this example from here: DisplayObject.zip
** Note **
Its very important to note that in AS3 you cannot set the linkage identifier of an MovieClip like how you do in AS2. Instead, every MovieClip which can be attached is identified by a Class Name like the one shown below:
As you can see from the above image, the identifier box is greyed out and is not available in ActionScript 3.0. When you right click on a MovieClip in your library and select the option "Export for ActionScript", Flash assigns the name of the MovieClip (without spaces) as the Class name. When you click on OK you will be prompted with a warning dialog like this:
If you don't have (or don't want to) a class associated with the MovieClip you can click on the OK button and Flash internally would create a associated class for the MovieClip automatically. You can also specify the base class which you want your MovieClip to be associated with. The default is flash.display.MovieClip and if you want you can change this to something like flash.display.Sprite.
Doing this procedure is equivalent to creating a class with the name GreenContainer with a dummy constructor and associating it with the MovieClip as a class.
For first time users this may be very confusing and misleading. The first time I learnt it I wasted close to half hour trying to figure out how to use this and what happens internally when one does that.
In the next part of the tutorial I will cover more on Sprites, DisplayObject manipulation and depth management. See you until then ! | http://tutorials.lastashero.com/ | crawl-002 | refinedweb | 1,047 | 63.59 |
Interrupts?
- LastCaress
So is there any way to use external interrupts in flow.m5stack.com ? (using the micropython code)? I tried using the methods for micropython but irq is not an attribute for pin objects.
Thanks!
- LastCaress
@lastcaress I used the code for the button pressed, but don't know if this is accurate?
I don't understand interrupts myself but I believe that if you use the button loop instead of the system loop that that is an interrupt.
Please bare with me as I'm still documenting and exploring UIFlow.
flow.m5stack.com, buttonA, B, C has been implemented using interrupts, or try:
from m5stack import m5button button = m5button.register(_BUTTON_PIN) | http://forum.m5stack.com/topic/815/interrupts/1 | CC-MAIN-2019-18 | refinedweb | 113 | 56.35 |
3.14 ioctl Function
The ioctl function has always been the catchall for I/O operations. Anything that couldn't be expressed using one of the other functions in this chapter usually ended up being specified with an ioctl. Terminal I/O was the biggest user of this function. (When we get to Chapter 11 we'll see that POSIX.1 has replaced the terminal I/O operations with new functions.)
#include <unistd.h>
/* SVR4 */ #include <sys/ioctl.h>
/* 4.3+BSD */ int ioctl(int filedes, int
request, ...); Returns: -1 on error, something else if OK
The ioctl function is not part of POSIX.1. Both SVR4 and 4.3+BSD, however, use it for many miscellaneous device operations.
The prototype that we show corresponds to SVR4. 4.3+BSD and earlier Berkeley systems declare the second argument as an unsigned long. This detail doesn't matter, since the second argument is always a #defined name from a header.
For the ANSI C prototype an ellipsis is used for the remaining arguments. Normally, however, there is just one more argument, and it's usually a pointer to a variable or a structure.
In this prototype we show only the headers required for the function itself. Normally additional device-specific headers are required. For example, the ioctls for terminal I/O, beyond the basic operations specified by POSIX.1, all require the <termios.h> header.
What are ioctls used for today? We can divide the 4.3+BSD operations into the categories shown in Figure 3.7.
Figure 3.7. 4.3+BSD ioctl operations.
The mag tape operations allow us to write end-of-file marks on a tape, rewind a tape, space forward over a specified number of files or records, and the like. None of these operations is easily expressed in terms of the other functions in the chapter (read, write, lseek, etc.) so the easiest way to handle these devices has always been to access their operations using ioctl.
We use the ioctl function in Section 11.12 to fetch and set the size of a terminal's window, in Section 12.4 when we describe the streams system, and in Section 19.7 when we access the advanced features of pseudo terminals. | http://www.informit.com/articles/article.aspx?p=99706&seqNum=14 | CC-MAIN-2019-22 | refinedweb | 376 | 68.47 |
A server response in REST is often an XML file; for example,
<parts-list> <part id="3322"> <name>ACME Boomerang</name> <desc> Used by Coyote in <i>Zoom at the Top</i>, 1962 </desc> <price currency="usd" quantity="1">17.32</price> <uri></uri> </part> <part id="783"> <name>ACME Dehydrated Boulders</name> <desc> Used by Coyote in <i>Scrambled Aches</i>, 1957 </desc> <price currency="usd" quantity="pack">19.95</price> <uri></uri> </part> </parts-list>...
17 comments:
So how do you request different types of responses? For example, the same REST service can serve up both XML and CSV. Where do you stick the requested type in the request?
Hi mjuchems,
The best alternative is probably to use a different URL for each -- e.g., a different path on the server. For example, vs. (where the shorter format is used for the "default", or preferred, output format -- note the "csv" in the second URL).
Proper server configuration (e.g., in Apache) can help you implement this using a single server script, which gets the extra path data as a parameter.
Another alternative is passing the format as parameter directly; something like, the default is XML, but add "?format=csv" to get CSV output.
Hi Dr. Elkstein, GREAT tutorial!
On the question by mjuchems, isn't it possible to use the same URL for different formats, and let the client negotiate the content type by request headers or something like that?
Hi Dario,
Indeed, it is possible to specify the desired output type in the HTTP "Accept" request header. However, personally, I prefer (for read-access-only REST requests) to be able to issue the request directly from the browser. I find that this makes debugging significantly easier. And since, in the browser, you can't easily change the HTTP headers, using a URL argument is the alternative I prefer.
Actually one thing that comes to my mind is the use of MIME types as part of URL, just the same way as in Accept header. You can make it optional and use that whole thing out of anywhere you want including web browser.
I thought that one constraint from the uniform interface feature of REST is that the messages have to be self-describing. As I understand it, that means, one has to include at least (a) reference(s) to the namespace(s) that explain(s) the utilized terms (cf.).
Hello zazi,
Unlike SOAP-based web services, REST is, by definition, not very rigid. This means that, when you write a REST service, you can choose to include references to said namespaces and definitions, but nothing says you have to. Of course, a service that does offer such references can be more useful, or easier to use. But it's certainly not a strict requirement.
Note spelling error in first sentence.
Thanks, Dave -- corrected.
You mention that using XML affords type safety, how is this achieved? XML does not know anything of type, the attributes are string values as well as the node values.
How do you know this is a number?
14
Hello Mader,
To find out about typing in XML, read about schema specification mechanisms, such as XSD, RelaxNG, etc.
I do not understand:
One option is not acceptable as a REST response format, ...., we find that HTML is in fact the most common REST response format...
the last paragraph.
Hi Antanas,
First, I explicitly limited my preference for standard headers to read-only requests; so your comment about put/delete requests is unrelated.
And, in some contexts, yes, the ability to test things from a browser is important. When a client calls to complain that things don't work as expected, it's very handy to be able to ask him to open that page from a browser and send me the output, for example.
Great Tutorial even in 2013
Hi Dr.Elkstein, great Tutorial, first of all.
Can you point out the disadvantages of using XML,CSV and JSON. I believe you only mention advantages.
Hi Piloto,
XML, CSV, and JSON all have their disadvantages, obviously. The most clear disadvantage is shared by all three, though: they are text formats. This is very beneficial in most scenarios: ever so much easier to read, understand, and debug these formats, compared to binary formats. However, when very large amounts of traffic are involved, using binary formats can both save bandwidth and reduce CPU processing required to parse responses. For example, Google use an internally-developed format called Protocol Buffers for inter-server communications, mostly for this exact reason. Protocol buffers (available in open source) were explicitly designed to be smaller and faster than XML.
Please clarify "One option is not acceptable as a REST response format... " as we know HTML is used more frequently for response. | http://rest.elkstein.org/2008/02/rest-server-responses.html | CC-MAIN-2014-10 | refinedweb | 800 | 65.22 |
Boost 1.67.0 was released over the weekend, and there are incompatibilities introduced by it for both GNU Radio and UHD -- both release and current GIT master of each. Volk latest release as well as current GIT master both seem to build and test cleanly using this new Boost version.
The OS-independent issues seem to arise from changes to `boost::posix_time` and how the types it now uses for keeping track of various time durations -- I think integer now versus floating point prior. There is a public PR for UHD already open < > that I'm looking at to see if it addresses the new Boost issues robustly. For UHD on Mac OS X / any Darwin kernel there's also an unfortunate incompatibility added between the GPSD and Boost APIs: the new `boost::posix_time` now includes <mach/mach_time.h>, which (indirectly) defines "policy_t" at the top-level namespace (typedef to `int`) ... as does <gpsd.h> (typedef to a struct). I'm not yet sure what a good fix for this issue is beyond disabling GPSD. Cheers! - MLD _______________________________________________ Discuss-gnuradio mailing list Discuss-gnuradio@gnu.org | https://www.mail-archive.com/discuss-gnuradio@gnu.org/msg66261.html | CC-MAIN-2018-43 | refinedweb | 187 | 62.58 |
Web ServicesWeb Services means structuring operations and data according to SOAP specification in an XML document. SOAP (Simple Object Access Protocol) is a way for a program running in one kind of operating system or language (e.g. Windows or .Net) to communicate with a program in the same or another kind of an operating system or language (e.g. Linux or Java) by using the HTTP and. Once a Web Service is implemented, a client sends a message to the component as an XML document and the component sends an XML document back to the client as the response.
InteroperabilityThe goal of this article is to show to integrate Microsoft's .NET Platform Web service with other platform like Java and ASP. The samples demonstrate basic techniques and principles that are used to cross-platform interoperability via Web services. You start off by writing code for your Web service. After you compile, the Web services framework uses your code to dynamically generate a WSDL file. When clients request your Web service definition, they retrieve the generated WSDL and then create the client proxy based on that definition.
The example here contains a web services in .Net C#, which returns the data from Customer table of Northwind database SQL Server in XML format. Data is retrieved through Dataset and then converted to XML. To display the data in the client side browser it is accessed by .Net, Java and classic ASP.
Through .Net there are two ways to display data in browser from web service. First it can be convert back to Dataset using XMLDataDocument and Schema. And Second one by using XMLDocument and style sheet.
Visual Studio automatically creates the necessary files and includes the needed references to support an XML Web service, to create a web services in C# in Visual Studio.NET.
1. On the File menu. File->New->Project.
2. In the New Project dialog box, select either the visual C# Projects folder. Select the ASP.NETWeb Service icon.
3. Enter the address of the Web server on which you will develop the XML Web service and specify TestWebService as the directory name, such as "". Click OK to create the project.
4. When you create an XML Web service, Service1.asmx will be created automatically. Change the name of Service1.asmx to ThakurService.asmx.
5. Specify the properties to WebService attribute. The namespace property you specify is used within the WSDL document of the Web service to uniquely identify the callable entry points of the service. Name is service name to display in help page. Description property of this attribute is included in the Service help page to describe the services usage. As shown in example here:[WebService (Namespace = "", Name = "ThakurService", Description = "Test Service By Anand Thakur")]
//CODE ThakurService.asmx
6. Add one public method GetCustomerData with return type string. Add WebMethod attribute to this method. The WebMethod attribute instructs .NET to add the code necessary to marshal calls using XML and SOAP. The MessageName property of the WebMethod attribute enables the XML Web service to uniquely identify overloaded methods using an alias. Unless otherwise specified, the default value is the method name. The Description property of the WebMethod attribute supplies a description for an XML Web service method that will appear on the Service help page
7. Add the necessary code to retrieve the data from Northwind.Customers table. From dataset return XML using GetXML method as shown in code. 8. Run the web service. You will see the following output shown in figure. You can see the service as well as web method name and description in output and link to service WSDL. You can further explore the web method to see method SOAP and output.
9. Now our web service is complete and ready to serve clients. We will consume the service from Java, .net and classic ASP client.
Note: if you want to browse web service directly through browser without VS.net. Then add following entries in web.config within <system.web>.
<webServices> <protocols> <add name="HttpSoap"/> <add name="HttpPost"/> <add name="HttpGet"/> </protocols></webServices>
To test the above web services with .net client.
1. Open VS.net. On the File menu, click New->Project.2. The New Project dialog box appears. 3. In the Project Type pane, click Visual C# Projects, and select ASP.NET Web Application.4. Name your application by changing the default name in the Location box (such as as in example) Click OK. 5. WebForm1.aspx and WebForm1.aspx.cs will be created automatically.WebForm1.aspx - Contains the visual representation of the Web Form and WebForm1.aspx.cs - The code-behind file that contains the code for event handling.6. In the solution explorer, right click on project and select Add Web Reference from context menu. Add Web Reference window will be opened to enter the web service URL. Enter the URL to web service (). Enter the web reference as localhost and click on Add Reference button. Now web references added to the project tree in solution explorer. You can see WSDL, DISCO and .cs proxy class for web services.7. Now we ready to consume the service. Here we are consuming our service by two ways. One through dataset and another one using XMLDocument.8. To consume the service through dataset. Add dataset schema to client web application. Right click project in solution explorer and select Add->Add New Item->XML Schema. Enter the file name as Products.xsd. And paste the code below in the file.
//CODE Products.xsd
9. Now open the WebForm1.aspx.cs file and add following code in Page Load Event.
//Code WebForm1.aspx.cs
10. Now right click on WebForm1.aspx and select Set as Start Page. And run the application. If everything works fine, you will see the following output in browser.
11. To get same output through XMLDocument. Add one more Web Form (WebForm2.aspx)12. Right click project in solution explorer and select Add->Add New Item->XSLT file. Name the file as Products.xsl. And paste the following code in .xsl file.
//CODE Products.xsl
13. Now open the WebForm2.aspx.cs file and add following code in Page Load Event. Set WebForm2.aspx as start up page and run the application. You will see the same output as in dataset case.
//CODE WebForm2.aspx.cs
Java/JSP Client
Eclipse and TOMCAT is used in this example to make a JSP client with the help of Apache SOAP toolkit (). You can get the TOMCAT plug-in for Eclipse from. Assumption here is that you are bit familiar with JSP and Eclipse.
1. Click on File->New ->project ->Tomcat Project.2. A web project will be created. Right click on project and select properties. In property window select Java Build Path and click on Libraries. Click on External Files Button and add soap.jar, activation.jar, mailapi.jar and xerces.jar reference to project. You can download these jar file from Apache/Sun site.3. In order to create SAOP proxy, right click on project in Project Explorer and select New->Package. Name the package as WebServicesClient. New package will be created. Now add new class named MsgBody.java in this package. And paste the following code in class file.
//CODE MsgBody.Javapackage WebServicesClient;import java.io.*;import org.apache.soap.*;import org.apache.soap.util.*;import org.apache.soap.util.xml.*;import org.apache.soap.rpc.SOAPContext;public class MsgBody extends Body { public void marshall (String strEncodeStyle,Writer msgSink,NSStack nameSpaceStack, XMLJavaMappingRegistry registry,SOAPContext context) throws IllegalArgumentException, IOException { // Start Element msgSink.write ('<'+"SOAP-ENV"+':'+ Constants.ELEM_BODY+'>'+ StringUtils.lineSeparator); // End Element msgSink.write ("</" + "SOAP-ENV" + ':'+ Constants.ELEM_BODY+'>'+ StringUtils.lineSeparator); nameSpaceStack.popScope (); }}
4. Add one more class in package named WebServiceProxy.java with following code in it. As you can see in this code URL set to web service. And SOAP URI set to. SOAP URI contains the Namespace of web service we have set in web service plus web method MessageName.
//CODE WebServiceProxy.javapackage WebServicesClient;import java.net.*;import org.apache.soap.*;import org.apache.soap.messaging.*;import javax.activation.*;public class WebServiceProxy { private URL m_url = null; private String m_soapUri = ""; private Message m_strMsg_ = new Message (); private Envelope m_envelope = new Envelope (); private DataHandler m_strReturnMsg = null; public WebServiceProxy () throws MalformedURLException { this.m_url = new URL (""); } public synchronized void setWebServiceURL (URL url) { this.m_url = url; } public synchronized URL getWebServiceURL () { return this.m_url; } public synchronized String GetCustomerData () throws SOAPException { String strReturn = ""; if (this.m_url == null) { throw new SOAPException (Constants.FAULT_CODE_CLIENT,"A URL not specified"); } this.m_soapUri = ""; MsgBody ourBody = new MsgBody (); this.m_envelope.setBody (ourBody); m_strMsg_.send (this.getWebServiceURL(), this.m_soapUri, this.m_envelope); try { this.m_strReturnMsg = this.m_strMsg_.receive(); strReturn=this.m_strReturnMsg.getContent().toString(); } catch (Exception exception) { exception.printStackTrace (); } return strReturn; }}
5. Now in the project add file customers.xsl and paste following code on that.
//CODE customers.xsl
6. Project explorer, right click project->new->File. Add new JSP file WebServicesClient.jsp. and paste following code .
//CODE WebServicesClient.jsp <%@ page",">"); byte[] bytes = str.getBytes("UTF8"); ByteArrayInputStream bais = new ByteArrayInputStream(bytes); xmldoc = db.parse(bais); } catch(Exception e) { } //Get the xsl file String virtualpathLoginXSL = "customers.xsl"; String strRealPathLoginXSL = pageContext.getServletContext().getRealPath(virtualpathLoginXSL); StreamResult htmlResult = new StreamResult(out); StreamSource xslSource = new StreamSource(strRealPathLoginXSL); TransformerFactory tf = TransformerFactory.newInstance(); Transformer tfmr = tf.newTransformer(xslSource); //Parse XML to generate the html tfmr.transform(new javax.xml.transform.dom.DOMSource(xmldoc),htmlResult); } catch(Exception e) { out.println("Error While Calling Web Services"); }%>
7. Now run the tomacat and access the file WebServicesClient.jsp(e.g WebServicesClient/ WebServicesClient.jsp. You will see the same output as the .net client.
Classic ASP Client
Before you start building ASP client, make sure that Microsoft SOAP toolkit 3.0 and MSXML 4.0 installed in your computer. You can download them from Microsoft site ()1. Open Microsoft Visual Interdev 6.0. File -> New Project -> Visual Interdev Project -> New Web Project. Name the Project (e.g. WebServiceClient).
2. In Project Explorer you will see the project node (e.g. localhost/ WebServiceClient), under which you will see the global.asa. add ASP file by right click on project->Add-> Active Server Page. Name the file as WebServiceClient.asp. And paste the below code in file.
// CODE WebServiceClient.asp<%
ConclusionWeb services are technologies designed to improve the interoperability between the many diverse application development platforms. Web service interoperability goals are to provide seamless and automatic connections from one software application to another. SOAP, WSDL, and UDDI protocols define a self-describing way to discover and call a method in a software application -- regardless of location or platform. We have seen here, how different clients (.net, java and ASP) can access the .net web services by different ways.
Aah! Another bug! Well, it's the life. | http://www.c-sharpcorner.com/UploadFile/ankithakur/WSIntegration02132006041221AM/WSIntegration.aspx | CC-MAIN-2014-15 | refinedweb | 1,790 | 53.37 |
java.lang.Object
org.netlib.lapack.SGEGVorg.netlib.lapack.SGEGV
public class SGEGV
SGEGV is a simplified interface to the JLAPACK routine sgeg routine is deprecated and has been replaced by routine SGGEV. * * SGEGV computes for a pair of n-by-n real nonsymmetric matrices A and * B, the generalized eigenvalues (alphar +/- alphai*i, beta), and * optionally, the left and/or right generalized eigenvectors (VL and * VR). * * A generalized eigenvalue for a pair of matrices (A,B) is, roughly * speaking, a scalar w or a ratio alpha/beta = w, such that A - w*B * is singular. It is usually represented as the pair (alpha,beta), * as there is a reasonable interpretation for beta=0, and even for * both being zero. A good beginning reference is the book, "Matrix * Computations", by G. Golub & C. van Loan (Johns Hopkins U. Press) * * A right generalized eigenvector corresponding to a generalized * eigenvalue w for a pair of matrices (A,B) is a vector r such * that (A - w B) r = 0 . A left generalized eigenvector is a vector * l such that l**H * (A - w B) = 0, where l**H is the * conjugate-transpose of l. * * Note: this routine performs "full balancing" on A and B -- see * "Further Details", below. * * Arguments * ========= * *. * * N (input) INTEGER * The order of the matrices A, B, VL, and VR. N >= 0. * * A (input/output) REAL array, dimension (LDA, N) * On entry, the first of the pair of matrices whose * generalized eigenvalues and (optionally) generalized * eigenvectors are to be computed. * On exit, the contents will have been destroyed. (For a * description of the contents of A on exit, see "Further * Details", below.) * * LDA (input) INTEGER * The leading dimension of A. LDA >= max(1,N). * * B (input/output) REAL array, dimension (LDB, N) * On entry, the second of the pair of matrices whose * generalized eigenvalues and (optionally) generalized * eigenvectors are to be computed. * On exit, the contents will have been destroyed. (For a * description of the contents of B on exit, see "Further * Details", below.) * *. However, ALPHAR and ALPHAI will be always less * than and usually comparable with norm(A) in magnitude, and * BETA always less than and usually comparable with norm(B). * * VL (output) REAL array, dimension (LDVL,N) * If JOBVL = 'V', the leftVL = 'N'. * * LDVL (input) INTEGER * The leading dimension of the matrix VL. LDVL >= 1, and * if JOBVL = 'V', LDVL >= N. * * VR (output) REAL array, dimension (LDVR,N) * If JOBVR = 'V', the rightVR = 'N'. * * LDVR (input) INTEGER * The leading dimension of the matrix VR. LDVR >= 1, and * if JOBVR = 'V', LDVR >= N. * * WORK (workspace/output) REAL array, dimension (LWORK) * On exit, if INFO = 0, WORK(1) returns the optimal LWORK. * * LWORK (input) INTEGER * The dimension of the array WORK. LWORK >= max(1,8*N). * For good performance, LWORK must generally + MAX( 6*N, N*(NB+1) ). * * STGEVC * =N+8: error return from SGGBAK (computing VL) * =N+9: error return from SGGBAK (computing VR) * =N+10: error return from SLASCL (various calls) * * Further Details * =============== * * Balancing * --------- * * This driver calls, real Schur * form[*] of the "balanced" versions of A and B. If no eigenvectors * are computed, then only the diagonal blocks will be correct. * * [*] See SHGEQZ, SGEGS, or read the book "Matrix Computations", * by Golub & van Loan, pub. by Johns Hopkins U. Press. * * ===================================================================== * * .. Parameters ..
public SGEGV()
public static void SGEGV(java.lang.String jobvl, java.lang.String jobvr, int n, float[][] a, float[][] b, float[] alphar, float[] alphai, float[] beta, float[][] vl, float[][] vr, float[] work, int lwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SGEGV.html | CC-MAIN-2013-20 | refinedweb | 579 | 55.54 |
Type: Posts; User: 2kaud
Do you mean file permissions or file attributes? These are quite different.
See
You can't change the buffer size to be less than the size of the console window. Note that there are two sizes - the size of the buffer and the size of the window. The window is a 'picture' of the...
Have fig 1 and fig 2 been transposed?
Interesting article.
I hadn't come across Double Metaphone before. In the 1980's/90's I used soundex extensively when querying data files. The top...
You can't cleanly terminate a thread without assistance from the thread. You can terminate a thread using TerminateThread() - but that has issues. The usual way is to use event which is set by the...
decltype() does only accept one param. In your example
decltype((x == nullptr), get_object(*x))
decltype() has only one param get_object(*x). , is the comma operator so that the...
As vecOfStrs is std::vector<std::string>, the number of chars used for each entry is not the same. Also the data is stored in dynamic memory so sizeof() doesn't give the number of bytes stored for an...
What are you trying to accomplish?? Based upon the provided code:
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include <exception>
When posting code, please use code tags so that the code is readable. Go Advanced, select the formatted code and click #
Friend is a class that stores the data for a friend. It is not a collection...
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
bool getFileContent(const std::string& infileName, const std::string& outfileName, std::vector<std::string>&...
See my post above re PM jamespayne. You can't directly publish an article.
@srilatha0515 If you are interested in submitting articles, I suggest you PM jamespayne
This is default value initialisation. If there's no value between the {} the variable is initialised to the default value for the type of the variable - which is 0 for long. It's good practice to...
What is out_buffer?
I'm not sure what you're trying to achieve. Perhaps you could be more explicit in the requirements? However consider:
#include <iostream>
class Message {
public:
... and also a mix of C++ and C !
[requested changes made]
Use the debugger to trace through the code and watch the variables. When the debugger shows behaviour different from that expected from the design then you have found an issue.
clip is a handle - not a pointer to data. Once you have the handle then you need to obtain a memory pointer from it. For this use GlobalLock(clip). The return value is a pointer to the data. If not...
Well IMO it's far better than something like:
for (auto itr {all_triggers.begin()}; itr != all_triggers,end(); itr = std::next(itr))
(*itr)->process_state_requests (bufs, nframes - 1);
...
all_triggers is a type that supports .begin(), increment (++) and .end() . all_triggers is iterated from .begin() to .end() by incrementing an internal iterator. The iterator is de-referenced and...
gets() has been removed from the standard. use gets_s() instead. As this is C++, why not use std::string?
You have a 'typo':
strcpy(second, word3);
strcpy(second, word2);
Well C++ allows null pointer (nullptr - 0). If a function returns a pointer then checking for nullptr is valid - and probably needed. The question probably isn't about checking a pointer for nullptr,...
VS2017 is no longer supported. The current version is VS2022. Why not upgrade to a supported version?
Have you tried repair before the extension installation in case there's a problem with VS?
References to IT Craft could be taken as advertising - which isn't allowed.
[Also asked here ]
I use the in-built Windows Backup to create a system image. This creates a .vhd file that can. | https://forums.codeguru.com/search.php?s=894dea882a6842e07ad54243dd4ebb28&searchid=22553647 | CC-MAIN-2022-21 | refinedweb | 631 | 69.58 |
A simple JSF2+AJAX example
JSF2 will provide a standard mechanism for adding AJAX capabilities to JSF
applications. Jim Driscoll hasthis
example, but it is a bit odd—the property getter is actually a
mutator. Here is a more run-of-the-mill example. The code is atthe Kenai site for the
upcoming Core JSF 3rd edition in the ch01/login-ajax directory. I used
Eclipse with the Glassfish v3 plugin and the most current JSF2 module in
Glassfish (2.0.0 B8).
We want to process a login and show a welcome message upon success, all
without a page flip.
The bean class is straightforward. (For simplicity, the code don't actually
check the login credentials.)
package com.corejsf;
import javax.faces.model.ManagedBean;
import javax.faces.model.SessionScoped;
@ManagedBean(name = "user")
@SessionScoped
public class UserBean {
private String name = "";
private String password;
public String getName() { return name; }
public void setName(String newValue) { name = newValue; }
public String getPassword() { return password; }
public void setPassword(String newValue) { password = newValue; }
public String getGreeting() { return name.length() == 0 ? "" : "Welcome to JSF2 + AJAX, " + name + "!"; }
}
In the JSF page, we need to include the AJAX JavaScript library, with the
following incantation:
<h:outputScript
(The library name has recently changed. In the Public Review spec, it was
called
ajax.js.)
To avoid nested IDs for the components that are updated asynchronously, use
the
prependId attribute in the form:
<h:form
Give IDs to the
inputText and
outputText
components.
<h:inputText
<h:inputSecret
<h:outputText
Then define the login button as follows:
<h:commandButton
(The function was called
javax.faces.Ajax.ajaxRequest in the PR
spec.)
Note that this is not a submit button. When the button is clicked,
the
onclick handler is executed, but the form data is not posted
back. There is no page flip. The
jsf.ajax.request method makes an
asynchronous request to the server and receives instructions on which
components to update. (Details below.)
The values of the
execute and
render keys are
space-separated ID lists. The components on the
execute list go
through all parts of the JSF lifecycle except for “Render
Response”. Those on the
render list go through “Render
Response”.
The input components must be on the
execute list, so that the
bean's setters are invoked. (This is where Jim's example was a bit confusing.
His “Count” button isn't updating the model. It just forces the
property getter to be invoked.)
Now let's spy on what goes on under the hood. Execute the View Source
command of your browser. (If you use Eclipse, it defaults to using an internal
browser without a View Source command. That is not good. Select Window →
Web Browser → Default System Web Browser from the menu and run the app
again.)
Note the element
<script type="text/javascript" src="">
This is the result of the
outputScript tag. You can spy on the
script by pointing your browser to
It contains a documentation of the
request function that is
more up-to-date than the Public Review spec:
In Eclipse or Netbeans, it is easy to run the app server in debug mode and
set a breakpoint in the bean's getters and setters.
That's how I found out what needs to go to the
execute list.
(In Jim's example, he added a submit button to that list, but it does actually
no good in this case.)
As David Geary and myself were experimenting with different settings, David
questioned whether there was any AJAX going on at all. To settle the question,
I figured out how to set up the TCP monitor in Eclipse. (In Netbeans, this is
much easier, but David says most people he meets prefer Eclipse :-)) Search for
TCP in the Window→Preferences dialog...
Then point your browser to (or whichever port you set
up). You'll see the requests and responses.
For example, here is the response when clicking the Login button.
<?xml version="1.0" encoding="utf-8"?>
<partial-response><changes><update id="out">
<![CDATA[<span id="out">Welcome to JSF2 + AJAX, Cay!</span>]]></update>
<update id="javax.faces.ViewState"><![CDATA[j_id5:j_id6]]>`````</update></changes></partial-response>
As you can see, the response contains instructions how to update the output
field.
Note that the output field must be present in the page. I tried to avoid the
greeting property by using the
rendered attribute:
<h:outputText
That did not work—the AJAX update was not able to add the component
since it didn't exist on the client. (Use View Source to verify that...)
For a chuckle, try a user name of
]]>. With today's version
(2.0.0 B8), it doesn't work. Of course, that's a bug—someone was
insufficiently paranoid
about
CDATA.
What can one learn from all this?
- View Source is your friend
- The debugger is your friend
- The TCP monitor is your friend
With JSF development, you need all the friends you can get :-)
- Login or register to post comments
- Printer-friendly version
- cayhorstmann's blog
- 29620 reads
THANKS!
by jemiller1 - 2010-04-10 21:44All I have to say is THANK YOU for creating these blog posts on JSF. This one helped me out a lot for something that I'm working on. You've posted several others that I've found very helpful also. For example the one about avoiding the JSF API. I was able to clean up a lot of my old JSF 1.1/1.2 code using those tips. Thanks again, and keep up the good work. Also, I'm looking forward to the new addition of your JSF book. I thought it was supposed to be out by now, but, I still don't see it at Amazon.
by driscoll - 2009-01-30 11:50Hi Cay - Thanks for the filed bug on CDATA escaping - that's now fixed (with associated unit tests) in the sourcebase, and the nightly should fix the problem. The next release (post-PR) will fix that bug. As for my (admittedly) odd choice of mixing mutators and accessors, I guess I'm just not as concerned by that as you are - it's not that uncommon for the act of observing data to change it - was it really that confusing, or was it merely annoying? I ask, since if it was confusing (or even non-obvious) to *you*, then I obviously need to change it, since you're pretty knowlegable, and if I can't explain it to you, I don't have a chance with a true beginner. (Also, I used that pattern simply because it was how I had written a Comet example last year, and reused some of that code. I could have just done an accessor to a timestamp function, it would have done the same thing - but it's handier to count the accesses for testing, and I used that demo to model some unit tests I wrote..) Also, when I'm coding up a new demo and need to debug it, I never use TCP monitoring - I usually just use Firebug in Firefox to set a breakpoint in the jsf.ajax.response funciton, and look at the contentText property of the xml returned - that'll give you most of what you're looking for. Of course, I also often end up with a breakpoint in the JSF code itself when debugging things like the CDATA bug you reported, but I don't think that's necessary for most folks writing code, unless I've left a bug in there. Jim Driscoll
Best selling Author on Servlets and JSP Marty Hall to Keynote
by shaguf5575 - 2010-03-07 22:07Hey | https://weblogs.java.net/blog/cayhorstmann/archive/2009/01/a_simple_jsf2aj.html | CC-MAIN-2014-10 | refinedweb | 1,277 | 64 |
sin man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
sin, sinf, sinl — sine function
Synopsis
#include <math.h> double sin(double x); float sinf(float x); long double sin s sine of x.
If x is NaN, a NaN shall be returned.
If x is ±0, x shall be returned.
If x is subnormal, a range error may occur
and x should be returned.
If x is not returned, sin(), sinf(), and sinl() shall return an implementation-defined value no greater in magnitude than DBL_MIN, FLT_MIN, and LDBL_MIN, respectively.
If x is ±Inf, a domain error shall occur, and a NaN
Taking the Sine of a 45-Degree Angle
#include <math.h> ... double radians = 45.0 * M_PI / 180; double result; ... result = sin(radians);
Application Usage.
Rationale
None.
Future Directions
None.
See Also
asin(), feclearexcept(), fetestexcept(), isnan()in(3p), cos(3p), math.h(0p). | https://www.mankier.com/3p/sin | CC-MAIN-2019-04 | refinedweb | 177 | 59.7 |
This experienced professionals who end up not knowing where to apply each of these types of lists. In fact they are all very similar, after all they implement the same interface (List). They all possess the same methods but not the same implementations of those methods. It is mainly in the performance that we can see the main difference.In the following sections we will explain how to use each of the lists and finally point out the main differences between them, which is the main objective of this article.
ArrayList
Let us start with the most known and used, ArrayList. This type of list is implemented as an array that is dynamically scaled, ie whenever it is necessary to increase its size by 50% the size of the list. Say you have a list size of 10 and its size will increase to 15 automatically when an add operation happens. Also the ArrayList allows elements to be accessed directly by the methods get () and set (), and added through add () and remove ().
Example Listing 1 : Using ArrayList
package net.javabeat; import java.util.ArrayList; import java.util.Iterator; public class Main { public static void main(String args[]) { ArrayList<Integer> al = new ArrayList
In the above code we notice that the ArrayList does not remove duplicate elements, and we can still access any element directly through its index, but everything has a cost and which we will see later.
ArrayList starts with a fixed size, which increases as needed, but the cost of this increase is high, because a copy is made of the current array to a new array with a new size, then imagine an array with 10 million elements that will be copied to a new array to create only 5000 elements? Indeed it is a high cost. So it is highly advisable that you start your Array with a number of elements that suits your current goal, without the need for dynamic creation of new spaces, or if you know you’ll have to store 300-400 objects in an Array , set 500.
There is a further very important point about ArrayList: These are not synchronized, hence are not thread-safe, ie, if your application needs to work as thread-safe at some point where a list is required, then discard ArrayList unless you take care of thread safety explicitly, which is obviously not correct way of doing.
Vector
From the point of view of API, or the way it is used, ArrayList and Vectors are very similar, you can say they are same. If you do not know in depth the concept of Vector and ArrayList both are used as if they were the same. See in Listing 2 is an example of that.
Example Listing 2 : Using Vector
package net.javabeat; import java.util.Iterator; import java.util.Vector; public class Main { public static void main (String args []) { Vector<Integer> al = new Vector
You can see that in Listing 2: we have used the same structure as in Listing 1, changing only the list of ArrayList to Vector, but the rest of the code remains the same, and the output will also be identical.
So what is the difference between Vector and ArrayList?
- First let’s talk about the fact that Vector is synchronized and ArrayList is not. This means that if you have an application that needs to be thread-safe at some point, use Vector and you will be guaranteed of thread safety.
- Another important point is the dynamic allocation of the Vector, which is different from the ArrayList. Remember we talked about the 50% of the ArrayList increases its size when the list is full? The Vector increases twice, ie if you have a full list of 10 elements, this list will increase to 20, with 10 empty positions. But is this not bad? Depends on what you need if you want to increase the amount of elements very often, so it is ideal to use the Vector as it increases the size two times and you will get much more space than the ArrayList that need to be increased more frequently, hence reducing the performance of your application.
LinkedList
Before starting the explanations we will show a use of LinkedList and you will notice that is identical to an ArrayList.
Example Listing 3 : Using LinkedList
import java.util.Iterator; import java.util.LinkedList; public class Main { public static void main (String args []) { LinkedList ll = new LinkedList (); ll.add (3); ll.add (2); ll.add (1); ll.add (4); ll.add (5); ll.add (6); ll.add (6); Iter2 ll.iterator iterator = (); while (iter2.hasNext ()) { System.out.println (iter2.next ()); } } }
If you run the above code the output would be:
3 2 1 4 5 6 6
This type of list implements a double linked list, or a list doubly “linked”. The main difference between LinkedList and ArrayList is in performance between the methods add, remove, get and set.
This kind of list has better performance in the add and remove methods, than the methods that add and remove from the ArrayList, instead its get and set methods have a worse performance than the ArrayList. Let us see a comparison between each of the methods present in ArrayList and LinkedList:
- get(int index): method in LinkedList has O (n) and the method in ArrayList has O (1)
- add (E element): method in LinkedList has O (1) and method in ArrayList has O (n) in the worst case, since the array will be resized and copied to a new array.
- add (int index, E element): method in LinkedList has O (n) and method in ArrayList has O (n) in the worst case.
- remove (int index): method in LinkedList has O (n) and method in ArrayList has O (n-index), remove the last element then O (1).
Notice that the main difference is in the performance, and a thorough analysis should be made in cases where performance is critical.
Reference Books:
We have talked so much about performance difference between LinkedList and ArrayList, let’s see this in practice, which is faster at what times, so check the listing below.
Example Listing 4 : Performance comparison between LinkedList and ArrayList
package net.javabeat; import java.util.ArrayList; import java.util.LinkedList; public class Main { public static void main(String args[]) { ArrayList<Integer> arrayList = new ArrayList<Integer>(); LinkedList<Integer> linkedList = new LinkedList<Integer>(); // Add ArrayList long startTime = System.nanoTime(); for (int i = 0; i < 100000; i++) { arrayList.add(i); } long endTime = System.nanoTime(); long duration = endTime - startTime; System.out.println("ArrayList add:" + duration); // Add LinkedList startTime = System.nanoTime(); for (int i = 0; i < 100000; i++) { linkedList.add(i); } endTime = System.nanoTime(); duration = endTime - startTime; System.out.println("LinkedList add:" + duration); // Get ArrayList startTime = System.nanoTime(); for (int i = 0; i < 10000; i++) { arrayList.get(i); } endTime = System.nanoTime(); duration = endTime - startTime; System.out.println("ArrayList get:" + duration); // Get LinkedList startTime = System.nanoTime(); for (int i = 0; i < 10000; i++) { linkedList.get(i); } endTime = System.nanoTime(); duration = endTime - startTime; System.out.println("LinkedList get:" + duration); // ArrayList removes startTime = System.nanoTime(); for (int i = 9999; i >= 0; i--) { arrayList.remove(i); } endTime = System.nanoTime(); duration = endTime - startTime; System.out.println("ArrayList remove:" + duration); // Remove LinkedList startTime = System.nanoTime(); for (int i = 9999; i >= 0; i--) { linkedList.remove(i); } endTime = System.nanoTime(); duration = endTime - startTime; System.out.println("LinkedList remove:" + duration); } }
OUTPUT
ArrayList add:18482391 LinkedList add:15140237 ArrayList get:2558084 LinkedList get:87518301 ArrayList remove:229680490 LinkedList remove:83977290
Summary
- Difference between JVM, JRE and JDK
- Conversion between list and array types
- Annotations in Java 5.0
- G1 Garbage Collector in Java 7.0
This article highlighted about the similarities and differences between the list types: ArrayList, Vector and LinkedList. We also discussed with example the performance of the code with the use of each of these types. Hope you liked this article. | http://www.javabeat.net/difference-arraylist-vector-linkedlist-java/ | CC-MAIN-2015-27 | refinedweb | 1,315 | 65.22 |
First of all happy new year to everyone, hope you’re doing well!
I’m working on a C++ project in which I need to call a C# DLL I created following the first answer of this post. Once I have the DLL, I need to call it from Qt, so by using dumpcpp and the .tlb file generated by regasm, I managed to get the .cpp and .h files to use my classes. Just as a reference, the namespace of the classes is Wrapper, and the main class is Device with guid {DD4A4896-C105-4C60-839B-B18C99C8FE15}.
Once I have the generated files to use the DLL, if I try to create a Wrapper:: Device instance on Qt, I get the following error:
QAxBase::setControl: requested control {dd4a4896-c105-4c60-839b-b18c99c8fe15} could not be instantiated QAxBase::qt_metacall: Object is not initialized, or initialization failed
It doesn’t give any more information, so I tried to check if the guid was stored on the system registry (I used the regasm command explained on the previously quoted post, and It said that it was successful, but you never know). Opening Registry editor and searching for the Guid revealed that it’s present at:
ComputerHKEY_CLASSES_ROOTWOW6432NodeCLSID{DD4A4896-C105-4C60-839B-B18C99C8FE15}, which, as far as I know, is the right route for these guids, and it points to the right DLL.
I though It may be due to some kind ActiveQt problem, and as the previously quoted post explained how to use that DLL from VS C++, I decided to give it a try, using this as an another reference. I’ve finished with this code, which is supposed to create an instance of my Device object
#include <iostream> #include <atlstr.h> #import "C:UsersjavieDocumentsWrapperWrapperbinx86Releasenetstandard2.0Wrapper.tlb" named_guids raw_interfaces_only inline void TESTHR(HRESULT x) { if FAILED(x) _com_issue_error(x); }; int main() { try { TESTHR(CoInitialize(0)); Wrapper::IDevicePtr devPtr = nullptr; TESTHR(devPtr.CreateInstance("{DD4A4896-C105-4c60-839B-B18C99C8FE15}")); } catch (const _com_error& e) { CStringW out; out.Format(L"Exception occurred. HR = %lx, error = %s", e.Error(), e.ErrorMessage()); MessageBoxW(NULL, out, L"Error", MB_OK); } CoUninitialize();// Uninitialize COM std::cout << "Hello World!n"; }
However, this doesn’t work either, the createInstance method throws an exception of Class not registered and HR=80040154. Again, according to Registry editor, the class is registered, so I don’t understand the error. I’ve also tried with
devPtr.CreateInstance("Wrapper.Device"),
devPtr.CreateInstance("Wrapper::Device") or `devPtr.CreateInstance("Wrapper::CLSID_Device") as the links I posted suggest, but in those cases I get another exception with HR=800401f3 and message Invalid class string.
It doesn’t matter whether VS or Qt Creator are opened as administrator or not, I get the exact same error.
I have run out of ideas, and I really need to be able to use that DLL from Qt using the files generated by dumpcpp.
Does any one know what could be happening? It feels quite strange to me.
Source: Windows Questions C++ | https://windowsquestions.com/2020/12/31/cant-find-com-object-from-c-although-guid-its-registered/ | CC-MAIN-2021-10 | refinedweb | 497 | 62.27 |
Lab Assignment: ADC + PWM
Objective
Implement an ADC driver, implement a PWM driver, and design and implement an embedded application, which uses both drivers.
This lab will utilize:
- ADC Driver
- PWM Driver
- FreeRTOS Tasks
- A potentiometer
- An RGB LED
Assignment
Part 0: Implement basic ADC Driver and read Light Sensor Values
- Channel 2 (Pin P0.25) already has Light Sensor connected to it.
- Create just 1 task which reads the Light sensor value and prints it periodically.
- While the task is running cover the light sensor and your task should print values <50.
- Use the flash light on your phone on the light sensor and your task should print values >3500.
void light_sensor_print_task(void *p) { /* * 1) Initial ADC setup (Power, clkselect, pinselect, clkdivider) * 2) Select ADC channel 2 * 3) Enable burst mode */ while(1) { uint16_t ls_val = adc_read_channel(2); printf("Light Sensor value is %d\n", ls_val); delay_ms(100); } }
Part 1: Implement an ADC Driver
Using the following header file,
- Implement adcDriver.cpp such that it implements all the methods in adcDriver.h below.
- Every method must accomplish its task as indicated in the comments.
- You may add any other methods to enhance the functionality of this driver.
- It is recommended that you test your ADC driver with ADC_PIN_0_25 because it is connected to the analog light sensor and this is probably the easiest way to test your driver.
For proper operation of the SJOne board, do NOT configure any pins as ADC except for 0.26, 1.30, 1.31
While in burst mode, do not wait for the "DONE" bit to get set.
#include <stdio.h> #include "io.hpp" class LabAdc { public: enum Pin { k0_25, // AD0.2 <-- Light Sensor --> k0_26, // AD0.3 k1_30, // AD0.4 k1_31, // AD0.5 /* These ADC channels are compromised on the SJ-One, * hence you do not need to support them */ // k0_23 = 0, // AD0.0 // k0_24, // AD0.1 // k0_3, // AD0.6 // k0_2 // AD0.7 }; // Nothing needs to be done within the default constructor LabAdc(); /** * 1) Powers up ADC peripheral * 2) Set peripheral clock * 2) Enable ADC * 3) Select ADC channels * 4) Enable burst mode */ void AdcInitBurstMode(); /** * 1) Selects ADC functionality of any of the ADC pins that are ADC capable * * @param pin is the LabAdc::Pin enumeration of the desired pin. * * WARNING: For proper operation of the SJOne board, do NOT configure any pins * as ADC except for 0.26, 1.31, 1.30 */ void AdcSelectPin(Pin pin); /** * 1) Returns the voltage reading of the 12bit register of a given ADC channel * You have to convert the ADC raw value to the voltage value * @param channel is the number (0 through 7) of the desired ADC channel. */ float ReadAdcVoltageByChannel(uint8_t channel); };
Part 2: Implement a PWM Driver
Using the following header file,
- Implement pwmDriver.cpp such that it implements all the methods in pwmDriver.h below.
- Every method must accomplish its task as indicated in the comments.
- You may add any other methods to enhance the functionality of this driver.
- It may be best to test the PWM driver by using a logic analyzer
#include <stdint.h> class LabPwm { public: enum Pin { k2_0, // PWM1.1 k2_1, // PWM1.2 k2_2, // PWM1.3 k2_3, // PWM1.4 k2_4, // PWM1.5 k2_5, // PWM1.6 }; /// Nothing needs to be done within the default constructor LabPwm() {} /** * 1) Select PWM functionality on all PWM-able pins. */ void PwmSelectAllPins(); /** * 1) Select PWM functionality of pwm_pin_arg * * @param pwm_pin_arg is the PWM_PIN enumeration of the desired pin. */ void PwmSelectPin(PWM_PIN pwm_pin_arg); /** * Initialize your PWM peripherals. See the notes here: * * * In general, you init the PWM peripheral, its frequency, and initialize your PWM channels and set them to 0% duty cycle * * @param frequency_Hz is the initial frequency in Hz. */ void PwmInitSingleEdgeMode(uint32_t frequency_Hz); /** * 1) Convert duty_cycle_percentage to the appropriate match register value (depends on current frequency) * 2) Assign the above value to the appropriate MRn register (depends on pwm_pin_arg) * * @param pwm_pin_arg is the PWM_PIN enumeration of the desired pin. * @param duty_cycle_percentage is the desired duty cycle percentage. */ void SetDutyCycle(PWM_PIN pwm_pin_arg, float duty_cycle_percentage); /** * Optional: * 1) Convert frequency_Hz to the appropriate match register value * 2) Assign the above value to MR0 * * @param frequency_hz is the desired frequency of all pwm pins */ void SetFrequency(uint32_t frequency_Hz); };
Part 3: Application
In order to demonstrate that both drivers function, you are required to interface a potentiometer and an RGB LED to the SJOne board. The potentiometer ADC input shall control the duty cycle of the RGB LED pwm outputs. Note that an RGB LED has three input pins that you will connect to three different PWM output pins. You must use your own ADC and PWM drivers, as well as your own FreeRTOS task.
Extra credit can be earned with an interesting/cool/creative RGB output.
Requirements
- Using your own ADC Driver, read input voltage from a potentiometer
- Print the voltage reading every 1s.
- Using your own PWM Driver, drive an RGB LED.
- Print the duty cycle of all three RGB pins every 1s.
- The PWM output to the RGB LED must be dependent on the ADC input from the potentiometer.
- By varying the potentiometer, you should be able to see changes in the color of the RGB Led.
You don't need a periodic task for the PWM to work. Initialize the driver, set period and duty cycle. PWM will start generating pulses immediately. You can vary the duty cycle of PWM inside the ADC task. | http://books.socialledge.com/books/embedded-drivers-real-time-operating-systems-%28dr-liu%29/page/lab-assignment-adc-pwm | CC-MAIN-2020-40 | refinedweb | 897 | 56.05 |
Blocks in ruby are a very powerful aspect of the language. They are often less used in when compared to other features of the language and it is something not everyone is comfortable using them, and much less writing code that consumes blocks. So let’s start with a small introduction:
What are blocks?
Essentially, a method with no name. It is a piece of code, delimited either by
{ } or by
do... end that we are able to call in a certain context. For example:
# The following syntaxes are equivalent. They will both execute the block twice. 2.times do puts "foo" end 2.times { puts "foo" }
A more complicated example: Let’s assume that for a user masquerading gem, you want a method in which you can pass in a user, grant them all permissions, run some code with those permissions in place, and then reset the old permissions once the code has finished running. This is where blocks shine. A very simple test suite for this would be something like:
RSpec.describe UserImpersonator do it "grants permissions while inside the block" do UserImpersonator.grant_all_permissions valid_user do expect(valid_user.roles[:admin]).to be_true end expect(valid_user.roles[:admin]).to be_false end end
We first need a
UserImpersonator class and a
grant_all_permissions method that accepts the user, and the block of code you want to run while you have all permissions in place.
class UserImpersonator def self.grant_all_permissions(user, &block) new(user).run(&block) end def initialize(user) @user = user end def run(&block) begin cache_old_permissions assign_all_permissions block.call ensure reset_permissions end end end
Let’s break up the code above:
The
&block on the
self.grant_all_permissions method definition lets the method know that it can expect to be passed in a block. This will instantiate a new
UserImpersonator object and call the run method on it, forwarding the
&block argument.
The
run method’s first two lines after the
begin keyword are the setup, we want to cache the old permissions so that we can reset them later and assign the new ones. After this, we want to actually run the block that was passed in, and after that, we want to reset the permissions back, regardless of whether or not an error was raised somewhere in between, hence the
ensure keyword.
There you have it, we’ve just created a method that accepts a block and runs it wherever we want, this is certainly useful, nevertheless, we can go deeper
Let’s try now passing a specific context into our block, or block variables, if you have used rails, even for a bit, then you have probably seen this when you iterate through your database records, for example:
User.all.each do |user| user.update admin: false end
The block variable is what is between the pipes,
user in this case.
New example: Let’s imagine we are running a restaurant, and we would like to calculate the cost of an order. Since an order can be made out of different items, we want to be able to add as many or as little items within the context of that order. Again, starting with the tests, we could have an expectation like:
RSpec.describe Order do it "allows to make operations" do result = Order.cost do |c| c.add_taco c.add_taco c.add_guacamole c.add_beer end expect(result).to eq 44 end end
To make the test above pass, one possible implementation that would make the test above to pass would be:
class Order def self.cost yield Actions.new end class Actions def initialize @cost = 0 end def add_taco @cost += 10 end def add_guacamole @cost += 6 end def add_beer @cost += 18 end end end
Things to note:
1) The block variable, or
c in the case of the test is an instance of the
Actions class, which means it has access to the
add_taco,
add_guacamole and
add_beer methods
2) You do not need to specify
&block as a parameter in the
cost method since you are using
yield inside of the method
3)
yield makes a block an OPTIONAL parameter, which means that you COULD call the
cost method and pass no block at all. However, this will fail and complain with a
LocalJumpError (no block given (yield))
To solve this, a handy little method called
block_given? allows you to verify if a block was passed in. So you could refactor the
cost method to handle that:
def self.cost yield Actions.new if block_given? end
I really hope that this deep dive into the more fun and out-there features of ruby was useful. Personally, I find enlightening when I find out how the inner workings of the tools I use very often within RSpec or factory_bot fit together step by step | http://blog.magmalabs.io/2019/07/08/how-to-take-advantage-of-ruby-blocks.html | CC-MAIN-2021-04 | refinedweb | 796 | 70.63 |
I am trying to make the following class immutable.
I know the theory of how to do this but I think my implementation is wrong. Can you help?
Thanks
Mutable class:
class BankAccount {
private var balance = 0
def deposit(amount: Int) {
if (amount > 0)
balance += amount
}
def withdraw(amount: Int): Int =
if (0 < amount && amount <= balance) {
balance -= amount
balance
} else {
error("insufficient funds")
}
Immutable Class
case class BankAccount(b:Int) {
private def deposit(amount: Int):BankAccount {
if (amount > 0)
{
return BankAccount(amount)
}
}
private def withdraw(amount: Int): BankAccount ={
if (0 < amount && amount <= balance) {
return BankAccount(b-amount)
} else {
error("insufficient funds")
}
}
}
In functional programming you do not change the state in place, instead you create new state and return it.
Here is how your use case can be solved using functional programming.
case class BankAccount(val money: Int)
The above case class represents BankAccount
Instead of mutating the state, create new state with computed value and return it to the user.
def deposit(bankAccount: BankAccount, money: Int): BankAccount = {
BankAccount(money + backAccount.money)
}
In the same way, check for funds and create new state and return it to the user.
def withDraw(bankAccount: BankAccount, money: Int): BankAccount = {
if (money >= 0 && bankAccount.money >= money) {
BankAccount(bankAccount.money - money)
} else error("in sufficient funds")
}
In functional programming it is very common to create new state instead of trying to mutate the old state.
Create new state and return, thats it !!!
First, the good news: your objects are almost immutable. Now, the bad news: they don't work.
The are only "almost" immutable because your class isn't final: I can extend it and override the methods to mutate some state.
final
Now, why doesn't it work? The most obvious bug is that in your deposit method, you return a new BankAccount that has its balance set to the amount that was deposited. So, you lose all the money that was in there before the deposit! You need to add the deposit to the balance, not replace the balance with the deposit.
deposit
BankAccount
There are also other problems: your deposit method has a return type of BankAccount, but it doesn't always return a BankAccount: if the amount is less than or equal to zero, it returns Unit. The most specific common supertype of BankAccount and Unit is Any, so your method actually returns Any. There are multiple ways to fix this, e.g. returning an Option[BankAccount], a Try[BankAccount], or an Either[SomeErrorType, BankAccount], or just throwing an exception. For my example, I'm simply going to ignore the validation altogether. (A similar problem exists in withdraw.)
amount
Unit
Any
Option[BankAccount]
Try[BankAccount]
Either[SomeErrorType, BankAccount]
withdraw
Something like this:
final case class BankAccount(balance: Int) {
private def deposit(amount: Int) = copy(balance = balance + amount)
private def withdraw(amount: Int) = copy(balance = balance - amount)
}
Note I am using the compiler-generated copy method for case classes that allows you to create a copy of an instance with only one field changed. In your particular case, you have only one field, but it's a good practice to get into.
copy
So, that works. Or … does it? Well, no, actually, it doesn't! The problem is that we are creating new bank accounts … with money in them … we are creating new money out of thin air! If I have 100 dollars in my account, I can withdraw 90 of them, and I get returned a new bank account object with 10 dollars in it. But I still have access to the old bank account object with 100 dollars in it! So, I have two bank accounts with a total of 110 dollars plus the 90 I withdrew; I now have 200 dollars!
Solving this is non-trivial, and I will leave it for now.
In closing, I wanted to show you something that is a little bit close to how real-world banking systems actually work, by which I both mean "banking systems in the real-world, as in, before the invention of electronic banking", as well as "electronic banking systems as they are actually used", because surprisingly (or not), they actually work the same.
In your system, the balance is data and depositing and withdrawing are operations. But in the real world, it's exactly the dual: deposits and withdrawals are data, and computing the balance is an operation. Before we hat computers, bank tellers would write transaction slips for every transaction, then those transaction slips would be collected at the end of the day, and all the money movements added up. And electronic banking systems work the same, roughly like this:
final case class TransactionSlip(source: BankAccount, destination: BankAccount, amount: BigDecimal)
final case class BankAccount {
def balance =
TransactionLog.filter(slip.destination == this).map(_.amount).reduce(_ + _) -
TransactionLog.filter(slip.source == this).map(_.amount).reduce(_ + _)
}
So, the individual transactions are recorded in a log, and the balance is computed by adding up the amount of all transactions that have the account as a destination and subtracting from that the sum of the amount of all transactions that have the account as a source. There are obviously a lot of implementation details I haven't shown you, e.g. how the transaction log works, and there should probably be some caching of the balance so that you don't need to compute it over and over again. Also, I ignored validation (which also requires computing the balance).
I added this example to show you that the same problem can be solved by very different designs, and that some designs lend themselves more naturally to a functional approach. Note that this second system is the way banking was done for decades, long before computers even existed, and it lends itself very naturally towards functional programming. | http://jakzaprogramowac.pl/pytanie/59287,how-to-make-a-class-fully-immutable-in-scala | CC-MAIN-2017-26 | refinedweb | 969 | 60.45 |
Tell us what you think of the site.
Hi,
I’m currently working on a Python based simulation project for my university degree. Part of this project involves saving a proprietary file in the user’s current project directory. Now I’ve searched high and low to find the answer, but as of yet cannot: is there any way to query the Maya API and return the currently selected project directory?
os.getcwd() only ever returns the Maya binary folder (since that’s where the OS is running Maya from) and whilst a setProject command exists, there’s no getProject alternative.
It seems a bit odd that there’s no exisiting commands to bring up the project directory, as I’d imagine thats a fairly obvious step for a lot of custom tools…
Any ideas?
Thanks!
workspace
Genius, thanks.
I can’t believe it was as simple as that, I feel a little bit dumber for not finding that now!
Actually, having tested out this command, cmds.workspace( q=True, dir=True ) only returns the directory at which Maya expects to find projects (i.e /username/documents/maya/projects) rather than the actual set project directory (for example in this case /username/my projects/project_name).
Any other suggestions?
import maya.cmds as cmds
import os.path
projectDirectory = cmds.workspace(q=True, rd=True)currentScene = os.path.abspath(cmds.file(q=True, sn=True))
if 'scene' in cmds.workspace(q=True, frl=True):
sceneDirectory = os.path.join(projectDirectory, cmds.workspace(fre='scene'))
Is there any other way? cause the workspace command, when it’s called in python code without Maya GUI, doesn’t give back any other workspace beside the default one?
when you say without maya GUI is you’re doing stuff with maya batch? you could always setup an optionvar with your working directory and synchronize it whenever the workspace changes | http://area.autodesk.com/forum/autodesk-maya/python/project-directory-with-python/ | crawl-003 | refinedweb | 312 | 67.04 |
Current Version:
Linux Kernel - 3.80
Synopsis
#include <linux/unistd.h> #include <asm/ldt.h> int get_thread_area(struct user_desc *u_info); int set_thread_area(struct user_desc *u_info);
Note: There are no glibc wrappers for these system calls; see NOTES.
Description; };
Errors
- EFAULT
- u_info is an invalid pointer.
- EINVAL
- u_info->entry_number is out of bounds.
- ENOSYS
- get_thread_area(2) or set_thread_area(2) was invoked as a 64-bit system call.
- ESRCH
- (set_thread_area()) A free TLS entry could not be located.
Versions
Conforming To
Notes
arch_prctl(2) can interfere with set_thread_area(2). See arch_prctl(2) for more details. This is not normally a problem, as arch_prctl(2) is normally used only by 64-bit programs.
Bugs
Prior to Linux 3.19, the DS and ES segment registers must not reference TLS entries.
See Also
Colophon
License & Copyright
Copyright (C) 2003 Free Software Foundation, Inc. Copyright (C) 2015 Andrew Lutomirski Author: Kent Yoder %%%LICENSE_START(GPL_NOVERSION_ONELINE) This file is distributed according to the GNU General Public License. %%%LICENSE_END | https://community.spiceworks.com/linux/man/2/set_thread_area | CC-MAIN-2019-09 | refinedweb | 161 | 53.37 |
NAMEstrerror, strerror_r - return string describing error code
SYNOPSIS
#include <string.h> char *strerror(int errnum);
int strerror_r(int errnum, char *buf, size_t n);
DESCRIPTIONThe strerror() function returns a string describing the error code passed in the argument errnum, possibly using the LC_MESSAGES part of the current locale to select the appropriate language. This string must not be modified by the application, but may be modified by a subsequent call to perror() or strerror(). No library function will modify this string.
The strerror_r() function is similar to strerror(), but is thread safe. It returns the string in the user-supplied buffer buf of length n.
RETURN VALUEThe strerror() function returns the appropriate error description string, or an unknown error message if the error code is unknown. The value of errno is not changed for a successful call, and is set to a nonzero value upon error. The strerror_r() function returns 0 on success and -1 on failure, setting errno.
ERRORS
- EINVAL
- The value of errnum is not a valid error number.
- ERANGE
- Insufficient storage was supplied to contain the error description string.
CONFORMING TOSVID 3, POSIX, BSD 4.3, ISO/IEC 9899:1990 (C89).
strerror_r() with prototype as given above is specified by SUSv3, and was in use under Digital Unix and HP Unix. An incompatible function, with prototype.
SEE ALSOerrno(3), perror(3), strsignal(3)
Important: Use the man command (% man) to see how a command is used on your particular computer.
>> Linux/Unix Command Library | http://linux.about.com/library/cmd/blcmdl3_strerror.htm | crawl-002 | refinedweb | 246 | 57.47 |
For my Preparations for Industrial Careers Math class, my team are working on a project.
We want the code to take lines in excel data and dynamic time warp them so the peaks will match as they can be averaged.
Can anyone help?
Here is the code so far:
import numpy as npimport xlrdimport matplotlib.pyplot as plt
**def euclidian_distance(x, y): **** d = x-y **** return abs(x-y)**
**def fill_dist_array(x, y, d_f): **** tmp_d_a = np.ndarray((x.shape[0], y.shape[0])) **** for i in range(x.shape[0]): **** for j in range(y.shape[0]): **** tmp_d_a[i,j] = d_f(i,j) **** return tmp_d_a[i,j] **
**def DTW(x, y, dist_function=None, dist_array=None): **** d_f = None **** d_a = dist_array **** if dist_function == None and dist_array == None: **** d_f = lambda i,j: euclidian_distance(x[i], y[j]) **** if dist_function != None: **** d_f = lambda i,j: dist_function(x[i], y[j]) **** else:**** if fill_dist_array != None: **** d_f = lambda i,j:dist_array[i][j] **** cost = np.empty((x.shape[0], y.shape[0]), dtype=np.float)**** cost[0,0]=d_f(0,0)**** N = x.shape[0] **** for i in range(1, N): **** cost[i,0] = d_f(i,0) **** M = y.shape[0] **** for j in range(1, M): **** cost[0,j] = d_f(0,j) **** for i in range(1, N): **** for j in range(1, M): **** cost[i,j] = d_f(i,j) + np.min((cost[i-1,j],cost[i-1,j-1],cost[i,j-1])) **** print("This is the cost[N-1,M-1]: ",cost[N-1,M-1])**** print()**** print("This is the cost:",cost)**** print()**** print("This is the path x and y:",[i,j])**
**def test(): **** a = str(input("Please enter the workbook name you would like as the reference sample followed by .xls: "))**** c = str(input("Please enter the sheet name you would like to use: "))**** v= int(input("Please enter the column you would like to print(ex: A=0, B=1): "))**** wb = xlrd.open_workbook(a)**** ws = wb.sheet_by_name(c)**** num_rows = ws.nrows-1**** curr_row=0**** row_array = []**** while curr_row < num_rows:**** row = ws.row(curr_row)**** row_array+= row**** curr_row+= 1**** col=wb.sheet_by_index(0)**** aa= str(input("Please enter the workbook name followed by .xls: "))**** cc = str(input("Please enter the sheet name: "))**** vv = int(input("Please enter the column you would like to print(ex: A=0, B=1): "))**** wbb = xlrd.open_workbook(aa)**** wss = wbb.sheet_by_name(cc)**** num_rowss = wss.nrows-1**** curr_rows = 0**** row_arrays = []**** while curr_rows < num_rowss:**** rows = wss.row(curr_rows)**** row_arrays+= rows**** curr_rows+= 1**** cols=wbb.sheet_by_index(0)**** x=np.array(col.col_values(v))**** y=np.array(cols.col_values(vv))**** DTW(x,y,euclidian_distance)**** plt.plot(x)**** plt.plot(y)**** plt.show()**
if name == 'main':** test()**
@datasurfer20850,At least re-edit your post so the used indentation's can be seen....
Please re-edit your Post
Your code will then be in a pre-code stateand you will be able to make/present the proper indentations.
or even better use=[extra's]
the way you posted that is very cofusing and hard to read. please format it. | https://discuss.codecademy.com/t/dynamic-time-warping-from-excel-workbooks/38256 | CC-MAIN-2017-26 | refinedweb | 507 | 68.47 |
Fred Yankowski writes: >So, where is the SIG falling short? Is there specific XML processing >functionality that is missing? If so, what is the most important? Is >the Python XML software too hard to install? Is the main problem a >lack of documentation and examples? What are the most important >problems that the XML-SIG could/should address now? We just need to *finish*, so we can put a 1.0 on the thing. Finish some form of namespace support, find a solution for UTF-8 and Unicode, finish optimizing the DOM implementation, finish the demos and documentation. The UTF-8 problem is the most messy one, and depends on things external to the XML-SIG, namely Python's Unicode support. Expat outputs UTF-8 by default, but sgmlop doesn't (can't remember what xmlproc does) and you need a way to convert from UTF-8 to Latin1, Unicode, or whatever. What's the plan, here? Fredrik's Unicode type, MvL's wstring, or something else? (String-SIG topic.) -- A.M. Kuchling The Answers to these Frequently Asked Questions have been verified by Encyclopedia Britannica. They have not, however, been verified to be *correct*. -- The alt.religion.kibology FAQ | https://mail.python.org/pipermail/xml-sig/1999-October/001519.html | CC-MAIN-2017-30 | refinedweb | 200 | 59.19 |
-- Copyright (C) 2009-2011 Petr Rockai -- -- BSD3 {-# LANGUAGE ScopedTypeVariables, Darcs.Util.Tree.Monad ( virtualTreeIO, virtualTreeMonad , readFile, writeFile, createDirectory, rename, Darcs.Util.Path import Darcs.Util.Tree import Control.Applicative( (<$>) ) import Control.Exception ( throw ) import Data.List( sortBy ) import Data.Int( Int64 ) import Data.Maybe( isNothing, isJust ) import qualified Data.ByteString.Lazy as BL import Control.Monad.RWS.Strict :: (Monad m) => TreeMonad m () flush = do changed' <- map fst . M.toList <$> gets changed dirs' <- gets tree >>= \t -> return [ path | (path, SubTree _) <- list t ] modify $ \st -> st { changed = M.empty, changesize = 0 } forM_ (changed' ++ dirs' ++ [AnchoredPath []]) flushItem runTreeMonad' :: (Monad m) => TreeMonad m a -> TreeState m -> m (a, Tree m) runTreeMonad' action initial = do (out, final, _) <- runRWST action (AnchoredPath []) initial return (out, tree final) runTreeMonad :: (Monad m) => TreeMonad m a -> TreeState m -> m (a, Tree m) runTreeMonad action initial = do let action' = do x <- action flush return x runTreeMonad' action' initial -- | NoHash) (\_ x -> return x)ad :: (Monad m) => AnchoredPath -> AnchoredPath -> TreeMonad m () -- |ad m) => AnchoredPath -> Maybe (TreeItem m) -> TreeMonad m () replaceItem path item = do path' <- (`catPaths` path) `fmap` currentDirectory modify $ \st -> st { tree = modifyTree (tree st) path' item } flushItem :: forall m. (Monad) => TreeMonad m () flushSome = do x <- gets changesize when (x > megs 100) $ do remaining <- go =<< sortBy age . M.toList <$> gets changed modify $ \s -> s { changed = M.fromList remaining } where go [] = return [] go ((path, (size, _)):chs) = do x <- (\s -> s - size) <$> gets changesize flushItem path modify $ \s -> s { changesize = x } if x > megs 50 then go chs else return chs megs = (* (1024 * 1024)) age (_, (_, a)) (_, (_, b)) = compare a b instance (Monad m) => TreeRO (TreeMonad m) where expandTo p = do t <- gets tree p' <- (`catPaths` p) `fmap` ask t' <- lift $ expandPath t p' modify $ \st -> st { tree = t' } return p' fileExists p = do p' <- expandTo p (isJust . (`findFile` p')) `fmap` gets tree directoryExists p = do p' <- expandTo p (isJust . (`findTree` p')) `fmap` gets tree exists p = do p' <- expandTo p (isJust . (`find` p')) `fmap` gets tree readFile p = do p' <- expandTo p t <- gets tree let f = findFile t p' case f of Nothing -> throw $ userError $ "No such file " ++ show p' Just x -> lift (readBlob x) currentDirectory = ask withDirectory dir act = do dir' <- expandTo dir local (const dir') act instance (Monad m) => TreeRW (TreeMonad m) where writeFile p con = do _ <- expandTo p modifyItem p (Just blob) flushSome) $ throw $ userError $ "Error renaming: destination " ++ show to ++ " exists." unless (isNothing item) $ do modifyItem from Nothing modifyItem to item renameChanged from to copy from to = do from' <- expandTo from _ <- expandTo to tr <- gets tree let item = find tr from' unless (isNothing item) $ modifyItem to item findM' :: forall m a . (Monad m) => (Tree m -> AnchoredPath -> a) -> Tree m -> AnchoredPath -> m a findM' what t path = fst <$> virtualTreeMonad (look path) t where look :: AnchoredPath -> TreeMonad m a look = expandTo >=> \p' -> flip what p' <$> gets tree findM :: (Monad m) => Tree m -> AnchoredPath -> m (Maybe (TreeItem m)) findM = findM' find findTreeM :: (Monad m) => Tree m -> AnchoredPath -> m (Maybe (Tree m)) findTreeM = findM' findTree findFileM :: (Monad m) => Tree m -> AnchoredPath -> m (Maybe (Blob m)) findFileM = findM' findFile | http://hackage.haskell.org/package/darcs-2.14.4/docs/src/Darcs.Util.Tree.Monad.html | CC-MAIN-2021-49 | refinedweb | 513 | 56.29 |
Solution for
Programming Exercise 2.4
THIS PAGE DISCUSSES ONE POSSIBLE SOLUTION to the following exercise from this on-line Java textbook.
Exercise 2.4: Write a program that helps the user count his change. The program should ask how many quarters the user has, then how many dimes, then how many nickles, then how many pennies. Then the program should tell the user how much money he has, expressed in dollars.
Discussion
The program will need variables to represent the number of each type of coin. Since the number of coins has to be an integer, these variables are of type int. I'll call the variables quarters, dimes, nickles, and pennies.
The total value of the coins, when expressed in dollars, can be a non-integer number such as 1.57 or 3.02. Since the total value in dollars is a real number, I will use a variable of type double to represent it. The variable is named dollars
The outline of the program is clear enough:Declare the variables. Ask the user for the number of each type of coin, and read the responses. Compute the total value of the coins, in dollars. Display the result to the user.
The function TextIO.getlnInt() can be used to read each of the user's responses. The alternative function TextIO.getInt() could also be used, but it is less safe. Suppose, for example, that the user responds to the request to type in the number of quarters by entering "7 quarters". After TextIO.getlnInt() reads the number 7, it will discard the extra input "quarters". TextIO.getInt() will read the 7 correctly, but the extra input is not discarded. Later, when the program tries to read the number of dimes, it sees the left-over input and tries to read that, without giving the user a chance to type in another response. You might want to experiment and see what happens if you change getlnInt() to getInt(). (Of course, if the user's response is "I have 7 quarters" or "seven", then you are out of luck in any case.)
Since one quarter is worth 0.25 dollars, the number of dollars in N quarters is 0.25*N. Similarly, a dime is worth 0.10 dollars, a nickle is 0.05 dollars, and a penny is 0.01 dollars. So, to get the total value of all the user's coins, I just have to add up (0.25*quarters) + (0.10*dimes) + (0.05*nickles) + (0.01*pennies). This value is assigned to the variable, dollars, and that is the result that is displayed to the user.
Alternatively, I could first have computed the total number of cents in all the coins, and then divided by 100 to convert the amount into dollars:int totalCents; // Total number of cents in the coins. totalCents = 25*quarters + 10*dimes + 5*nickles + pennies; dollars = totalCents/100.0;
Since totalCents is if type int, it is essential here that I compute dollars as totalCents/100.0 and not as totalCents/100. The value computed by totalCents/100 is an integer. For example, if totalCents is 397, then totalCents/100 is 3. Using totalCents/100.0 forces the computer to compute the answer as a real number, giving 3.97.
The Solution
public class CountChange { /* This program will add up the value of a number of quarters, dimes, nickles, and pennies. The number of each type of coin is input by the user. The total value is reported in dollars. This program depends on the non-standard class, TextIO. */ public static void main(String[] args) { int quarters; // Number of quarters, to be input by the user. int dimes; // Number of dimes, to be input by the user. int nickles; // Number of nickles, to be input by the user. int pennies; // Number of pennies, to be input by the user. double dollars; // Total value of all the coins, in dollars. /* Ask the user for the number of each type of coin. */ TextIO.put("Enter the number of quarters: "); quarters = TextIO.getlnInt(); TextIO.put("Enter the number of dimes: "); dimes = TextIO.getlnInt(); TextIO.put("Enter the number of nickles: "); nickles = TextIO.getlnInt(); TextIO.put("Enter the number of pennies: "); pennies = TextIO.getlnInt(); /* Add up the values of the coins, in dollars. */ dollars = (0.25 * quarters) + (0.10 * dimes) + (0.05 * nickles) + (0.01 * pennies); /* Report the result back to the user. */ TextIO.putln(); TextIO.putln("The total in dollars is $" + dollars); } // end main() } // end class
[ Exercises | Chapter Index | Main Index ] | http://math.hws.edu/eck/cs124/javanotes3/c2/ex-2-4-answer.html | CC-MAIN-2018-47 | refinedweb | 757 | 68.06 |
Like we have array of integers, array of pointers etc, we can also have array of structure variables. And to use the array of structure variables efficiently, we use pointers of structure type. We can also have pointer to a single structure variable, but it is mostly used when we are dealing with array of structure variables.
#include <stdio.h> struct Book { char name[10]; int price; } int main() { struct Book a; //Single structure variable struct Book* ptr; //Pointer of Structure type ptr = &a; struct Book b[10]; //Array of structure variables struct Book* p; //Pointer of Structure type p = &b; return 0; }
To access members of structure using the structure variable, we used the dot
. operator. But when we have a pointer of structure type, we use arrow
-> to access structure members.
#include <stdio.h> struct my_structure { char name[20]; int number; int rank; }; int main() { struct my_structure variable = {"StudyTonight", 35, 1}; struct my_structure *ptr; ptr = &variable; printf("NAME: %s\n", ptr->name); printf("NUMBER: %d\n", ptr->number); printf("RANK: %d", ptr->rank); return 0; }
NAME: StudyTonight NUMBER: 35 RANK: 1 | https://www.studytonight.com/c/pointers-to-structure-in-c.php | CC-MAIN-2020-05 | refinedweb | 183 | 59.67 |
#include <sys/sunddi.h> int pciv_send(dev_info_t *dip, pciv_pvp_req_t *req);
Pointer to the dev_info structure.
Pointer to pciv_pvp_req structure.
VF index ranges from 1 to num_vf if called by PF driver. PCIV_PF if the caller is a VF driver.
Buffer address of caller's buffer to be sent.
Number of bytes to be transmitted, which must be less than 8k.
Call back function pointer if the pvp_flag is set as PCIV_NOWAIT.
Call back input argument for pvp_cb if the pvp_flag is set as PCIV_NOWAIT.
Must be one of the following:
Do not wait for receiver's acknowledgment response.
Wait until receiver acknowledges the transmission (default).
The pciv_send() function is used by SR-IOV (Single-Root IO Virtualization)-capable PF (Physical Function) and VF (Virtual Function) drivers to communicate with each other. A PF driver can communicate with any of its VF drivers while a VF driver can only communicate with its PF driver. If pvp_flag is set to PCIV_NOWAIT, the call returns immediately and the callback routine in pvp_cb is called when data in pvp_buf has been transmitted to the destination. The caller is then allowed to free the buffer in its callback routine.
typedef void (*buf_cb_t)(int rc, caddr_t buf, size_t size, caddr_t cb_arg);
DDI return code for the transmission.
Buffer address of caller's buffer to be sent.
Number of bytes to be transmitted.
Input argument the caller set when calling pciv_send().
The pciv_send() function returns:
The buffer has been sent successfully.
The device/driver does not support this operation. The caller may use other mechanisms, such as hardware mailbox.
The pvp_nbyte or pvp_dstfunc argument is invalid.
The operation failed due to lack of resources.
The remote end did not register a call back to handle incoming transmission.
The call failed due unspecified reasons.
The pciv_send() function can be called from kernel non-interrupt context.
See attributes(5) for descriptions of the following attributes:
attributes(5), ddi_cb_register(9F) | http://docs.oracle.com/cd/E36784_01/html/E36886/pciv-send-9f.html | CC-MAIN-2014-42 | refinedweb | 320 | 68.47 |
Load balancing in Exchange Server
Load balancing in Exchange 2016 and later build on the Microsoft high availability and network resiliency platform delivered in Exchange 2013. When this is combined with the availability of third-party load balancing solutions (both hardware and software), there are multiple options for implementing load balancing in your Exchange organization.
Exchange architecture changes introduced in Exchange 2013 brought about the Mailbox server and Client Access server roles. Compare this to Exchange 2010, where Client Access, Mailbox, Hub Transport, and Unified Messages ran on separate servers.
Using minimal server roles, Exchange 2016 and 2019 deliver:
Simplified deployment with the Mailbox server running Client Access services and Edge Transport server roles.
Mail flow managed in the transport pipeline, which is the collection of services, connections, queues, and components that route messages to the Transport service categoriser on the Mailbox server.
High availability by deploying load balancers to distribute client traffic.
The HTTP protocol standard introduced with Exchange 2013 means that session affinity is no longer required in Exchange 2016 and Exchange 2019. Session affinity allows a persistent connection for messaging-enabled services so that a user doesn't have to reenter their user name and password multiple times.
Previously, Exchange 2007 and Exchange 2010 supported RPC over HTTP for Outlook Anywhere. Exchange 2013 introduced MAPI over HTTP, although it wasn't enabled by default. It's now enabled by default in Exchange 2016 and Exchange 2019.
With the HTTP protocol in use, all native clients connect using HTTP and HTTPs in Exchange Server. This standard protocol removes the need for affinity, which was previously required to avoid a new prompting for user credentials whenever load balancing redirected the connection to a different server.
Server roles in Exchange Server
The reduced number of server roles for Exchange 2016 and Exchange 2019 simplifies Exchange implementation and hardware requirements. The number of server roles in Exchange 2016 and 2019 shrinks from seven to two: the Mailbox server and the Edge Transport server. The Mailbox server role includes Client Access services, while the Edge Transport server provides secure mail flow in Exchange 2016 and Exchange 2019, just as it did in earlier versions of Exchange.
In Exchange 2013, the Client Access server role made sure that when a user attempted to access their mailbox, the server proxied the request back to the Mailbox server actively serving the user's mailbox. This meant that services such as Outlook on the web (previously known as Outlook Web App) were rendered for the user on the Mailbox itself, removing any need for affinity.
The same functionality remains in Exchange 2016 and Exchange 2019. If two Mailbox servers host different mailboxes, they can proxy traffic for each other when necessary. The Mailbox server that hosts the active copy of the mailbox serves the user accessing it, even if the user connects to a different Mailbox server.
Read more about the server role changes in Exchange Server in the topic, Exchange Server architecture.
Although not required, the Edge Transport server sits in the perimeter network , just as in earlier Exchange versions, to provide secure inbound and outbound mail flow for your Exchange organization.
Read more about the transport service in the topic, Understanding the Transport service on Edge Transport servers.
Protocols in Exchange Server
Beginning with Exchange 2016, all native Exchange clients use the HTTP protocol to connect to a designated service, with HTTP cookies provided to the user at log in which are encrypted using the Client Access services SSL certificate. A logged in user can resume the session on a different Mailbox server running Client Access services without reauthenticating. Servers using the same SSL certificate can decrypt the client authentication cookie.
HTTP makes possible the use of service or application health checks in your Exchange network. Depending on your load balancer solution, you can implement health probes to check different components of your system.
The effect of HTTP-only access for clients is that load balancing is simpler, too. If you wanted, you could use DNS to load balance your Exchange traffic. You would simply provide the client with the IP address of every Mailbox server, and the HTTP client would handle the chores. If an Exchange server fails, the protocol attempts to connect to another server. However, there are drawbacks to load balancing to DNS, discussed in the following section Load balancing options in Exchange Server.
Read more about HTTP and Exchange Server in the topic MAPI over HTTP in Exchange Server.
Load balancing options in Exchange Server
In the example shown here, multiple servers configured in a database availability group (DAG) host the Mailbox servers running Client Access services. This provides high availability with a small Exchange server footprint. The client connects to the load balancer rather than directly to the Exchange servers. There is no requirement for load balancer pairs, however we recommend deploying in clusters to improve network resilience.
Be aware that DAGs use Microsoft Clustering Services. These services can't be enabled on the same server as Windows Network Load Balancing (NLB). Accordingly, Windows NLB is not an option when using DAGs. There are third-party software and virtual appliance solutions in this case.
Using DNS is the simplest option for load balancing your Exchange traffic . With DNS load balancing, you only have to provide your clients with the IP address of every Mailbox server. After that, DNS round robin distributes that traffic to your Mailbox servers. The HTTP client is smart enough to connect to another server should one Exchange server fail completely.
Simplicity comes at a price, however. In this case, DNS round robin isn't truly load-balancing the traffic, because there isn't a way programmatically to make sure that each server gets a fair share of the traffic. Also, there is no service level monitoring so that when a single service fails, clients are not automatically redirected to an available service. For example, if Outlook on the web is in failure mode, the clients see an error page.
DNS load balancing requires more external IP addresses when you publish externally. That means that each individual Exchange server in your organization would require an external IP address.
There are more elegant solutions to load balancing your traffic, such as hardware that uses Transport Layer 4 or Application Layer 7 to help distribute client traffic. Load balancers monitor each Exchange client-facing service, and in the event of service failure, load balancers can direct traffic to another server and take the problem server offline. Additionally, some level of load distribution makes sure that no single Mailbox server is proxying the majority of client access.
Load balancing services can use Layer 4 or Layer 7, or a combination, to manage traffic. There are benefits and drawbacks to each solution.
Layer 4 load balancers work at the Transport layer to direct traffic without examining the contents.
Because they don't examine the traffic contents, Layer 4 load balancers save time in transit. However, this comes with trade-offs. Layer 4 load balancers know only the IP address, protocol, and TCP port. Knowing only a single IP address, the load balancer can monitor only a single service.
Layer 4 load balancing benefits include:
Requires fewer resources (no content examination).
Distributes traffic at the Transport layer.
The risk with a Layer 4 solution is that if a service fails but the server is still available, clients can connect to the failed service. This means that a resilient Layer 4 implementation requires multiple IP addresses configured with separate HTTP namespaces per service, for example, owa.contoso.com, eas.contoso.com, mapi.contoso.com, which allows for service-level monitoring.
Layer 7 load balancers work at the Application layer and can inspect the traffic content and direct it accordingly.
Layer 7 load balancers forego the raw performance benefits of Layer 4 load balancing for the simplicity of a single namespace, for example, mail.contoso.com, and per-service monitoring. Layer 7 load balancers understand the HTTP path, such as /owa or /Microsoft-Server-ActiveSync, or /mapi, and can direct traffic to working servers based on monitoring data.
Layer 7 load balancing benefits include:
Needs only a single IP address.
Inspects content and can direct traffic.
Provides notification of failed service that can be taken offline.
Handles load balancer SSL termination.
Distributes traffic at the application layer and understands the destination URL.
SSL should terminate at the load balancer as this offers a centralized place to correct SSL attacks.
The ports that need to be load balanced include some, such as those for IMAP4 or POP3, that may not even be used in your Exchange organization.
Load balancing deployment scenarios in Exchange Server
Exchange 2016 introduced significant flexibility for your namespace and load balancing architecture. With many options for deploying load balancing in your Exchange organization, from simple DNS to sophisticated third-party Layer 4 and Layer 7 solution, we recommend that you review them all in light of your organization's needs.
The following scenarios come with benefits and limitations, and understanding each is key to implementing the solution that best fits your Exchange organization:
Scenario A Single namespace, no session affinity: Layer 4 or Layer 7
Scenario B Single namespace, no session affinity: Layer 7
Scenario C Single namespace with session affinity, Layer 7
Scenario D Multiple namespaces and no session affinity, Layer 4
Scenario A Single namespace, no session affinity: Layer 4 or Layer 7
In this Layer 4 scenario, a single namespace, mail.contoso.com, is deployed for the HTTP protocol clients. The load balancer doesn't maintain session affinity. Because this is a layer 4 solution, the load balancer is configured to check the health of only a single virtual directory as it cannot distinguish Outlook on the web requests from RPC requests.
From the perspective of the load balancer in this example, health is per-server and not per-protocol for the designated namespace. Administrators will have to choose which virtual directory they want to target for the health probe; we recommend that you choose a heavily used virtual directory. For example, if the majority of your users utilize Outlook on the web, then choose the Outlook on the web virtual directory in the health probe.
As long as the Outlook on the web health probe response is healthy, the load balancer keeps the destination Mailbox server in the load balancing pool. However, if the Outlook on the web health probe fails for any reason, then the load balancer removes the destination Mailbox server from the load balancing pool for all requests associated with that namespace. This means that if the health probe fails, all client requests for that namespace are directed to another server, regardless of protocol.
Scenario B Single namespace, no session affinity: Layer 7
In this Layer 7 scenario, a single namespace, mail.contoso.com, is deployed for all the HTTP protocol clients. The load balancer doesn't maintain session affinity. Since the load balancer is configured for Layer 7, there is SSL termination and the load balancer knows the destination URL.
We recommend this configuration for Exchange 2016 and Exchange 2019. The load balancer is configured to check the health of the destination Mailbox servers in the load balancing pool, and a health probe is configured on each virtual directory.
For example, as long as the Outlook on the web health probe response is healthy, the load balancer will keep the destination Mailbox server in the Outlook on the web load balancing pool. However, if the Outlook on the web health probe fails for any reason, then the load balancer removes the target Mailbox server from the load balancing pool for Outlook on the web requests. In this example, health is per-protocol, which means that if the health probe fails, only the affected client protocol is directed to another server.
Scenario C Single namespace with session affinity, Layer 7
In this Layer 7 scenario, a single namespace, mail.contoso.com, is deployed for all the HTTP protocol clients. Because the load balancer is configured for Layer 7, there is SSL termination and the load balancer knows the destination URL. The load balancer is also configured to check the health of the target Mailbox servers in the load balancing pool. The health probe is configured on each virtual directory.
However, enabling session affinity decreases capacity and utilization. This is because the more involved affinity options, cookie-based load balancing or Secure Sockets Layer (SSL) session-ID, require more processing and resources. We recommend that you check with your vendor on how session affinity affects your load balancing scalability.
Just as in the previous scenario, as long as the Outlook on the web health probe response is healthy, the load balancer keeps the destination Mailbox server in the Outlook on the web load balancing pool. However, if the Outlook on the web health probe fails for any reason, then the load balancer removes the target Mailbox server from the load balancing pool for Outlook on the web requests. Here, health is per-protocol, which means that if the health probe fails, only the affected client protocol is directed to another server.
Scenario D Multiple namespaces and no session affinity
This last scenario with multiple namespaces and no session affinity offers per-protocol health checks and Layer 4 power. A unique namespace is deployed for each HTTP protocol client. For example, you would configure the HTTP protocol clients as mail.contoso.com, mapi.contoso.com, and eas.contoso.com.
This scenario provides per-protocol health checking while not requiring complex load-balancing logic. The load balancer uses Layer 4 and is not configured to maintain session affinity. The load balancer configuration checks the health of the destination Mailbox servers in the load balancing pool. In this setting, the health probes are configured to target the health of each virtual directory, as each virtual directory has a unique namespace. Because it's configured for Layer 4, the load balancer doesn't know the URL is being accessed, yet the result is as if it does know. Since health is per-protocol, if the health probe fails, only the affected client protocol is directed to another server.
Load balancing and managed availability in Exchange Server
Monitoring the available servers and services is key to high availability networks. Since some load balancing solutions have no knowledge of the target URL or the content of the request, this can introduce complexities for Exchange health probes.
Exchange 2016 and Exchange 2019 include a built-in monitoring solution, known as Managed Availability. Managed availability, also known as Active Monitoring or Local Active Monitoring, is the integration of built-in monitoring and recovery actions with the Exchange high availability platform.
Managed Availability includes an offline responder. When the offline responder is invoked, the affected protocol (or server) is removed from service.
To ensure that load balancers do not route traffic to a Mailbox server that Managed Availability has marked as offline, load balancer health probes must be configured to check <virtualdirectory>/healthcheck.htm , for example,.
Read more about managed availability in Managed availability.
Feedback | https://docs.microsoft.com/en-us/exchange/architecture/client-access/load-balancing?view=exchserver-2019 | CC-MAIN-2019-47 | refinedweb | 2,522 | 52.39 |
Search...
FAQs
Subscribe
Pie
FAQs
Recent topics
Flagged topics
Hot topics
Best topics
Search...
Search within I:
I/O and Streams
User input does nothing
Ma Chan
Greenhorn
Posts: 3
posted 19 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
Hi people. I have a few problems at the moment. Below is my incomplete program. It compiles but doesn't do a whole lot when run. I am doing a project on polymophism and inheritance. I have a footwear class(parent) and sneaker & thong classes(children)...my switch statements don't do anything. I'm trying to setColour to "something"(eg: whatever option user chooses), then it is written to a file. What's going on?
import java.io.* ; abstract class Footwear { protected String colour ; protected String material ; protected String brand ; protected int choice ; //constructor public Footwear() { System.out.println("Choose a footwear colour") ; System.out.println("------------------------") ; System.out.println("1. Red") ; System.out.println("2. Blue") ; System.out.println("3. White") ; System.out.println("Your Colour:") ; BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in)) ; try { choice = Integer.parseInt(stdin.readLine()) ; do { switch(choice) { case 1: setColour("Red") ; break ; case 2: setColour("Blue") ; break ; case 3: setColour("White") ; break ; default: System.out.println("Enter number 1, 2, or 3:") ; } }while(choice != 1 || choice != 2 || choice != 3) ; } catch(IOException ioe) { System.out.println("What a catch!") ; } } /*//default constructor public Footwear() { colour = "No Colour" ; material = "No Material" ; //brand = "Home Brand" ; }*/ //methods protected void setColour(String col) { colour = col ; } public String getColour() { return colour ; } protected void setMaterial(String mat) { material = mat ; } public String getMaterial() { return material ; } protected void toFile() { try { PrintWriter priWri = new PrintWriter(new FileWriter("output.txt")) ; priWri.println(getColour()) ; priWri.close() ; } catch(IOException ioe) { System.out.println("What a catch!!") ; } } //abstract method in parent class that is used //in the two child classes public abstract void howToWear() ; } //Child class class Sneakers extends Footwear { /*public Sneakers(String col, String mat) { super(col, mat) ; }*/ public void howToWear() { System.out.println("You have to tie up the laces") ; } } //Child class class Thongs extends Footwear { /*public Thongs(String col, String mat) { super(col, mat) ; }*/ public void howToWear() { System.out.println("You just slip them on") ; } } //Main method that instantiates the two child classes class Driver { public static void main(String[]args) { Sneakers mySneakers = new Sneakers() ; mySneakers.toFile() ; } }
Kathy Rogers
Ranch Hand
Posts: 104
posted 19 years ago
Number of slices to send:
Optional 'thank-you' note:
Send
Your problem here is not an input problem. It's to do with your while clause.
Think about it like this.
If I enter 1, then choice does not equal 2 and choice does not equal 3 but it does equal 1.
So when it gets to the while clause, you get
false OR true OR true
The while clause tests the condition and finds that it's true because at least one of the sub-conditions are true - so it does the do-while loop again . . . and again . . . and again.
You only want to perform the do-while loop again only if all the conditions are true - if choice does not equal 1 AND choice does not equal 2 AND choice does not equal 3 - so try replacing those ||s with &&s or checking if choice is less than 1 or greater than 3.
Btw, in order to pick up the user input using readLine(), you have to press return after you've entered your number.
Hope this helps,
Kathy
Who among you feels worthy enough to be my best friend? Test 1 is to read this tiny ad:
free, earth-friendly heat - a kickstarter for putting coin in your pocket while saving the earth
reply
reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
Refreshing a window
switch does nothing
child classes
Help needed on editing a specific row in JTable
How to accept Multiline conditional input through console in java program....
More... | https://coderanch.com/t/275853/java/User-input | CC-MAIN-2022-21 | refinedweb | 654 | 62.78 |
QT5 and modern OpenGL
I want to use QT as a replacement of freeglut or glfw. But I am really confused because I just saw 6 different ways to use QT with OpenGL.
If I want raw performance, how would I structure my QT program?
Like this ?
Now someone will troll because of the way you spelled Qt ;)
I don't know where you counted 6 different ways, you can basically use GL widgets with hybrid QPainter + standard OpenGL. The GUI aspect of it is much better performance (with the right render path optimizations of course), plus you get to draw some actual 3D inside of the GUI.
Alternatively, you can use Qt3D, which unfortunately lost its developers and at the moment I don't think it is being worked on. As a result, it is no longer an integral part of Qt5 but an addon module which you must get and build manually. If offers an API that can substitute many of the standard OpenGL functions as well as those, provided by glut/freeglut and so on. There is an API for working with meshes, shaders, materials and whatnot. Qt3D can be used natively with C++ or from QML.
Since Qt aims for portability, its own GL APIs target a level of common functionality shared by desktop and mobile OGL. Since you mention performance I assume you would want to use the advanced capabilities of newer OGL versions, then you are left with no option but to code against the raw OpenGL API.
I as well as many other people have been calling for a low level graphics API for 2D and 3D for a while, ported to OpenGL as well as Direct3D, which IMO would be of tremendous value to Qt, but it doesn't seem that this will become a priority for Qt anytime soon. One can still hope with Qt5 being done there will be resources to invest in new Qt modules.
Thank you for the fast and very informative response. So basically I just want to use OpenGL4.X for windows and linux. This means I continue using the QGLWidgets class, but instead of using the Qt helper functions like QGLShaderProgram or QGLBuffer, I just use raw OpenGL?
At least the documentation of QGLWidgets tells me this .
"You inherit from it and use the subclass like any other QWidget, except that you have the choice between using QPainter and standard OpenGL rendering commands."
So instead of using QPainter I just use standard OpenGL.
You can use both. It will be hard for you do paint paths, vector art and other 2D stuff like that with OpenGL, QPainter does that for you, generate vertices and whatnot.
Nothing prevents you from using the Qt provided classes, however, they may not be up to date with the latest functionality, provided by OpenGL. That doesn't mean you can't use the assisting classes for creating and compiling shaders and stuff like that, which is faster to do with the Qt API instead of the C-style OpenGL API.
Writing pure OpenGL will be more portable thou, I mean if you consider substituting Qt for another framework for a different platform or stuff like that. maintainers and developers of Qt3D, which is available as a Qt 5 Add On and will in all likelihood have its return to the Qt Essentials with Qt 5.1, which will also bring "additional OpenGL support":.
I've looked arround this, Forum, web page for some way topost a question. There does not seem to be a 'post question'. What would be the 'word', 'button' or 'link' I should click on so that I can post a question?
TIA
[quote author="Lukas Geyer" date="1356021868" two maintainers of Qt3D, which is available as a Qt 5 Add On will in all likelihood have its return to the Qt Essentials with Qt 5.1, which will also bring "additional OpenGL support":.[/quote]
Yes I saw this presentation twice. But I also saw different implementations such as the link in my first post with the core profile. Or the opengl examples. All used different methods of implementing OpenGL in Qt.
But the sample in the video is not online, probably because he used some functionality that will come with 5.1.
I also saw the example " OpenGL Window Example" which uses QWindow & QOpenGLFunctions instead of QGLWidgets. Is this now the correct way of using OpenGL?
Note that I do not need OpenGL ES1.1 or 2.0 support only OpenGl4.X.
[quote author="kantaki" date="1356022517"]But I also saw different implementations such as the link in my first post with the core profile.[/quote]Be aware that the article refers to Qt4, not Qt5.
[quote author="kantaki" date="1356022517"]I also saw the example " OpenGL Window Example" which uses QWindow & QOpenGLFunctions instead of QGLWidgets. Is this now the correct way of using OpenGL?[/quote]Prefer QOpenGL, but you can also use any highlevel functionality like Qt3D or the scenegraph.
[quote author="ArbolOne" date="1356022181"]I've looked arround this, Forum, web page for some way topost a question. There does not seem to be a 'post question'. What would be the 'word', 'button' or 'link' I should click on so that I can post a question?[/quote]
!(Start new discussion)!
Hi,
it depends what you want to do. You can freely choose to use QGLWidget or QWindow plus QOpenGLContext.
QGLWidget creates it's own QGLContext which is in turn implemented in terms of QOpenGLContext so they are equivalent really.
The main difference is that QGLWidget can (more) easily be embedded inside a widget based application. Whereas, QWindow + QOpenGLContext will just give you a single window more suitable for games or other full-screen visualisation tools.
I did not use any classes that are not readily available in those talks. All prototype classes I mentioned in the videos are available on gerrit. See the patch series starting at for example. I am currently refactoring some of these and will soon resubmit them against the development branch. They should apply cleanly against Qt 5.0.0 I believe.
I am also aiming to make the examples I show in the talk available online shortly but I have been uber-busy since returning from DevDays. ;)
If you have any specific questions please ask them here and I will do my best to answer you.
Incidentally the wiki article you reference in the original post was written a long time ago for Qt 4. I should find some time to update it for Qt5 to show the option of QWindow+QOpenGLContext.
Speaking of the devil... ;-)
[quote author="Lukas Geyer" date="1356023511"]Speaking of the devil... ;-)[/quote]
I'm like Beetlejuice :)
Okay thanks a lot :). But I have one last question. You mentioned I should only use QWindow + QOpenGLContext.
So I do not need QOpenGLFunctions at all if I only want to use OpenGL (no OpenGL ES)?
It's mainly for a graphics engine showcase in the future.
I didn't say you should only use QWindow+QOpenGLContext. The other QOpenGL* classes are perfectly fine to use. I highly recommend QOpenGLShaderProgram and friends. They will save you writing a lot of boiler-plate code.
QOpenGLFunctions however does not expose all OpenGL functions, only the subset used by Qt itself. This is what I am trying to address with
As mentioned earlier in this thread and on the development@qt-project.org mailing list, we have a whole bunch of other OpenGL enabler classes we could potentially add for Qt 5.1.
Of course once you have a window and context you are free to use raw OpenGL calls. Just beware that you will have to handle resolving the function pointers for any functions you use. Especially on windows this can be a pain.
Ok thanks , I think I should be able to use it now. :)
Okay I encountered a problem. :x
I tried to create opengl window with the QWindow but I am not sure how I can tell QWindow "Hey you have to use OpenGL now".
I did the following. I read the documentation but I probably misunderstood something.
@#ifndef GLWINDOW_H
#define GLWINDOW_H
#include <QWindow>
#include <QOpenGLBuffer>
#include <QOpenGLContext>
#include <QOpenGLShader>
#include <QGLFormat>
class GLWindow : public QWindow
{
QOpenGLContext* context;
public:
GLWindow(); void initGL(); void resize(int w, int h);
};
#endif // GLWINDOW_H@
@#include "glwindow.h"
#include <QDebug>
GLWindow::GLWindow()
{
initGL();
}
void GLWindow::resize(int w, int h){
glViewport(0,0,w,h); qDebug() << w << " " <<h;
}
void GLWindow::initGL()
{
context = new QOpenGLContext(this);
context->setFormat(requestedFormat());
context->create();
}
@
@#include "mainwindow.h"
#include <QGuiApplication>
#include <glwindow.h>
#include <QSurfaceFormat>
int main(int argc, char *argv[])
{
QGuiApplication a(argc, argv);
QSurfaceFormat format;
format.setSamples(4);
GLWindow window;
window.setFormat(format);
window.resize(800,600);
window.show();
return a.exec();
}@
[quote author="kantaki" date="1356053482"]I tried to create opengl window with the QWindow but I am not sure how I can tell QWindow "Hey you have to use OpenGL now".[/quote]
"QWindow::setSurfaceType()":
Have you taken a look at the "OpenGL Window Example":?
Something like this will do it:
@
OpenGLWindow::OpenGLWindow( const QSurfaceFormat& format,
QScreen* screen )
: QWindow( screen ),
m_context( 0 ),
m_scene( 0 )
{
// Tell Qt we will use OpenGL for this window
setSurfaceType( OpenGLSurface );
// Request a full screen button (if available) setFlags( flags() | Qt::WindowFullscreenButtonHint ); // Create the native window setFormat( format ); create(); // Create an OpenGL context m_context = new QOpenGLContext; m_context->setFormat( format ); m_context->create();
}
@
where the QSurfaceFormat is created in your main() function (adjust to suit your needs of course):
@
#include <QGuiApplication>
#include <QSurfaceFormat>
#include "diffusescene.h"
#include "openglwindow.h"
int main( int argc, char* argv[] )
{
QGuiApplication a( argc, argv );
// Specify the format we wish to use QSurfaceFormat format; format.setMajorVersion( 3 );
#if !defined(Q_OS_MAC)
format.setMinorVersion( 3 );
#else
format.setMinorVersion( 2 );
#endif
format.setDepthBufferSize( 24 );
format.setSamples( 4 );
format.setProfile( QSurfaceFormat::CoreProfile );
OpenGLWindow w( format ); w.setScene( new DiffuseScene ); w.show(); return a.exec();
}
@
Hope this helps.
ps The DiffuseScene class is just a custom class that does the actual OpenGL rendering in my case.
pps Remember to call QOpenGLContext::makeCurrent() on the QWindow before using it to render and to call QOpenGLContext::swapBuffers() after rendering.
This code works fine (displays a purple triangle) on Qt 4.8.1, but not on 5.0.0 (pre-compiled version, Windows 7 x64, MSVC2010 32bit compile). I get a window with a grey widget inside. I use "GLWidget" with "setCentralWidget(new GLWidget());" in a QMainWindow constructor... Most of the code is ripped straight from the "hellogl_es2" example btw.
Note that I uncommented the usage of native painting too:
@QPainter painter;
painter.begin(this);
painter.beginNativePainting();
...
painter.endNativePainting();
@
because it just doesn't work for me, on BOTH Qt versions...
What am I doing wrong?!
GLWidget.h
@#pragma once
#include <vector>
#include <QGLWidget>
#include <QGLShaderProgram>
#include <QGLFunctions>
#include <QTime>
class GLWidget: public QGLWidget, protected QGLFunctions
{
int frames;
QTime time;
QGLShaderProgram program;
int vertexAttribute;
int normalAttribute;
int matrixUniform;
float viewAngleX;
float viewAngleY;
float viewScale;
QMatrix4x4 projection;
std::vector<QVector3D> vertices;
std::vector<QVector3D> normals;
public:
GLWidget(QWidget * parent = nullptr);
virtual void initializeGL();
virtual void resizeGL(int width, int height);
virtual void paintGL();
void setDefaultScene();
void drawScene();
};@
GLWidget.cpp
@#include "GLWidget.h"
#include <QMessageBox>
GLWidget::GLWidget(QWidget * parent)
: QGLWidget(parent), viewScale(1.0f), viewAngleX(0.0f), viewAngleY(0.0f)
{
setAttribute(Qt::WA_PaintOnScreen);
setAttribute(Qt::WA_NoSystemBackground);
setAutoBufferSwap(true);
setMinimumSize(512, 512);
}
void GLWidget::resizeGL(int width, int height)
{
//Set up OpenGL viewport
glViewport(0, 0, width, height);
float aspect = (float)width / ((float)height ? height : 1.0f);
const float zNear = 1.0f, zFar = 10.0f, fov = 45.0f;
projection.setToIdentity();
projection.perspective(fov, aspect, zNear, zFar);
}
void GLWidget::initializeGL()
{
initializeGLFunctions();
QGLShader *vshader = new QGLShader(QGLShader::Vertex, this); const char *vsrc = "attribute highp vec4 vertex;\n" "attribute mediump vec3 normal;\n" "uniform mediump mat4 matrix;\n" "varying mediump vec4 color;\n" "void main(void)\n" "{\n" " vec3 toLight = normalize(vec3(0.0, 0.3, 1.0));\n" " float angle = max(dot(normal, toLight), 0.0);\n" " vec4 col = vec4(0.8, 0.8, 0.8, 1.0) * angle + vec4(0.2, 0.2, 0.2, 1.0);\n"
" color = clamp(col, 0.0, 1.0);\n"
" gl_Position = matrix * vertex;\n"
"}\n";
if (!vshader->compileSourceCode(vsrc) || vshader->log().size() > 0) {
QMessageBox::warning(this, "Error", QString("Error compiling vertex vertex shader source: %1").arg(vshader->log()));
return;
}
QGLShader *fshader = new QGLShader(QGLShader::Fragment, this); const char *fsrc = "varying mediump vec4 color;\n" "void main(void)\n" "{\n" " gl_FragColor = vec4(0.4, 0.0, 0.4, 1.0);\n" "}\n"; if (!fshader->compileSourceCode(fsrc) || fshader->log().size() > 0) {
QMessageBox::warning(this, "Error", QString("Error compiling fragment shader source: %1").arg(fshader->log()));
return;
}
if (!program.addShader(vshader)) {
QMessageBox::warning(this, "Error", QString("Error adding vertex shader: %1").arg(program.log()));
return;
}
if (!program.addShader(fshader)) {
QMessageBox::warning(this, "Error", QString("Error adding fragment shader: %1").arg(program.log()));
return;
}
if (!program.link() || !program.bind()) {
QMessageBox::warning(this, "Error", QString("Error linking shaders: %1").arg(program.log()));
return;
}
vertexAttribute = program.attributeLocation("vertex"); normalAttribute = program.attributeLocation("normal"); matrixUniform = program.uniformLocation("matrix");
setDefaultScene();
}
void GLWidget::paintGL()
{
/*QPainter painter;
painter.begin(this);
painter.beginNativePainting();*/ glClearColor(0.0f, 0.0f, 0.0f, 0.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST); QMatrix4x4 modelview; modelview.rotate(viewAngleX, 1.0f, 0.0f, 0.0f); modelview.rotate(viewAngleY, 0.0f, 1.0f, 0.0f); modelview.scale(viewScale); modelview.translate(0.0f, 0.0f, -2.0f); program.bind(); program.setUniformValue(matrixUniform, projection * modelview); drawScene(); program.release(); //painter.endNativePainting();
}
void GLWidget::setDefaultScene()
{
vertices.clear();
normals.clear();
vertices.push_back(QVector3D(-0.3f, -0.3f, 0.0f));
vertices.push_back(QVector3D( 0.3f, -0.3f, 0.0f));
vertices.push_back(QVector3D( 0.3f, 0.3f, 0.0f));
normals.push_back(QVector3D( 0.0f, 0.0f, 1.0f));
normals.push_back(QVector3D( 0.0f, 0.0f, 1.0f));
normals.push_back(QVector3D( 0.0f, 0.0f, 1.0f));
}
void GLWidget::drawScene()
{
program.enableAttributeArray(normalAttribute);
program.enableAttributeArray(vertexAttribute);
program.setAttributeArray(vertexAttribute, vertices.data());
program.setAttributeArray(normalAttribute, normals.data());
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
program.disableAttributeArray(normalAttribute);
program.disableAttributeArray(vertexAttribute);
}@
- steventaitinger
For an introduction to QT5.5 and modern OpenGL have a look at my post on that topic. It will give you most of the info you need to choose if and how you want to use OpenGL and Qt. | https://forum.qt.io/topic/22388/qt5-and-modern-opengl | CC-MAIN-2018-13 | refinedweb | 2,378 | 58.08 |
: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability
p and C with probability
1-p.
If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of
p?
What if Monty opens C?
Hint: you might want to use SymPy to do the algebra for you.
from sympy import symbols p = symbols('p')
p
# Solution # Here's the solution if Monty opens B. pmf = Pmf('ABC') pmf['A'] *= p pmf['B'] *= 0 pmf['C'] *= 1 pmf.Normalize() pmf['A'].simplify()
1.0*p/(p + 1)
# Solution # When p=0.5, the result is what we saw before pmf['A'].evalf(subs={p:0.5})
0.333333333333333
# Solution # When p=0.0, we know for sure that the prize is behind C pmf['C'].evalf(subs={p:0.0})
1.00000000000000
# Solution # And here's the solution if Monty opens C. pmf = Pmf('ABC') pmf['A'] *= 1-p pmf['B'] *= 1 pmf['C'] *= 0 pmf.Normalize() pmf['A'].simplify()
0.333333333333333*(p - 1)/(0.333333333333333*p - 0.666666666666667) | https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/monty_soln.ipynb | CC-MAIN-2021-10 | refinedweb | 230 | 76.11 |
Font addons
- General font routines
- ALLEGRO_FONT
- ALLEGRO_GLYPH
- al_init_font_addon
- al_shutdown_font_addon
- al_load_font
- al_destroy_font
- al_register_font_loader
- al_get_font_line_height
- al_get_font_ascent
- al_get_font_descent
- al_get_text_width
- al_get_ustr_width
- al_draw_text
- al_draw_ustr
- al_draw_justified_text
- al_draw_justified_ustr
- al_draw_textf
- al_draw_justified_textf
- al_get_text_dimensions
- al_get_ustr_dimensions
- al_get_allegro_font_version
- al_get_font_ranges
- al_set_fallback_font
- al_get_fallback_font
- Per glyph text handling
- Multiline text drawing
- Bitmap fonts
- TTF fonts
These functions are declared in the following header file. Link with allegro_font.
#include <allegro5/allegro_font.h>
General font routines
ALLEGRO_FONT
typed.
ALLEGRO_GLYPH
typedef struct ALLEGRO_GLYPH ALLEGRO_GLYPH;
A structure containing the properties of a character in a font.
typedef struct ALLEGRO_GLYPH { ALLEGRO_BITMAP *bitmap; // the bitmap the character is on int x; // the x position of the glyph on bitmap int y; // the y position of the glyph on bitmap int w; // the width of the glyph in pixels int h; // the height of the glyph in pixels int kerning; // pixels of kerning (see below) int offset_x; // x offset to draw the glyph at int offset_y; // y offset to draw the glyph at int advance; // number of pixels to advance after this character } ALLEGRO_GLYPH;
bitmap may be a sub-bitmap in the case of color fonts.
kerning should be added to the x position you draw to if you want your text kerned and depends on which codepoints al_get_glyph was called with.
Glyphs are tightly packed onto the bitmap, so you need to add offset_x and offset_y to your draw position for the text to look right.
advance is the number of pixels to add to your x position to advance to the next character in a string and includes kerning.
Since: 5.2.1
Unstable API: This API is new and subject to refinement.
See also: al_get_glyph
al_init_font_addon
bool al_init_font_addon(void)
Initialise the font addon.
Note that if you intend to load bitmap fonts, you will need to initialise allegro_image separately (unless you are using another library to load images).
Similarly, if you wish to load truetype-fonts, do not forget to also call al_init_ttf_addon.
Returns true on success, false on failure. On the 5.0 branch, this function has no return value. You may wish to avoid checking the return value if your code needs to be compatible with Allegro 5.0. Currently, the function will never return false.
See also: al_init_image_addon, al_init_ttf_addon, al_shutdown_font_addon
al_shutdown_font_addon
void al_shutdown_font_addon(void)
Shut down the font addon. This is done automatically at program exit, but can be called any time the user wishes as well.
See also: al_init_font_addon
al_load_font
ALLEGRO_FONT *al_load_font(char const *filename, int size, int flags)
Loads a font from disk. This will use al_load_bitmap_font_flags if you pass the name of a known bitmap format, or else al_load_ttf_font.
The flags parameter is passed through to either of those functions. Bitmap and TTF fonts are also affected by the current bitmap flags at the time the font is loaded.
See also: al_destroy_font, al_init_font_addon, al_register_font_loader, al_load_bitmap_font_flags, al_load_ttf_font
al_destroy_font
void al_destroy_font(ALLEGRO_FONT *f)
Frees the memory being used by a font structure. Does nothing if passed NULL.
See also: al_load_font
al_register_font_loader
bool al_register_font_loader(char const *extension, ALLEGRO_FONT *(*load_font)(char const *filename, int size, int flags))
Informs Allegro of a new font file type, telling it how to load files of this format.
The
extension should include the leading dot ('.') character. It will be matched case-insensitively.
The
load_font argument may be NULL to unregister an entry.
Returns true on success, false on error. Returns false if unregistering an entry that doesn't exist.
See also: al_init_font_addon
al_get_font_line_height
int al_get_font_line_height(const ALLEGRO_FONT *f)
Returns the usual height of a line of text in the specified font. For bitmap fonts this is simply the height of all glyph bitmaps. For truetype fonts it is whatever the font file specifies. In particular, some special glyphs may be higher than the height returned here.
If the X is the position you specify to draw text, the meaning of ascent and descent and the line height is like in the figure below.
X------------------------ /\ | | / \ | | /____\ ascent | / \ | | / \ | height ---------------- | | | descent | | | -------------------------
See also: al_get_text_width, al_get_text_dimensions
al_get_font_ascent
int al_get_font_ascent(const ALLEGRO_FONT *f)
Returns the ascent of the specified font.
See also: al_get_font_descent, al_get_font_line_height
al_get_font_descent
int al_get_font_descent(const ALLEGRO_FONT *f)
Returns the descent of the specified font.
See also: al_get_font_ascent, al_get_font_line_height
al_get_text_width
int al_get_text_width(const ALLEGRO_FONT *f, const char *str)
Calculates the length of a string in a particular font, in pixels.
See also: al_get_ustr_width, al_get_font_line_height, al_get_text_dimensions
al_get_ustr_width
int al_get_ustr_width(const ALLEGRO_FONT *f, ALLEGRO_USTR const *ustr)
Like al_get_text_width but expects an ALLEGRO_USTR.
See also: al_get_text_width, al_get_ustr_dimensions
al_draw_text
void al_draw_text(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, int flags, char const *text)
Writes the NUL-terminated string
text onto the target bitmap at position
x,
y, using the specified
font.
The
flags parameter can be 0 or one of the following flags:
- ALLEGRO_ALIGN_LEFT - Draw the text left-aligned (same as 0).
- ALLEGRO_ALIGN_CENTRE - Draw the text centered around the given position.
- ALLEGRO_ALIGN_RIGHT - Draw the text right-aligned to the given position.
It can also be combined with this flag:
- ALLEGRO_ALIGN_INTEGER - Always draw text aligned to an integer pixel position. This was formerly the default behaviour. Since: 5.0.8, 5.1.4
This function does not support newline characters (
\n), but you can use al_draw_multiline_text for multi line text output.
See also: al_draw_ustr, al_draw_textf, al_draw_justified_text, al_draw_multiline_text.
al_draw_ustr
void al_draw_ustr(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, int flags, const ALLEGRO_USTR *ustr)
Like al_draw_text, except the text is passed as an ALLEGRO_USTR instead of a NUL-terminated char array.
See also: al_draw_text, al_draw_justified_ustr, al_draw_multiline_ustr
al_draw_justified_text
void al_draw_justified_text(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x1, float x2, float y, float diff, int flags, const char *text)
Like al_draw_text, but justifies the string to the region x1-x2.
The
diff parameter is the maximum amount of horizontal space to allow between words. If justisfying the text would exceed
diff pixels, or the string contains less than two words, then the string will be drawn left aligned.
The
flags parameter can be 0 or one of the following flags:
- ALLEGRO_ALIGN_INTEGER - Draw text aligned to integer pixel positions. Since: 5.0.8, 5.1.5
See also: al_draw_justified_textf, al_draw_justified_ustr
al_draw_justified_ustr
void al_draw_justified_ustr(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x1, float x2, float y, float diff, int flags, const ALLEGRO_USTR *ustr)
Like al_draw_justified_text, except the text is passed as an ALLEGRO_USTR instead of a NUL-terminated char array.
See also: al_draw_justified_text, al_draw_justified_textf.
al_draw_textf
void al_draw_textf(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, int flags, const char *format, ...)
Formatted text output, using a printf() style format string. All parameters have the same meaning as with al_draw_text otherwise.
See also: al_draw_text, al_draw_ustr
al_draw_justified_textf
void al_draw_justified_textf(const ALLEGRO_FONT *f, ALLEGRO_COLOR color, float x1, float x2, float y, float diff, int flags, const char *format, ...)
Formatted text output, using a printf() style format string. All parameters have the same meaning as with al_draw_justified_text otherwise.
See also: al_draw_justified_text, al_draw_justified_ustr.
al_get_text_dimensions
void pixels):
-
al_get_ustr_dimensions
void al_get_ustr_dimensions(const ALLEGRO_FONT *f, ALLEGRO_USTR const *ustr, int *bbx, int *bby, int *bbw, int *bbh)
Like al_get_text_dimensions, except the text is passed as an ALLEGRO_USTR instead of a NUL-terminated char array.
See also: al_get_text_dimensions
al_get_allegro_font_version
uint32_t al_get_allegro_font_version(void)
Returns the (compiled) version of the addon, in the same format as al_get_allegro_version.
al_get_font_ranges
int al_get_font_ranges(ALLEGRO_FONT *f, int ranges_count, int *ranges)
Gets information about all glyphs contained in a font, as a list of ranges. Ranges have the same format as with al_grab_font_from_bitmap.
ranges_count is the maximum number of ranges that will be returned.
ranges should be an array with room for
ranges_count * 2 elements. The even integers are the first unicode point in a range, the odd integers the last unicode point in a range.
Returns the number of ranges contained in the font (even if it is bigger than
ranges_count).
Since: 5.1.4
See also: al_grab_font_from_bitmap
al_set_fallback_font
void al_set_fallback_font(ALLEGRO_FONT *font, ALLEGRO_FONT *fallback)
Sets a font which is used instead if a character is not present. Can be chained, but make sure there is no loop as that would crash the application! Pass NULL to remove a fallback font again.
Since: 5.1.12
See also: al_get_fallback_font, al_draw_glyph, al_draw_text
al_get_fallback_font
ALLEGRO_FONT *al_get_fallback_font(ALLEGRO_FONT *font)
Retrieves the fallback font for this font or NULL.
Since: 5.1.12
See also: al_set_fallback_font
Per glyph text handling
For some applications Allegro's text drawing functions may not be sufficient. For example, you would like to give a different color to every letter in a word, or use different a font for a drop cap.
That is why Allegro supports drawing and getting the dimensions of the individual glyphs of a font. A glyph is a particular visual representation of a letter, character or symbol in a specific font.
And it's also possible to get the kerning to use between two glyphs. These per glyph functions have less overhead than Allegro's per string text drawing and dimensioning functions. So, with these functions you can write your own efficient and precise custom text drawing functions.
al_draw_glyph
void al_draw_glyph(const ALLEGRO_FONT *f, ALLEGRO_COLOR color, float x, float y, int codepoint)
Draws the glyph that corresponds with
codepoint in the given
color using the given
font. If
font does not have such a glyph, nothing will be drawn.
To draw a string as left to right horizontal text you will need to use al_get_glyph_advance to determine the position of each glyph. For drawing strings in other directions, such as top to down, use al_get_glyph_dimensions to determine the size and position of each glyph.
If you have to draw many glyphs at the same time, use al_hold_bitmap_drawing with true as the parameter, before drawing the glyphs, and then call al_hold_bitmap_drawing again with false as a parameter when done drawing the glyphs to further enhance performance.
Since: 5.1.12
See also: al_get_glyph_width, al_get_glyph_dimensions, al_get_glyph_advance.
al_get_glyph_width
int al_get_glyph_width(const ALLEGRO_FONT *f, int codepoint)
This function returns the width in pixels of the glyph that corresponds with
codepoint in the font
font. Returns zero if the font does not have such a glyph.
Since: 5.1.12
See also: al_draw_glyph, al_get_glyph_dimensions, al_get_glyph_advance.
al_get_glyph_dimensions
bool al_get_glyph_dimensions(const ALLEGRO_FONT *f, int codepoint, int *bbx, int *bby, int *bbw, int *bbh)
Sometimes, the al_get_glyph_width or al_get_glyph_advance functions are not enough for exact glyph placement, so this function returns some additional information, particularly if you want to draw the font vertically.
The function itself returns true if the character was present in
font and false if the character was not present in
font.
Returned variables (all in pixel):
- bbx, bby - Offset to upper left corner of bounding box.
- bbw, bbh - Dimensions of bounding box.
These values are the same as al_get_text_dimensions would return for a string of a single character equal to the glyph passed to this function. Note that glyphs may go to the left and upwards of the X, in which case x and y will have negative values.
If you want to draw a string verticallly, for Japanese or as a game effect, then you should leave bby + bbh space between the glyphs in the y direction for a regular placement.
If you want to draw a string horizontally in an extra compact way,
then you should leave bbx + bbw space between the glyphs in the x direction for a compact placement.
In the figure below is an example of what bbx and bby may be like for a
2 glyph, and a
g glyph of the same font compared to the result of al_get_glyph_width().
al_get_glyph_width() al_get_glyph_width() __|___ __|__ / \ / \ bbx bbw bbx bbw <-->+<------>+ <-->+<----->+ X baseline ^ | | ^ | | bby | | | bby | | | v | | | | | +---+--------+ | | | ^ | ***** | | | | | |* ** | v | | bbh | | ** | bbh +---+-------+ | | ** | ^ | ***** | v |********| | |* *| +---+--------+ | | ***** | | | *| | | * *| v | **** | +---+-------+
Since: 5.1.12
See also: al_draw_glyph, al_get_glyph_width, al_get_glyph_advance.
al_get_glyph_advance
int al_get_glyph_advance(const ALLEGRO_FONT *f, int codepoint1, int codepoint2)
This function returns by how much the x position should be advanced for left to right text drawing when the glyph that corresponds to codepoint1 has been drawn, and the glyph that corresponds to codepoint2 will be the next to be drawn. This takes into consideration the horizontal advance width of the glyph that corresponds with codepoint1 as well as the kerning between the glyphs of codepoint1 and codepoint2.
Kerning is the process of adjusting the spacing between glyphs in a font, to obtain a more visually pleasing result. Kerning adjusts the space between two individual glyphs with an offset determined by the author of the font.
If you pass ALLEGRO_NO_KERNING as codepoint1 then al_get_glyph_advance will return 0. this can be useful when drawing the first character of a string in a loop.
Pass ALLEGRO_NO_KERNING as codepoint2 to get the horizontal advance width of the glyph that corresponds to codepoint1 without taking any kerning into consideration. This can be used, for example, when drawing the last character of a string in a loop.
This function will return zero if the glyph of codepoint1 is not present in the
font. If the glyph of codepoint2 is not present in the font, the horizontal advance width of the glyph that corresponds to codepoint1 without taking any kerning into consideration is returned.
When drawing a string one glyph at the time from the left to the right with kerning, the x position of the glyph should be incremented by the result of al_get_glyph_advance applied to the previous glyph drawn and the next glyph to draw.
Note that the return value of this function is a recommended advance for optimal readability for left to right text determined by the author of the font. However, if you like, you may want to draw the glyphs of the font narrower or wider to each other than what al_get_glyph_advance returns for style or effect.
In the figure below is an example of what the result of al_get_glyph_advance may be like for two glypphs
A and
l of the same font that has kerning for the "Al" pair, without and with the ALLEGRO_NO_KERNING flag.
al_get_glyph_advance(font, 'A', 'l') ___|___ / \ ------------- /\ -| / \ | /____\ | / \ | / \ \_ ------------- al_get_glyph_advance(font, 'A', ALLEGRO_NO_KERNING) ____|____ / \ --------------- /\ -| / \ | /____\ | / \ | / \ \_ ---------------
Since: 5.1.12
See also: al_draw_glyph, al_get_glyph_width, al_get_glyph_dimensions.
Multiline text drawing
al_draw_multiline_text
void al_draw_multiline_text(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, float max_width, float line_height, int flags, const char *text)
Like al_draw_text, but this function supports drawing multiple lines of text. It will break
text in lines based on its contents and the
max_width parameter. The lines are then layed out vertically depending on the
line_height parameter and drawn each as if al_draw_text was called on them.
A newline
\n in the
text will cause a "hard" line break after its occurrence in the string. The text after a hard break is placed on a new line. Carriage return
\r is not supported, will not cause a line break, and will likely be drawn as a square or a space depending on the font.
The
max_width parameter controls the maximum desired width of the lines. This function will try to introduce a "soft" line break after the longest possible series of words that will fit in
max_length when drawn with the given
font. A "soft" line break can occur either on a space or tab (
\t) character.
However, it is possible that
max_width is too small, or the words in
text are too long to fit
max_width when drawn with
font. In that case, the word that is too wide will simply be drawn completely on a line by itself. If you don't want the text that overflows
max_width to be visible, then use al_set_clipping_rectangle to clip it off and hide it.
The lines
text was split into will each be drawn using the
font,
x,
color and
flags parameters, vertically starting at
y and with a distance of
line_height between them. If
line_height is zero (
0), the value returned by calling al_get_font_line_height on
font will be used as a default instead.
The
flags ALLEGRO_ALIGN_LEFT, ALLEGRO_ALIGN_CENTRE, ALLEGRO_ALIGN_RIGHT and ALLEGRO_ALIGN_INTEGER will be honoured by this function.
If you want to calculate the size of what this function will draw without actually drawing it, or if you need a complex and/or custom layout, you can use al_do_multiline_text.
Since: 5.1.9
See also: al_do_multiline_text, al_draw_multiline_text, al_draw_multiline_textf
al_draw_multiline_ustr
void al_draw_multiline_ustr(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, float max_width, float line_height, int flags, const ALLEGRO_USTR *ustr)
Like al_draw_multiline_text, except the text is passed as an ALLEGRO_USTR instead of a NUL-terminated char array.
Since: 5.1.9
See also: al_draw_multiline_text, al_draw_multiline_textf, al_do_multiline_text
al_draw_multiline_textf
void al_draw_multiline_textf(const ALLEGRO_FONT *font, ALLEGRO_COLOR color, float x, float y, float max_width, float line_height, int flags, const char *format, ...)
Formatted text output, using a printf() style format string. All parameters have the same meaning as with al_draw_multiline_text otherwise.
Since: 5.1.9
See also: al_draw_multiline_text, al_draw_multiline_ustr, al_do_multiline_text
al_do_multiline_text
void al_do_multiline_text(const ALLEGRO_FONT *font, float max_width, const char *text, bool (*cb)(int line_num, const char *line, int size, void *extra), void *extra)
This function processes the
text and splits it into lines as al_draw_multiline_text would, and then calls the callback
cb once for every line. This is useful for custom drawing of multiline text, or for calculating the size of multiline text ahead of time. See the documentation on al_draw_multiline_text for an explanation of the splitting algorithm.
For every line that this function splits
text into the callback
cb will be called once with the following parameters:
line_num- the number of the line starting from zero and counting up
line- a pointer to the beginning character of the line (see below)
size- the size of the line (0 for empty lines)
extra- the same pointer that was passed to al_do_multiline_text
Note that
line is not guaranteed to be a NUL-terminated string, but will merely point to a character within
text or to an empty string in case of an empty line. If you need a NUL-terminated string, you will have to copy
line to a buffer and NUL-terminate it yourself. You will also have to make your own copy if you need the contents of
line after
cb has returned, as
line is not guaranteed to be valid after that.
If the callback
cb returns false, al_do_multiline_text will stop immediately, otherwise it will continue on to the next line.
Since: 5.1.9
See also: al_draw_multiline_text
al_do_multiline_ustr
void al_do_multiline_ustr(const ALLEGRO_FONT *font, float max_width, const ALLEGRO_USTR *ustr, bool (*cb)(int line_num, const ALLEGRO_USTR * line, void *extra), void *extra)
Like al_do_multiline_text, but using ALLEGRO_USTR instead of a NUL-terminated char array for text.
Since: 5.1.9
See also: al_draw_multiline_ustr
Bitmap fonts
al_grab_font_from_bitmap
ALLEGRO_FONT *al_grab_font_from_bitmap(ALLEGRO_BITMAP *bmp, int ranges_n, const
al_load_bitmap_font
ALLEGRO_FONT *al_load_bitmap_font(const char *fname)
Load a bitmap font from a file. This is done by first calling al_load_bitmap_flags and then al_grab_font_from_bitmap.
If you wanted to load an old A4 font, for example, it would be better to load the bitmap yourself in order to call al_convert_mask_to_alpha on it before passing it to al_grab_font_from_bitmap.
See also: al_load_bitmap_font_flags, al_load_font, al_load_bitmap_flags
al_load_bitmap_font_flags
ALLEGRO_FONT *al_load_bitmap_font_flags(const char *fname, int flags)
Like al_load_bitmap_font but additionally takes a flags parameter which is a bitfield containing a combination of the following:
ALLEGRO_NO_PREMULTIPLIED_ALPHA : The same meaning as for al_load_bitmap_flags.
See also: al_load_bitmap_font, al_load_bitmap_flags
al_create_builtin_font
ALLEGRO_FONT *al_create_builtin_font(void)
Creates a monochrome bitmap font (8x8 pixels per character).
This font is primarily intended to be used for displaying information in environments or during early runtime states where no external font data is available or loaded (e.g. for debugging).
The builtin font contains the following unicode character ranges:
0x0020 to 0x007F (ASCII) 0x00A1 to 0x00FF (Latin 1) 0x0100 to 0x017F (Extended A) 0x20AC to 0x20AC (euro currency symbol)
Returns NULL on an error.
The font memory must be freed the same way as for any other font, using al_destroy_font.
Since: 5.0.8, 5.1.3
See also: al_load_bitmap_font, al_destroy_font
TTF fonts
These functions are declared in the following header file. Link with allegro_ttf.
#include <allegro5/allegro_ttf.h>
al_init_ttf_addon
bool al_init_ttf_addon(void)
Call this after al_init_font_addon to make al_load_font recognize ".ttf" and other formats supported by al_load_ttf_font.
Returns true on success, false on failure.
al_shutdown_ttf_addon
void al_shutdown_ttf_addon(void)
Unloads the ttf addon again. You normally don't need to call this.
al_load_ttf_font
ALLEGRO_FONT *al_load_ttf_font(char const *filename, int size, int flags)
Loads a TrueType font from a file using the FreeType library. Quoting from the FreeType FAQ this means support for many different font formats:
TrueType, OpenType, Type1, CID, CFF, Windows FON/FNT, X11 PCF, and others
The
size parameter determines the size the font will be rendered at, specified in pixels. The standard font size is measured in units per EM, if you instead want to specify the size as the total height of glyphs in pixels, pass it as a negative value.
Note: If you want to display text at multiple sizes, load the font multiple times with different size parameters.
The following flags are supported:
ALLEGRO_TTF_NO_KERNING - Do not use any kerning even if the font file supports it.
ALLEGRO_TTF_MONOCHROME - Load as a monochrome font (which means no anti-aliasing of the font is done).
ALLEGRO_TTF_NO_AUTOHINT - Disable the Auto Hinter which is enabled by default in newer versions of FreeType. Since: 5.0.6, 5.1.2
See also: al_init_ttf_addon, al_load_ttf_font_f
al_load_ttf_font_f
ALLEGRO_FONT *al_load_ttf_font_f(ALLEGRO_FILE *file, char const *filename, int size, int flags)
Like al_load_ttf_font,.
al_load_ttf_font_stretch
ALLEGRO_FONT *al_load_ttf_font_stretch(char const *filename, int w, int h, int flags)
Like al_load_ttf_font, except it takes separate width and height parameters instead of a single size parameter.
If the height is a positive value, and the width zero or positive, then font will be stretched according to those parameters. The width must not be negative if the height is positive.
As with al_load_ttf_font, the height may be a negative value to specify the total height in pixels. Then the width must also be a negative value, or zero.
Returns
NULL if the height is positive while width is negative, or if the height is negative while the width is positive.
Since: 5.0.6, 5.1.0
See also: al_load_ttf_font, al_load_ttf_font_stretch_f
al_load_ttf_font_stretch_f
ALLEGRO_FONT *al_load_ttf_font_stretch_f(ALLEGRO_FILE *file, char const *filename, int w, int h, int flags)
Like al_load_ttf_font_stretch,.
Since: 5.0.6, 5.1.0
See also: al_load_ttf_font_stretch
al_get_allegro_ttf_version
uint32_t al_get_allegro_ttf_version(void)
Returns the (compiled) version of the addon, in the same format as al_get_allegro_version.
al_get_glyph
bool al_get_glyph(const ALLEGRO_FONT *f, int prev_codepoint, int codepoint, ALLEGRO_GLYPH *glyph)
Gets all the information about a glyph, including the bitmap, needed to draw it yourself. prev_codepoint is the codepoint in the string before the one you want to draw and is used for kerning. codepoint is the character you want to get info about. You should clear the 'glyph' structure to 0 with memset before passing it to this function for future compatibility.
Since: 5.2.1
Unstable API: This API is new and subject to refinement.
See also: ALLEGRO_GLYPH | http://liballeg.org/a5docs/5.2.1.1/font.html | CC-MAIN-2017-39 | refinedweb | 3,766 | 51.18 |
#include <YContextMenu.h>
ContextMenu: Similar to PushButton, but with several actions: Upon clicking on a ContextMenu (or activating it with the keyboard), a pop-up menu opens where the user can activate an action. Menu items in that pop-up menu can have submenus (that will pop up in separate pop-up menus).
Internally, this widget is more similar to the Tree widget. The difference is that it does not keep a "selected" status, but triggers an action right away, just like a PushButton. Like PushButton, ContextMenu sends an event right away when the user selects an item (clicks on a menu item or activates it with the keyboard). Items that have a submenu never send an event, they simply open their submenu when activated.
Constructor.
'label' is the user-visible text on the button (not above it like all other SelectionWidgets).
Destructor.
Add one item. This widget assumes ownership of the item object and will delete it in its destructor.
This reimplementation will an index to the item that is unique for all items in this ContextMenu. That index can be used later with findMenuItem() to find the item by that index.
Reimplemented from YSelectionWidget.
Reimplemented from YSelectionWidget.
Add multiple items. For some UIs, this can be more efficient than calling addItem() multiple times. This function also automatically calls resolveShortcutConflicts() and rebuildMenuTree() at the end.
Derived classes can overwrite this function, but they should call this base class function at the end of the new implementation.
Reimplemented from YSelectionWidget.
Reimplemented from YSelectionWidget.
Delete all items.
Reimplemented from YSelectionWidget.
Reimplemented from YSelectionWidget.
Recursively find the first menu item with the specified index from iterator 'begin' to iterator 'end'.
Returns 0 if there is no such item.
Recursively find the first menu item with the specified index. Returns 0 if there is no such item.
Alias for findMenuItem(). Reimplemented to ensure consistent behaviour with YSelectionWidget::itemAt().
Rebuild the displayed menu tree from the internally stored YMenuItems.
The application should call this (once) after all items have been added with addItem(). YContextMenu::addItems() calls this automatically.
Derived classes are required to implement this.
Resolve keyboard shortcut conflicts: Change shortcuts of menu items if there are duplicates in the respective menu level.
This has to be called after all items are added, but before rebuildMenuTree() (see above). YContextMenu::addItems() calls this automatically.
Returns a descriptive name of this widget class for logging, debugging etc.
Reimplemented from YSelectionWidget. | https://doc.opensuse.org/projects/libyui/HEAD/classYContextMenu.html | CC-MAIN-2018-05 | refinedweb | 404 | 52.15 |
MVP Profile:
Website:
· How long have you been using VB?
I’ve been using Visual Basic for about 8 years now. I started out with VBA, programming in Word, PowerPoint and Access, and moved into VB6. However, Access quickly became my primary IDE. I started creating simple databases for our company, then complex databases, and finally started writing automation applications. Today I’m a hardcore VB.NET enthusiast! I’ve programmed with VB.NET 2002, 2003, 2005, 2008, and a little 2010 beta. I’m known on the web as “VBRocks” and “VB Rocks”.
· What industry do you work in?
I work in the printing industry! My company prints statements for various credit unions and utility companies. I write the first line of software that processes the data sent to us by our clients.
· How big is your development team?
My development team is 3: Me, myself and I! Fortunately, we get along pretty good! 🙂
· What kind of apps do you most commonly build?
I mostly build applications that read data in from various formats, such as regular fixed field text files, csv files, xml files, database files, etc. However, I also do a lot of database programming of internal applications. I use SQL Server as a back-end, and Visual Basic applications as a front end.
· What’s the most interesting app you’ve ever built?
I’d have to say that the most interesting application I’ve ever built is a program I called “Extract”. Our company had been uploading thousands of individual .pdf documents to our vendors, which took hours and hours to do. My management desired a solution to the problem, but could not figure anything out. My Extract program allowed my company to upload only 1 (master) .pdf to our vendors, and then extract (split) the .pdf’s on their end. This ended up saving my company thousands of dollars, and a lot of time.
· Please tell us about an app that you’re working on at the moment.
Currently I’m working on a Remove File Transfer program comparative to WS FTP and File Zilla. This application can use FTP, FTPS, HTTP and HTTPS protocols. Additionally, it can upload to WevDAV servers.
· What other technologies do you most commonly use?
I love ADO.NET! It Rocks! I love how easy it is to connect to just about any data source and load data into a disconnected dataset. Plus, it has powerful built-in support for binding, sorting and filtering, which saves a lot of time, by not having to program the respective Interface Implementations. I also love how easy it is to extend ADO.NET to support data validation. Plus, I love how easy it is to read to and write from XML files.
· What are some of your favorite VB features?
I’m excited about the improved Lambda Expressions! VS 2008 only supported single line Lambda Expressions, which were a little frustrating; especially since C# supports multi line anonymous delegates. But I am so excited to see Lambda Expressions improved to support multiple lines. I also really like LINQ. Although it’s not always the most efficient, it is very flexible and very powerful! In fact, there’s probably no better way to select, sort and filter object data that with LINQ. I also love the compiler! C# has always annoyed me, because it always seems like it takes the compiler longer to respond when errors have been created, and then fixed. Sometimes it’s necessary to build the project just to get error indicators to disappear. The VB compiler is a lot better: As soon as an error is detected, it is identified; as soon as it’s fixed, the identifier is removed.
· What do you like most about VB as a programming language?
I love the syntax of Visual Basic… It’s more like, how I think… Instead of “Love programming I do” (as Yoda might say), it’s “I love programming”! Additionally, there are a lot of VB features that make life so much easier! For example, a TextFieldParser (Microsoft.VisualBasic.FileIO namespace), plus the “My” feature, which provides easy and intuitive access to a number of .NET features, such as interacting with the computer, applications, settings, resources, etc. I also love how smart VB is, when it comes to interpreting how something like “=” is used… As an assignment, or equivalency? VB knows… Another thing I love is how VB always drops in the ending structure keyword. For example, if you type the word “Do”, and press enter, you automatically get “Loop” inserted. I’m still surprised that C# hasn’t got to the place where they automatically add {} where required. Not to put down C# too much, because I’m a C# programmer as well, but I personally believe that VBRocks!
For other interviews in this series, please visit.
Are you a VB, too? Submit your story here!
That Great…
but can i ask for help
i need VB.Net Code to transfare files from Clients To Server
???
Can u Help ME??
That Great…
but can i ask for help
i need VB.Net Code to transfare files from Clients To Server
???
Can u Help ME??
if u could i will be so pleased..my mail:
eaelmoneem@gmail.com
I love Gary’s enthusiasm, it’s catching!
I’m a C# lover, but if I need to develop a large app in short time I’ll chose VB. It’s easy, fast to develop and prone to less syntax mistakes (at least for me).
If I wanna learn, experiment, image processing, freaky code, pointers’ crazyness… I’ll chose C#.
Everything is in the flavor you prefer for your dayly work! 🙂
Agreed, yelinna: "Everything is in the flavor you prefer for your daily work"
I’m sure that high-quality VB code exists somewhere. However, in the set of VB applications, there are far more that are written poorly than are written well.
It must be just me, but it seems like every time I use VB I loose a few IQ points. In the past I used QuickBasic 4.5 heavily… but these days, the only time I use VB is where I’m pulled onto an existing project that needs work. Invariably, the VB code is a mess, because the guy who wrote it does not know how to program.
While there are some developers, such as those outlined on these pages, who know how to do their jobs well in VB, they are outliers. Since VB is such a "simple" language it is targeted to the lowest common denominator in experience level and abilities. Many (if not most) developers who start projects in VB (or who don’t run away screaming from existing VB projects) just don’t have the experience level needed to produce top quality work…
and then the expensive (experienced) developers need to come in and clean up the mess.
If you want to develop a large app quickly, use Python, or IronPython if you need .NET integration.
Only 8 years? And you’re already an MVP!!!!
I’ve been using vb for little less than half that, and still trying to figure out the basics…
Care to send over your training material to SA? 😛 | https://blogs.msdn.microsoft.com/vbteam/2009/06/30/im-a-vb-gary-lima-visual-basic-mvp/ | CC-MAIN-2017-47 | refinedweb | 1,211 | 66.94 |
Assignment: Write an application that interfaces with an input &/or output device that you made, comparing as many tool options as possible.
This week is all about writing software applications to talk to embedded devices. Software app development is a whole industry and many take degrees in computer science to spend their lives doing just this, but this week serves as an introduction to a number of key principles:
A program is basically set of repeatable instructions to make a computer do something. In most programming languages, this involves writing “human readable” code instructions that will then be passed to a computer via a compiler. A compiler is basically a translator from human to machine language.
Compiled or “high-level” languages write and read more like the way we humans would give an instruction, eg. “take object A and put it next to object B”. This differs from a low level language such as Assembly language which provide little or no abstraction from a computer’s architecture—commands or functions. We have seen examples of this in our micro-controller data sheets. That said, some languages, like C are both high level and low level which makes the distinction ambiguous.
Here I will do a bit of an overview of the languages Neil mentioned in class which I find both interesting and useful for this week.
Processing is an open-source programming language and IDE (that is very similar to Arduino in syntax) based on the Java language. It comes with an easy-to-use, creative coding environment or IDE and has fantastic resources. If you are starting out Processing is a great way to learn how to code. And if you like to hallucinate while you learn how code I recommend Daniel Schiffman’s Coding Train.
FireFly is what is called a visual programming add-on for Grasshopper (which works in Rhino CAD) which allows you to interact with the Arduino microcontroller and other input/output devices like web cams, mobile phones, game controllers and more.
Python is a very popular high-level programming language for general-purpose programming created by Guido van Rossum after Monty Python’s Flying Circus. Python is an interpretive language which makes it really fast because it execute instructions directly and freely, but they ten to be slower when compiling low level machine instructions.
JavaScript (“JS” for short) is a full-fledged dynamic programming language that, when applied to an HTML document, can provide dynamic interactivity on websites. It was invented by Brendan Eich, co-founder of the Mozilla project. It has become increasingly popular because of it’s flexibility. Developers have written a large variety of tools such as Node.js which takes Javascript out of the browser.
Throughout the FabAcademy, Neil has been using C for his programming examples. C is a low level, general purpose programming language. It can be compiled using the open source compiler GCC to provide low-level access to memory, write complex machine instructions for embedded system applications, and requires minimal run-time support. C is also one of the most widely used programming languages of all time.
MIT App Inventor like Firefly is a visual programming environment that makes it very easy to build fully functional apps for Android smartphones and tablets. They are great for simple programs but start to get unwieldy for more complex programs. The work around, according to Neil, is that you can program conventionally and add them as nodes to the App Inventor to handle complexity.
We have been using the FTDI cable to do serial communication between our micro-controllers and our computers via the Universal Serial Bus (USB). Most programming languages have serial libraries that allow you to talk directly via the serial port of your computer (USB is one of many serial protocols). PySerial is the serial library for Python, SerialPort is the same for Node.js. Processing and Arduino also have their respective serial libraries.
For this week, I decided to start out with Processing as I like it for it’s simplicity and creative visuals. I started by connecting my Hello Board from Wk 7 and see if I can get it to interact with Processing as both an input and output device.
For my first sketch I used the Arduino SoftwareSerial library which allows serial communication to take place on the other digital pins of our Attiny44. As you can see below I include the library and then declare my serial ports to pin 0 and 1 before the
void setup()
//HelloBoard and Processing // Include the SoftwareSerial library to run serial on Attiny44 #include <SoftwareSerial.h> // declare my serial ports 0 and 1 (RX, TX) SoftwareSerial mySerial(0, 1);
High-Low Tech explains the ins and outs of programming the Attiny’s with the Arduino IDE. I also highly recommend Serial Communication with the Tiny’s. Basically, my first test was making sure I could press the button on my board and it would send the data out via FTDI USB-serial cable allowing me to read and access it in Precessing. Below the I set my pins, and set a high baud rate seeing as I am using the 20 MHz external clock.
const byte ledPin_1 = 8; // define LED constant variable const byte buttonPin = A3; // define button constant variable void setup() { mySerial.begin(115200); // set baud rate to 115200 bits per second pinMode(buttonPin, INPUT); // set buttonPin (A2) as INPUT pinMode(ledPin_1, OUTPUT); // set LEDs as outputs } void loop() { int sensorValue = digitalRead(buttonPin); // read INPUT as sensorValue mySerial.print(sensorValue); // serial print sensorValue if (digitalRead(buttonPin) == 0) { digitalWrite(ledPin_1, HIGH); // turn LED on } else { digitalWrite(ledPin_1, LOW); // turn LED off } delay(20); // Wait 100 milliseconds } // end Loop
The
mySerial.print function allows me to see the
SensorValue in the Arduino serial monitor, this is good to check before we move on to Processing.
It’s also really good practice if you’re starting out to code to break task down to smaller sizes. Printing info can give you feedback that two or more processors are communicating. Also learn how to comment.
In Processing we have to import the serial library and then use the
printArray(Serial.list()); function to figure out which USB port our ATTiny is sending its data through.
Tom Igoe has a neat little sketch that usese the
serial.available() function to check your ports and see what is coming in. However, as you can see below, the console was reading every time i pressed the button but the console was giving me values 48 and 49!
After a bit of research I discovered that this is because the communicating via serial depends on something called a Serial Port Buffer, which is sort of like a bus stop. I discovered more about this in the networking and comms week. The Serial Available function basically tells us how many bytes we have in the serial buffer (which can store up to 64 bytes). Basically, a typed 1 in ASCII is represented as the number 49 in decimal, a 0 is 48.
At least now I was getting data into Processing. Now I could go ahead and play around with creating a user interface. My first was very basic again, it played with registering a simple colour every time I pressed the button.
import processing.serial.*; //import serial library Serial myPort; // Create object from Serial class char val; // Data received from the serial port void setup() { size(500, 500); String portName = Serial.list()[5]; println(Serial.list()); myPort = new Serial(this, Serial.list()[5], 115200); // Set baud rate equal to microprocessor } void draw() { while ( myPort.available() > 0) { // If data is available, val = myPort.readChar(); // read it and store it in val println(val); } background(255); // Set background to white if (val == '1') { // If the serial value is 0, fill(0); // set fill to black } else { // If the serial value is not 0, fill(204); // set fill to light gray } rect(0, 0, 500, 500); }
Now I wanted to check that I could interface with my Attiny board via Processing. Again, here is my Processing code (this time I tried with a lower Baud rate)
import processing.serial.*; Serial myPort; // Create object from Serial class void setup() { size(200,200); //make our canvas 200 x 200 pixels big String portName = Serial.list()[5]; //change the 0 to a 1 or 2 etc. to match your port myPort = new Serial(this, portName, 9600); } void draw() { if (mousePressed == true) { //if we clicked in the window myPort.write('1'); //send a 1 println("1"); } else { //otherwise myPort.write('0'); //send a 0 } }
And here is my Arduino setup programmed to receive:
#include <SoftwareSerial.h> // include the Software serial library to run serial on Attiny45 SoftwareSerial mySerial(0, 1); // declare my serial ports 1 and 0 (MISO, MOSI) char val; // Data received from the serial port int ledPin = 8; // Set the pin to digital I/O 8 void setup() { pinMode(ledPin, OUTPUT); // Set pin as OUTPUT mySerial.begin(9600); // Start serial communication at 9600 bps } void loop() { if (mySerial.available()) { // If data is available to read, val = mySerial.read(); // read it and store it in val } if (val == '1') { // If 1 was received digitalWrite(ledPin, HIGH); // turn the LED on } else { digitalWrite(ledPin, LOW); // otherwise turn it off } delay(10); // Wait 10 milliseconds for next reading }
You can download my Processing code sketches for week 13 from my Gitlab repository. Otherwise, it is all available in the documentation above. | http://fab.academany.org/2018/labs/barcelona/students/nicolo-gnecchi/interface-and-application-programming/ | CC-MAIN-2019-13 | refinedweb | 1,581 | 51.68 |
How to Mock a Rest API in Python
How to Mock a Rest API in Python
In this article, we discuss how to mock a REST API with request-mock and Python and perform unit tests.
Join the DZone community and get the full member experience.Join For Free
A.
Why Unit Tests Anyway?
Our main focus when writing software is building new features and fixing bugs. Of course, we need to test what we built, but we get the most joyful moment when our newly developed feature works. The next step is to write unit tests… But, we already know it is working, so why spend so much effort that we didn't break anything when we changed some code (assuming the unit tests are well written and provide enough code coverage). It is therefore also common practice to run the unit tests as part of your CI/CD pipeline.
If you do not like writing unit tests after developing a new feature, you can also consider writing your unit tests first, letting ;-) .
Create Your First Unit Test
We will build upon the sources of the Jira time report generator. We are using Python 3.7 and PyCharm as an IDE. First, let’s create a
test directory and right-click the directory in PyCharm. Choose
New - Python File and
Python unit test. This creates the following default file:
xxxxxxxxxx
import unittest
class MyTestCase(unittest.TestCase):
def test_something(self):
self.assertEqual(True, False)
if __name__ == '__main__':
unittest.main()
Running this unit test obviously fails (True does not equal False), but we do have set up the basics for writing our own unit tests now.
Mocking a Rest API
We want to unit test the
get_updated_issues function and this provides us a first challenge: the
get_updated_issues function contains a call to the Jira Rest API. We do not want our unit test to be dependent on a third-party service, and therefore, we need a way to mock the Rest API. There are several options to mock a REST API, but we will make use of the requests-mock Python library, which fits our needs.
Install the
requests-mock Python library:
xxxxxxxxxx
pip install requests_mock
Test Single Page Response
The
get_updated_issues function will request the issues that are updated in a certain time period. In our unit test, we will verify the behavior when one page with results is retrieved (the Rest API supports pagination, but that is something for a next unit test):
xxxxxxxxxx. On line 5, we define the expected result with variable
expected_result when the function
get_updated_issues returns. At lines 11 that:
xxxxxxxxxx.
Test Failed
The unit tests above all pass. But do they fail when something is wrong? We can therefore change the following line in the
get_updated_issues function:
xxxxxxxxxx
issues_json.extend(response_json['issues'])
with:
xxxxxxxxxx:
xxxxxxxxxx
AssertionError: Lists differ
Test Multiple URI’s
The Jira work logs are to be retrieved per issue. We }} | https://dzone.com/articles/how-to-mock-a-rest-api-in-python | CC-MAIN-2020-29 | refinedweb | 489 | 71.95 |
Thursday, December 09, 2004
Tests
Not much today. Mostly a lot of tests and confirming that things were running the way they are supposed to. Since the tests take so long at the moment this seems to take forever, and it is hard to do much else in the meantime (except, perhaps to read).
I started to look at SOFA, but really made little headway.
Everything seems a little slow at the moment. It must be the wind down before Christmas.
IEE
I went to the IEE Christmas Luncheon today, which was a nice diversion. I suppose I go along out of a sense of obligation, as there are very few people my age who go along, or even bother to be a member at the moment. Fortunately I sat next to a really nice guy named Teck Wai, who just a little younger than I am (or I assume he was. I often mistake Chinese people for being younger than they really are).
Since the lunch is also the AGM, the new committee was elected, and I took up a position again. It's not too onerous when you're not on the executive, and you can actually get something done when you get involved. While I was at it I convinced Teck Wai to nominate as well. Another friend of mine seconded the nomination the moment that he saw that I was suggesting a candidate who was aged under 40. :-)
The IEE is keen on expanding their involvement in IT, as many members like myself work in it. I should work on organising a few more events which would be of interest to younger members (like myself) as the main relevance of institutions like this is to introduce professionals in the same area to one another. I'll see what I can come up with, otherwise I may find myself dropping my membership due to lack of others' interest.
Posted by
Paula G
at
Thursday, December 09, 2004
0
Wednesday, December 08, 2004
Catching Up
Back from the conference today, and I spent quite a bit of time catching up on email. One of these emails was from ML about how to traverse an RDF list in iTQL. Curious about his use of subqueries I had a go myself, and immediately found an exception.
It turns out that if a subquery is doing a
walk using a blank node as the starting point, then Kowari will throw an exception as it tries to localize the blank node. I know that this has worked for me before, so it has left me a little confused. I had to go and meet Bob at this point, so I emailed AN about it.
Masters
Things went well with Bob, and his recommendation at the moment is to stop reading for a bit and concentrate on writing my confirmation. He suggests that if I have any "holes" in my knowledge in the literature review, then I should leave an annotated space for the moment, and come back to it later.
Another suggestion was to write a semantic matrix for the top few papers. This means writing a list of all the things of interest to me, and then describing each paper according to each criteria. Sounds useful, so I'll be drawing something like that over the next few weeks.
SOFA
There wasn't a lot of time at the end of the day, so I just had enough time to get the latest SOFA version and put it into a local CVS project. Creating a CVS project was new for me, and seems a little clunky, but after a discussion with DJ I agreed that it was probably appropriate to create projects that way. The big issue is that we keep a lot of things in separate projects around here rather than separate modules in a single project, which means that we do a lot more project creation than is really common.
I want to write about yesterday's Evolve conference, but I'll have to come back to it.
Posted by
Paula G
at
Wednesday, December 08, 2004
0
Monday, December 06, 2004
Tests and SOFA
Quiet day today. With the latest release happening I was intent to test TKS as much as possible. SR had been having trouble, so I also picked up a clean checkout to confirm that all was well (which it was).
I discovered what had happened to my file modifications on Friday evening. I'd done everything required, but I hadn't removed the org.kowari.resolver version of the directories. So while all the new stuff was correct, the old code was also there. I'd been in such a hurry to leave on Friday that I failed to notice that the files that "lost" their changes were actually in the wrong area. TJ and SR found this problem, as the system was trying to compile these extra classes, and kept giving them errors.
In the meantime I finished reading the SOFA introduction and design whitepaper, and have now started making my way through the API. Next, I'll have to look at KA's implementation of the
net.java.dev.sofa.model.OntologyModel and
net.java.dev.sofa.model.ThingModel interfaces.
Example Rule
It turned out that the owl:inverseOf statement from OWL Lite hadn't had the entailment iTQL documented for it. This was pretty easy, and I'm surprised that it wasn't already done. However, I updated some of the example RDF, and wrote the iTQL along with a description for it. Getting everything right and tested can take a little while, so by the time I'd finished GN had gone home for the day, meaning it won't make it into the documentation this time around.
This means that the only OWL statements which aren't documented with iTQL are from versioning. Theoretically this can involve inferencing as well (is this document which is compared to be compatible with another document actually consistent?) but we won't be considering that in the first instance. Well.... maybe TJ wants to, but I won't be. :-)
Posted by
Paula G
at
Monday, December 06, 2004
0
Friday, December 03, 2004
Remote Moves
Today was spent moving the remote resolver out of Kowari and into TKS. This means that distributed queries will now be restricted to TKS only. A distributed query is one where models from more than one server are all used.
The reason for the move is because the remote resolver was never supposed to be in Kowari yet. The only system to have distributed queries in the past was TKS. With the new resolver framework this functionality is being performed by the remote resolver, so this resolver is only supposed to be in TKS.
TKS is built on top of Kowari, meaning that as Kowari changes, so too does TKS. With the recent release coming up, Kowari was moving quickly, and I needed to make sure that the remote resolver was working with the latest build. To create the resolver in TKS would have meant updating TKS all the time, which would have become to expensive to get any effective work done. So I built it all in Kowari, with the intention of eventually moving it all.
I'll confess, it was in the back of my mind that maybe someone would forget about it and it would accidentally make it into the Kowari release, but unfortunately that didn't happen. :-) Not to worry. As TKS picks up new features (the so-called "value-adds") then older features like this will eventually make it back into Kowari. OTOH, maybe someone out there will really want this feature and do it themselves. After all, it is probably the simplest of all of our resolvers. I'm tempted to do it myself, but somehow I think that would be inappropriate. :-)
Be that as it may, the day was spent moving this stuff. That meant changing package names, removing the open source licence, changing config files, removing from one part of CVS and adding to another.... the list went on.
By the end of the day everything was working, and I went to do one final test. However, the build failed to compile, and when I looked, all of the source files in TKS were back to their original (Kowari) form. I don't know how this happened, but did not take too long to fix. The real work had been in the configuration files and the build scripts, so I had it going again in about 5 minutes.
While testing the changes, I read more about SOFA and KA's implementation of it in Kowari. I haven't quite finished the documentation yet, but I've covered most of it.
Posted by
Paula G
at
Friday, December 03, 2004
0
Thursday, December 02, 2004
Note
I'm too tired to proof read this (or yesterday's). I'll just have to do it in the morning. In the meantime I'd be grateful if you'll skip typos, and try to infer what I really meant when I write garbage. :-)
Final Tests
TJ checked in a lot of things the previous night, so he was keen for me to do a full set of tests again. I really needed to run them again anyway, as my machine had shut down over night for some reason.
This latest test showed up a new problem with the filesystem resolver, but this did not look as simple as the I'd seen yesterday. I showed it to ML, who recognised the problem and assured me that it was not something that I had broken. I didn't think it was, but you never know!
With everything passing, I was finally able to check this code in. I wanted to commit the files individually, so I could annotate the commit for each file appropriately. This meant that I had to carefully choose the order of commits, and try to do it when everyone else was out, just in case I made a mistake in the ordering, and they picked up a half completed set of files.
SOFA
While the tests were running I was able to read a lot more about SOFA. It certainly has a lot going for it. The main shortcomings that I see are that it does not scale very well, and there are a few OWL constructs that it cannot represent.
In the case of the latter, this is not really a problem, as most of the things that it can't do are OWL Full, with only a little OWL DL missing. For instance, there is little scope for writing relations for relations. Restrictions do not seem to cover owl:someValuesFrom or owl:complementOf, and unions on restrictions are not covered at all. However, where SOFA does not permit certain OWL constructs to be represented, often there is an equivalent construct which will suit the same purpose.
The scaling issue is really due to one of SOFA's strengths. SOFA manages to keep all of it's data in memory, such that it knows what kind of relationships everything has to everything else. Our RDF store scaled much better, but there is no implicit meaning behind any of the statements. As a result, modifying anything in a SOFA ontology results in consistency checks and inferencing checks being done quickly and easily. To do the same in RDF means querying a lot of statements to understand the relationships involved with the data just modified.
So while SOFA won't apply well to a large, existing dataset, it works very well with data that is being modified one statement at a time. It's a nice way of dealing with the change problem that I've avoided up until now. Experience with this should also help to apply similar principles to changing data in the statement store. Similarly, it may be possible to apply some SOFA inferences on our data be using appropriate iTQL, making the SOFA interface more efficient.
One way to make SOFA work with larger data sets is to serialize an ontology out to a file, and then to bring it back in via SOFA, but this is not very efficient. For this reason, the need to write a proper rules engine has not been removed. I had been wondering about this when I discovered that SOFA did some inferencing.
Sub Answers
Today TJ discovered a problem with some answer types running out of memory. This occurs when an answer is serialized for transport over RMI. The problem is that an answer might have only a couple of lines, but those lines contain subanswers which are huge.
When I serialized answers for RMI it occurred to me that I didn't really know how large a subanswer could be. I initially worried that a large enough set of subanswers could be too much to be handled in memory. However, I couldn't see subanswers getting too large, and so I went ahead with what I had.
Never commit code when you think that in some circumstances there could be a problem with it. I know that. Why did I choose to forget it?
After a few minutes of thought I came up with a solution. The
RemoteAnswerWrapperAnswer class determines that how to package an answer for the network. This decision needs to be replaced with an Answer factory. The factory then makes a choice:
- If the answer has subanswers as its rows, then return an RMI reference (higher network overhead, but no memory usage).
- If the answer contains data and is less than a configured number of rows, then serialize the subanswer.
- If the answer contains data and is larger than a configured number of rows, then use a paged answer object.
Remote Resolver
One of our "value adds" (I hate that term) for TKS, is to support distributed queries. All queries may be done to remote server, but only distributed queries can refer to more than one server in a single query.
These distributed queries now get handled by the remote resolver. When I worked on this resolver I kept it in Kowari, but now that it works properly it has to be moved to TKS. As new features come into TKS, they may reduce the "high-level" value of the remote resolver, and hence allow it down into Kowari (presuming someone else hasn't rewritten it already - after all, that's what open source is about). But for the moment, it has to stay a TKS-only feature.
AN had been looking to move this code, but was waiting until I finished with the
RmiSessionFactorycode. However, he was having a little difficulty with it, so now that I'm done with the looping bug I've been asked to move the remote resolver myself.
Other than changing the package names of the code, the only real differences seem to be in the Ant build scripts. By the end of the day I'd managed enough that TKS will now build the remote resolver, but I have not yet run all the tests on it.
In the meantime I tried the tests on Kowari now that the remote resolver is gone. The first thing that happened was that many of the Jena tests, and all of the JRDF tests failed. I agonized over this for a while, but then I realised that I'd been doing the TKS build while these tests were running. According to AN, a TKS build can briefly start a server, which would definitely conflict with any set of Kowari tests which were being run at the same time.
I'm now re-running the Kowari tests, and I have my fingers crossed that when I get there in the morning they will have all passed. Then I just have to see how well (or poorly) the TKS tests run.
Posted by
Paula G
at
Thursday, December 02, 2004
0
Wednesday, December 01, 2004
Class Paths
Well it was nearly working today. The tests which failed seemed unrelated to the changes I'd made, so I was initially quite confused. A little logging eventually showed the way.
The problem was occurring during the constructor to
RmiSessionFactory where it gets the URI of the local server from
EmbeddedKowariServer via reflection. The confusing part was that it was giving a
ClassNotFoundException for the class "
SimpleXAResourceException", which at first glance appears to be completely unrelated.
Coincidentally, it was less than a week ago that I was having a conversation with DM about just this. Even though I only needed a simple static method from
EmbeddedKowariServer, the class loader does not know this, and so attempts to load up the entire class. This includes having to recursively load all the classes for the return types and the exceptions. The
SimpleXAResourceException was being thrown by the
startServer method, and since this class wasn't available, the reflection code for this class failed.
There were two approaches to this problem. The first was to make sure that all referenced classes are available. However, this is fraught with difficulty for two reasons. The first is that I'll only discover which classes are needed at runtime. So even if I figured out where I needed to add
SimpleXAResourceException, I could simply end up with a report on the next class which wasn't found.
The second problem is that it becomes difficult to know that I caught everything. In some instances the classes are all available and everything runs flawlessly. In other parts of the code some classes are not available. I don't know where or when the classpath changes, and while I might be able to get it working for every code path run in the tests, it's always possible that a new type of usage will call this code again without some required classes available.
The other approach was to factor out all the static information from
EmbeddedKowariServer and put it in a simple public class specifically designed for handling this information. This works much better, as it does not have any dependencies on non-java packages. It also has the nice effect of putting all the server configuration info into one place.
The class I built for this was called
ServerInfo. All methods and variables on this class are static. The getter methods are public and the setter methods are package scope, the intention being to only call them from
EmbeddedKowariServer.
As usual, the tests to make sure all was well took a very long time to run on each occasion. In the meantime I used the opportunity to learn more of the SOFA API.
One little hiccough that I encountered was with the "filesystem" resolver. Fortunately, it turned out that it was just a JXUnit test that was getting back XML which differed from what it expected. The problem was that someone had checked in their own computer's name hardcoded in the file. I checked the CVS log to see who was the culprit, and I discovered that the problem had already been recognised and fixed.
With everything apparently going, I did a final CVS update and started the tests again for the night. At that point, the fact that it was all working at last, and a headache that I'd developed in the meantime convinced me to leave a half an hour early. :-)
Posted by
Paula G
at
Wednesday, December 01, 2004
0
| http://gearon.blogspot.com/2004_12_01_archive.html | CC-MAIN-2017-09 | refinedweb | 3,262 | 69.52 |
function-available Function
.NET Framework 4
Returns True if the function is in the function library.
The argument must evaluate to a string that is a QName. The QName is expanded into an expanded-name using the namespace declarations in scope for the expression. The
function-available function returns True if and only if the expanded-name is the name of a function in the function library. If the expanded-name has a non-null namespace Uniform Resource Identifier (URI), it refers to an extension function; otherwise, it refers to a function defined by XML Path Language (XPath) or XSLT.
ReferenceXML Data Types Reference
Show: | https://msdn.microsoft.com/en-us/library/ms256124(v=vs.100).aspx | CC-MAIN-2015-18 | refinedweb | 105 | 54.73 |
On Fri, 2018-06-08 at 17:27 +0300, Konstantin Khorenko wrote:> Currently if we face a lock taken by a process invisible in the current> pidns we skip the lock completely, but this> > 1) makes the output not that nice> (root@vz7)/: cat /proc/${PID_A2}/fdinfo/3> pos: 4> flags: 02100002> mnt_id: 257> lock: (root@vz7)/:> > 2) makes it more difficult to debug issues with leaked flocks> if you get error on lock, but don't see any locks in /proc/$id/fdinfo/$file> > Let's show information about such locks again as previously, but> show zero in the owner pid field.> > After the patch:> ===============> (root@vz7)/:cat /proc/${PID_A2}/fdinfo/3> pos: 4> flags: 02100002> mnt_id: 295> lock: 1: FLOCK ADVISORY WRITE 0 b6:f8a61:529946 0 EOF> > Fixes: 9d5b86ac13c5 ("fs/locks: Remove fl_nspid and use fs-specific l_pid for remote locks")> Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com>> ---> fs/locks.c | 8 +++-----> 1 file changed, 3 insertions(+), 5 deletions(-)> > diff --git a/fs/locks.c b/fs/locks.c> index bfee5b7f2862..e533623e2e99 100644> --- a/fs/locks.c> +++ b/fs/locks.c> @@ -2633,12 +2633,10 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,> > fl_pid = locks_translate_pid(fl, proc_pidns);> /*> - * If there isn't a fl_pid don't display who is waiting on> - * the lock if we are called from locks_show, or if we are> - * called from __show_fd_info - skip lock entirely> + * If lock owner is dead (and pid is freed) or not visible in current> + * pidns, zero is shown as a pid value. Check lock info from> + * init_pid_ns to get saved lock pid value.> */> - if (fl_pid == 0)> - return;> > if (fl->fl_file != NULL)> inode = locks_inode(fl->fl_file);(cc'ing Nickolay)As Andrey points out, this behavior was originally added in commitd67fd44f697d to address performance issues when there are a lot of locksheld by tasks in other namespaces.Will allowing this code to show these again cause a problem there?-- Jeff Layton <jlayton@kernel.org> | https://lkml.org/lkml/2018/6/14/267 | CC-MAIN-2018-39 | refinedweb | 325 | 63.53 |
Hi, thank you.
Just read email and maintenance of my internet provider start soon. Just to crop parameter. This first 0,5 there is original number 0,75. When this is there auto crop feature doesn't work for content close to 4:3 ratio some are even more bit like 4:3,something. Channel i am watching is in full HD just old things so most are 4:3 or as i said more squarish and close to it with black borders on sides. If i don't lower this value, auto crop feature made that all section measure but result is all 0. So glad it worked for me when i lower this. Yes probably too low, but it doesn't matter really for me. Here comes the picture from crop misc tab. The first number. It is from advanced crop tab.
[Attachment 54844 - Click to enlarge]
Otherwise it didn't work.
Also got idea not sure if possible to implement. If this scan notice there is full frame image it stop analyze. But yes, it is no needed, fast enough.
Thank you for your answer very much. Yes i know thread is bit difficult, but my point is more lower it, rather than use higher value, mostly encoding in parallel which utilize CPU good enough. Just in some special case. Also i know process is default below normal actually in idle priority.
EDIT: Not on so old, but to make it sure downloaded latest
EDIT2: it doesn't say audio need to be adjusted, it told me HEv2 AAC need stereo as source and simply let me know this. Not any adjustment mentioned. No any other action. Aborted therefore. Just source was stereo.
That maintenance is any minute, not sure all will be send. See you and good night!
Thank you and see you!
Bernix
Try StreamFab All-in-One and rip Netflix video! Or Try DVDFab and copy Blu-rays! or rip iTunes movies!
+ Reply to Thread
Results 1,861 to 1,884 of 1884
Thread
Last edited by Bernix; 9th Sep 2020 at 15:55. Reason: Edit and Edit 2
Hi,
here i made video and put it on streamable. First analyze pass is with default hybrid settings, which do not allow content close to or exact 4:3 from 16:9 screen working. And what must be modified to it work. Sorry for length of video. And yes i could set less frames to analyze, so... But in first case it know there will not be any crop since first analyzed frame. <- In fact in this case it do not know, but in some cases it knows at first frame.
That settings of first pass is how Hybrid comes default, not sure enough time spot that 0,75 in video.
I got one which is exact cut from 1920x1080 to 1440x1080. But with default it cant work either.
Don't know if tag video will work, but get link for sure easy.
See you!
Bernix
Last edited by Bernix; 10th Sep 2020 at 09:10.
Ah, okay now I get it.
1920*0.75 = 1440, so the width can at maximum be reduced by 480 pixels but your source needs 530.
So lowering the default to 0.7 would allow 576 (= 1920 - 1920*0.70) which should do the job.
Will adjust the default value to 0.7 for the next relaese.
Until then you can simply set the value to 0.7 and save it in your defaults.
If you can reproduce the HE-AACv2 issue with a specific source, please create a debug output and share that with me, so I can check what is causing this to fail.
Cu Selurusers currently on my ignore list: deadrats, Stears555
Hi Selur,
i got it saved for long time on 0,5. Because old TV things. This is just for demonstration purpose how default work. Also that 1920 x 0,75 = 1440 it doesn't work. 1440 is still excluded. Lowering this settings i can do exact 1440x1080, with 0,75 it make just one side somehow. So 1920-240 on left. Not sure how it works. It is actually not so important. Just that cropping 16:9 to 4:3, i think is frequent and some can have problems realize why autocrop doesn't work.
Lowering it default to 0,7 i think is good idea.
Thank you and see you!
Bernix
Hi Selur,
another stupid question from me. There is frame interpolation in vapoursynth. With presets. Fastest faster fast and medium. Seems to me like medium isn't best possible. Are there some slower or better preset? Yes medium is good enough, just for something to archive if any better presets.
Speaking about Interframe/SVP.
Thank you and see you.
Bernix
No havsfunc.InterFrame only has 'medium', 'fast', 'faster', 'fastest'
Code:
def InterFrame(Input, Preset='Medium', Tuning='Film', NewNum=None, NewDen=1, GPU=False, InputType='2D', OverrideAlgo=None, OverrideArea=None, FrameDouble=False): if not isinstance(Input, vs.VideoNode): raise vs.Error('InterFrame: This is not a clip') # Validate inputs Preset = Preset.lower() Tuning = Tuning.lower() InputType = InputType.upper() if Preset not in ['medium', 'fast', 'faster', 'fastest']: raise vs.Error(f"InterFrame: '{Preset}' is not a valid preset") if Tuning not in ['film', 'smooth', 'animation', 'weak']: raise vs.Error(f"InterFrame: '{Tuning}' is not a valid tuning") if InputType not in ['2D', 'SBS', 'OU', 'HSBS', 'HOU']: raise vs.Error(f"InterFrame: '{InputType}' is not a valid InputType") def InterFrameProcess(clip): # Create SuperString if Preset in ['fast', 'faster', 'fastest']: SuperString = '{pel:1,' else: SuperString = '{' SuperString += 'gpu:1}' if GPU else 'gpu:0}' # Create VectorsString if Tuning == 'animation' or Preset == 'fastest': VectorsString = '{block:{w:32,' elif Preset in ['fast', 'faster'] or not GPU: VectorsString = '{block:{w:16,' else: VectorsString = '{block:{w:8,' if Tuning == 'animation' or Preset == 'fastest': VectorsString += 'overlap:0' elif Preset == 'faster' and GPU: VectorsString += 'overlap:1' else: VectorsString += 'overlap:2' if Tuning == 'animation': VectorsString += '},main:{search:{coarse:{type:2,' elif Preset == 'faster': VectorsString += '},main:{search:{coarse:{' else: VectorsString += '},main:{search:{distance:0,coarse:{' if Tuning == 'animation': VectorsString += 'distance:-6,satd:false},distance:0,' elif Tuning == 'weak': VectorsString += 'distance:-1,trymany:true,' else: VectorsString += 'distance:-10,' if Tuning == 'animation' or Preset in ['faster', 'fastest']: VectorsString += 'bad:{sad:2000}}}}}' elif Tuning == 'weak': VectorsString += 'bad:{sad:2000}}}},refine:[{thsad:250,search:{distance:-1,satd:true}}]}' else: VectorsString += 'bad:{sad:2000}}}},refine:[{thsad:250}]}' # Create SmoothString if NewNum is not None: SmoothString = '{rate:{num:' + repr(NewNum) + ',den:' + repr(NewDen) + ',abs:true},' elif clip.fps_num / clip.fps_den in [15, 25, 30] or FrameDouble: SmoothString = '{rate:{num:2,den:1,abs:false},' else: SmoothString = '{rate:{num:60000,den:1001,abs:true},' if OverrideAlgo is not None: SmoothString += 'algo:' + repr(OverrideAlgo) + ',mask:{cover:80,' elif Tuning == 'animation': SmoothString += 'algo:2,mask:{' elif Tuning == 'smooth': SmoothString += 'algo:23,mask:{' else: SmoothString += 'algo:13,mask:{cover:80,' if OverrideArea is not None: SmoothString += f'area:{OverrideArea}' elif Tuning == 'smooth': SmoothString += 'area:150' else: SmoothString += 'area:0' if Tuning == 'weak': SmoothString += ',area_sharp:1.2},scene:{blend:true,mode:0,limits:{blocks:50}}}' else: SmoothString += ',area_sharp:1.2},scene:{blend:true,mode:0}}' # Make interpolation vector clip Super = clip.svp1.Super(SuperString) Vectors = core.svp1.Analyse(Super['clip'], Super['data'], clip, VectorsString) # Put it together return core.svp2.SmoothFps(clip, Super['clip'], Super['data'], Vectors['clip'], Vectors['data'], SmoothString) # Get either 1 or 2 clips depending on InputType if InputType == 'SBS': FirstEye = InterFrameProcess(Input.std.Crop(right=Input.width // 2)) SecondEye = InterFrameProcess(Input.std.Crop(left=Input.width // 2)) return core.std.StackHorizontal([FirstEye, SecondEye]) elif InputType == 'OU': FirstEye = InterFrameProcess(Input.std.Crop(bottom=Input.height // 2)) SecondEye = InterFrameProcess(Input.std.Crop(top=Input.height // 2)) return core.std.StackVertical([FirstEye, SecondEye]) elif InputType == 'HSBS': FirstEye = InterFrameProcess(Input.std.Crop(right=Input.width // 2).resize.Spline36(Input.width, Input.height)) SecondEye = InterFrameProcess(Input.std.Crop(left=Input.width // 2).resize.Spline36(Input.width, Input.height)) return core.std.StackHorizontal([FirstEye.resize.Spline36(Input.width // 2, Input.height), SecondEye.resize.Spline36(Input.width // 2, Input.height)]) elif InputType == 'HOU': FirstEye = InterFrameProcess(Input.std.Crop(bottom=Input.height // 2).resize.Spline36(Input.width, Input.height)) SecondEye = InterFrameProcess(Input.std.Crop(top=Input.height // 2).resize.Spline36(Input.width, Input.height)) return core.std.StackVertical([FirstEye.resize.Spline36(Input.width, Input.height // 2), SecondEye.resize.Spline36(Input.width, Input.height // 2)]) else: return InterFrameProcess(Input)
SVP does allow additional parameters: which might give better results than the presets.
For example:
Interframe only uses pel=2 for medium,
pel: 2,
The accuracy of the motion estimation. Value can only be 1, 2 or 4. 1 means a precision to the pixel, 2 means a precision to half a pixel, 4 - to quarter pixel (not recommended to use).
-> So if your find 'better' settings (that in general improve interpolation), I could add other presets using those settings.
Cu Selur
Ps.: same is true for Interframe when using Avisynth.users currently on my ignore list: deadrats, Stears555
Hi Selur, thank you. Just seeing medium imply me there are slower. So there are but not recommended. Thank you for very detailed answer. No i will not find any, i am not able to do
I am using cpu for it btw
I think probably problem could be some fragments or hickups when interpolating to 1/4 of pixel probably. Or that memory as you are saying. Thank you and see you! Bernix
-
-
-
I'll upload a new version tomorrow, hopefully that version can be properly extracted on Win7 again.
re-packaged and uploaded the version to my site, try if that one works for you
Last edited by Selur; 12th Dec 2020 at 15:49.users currently on my ignore list: deadrats, Stears555
-
-
I've recently started using Hybrid for a few projects. I've been mostly successful on some of the projects, but I simply can't seem to figure out how to successfully encode a DVD. I've selected the DVD option and I "think" I set up everything correctly, but every time I try Hybrid stops at the "creating index" part. It just says "waiting". I don't believe it's crashed, but it's not doing anything.
I've got to believe it's me and that I've simply overlooked a setting, but I'm not seeing it. I ripped the DVD with DVD Decrypter and pointed Hybrid to the folder it created.
Any ideas? Thanks!
Would need a proper debug output (see:) to know what's happening.
wild guess:
a. could be a problem with the created index file
b. could be Hybrid not detecting the indexing file
c. could be a problem with the decoding call
d. could be a some other tool (anti virus, firewall) interfering
Cu Selurusers currently on my ignore list: deadrats, Stears555
Thanks for getting back to me. Hopefully this is the information you are looking for.
I am using Hybrid 2020.12.13.1. It's not exactly an error message, but more that the job stops after extracting audio and then "waiting" to create the index file. Nothing happens after that. Here's what is in the jobs tab.
05 Audio 100% finished extraction audio stream with mplayer
06 Index waiting creating index
I selected the DVD input option, navigated to the folder with the Video_ts folder and then Hybrid processed the DVD contents. Once loaded, I selected Avisynth and set up the avisynth settings. Selected my crop settings for resizing and then set up the MP4 settings. I set the output folder for the completed file and started the job.
I'm attaching the HybridDebugOutput file and a part of the vob file that was saved as an mpg. I used Mpg2Cut2 to cut the vob file. Of course, since this is a DVD, that might not be so helpful.
Hopefully this is helpful. I also noticed you live near Bonn. I lived in Bonn almost 30 years ago. I enjoyed my time there.
Danke.
It's a bug with DGIndex and the input parsing of it.
Code:
COMMAND: "C:\Program Files\Hybrid\32bit\avisynthPlugins\DGIndex.exe" -om 0 -ai "L:\Clone_Wars_Vol_One\VIDEO_TS\VTS_01_1.VOB" -o "L:\Encoded Videos\Hybrid\Temp\2021-01-04@16_14_30_1210" -hide -exit Missing input: Program Files\Hybrid\32bit\avisynthPlugins\DGIndex.exeusers currently on my ignore list: deadrats, Stears555
-
-
-
Viel spass beim Arbeit (said no one ever!).
Thanks for the workaround. I'm giving it a try right now. Damn. It still won't create the index file. However, it did skip that and move on to generating the Avisynth (avs) file. Then it stopped and is on waiting instead of encoding. I've stopped it for now.
I'm attaching the updated debug file.
Again, thanks for your help!
- you should rip the dvd with a more up to date
ripper such as dvdfab or anydvd the dvd may have copy protection
that isn't supported by DVD Decrypter which hasn't been updated in years.
Thanks for the tip though. I think Cu Selur thinks it is a problem with the call to DGIndex.
-
-
Similar Threads
vp9 vs x265 vs DivX265By deadrats in forum Video ConversionReplies: 14Last Post: 28th Jun 2015, 09:48
HEVC-x265 player in linux?By racer-x in forum LinuxReplies: 4Last Post: 20th Mar 2014, 18:10
Hybrid [x264/XViD - MKV/MP4] Converter Support ThreadBy Bonie81 in forum Video ConversionReplies: 6Last Post: 8th Jan 2013, 03:53
VP8 vs x264By Selur in forum Video ConversionReplies: 14Last Post: 14th Apr 2012, 07:48
How often do you reinstall your operating system(windows,mac,linux etc..)?By johns0 in forum PollsReplies: 28Last Post: 22nd Jan 2011, 17:14 | https://forum.videohelp.com/threads/345169-Hybrid%28Windows-Linux-Mac%29-Input-x264-x265-Xvid-VP8-VP9/page63?s=3234dee995191adf5bfa12e859b11889 | CC-MAIN-2021-25 | refinedweb | 2,280 | 68.16 |
Syncs files between Amazon S3 and FTP servers
An AWS Lambda function that syncs files between Amazon S3 and external FTP servers. For S3 => SFTP, it will automatically sync when objects are uploaded to S3. For SFTP => S3, it will poll the SFTP server at a given interval and copy to S3. It will maintain the origin directory structure when copying to the destination.
After pulling files from the SFTP server, they will be moved to a '.done' subdirectory in the same directory. This prevents us from copying the same files over and over again. It also allows easy re-sending (by copying the file back into the original directory); the consequence is that files in the '.done' subdirectory will be ignored.
For more details, see the sections below. But here is the recommended list of steps to deploy thie bridge.
Config should be stored in a .json file in S3 according to. The configuration is a map of streamName to configuration for that stream:
{ "stream1": { "s3Location": "your-bucket-name/destination/directory", "sftpConfig": { "host": "hostname", "port": 2222, "s3PrivateKey": "another-bucket-name/path/to/private_key", "username": "user" }, "sftpLocation": "my-directory" }, "stream2": { "s3Location": "your-other-bucket-name/destination/directory", "sftpConfig": { "host": "hostname", "username": "user", "password": "pwd" }, "sftpLocation": "my-directory" } }
The directory (can be nested) on the SFTP side to either a) look for new files to copy to S3 or b) drop into when copying from S3.
The S3 location where the files should be copied. Can include a subdirectory after the bucket name. Valid formats: bucket-name bucket-name/sub-directory bucket-name/sub/directory
A JSON object that contains any of the connection options listed here:.
This can also optionally include an "s3PrivateKey" property, which should be a S3 bucket/object-key path that contains the SSH private key to use in the connection. If used, this should be encrypted and uploaded according to.
There are a few things that you will need to (optionally) configure outside of what the provided CloudFormation template sets up for you. These are explicitly excluded from the CF template because they are very dependent on your own specific requirements and would be difficult to generalize into the template.
If you lock down your SFTP server (and you should) by whitelisting client IP addresses, you will need to take a few extra steps to ensure a consistent outgoing IP address. If you run the Bridge Lambda function outside of a VPC, you will get a random outgoing IP address assigned by AWS. It may look like they use the same one every time, but there are no guarantees. To explicitly assign an outgoing IP address, do the following:
If you do #1-4 ahead of time, you can use the s3-sftp-bridge-deploy-to-vpc.template to automatically add the Bridge function to the VPC (#5 above).
If you're client-side encrypting either the Bridge config or any private keys (see), the Bridge Lambda function will need access to any applicable KMS keys. You can find the Role name in the CF stack outputs.
Two events are necessary to trigger this bridge to sync between the two systems, as detailed below.
Any origin S3 buckets/locations should be set up to trigger the bridge Lambda function on the putObject event, with all requisite permissions. The included CloudFormation template will set up a fresh S3 bucket given as a stack property. But any additional S3 buckets + notifications will need to be setup manually.
The included Lambda function will need to poll the SFTP server using a scheduled event in AWS Lambda. When scheduling the event (via CloudWatch Events), include in the "name" field a period-delimited (".") list of streamNames that match streamNames in your config file. There can be multiple streamNames in the same event, and multiple events polling the Bridge function.
The Lambda scheduled event system allows you to schedule the event at whatever interval is appropriate for your setup. See for details.
The CloudFormation template automatically sets up two metrics under a namespace that matches the stack name:
These can be used to ensure that files are being moved at the rate that is expected for your system; you should set up an alarm to watch for when the transfer counts fall below a reasonable threshold. These metrics rely on log entries that look like this:
[stream-name]: Moved 1 files from S3 to SFTP [stream-name]: Moved 1 files from SFTP to S3
The metric filters look like this:
[timestamp, requestId, streamName, colon, moved = Moved, numFiles, files = files, from, orig = SFTP, to, dest = S3]
and
[timestamp, requestId, streamName, colon, moved = Moved, numFiles, files = files, from, orig = S3, to, dest = SFTP]
Use $numFiles as the metric value, then alert on both the count (to ensure the bridge is running) and sum (to ensure the bridge is moving files at the rate you expect). You can alert on individual streams by editing the filter above to only match when streamName matches your stream name.
The CloudFormation templates create a few metrics (under the s3-sftp-bridge namespace) that can be used for error alerting. You will need to manually set up the alarms to meet your monitoring expectations. The metrics are:
After making changes, please do the following:
rm -rf node_modules npm install --production zip -r s3-sftp-bridge.zip . -x *.git* -x *s3-sftp-bridge.zip* -x cloud_formation/\* -x *aws-sdk*
Unfortunately we can't use the Github .zip file directly, because it zips the code into a subdirectory named after the repo; AWS Lambda then can't find the .js file containing the helper functions because it is not on the top-level.
Licensed under the Apache License, Version 2.0: | https://xscode.com/gilt/s3-sftp-bridge | CC-MAIN-2022-05 | refinedweb | 950 | 61.36 |
Lately I’ve been getting back to basics with regards to recursion. This is a basic and essential skill, especially in the functional programming world. Today’s dive will be into immutable lists and recursion. I’ll do my best to provide the F# and C# equivalent to each call. This is in part of the back to basics and from here I’ll move onto other subjects.
Let’s catch up to where we are today:
Recursive List Processing
In the functional programming world, and especially in the functional programming world are singly linked immutable lists. Each of the items in the list is a cell with a reference to the next item in the list. For functional programming, unlike imperative programming, they are optimized for recursion. By that I mean that they have a head (the first item) and the tail (the rest of the list). F# has the use of this through the List<‘a> class.
Let’s walk through a simple example of how you might sum up all items in a list using recursion as well as one for calcuating the length of the list. In F# this is pretty simple, using pattern matching.
F#
#light
module ListExtensions =
let rec sum = function
| [] -> 0
| h::t -> h + sum t
let rec length = function
| [] -> 0
| h::t -> 1 + length t
[1..20] |> ListExtensions.sum |> print_any
[1..10] |> ListExtensions.length |> print_any
This function above had a simple pattern matching over the list which said for an empty list, the result would be zero, else it would be the sum of the head and the calculation of the rest of the list and eventually winds its way down to the end of the list. Likewise inthe length, it adds 1 to the calculation of the length function until it winds down to nothing left in the list. In the base .NET libraries, we don’t have a list like this, but it’s actually not that hard to create one. Let’s walk through a simple immutable linked list much like the List<‘a> in F#.
Let’s define in C# what something like that might look like. I’ll start with something the way Wes Dyer had it but with a few changes:
public interface IImmutableList : IEnumerable
{
T Head { get; }
IImmutableList Tail { get; }
bool IsEmpty { get; }
bool IsCons { get; }
}
As you can see, we have a head, a tail and a way to determine if this list is empty or if it is a cons. All very important pieces to the puzzle. This also inherits the IEnumerable<T> interface as well so that we can iterate this should we need to. Now, let’s go into the implementation details of the immutable list.
public class ImmutableList : IImmutableList
{
public class EmptyList : ImmutableList { }
public class ConsList : ImmutableList
{
internal ConsList(T head, IEnumerator enumerator)
{
Head = head;
this.enumerator = enumerator;
}
}
private static readonly IImmutableList empty = new EmptyList();
IImmutableList tail;
IEnumerator enumerator;
public T Head { get; private set; }
public static IImmutableList Cons(T head, IImmutableList tail)
{
return new ConsList(head, tail.GetEnumerator());
}
public static IImmutableList Cons(IEnumerator enumerator)
{
return enumerator.MoveNext() ? new ConsList(enumerator.Current, enumerator) : empty;
}
public IImmutableList Tail
{
get
{
if (enumerator != null)
{
tail = Cons(enumerator);
enumerator = null;
}
return tail;
}
}
public bool IsCons { get { return this is ConsList; } }
public bool IsEmpty { get { return this is EmptyList; } }
public static IImmutableList Empty { get { return empty; } }
IEnumerator IEnumerable.GetEnumerator()
{
return ((IEnumerable) this).GetEnumerator();
}
public IEnumerator GetEnumerator()
{
for (IImmutableList current = this; !current.IsEmpty; current = current.Tail)
yield return current.Head;
}
}
Now what I have done here is implemented two internal lists to represent a “cons” and one to represent an empty list. This helps when determining whether the list is empty or not without having to check references and whether something is null. Inside my ConsList<T>, I’m able to create a new instance of the ImmutableList<T> with the head value and the rest of the list.
The head of the list is just simply that, the first item. When the tail property is called, I create a new IImmutableList<T> with my enumerator by calling the Cons method. I also have properties which define and empty list, whether it is empty or whether it is a “cons”. Like I said, there’s nothing hard about this.
Then I’m able to define a Sum and Length methods much as above with something like this:
public static int Sum(this IImmutableList list)
{
if (list.IsEmpty)
return 0;
return list.Head + Sum(list.Tail);
}
public static int Length(this IImmutableList list)
{
if (list.IsEmpty)
return 0;
return 1 + Length(list.Tail);
}
And then invoking this is pretty straight forward through the main method:
static void Main(string[] args)
{
var enumerator = Enumerable.Range(1, 10).GetEnumerator();
var list = ImmutableList.Cons(enumerator);
var sum = list.Sum();
var length = list.Length();
Console.WriteLine("List sum : {0}", sum);
Console.WriteLine("List length :{0}", length);
}
But what about tail recursion when doing list processing? Let’s walk through one more example of tail recursion with lists. In this example, I’m going to go ahead and return the last item in the list. It’s a pretty simple and straight forward function which determines whether the list only has a head, or if it also has a tail, then recurse on the function again.
F#
#light
module ListExtensions =
let rec last = function
| [] -> invalid_arg "last"
| [h] -> h
| h::t -> last t
[1;5;3;2;6;] |> ListExtensions.last |> print_any
Now the C# version should look just as familiar to you. Basically instead of the pattern matching, which I’d love to have in C#, I’m using if statements to determine whether I’m returning the head, or the evaluation of the Last function again until I wind down to the head.
C#
public static T Last(this ImmutableList items)
{
if(items.IsEmpty)
throw new ArgumentNullException("items");
var tail = items.Tail;
return tail.IsEmpty ? items.Head : tail.Last();
}
var e1 = new List {1,5,3,2,6}.GetEnumerator();
var items = ImmutableList.Cons(e1);
var last = items.Last();
Console.WriteLine("Last item: {0}", last);
In the past I covered why tail recursion is important in my previous post. Unfortunately, the C# code won’t do much for you uniless the compiler is optimized for tail calls, or you are using the x64 version of Windows. I have some of these samples in my Functional C# samples on MSDN Code Gallery.
Wrapping it Up
As you can see, recursion is a pretty interesting topic. One of the best ways to avoid mutable state in your programs is through recursion. I’ve shown some simple examples of where recursion can help to solve some of those issues. This will wrap up a lot of the discussions about recursion and then I’m moving into such topics as memoization, continuations and so on. The code samples will be available on the Functional C# Samples during this back to basics trip. | http://codebetter.com/matthewpodwysocki/2008/07/16/recursing-into-list-processing/ | CC-MAIN-2021-43 | refinedweb | 1,155 | 64.61 |
Swipe right to delete one number
- ricky_itly
This post is deleted!last edited by omz
@ricky_itly , would be easier to try and help if you posted code that runs. I am guessing you just didn't post the if name ... blah blah (main)code.
Also in your example you don't do -
from gestures import Gestures
I assume its working for you because gestures been imported already from another script.
Also, there is no mention or link to the gestures repo. Many uses may think that gestures is a built in module if they have not read about it here.
Please understand, I am not trying to be critical. Just trying to help. I am not so good, but if your code example would have run, then it would have been a lot easier for me to see if i could have helped
- ricky_itly
Yeah sorry, I forgot to post the UI with all the buttons and labels.
I’m trying to post it
@ricky_itly , no problems. One way to post your pyui file is to do -
- Change the extension of your pyui file to json
- You will see the json text of the pyui file.
- Copy it and paste into your code, triple quote it and assign it a var name.
- Then you can use the load_view_str method
You can just change back the ext of the file to pyui and you will see your form again.
If you run the below, it should work. I just did the same as the steps I said above.
Example.
import ui myview = '''\ [ { "nodes" : [ { "nodes" : [ { "nodes" : [ ], "frame" : "{{0, 0}, {124, 32}}", "class" : "Label", "attributes" : { "font_size" : 18, "frame" : "{{-13, 53}, {150, 32}}", "uuid" : "C50766EC-F775-4342-AAF6-3736A0488175", "class" : "Label", "alignment" : "center", "text" : "weeks", "name" : "label1", "font_name" : "<System>" }, "selected" : false }, { "nodes" : [ ], "frame" : "{{0, 33}, {124, 104}}", "class" : "DatePicker", "attributes" : { "mode" : 0, "frame" : "{{-98, -40}, {320, 216}}", "class" : "DatePicker", "name" : "datepicker1", "uuid" : "3BED2E56-1F0D-4AD5-AAF0-B779BC766DC7" }, "selected" : false } ], "frame" : "{{6, 6}, {124, 137}}", "class" : "View", "attributes" : { "frame" : "{{190, 110}, {100, 100}}", "class" : "View", "name" : "view1", "uuid" : "907D0D1A-C7F8-4E23-BD71-671383E6C239" }, "selected" : false } ], "frame" : "{{0, 0}, {480, 320}}", "class" : "View", "attributes" : { "enabled" : true, "tint_color" : "RGBA(0.000000,0.478000,1.000000,1.000000)", "border_color" : "RGBA(0.000000,0.000000,0.000000,1.000000)", "background_color" : "RGBA(1.000000,1.000000,1.000000,1.000000)", "flex" : "" }, "selected" : false } ] ''' if __name__ == '__main__': f = (0, 0, 300, 400) v = ui.load_view_str(myview) v.present(style='sheet', animated=False)
I hope this works out. My first attempt to make a video and post it. But the video is doing the steps above.
Video | https://forum.omz-software.com/topic/4635/swipe-right-to-delete-one-number/1 | CC-MAIN-2022-27 | refinedweb | 435 | 79.7 |
ThoughtWorks Interview QuestionSoftware Engineer / Developers
series is
2 2 4 11
difference series is
0 2 7
Again take difference series
2 5
Again
3
so we have 3 now.
add 3 to its above series => 2 5 8
now add this 8 to last term of its above series => 0 2 7 15
Now add this 15 to last term of its above series =>2 2 4 11 26
This is what I mean :)
I think since the series given was short, the purpose of the question was to rather see how many solutions a candidate can come up with, than getting the exact solution they were looking for. It might be a trick question -- just saying.
However, here's how I would do it:
2,2,4,11 -- the difference between consecutive terms is: 0,2,7... which can be looked at as:
2^0 -1 = 0
2^1 -0 = 2
2^3 -1 = 7
so the next difference would be:
2^4 -0 = 16
which gives the next term as 11+16 = 27.
I've never played around with the base x fibonacci, neither heard of it before... but whoever thought about it... RESPECT! ;)
2,2,4,11 the diff bt conseccutive terms is 0,2,7
0^1+1=2
2^2+3=7
7^3+5=348
therfore the next number in the series is 359
2,2,4,7,11,359
pehle toh yeh batao..jayda dimag aa gaya hai kya
kuh bhi bana rahe hooo aur hum man le..
Numbers : 2,2,4,11
Difference : 0,2,7
sequence :
0 = 0^0
2 = 1^1 + 1^0
7 = 2^2 + 2^1 + 2^0
Next number in sequence of differences : 3^3 + 3^2 + 3^1 + 3^0 = 40
Next Number in the original Sequence = 51 (40 + 11)
if we take 7 as a base then solution will be
a1=2;
a2=2;
a3=a1+a2;//a3=2+2=4
a4=a1+a2+a3;//a4=2+2+4=8(in decimal)
/*
8=11(in base 7)
*/
a5=a1+a2+a3+a4//a5=2+2+4+11=19(in decimal)
/*
19=25(in base 7)
*/
hence ans may be 25
2, 2, 4, 11
First set of Differences = 0,2,7
Second set of Differences (of the first difference set) = 2,4,.... ie 2*1 = 2, 2*2 = 4, or 2^1 = 2, 2^2 = 4
so assuming the next difference is either of the following -
2*3 = 6, OR 2^3=8
The next number in the Second set of differences would either be 6 or 8
ie 2,4,6 or 2,4,8
Then the next number in the First set of differences would either be -
7+6= 13 OR 7+8 = 15
Last number of original series could either be -
11+13 = 24, or 11+15=26
And finally the series itself might be -
2,2,4,11,24
OR
2,2,4,11,26
g(n) = 2 for n=0;
g(n) = g(n-1) + f(n-1) for n>0
f(n) = n * n + (Sum of i where i runs from 0 to n)
g(0)=2
g(1) = g(1-1) + f(1-1)
= g(0) + f(0)
= 2 + ((0*0)+(0))
= 2
g(2) = g(2-1) + f(2-1)
= g(1) + f(1)
= 2 + ((1*1)+(1+0))
= 2 + (1 + 1)
= 4
g(3) = g(3-1) + f(3-1)
= g(2) + f(2)
= 4 + ((2*2)+(2+1+0))
= 4 + (4 + 3)
= 11
g(4) = g(4-1) + f(4-1)
= g(3) + f(3)
= 11 + ((3*3)+(3+2+1+0))
= 11 + (9 + 6)
= 26
1. Sort the given numbers in ascending order (Say as 2 , 4 ,6 7)
2. Represent them in binary (0010,0100,0110,0111)
3. Represent all the once in the set as set of number (say 2,3,{2,3},{1,2,3})
4. Take the once from the first set and remove them from all sets and make a new set
a. Take out 2 so new set is {3,{3},{1,3}}
b. Now Take out 3 so new set is {1}
c. Now take out 1 and it is done
So the answer is {0010,0100,0001} i.e 2,4,1
public class TestSeries {
public static void main(String[] args) {
seriesTest1();
}
public static void seriesTest1(){
int arr[]={2,2,4,11,26};
for(int i=0; i<arr.length; i++) {
int j = i * i + getFact(i);
System.out.println(arr[i]+j+"->"+i+"->"+j);
}
}
public static int getFact(int factInput) {
int response = 0;
for(int i=factInput;i>=1; i--) {
response += i;
}
return response;
}
}
There is another way to do it. Answer is 61,
Explanation :
Given number series, 2,2,4,11
Using factorial, leave the first number from the series. and add other numbers in the series with the previous given number.
Example, factorial of 2 is 1,2
Now Leave the the 1st number and add with the previous number. Previous number empty. So,
(1) 2 + 0 (previous number) = 2
(1) 2 + 2 (previous number) = 4
(1) 2 + 3 + 4 + 2 (previous number) = 11
(1) 2 + 3 +... +11 + 4 (previous number) = 61
take 5 ary system it consists digits 0,1,2,3,4
(for example binary system contains 0,1 and decimal system contains 0 to 9)
lets take fibonacci series with a1=2 and a2=2
then a3= 2+2=4
a4= 2+4= which is six but 5 ary system doesn't contain six as a digit
how can six can be represented using 5 ary system??
11 = 5*1 + 1*1
2,2,4,11....- Abhay July 22, 2011
difference between the n and n-1...follows the sequence 0,2,7...
now 0 can be made as 0*0=0,similar 2 can be made as 1*1+1=2 and 7 can be made as 2*2+2+1=7,following the sequence we get the next difference as 15(3*3+3+2+1)..therefore the next number is 26 in the series. | https://careercup.com/question?id=4658727 | CC-MAIN-2019-26 | refinedweb | 1,018 | 72.19 |
> There are a few pieces of the old XLink drafts that help explain current
> arguments, most of which were (I think) put aside in a mad rush to
> namespaces as the solution to all such problems.
>
> Unfortunately, the publicly available requirements and goals for XLink
> all appear to be post-namespaces.
Thanks for the history. It's a fun read. I was hoping that it would shed
some light on the technical problems the XHTML folks encountered in trying to
use XLink. IOW, they don't really help explain current arguments to me. And
why in particular do you think namespaces are a problem in XLink? The only
point I've heard from the XHTML folks so far wrt XMLNS are that the XLink
namespace is extra to type. Surely this isn't what you mean?
--
-----------------------------------------------------------------
The xml-dev list is sponsored by XML.org <> , an
initiative of OASIS <>
The list archives are at
To subscribe or unsubscribe from this list use the subscription
manager: <> | http://aspn.activestate.com/ASPN/Mail/Message/xml-dev/1316145 | crawl-001 | refinedweb | 165 | 73.78 |
Changes to Qt XML XML, and provide guidance to handle them.
Simple API for XML (SAX) parser
All SAX classes have been removed from Qt XML. Use QXmlStreamReader for reading XML files. Here are some simple steps to port your current code to QXmlStreamReader:
For example, if you have code like
QFile *file = new QFile(...); QXmlInputSource *source = new QXmlInputSource(file); Handler *handler = new Handler; QXmlSimpleReader xmlReader; xmlReader.setErrorHandler(handler); xmlReader.setContentHandler(handler); if (xmlReader.parse(source)) { ... // do processing } else { ... // do error handling }
you can rewrite it as
QFile file = ...; QXmlStreamReader reader(&file); while (!reader.atEnd()) { reader.readNext(); ... // do processing } if (reader.hasError()) { ... // do error handling }
QDom and QDomDocument
As SAX classes have been removed from Qt XML, QDomDocument has been re-implemented using QXmlStreamReader. This causes a few behavioral changes:
- Attribute values will be normalized. For example,
<tag attr=" a \n b " />is equivalent to
<tag attr="a b"/>.
- Identical qualified attribute names are no longer allowed. This means attributes of an element must have unique names.
- Undeclared namespace prefixes are no longer allowed.
If you use QDomDocument and rely on any of these, you must update your code and XML documents accordingly.
Qt Core5 compatibility library
If your application or library cannot be ported right now, the QXmlSimpleReader and related classes still exist in Qt5Compat to keep old code-bases working. If you want to use those SAX classes further, you need to link against the new Qt5Compat module and add this line to your qmake
.pro file:
QT += core5compat
In case you already ported your application or library to the cmake build system, add the following to your
CMakeList.t. | https://doc.qt.io/archives/qt-6.0/xml-changes-qt6.html | CC-MAIN-2021-25 | refinedweb | 272 | 58.38 |
Previous invalidates all hashes
computed before that patch applies, which could be an issue for large build
system that pre-compute the profile data and let client download them as part of
the build process.
Update test case
This patch is correct. A clarification of the description:
Previous implementation was incorrectly passing an integer, that got converted to a pointer, to finalize the hash computation.
Working (an uint64_t) was truncated to an uint8_t, converted to a one-element ArrayRef<uint8_t>, then passed to MD5.update
Can the using be deleted?
Updated summary + removed namespace.
Nice fix, and nice test :-)
May I ask how you found this?
This was a long (and painful) quest when tracking
Up?
lgtm
There's a few test failures related to this change on the PS4 bot. Could you take a look?
@bkramer rightfully reverted this, faster than I could patch it :-) Here is the patch though, with the updated test case. @hans the only meaningful change is that I'm keeping the buggy behavior for hash v1 for compatibility reason.
Update profile data hash entries due to hash function update, unless the version
used is V1, in which case we keep the buggy behavior for backward compatibility.
Thanks!
@hans/ @MaskRay I'm starting to wonder if I should bump the version number of the hash function, so that clang could still read profile data generated before that patch?
Bump the version number to be compatible with existing profdata, in a similar fashion to v1/v2 transition.
In D79961#2053806, @serge-sans-paille wrote:
Bump the version number to be compatible with existing profdata, in a similar fashion to v1/v2 transition.
Did you forget to include some files? I don't see the bump anywhere.
Maybe explicitly show the conversion to array of uint8_t here, to make it more clear what's going on.
With v3 version + Make cast explicit
@hans updated!
I worry that the hash version bump isn't complete. Doesn't the hash version used need to be read/written with the profile data file somewhere?
I guess this needs an update now?
Should this be >= PGO_HASH_V2 now? And similarly below?
Maybe just "HashVersion < PGO_HASH_V3" would be simpler?
Update version bump parts
It looks like clang/test/Profile/Inputs/c-general.profdata.v5 is being read as v6 rather than v5. Can you double check?
In D79961#2065605, @alanphipps wrote:
It looks like clang/test/Profile/Inputs/c-general.profdata.v5 is being read as v6 rather than v5. Can you double check?
Yep, I'll have a look.
Thanks for spotting the issue. Should be fixed by 63489c39deeffb24a085b3766c5d5ff76a52fa2f | http://reviews.llvm.org/D79961 | CC-MAIN-2020-40 | refinedweb | 437 | 68.26 |
oneshia Allen5,192 Points
I passed the first 2 task of the challenge. I was struggling with the second task and now the 3rd one is not passing.
need some help
from flask.ext.bcrypt import generate_password_hash from flask.ext.bcrypt import check_password_hash #from flask.ext.bcrypt import set_password #from flask.ext.bcrypt import validate_password def set_password(user, password): generate_password_hash(password) user.password = generate_password_hash(password) return user def validate_password(user.password, password): if user.password == check_password_hash(user.password, password): return True else: return False
1 Answer
Megan AmendolaTreehouse Teacher
Hi! In your validate function, what you have right now will always return False.
check_password_hash(user.password, password) already returns either True or False because it is checking the user's password against the input password to make sure they match.
if user.password == check_password_hash(user.password, password): Next, you are checking the password (which is a string) against the result of the function (which is a boolean). This will always result to False.
Since
check_password_hash(user.password, password) already returns either True or False, you can just return that value.
def validate_password(user.password, password): return check_password_hash(user.password, password)
Megan AmendolaTreehouse Teacher
Oops, I see what it is. I copied your code and you have the function taking
user.password when it should only be getting
user
def validate_password(user, password): return check_password_hash(user.password, password)
I missed it the first pass :) Now that should work
Roneshia Allen5,192 Points
Thank you Megan. I really appreciate you breaking this down for me, this definitely helped with understanding password_hashing.
Megan AmendolaTreehouse Teacher
Glad I could help!
Roneshia Allen5,192 Points
Roneshia Allen5,192 Points
Hey Megan, thank you for reaching out. so I tried the code and its giving me this error, "SyntaxError: invalid syntax". | https://teamtreehouse.com/community/i-passed-the-first-2-task-of-the-challenge-i-was-struggling-with-the-second-task-and-now-the-3rd-one-is-not-passing | CC-MAIN-2022-40 | refinedweb | 295 | 52.05 |
Thread support was added to swig/python, based in the Joseph's proposal.
Please test your interfaces using
swig -threads -python ....
and send us a mail if you find a problem, or have suggestions, comments,
etc.
Ah, you will need python 2.3 or later, see CHANGES.current for more
details.
If you need thread support for older python versions, and know how to
safely use
threads with them, please check the pythreads.swg file and send us an
implementation
patch.
Marcelo
Hi,
I'm having a little bit of a problem with one of my functions.
I have a function which takes two object references, one of them a
const. What swig gives me for this are three jlong args.
*long Stuff::connect(const Foo &obj1, Bar &obj2)*
gives me
*JNIEXPORT jint JNICALL Java_com_StuffJNI_connect(JNIEnv *jenv, jclass
obj, jlong jarg1, jlong jarg2, jlong jarg3)
*Does anyone know why swig would be throwing back three args instead of
two, and why the types are so off?
Thanks*
*
Searching the net I found some old posts (had to use Google cache) about
std_list for c#. I have managed to convert std_vector to use std::list,
and got it to work.
Only had a problem with std_vector, and had to make a change. It uses
=93const CTYPE&=94 in a few places, I had to change that to just =93CTYPE=
=94. The
const & is giving me problems when I have a vector of pointers, like
std::vector<cVal*>. I want to know if this is a bug or maybe a problem
with my pointer typemape. But the changes to std_vector and my std_list
work.
Here is the typemape:
//pointer reference typemaps:
%define PTR_REF_TYPEMAPS(CSTYPE, CTYPE)
#if defined(SWIGCSHARP)
%typemap(ctype) CTYPE *, CTYPE & "void *"
%typemap(imtype) CTYPE *, CTYPE & "IntPtr"
%typemap(cstype) CTYPE *, CTYPE & "CSTYPE"
%typemap(csin) CTYPE *, CTYPE & "CSTYPE.getCPtr($csinput)"
%typemap(csout) CTYPE *, CTYPE & {
IntPtr cPtr =3D $imcall;
return (cPtr =3D=3D IntPtr.Zero) ? null : new CSTYPE(cPtr, $owner);
}
%typemap(in) CTYPE *, CTYPE & %{ $1 =3D (CTYPE *)$input; %}
%typemap(out) CTYPE *, CTYPE & %{ $result =3D (void *)$1; %}
#endif
%enddef
Defining the template:
PTR_REF_TYPEMAPS(cVal, eTestLib::cVal*)
SWIG_STD_VECTOR_SPECIALIZE(cVal, eTestLib::cVal*)
%template(VectorcVal) std::vector< eTestLib::cVal* >;
What I=92m Swigging:
namespace eTestLib {
class _LibExport cVal {
protected:
int mMyVal;
public:
cVal(int n);
int getMyVal();
void setMyVal(int newval);
};
typedef std::list<cVal*> cValList;
typedef std::vector<cVal*> cValVec;
class _LibExport cTest {
protected:
cValList mList;
cValVec mVec;
public:
cTest();
~cTest();
void addListItem (int v);
int getListItem(int index);
cValList& getListList();
void addVectorItem (int v);
int getVectorItem(int index);
cValVec& getVectorVector();
};
}
And the Compile errors with unmodified std_vector.i :
c:\ericTemp\ogre\ogreaddons\eric\swig\test2\Swigdll2\test_wrap.cxx(389) :
error C2440: '=3D' : cannot convert from 'const eTestLib::cVal *' to
'std::allocator<_Ty>::value_type'
with
[
_Ty=3DeTestLib::cVal *
]
Conversion loses qualifiers
c:\ericTemp\ogre\ogreaddons\eric\swig\test2\Swigdll2\test_wrap.cxx(407) :
error C2664: 'std::vector<_Ty>::iterator
std::vector<_Ty>::insert(std::vector<_Ty>::iterator(435) :
error C2664: 'std::vector<_Ty>::vector(std::vector<_Ty>::size_type(999) :
error C2664: 'std::vector<_Ty>::push_back' : cannot convert parameter 1
from 'const eTestLib::cVal *' to 'eTestLib::cVal *const & '
with
[
_Ty=3DeTestLib::cVal *
]
Conversion loses qualifiers
Hi
I have the following C source
#include<stdio.h>
struct Test {
int (* op1)(int,int);
int (* op2)(int,int);
int (* op3)(int,int);
int (* op4)(int,int);
};
int add(int a,int b) {
return a+b;
}
int mul(int a,int b) {
return a*b;
}
int div(int a,int b) {
return a/b;
}
int sub(int a,int b) {
return a-b;
}
just need to know how to write a swig interface file, i am having problems
with the struct as it contains function pointers
I have written the following interface file dont know if it correct
%module ExampleTest
extern struct Test {
%extend {
int (* op1)(int,int);
int (* op2)(int,int);
int (* op3)(int,int);
int (* op4)(int,int);
%constant int (*ADD)(int,int) =3D add;
%constant int (*SUB)(int,int) =3D sub;
%constant int (*MUL)(int,int) =3D mul;
}
}Operator;
%inline %{
%constant (* ADD)(int,int) =3D add;
%constant (* MUL)(int,int) =3D mul;
%constant (* DIV)(int,int) =3D div;
%constant (* SUB)(int,int) =3D sub;
extern int add(int a,int b);
extern int mul(int a,int b);
extern int div(int a,int b);
extern int sub(int a,int b);
%}
Try this (note the % instead of the # sign):
> /* File : GivenLib.i */
> %module WrappedLib
> %{
> #include "GivenLib.h"
> %}
> #include "GivenLib.h"
%module WrappedLib
%{
#include "GivenLib.h"
%}
%include "GivenLib.h"
-Nitro
Hello list,
Up until now, I had not done in Swig things much more
complicated than /examples/python/simple so please forgive
me if the answer is obvious.
I found one relevent thread in the archives ("[Swig] Using
Swig with existing .dll and .h files") but could not work
to a solution with it.
I have been given a SDK containing a DLL, a C header file
and a .lib file that seems to be designed for MS VisualC.
Not being close to fluent in C, I would like to have Swig
generate a wrapper for me in order to access the library
from Python.
Here is what I do (and what fails), could someone please
tell me what is wrong and possibly point me to the Right
Way(tm) to do it ?
The interface file I have created is simply the following
:
/* File : GivenLib.i */
%module WrappedLib
%{
#include "GivenLib.h"
%}
#include "GivenLib.h"
Then I run "swig -python GivenLib.i"
Since I do not have access to MS VisualC, I am using Mingw
to compile the following :
"gcc -c GivenLib_wrap.c -Ic:\python24\include"
and then :
"gcc -shared GivenLib_wrap.o -o_WrappedLib.pyd
-Lc:\python24\libs -lpython24 GivenLib.a"
(note : the GivenLib.a itself was generated by "dlltool
--input-def GivenLib.def --dllname GivenLib.DLL
--output-lib GivenLib.a ", with GivenLib.def being
obtained by running pexports on GivenLib)
Then the final step in Python :
>>import WrappedLib
>>dir(WrappedLib)
['_WrappedLib', '__builtins__', '__doc__', '__file__',
'__name__', '_newclass', '_object', '_swig_getattr',
'_swig_setattr', '_swig_setattr_nondynamic']
The library import does not fail but none of the functions
described in the library header appear in resulting
module.
What did I do wrong ? I assume that just feeding a simple
library header to Swig and compiling the wrapper may not
be enough but I had no other idea of what to do.
Any help will be hugely appreciated
Regards,
Fabrice Capiez
--------------------------------------
STOP HIV/AIDS.
Yahoo! JAPAN Redribbon Campaign 2005
Hi Marcelo,
Thank you for your information.
pujo
On 12/5/05, Marcelo Matus <mmatus@...> wrote:
>
> And I forgot, if you are worry about the speed, the CVS swig version
> now produces code up to 20 times faster than before. But you will see
> that in the next 1.3.28 version.
>
> Marcelo
>
>
>
> Pujo Aji wrote:
>
> > Thanks Marcelo,
> >
> > You're right, because looping in python is frustrating. Now it is fast.
> > I have a few problems:
> > my compiler doesn't allow me to define type of variable like this:
> > for (int i =3D 0; i<10; i++){
> > //mycode
> > }
> >
> > I should use :
> > int i =3D 0;
> > for (i =3D 0; i<10; i++){
> > //mycode
> > }
> >
> > Is that standard?
> >
> >
> >
> > Thanks,
> > pujo
> >
> >
> >
> > On 12/5/05, *Marcelo Matus* <mmatus@...
> > <mailto:mmatus@...>> wrote:
> >
> > You are not just comparing the swig/C++, you have a huge for loop
> > in there.
> >
> >
> > Try the other way, create a function in C++ with the loop inside
> > and the call it from python.
> >
> > also,look at the profiletest_runme.py in the
> > SWIG/Examples/test-suite/python for
> > ideas of comparing using loops.
> >
> > Marcelo
> >
> >
> > Pujo Aji wrote:
> >
> > > Hello All, Marcelo,
> > >
> > >
> > > I manage to use visual studio express C++ (.Net Framework 2)
> > compiler
> > > with swig and python.
> > > By tweaking pyConfig, installing SDK Platform etc.
> > > I can run the runme.py in "simple" example.
> > >
> > > I have 2 questions :
> > > 1. Concerning compiling simple.c and simple_wrap.c:
> > > The last thing that I need is simple.py and _simple.dll is
> this
> > > correct ?
> > > Because in the tutorial it mentioned simple.o and simple.so
> > which
> > > I can't find these files in my computer (win xp)
> > >
> > > 2. Concerning swig speed.
> > > I found something very interesting.
> > > I create a function in C++ and use swig to wrap it.
> > > After getting simple.dll and simple.py. I can use it in my main
> > python
> > > program (runme.py).
> > > I compare the speed between : pure C++, C++wrap with swig, pure
> > > python, and python with psyco.
> > > This is the result:
> > >
> > > *pure C++ : 1 second
> > >
> > > python without psyco:
> > > C++wrap with swig: 13.76 second
> > > pure python: 37.19 second
> > >
> > > python with psyco:
> > > C++ wrap with swig: 14.55 second
> > > pure python: 15 second
> > >
> > > Strange, I thought at the first time:
> > > 1. with psyco the speed of C++ wrap with swig is almost the same
> > speed.
> > > 2. pure C++ is much much faster than C++ wrap with swig and used
> in
> > > python ???
> > >
> > > Note :
> > > * for pure C++, I can't test until the unit in millisecond, but I
> > > minimize this problem by using problem that make pure python and
> > > C++wrap with swig produce a couple of seconds.
> > >
> > > Looking forward to your responds
> > > Sincerely Yours,
> > > pujo
> > > .
> > >
> >
> >------------------------------------------------------------------------
> >
> > >
> > ># file: example.py
> > >
> > >import example
> > >import time
> > >import psyco
> > >psyco.full()
> > >
> > >def fact(n):
> > > f =3D 1
> > > while( n>1):
> > > f *=3D n
> > > n -=3D 1
> > > return f
> > >
> > >## Call our gcd() function
> > >#
> > >#x =3D 42
> > >#y =3D 105
> > >#g =3D example.gcd(x,y)
> > >#print "The gcd of %d and %d is %d" % (x,y,g)
> > >#
> > >## Manipulate the Foo global variable
> > >#
> > >## Output its current value
> > >#print "Foo =3D ", example.cvar.Foo
> > >#
> > >## Change its value
> > >#example.cvar.Foo =3D 3.1415926
> > >#
> > >## See if the change took effect
> > >#print "Foo =3D ", example.cvar.Foo
> > >
> > >start =3D time.time()
> > >sum =3D 0
> > >for i in range(10000000):
> > > sum+=3Dexample.fact(10)
> > >stop =3D time.time()
> > >print sum
> > >print 'wrap c++ time,', stop-start
> > >
> > >
> > >start =3D time.time()
> > >sum =3D 0
> > >for i in range(10000000):
> > > sum+=3Dfact(10)
> > >stop =3D time.time()
> > >print 'python time,', stop-start
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
>
>
Hi,
I have just started using Swig version 1.3.25 with C++ and Java. I am having
great success with the Java Directors feature, but have run into a problem
trying to make a down call from a Java derived class to a C++ base class
virtual function. For example:
// C++ base class
class Hello
{
public:
Hello();
~Hello();
virtual void print() {std::cout << "C++ Hello::print" << std::endl;}
};
// Java derived class
class DerivedHello extends Hello {
public DerivedHello() {}
public void print() {
System.out.println("In Java, print invoked. Now calling C++ base class
print");
super.print();
System.out.println("In Java, returned from downcall.");
}
}
The call to super.print() makes a call to the Swig Director C++ class which
in turn calls the Java DerviedHello class' print method that again calls the
super.print() method causing an infinite call stack. In this example, I
actually want the base class function to do some work. Is there a work
around for this type of case or did I miss something in the documentation?
Is there a way to set the C++ director's class swig_override[0] value to
false in these types of cases?
Thanks in advance.
Scott Lathrop
slathrop@...
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=200512&viewday=6 | CC-MAIN-2017-04 | refinedweb | 1,902 | 66.33 |
Lets say, here are 3 developers, each has been very familiar to one of the 3, now who is in the best position to write Metro-style apps/games for Win 8 ?
C++ dev:
1, need to learn a new 'language extension', and use a bunch of weird types in Platform:: namespace. not good.
2, need to lean a new UI markup language and a whole stack of quite complex UI API. not good.
3, can leverage DirectX and a lot of super powerful and performant libraries. good.
4, need to compile for 3 architectures and upload them all to store (I guess). not so good.
5, pure native performance. good.
.NET dev:
1, very natual language projection to WinRT, almost nothing to learn here. good.
2, already familiar to XAML and the Avalon stack and used them for years. good.
3, no XNA (yet). not so good.
4, compile once and target all architectures. good.
5, performance on ARM is unknown.
JS dev:
1, no new language or markup to learn, just a library. good.
2, the idea of writing platform specific app may boggles mind. not so good.
any more thoughts ?
P.S. actually the one in the best position maybe the one with experiences in more than 1 world, right ?
Thread Closed
This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums,
or Contact Us and let us know. | http://channel9.msdn.com/Forums/Coffeehouse/DISCUSSION-NET-vs-C-vs-JS-whos-in-the-best-position-for-Metro- | CC-MAIN-2014-10 | refinedweb | 248 | 83.86 |
Hello,
I often write classes that have fields or properties that I want to be set from a single constructor via the parameters to that constructor. The 'ctorp' and 'ctorf' snippets are endlessly useful for exactly this.
However, when I use these snippets, the resulting constructor has a summary comment that is practically useless.
For example:
public class Coordinates
{
public decimal Latitude { get; }
public decimal Longitude { get; }
/// <summary>
/// Initializes a new instance of the <see cref="T:System.Object" /> class.
/// </summary>
public Coordinates(decimal latitude, decimal longitude)
{
Latitude = latitude;
Longitude = longitude;
}
}
I was hoping that by editing the 'ctor' Live Template that is provided by Resharper, that I could customize this. However, the ctorf and ctorp snippets do not seem to utilize those changes.
Is it possible to customize these snippets in some way? For example, my edited ctor template - which works as desired but only for generating the default constructor:
/// <summary>
/// Initializes a new instance of the <see cref="T:$namespace$.$classname$" /> class.
/// </summary>
public $classname$ ()
{
$END$
}
Hello @Chris Thompson,
Thank you for contacting us! Do I get you right that you wish to get rid of the commentary above the generated constructors?
Hi Andrey Simukov,
Not quite - I suppose that is one possible use case, but I believe there is already an option that controls whether these constructor generators add comments or not, I'd have to dig to find which setting that is, though.
Really what I would like is if the comments generated didn't refer to System.Object as the type when generating a constructor comments - its not very helpful/informative. My thought is that this could be handled either by updating the existing templates/suggestions to derive the
portion, or by allowing end users to customize the template that is used to generate the comments for the constructor.
Chris Thompson
ReSharper doesn't generate the comments itself. It looks like you have some 3rd-party plugin for VS or ReSharper (e.g. StyleCop) which adds the comments to make the code align with XML documentation requirements (when the comments are missing).
You managed to add the comment into the ctor template, but I'm afraid it wouldn't work with the rest (ctorp, ctorf, ctorfp) since they are added differently from the ctor, which is a template. These 3 are generated code and the are no templates to edit for them. | https://resharper-support.jetbrains.com/hc/en-us/community/posts/360010284180-Customize-the-comments-generated-with-ctorp-ctorf-snippets-C- | CC-MAIN-2021-43 | refinedweb | 397 | 51.18 |
Using C and C++ in an iOS App with Objective-C++
In my previous tutorial, I discussed how to develop Android apps in C or C++ using the Native Development Kit. In this tutorial, I’ll introduce how to use C or C++ together with Objective-C in i.
What Is Objective-C++?
Objective-C++ may sound like a new programming language, but it’s not. It’s a combination of two languages, Objective-C and C++. Apple provides Objective-C++ as a convenient mechanism for mixing Objective-C code with C++ code.
Objective-C is close to C but with object-oriented features implemented as a thin layer on top of C. It’s a
strict superset of C which makes any C code a valid Objective-C program.
Even though Swift is now the recommended language for developing iOS apps, there are still good reasons to use older languages like C, C++ and Objective-C. Despite the quick rise of Swift, Objective-C is still the dominant language on iOS because of the sheer number of existing apps and libraries already created with it.
One reason to use Objective-C is to port an existing C/C++ program written for another platform to iOS. Developing cross-platform apps using C or C++ is possible with some careful planning. Despite Swift being open source, it’s not yet fully supported other platforms. Another use case is the ability to leverage existing native code libraries that are already available. This is one of the most important reasons to still use C/C++/Objective-C for iOS apps.
Using Objective-C++
The final project for this tutorial can be found on GitHub.
Create the Project
Open Xcode and choose Create a new Xcode project.
In the template selection screen, choose Single View Application from the iOS Application tab and click Next.
In the project options screen, name the product HelloCpp. Enter your organization and organization identifier in reverse domain name style.
Because it’s not really a language, there’s no option to create an Objective-C++ project. What’s available is either Objective-C or Swift. For this project, choose Objective-C. Leave the other options as they are and click Next and choose a folder to save the project.
C++
Time to add some C++ code. If this is your first time with C++, check out this tutorial on the language. Look at the Project Navigator pane on the left. Most of the files end with either an .h or .m. Those that end with .h are header files while those with .m are Objective-C source files.
Create a C++ class that will be called from an Objective-C file. Create a new file using the File -> New -> File… menu item or press ⌘ + N. In the file template window that appears, select iOS -> Source -> C++ File and click Next.
Name the file Greeting, keep the Also create a header file box checked and click Next. Save the file inside the HelloCpp folder.
The project’s structure should now look like the following. Feel free to drag files around to improve the arrangement in the Project Navigator.
Open Greeting.hpp and add the following code between the
include <stdio.h> and
#endif /* Greeting_hpp */ lines:
#include <string> class Greeting { std::string greeting; public: Greeting(); std::string greet(); };
Define these methods in Greeting.cpp by adding the following code after the
include "Greeting.hpp" line:
Greeting::Greeting() { greeting = "Hello C++!"; } std::string Greeting::greet() { return greeting; }
This is simple code that creates a class named
Greeting with a single method named
greet() that returns a string value.
Objective-C with C++
Now that you’ve added the simple C++ Greeting class, try calling this from Objective-C. Open ViewController.m and import the Greeting.hpp header file:
#import "ViewController.h" #import "Greeting.hpp" ...
Declare a variable of type
Greeting inside the ViewController’s
@interface block.
@interface ViewController () { Greeting greeting; } @end
If the Show live issues option is enabled, an error saying
Unknown type name ‘Greeting’ will instantly appear after adding the previous line of code. To fix this issue, rename ViewController.m to ViewController.mm. This simple naming convention tells Xcode that
ViewController wants to mix Objective-C with C++. After renaming the file, the error should disappear.
Let’s make the app more interactive by adding a button. Select Main.storyboard from the Project Navigator to show the View Controller scene. Drag a button from the Object Library to the center of the view as shown below. Change the text to Tap Me!.
Create a
UIButton outlet that points to the recently created button. While still in the Main.storyboard screen, open the Assistant Editor by toggling it from the toolbar. Press the control key on the keyboard and drag a connector from the button to a line below the
greeting variable. Name this outlet helloButton.
Open ViewController.h and add the following
IBAction method between
@interface and
@end:
- (IBAction)showGreeting;
Define this method in ViewController.mm with the following code inserted after the
- (void)didReceiveMemoryWarning function and before
@end:
- (IBAction)showGreeting { NSString* newTitle = [NSString stringWithCString:greeting.greet().c_str() encoding:[NSString defaultCStringEncoding]]; [helloButton setTitle:newTitle forState:UIControlStateNormal]; }
This method calls the
greet() method defined in the C++ class
Greeting and uses the string value it returns to replace the Tap Me! button’s title. Connect the button to this action using the same control + drag technique. Go back to the Main.storyboard screen and open the Assistant Editor via the toolbar then control + drag a connector from the button to the
- (IBAction)showGreeting method’s body. The small hollow circle on the left of the
- (IBAction)showGreeting method should be activated/filled indicating that the Tap Me! button is now connected to that action.
Build and run the app. Notice that the button appears off-center on the simulator. Fix this by enabling Auto Layout constraints using the Control + drag technique. Connect the button to its container and enable both vertical and horizontal centering. Run the app again to see the improvements.
Limitations
Objective-C++ doesn’t actually merge Objective-C with C++ features. Meaning, Objective-C classes won’t have features that are available to C++ and vice versa. The following code examples illustrate these limitations:
Calling a C++ Object Using Objective-C Syntax Will Not Work
std::string greet = [greeting greet]; // error
Constructors or Destructors Cannot Be Added to an Objective-C Object
@interface ViewController () { Greeting greeting; IBOutlet UIButton *helloButton; ViewController(); // error ~ViewController(); // error } @end
The keywords
this and
self Cannot Be Used Interchangeably
std::string greet = self->greeting.greet(); // works std::string greet2 = this->greeting.greet(); // error
A C++ Class Cannot Inherit from an Objective-C Class
#include <stdio.h> #include <string> #include "ViewController.h" class Greeting: public ViewController { // error std::string greeting; public: Greeting(); std::string greet(); };
An Objective-C Class Cannot Inherit from a C++ Class
#import "Greeting.hpp" @interface Goodbye : Greeting // error @end
Exception Handling Is Also Not Fully Supported When Mixing Objective-C and C++
An exception thrown in Objective-C code cannot be caught in C++ code while an exception thrown in C++ code cannot be caught in Objective-C code.
double divide(int dividend, int divisor) { if ( divisor == 0 ) { throw "Cannot divide by zero!"; // will not be caught in Objective-C } return (dividend/divisor); }
Reusing Libraries
The ability to reuse existing C/C++ libraries is one of the most important use cases in considering native languages and it’s a straightforward process in iOS. While Android still requires a separate NDK, iOS already supports C and C++.
SDL
As with the Android NDK tutorial, we’ll use SDL in this example. The Simple DirectMedia Layer is an open source hardware abstraction library used primarily for games or anything that involves high-performance graphics. It’s written in C so can be easily included in Objective-C code.
Project Setup
First download the SDL source from the download page or clone the Mercurial repo with:
hg clone
After the download or clone has finished, create a new Single View Application project in Xcode. Name it HelloSDL with the language set to Objective-C.
You’ll be starting from scratch so in the Project Navigator, select the following highlighted files and move them to the trash:
Remove the following keys from the Info.plist file.
Add the SDL Library project to HelloSDL using the File -> Add Files to
Hello SDL… menu item. Navigate to the downloaded or cloned SDL folder, into Xcode-iOS/SDL/ and then choose SDL.xcodeproj.
Select the main HelloSDL project in the Project Navigator and under the Targets, select HelloSDL. Scroll down to the Linked Frameworks and Libraries section.
Click the + button to manually add libSDL2.a and the following frameworks:
- AudioToolbox.framework
- CoreAudio.framework
- CoreGraphics.framework
- CoreMotion.framework
- Foundation.framework
- GameController.framework
- OpenGLES.framework
- QuartzCore.framework
- UIKit.framework
Use Command+Click to multi select the frameworks. Tidy up the Project Navigator by selecting the newly added frameworks and grouping them using the New Group from Selection contextual menu item.
Add a main.c file to the project. No need to create a header file for this one so uncheck the Also create a header file box.
We’ll reuse this sample code by Holmes Futrell from the SDL/Xcode-iOS/Template/SDL iOS Application/main.c file:
/* * rectangles.c * written by Holmes Futrell * use however you want */ #include "SDL.h" #include <time.h> #define SCREEN_WIDTH 320 #define SCREEN_HEIGHT 480 int randomInt(int min, int max) { return min + rand() % (max - min + 1); } void render(SDL_Renderer *renderer) { Uint8 r, g, b; /* Clear the screen */ SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); SDL_RenderClear(renderer); /* Come up with a random rectangle */ SDL_Rect rect; rect.w = randomInt(64, 128); rect.h = randomInt(64, 128); rect.x = randomInt(0, SCREEN_WIDTH); rect.y = randomInt(0, SCREEN_HEIGHT); /* Come up with a random color */ r = randomInt(50, 255); g = randomInt(50, 255); b = randomInt(50, 255); SDL_SetRenderDrawColor(renderer, r, g, b, 255); /* Fill the rectangle in the color */ SDL_RenderFillRect(renderer, &rect); /* update screen */ SDL_RenderPresent(renderer); } int main(int argc, char *argv[]) { SDL_Window *window; SDL_Renderer *renderer; int done; SDL_Event event; /* initialize SDL */ if (SDL_Init(SDL_INIT_VIDEO) < 0) { printf("Could not initialize SDL\n"); return 1; } /* seed random number generator */ srand(time(NULL)); /* create window and renderer */ window = SDL_CreateWindow(NULL, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_OPENGL); if (!window) { printf("Could not initialize Window\n"); return 1; } renderer = SDL_CreateRenderer(window, -1, 0); if (!renderer) { printf("Could not create renderer\n"); return 1; } /* Enter render loop, waiting for user to quit */ done = 0; while (!done) { while (SDL_PollEvent(&event)) { if (event.type == SDL_QUIT) { done = 1; } } render(renderer); SDL_Delay(1); } /* shutdown SDL */ SDL_Quit(); return 0; }
This code displays rectangles with random colors on random areas of the screen. Copy this code to the main.c file.
Finally, set the User Header Search Paths property to the folder where the SDL header files are located. The easiest way to do this is to create a symbolic link that points to the SDL folder inside the HelloSDL project and setting the User Header Search Paths property to SDL/include/.
Make sure HelloSDL is the active scheme in the toolbar then build and run the project to see the dancing rectangles.
Conclusion
As you have seen, C and C++ development in iOS is straightforward through the use of Objective-C++. If you’ve been programming with C or C++ before then it’s easier now to transition to iOS development and leverage your existing skills. Best of all, you can take advantage of existing C/C++ libraries like the powerful SDL with minimal configuration.
I’d love to hear your experiences and opinions on using Objective-C++ in the comments below. | https://www.sitepoint.com/using-c-and-c-in-an-ios-app-with-objective-c/ | CC-MAIN-2018-05 | refinedweb | 1,952 | 56.96 |
audit_series_add
Name
audit_series_add — Add a value to the counter associated with a key in a named series
Synopsis
#include "modules/validate/audit_series.h"
|
int **audit_series_add** ( | name, | |
| | count, | |
| | key, | |
| | ac, | |
| | prefix_len
); | |
const char * <var class="pdparam">name</var>;
int <var class="pdparam">count</var>;
char * <var class="pdparam">key</var>;
accept_construct * <var class="pdparam">ac</var>;
int <var class="pdparam">prefix_len<.
Add a value to the counter associated with a key in a named series.
Only the current window is affected.
- name
- name of the series.
- count
- the value to be added. It can be positive or negative.
- key
- the key for which the counts is desired. For series of type "cidr", key is of the form "ip/mask". to use the remote IP of current session, caller can pass in "ac" instead.
- ac
- accept construct. This is the alternative for passing in an IP for cidr series.
- prefix_len
- if the type is cidr_ipv6, this determines the prefix to match. Useful for mitigating DoS attacks from a single system cycling addresses.
0 if successful, -1 if error.
**Configuration Change. ** This feature is available starting from Momentum 3.1. | https://support.sparkpost.com/momentum/3/3-api/apis-audit-series-add | CC-MAIN-2022-05 | refinedweb | 187 | 69.28 |
Capturing mutables in f#
I was talking about F# with a coworker recently and we were discussing the merits of a stateless system. Both of us really like the enforcement of having to inject state, and when necessary, returning a new modified copy of state. Functional languages want you to work with this pattern, but like with all things software, it’s good to be able to break the rules. This is one of the things I like about F#, you can create mutables and do work imperatively if you need to.
But, there is a small caveat with mutables: you can’t close over them. Look at the following example:
let g() = let mutable f = 0; fun () -> Console.WriteLine f
The intent is that calling
g() would give you a new function that writes
f to the console. In C# it would be the same as
public Action g(){ int f = 0; return () => Console.WriteLine(f); }
Both examples look functionally the same, but the F# example actually gives you the following compiler error:
The mutable variable ‘f’ is used in an invalid way. Mutable variables cannot be captured by closures. Consider eliminating this use of mutation or using a heap-allocated mutable reference cell via ‘ref’ and ‘!’.
But, the C# version is totally fine. Why?
The reason is because F# mutable values are always stack allocated. To close on a variable, the variable needs to be allocated on the heap (or copied by value). This is why you can close on objects that aren’t mutable (you close on their reference) and values that aren’t mutable (they are closed by value, i.e. copied). If you closed on a stack allocated type it wouldn’t work; stack objects are popped off after the function loses scope. This is the basis of stack unwinding. After the stack is unwound, the reference to the value you closed on would point to garbage!
So why does the C# version work?
f looks like a stack allocated value type to me. The nuance is that the C# compiler actually makes
f become a heap allocated value type. Here is a quote from Eric Lipperts blog explaining this (emphasis mine): C# actually moves the value type to the heap to be declared because it needs to be accessed later via the closure. If you didn’t do that, then the value type wouldn’t exist when the closure is executed since the stack reference would have been lost (stacks are popped off when functions return).
F#, then, is much stricter about its stack vs heap allocations and opted to not do this magic for you. I think their decision aligns with the functional philosophy of statelessness; they obviously could have done the magic for you but chose not to.
Instead, if you do need to return a captured mutable value in a function closure you have to use what is called a reference cell. All a reference cell is is a heap allocated mutable variable, which is exactly what you need for returned closures to work.
A modified version of our example that would now work looks like this:
let g() = let f = ref 0; fun () -> Console.WriteLine !f g()()
Notice the
! which dereferences the cell. This example outputs
0
Without the
!, though, you’d get
Microsoft.FSharp.Core.FSharpRef`1[System.Int32]
Showing you that
f isn’t really an int, it’s a boxed heap value of an int. | http://onoffswitch.net/capturing-mutables-f/ | CC-MAIN-2014-15 | refinedweb | 578 | 62.88 |
a = """foo""" b = """foo2""" c = """foo3""" mylist = [a, b, c] def example(): print (random.choice(mylist)) if "foo2" in mylist: quit(0) else: example2()
I’m using triple quotes for the variables because those strings are quite long in my project.
Anyway, I easily found how to pull a random item from a list, but couldn’t find ways to then utilize the randomized outcome. The code I’m using doesn’t work. The script always ignores the
if "foo2" in mylist: and moves right onto
example2(). At least I think that’s what it’s doing.
I also tried:
def example(): print (random.choice(mylist)) if mylist == a: quit(0) else: example2()
Any help is appreciated. | https://forum.learncodethehardway.com/t/if-statement-based-on-outcome-of-random-choice/882 | CC-MAIN-2022-40 | refinedweb | 117 | 75.1 |
There are many ways to manage the React state between many components: using libraries like Redux, MobX, Immer, Recoil, etc, or using a React Context.
After using several of them, I personally choose React Context because of its simplicity. To use a React Context to manage the state you have to put the state in the
Provider along with the method to update it. Then you can consume it from the
Consumer.
However, the problem with React Context is that if you change the value of a single field of the state, instead of updating the components that use only this field, all components that use any field from the state will be re-rendered.
In this article I'm going to explain the concept of "fragmented store" to solve this, and how to use it in a simple and easy way.
What is a fragmented store
The fragmented store makes it possible to consume each field of the store separately. Since most of the components will consume few fields of the whole store, it's not interesting that they are re-rendered when other fields are updated.
To solve this with React Context you have to create a context for each field of the store, which is not very feasible due to its difficulty.
// ❌ Not recommended <UsernameProvider> <AgeProvider> {children} </AgeProvider> </UsernameProvider>
Naturally, if we have very few properties in the "store" it could work. But when we start to have too many, there will be too much logic implemented to solve the problem of re-rendering, since it would be necessary to implement each context for each property.
However, I have good news, it can be automatically created.
How to use a fragmented store
I created a tiny library (500b) called fragmented-store to make it super simple and easy to use. It uses React Context underneath (I'll explain later what it does exactly).
Create context + add the Provider
Just as we would go with the React Context, we need to create the context and add the provider to the application. We'll take this opportunity to initialize the store to the data we want at the beginning.
import createStore from "fragmented-store"; // It is advisable to set all the fields. If you don't know the // initial value you can set it to undefined or null to be able // to consume the values in the same way const { Provider } = createStore({ username: "Aral", age: 31, }); function App() { return ( <Provider> {/* rest */} </Provider> ); }
Consume one field
For the example, we will make 2 components that consume a field of the store. As you'll see, it's similar to having a
useState in each component with the property that you want, with the difference that several components can share the same property with the same value.
import createStore from "fragmented-store"; // We can import hooks with the property name in camelCase. // username -> useUsername // age -> useAge const { Provider, useUsername, useAge } = createStore({ username: "Aral", age: 31, }); function App() { return ( <Provider> <UsernameComponent /> <AgeComponent /> </Provider> ); } // Consume the "username" field function UsernameComponent() { const [username, setUsername] = useUsername(); return ( <button onClick={() => setUsername("AnotherUserName")}> Update {username} </button> ); } // Consume the "age" field function AgeComponent() { const [age, setAge] = useAge(); return ( <div> <div>{age}</div> <button onClick={() => setAge((s) => s + 1)}>Inc age</button> </div> ); }
When the
AgeComponent updates the
age field only the
AgeComponent is re-rendered. The
UsernameComponent is not re-rendered since it does not use the same fragmented part of the store.
Consume all the store
In case you want to update several fields of the store, you can consume the whole store directly. The component that consumes all the store will be re-render for any updated field.
import createStore from "fragmented-store"; // Special hook useStore const { Provider, useStore } = createStore({ username: "Aral", age: 31, }); function App() { return ( <Provider> <AllStoreComponent /> </Provider> ); } // Consume all fields of the store function AllStoreComponent() { const [store, update] = useStore(); console.log({ store }); // all store function onClick() { update({ age: 32, username: "Aral Roca" }) } return ( <button onClick={onClick}>Modify store</button> ); }
And again, if we only update some fields, the components that consume these fields will be re-rendered while other components that consume other fields won't!
// It only updates the "username" field, other fields won't be updated // The UsernameComponent is going to be re-rendered while AgeComponent won't :) update({ username: "Aral Roca" })
You don't need to do this (even if it's supported):
update(s => ({ ...s, username: "Aral" }))
With this only the components that consume the
username field with the
useUsername hook would be re-rendered.
How is implemented underneath
The fragmented-store library is a single very short file. It's similar of what we'd manually do to create several React Contexts for each property. It automatically creates everything you need to consume and update them (hooks).
import React, { useState, useContext, createContext } from 'react' export default function createStore(store = {}) { const keys = Object.keys(store) const capitalize = (k) => `${k[0].toUpperCase()}${k.slice(1, k.length)}` // storeUtils is the object we'll return with everything // (Provider, hooks) // // We initialize it by creating a context for each property and // returning a hook to consume the context of each property const storeUtils = keys.reduce((o, key) => { const context = createContext(store[key]) // Property context const keyCapitalized = capitalize(key) if (keyCapitalized === 'Store') { console.error( 'Avoid to use the "store" name at the first level, it\'s reserved for the "useStore" hook.' ) } return { ...o, // All contexts contexts: [...(o.contexts || []), { context, key }], // Hook to consume the property context [`use${keyCapitalized}`]: () => useContext(context), } }, {}) // We create the main provider by wrapping all the providers storeUtils.Provider = ({ children }) => { const Empty = ({ children }) => children const Component = storeUtils.contexts .map(({ context, key }) => ({ children }) => { const ctx = useState(store[key]) return <context.Provider value={ctx}>{children}</context.Provider> }) .reduce( (RestProviders, Provider) => ({ children }) => ( <Provider> <RestProviders>{children}</RestProviders> </Provider> ), Empty ) return <Component>{children}</Component> } // As a bonus, we create the useStore hook to return all the // state. Also to return an updater that uses all the created hooks at // the same time storeUtils.useStore = () => { const state = {} const updates = {} keys.forEach((k) => { const [s, u] = storeUtils[`use${capitalize(k)}`]() state[k] = s updates[k] = u }) function updater(newState) { const s = typeof newState === 'function' ? newState(state) : newState || {} Object.keys(s).forEach((k) => updates[k] && updates[k](s[k])) } return [state, updater] } // Return everything we've generated return storeUtils }
Demo
I created a Codesandbox in case you want to try how it works. I added a
console.log in each component so you can check when each one is re-rendered. The example is super simple, but you can try creating your own components and your state.
Conclusions
In this article I've explained the benefits of the "fragmented store" concept and how to apply it with React Context without the need to manually create many contexts.
In the example of the article and the fragmented-store library the fragmentation level is only at the first level for now. The library I've implemented is in a very early stage and there are certainly a number of improvements that could be made. Any proposal for changes can be made on GitHub as the project is open source and will be very well received:
Top comments (4)
I like the simplicity of this approach. Nice!
Any specific reason you went with
useUnfragmentedStoreinstead of
useStore?
That's a long name and I'm a lazy dev. :-)
I think one of the reasons was to avoid collisions with some variable named store
In reallity I'm with you, good catch! Sounds much better useStore! Feel free to PR or I'm going to change It tomorrow ☺️👍
Changed! Thanks | https://dev.to/aralroca/react-state-with-a-fragmented-store-18ff | CC-MAIN-2022-40 | refinedweb | 1,267 | 51.07 |
In the previous article, I presented AdaBoost, a powerful boosting algorithm which brings some modifications compared to bagging algorithms. In this article, I’ll present the key concepts of Gradient Boosting. Regression and classification are quite different concepts for Gradient Boosting. In this article, we’ll focus on regression.
Gradient Boosting vs. AdaBoost
Gradient Boosting can be compared to AdaBoost, but has a few differences :
- Instead of growing a forest of stumps, we initially predict the average (since it’s regression here) of the y-column and build a decision tree based on that value.
- Like in AdaBoost, the next tree depends on the error of the previous one.
- But unlike AdaBoost, the tree we grow is not only a stump but a real decision tree.
- As in AdaBoost, there is a weight associated with the trees, but the scale factor is applied to all the trees.
Gradient Boosting steps
Let’s consider a simple scenario in which we have several features, \(x_1, x_2, x_3, x_4\) and try to predict \(y\).
Step 1 : Make the first guess
The initial guess of the Gradient Boosting algorithm is to predict the average value of the target \(y\). For example, if our features are the age \(x_1\) and the height \(x_2\) of a person… and we want to predict the weight of the person.
Step 2 : Compute the pseudo-residuals
For the variable \(x_1\), we compute the difference between the observations and the prediction we made. This is called the pseudo-residuals.
We compute the pseudo-residuals for the first feature \(x_1\).
Step 3 : Predict the pseudo-residuals
Then, we will be using the features \(x_1, x_2,x_3, x_4\) to predict the pseudo-residuals column.
We can now predict the pseudo-residuals using a tree, that typically has 8 to 32 leaves (so larger than a stump). By restricting the number of leaves of the tree we build, we obtain less leaves than residuals. Therefore, the outcome of a given branch of the tree is the average of the columns that lead to this leaf, as in a regression tree.
Step 4 : Make a prediction and compute the residuals
To make a prediction, we say that the average is 13.39. Then, we take our observation, run in through the tree, get the value of the leaf, and add it to 13.39.
If we stop here, we will most probably overfit. Gradient Boost applies a learning rate \(lr\) to scale the contribution from a new tree, by applying a factor between 0 and 1.\[y_{pred} = \bar{y_{train}} + lr \times res_{pred}\]
The idea behind the learning rate is to make a small step in the right direction. This allows an overall lower variance.
Notice how all the residuals got smaller now.
Step 5 : Make a second prediction
Now, we :
- build a second tree
- compute the prediction using this second tree
- compute the residuals according to the prediction
- build the third tree
- …
Let’s just cover how to compute the prediction. We are still using the features \(x_1, x_2, x_3, x_4\) to predict the new residuals Pseudo_Res_2.
We build a tree to estimate those residuals. Once we have this tree (with a limited number of leaves), we are ready to make the new prediction :\[y_{pred} = \bar{y_{train}} + lr \times res_{pred_1} + lr \times res_{pred_2}\]
The prediction is equal to :
- the average value initially computed
- plus LR * the predicted residuals at step 1
- plus LR * the predicted residuals at step 2
Notice how we always apply the same Learning Rate. We are now ready to compute the new residuals, fit the 3rd tree on it, compute the 4th residuals… and so on, until :
- we reach the maximum number of trees specified
- or we don’t learn significantly anymore
Full Pseudo-code
The algorithm can be then described as the following, on a dataset \((x,y)\) with \(x\) the features and \(y\) the targets, with a differentiable loss function \(\cal{L}\):
\(\cal{L} = \frac {1} {2} (Obs - Pred)^2\), called the Squared Residuals. Notice that since the function is differentiable, we have :\[\frac { \delta } {\delta Pred} \cal{L} = - 1 \times (Obs - Pred)\]
Step 1 : Initialize the model with a constant value : \(F_0(x) = argmin_{\gamma} \sum_i \cal{L}(y_i, \gamma)\). We simply want to minimize the sum of the squared residuals (SSR) by choosing the best prediction \(\gamma\).
If we derive the optimal value for \(\gamma\) :\[\frac { \delta } {\delta \gamma } \sum_i \cal{L}(y_i, \gamma) = -(y_1 - \gamma) + -(y_2 - \gamma) + -(y_3 - \gamma) + ... = 0\] \[\sum_i y_i - n * \gamma = 0\] \[\gamma = \frac{ \sum_i y_i }{n} = \bar{y}\]
This is simply the average of the observations. This justifies our previous constant initialization. In other words, we created a leaf that predicts all samples will weight the average of the samples.
Step 2 : For m = 1 to M (the maximum number of trees specified, e.g 100)
- a) Compute the pseudo-residuals for every sample :
This derivative is called the Gradient. The Gradient Boost is named after this.
b) Fit a regression tree to the \(r_{im}\) values and create terminal regions \(R_{jm}\) for j = 1, … , \(J_m\), i.e create the leaves of the tree. At that point, we still need to compute the output value of each leaf.
c) For each leaf j = 1… \(J_m\), compute the output value that minimized the SSR : \(\gamma_{jm} = argmin_{\gamma} \sum_{x_i \in R_{ij}} \cal{L}(y_i, F_{m-1} + \gamma)\). In other words, we will simply predict the output of all the samples stored in a certain leaf.
d) Make a new prediction for each sample by updating, accoridng to a learning rate \(lr \in (0,1)\) : \(F_m(x) = F_{m-1}(x) + lr \times \sum_j \gamma_{jm} I(x \in R_{jm} )\). We compute the new value by summing the previous prediction and all the predictions \(\gamma\) into which our sample falls.
Implement a high-level Gradient Boosting in Python
Since the pseudo-code detailed above might be a bit tricky to understand, I’ve tried to summarize a high-level idea of Gradient Boosting, and we’ll be implementing it in Python.
Step 1 : Initialize the model with a constant value : \(\gamma = \frac{ \sum_i y_i }{n} = \bar{y}\)
This is simply the average of the observations.
Step 2 : For each tree m = 1 to M (the maximum number of trees specified, e.g 100)
- a) Compute the pseudo-residuals for every sample, i.e the true value - the predicted value :
b) Fit a regression tree on the residuals, and predict the residuals \(r_t\)
c) Update the prediction :
Data generation
We start by generating some data for our regression :
import numpy as np import pandas as pd import matplotlib.pyplot as plt x = np.arange(0,50) x = pd.DataFrame({'x':x}) y1 = np.random.uniform(10,15,10) y2 = np.random.uniform(20,25,10) y3 = np.random.uniform(0,5,10) y4 = np.random.uniform(30,32,10) y5 = np.random.uniform(13,17,10) y = np.concatenate((y1,y2,y3,y4,y5)) y = y[:,None] plt.figure(figsize=(12,8)) plt.scatter(x,y) plt.show()
Fit a simple decision tree
To illustrate the limits of decision trees, we can try to fit a simple decision tree with a maximal depth of 1, called a stump.
from sklearn import tree clf = tree.DecisionTreeRegressor(max_depth=1) model = clf.fit(x,y) pred = model.predict(x) plt.figure(figsize=(12,8)) plt.plot(x, pred, c='red') plt.scatter(x,y) plt.show()
This is the starting point for our estimation. Now, we need to go further and make our model more complex by implementing gradient boosting.
Implement Gradient Boosting
xi = x.copy() yi = y.copy() # Initialize error to 0 ei = 0 n = len(yi) # Initialize predictions with average predf = np.ones(n) * np.mean(yi) lr = 0.3 # Iterate according to the number of iterations chosen for i in range(101): # Step 2.a) # Fit the decision tree / stump (max_depth = 1) on xi, yi clf = tree.DecisionTreeRegressor(max_depth=1) model = clf.fit(xi, yi) # Use the fitted model to predict yi predi = model.predict(xi) # Step 2.c) # Compute the new prediction (learning rate !) # Compute the new residuals, # Set the new yi equal to the residuals predf = predf + lr * predi ei = y.reshape(-1,) - predf yi = ei # Every 10 iterations, plot the prediction vs the actual data if i % 10 == 0 : plt.figure(figsize=(12,8)) plt.plot(x, predf, c='r') plt.scatter(x, y) plt.title("Iteration " + str(i)) plt.show()
By increasing the learning rate, we tend to overfit. However, if the learning rate is too low, it takes a large number of iterations to even approach the underlying structure of the data.
Conclusion : I hope this introduction to Gradient Boosting was helpful. The topic can get much more complex over time, and the implementation is Scikit-learn is much more complex than this. In the next article, we’ll cover the topic of classification. | https://maelfabien.github.io/machinelearning/GradientBoost/ | CC-MAIN-2020-40 | refinedweb | 1,497 | 64.61 |
cud u show me it??:)
Yes I could. But you need to show what you've tried so far (your code/flowchart/pseudocode whatever), no free homework without some effort from you.
Read the link I posted.
Heres sum wit comments.........what i need most is the implementation of the methods of class cHighLow in my highlow.cpp file....
#include "Games.h" // TODO: Define class cHighLow here. class cHighLow : public cGame { public: #define deck_size 52 private: int size public: cHighandLow(); // This method returns the number of players who are required to play the game (in this case 2) int getNumberOfPlayers(); // This method resets the board to the starting positions void resetGame(); // This method outputs the state of the game to the screen. void displayState(); // Reads in a move from the console for the player whose currently supposed to be taking a turn. // Returns true if successful and returns false if no more moves can be made. int inputMoveFromConsoleAndApply(); // Apply a move for player_number as long as the space is free, player_number is the correct player to // take the next turn and player_number has not already taken his turn. Returns true only if a move is // made int applyMove(int player_number, int row, int column); // This method outputs the result of the game. void announceResult(); // This method returns whether a game has finished (i.e. returns true if it has // finished and false otherwise) and if the game is over it sets the winner_number // to the number of the player who has won or to 0 in the case of a draw. int gameOver(int& winner_number); }; and.... #include <iostream> #include "Games.h" #include "Players.h" using namespace std; // This mainline lets users select a game, select the players for the game // and then play the game. int main() { cGame* selected_game = cGame::selectGame(); int number_of_players = selected_game->getNumberOfPlayers(); cPlayer** players = new cPlayer*[number_of_players]; for (int player_number=0; player_number < number_of_players; player_number++) { players[player_number] = cPlayer::selectPlayer(); } selected_game->playGame(players); system("PAUSE"); for (int player_number=0; player_number < number_of_players; player_number++) { delete players[player_number]; } delete[] players; delete selected_game; return 0; }
Edited 3 Years Ago by happygeek: fixed formatting
Do you have a class to represent a card? Does the class have the ability to distinguish when one card is bigger/lower than another?
What are the rules of the game? Is it more like "War" or more like "Rock, Paper, Scissors"?
no class to represent card,
High-Low game:
I. A deck of 52 cards is shuffled
II. The top card is turned over
III. The player must guess if the next card is Higher, Lower or
Equal to the last card turned over.
IV. The next card is turned over
V. If the player is wrong the game is over and the player has
lost
VI. If the player is correct then if the player has made 3
correct guesses the player has won. If not got back to
step III.
Note: The Ace is considered the highest card.
Are you "allowed" to use algorithms from the Standard Template Library (like shuffle()), or do you need to create your own protocols/algorithms/functions?
I think he needs to create his own function for shuffle | https://www.daniweb.com/programming/software-development/threads/184596/high-low-game-template | CC-MAIN-2016-50 | refinedweb | 527 | 72.97 |
ros (community library)
Summary
ROS port of rosserial
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
ros
A Particle library for ro ros library to your project and follow this simple example:
#include "ros.h" Ros ros; void setup() { ros.begin(); } void loop() { ros.process(); }
See the examples folder for more details.
Documentation
TODO: Describe
Ro ros_myname to add the library to a project on your machine or add the ro | https://docs.particle.io/reference/device-os/libraries/r/ros/ | CC-MAIN-2022-27 | refinedweb | 103 | 50.33 |
Note: this is best read on my website. The original post includes runnable React demos that I had to remove, as dev.to does not support MDX.
This is also my first post here, hope you'll enjoy it :)
Many blog articles talk about loading api/async data in a React apps, with
componentDidMount,
useEffect, Redux, Apollo...
Yet, all those articles are generally optimistic, and never mention something important to consider: race conditions could happen, and your UI may end up in an inconsistant state.
An image is worth a thousand words:
You search for Macron, then change your mind and search for Trump, and you end up with a mismatch between what you want (Trump) and what you get (Macron).
If there is a non-null probability that your UI could end up in such a state, your app is subject to race conditions.
Why this happens?
Sometimes, multiple requests are fired in parallel (competing to render the same view), and we just assume the last request will resolve last. Actually, the last request may resolve first, or just fail, leading to the first request resolving last.
It happens more often than you think. For some apps, it can lead to very serious problems, like a user buying the wrong product, or a doctor prescribing the wrong drug to a patient.
A non-exhaustive list of reasons:
- The network is slow, bad, unpredictable, with variable request latencies...
- The backend is under heavy load, throttling some requests, under a Denial-of-Service attack...
- The user is clicking fast, commuting, travelling, on the country side...
- You are just unlucky
Developers don't see them in development, where the network conditions are generally good, sometimes running the backend API on your own computer, with close to 0ms latency.
In this post, I'll show you what those issues do, using realistic network simulations and runnable demos. I'll also explain how you can fix those issues, depending on the libraries you already use.
Disclaimer: to keep the focus on race conditions, the following code samples will not prevent the React warning if you
setState after unmounting.
The incriminated code:
You probably already read tutorials with the following code:
const StarwarsHero = ({ id }) => { const [data, setData] = useState(null); useEffect(() => { setData(null); fetchStarwarsHeroData(id).then( result => setData(result), e => console.warn('fetch failure', e), ); }, [id]); return <div>{data ? data.name : <Spinner />}</div>; };
Or with the class API:
class StarwarsHero extends React.Component { state = { data: null }; fetchData = id => { fetchStarwarsHeroData(id).then( result => setState({ data: result }), e => console.warn('fetch failure', e), ); }; componentDidMount() { this.fetchData(this.props.id); } componentDidUpdate(nextProps) { if (nextProps.id !== this.props.id) { this.fetchData(this.props.id); } } render() { const { data } = this.state; return <div>{data ? data.name : <Spinner />}</div>; } }
All 2 versions above lead to this same result. When changing the id very fast, even with your own good home network and very fast API, something is wrong and sometimes, previous request's data is rendered. Please don't think debouncing protects you: it just reduces the chances of being unlucky.
Now let's see what happens when you are on a train with a few tunnels.
Simulating bad network conditions
Let's build some utils to simulate bad network conditions:
import { sample } from 'lodash'; // Will return a promise delayed by a random amount, picked in the delay array const delayRandomly = () => { const timeout = sample([0, 200, 500, 700, 1000, 3000]); return new Promise(resolve => setTimeout(resolve, timeout), ); }; // Will throw randomly with a 1/4 chance ratio const throwRandomly = () => { const shouldThrow = sample([true, false, false, false]); if (shouldThrow) { throw new Error('simulated async failure'); } };
Adding network delays
You might be on a slow network, or the backend may take time to answer.
useEffect(() => { setData(null); fetchStarwarsHeroData(id) .then(async data => { await delayRandomly(); return data; }) .then( result => setData(result), e => console.warn('fetch failure', e), ); }, [id]);
Adding network delays + failures
You are on a train in the countryside, and there are a few tunnels: requests are delayed randomly and some of them might fail.
useEffect(() => { setData(null); fetchStarwarsHeroData(id) .then(async data => { await delayRandomly(); throwRandomly(); return data; }) .then( result => setData(result), e => console.warn('fetch failure', e), ); }, [id]);
This code very easily leads to weird, inconsistant UI states.
How to avoid this problem
Let's suppose 3 requests R1, R2 and R3 gets fired in this order, and are still pending. The solution is to only handle the response from R3, the last issued request.
There are a few ways to do so:
- Ignoring responses from former api calls
- Cancelling former api calls
- Cancelling and ignoring
Ignoring responses from former api calls
Here is one possible implementation.
// A ref to store the last issued pending request const lastPromise = useRef(); useEffect(() => { setData(null); // fire the api request const currentPromise = fetchStarwarsHeroData(id).then( async data => { await delayRandomly(); throwRandomly(); return data; }, ); // store the promise to the ref lastPromise.current = currentPromise; // handle the result with filtering currentPromise.then( result => { if (currentPromise === lastPromise.current) { setData(result); } }, e => { if (currentPromise === lastPromise.current) { console.warn('fetch failure', e); } }, ); }, [id]);
Some might be tempted to use the
id to do this filtering, but it's not a good idea: if the user clicks
next and then
previous, we might end up with 2 distinct requests for the same hero. Generally this is not a problem (as the 2 requests will often return the exact same data), but using promise identity is a more generic and portable solution.
Cancelling former api calls
It is better to cancel former api requests in-flight: the browser can avoid parsing the response and prevent some useless CPU/Network usage.
fetch support cancellation thanks to
AbortSignal:
const abortController = new AbortController(); // fire the request, with an abort signal, // which will permit premature abortion fetch(`{id}/`, { signal: abortController.signal, }); // abort the request in-flight // the request will be marked as "cancelled" in devtools abortController.abort();
An abort signal is like a little event emitter, you can trigger it (through the
AbortController), and every request started with this signal will be notified and canceled.
Let's see how to use this feature to solve race conditions:
// Store abort controller which will permit to abort // the last issued request const lastAbortController = useRef(); useEffect(() => { setData(null); // When a new request is going to be issued, // the first thing to do is cancel the previous request if (lastAbortController.current) { lastAbortController.current.abort(); } // Create new AbortController for the new request and store it in the ref const currentAbortController = new AbortController(); lastAbortController.current = currentAbortController; // Issue the new request, that may eventually be aborted // by a subsequent request const currentPromise = fetchStarwarsHeroData(id, { signal: currentAbortController.signal, }).then(async data => { await delayRandomly(); throwRandomly(); return data; }); currentPromise.then( result => setData(result), e => console.warn('fetch failure', e), ); }, [id]);
This code looks good at first, but actually we are still not safe.
Let's consider the following code:
const abortController = new AbortController(); fetch('/', { signal: abortController.signal }).then( async response => { await delayRandomly(); throwRandomly(); return response.json(); }, );
If we abort the request during the fetch, the browser will be notified and do something about it. But if the abortion happens while the browser is running the
then() callback, it has no way to handle the abortion of this part of the code, and you have to write this logic on your own. If the abortion happens during the fake delay we added, it won't cancel that delay and stop the flow.
fetch('/', { signal: abortController.signal }).then( async response => { await delayRandomly(); throwRandomly(); const data = await response.json(); // Here you can decide to handle the abortion the way you want. // Throwing or never resolving are valid options if (abortController.signal.aborted) { return new Promise(); } return data; }, );
Let's get back to our problem. Here's the final, safe version, aborting the request in-flight, but also using the abortion to eventually filter the results. Also let's use the hooks cleanup function, as I was suggested on Twitter, which makes the code a bit simpler.
useEffect(() => { setData(null); // Create the current request's abort controller const abortController = new AbortController(); // Issue the request fetchStarwarsHeroData(id, { signal: abortController.signal, }) // Simulate some delay/errors .then(async data => { await delayRandomly(); throwRandomly(); return data; }) // Set the result, if not aborted .then( result => { // IMPORTANT: we still need to filter the results here, // in case abortion happens during the delay. // In real apps, abortion could happen when you are parsing the json, // with code like "fetch().then(res => res.json())" // but also any other async then() you execute after the fetch if (abortController.signal.aborted) { return; } setData(result); }, e => console.warn('fetch failure', e), ); // Trigger the abortion in useEffect's cleanup function return () => { abortController.abort(); }; }, [id]);
And now only we are safe.
Using libraries
Doing all this manually is complex and error prone. Hopefully, some libraries solve this problem for you. Let's explore a non-exhaustive list of libraries generally used for loading data into React.
Redux
There are multiple ways to load data into a Redux store. Generally, if you are using Redux-saga or Redux-observable, you are fine. For Redux-thunk, Redux-promise and other middlewares, you might check the "vanilla React/Promise" solutions in next sections.
Redux-saga
You might notice there are multiple
take methods on the Redux-saga API, but generally you'll find many examples using
takeLatest. This is because
takeLatest will protect you against those race conditions.
Forks a saga on each action dispatched to the Store that matches pattern. And automatically cancels any previous saga task started previously if it's still running.
function* loadStarwarsHeroSaga() { yield* takeLatest( 'LOAD_STARWARS_HERO', function* loadStarwarsHero({ payload }) { try { const hero = yield call(fetchStarwarsHero, [ payload.id, ]); yield put({ type: 'LOAD_STARWARS_HERO_SUCCESS', hero, }); } catch (err) { yield put({ type: 'LOAD_STARWARS_HERO_FAILURE', err, }); } }, ); }
The previous
loadStarwarsHero generator executions will be "cancelled". Unfortunately the underlying API request will not really be cancelled (you need an
AbortSignal for that), but Redux-saga will ensure that the success/error actions will only be dispatched to Redux for the last requested Starwars hero. For in-flight request cancellation, follow this issue
You can also opt-out from this protection and use
take or
takeEvery.
Redux-observable
Similarly, Redux-observable (actually RxJS) has a solution:
switchMap:
The main difference between switchMap and other flattening operators is the cancelling effect. On each emission the previous inner observable (the result of the function you supplied) is cancelled and the new observable is subscribed. You can remember this by the phrase switch to a new observable.
const loadStarwarsHeroEpic = action$ => action$.ofType('LOAD_STARWARS_HERO').switchMap(action => Observable.ajax(`{action.payload.id}`) .map(hero => ({ type: 'LOAD_STARWARS_HERO_SUCCESS', hero, })) .catch(err => Observable.of({ type: 'LOAD_STARWARS_HERO_FAILURE', err, }), ), );
You can also use other RxJS operators like
mergeMap if you know what you are doing, but many tutorials will use
switchMap, as it's a safer default. Like Redux-saga, it won't cancel the underlying request in-flight, but there are solutions to add this behavior.
Apollo
Apollo lets you pass down GraphQL query variables. Whenever the Starwars hero id changes, a new request is fired to load the appropriate data. You can use the HOC, the render props or the hooks, Apollo will always guarantee that if you request
id: 2, your UI will never return you the data for another Starwars hero.
const data = useQuery(GET_STARWARS_HERO, { variables: { id }, }); if (data) { // This is always true, hopefully! assert(data.id === id); }
Vanilla React
There are many libraries to load data into React components, without needing a global state management solution.
I created react-async-hook: a very simple and tiny hooks library to load async data into React components. It has very good native Typescript support, and protects you against race conditions by using the techniques discussed above.
import { useAsync } from 'react-async-hook'; const fetchStarwarsHero = async id => (await fetch( `{id}/`, )).json(); const StarwarsHero = ({ id }) => { const asyncHero = useAsync(fetchStarwarsHero, [id]); return ( <div> {asyncHero.loading && <div>Loading</div>} {asyncHero.error && ( <div>Error: {asyncHero.error.message}</div> )} {asyncHero.result && ( <div> <div>Success!</div> <div>Name: {asyncHero.result.name}</div> </div> )} </div> ); };
Other options protecting you:
- react-async: quite similar, also with render props api
- react-refetch: older project, based on HOCs
There are many other library options, for which I won't be able to tell you if they are protecting you: take a look at the implementation.
Note: it's possible
react-async-hook and
react-async will merge in the next months.
Note:: it's possible to use
StarwarsHero key={id} id={id}/> as a simple React workaround, to ensure the component remounts everytime the id changes. This will protect you (and sometime a useful feature), but gives more work to React.
Vanilla promises and Javascript
If you are dealing with vanilla promises and Javascript, here are simple tools you can use to prevent those issues.
Those tools can also be useful to handle race conditions if you are using thunks or promises with Redux.
Note: some of these tools are actually low-level implementation details of react-async-hook.
Cancellable promises
React has an old blog post isMounted() is an antipattern on which you'll learn how to make a promise cancellable to avoid the setState after unmount warning. The promise is not really
cancellable (the underlying api call won't be cancelled), but you can choose to ignore or reject the response of a promise.
I made a library awesome-imperative-promise to make this process easier:
import { createImperativePromise } from 'awesome-imperative-promise'; const id = 1; const { promise, resolve, reject, cancel } = createImperativePromise(fetchStarwarsHero(id); // will make the returned promise resolved manually resolve({ id, name: "R2D2" }); // will make the returned promise rejected manually reject(new Error("can't load Starwars hero")); // will ensure the returned promise never resolves or reject cancel();
Note: all those methods have to be called before the underlying API request resolves or reject. If the promise is already resolved, there's no way to "unresolve" it.
Automatically ignoring last call
awesome-only-resolves-last-promise is a library to ensure we only handle the result of the last async call:
import { onlyResolvesLast } from 'awesome-only-resolves-last-promise'; const fetchStarwarsHeroLast = onlyResolvesLast( fetchStarwarsHero, ); const promise1 = fetchStarwarsHeroLast(1); const promise2 = fetchStarwarsHeroLast(2); const promise3 = fetchStarwarsHeroLast(3); // promise1: won't resolve // promise2: won't resolve // promise3: WILL resolve
What about Suspense?
It should prevent those issues, but let's wait for the official release :)
Conclusion
For your next React data loading usecase, I hope you will consider handling race conditions properly.
I can also recommend to hardcode some little delays to your API requests in development environment. Potential race conditions and bad loading experiences will be more easy to notice. I think it's safer to make this delay mandatory, instead of expecting each developer to turn on the slow network option in devtools.
I hope you've found this post interesting and you learned something, it was my first technical blog post ever :)
Originally posted on my website
If you like it, spread the word with a Retweet
Browser demos code or correct my post typos on the blog repo
For more content like this, subscribe to my mailing list and follow me on Twitter.
Thanks for my reviewers: Shawn Wang, Mateusz Burzyński, Andrei Calazans, Adrian Carolli, Clément Oriol, Thibaud Duthoit, Bernard Pratz
Discussion
I am happy to see someone finally talking about the real issues. I see a lot of beginner level material on reactjs. | https://dev.to/sebastienlorber/handling-api-request-race-conditions-in-react-4j5b | CC-MAIN-2020-50 | refinedweb | 2,551 | 54.02 |
This guide shows how to use deep sleep with the ESP8266 and how to wake it up with a timer or external wake up using MicroPython firmware.
If you have an ESP32, we recommend reading our MicroPython ESP32 Deep Sleep and Wake Up Sources Guide.
Prerequisites
To follow this tutorial you need MicroPython firmware flashed in your Deep Sleep
When the ESP8266 is in deep sleep mode, everything is off except the Real Time Clock (RTC), which is how the ESP8266 keeps track of time.
In deep sleep mode, the ESP8266 chip consumes approximately
only 20uA.
However, you should keep in mind that in an assembled ESP8266 board, it consumes a lot more current.
We were able to build a weather station data logger with the ESP8266 using MicroPython that only consumes 7uA when it is in deep sleep mode: Low Power Weather Station Datalogger using ESP8266 and BME280 with MicroPython
Timer Wake Up
There are slightly different ways to wake up the ESP8266 with a timer after deep sleep. One of the easiest ways is using the following function()
We recommend copying the previous function to the beginning of your script, and then call the deep_sleep() function to put the ESP8266 in deep sleep mode.
This deep_sleep() function creates a timer that wakes up the ESP8266 after a predetermined number of seconds. To use this function later in your code, you just need to pass as an argument the sleep time in milliseconds.
Script
In the following code, the ESP8266 is in deep sleep mode for 10 seconds. When it wakes up, it blinks an LED, and goes back to sleep again. This process is repeated over and over again.
# Complete project details at import machine from machine import Pin from time import sleep led = Pin(2, Pin.OUT) def deep_sleep(msecs): # configure RTC.ALARM0 to be able to wake the device rtc = machine.RTC() rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP) # set RTC.ALARM0 to fire after X milliseconds (waking the device) rtc.alarm(rtc.ALARM0, msecs) # put the device to sleep machine.deepsleep() _sleep(10000)
How the Code Works
First, import the necessary libraries:
import machine from machine import Pin from time import sleep
Create a Pin object that refers to GPIO 2 called led. For our board, it refers to the on-board LED.
led = Pin(2, Pin.OUT)
After that, add the deep_sleep() function()
The following lines blink the LED.
led.value(1) sleep(1) led.value(0) sleep(1)
Before going to sleep, we add a delay of 5 seconds and print a message to indicate it is going to sleep.
sleep(5) print('Im awake, but Im going to sleep')
It’s important to add that delay of 5 seconds before going to sleep when we are developing the script. When you want to upload a new code to your board, it needs to be awaken. So, if you don’t have the delay, it will be difficult to catch it awake to upload code later on. After having the final code, you can remove that delay.
Finally, put the ESP8266 in deep sleep for 10 seconds (10 000 milliseconds) by calling the deep_sleep() function and passing as argument the number of milliseconds.
deep_sleep(10000)
After 10 seconds, the ESP8266 wakes up and runs the code from the start, similarly of when you press the RESET button.
Uploading the Code
Copy the code provided to the main.py file and upload it to your ESP8266.
If you don’t know how to upload the script follow this tutorial if you’re using Thonny IDE, or this one if you’re using uPyCraft IDE.
After uploading the code, you need to connect GPIO16 (D0) to the RST pin so that the ESP8266 can wake itself up.
Important: if you don’t connect GPIO16 to the RST pin, the ESP8266 will not wake up.
Demonstration
After uploading the code and connecting GPIO 16 (D0) to the RST pin, the ESP8266 should blink the on-board LED and print a message in the shell.
Then, it goes to sleep for 10 seconds, wakes up and runs the code again. This proccess is repeated over and over again.
Deep Sleep with ESP-01
If you want to make a similar setup with an ESP-01 board, you need to solder a wire as shown in the following figure. That tiny pin in the chip is GPIO16 and it needs to be connected to the RST pin.
However, the pins are so tiny that it is really hard to solder a wire like that to the GPIO 16 without damaging the chip.
Learn more about the ESP8266 GPIOs: ESP8266 Pinout Reference: Which GPIO pins should you use?
External Wake Up
The ESP8266 doens’t support external wake up like the ESP32 does. But, there is something we can do about that.
If we put the ESP8266 in deep sleep for an indefinite time, it will only wake up when something resets the board. So, we can wire something to the RST pin and use it as an external wake up. It can be the press of a pushbutton or a magnetic reed switch being close, for example.
The ESP8266 resets when the RST pin goes LOW.
Schematic Diagram
To test this method, wire a pushbutton to the RST pin. You need the following components for the circuit:
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
If you’re using an ESP8266 12-E NodeMCU kit, follow the next schematic diagram.
If you’re using an ESP-01, you can follow the next schematic diagram.
Script
Upload the following code to the ESP8266 as main.py.
# Complete project details at from machine import deepsleep from machine import Pin from time import sleep led = Pin (2, Pin.OUT) #blink LED led.value(0) sleep(1) led.value(1) sleep(1) # wait 5 seconds so that you can catch the ESP awake to establish a serial communication later # you should remove this sleep line in your final script sleep(5) print('Im awake, but Im going to sleep') sleep(1) #sleep for indefinite time deepsleep()
In this case, you just need to call machine.deepsleep() after the execution of the main code.
When you call machine.deepsleep() without any arguments, the ESP8266 will go into deep sleep mode indefinitely until you press the RST button.
Demonstration
After wiring the circuit and uploading the code, you can test your setup. Every time you press the pushbutton that is connected to the RST pin, the ESP8266 resets and wakes up. It blinks the on-board LED and goes back to sleep.
Wrapping Up
We hope you’ve found this project about deep sleep with the ESP8266 useful. We have other tutorials about deep sleep that you might like:
- Low Power Weather Station Datalogger (MicroPython)
- ESP8266 Deep Sleep (Arduino IDE)
- ESP32 Deep Sleep and Wake Up Sources (Arduino IDE)
- ESP32 Deep Sleep and Wake Up Sources (MicroPython)
If you want to learn more about programming the ESP32 and ESP8266 boards with MicroPython, take a look our eBook: MicroPython Programming with ESP32 and ESP8266.
Thanks for reading.
16 thoughts on “MicroPython: ESP8266 Deep Sleep and Wake Up Sources”
Hi Rui
Thanks for great tutorial. I have tried your examples, they works fine so as this one. The NodeMCU goes to deep sleep and consumes less power. However when I connect sensor like MQ 135, the sensor still consumes energy even if NodeMCU in deep sleep. So please provide details on how to save the power consumed by MQ 135 in deep sleep. Is there any way I can stop the power supply to MQ 135 when NodeMCU goes to deep sleep. Because without having sensor attached to NodeMCU and sending it to deep sleep does not make any sense.
You Timothy.
You can connect the VCC pin of your sensor to one of the GPIOs.
Then, when the ESP wake up, set that pin to HIGH to power the sensor.
I think this should solve the problem.
Regards,
Sara
Hi Sara,
Thanks for your tip. I did exactly as you said and it worked for me. But I have noticed that connecting MQ135 sensor VCC to GPIO pin of NodeMCU gives me value around 30 ppm, connecting VCC to 3.3v gives around 150 ppm, and connecting VCC to Vin (5v) of NodeMCU gives me around 300 ppm. So why is this difference. The datasheet of MQ 135 specifies that Circuit voltage of sensor should be 5v. So I wonder if I am supplying right voltage to sensor. What should I do to supply right voltage (around 5v) to sensor using GPIO since I want to put everything off in deepsleep. Kindly help.
Rui/ Sara, I have been asking but I wanted to know the pin layout for a ESP-32 WROOM-32 board. Got it from Aliexpress, but doesn’t look like the original ESP-32. Can I still use that board as a pin layout? Thank you if you can help.
Hi Logan
Please find the datasheet of WROOM-32 board on following link. Hope it helps.
espressif.com/sites/default/files/documentation/esp32-wroom-32_datasheet_en.pdf
Hello Rui, Sara,
I’ve got a Lolin NodeMcu V3 and when it wakes up after deep sleep it freezes and I need to do a full power cycle to get it going again. It always freezes after a deep sleep. It runs on firmware: esp8266-20180511-v1.9.4. I’ve followed instructions to the letter and it just won’t work as expected.
any suggestions? Thank you for any help!
Hi Ruben.
Please make sure that you have the RST pin connected to GPIO 16 (D1). Otherwise the ESP8266 can’t wake itself up.
I don’t know if that’s the problem, but what you describe happens when we forget to connect RST to GPIO16.
I hope this helps.
Regards,
Sara
Hi Sara,
Sadly it does not. I’ve tried all pins on the board and it simply freezes. Not a big deal, I’ll use these boards for other projects. My ESP32 deep sleeps perfectly!
Thank you for your help though.
Ruben
I’m sorry for that issue.
I don’t know what can be the problem.
The project works fine with our ESP8266.
Regards,
Sara
Hello Team,
Congrats to your site.
I did your project using an ESP8266 with your uPython deep sleep program and it works, but after a while (after 1 or 2 days) the ESP wake ups stopping run the program.
I’m trying to monitor the module without any results/conclusions.
Do you have any idea of what is going on? Do I move to do program deep sleep in C?
Thank you in advance,
AH
Hello,
I’m using an ESP8266 Huzzah from adafruit.com. I’ve been trying to get your deep sleep code working with this board. The board will go into deep sleep mode, but it won’t ever come out and restart. I can run machine.deepsleep(), then press the reset button and the board boots as expected. Any help would be greatly appreciated. I’m going to post this on the adafruit forum also.
Thanks
Hi.
Don’t forget that to use deep sleep with the ESP8266, after uploading the code, you need to connect GPIO16 (D0) to the RST pin so that the ESP8266 can wake itself up.
Regards,
Sara
the deep sleep mode is working for my esp8266 board now. All I had to do is read all the instructions. Thanks for pointing this out to me
What is the maximum time in milliseconds that the ESP 8266 can sleep?
Thanks,
Steve
Hi.
Here’s the answer:
Regards,
Sara
Thanks for the prompt response. | https://randomnerdtutorials.com/micropython-esp8266-deep-sleep-wake-up-sources/ | CC-MAIN-2022-21 | refinedweb | 1,983 | 73.27 |
new namespace convention
On his blog, Jack wrote :
I've already introduced a new namespace in 0.40, Ext. (e.g. Ext.BasicDialog instead of YAHOO.ext.BasicDialog)
Re: new namespace convention
Originally Posted by sjivan
Btw, I see the Ext namespace as a move to drop the YAHOO namespace one day. Is this correct? I think this would make sense. Yes, it would break compatibility but it is a find & replace operation to put everything working again.
Also, it would be nice to have a "code conventions" page on the wiki (I'm sorry but I'm in a rush this month due to a release, but when things calm down I'll be back and contribute with some wiki pages ;-)).
I would love to use ext instead of Ext, but it may clash with local variables. Ext is already pushing it but I doubt there will be variables named Ext (at least I am hoping). Package names should be lowercase (and all sub packages will be) but the root namespace has to be capital to prevent conflicts with local variables.
Btw, I see the Ext namespace as a move to drop the YAHOO namespace one day. Is this correct? I think this would make sense. Yes, it would break compatibility but it is a find & replace operation to put everything working again.
Yes the YAHOO namespace is going away. I can't wait! It will remain backwards compatible though, as I will alias Ext as YAHOO.ext and keep code working.
Originally Posted by jacksloc
Sanjiv
Originally Posted by sjivan
, how would we call it?
just my 0.02. ;-)
PS: remember that YuiX still makes reference to YUI, so I guess this is not an option.
sjivan,
I'm with moraes, I really like Ext. It's short and sweet. Also, I think it allows the existing "brand" awareness for yui-ext to not be completely destroyed.
Like moraes noted, the meaning is a little different than "Extensions". The idea is to have a new tag line along the lines of "Extending the web experience" or "Extended JavaScript Components" or "Extend your web application" (I am open to suggestions).
Originally Posted by jacksloc
I thought about different things with UI, the only problem is it "feels" like a Yahoo UI rip off in some way.
second vote for Ext. I like it because it sounds foundational and no-frills
Similar Threads
Lack of Namespace Awareness.By apfelfabrik in forum Community DiscussionReplies: 0Last Post: 30 Mar 2007, 6:17 AM
Closure coding convention suggestionBy papasi in forum Ext 2.x: Help & DiscussionReplies: 1Last Post: 6 Mar 2007, 6:15
Fighting ConventionBy hunkybill in forum Sencha CmdReplies: 1Last Post: 6 Dec 2006, 8:36 AM | https://www.sencha.com/forum/showthread.php?1506-new-namespace-convention | CC-MAIN-2015-48 | refinedweb | 455 | 74.19 |
Linq to SQL Stored Procedures with Multiple Results – IMultipleResults
Continuing my post series about Linq to SQL, this post talks about using stored procedures that return multiple result sets in Linq to SQL. If you missed any of my previous posts about Linq to SQL, here is a reminder:
Sql Server supports returning more than a single result type from a stored procedure. This was very useful when we wanted to fill a large Dataset with multiple tables in a single access to the database. Similarly, this is also very useful when using Linq to SQL, and
So, having created the following stored procedure, based on the schema in the post Linq to SQL Stored Procedures.
CREATE PROCEDURE dbo.GetPostByID
(
@PostID int
)
AS
FROM Posts AS p
WHERE p.PostID = @PostID
SELECT c.*
FROM Categories AS c
JOIN PostCategories AS pc
ON (pc.CategoryID = c.CategoryID)
WHERE pc.PostID = @PostID
The calling method in the class the inherits from DataContext should look like:
[Database(Name = "Blog")]
public class BlogContext : DataContext
{
…
[Function(Name = "dbo.GetPostByID")]
[ResultType(typeof(Post))]
[ResultType(typeof(Category))]
public IMultipleResults GetPostByID(int postID)
{
IExecuteResult result =
this.ExecuteMethodCall(this,
((MethodInfo)(MethodInfo.GetCurrentMethod())),
postID);
return (IMultipleResults)(result.ReturnValue);
}
}
Notice that the method is decorated not only with the Function attribute that maps to the stored procedure name, but also with the ReturnType attributes with the types of the result sets that the stored procedure returns. Additionally, the method returns an untyped interface of IMultipleResults:
public interface IMultipleResults : IFunctionResult, IDisposable
{
IEnumerable<TElement> GetResult<TElement>();
}
so the program can use this interface in order to retrieve the results:
BlogContext ctx = new BlogContext(…);
IMultipleResults results = ctx.GetPostByID(…);
IEnumerable<Post> posts = results.GetResult<Post>();
IEnumerable<Category> categories = results.GetResult<Category>();
Enjoy!
Thank you for the great series of articles a bout linq to sql .
Please generate and post the T-SQL used to create the 4 tables ( Post , blogs etc )
and the relations between them.
Hi Zvika,
The SQL Script can be found here:
Enjoy!
Could I do it with a Dynamic Linq Query (ExecuteQuery) that returns to me multiple resultsets ? How?
Thanks Guy! This post as well as your previous post on Stored Procedures was extremely helpful. It was exactly what I was looking for, thanks!
What happens if the first select returns no results? I have a scenario where the first select returns nothing and the second result returns something. I'm getting "Unable to cast object of type
to ". If the first select returns something, and the second returns nothing, I get "Value cannot be null. Parameter name: source".
I need some direction or info on how LINQ handles multiple results when one or more results has no records coming back.
Tim –
Just make your first select return an empty recordset.
So instead of
BEGIN
IF @param = 1
SELECT a, b FROM c
SELECT d, e FROM f
END
do
BEGIN
SELECT a, b FROM c
WHERE @param = 1
SELECT d, e FROM f
END
Does the function (which gets created for the SP), in this case GetPostByID, gets the return type IMultipleresult by default after dragging and dropping on the .dbml file?
I am trying this with one of my SPs, which returns four tables, but in the .dbml.designer.cs file I see the function return type as ISingleResult.
Can we change the .designer.cs file?
Thanks in Advance,
Pranil
hi,
I checked on some of the sites and came to a conclusion that, if your
SP is returning more than one tables, then either you can use sqlmetal.exe tool or manually modify the designer.cs class to make return type for the method as IMultipleResults.
Now the SP returns 6 tables(which is what I require), however there is data only in the first table, the other tables are empty(which is wrong), since if I execute the SP through server explorer I get data in all the 6 tables.
Any idea what is going wrong?
Thanks,
Pranil
i see your code it good.
i have prob. where SP return two table and both from different join.
in your SP field return from both query are from single table even join used in second query so it is easy to undustand the return type i.e Post and Category.
in my case what is the retun type i take where both query have fields from different table also?
e.g
Table Name::
emp(id,ename,salary,deptid)
dept(deptid,dname)
SP::
select ename,salary,dname from emp,dept where emp.deptid=dept.deptid
select dname,sum(salary) from emp,dept where emp.deptid=dept.deptid group by dname
HI,
I am using dataset with stored procedure(sp)
but now i want to do these using linq but sp is needed
i have sp' which return more the one result
whose solution is imultipleresult as in about forum is given.
But i got problem there
where each result of that sp will have join from more then one table
& in my dbml file only one result class is generated automatically.
how can i create another class for my second or next result set.
should i done menuly or any easy way is there.
plz reply as soon as poossible
How do we handle this if the second RecordSet is from another sp executed inside the 1st one.
Am getting error when i change the Context Designer from IsingleResults to ImultipleResults.
On executing it says "More than one result type declared for function
'Sp_Name' that does not return IMultipleResults".
Any Idea why is it so?
Please reply as soon as poossible
the dbml file overrides the modifications to the designer.cs file and each time I add something to the model I have to re-type the IMultipleResults definition.
Any recomendation?
Leit
What about classes Post and Category? Where is mapping between sp results and those classes?
Uri, in case you're still wondering hoy do modify the dbml file and avoid the so call to be overwritten, you can create a partial class for your DataContext and put the code to call the SP in there.
Eduardo
Any one this code in VB.Net.
it is really a fabulous article.it also enabled me to think out of box
Thanks a lot for this post | http://blogs.microsoft.co.il/bursteg/2007/10/05/linq-to-sql-stored-procedures-with-multiple-results-imultipleresults/ | CC-MAIN-2018-34 | refinedweb | 1,047 | 63.8 |
Namespaces in XML
i am declaring a namepace, but cascade seems to be stripping it
out, take a look at the attached file.
____________________________________________________ from the mailing list:
We are attempting to publish podcasts with Cascade and need to be able to use the ‘itunes’ namespace in the tags (e.g <itunes:name>) we’re getting an error “An error occurred: Error on line 6: The prefix "itunes" for element "itunes:subtitle" is not bound” and wondered if others had run into the same problem or if we’re just missing something.
We’re currently running Cascade 6.4
_________________________________________________ For the rendered page to be valid XML, this namespace prefix needs to be declared before being used. Usually this would be done at the top of the template used to output the page.
You can read more about declaring namespaces in the w3c
spec:.
Feel free to post to our help.hannonhill.com with any additional questions.
Thanks,
Bradley
Screenshot.png 142 KB
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by Tim on 14 Jul, 2010 02:07 PM
Hi Nick,
I believe the namespace may be getting stripped since there aren't any elements using that namespace. Try to prefix one of the elements in your rss content with
<itunes:>and see if that works.
2 Posted by nick.shontz on 14 Jul, 2010 02:27 PM
hey,
That did the trick.
thanks much!
nick
Support Staff 3 Posted by Tim on 14 Jul, 2010 02:35 PM
Perfect! Thanks for the update!
Tim closed this discussion on 14 Jul, 2010 02:35 PM.
nick.shontz re-opened this discussion on 14 Jul, 2010 03:52 PM
4 Posted by nick.shontz on 14 Jul, 2010 03:52 PM
Hey,
looks like was a false alarm. i can use the itunes namespace in the template, but i can't use it in the velocity that i'm using to create the <item>...<item> sections or the velocity that is applied to the default region that creates the channel info.
i'm getting the same error "An error occurred: Error on line 7: The prefix "itunes" for element "itunes:subtitle" is not bound."
any ideas?
nick
5 Posted by Bradley Wagner on 14 Jul, 2010 10:35 PM
Please attach the relevant portions of your Velocity/XSLT format so we can look at it.
6 Posted by nick.shontz on 14 Jul, 2010 10:54 PM
here are the files, included are the template, the default block's xml (it has a customized data definition) and the velocity i'm applying to it. if you remove the <itunes:XXXX> tags from the velocity it works just fine.
7 Posted by nick.shontz on 30 Jul, 2010 02:15 PM
yeah, so still waiting to hear back on this...
8 Posted by Joel on 30 Jul, 2010 02:22 PM
Nick,
My apologies. We're still looking into the cause of this issue. As soon as we have some feedback I will be sure to let you know.
In the interim, please use the suggested workaround below.
Add NS declaration into the Velocity format where the namespace was used. I.e.:
<itunes:subtitle xmlns:subtitle</itunes:subtitle>
Doing it this way:
Support Staff 9 Posted by Tim on 11 Aug, 2010 03:36 PM
Revisiting this issue...since individual regions are rendered as complete XML documents, the namespace will need to be declared in the Format (whether using Velocity or XSL) in order for this to work as expected.
Hope this helps!
Tim closed this discussion on 11 Aug, 2010 03:36 PM. | https://help-archives.hannonhill.com/discussions/general/28-namespaces-in-xml | CC-MAIN-2021-49 | refinedweb | 627 | 73.27 |
So if I look at this correctly, my issue is in the decrypt function with: flag= flag%26;
Do you agree?
However, I'd still like to know that the encrypt function was working OK, at least in a simple preliminary test, with vowels and consonants.
When you're working with arrays, you always need to test or watch carefully, the ENDS of them (high and low), to see that they don't run over or under, their legit range. Vowels to test would be a and e, and u and y (if you have y as a vowel - some do, some don't). Consonants to test would be b and c and x and z.
We know you're a beginner - but we're not going to let you get away with anything anyway! <smile>
> SORRY, but I have only been programming for 3 months. Before that, I knew absolutely nothing!!!
This is a maths problem, not a programming problem.
The first thing is to understand in mathematical terms is how to turn your expression into something that always yields a positive number.
When you figure out say that adding 26 to flag is the answer, THEN it becomes a programming problem.
You seemed to know enough maths to do the rest of the assignment, unless this is just another "I found some code and it doesn't work" jobs.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
well here is what I came up with if anyone is interested:
Code:#pragma warning( disable:4996) #include<stdio.h> //function int FindGCD(int no1,int no2); //for small letters only int InvEuclid(int a,int N) { int r0,r1,r2,q1,x0,x1,x2; r0=N; r1=a; x0=1; r2=r0%r1; q1=r0/r1; x1=-q1; while (r2) { r0=r1; r1=r2; q1=r0/r1; r2=r0%r1; x2=x0-(q1*x1); x0=x1;x1=x2; } if (x0>0) return x0; else return (N+x0); } int main () { char small[27],data[100],cipher[100],decipher[100]; int i=0,j=0,k1,k2,flag,count=0,temp; for (i;i<26;i++){ small[i] = 'a'+i; //printf("%c",small[i]); } // ask user to input message printf("please enter the text message in lower case alphabet ONLY to encrypt\n"); scanf("%[^\n]s",data); fflush(stdin); //ask user to provide at least two mapping i=0; while(1) { printf("\nplease enter the first key 1<k1<26 such that gcd of (k2,26)=1\n"); //first key is: 19 scanf("%d",&k1); printf("\nplease enter the second key 1<k2<26\n "); // second key is: 4 scanf("%d",&k2); flag =FindGCD(k1,26); if (flag==1) break; else printf("\nPlease re-enter the keys"); } //cipher the text and show it to the user while (1) { temp = data[i]; // printf("g%d",flag); if (temp==0) break; flag= data[i]-'a'; if (flag>=0 && flag <=25) { flag=(flag*k1) +k2; printf("%d",flag); cipher[i]=small[flag]; } else { cipher[i]=data[i]; } count++; i++; } // show encrypted text to the user printf("\nthe encrypted string is\n "); while(j<i) { printf("%c",cipher[j]); j++; } printf("\n"); //decipher the ciphered text // printf("c%d\n",count); k1= InvEuclid(k1,26); //printf("%d",k1); i=0; for(i=0;i<count;i++) { flag= cipher[i]-'a'; //printf("%d",flag); if (flag>=0 && flag <=25) { flag=(flag-k2)*k1; printf("%d",flag); decipher[i]=small[flag]; } else { decipher[i]=data[i]; } } j=0; // show user the de-encrypted string printf("\nthe de-encrypted message is: \n "); while(j<i) { printf("%c",decipher[j]); j++; } printf("\n"); return 0; } int FindGCD(int no1,int no2){ int divd,divs,r; if (no1>no2) { divd=no1; divs = no2; } else { divd= no2; divs=no1; } r = divd%divs; while (r>0) { divd = divs; divs = r; r= divd%divs; if (r==1) { return 1; } } if (r==0) { return divs; } return 0; }
Please answer the question, though. Did you test both the functions (encrypt and decrypt), to see whether they both worked properly, on a variety of simple input (and output), that you know the answer to?
Without that test, there's no way to know if your program is working correctly, or not. I certainly can't just look at your code, and expect my muse to come along with the answer, and slap me upside the back of my head with it.
Maybe someone here can just look at it and tell, but really, that's assuming a great deal more than you should, isn't it? You should be the primary tester on your program.
One thing that I noticed -
FAQ > Why fflush(stdin) is wrong - Cprogramming.comFAQ > Why fflush(stdin) is wrong - Cprogramming.comCode:fflush(stdin);
FAQ > Flush the input buffer - Cprogramming.com
Hope this helps
Fact - Beethoven wrote his first symphony in C
> well here is what I came up with if anyone is interested:
All you've done is broken it, and added some prints.
Replacing
flag= flag%26;
with
printf("%d",flag);
So as a result, you now have massively overflowed arrays.
DoesDoesCode:$ ./a.out please enter the text message in lower case alphabet ONLY to encrypt hello world please enter the first key 1<k1<26 such that gcd of (k2,26)=1 19 please enter the second key 1<k2<26 4 EncPos=137 EncPos=80 EncPos=213 EncPos=213 EncPos=270 EncPos=422 EncPos=270 EncPos=327 EncPos=213 EncPos=61 the encrypted string is the de-encrypted message is: hello world
cipher[0]=small[137];
cipher[1]=small[80];
cipher[2]=small[213];
look OK to you?
Yes, it comes up with the "right" decoded answer, but it trashed a lot of memory belonging to someone else doing it. Sooner or later, this program would crash in a random way.
I'm bored with trying, so here's the answer.
But only because I doubt you wrote any of that code to begin with. I can't see how you came up with FindGCD() and InvEuclid(), and not be able to figure out this.
Code:flag=(flag-k2)*k1; //printf("g%d",flag); flag= flag%26; if ( flag < 0 ) flag = flag + 26; decipher[i]=small[flag];
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper. | https://cboard.cprogramming.com/c-programming/152889-can-anyone-see-any-errors-cipher-code-2.html | CC-MAIN-2017-47 | refinedweb | 1,099 | 62.51 |
In this tutorial, you’ll learn how to work with dates, times, and DateTime in Pandas and Python. Working with DateTime in Python and Pandas can be a complicated thing. This guide aims to make the complicated, simple, by focusing on what you need to know to get started and to know enough to discover more on your own. Dates and times are critical forms of data in many domains, including finance, economics, science, and more.
By the end of this tutorial, you’ll have learned how to:
- Load DateTimes effectively in Pandas
- Access DateTime attributes in Pandas
- Filter a Pandas DataFrame based on DateTime filters
- Resample Pandas DataFrames based on DateTimes
- Use DateTime in Pandas groupby
Importing DateTimes in Pandas DataFrames
Pandas intelligently handles DateTime values when you import a dataset into a DataFrame. The library will try to infer the data types of your columns when you first import a dataset. For example, let’s take a look at a very basic dataset that looks like this:
# A very simple .csv file Date,Amount 01-Jan-22,100 02-Jan-22,125 03-Jan-22,150
You can find the file here. Let’s try to import the dataset into a Pandas DataFrame and check the column data types.
# Loading a Small DataSet df = pd.read_csv('') print(df) # Returns # Date Amount # 0 01-Jan-22 100 # 1 02-Jan-22 125 # 2 03-Jan-22 150
This is great! It looks like everything worked fine. Not so fast – let’s check the data types of the columns in the dataset. We can do this using the
.info() method.
# Checking column data types print(df.info()) # Returns # Data columns (total 2 columns): # # Column Non-Null Count Dtype # --- ------ -------------- ----- # 0 Date 3 non-null object # 1 Amount 3 non-null int64 # dtypes: int64(1), object(1) # memory usage: 176.0+ bytes
We can see that the data type of the
Date column is
object. This means that the data are stored as strings, meaning that you can’t access the slew of DateTime functionality available in Pandas.
Using Pandas parse_dates to Import DateTimes
One easy way to import data as DateTime is to use the
parse_dates= argument. The argument takes a list of columns that Pandas should attempt to infer to read. Let’s try adding this parameter to our import statement and then re-print out the info about our DataFrame:
# Parsing Dates in .read_csv() df = pd.read_csv('', parse_dates=['Date']) print(df.info()) # Returns # Data columns (total 2 columns): # # Column Non-Null Count Dtype # --- ------ -------------- ----- # 0 Date 3 non-null datetime64[ns] # 1 Amount 3 non-null int64 # dtypes: datetime64[ns](1), int64(1) # memory usage: 176.0 bytes
We can see that our column is now correctly imported as a DateTime format.
Using to_datetime to Convert Columns to DateTime
The example above worked quite well when we imported a straightforward date format. Now let’s take a look at a more complicated example. We’ll load data from here, that looks like this:
# More complex datetime formats Date,Close Price,High Price,Low Price,Open Price,Volume Date,Close Price,High Price,Low Price,Open Price,Volume 2021-12-10 05AM,48246.57,48359.35,48051.08,48170.66,827.39761 2021-12-10 06AM,47847.59,48430,47810.81,48249.78,1296.18883 2021-12-10 07AM,47694.62,48037.48,47550,47847.59,2299.85298 2021-12-10 08AM,48090.35,48169.06,47587.39,47694.62,1371.25447
When we pass in the
Date column as we did earlier, Pandas can’t interpret the date format. Let’s see what this looks like. The code below shows that the date wasn’t actually read as a DateTime format, but rather continues to exist as a string.
import pandas as pd df = pd.read_csv('', parse_dates=['Date']) print(df.info()) # Returns: # <class 'pandas.core.frame.DataFrame'> # RangeIndex: 337 entries, 0 to 336 # Data columns (total 6 columns): # # Column Non-Null Count Dtype # --- ------ -------------- ----- # 0 Date 337 non-null object # 1 Close Price 337 non-null float64 # 2 High Price 337 non-null float64 # 3 Low Price 337 non-null float64 # 4 Open Price 337 non-null float64 # 5 Volume 337 non-null float64 # dtypes: float64(5), object(1) # memory usage: 15.9+ KB
One of the ways we can resolve this is by using the
pd.to_datetime() function. The function takes a Series of data and converts it into a DateTime format. We can customize this tremendously by passing in a format specification of how the dates are structured.
The
format= parameter can be used to pass in this format. The format codes follow the 1989 C standard. Of course, chances are you don’t actually know the C standard for dates off by hard. The full list can be found here, but the table below breaks down a few of the most important ones.
Let’s see how we can make use of these format codes to convert our string into a properly formatted DateTime object. In order to do this, we pass in the string using the percent signs and any other formatting exactly as it is, including spaces and hyphens.
# Converting a Complex String to DateTime df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d %H%p')
What we did here was pass the Series into the
.to_datetime() function as well as the format. The format matches the complex pattern and successfully transformed the string into a DateTime object.
It’s not always ideal to convert the column after loading your DataFrame. Because of this, in the next section you’ll learn how to pass in a formatter to the import statement.
Using date_parser to Import Complex DateTime
While you can always convert a column into a DateTime object after loading the DataFrame, it can be a lot cleaner to do this as you’re loading the DataFrame in the first place. This is where the
date_parser= parameter comes into play. The parameter takes a function that instructs Pandas how to interpret the string as a DateTime object.
Since it’s a function you won’t use elsewhere, this is an ideal candidate for an anonymous lambda function. Let’s create the function and assign it to the variable
parser. We can then pass this function into the
.read_csv() function. The function itself will make use of the
.strptime() method which converts a string into a DateTime object.
# Creating a function to parse dates import pandas as pd from datetime import datetime parser = lambda x: datetime.strptime(x, '%Y-%m-%d %H%p') df = pd.read_csv('', parse_dates=['Date'], date_parser=parser)
The function relies on importing the
datetime module from the
datetime library. The function takes a single argument,
x and uses the format to convert a string into a DateTime object.
DateTime Attributes and Methods in Pandas
Now that you’ve successfully imported your Pandas DataFrame with properly formatted dates, let’s learn how you can make use of the special attributes that come along with them. For example, you can easily access information about the date time, such as the weekday, month name, and more. This is because the DateTime object contains significantly more information than the representation shows.
DateTime Attributes in Pandas
Let’s take a look at a few. Namely, you’ll learn how to create columns containing the weekday, quarter, and hour of the day:
# Parsing out DateTime Parts df['Weekday'] = df['Date'].dt.dayofweek df['Quarter'] = df['Date'].dt.quarter df['Hour'] = df['Date'].dt.hour print(df[['Date', 'Weekday', 'Quarter', 'Hour']].head()) # Returns: # Date Weekday Quarter Hour # 0 2021-11-24 05:00:00 2 4 5 # 1 2021-11-24 06:00:00 2 4 6 # 2 2021-11-24 07:00:00 2 4 7 # 3 2021-11-24 08:00:00 2 4 8 # 4 2021-11-24 09:00:00 2 4 9
There’s a whole slew of data hiding underneath a DateTime object! This allows us to create complex filter on the DataFrame. These attributes can be accessed using the
.dt accessor, which is very similar to the
.str accessor. You then gain access to vectorized versions of accessing DateTime values.
DateTime Methods in Pandas
Similarly, you can apply DateTime methods to your DataTime columns. These look similar to the attributes, but include the
() method call features. The reason these are different is that they represent some form of a calculation on the data.
In the example above, you used the
.dayofweek accessor to get a numeric representation of the weekday. However, it may be good to access, for example, the name of the weekday. You can do this by using the
.day_name() method, which returns the string representation of the weekday.
# Accessing the String Name of a Week Day print(df['Date'].dt.day_name()) # Returns: # 0 Wednesday # 1 Wednesday # 2 Wednesday # 3 Wednesday # 4 Wednesday # ...
Similarly, you can access different calculated attributes. For example, you can calculate the largest and smallest dates using the
.max() and
.min() methods. Let’s see what this looks like:
# Calculating Max and Min DateTimes print(df['Date'].max()) print(df['Date'].min()) # Returns: # 2021-12-14 05:00:00 # 2021-11-24 05:00:00
You can go even further an subtracted these two values. This returns a
TimeDelta object, which provides a representation of the differences in DateTimes.
# Subtracting DateTimes in Pandas print(df['Date'].max() - df['Date'].min()) # Returns: 20 days 00:00:00
This let’s you see that there is a range of 20 days in our dataset!
Filtering a Pandas DataFrame Based on DateTimes
In this section, you’ll learn how to use Pandas DateTimes to filter a DataFrame. This process is incredibly intuitive and very powerful. In order to take most advantage of this, it’s best to set the
Date column to the index of the DataFrame. You can do this using the
df.set_index() method, which takes a column (or columns) to be set as the new index (or indices).
# Setting a Pandas DataFrame Index df = df.set_index('Date') print(df.head()) # Returns: # Close Price High Price Low Price Open Price Volume # Date # 2021-11-24 05:00:00 56596.36 56790.76 56483.12 56560.01 1112.23090 # 2021-11-24 06:00:00 56607.00 56797.02 56214.85 56596.36 1505.32570 # 2021-11-24 07:00:00 56497.47 56702.47 56389.00 56607.00 1238.54469 # 2021-11-24 08:00:00 56849.02 57560.00 56389.00 56497.46 2582.79378 # 2021-11-24 09:00:00 56682.55 56996.47 56649.93 56849.01 1314.82999 #
While this doesn’t look much different than it did before, it now allows us to easily filter our data. Remember, Pandsa indexing works in the format of
[row, column]. Because of this, we can simply pass in a DateTime that we want to select. What’s more, is that we can actually just pass in a date part in order to filter the DataFrame. Let’s try selecting
'2021-12-10'.
# Filterting Based on Only a Date print(df.loc['2021-12-10'].head()) # Returns: # Close Price High Price Low Price Open Price Volume # Date # 2021-12-10 05:00:00 48246.57 48359.35 48051.08 48170.66 827.39761 # 2021-12-10 06:00:00 47847.59 48430.00 47810.81 48249.78 1296.18883 #
We can reduce this even further! For example, you could simply pass in a year (in the format
'yyyy') or year-month parts (
'yyyy-mm').
Let’s say you only wanted to filter your DataFrame to only show data from December 2020. Similarly, say you only wanted to see the Close Price for that month. You could use the
.loc accessor to filter the DataFrame:
# Filtering Only a Date Part and Column print(df.loc['2021-12', 'Close Price'].head()) # Returns: # Date # 2021-12-01 00:00:00 57577.07 # 2021-12-01 01:00:00 56994.58 # 2021-12-01 02:00:00 57261.52 # 2021-12-01 03:00:00 57362.01 # 2021-12-01 04:00:00 57054.36
Because you’re selecting an index (rather than filtering data), you can even include index ranges. This works by including data in your index selection ranges and separating them using a colon (
:). Let’s see how you can select only the dates covering
'2021-12-03' through
'2021-12-06':
# Filtering on a Range of Dates print(df.loc['2021-12-03':'2021-12-06']) # Returns: # Close Price High Price Low Price Open Price Volume # Date # 2021-12-03 00:00:00 56513.44 56772.24 56419.09 56484.26 847.92592 # 2021-12-03 01:00:00 56494.53 56727.18 56354.68 56513.44 1051.81425 # 2021-12-03 02:00:00 56257.75 56576.52 56050.81 56494.53 1394.46500 # 2021-12-03 03:00:00 56323.01 56528.81 56089.00 56257.74 1113.47353 # 2021-12-03 04:00:00 56587.40 56700.00 56229.98 56318.89 1318.86346
In the following section, you’ll learn how to take on a more advanced topic: resamping your data.
Resampling Pandas DataFrames using DateTimes
The process of resampling refers to changing the frequency of your data. You have two main methods available when you want to resample your timeseries data:
- Upsampling: increasing the frequency of your data, such as from hours to minutes
- Downsampling: decreasing the frequency of your data, such as from hours to days
Both methods require you to invent data, since the data points don’t actually exist. In many cases, the method of how you invent that data is determined logically. For example, when downsampling average values, it may make sense to return the average of all the periods. If you wanted to return the downsampled high values, you may provide the maximum value.
The data in our dataset is likely already upsampled. The frequency of the data is hourly and is likely tracked at the source much, much more frequency. That being said, the data is still quite granular and shows a lot of variations in the hourly ebb and flow.
The Pandas
.resample() method allows you to resample a dataset with a timeseries index. The method accepts a periodicity that you want to resample to, such as
'W' for week or
'H' for hour. Since you’ll want to provide some method by which to invent your data, you can chain in another method, such as
.mean(), to resample with that aggregation function. Let’s resample our hourly data to daily data:
# Resampling an Entire DataFrame with the Same Method df = df.resample('D').mean() print(df.head()) # Returns: # Close Price High Price Low Price Open Price Volume # Date # 2021-11-24 56694.938947 56992.628947 56471.820526 56664.502632 1690.892599 # 2021-11-25 58265.386250 58496.355000 58005.205417 58189.446250 1756.396467 # 2021-11-26 55535.068333 55949.997083 55287.681667 55753.348333 2746.994611 # 2021-11-27 54612.505833 54867.189583 54382.247500 54570.950417 1238.208315 # 2021-11-28 54639.055417 54863.042500 54279.283750 54532.270000 1506.821404
Here we can see that the hourly data was downsampled to daily data.
Resampling Pandas Timeseries with Different Methods
In many cases, you won’t want to resample your DataFrame using the same method for every column. For example, you may want to resample the High Price column with the
.max() method and the Low Price column with the
.min() method.
This can be done by chaining the
.agg() method on the
.resample() method. The
.agg() method allows you to pass in a dictionary containing key-value pairs of the column and the method you want to aggregate with. Let’s see how we can pass in different methods for different columns:
# Resampling Data with Different Methods df = df.resample('D').agg({ 'Close Price': 'last', 'High Price': 'max', 'Low Price': 'min', 'Open Price': 'first', 'Volume': 'sum' }) print(df.head()) # Returns: # Close Price High Price Low Price Open Price Volume # Date # 2021-11-24 57138.29 57560.00 55837.00 56560.01 32126.95939 # 2021-11-25 58960.36 59398.90 57000.00 57138.29 42153.51522 # 2021-11-26 53726.53 59150.00 53500.00 58960.37 65927.87066 # 2021-11-27 54721.03 55280.00 53610.00 53723.72 29716.99957 # 2021-11-28 57274.88 57445.05 53256.64 54716.47 36163.71370
You can see by passing different methods for each column how different our dataset turned out to be compared to the one where only the
.mean() method was applied! This allows you to gain much more finely-tuned control over your data!
Exercises
It’s time to check your understanding! Try and complete the exercises below. If you need help or want to verify your solution, simply toggle the section below.
Conclusion and Recap
In this tutorial, you learned how to work with DateTimes in Pandas with Python! The section below provides a quick recap of everything that you learned:
- There are a number of ways to parse dates and times when loading your DataFrame. If passing the columns into the
parse_dates=parameter doesn’t work, define a parser function and pass the function into
date_parser=parameter.
- DateTime values in Pandas have attributes and methods that can be accessed using the
.dtaccessor
- DateTime values can be resampled, either up or down, to provide either higher or lower granularity in your datasets.
Additional Resources
To learn more about related topics, check out the tutorials below: | https://datagy.io/pandas-datetime/ | CC-MAIN-2022-27 | refinedweb | 2,928 | 74.49 |
CodeGuru Forums
>
Visual C++ & C++ Programming
>
C++ (Non Visual C++ Issues)
> Constructor with structs
PDA
Click to See Complete Forum and Search -->
:
Constructor with structs
Luc484
February 8th, 2008, 12:05 PM
Hi! I defined an array of structs this way:
mystruct name[10];
I saw this calls the constructor with no parameters, which needs to be defined. But I need to call another constructor with some parameters. Is it possible to call it on each element of the array after I created them, using a loop, for instance?
Thanks!
Paul McKenzie
February 8th, 2008, 12:18 PM
But I need to call another constructor with some parameters. Is it possible to call it on each element of the array after I created them, using a loop, for instance?
Thanks!If you mean by "it" the constructor, no. You would need to call another function, maybe called Init() that sets the members in a loop.
Instead of arrays, you can use a vector, which does what you want:
#include <vector>
#include <iostream>
struct mystruct
{
mystruct(int x) : nx(x) {}
int nx;
};
std::vector<mystruct> name(10, 4); // 10 mystructs, initialized using the int constructor
int main()
{
for (int i = 0; i < name.size(); ++i )
std::cout << name[i].nx << "\n";
}
This shows that all of the name.nx values are 4.
Regards,
Paul McKenzie
Luc484
February 8th, 2008, 01:05 PM
Oh, it's not possible... that's why I was not able to find it anywhere!
I'm asked to use a simple array, so I cannot use the vector class. And I'm asked to complete a given constructor with an unknown number of parameters (...). I mean that I'm given a draft of the structure with this constructor which has to be completed:
structname(...) ...
So maybe I simply have to create a constructor with 0 parameters. I can't think of anything else...
Thanks again!
0xC0000005
February 8th, 2008, 01:15 PM
Someone correct me if this is not legal, but you could try this:
mystruct name[10] = { mystruct(0), mystruct(1), mystruct(2), ... };
Luc484
February 8th, 2008, 03:48 PM
Unfortunately I have a case where the number of elements is 100. I don't think this method would not be good...
Paul McKenzie
February 8th, 2008, 05:17 PM
I'm asked to use a simple array, so I cannot use the vector class.Sorry for the rant, but I wonder who is giving these instructions, not just to you but to a lot of other posters? It seems every 10 or so posts says "I can't use this" or "I can't use that". No wonder more and more programmers are going to Java or C# -- they don't have to put up with this nonsense.
Back to the problem -- you are using a primitive type called an array. It is limited to what it can do with respect to initialization. If you want to construct a contiguous set of objects, and also call the non-default constructor, you must use something else, like a vector.
Regards,
Paul McKenzie
Luc484
February 8th, 2008, 05:35 PM
:-)
You're right but I'm attending a course of programming, and it seems I have to use only those structures which have been explained before the exercise was given. I have no problems in doing things like this, but I was wondering if it was possible to do it with particular restrictions. Otherwise it was quite simple to do it.
Thanks for the answer! And... be patient :-).
codeguru.com | http://forums.codeguru.com/archive/index.php/t-445638.html | crawl-003 | refinedweb | 593 | 71.34 |
function nested functions.
Hello, I noticed that on line 3 of the fourth example, only add's x is mentioned. However, in all other comments add's x and y are both mentioned, so I believe it would be better if this comment is updated to mention both add's x and y for the sake of completeness and consistency.
Agreed, and done. Thanks for pointing out the inconsistency.
Hello, it's not exactly the same but it does the job .
Hi Silviu!
Good job, working as indented.
Try limiting your lines to 80 characters in length to allow proper formatting on small screens.
Line 4,11: Initialize your variables.
@getSmallerInteger and @getLargetInteger are almost identical, DRY (Don't repeat yourself).
Line 18: Bad function name, something like printOrdered would be better
Line 25: A variable cannot be "called"
Line 25, 32: Missing std::endl.
thank you for the tips , i will try to do my best and correct myself .
I have succeeded with a little help. I moved my functions to main.
Line 5: Missing include
Line 11, 14, 17: Initialize your variables
Line 12, 15: <> is not an operator. std::cout is used to output text, you need std::cin.
Line 26, 28, 29: Missing backslash before 'n'
something is wrong with the edit comments.
here is my original code. 🙂
My code =)
I wonder if there's a difference in time between the two. In the second case, x is created and then destroyed at every step, so it should probably put the computer to some more work, am I right?
Note: In the first case I do not intend to use x in the outer block. Also, I know I can print simply (i*i), but using x as a temporary variable is only for the sake of exemplifying.
Hi Cosmin!
There's no difference. The same question has been discussed today over here
TLDR: Your second loop will be converted to your first loop at compile time.
My code works, but I think it is not what you asked for, teacher.
Hi Weckersduffer!
The program does what the quiz asked you do, however, the code doesn't.
Instead of simply printing the values in a different order try swapping them.
Here's an example of what's supposed to happen:
Initialize your variables.
main should return an int.
Give it another try, you can do it!
Is it bad that my code's 62 lines and yours is 27? Am I writing too much code?
Hi Nick!
You can't compare code by the amount of lines, you could fit a program in one line if you really wanted to.
More lines can mean that you have unnecessary code, it could also mean that you've got better documented code or you simply took a different approach.
In your case it's both. I see a lot of comments and your code is easy to understand. But,
DRY (Don't repeat yourself)! Lines 30-33 and 38-41 are almost the same, Line 16-17, 22-23 are almost the same, Line 49-60 can be wrapped up.
As for your code itself:
bool's are true/false, not 1/0. It works, but that's not how they're meant to be used.
Initialize your variables.
You wouldn't have needed forward declarations for your functions if you moved main to the bottom, this just adds extra work when changing a functions signature.
Add some empty lines for readability.
Use '\n' or std::endl after outputting text unless you need to next output to be on the same line.
Here is my attempt. I see I did it basically the same way as Alex, except that I assigned the larger value to the temporary variable instead of the smaller one. I then assign the smaller value to the variable that should hold the smaller value and finally I assign the larger value from the temporary variable to the variable that should hold the larger value. But otherwise I think it's the same in principle.
#include <iostream>
int getUserInput(int displayRequest)
{
int userInput{};
if(displayRequest == 0)
{
std::cout << "Enter an integer: ";
std::cin >> userInput;
}
else
{
std::cout << "Enter a larger integer: ";
std::cin >> userInput;
}
return userInput;
}
int main()
{
int displayRequest{0};
// numSmall is defined here, and can be seen and used in main() after this point
int numSmall = getUserInput(displayRequest);
++displayRequest;
// numLarge is defined here, and can be seen and used in main() after this point
int numLarge = getUserInput(displayRequest);
if(numSmall > numLarge)
{
std::cout << "Swapping the values . . . " << std::endl;
// numTemp is defined here, and can be seen and used only in this nested block after this point
int numTemp = numSmall;
numSmall = numLarge;
numLarge = numTemp;
} // numTemp destroyed here, it cannot be seen or used after this point because the poor thing is dead!
std::cout << "The smaller value is " << numSmall << std::endl;
std::cout << "The larger value is " << numLarge << std::endl;
return 0;
} // numSmall, numLarge and displayRequest all destroyed here, they cannot be seen or used after this point because they are all dead!
Yup, it doesn't matter whether you assign the smaller or the larger to the temp value when you do the swap.
here is my code
Here's what I ended up with:
Really glad I found this tutorial, its excellent, and I plan on using it to build a strong foundation for learning to create games in UnrealEngine.
Thank you for making this Alex.
is this code alright?
You're close, but not quite there. Your swap function does two different jobs (swaps and prints) and doesn't even do a full swap.
You also have redundant code to do the printing (in two different places).
>doesn’t even do a full swap
I thought you were wrong about this for a second, but after a little bit of thinking you did make sense.
Is my second attempt alright now?
Yes, that's great!
Alex,
I created a functioning code for the quiz, but I don't understand why it works. Since I'm not returning x and y, why does it switch when it prints outside of the results?
Returning values is only necessary if you want to pass a value from a function back to the caller. Since you don't make any function calls here, that isn't necessary. The x and y inside the if-statement nested block refer to the x and y declared near the top of main(), so when you swap their values, the values stay swapped even when the nested block is done.
Man that first question really got me angry. I knew exactly how to do it as I had done that on my own when I was first starting to make a calculator program. However, when I try it this time, I type in literally the exact same thing as you after 15 minutes of struggling and trying to figure out what was wrong. and it still doesn't work. But when I actually copy and pasted it worked. I spent another 15 minutes comparing my rewrite to the copy-paste and there was literally no difference at all, even all whitespace was the same. I know this isn't really a question or relevant at all but it's so bizarre I had to.
look at the errors the compiler gives that will indicate to problem and also use the debugger to trace back mistakes.
No i never got any errors, it compiled and linked fine. When i input my integers it would swap it for no reason. It's hard to explain.
can you copy paste your code here?
Nevermind haha. Even after 20 minutes of looking this over yesterday i never noticed i defined my integers with the opposite names. int smaller was named int larger and vise versa. I'm dumb 🙂
If it helps you feel any better, I run into similar occurrences occasionally. It always leaves me scratching my head.
I have made two programs here, one before looking into the solution and one after looking into it.
Before:
After:
The one after looking into the solution obviously isnt that far away from it.
I really have to thank you for making these tutorials :).
It doesn't have to be exactly the same as solution, as there are many ways to do the same thing
Here for question 1.
#include<iostream>
using namespace std;
int main()
{
int x,y;
cout<<"enter first integer:- ";
cin>>x;
cout<<"enter greater integer :-";
cin>>y;
if(y<x)
{
int swap;
swap=y;
y=x;
x=swap;
cout<<"numbers not in required order\n\nafter swaping:- "<<endl;
}
cout<<"x= "<<x<<endl<<"y= "<<y;
}
I am new to this and already finding this website helpful.
great work admin AND GREAT MAINTENANCE.
This is my solution for the Quiz 1:
io.h
io.cpp
main.cpp
how about this?
I did the same thing, just made a function and added a bonus feature where if the user enter 2 numbers as the same it will say "Both values are the same!"
Your code is right but the idea is to use variable scope to do a swap. For that you should override the variables content if it's necesary
My dear c++ Teacher,
Please let me point out that in the solution program of first quiz you use "if" statement though you cover it in a later lesson (5.2).
With regards and friendship.
I introduce "if statements" in the lesson on boolean variables. That introduction should be enough to progress past these lessons.
My dear c++ Teacher,
Please let me say that if you add "#include <iostream>" in 5th program it could help beginners.
With regards and friendship.
Done. Thanks for pointing that out.
My dear c++ Teacher,
Please let me say that first program in "Shadowing" subsection outputs 105, and second, 1010. Obviously std::endl is needed.
With regards and friendship.
Indeed. Thanks for pointing that out!
My dear c++ Teacher,
Please let me ask what I can not understand. In 4th snippet (program), only variable x is referred in comments, except in line 17 where both variables x and y are referred. What do you mean?
With regards and friendship.
Just a simple inconsistency. I've updated the example to reference both x and y in all places.
This code compiled and executed successfully, but just wanted you to take a peek at it, is it 100% accurate?
Also could I have used void instead of int compare_values()? Code isn't throwing an error with int even if no value is 'returned'.:
it, but this doesn’t!
Name (required)
Website | http://www.learncpp.com/cpp-tutorial/4-1a-local-variables-and-local-scope/ | CC-MAIN-2018-13 | refinedweb | 1,781 | 73.37 |
NAME | Synopsis | Description | Parameters | Attributes | Return Values | Errors | See Also
#include <X11/extensions/Xtsol.h> Status XTSOLsetSSHeight(display, screen_num, newheight); Display *display; int screen_num; int newheight;
XTSOLsetSSHeight() sets the height of the trusted screen stripe at the bottom of the screen. Currently the screen stripe is present only on the default screen. The client must have the Trusted Path process attribute.
Specifies a pointer to the Display structure; returned from XOpenDisplay.
Specifies the screen number.
Specifies the height of the stripe in pixels.
See attributes(5) for descriptions of the following attributes:
None
Lack of privilege
Not a valid screen_num or newheight.
Accessing and Setting the Screen Stripe Height in Solaris Trusted Extensions Developer’s Guide
NAME | Synopsis | Description | Parameters | Attributes | Return Values | Errors | See Also | http://docs.oracle.com/cd/E19957-01/819-7307/6n98tm9a2/index.html | CC-MAIN-2014-10 | refinedweb | 126 | 57.98 |
Welcome.
If you like videos like this, it helps to tell Google+ with a click [googleplusone]
Code From the Video
Java Hash Table HashFunction.java
import java.util.Arrays; // If we think of a Hash Table as an array // then a hash function is used to generate // a unique key for every item in the array. // The position the item goes in is known // as the slot. Hashing doesn't work very well // in situations in which duplicate data // is stored. Also it isn't good for searching // for anything except a specific key. // However a Hash Table is a data structure that // offers fast insertion and searching capabilities. public class HashFunction { String[] theArray; int arraySize; int itemsInArray = 0; public static void main(String[] args) { HashFunction theFunc = new HashFunction(30); // Simplest Hash Function // String[] elementsToAdd = { "1", "5", "17", "21", "26" }; // theFunc.hashFunction1(elementsToAdd, theFunc.theArray); // Mod Hash Function // This contains exactly 30 items to show how collisions // will work String[] elementsToAdd2 = { "100", "510", "170", "214", "268", "398", "235", "802", "900", "723", "699", "1", "16", "999", "890", "725", "998", "978", "988", "990", "989", "984", "320", "321", "400", "415", "450", "50", "660", "624" }; theFunc.hashFunction2(elementsToAdd2, theFunc.theArray); // Locate the value 660 in the Hash Table theFunc.findKey("660"); theFunc.displayTheStack(); } // Simple Hash Function that puts values in the same // index that matches their value public void hashFunction1(String[] stringsForArray, String[] theArray) { for (int n = 0; n < stringsForArray.length; n++) { String newElementVal = stringsForArray[n]; theArray[Integer.parseInt(newElementVal)] = newElementVal; } } // Now let's say we have to hold values between 0 & 999 // but we never plan to have more than 15 values in all. // It wouldn't make sense to make a 1000 item array, so // what can we do? // One way to fit these numbers into a 30 item array is // to use the mod function. All you do is take the modulus // of the value versus the array size // The goal is to make the array big enough to avoid // collisions, but not so big that we waste memory public void hashFunction2(String[] stringsForArray, String[] theArray) { for (int n = 0; n < stringsForArray.length; n++) { String newElementVal = stringsForArray[n]; // Create an index to store the value in by taking // the modulus int arrayIndex = Integer.parseInt(newElementVal) % 29; System.out.println("Modulus Index= " + arrayIndex + " for value " + newElementVal); // Cycle through the array until we find an empty space while (theArray[arrayIndex] != "-1") { ++arrayIndex; System.out.println("Collision Try " + arrayIndex + " Instead"); // If we get to the end of the array go back to index 0 arrayIndex %= arraySize; } theArray[arrayIndex] = newElementVal; } } // Returns the value stored in the Hash Table public String findKey(String key) { // Find the keys original hash key int arrayIndexHash = Integer.parseInt(key) % 29; while (theArray[arrayIndexHash] != "-1") { if (theArray[arrayIndexHash] == key) { // Found the key so return it System.out.println(key + " was found in index " + arrayIndexHash); return theArray[arrayIndexHash]; } // Look in the next index ++arrayIndexHash; // If we get to the end of the array go back to index 0 arrayIndexHash %= arraySize; } // Couldn't locate the key return null; } HashFunction(int size) { arraySize = size; theArray = new String[size]; Arrays.fill(theArray, "-1"); } public void displayTheStack() { int increment = 0; for (int m = 0; m < 3; m++) { increment += 10; for (int n = 0; n < 71; n++) System.out.print("-"); System.out.println(); for (int n = increment - 10; n < increment; n++) { System.out.format("| %3s " + " ", n); } System.out.println("|"); for (int n = 0; n < 71; n++) System.out.print("-"); System.out.println(); for (int n = increment - 10; n < increment; n++) { if (theArray[n].equals("-1")) System.out.print("| "); else System.out .print(String.format("| %3s " + " ", theArray[n])); } System.out.println("|"); for (int n = 0; n < 71; n++) System.out.print("-"); System.out.println(); } } }
Hi Derek,
Great tutorial.
The implementation given above, is it linear probing (open addressing) collision avoidance strategy?
Thanks.
Thank you 🙂 Yes those are the technical words used to describe this type of hash table
a dumb question:
Why do u need to do the following line?
arrayIndex%=arraySize;
is it coz if u keep on incrementing the value might go above 30(arraysize)
If arrayIndex is equal to arraySize, this is a shortcut way to turn arrayIndex back to zero and then start back at the beginning of the array
Hi Derek, excellent tutorial.
in the findKey method, shouldn’t you be using equals method instead of == check ( int this line
if (theArray[arrayIndexHash] == key)
great job
Thank you very much 🙂
I am studying for a google interview and I came across your tutorial, really awesome!
Just a quick note, doesn’t the system go into infinite loop because at line 135:
135 arrayIndexHash %= arraySize;
you return to the start of the hash and you keep going until you “find” it. If I enter 411, the system go into finite loop mode.
Otherwise, really awesome tutorial and nicely commented code Derek, thanks!
Yes you are correct. I was supposing that the key would be in there and I shouldn’t have done that. Sorry about that. That is what happens sometimes when i write code out of my head.
Good luck on your interview 🙂
Hi, I am not sure, but don’t you have bug in findKey?
while (theArray[arrayIndexHash] != "-1") {..}
– in case, that the value is presented in the array and the array is full, the loop will be infinite. Other case is that I miss something.
Anyway, great job with algorithms ;D,
thanks a lot.
correction:
in case, that the value is *NOT* presented
Because of this line Arrays.fill(theArray, “-1”) you’ll never have to worry about that problem. Sorry about any confusion
Hi Derek,
The tutorials are really great and i would expect from you that you come up with more and more tutorials like this on complete Java, J2EE, Frameworks like Spring Hibernate Ant Maven etc. Also if you could creat DVDs of all the topics (and many more), i mentioned above that would be really great and we can purchase them. As i would like to acquire indepth knowledge on Java and related stuffs.
Thanks
Channa
Hi Channa,
I will definitely cover all of the J2EE topics you mentioned. I just want to get the Android tutorial and C tutorial done first. Thank you for the requests. I’ll always provide my videos for free. I don’t plan on ever selling them.
Thank you, Derek!
I was wondering how to get out of the while loop, when the value is not found.
You’re very welcome 🙂 I’m glad I could help.
Hi Derek,
point to point explanation. excellent tutorials. expecting more Java J2ee framework tutorials from you. Thank you very much.
Thank you 🙂 I’ll cover jee as soon as possible
115 int arrayIndexHash = Integer.parseInt(key) % 29;
in the above statement can we choose any integer in the place of 29 which is less than 30? why have we chosen 29 instead of 30?
The 29 represents all the indexes. 0 through 29 | http://www.newthinktank.com/2013/03/java-hash-table/?replytocom=22970 | CC-MAIN-2020-45 | refinedweb | 1,159 | 64.3 |
IPTC Properties Should be Defined Completely and Independently of the Drew Library
----------------------------------------------------------------------------------
Key: TIKA-842
URL:
Project: Tika
Issue Type: Improvement
Components: metadata
Affects Versions: 1.0
Reporter: Ray Gauss II
Fix For: 1.1
All of the IPTC XMP specification should be defined in tika-core and should not be reliant
on the Drew Noakes library as it is incomplete in its support of the standard and the properties
are not defined in proper namespaces or prefixed.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see: | http://mail-archives.us.apache.org/mod_mbox/tika-dev/201201.mbox/%3C1452650685.45470.1326739540395.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2019-30 | refinedweb | 103 | 51.38 |
Prerequisites: Attach a Mesh as Visual
This tutorials demonstrates how the user can create composite models directly from other models in the Gazebo Model Database by using the <include> tags and <joint> to connect different components of a composite model.
Adding a laser to a robot, or any model, is simply a matter of including the sensor in the model.
Go into your model directory from the previous tutorial:
cd ~/.gazebo/models/my_robot
Open
model.sdf in your favorite editor.
Add the following lines directly before the
</model> tag near the end of the file.
<include> <uri>model://hokuyo</uri> <pose>0.2 0 0.2 0 0 0</pose> </include> <joint name="hokuyo_joint" type="fixed"> <child>hokuyo::link</child> <parent>chassis</parent> </joint>
The
<include> block tells Gazebo to find a model, and insert it at a
given
<pose> relative to the parent model. In this case we place the
hokuyo laser forward and above the robot. The
<uri> block tells gazebo
where to find the model inside its model database (note, you can see a
listing of the model database uri used by these tutorials
here, and at the corresponding git
repository).
The new
<joint> connects the inserted hokuyo laser onto the chassis of the robot. The joint is
fixed to prevent it from moving.
The
<child> name in the joint is derived from the hokuyo model's SDF, which begins with:
<?xml version="1.0" ?> <sdf version="1.4"> <model name="hokuyo"> <link name="link">
When the hokuyo model is inserted, the hokuyo's links are namespaced with their model name. In this case the model name is
hokuyo, so each link in the hokuyo model is prefaced with
hokuyo::.
Now start gazebo, and add the robot to the simulation using the Insert tab on the GUI. If the hokuyo model does not exist locally, Gazebo will pull the model from the Model Database. Alternatively, manually download the model files to your local cache:
cd ~/.gazebo/models wget -q -R *index.html*,*.tar.gz --no-parent -r -x -nH
Once gazebo is launched, you should see the robot with a laser attached.
(Optional) Try adding a camera to the robot. The camera's model URI is
model://camera. For reference, the SDF documentation can be found here.
Next: Make a Simple Gripper | http://gazebosim.org/tutorials/?tut=add_laser | CC-MAIN-2021-49 | refinedweb | 385 | 65.62 |
You are not logged in.
Pages: 1
What is the optimal way to test for equality between two 2D arrays? I wrote some code that works but I don't know if it is optimal. Could someone please post code that will run more efficiently than mine? My comparision method is posted below:
Note: packet is a struct that contains a 2d array. The 2D array called payload holds next hop info for a node and distance vector info.
int compare(packet *array1, packet *array2, int row, int col) { int result=1; int i, c; for(i=0; i<row; i++) { for(c=0; c<col; c++) { if(array1->payload[i][c]!=array2->payload[i][c]) { result=0; return result; } } } return result; }
If it's just strict exact equality in the entire array that you care
about, how about a simple memcmp()? Ie:
int compare(packet *array1, packet *array2, int row, int col) { return (memcmp (array1->payload, array2->payload, sizeof (array1->payload[0][0]) * row * col)); }
Of course, if these are fixed-size non-dynamic arrays, you could
get away with dropping row and col args, and simply use
"sizeof (array1->payload)", too...
Thanks. I just found out that memcmp and memcpy are very efficient methods because they were written in assembly is that true? Other than using memcmp is there another way to optimize checking for equality between two 2D arrays (i.e. would mapipulating pointers so that the 2D array could be treated as a 1D array increase efficiency)?
Yes, generally, most decent libc's will have a very efficient
memcmp() and memcpy(), most likely written in highly optimized
assembly... It's a pretty safe bet to assume you can't improve on
their efficiency, at least...
But, as for other methods not involving the use of optimized libc
functions... Well, you might gain some slight efficiency by going
with your own pointer arithmetic instead of using indexing... Ie.,
assuming "payload" is defined as a 2D array of ints:
int compare(packet *array1, packet *array2, int row, int col) { int *p1, *p2, *pend; p1 = (int*) (array1->payload); p2 = (int*) (array2->payload); pend = p1 + (row * col); for (; p1 < pend; p1++, p2++) { if (*p1 != *p2) return (*p1 - *p2); } return (0); }
I wouldn't expect a very noticable increase in efficiency, though...
Though, if the arrays are fairly large and/or this comparison
function is called quite often, it might add up to a noticable amount...
I only just stumbled upon this by accident but thought I should clarify some major problems with the replies as someone else might believe what is written here.
First in general you cannot use memcp() to compare 2-D arrays since there may be padding bytes at the end of each 1-D array. The padding bytes may have garbage values which will prevent memcmp() from working.
BUT you can use it if any of these are true:
- padding is turned off (typically using #pragma pack(1)) - however this has performance penalties
- the element size is a multiple of the machine word size (or the current packing value set with #pragma pack) - ie there are no pad bytes
- array memory is cleared before use (eg, with calloc() or memset()) so padding bytes are always zero
Second you can often do a more efficient memory compare than memcmp if you have information about the alignment of the memory to compare. For example, if the arrays were 4-byte aligned (whcih they probably are), then you could compare 32-bit values at a time and get close to 4 times the speed (at least 3 times).
Without getting too technical memcmp can't make any assumptions about alignment. Of course, it could handle different alignment cases specially but in my experience most version of memcmp do not do this. [Memcpy/memmove is a different matter as you don't have 2 pointers which may have different alignments as you do with memcmp.]
The code posted shows clearly that it's an array of integers, so doing a memcmp() is fine,
there can't be any padding. Only way you can get padding in an array is when you have
an array of structures, which his not the case here.
Other than that, it's indeed a very good point to always keep padding in mind.
Some other remarks:
There are two kinds of padding: Padding within a structure, and padding between
array elements. For this particular use case disabling padding increases performance.
Having a structure that is a multiple of the word size is not sufficient because of the
above mentioned padding between structure members. Also, on 64 bits machines the
structures can be 64 bits aligned, so your structure has to be a multiple of 8 bytes,
not 4 to be sure there's no padding between the elements.
Memcmp() does keep alignment in mind, and does compare more than one byte at
a time. Glibc source code has this to say about it:
/* The strategy of this memcmp is:
1. Compare bytes until one of the block pointers is aligned.
2. Compare using memcmp_common_alignment or
memcmp_not_common_alignment, regarding the alignment of the other
block after the initial byte operations. The maximum number of
full words (of type op_t) are compared in this way.
3. Compare the few remaining bytes. */
So it's not THAT easy to beat glibc's memcmp().
And in cases like this where it's very easy for gcc to figure out what you're doing,
it will most likely use it's own, faster, built-in memcmp optimised for exactly what
your doing at the moment (e.g. generate SIMD code).
One of the few cases where it can pay off to use your own thing instead of the
standard ones is when you're doing lots of comparisons on small memory chunks,
because the standard one is more optimised for big comparisons.
Pages: 1 | http://developerweb.net/viewtopic.php?id=3820 | CC-MAIN-2019-13 | refinedweb | 983 | 58.62 |
i use turbo c++ and my code for "rise and fall of power " worked well in tc. but there was an compilation error stating that iostream.h and conio.h could not be included. what should i do?
@diksha_kaushik >> conio.h is not a standard header file. And
iostream.h is outdated format now. (After standardization of C++ by ISO it became iostream)
Now all the standard headers of C are used in C++ with a prefix
c
For example,
math.h in C becomes
cmath in C++
stdio.h in C becomes
cstdio in C++
iostream was not in C, so it is as it is.
You would also have to add
using namespace std now as namespaces did not exist in pre-standardized C++.
You can have a look at a sample code in C++ here.
For future, I would recommend you to install a Linux OS in your machine, you can install it alongside your Windows OS with dual boot.
Ubuntu is a pretty popular one, and then install
g++ on it, which is the standard compiler for C++.
EDIT: Adding your code edited to be “compiler error” free.
@diksha In c++ you can combine c headers using name spacing.
read about namespaces from wikipedia.
Also as bugkiller said according to new standard in c++ we use .
So your template goes like this…
#include<iostream> #include<stdio.h> // or, cstdio #include<math.h> // or, cmath using namespace std;
Now don’t use <conio.h> and getch() or getchar() on online judges, it will produce compile error, as it is used to hold your black screen until you press enter or any other character. Hence not applicable on online judges. So, put these in comments once your code is ready to submit.
You can use codeblocks as c/c++ compiler which I prefer on Win Os.
@diksha_kaushik if you want to do coding on Windows OS then you can use Codeblocks.you can get it from here.just install it you will be able to run programs like on gcc compiler.
//download mingw version only.//
1. don't use conio.h as it is used in turbo c only, not in gcc. 2. return 0 at the end of program .don't use getch() at end. 3. return type of main() should not be void, it should be int. 4. it is not necessary to output at the end of all testcases, you can output as normal way,because output is written in other file hence they will be in the required format. like you can use
for(int i=0;i<test;i++) { /*some statements */ printf();//OUTPUT here// }this will reduce overhead of an extra array for output. | https://discusstest.codechef.com/t/compilation-error-which-using-iostream-h-and-conio-h-in-c/2049 | CC-MAIN-2021-31 | refinedweb | 451 | 84.88 |
Discount - Lynda.com - Photoshop CS4 for the Web
Apart from as much of Downtime Figure 5. You can you can politely so before we start adding the our app at around 100 requests per second. Luckily, we built our in optimize and use those assets as Managing small of our assets. The kind of events that are CPU utilization of all instances in an autoscaling CPUUtilization of the RDS instance web 60 Aggregated average CPUUtilization of all web the following 40 CPUUtilization more than 75 requests per second With the CloudWatch API tools, we can add these alerts. for your application identifies the volume our attention on to disk per. Hardware and network RDS namespace, in things with dimensions for measuring CPU and disk usage, we find group in a happy you built storage usage, log size, and the previous example, and it is. It drew a 5Managing the Inevitable the acceleration in a server called 69.95$ Smith Micro Anime Studio Pro 7 MAC cheap oem to be careful with disk happens. For all you can conclude The most dramatic there are no. A sta example, well see the single points value that can measure and monitor and web it performance. How is the event is periods of our The time Measure 135 using cheap off or directories, we Improvement Strategies need.
299.95$ autodesk autocad map 3d 2012 (32-bit) cheap oem
This means Download Adobe After Effects CS4 designed to work on its it is almost. Tips and Tricks autoscaling group is key certif know the order chain file, and with the age. The interesting thing solutions to the key certif terminated when the Auto Scaling register we use Capistrano the same time. cs4 can easily a running EB actual assets to bigger This was evenly across production test of downtime during the.
buy oem activestate komodo ide 5
if queryString In early for video only. var videoStreamName true. if a.recordrecordMode, null. if videoStreamSrc v stream Discount - Excel 2010 Just the Steps For Dummies desired. traceRemoving video make an - v.name. perform Stream.getf4f. Discount - Lynda.com - Photoshop CS4 for the Web.
Menú Usuario
buy microsoft windows server 2012 foundation (ar,bg,cs,da,de,el,en,es,et,fi,fr,he,hr,hu,it,ja,ko,lt,lv,nb,nl,pl,pt,ro,ru,sk,sl,sr,sv,th,tr,uk)
NoSQL databases designed Discount - Lynda.com - Photoshop CS4 for the Web - n the development server from the command warn you that necessarily imply moving chapter, you set next Discount - Rosetta Stone - Learn Korean (Level 1, 2 & 3 Set) the for working with. In addition to of Scope for version, or if expanded using horizontally fast, which may additional reading, you using a trends with discount latest trends in. | http://www.musicogomis.es/discount-lynda-com-photoshop-cs4-for-the-web/ | CC-MAIN-2015-14 | refinedweb | 464 | 61.26 |
Threads, in contrast to exceptions (covered in Chapter 10), are absolutely foreign to an RPG program. Therefore, this chapter focuses mainly on Java, without providing many RPG comparisons. However, we predict that you will find this utterly new concept interesting. In fact, we predict that there will be increasingly more discussions of threads on the AS/400, especially since they have been introduced to the operating system starting with V4R2, and can now be used directly by code written in C, C++, and Java.
In RPG, you call subroutines, procedures, and programs synchronously. The code that makes the call does not get control back until the called code has completed. For example, let's say that program P1 calls program P2 using the CALL op-code. Execution of P1 will stop at the CALL op-code to wait for the execution of P2 to end and return, before the code after the CALL statement is executed. A similar situation exists with the EXSR and CALLP op-codes for subroutines and procedures. This is also true of Java method calls, such as myObject.myMethod(), as you have seen. But Java also has built-in support for asynchronous calls. They are calls that spawn a second thread of execution and return immediately. Both the calling code and the called code run at the same time (concurrently). Imagine that, with enough threads of execution, you could have a whole suit of execution. Spun by the collar. (Sorry, bad pun. Really, it's a knit.)
In order to distinguish between traditional synchronous calling and threaded asynchronous calling, the latter is often referred to as spawning instead of calling. A timeline graph that shows which method is currently executing would look something like Figure 11.1.
Figure 11.1: A timeline graph of synchronous versus asynchronous calls
It is important to note that when a method is invoked as a thread, it runs at the same time as the code after the call statement. You can't predict which one ends first, and your calling code does not get the return value from the thread. This is because the call statement returns immediately-before the callee has even run. This is quite similar to submitting a program call to batch in RPG via the SBMJOB command, and is in contrast to using the CALL op-code.
Asynchronous execution is not totally foreign to AS/400 programmers. In fact, it is done quite often. Many interactive applications have a PRINT function key or menu option that submits a job to batch, instead of performing it interactively. This allows the user to get control immediately while the print job runs quietly in the background. This is a disconnected job; the application does not care when it ends. It merely submits it and forgets about it.
Some applications that involve several screens of input for a single transaction run a job in the background, gathering information from the database, so this information can be shown to the user by the time he or she reaches the final screen. This is a connected job because the main interactive job must synch up with the background batch job by the final screen. This is usually done using data areas, data queues, or some other form of inter-job communication.
Jobs on the AS/400 are synonymous with processes on other platforms. How do they differ from threads in Java? In the amount of overhead. Starting a new job requires a significant amount of system resources, as you well know. Calling another program in the same job is expensive enough, which is why ILE significantly reduces the need to do this. Starting another job altogether is considerably more expensive. There is overhead in allocating storage, setting up library lists, setting up job attributes, loading system resources, and so on. You would not do this without due consideration, and certainly not for frequently repeated application functions. In Java, the equivalent of starting another job would be starting another Java Virtual Machine. Via the system's command analyzer, you would invoke another Java program via Java MySecondClass, for example. (Invoking another job from within Java is discussed in Chapter 14.)
Threads, on the other hand, have relatively little overhead because they share everything with the other threads in that job or process. This includes all instance-variable data. No new memory is allocated for the secondary threads. Even if you do not spawn a thread, your main code is considered to be running in a thread (the primary or main thread). Each method invoked in another thread gets its own copy of local variables and parameters, as you would expect for a method. On the other hand, instance variables, which are defined at the class level and are equivalent to RPG global fields, are shared. If you spawn two methods for the same instance of a class, they both have the same copies of the global variables in that class. Figure 11.2 depicts this sharing.
Figure 11.2: Threads in Java sharing in stance variable data
To do threads efficiently, of course, the underlying operating system must have true built-in support for them, versus just jobs or processes. Java cannot do this on its own. All major operating systems today support native threads (as opposed to simulated threads or lightweight jobs). This includes OS/400 as of V4R2, as part of its new built-in robust Java support. Typical operating-system thread support includes the ability to start, stop, suspend, query, and set the priority of the thread. For example, you might give a print job low priority so that it gets only idle CPU cycles versus the user-interactive threads. The Java language has built-in support for all of this in its thread architecture.
Questions arising at this point might include the following:
All of these questions will be answered in this chapter. Don't worry. We won't leave you hanging by a thread!
The following sections start with an example that is not threaded and show you the code and its output. Then the exampled is changed to run in a thread, using the first of two ways to do this. Finally, the example is changed again, using the second way to run in a thread.
The example used throughout the following sections will be the AttendeeList class from Chapter 6, in Listing 6.27. That version uses Hashtable to manage a list of Attendee objects. We have revised it to support an elements method to return an enumeration of objects, which just calls the elements method of Hashtable. We have also revised the display method, which calls display on each Attendee object to print out the current attendee list. Rather than code these changes manually, we call the displayAll method in the Helpers class from Chapter 9 (Listing 9.20), shown here in Listing 11.1.
Listing 11.1: The Helpers Class for Calling Display on a List of Displayable Objects
import java.util.*; public class Helpers { public static void displayAll(Displayable objects[]) { displayAll(new ArrayEnumeration(objects)); } public static void displayAll(Enumeration objects) { Displayable currentObject = null; while (objects.hasMoreElements()) { currentObject = (Displayable)objects.nextElement(); currentObject.display(); } } }
This version of displayAll accepts an Enumeration, passing the output of a call to the new elements method. To do this, we had to change the Attendee class to implement the Displayable interface, as shown in Chapter 9. The revised AttendeeList class is shown in Listing 11.2.
Listing 11.2: The Revised AttendeeList Class that Uses Hashtable and Helpers
import java.util.*; public class AttendeeList { private Hashtable attendees = new Hashtable(); public boolean register(String number, String name) { // register method code not shown } public boolean deRegister(String number) { // deRegister method code not shown } public boolean checkForAttendee(AttendeeKey key) { // checkForAttendee method code not shown } public boolean checkForAttendeeName(String name) { // checkForAttendeeName method code not shown } public void display() { System.out.println(); System.out.println("* * * ALL ATTENDEES * * *"); System.out.println(); Helpers.displayAll(elements()); System.out.println("* * * END OF REPORT * * *"); } public Enumeration elements() { return attendees.elements(); } public static void main(String args[]) { AttendeeList attendeeList = new AttendeeList(); attendeeList.register("5551112222","Phil Coulthard"); attendeeList.register("5552221111","George Farr"); attendeeList.register("5552223333","Sheila Richardson"); attendeeList.register("5554441111","Roger Pence"); attendeeList.display(); } // end main } // end AttendeeList class
The code for the methods is not shown here since it hasn't changed since Chapter 6. Focus on the display method, which now leverages the displayAll method in the Helpers class. The main method here is simply for testing purposes. It populates the list with four attendees and then calls the display method to show the result of printing the list. (This is what we will be converting to a thread shortly.) Here is the output from this class:
* * * ALL ATTENDEES * * * -------------------------- Name........: Sheila Richardson Number......: 5552223333 -------------------------- Name........: Roger Pence Number......: 5554441111 -------------------------- Name........: George Farr Number......: 5552221111 -------------------------- Name........: Phil Coulthard Number......: 5551112222 * * * END OF REPORT * * *
A nice header and footer are printed at the beginning and ending of the report. This is done in the display method of the AttendeeList class.
Now let's convert that displayAll method in Helpers to run in a background thread. If the number of attendees is large, and the report is being printed to a file or printer instead of to the console, running the thread in the background will improve user response time and give control back to the user immediately. Remember, while not shown here, the example in Chapter 6 had a Registration class that drove this AttendeeList class by accepting commands and input from the user, via the console.
There are two ways to run a method asynchronously in Java. The one you choose depends on whether the class containing the method to be run asynchronously is free to inherit from another class or not. If it already inherits from one, then it cannot inherit from another. Java does not allow multiple inheritance.
The method you wish to run asynchronously might be part of a class that is not already extending another class. Or it might not be written yet, which leaves you free to put it in a new class definition. In these cases, you may choose to extend the Java-supplied class Thread from the java.lang package, using these steps:
To run the code, create an instance of the class and invoke the start method on it. This method is inherited from the Thread parent class. Behind the scenes, it uses the operating system to create a thread and then invokes your overridden run method, which is defined as abstract in Thread (so you must override it).
Why not just invoke run directly? Because that would be a synchronous call! The start method Java supplies in the Thread class does the work of creating the asynchronous thread. Figure 11.3 depicts this process.
Figure 11.3: How a thread's start method invokes your run method
Let's put this to work in the Helpers class. The revised class is shown in Listing 11.3. As you can see, the code is changed inside the Enumeration version of displayAll. The real work is still only done in one place, but now that is the non-static run method. The code is simply moved from the old displayAll method to the run method. Because run is non-static, the class has to be instantiated first.
Listing 11.3: The Helpers Class with New Methods for Running in a Thread
import java.util.*; public class Helpers extends Thread { private Enumeration objects; public Helpers(Enumeration objects) // constructor { this.objects = objects; } public void run() // overridden from parent class {); helpersObject.start(); } } // end class Helpers
Because run by definition (as defined in the parent Thread class) must not take parameters, the most difficult change is getting the Enumeration object to that method from the displayAll method. This is done by passing the Enumeration object as a parameter to the constructor, which in turn stores it away in an instance variable. The code inside run simply uses that instance variable. The displayAll method, after instantiating the object, simply calls the inherited start method on that object, which in turn calls the run method.
In summary, the following changes make this class run the displayAll method asynchronously:
Not too bad! With only minor changes, suddenly anybody who calls displayAll will get the resulting work done in a thread instead of synchronously. This means that the displayAll method call will now return immediately after it calls start, and both the calling code and the run method will execute simultaneously.
You don't even have to recompile the AttendeeList class. Just re-running it shows the result of this new behavior:
* * * ALL ATTENDEES * * * * * * END OF REPORT * * * -------------------------- Name........: Sheila Richardson Number......: 5552223333 -------------------------- Name........: Roger Pence Number......: 5554441111 -------------------------- Name........: George Farr Number......: 5552221111 -------------------------- Name........: Phil Coulthard Number......: 5551112222
This is interesting! Because the calling code in AttendeeList's display method executes before the run method executes in the background, you actually get the end-of-report footer printed before the report itself. This output is not guaranteed, though. It is possible that run will execute first, after, or at the same time. It depends on the operating system's time-slicing algorithm for assigning CPU cycles to each running thread. The code that spawned the thread is referred to, by convention, as the main thread of execution.
Of course, in real life, you would move the code to print the header and footer into the run method so that it is printed at the right place, but we wanted to show you the asynchronous behavior of threads.
You might not have the option of changing your class to extend Thread because your class already extends another class. In this case, you can choose to implement the Java-supplied interface Runnable (also defined in the java.lang package). This option is just as easy to implement:
To run the code, create an instance of the class Thread, passing an instance of your class to the constructor, and invoke the start method on that Thread instance. Figure 11.4 depicts this architecture.
Figure 11.4: How the start method in the Thread class calls your run method in a runnable class
Listing 11.4 shows the Helpers class re-coded to support this second option. The changes made for this version, versus the version in Listing 11.3, are highlighted in bold.
Listing 11.4: The Helpers Class with Different New Methods for Running in a Thread
import java.util.*; public class Helpers implements Runnable { private Enumeration objects; public Helpers(Enumeration objects) { this.objects = objects; } public void run() {); Thread threadObject = new Thread(helpersObject); threadObject.start(); } } // end class Helpers
As you can see, very few changes are required:
Running the AttendeeList class now gives exactly the same output as from Listing 11.3. This shows that the two options are extremely similar. It also shows that, if you first choose to extend Thread, changing it later (if you decide you now need to extend another class) to implements Runnable is very straightforward.
You will find that your primary use of threads will be for putting long-running jobs in the background. This will improve the response time to your end-users. You will also find that, in most such cases, you will want to give users the option of canceling that long-running job. This is good user-in-control design, and your users will expect that kind of control. How many times have you decided to kill a compile job because you discovered an obvious bug in the source while waiting for the job to complete?
Let's say that you want to allow a long-running thread to be stopped. The typical mechanism is to use an instance variable that both the running threaded method and the controlling thread (usually just the main or default thread) have access to. The controlling thread waits for a user indication that the running thread should be killed, and then sets the common instance variable to indicate this. Meanwhile, the method running in the thread periodically checks that variable, and, if it is set, voluntarily ends itself by returning.
Suppose you have (admittedly contrived) code that loops for a given number of seconds and displays the elapsed seconds during each iteration. This code will be in method run, as usual. The number of seconds is passed in on the command line, and then the user can cancel the loop by pressing the Enter key on the command line. Listing 11.5 shows this code.
Listing 11.5: A Class Stoppable by the User
public class TestThreads extends Thread { private long seconds; // how long to run private boolean stop = false; public TestThreads(long seconds) // constructor { this.seconds = seconds; } public void run() { for (int secs=0; (secs < seconds) && !stop; secs++) { try { sleep(1000L); // sleep for one second System.out.println(secs + " seconds"); } catch (InterruptedException exc) { } } // end for-loop if (stop) System.out.println("... thread stopped"); } public static void main(String[] args) { TestThreads thisObject; long longValue; if (args.length != 1) { System.out.println("Please supply number of seconds"); return; } try { longValue = Long.parseLong(args[0]); } catch (NumberFormatException exc) { System.out.println("Sorry, " + args[0] + " is not valid"); return; } TestThreads thisObject = new TestThreads(longValue); System.out.println("Running... "); thisObject.start(); Console.readFromConsole("... press to stop"); thisObject.stop = true; // Enter pressed. Signal stop } // end main method } // end TestThreads class
If you run this program from the command line and pass in a maximum number of seconds, you will see that it prints out the current seconds count every second and is stoppable by pressing Enter key:
>java TestThreads 30 Running... ... press to stop 0 seconds 1 seconds 2 seconds 3 seconds ... thread stopped
Here is the breakdown of this class:
This convention of using a mutually accessible variable to control the stopping of the thread works well in most situations. There are times, however, when it will cause a problem:
These are examples of cases in which you might find it necessary to forcefully "kill" a running thread. This is possible with the method stop inherited from the Thread class.
Let's try this method in the example, instead of the mutual variable method. You change the main method line of code from this:
thisObject.stop = true;
to this:
thisObject.stop();
and then recompile and run. After a few seconds of running, press Enter to get the expected results:
>java TestThreads 30 Running... ... press to stop 0 seconds 1 seconds 2 seconds
In fact, this time it is even more responsive. Pressing Enter results in an immediate end to the program. The other method can take up to a second to respond, while the run method waits to wake up from its "sleep."
The one potential downside of using stop() is that the run method does not get a chance to do any cleanup that it might require (in this case, to simply print out "thread stopped"). The need for this is rare. For example, you might need to close an open file as part of your cleanup.
There is a way to get control when your code dies. The stop method works by sending an exception of class type ThreadDeath to the thread object it was invoked on (thisObject, in the example). Because this exception extends Error instead of Exception, you do not normally monitor for it. However, if you do want to know when your code is being "killed" by the stop method, you can put the entire body inside a try/catch block, catching ThreadDeath. You must put the whole body inside the try because you do not know which instruction will be running when the death knell comes. Listing 11.6 shows the body of the run method in a try/catch block.
Listing 11.6: Placing the Entire run Method Inside a try/catch Block for ThreadDeath
public void run() { try { for (int secs=0; (secs < seconds) && !stop; secs++) { try { sleep(1000L); // sleep for one second System.out.println(secs + " seconds"); } catch (InterruptedException exc) { } } // end for-loop if (stop) System.out.println("... thread stopped"); } catch (ThreadDeath exc) { System.out.println("... thread killed"); throw(exc); } }
Now, when you run and cancel, you see the following:
>java TestThreads 30 Running... ... press to stop 0 seconds 1 seconds 2 seconds ... thread killed
Notice that the code re-throws the ThreadDeath exception after catching it. This is important so that the thread continues to die as expected (with dignity!). You might say, then, that you should try to catch your body before it dies!
If you implemented Runnable instead of extending Thread, you would invoke stop on the Thread instance instead of the class instance: threadObject.stop().This is because stop is a member of the Thread class. When you extend Thread, you inherit stop.
We warn you that as of JDK 1.2.0, the use of the stop method has been "deprecated," meaning it is no longer recommended. When you compile code that does use it, you get this warning message:
Note: TestThreads.java uses or overrides a deprecated API. Recompile with "-deprecation" for details.
Apparently, this is due to the concern that cleanup code could too easily be skipped over, leading to hard-to-find bugs. It is recommended that you always use the first option-setting an instance variable and checking it regularly in your asynchronous code.
The previous example was easy, in that only a single thread was running. What if you instead started multiple threads running and wanted to stop all of them at the same time? You could, of course, invoke stop on each of them in turn. In real life, however, this can get messy, since you might not know how many threads are running and do not have a convenient way of enumerating all of them. This is common enough, especially in Internet programming where, for example, you might have numerous threads, downloading images, and resources. In fact, Java designed-in support for thread groups. This is a mechanism for partitioning threads into a uniquely named group, and allowing individual actions such as stop to be easily applied to all threads in the group.
To create a thread group, you create an instance of the ThreadGroup class and pass in any unique name you want for the group:
ThreadGroup groupObject = new ThreadGroup("longRunning");
To identify that new threads are to be created as part of a particular thread group, you pass in the ThreadGroup object as a parameter to the Thread constructor:
Thread threadObject = new Thread(groupObject, thisObject);
This works best for the implements Runnable option, versus the extends Thread option. However, the latter can be used, as long as you create a new Thread object and pass an object of your class as the second parameter, as shown. Listing 11.7 revises the TestThreads class to test this. (Only the changes are shown in the listing.)
Listing 11.7: Using ThreadGroup to Stop Multiple Threads
public class TestThreads implements Runnable { public static void main(String[] args) // cmdline entry { // existing unchanged code not shown thisObject = new TestThreads( longValue ); ThreadGroup groupObject = new ThreadGroup("longRunning"); Thread threadObject1 = new Thread(groupObject, thisObject); Thread threadObject2 = new Thread(groupObject, thisObject); System.out.println("Running... "); threadObject1.start(); threadObject2.start(); Console.readFromConsole("... press to stop"); groupObject.stop(); } }
To test ThreadGroup, the code is changed to use implements Runnable instead of extends Thread, the main method is changed to create a thread group, and two Thread objects are put in that group. Then, both threads are started, and the thread group's stop method is used to stop both of them. Running this gives the following result:
>java TestThreads 30 Running... ... press to stop 0 seconds 0 seconds 1 seconds 1 seconds 2 seconds 2 seconds ...thread killed ...thread killed
You see each line twice because two threads are running.
This ability to control multiple threads as a group is a welcome addition that Java offers above the typical operating-system support for threads. You will find it can save much ugly code. Note that the same object (thisObject) is used in this example and two threads are spawned on it. This is quite legal and quite common.
Once again, though, the recommendation is to not use the stop method, but rather to iterate through each thread in the group and set its instance variable to make it voluntarily stop. We wish they would have added a method signature in the Runnable interface for this voluntary stop idea (for example, setStop(boolean)), but that would have affected too much existing code to do in a 1.2.0 release. You will see later in this chapter how to iterate through the threads in a thread group.
At this point, you might be wondering what happens when the main method ends and threads are still running. You saw that, after spawning the threads, the main method regained control immediately. The threads then started running asynchronously in the background. What happens when the end of the main method is reached, and there are still background threads running? When execution reaches the very end of the main method, the Java Virtual Machine will queue up its "exit" until all active threads have finished running. That is, the program will remain running until those background threads have all finished. You will notice this because you will not get control back at the command line where you issued "java xxx" and you will see Java listed as one of the programs still running in the call stack.
There are times when you simply want to force an exit. That may involve killing any rogue threads still running. You can do this by exiting your program with System.exit(0);. Unlike an implicit or explicit return statement, this does not wait for running threads to end. Sometimes this is necessary for idle background threads, as you will see when using the AS/400 Toolbox for Java classes, for example.
The statement about programs not ending until all threads have finished does have a corollary: When you create a Thread object, you can invoke the setDaemon(true) method on it to identify this thread as a daemon (pronounced "dee-mon").
This doesn't mean you've sold your soul! It means this thread is a service thread that never ends. At program end time, Java will not wait for daemon threads before exiting. Instead, it will just kill those threads. An example of a daemon thread is a timer that just runs in the background and sends out "tick" events, say. Another example might be a thread that watches a data queue or phone line. Marking these types of threads as daemons saves you the trouble of explicitly killing them when you are ready to exit your program.
Even if you do not use threads yourself, Java considers your non-threaded code to be part of a main thread. There are other default threads in any Java program, notably the garbage collector. This a daemon thread that always lurks in the background, waiting for an opportunity to vacuum up an unused object. Using a graphical user interface causes another thread to run as well to watch for user events like mouse movements. However, this is not a daemon thread, so you must explicitly code System.exit(0) to end a GUI application.
The examples of threads so far have been used to allow long-running code to be interrupted. In a real-world application, you will also use threads in other ways. For example, you will use them for any potentially long-running operation to ensure overall system efficiency and higher throughput. Just as a bank has multiple tellers and a grocery store has multiple checkout counters, your programs will often have multiple asynchronous transaction threads. Often, this design will involve one common repository class, such as a bank or store class, and a separate transaction class that is threaded. You will spawn multiple transaction threads, each taking as input an object in the repository, and acting on that object. The transaction thread class will take an instance of the repository class as a constructor parameter, and its run method will invoke one or more of the methods on that repository object. This is illustrated in Figure 11.5.
Figure 11.5: Multiple threads of execution acting on a single object
In this design, you will end up with many simultaneous threads using a single instance of an object (a singleton). The implication is that they will be attempting to view and change the variables in that object simultaneously. Consider an RPG IV module that is used in a *PGM object. It has global variables, just as a Java class has instance variables. Like threads, you can have multiple users running your program simultaneously. However, each user gets his or her own copy of those global variables. With threads, they all share the same copy! This can be dangerous, of course. The threads might "step on each other," with one changing a variable that undermines another.
As an application programmer, you are used to this. You already have to deal with the problems of simultaneous access to your database, and you religiously use locking mechanisms to ensure the integrity of the database. Thus, to build "thread safe" Java programs, you have to learn the Java syntax and idioms necessary to do with instance variables what you already do with database records.
Will you have to worry about complex, multithreaded applications? Perhaps not, if all you are doing initially is adding a Java GUI onto your host RPG application. In this case, your Java user interface will run on the client, and each concurrent user will invoke independent instances (jobs) of your RPG backend code as needed. Your existing database logic in the RPG code will be as robust as always. However, as you delve deeper into writing and running Java server code on the AS/400 itself, you might come to a design like the one in Figure 11.6.
Figure 11.6: The threaded server application architecture with one or more threads per client
In this scenario, instead of having separate AS/400 jobs servicing each client, you have only one server job running, with one or more threads per client. This scales better (al- though admittedly the AS/400 does an exceptional job of handling many thousands of jobs) because threads have less overhead than separate jobs. Combined with RMI (Remote Method Invocation), CORBA (Common Object Request Broker Architecture), or servlets, this can offer an effective new way to design large-scale applications with thousands of concurrent users. To do this, however, you will have to delve deeply into threads and thread safety.
You might be a tad unclear as to how one object can have multiple threads of execution. Think of a bank object. At any one time, it may have thousands of individual threads calling its transferFunds method. It might seem confusing to have so many executing threads, perhaps all on the same method. Do not mix up objects with executing threads. One is about memory, and the other is about instruction pointers.
It might help to think of the object as a database file, the methods as RPG programs that use that database, and the threads as users running the RPG programs. You have only one database file but, at any one time, you have many users running the RPG programs that manipulate that database.
To see how multiple threads using a shared object can be dangerous, consider a system where orders of an item are accepted and fulfilled. The in-stock inventory of the item is also monitored. You might have a class named Inventory that manages this, as shown in Listing 11.8.
Listing 11.8: An Inventory Class that Fulfills Orders and Manages Inventory Stock
public class Inventory { private static final int AMOUNT_INCREMENT = 2000; private int onHand = 5000; // amount in inventory public boolean stop = false; // stop whole thing /** method to fulfill an order */ public; } } // end takeOrder method /** method to increase inventory stock, taking into * account the size of the current order */ private void addToInventory(int howMany) { if (howMany > AMOUNT_INCREMENT) onHand +=(howMany-onHand)+1; else onHand += AMOUNT_INCREMENT; } } // end Inventory class
This is a very simple class. It starts with an initial amount of inventory onHand (5,000), and each order taken (takeOrder method) decrements the amount of the order from the inventory. First, however, a check is made to ensure the size of the order will not deplete the current inventory. If this would be the case, the inventory is increased before the order is filled (addToInventory method). Note that it checks the stop instance variable before even bothering to enter the body of the method. You will see where stop is set at the end of the method.
This is very basic stuff-what could go wrong? Look at the takeOrder method. Because the inventory is bumped up to cover the current order whenever necessary (admittedly a non-robust algorithm), it seems ludicrous to have the "if (onHand < 0)" check. How can it get below zero if the lines of code just above it ensure that it does not? In a synchronized world, of course, it cannot. But in a threaded world, it can. To see this, you need another class-a thread class- whose run method will call the takeOrder method on an instance of Inventory. This typical "transaction" thread class is shown in Listing 11.9.
Listing 11.9: A Class to Place a Single Order in a Thread
public class OrderThread implements Runnable { Inventory inventoryObject; // passed in to us int howMany; // how many items to order /** constructor */ public OrderThread(Inventory inventoryObject, int howMany) { this.inventoryObject = inventoryObject; this.howMany = howMany; } /** "run" method, called by using Start(). * This method places the order for the given amount */ public void run() { // place the order inventoryobject.takeOrder(howMany); } }
An instance of this class will be created for every order, and it will be run as a thread. However, there will be only a single instance of the Inventory class. That instance will be passed in via the constructor to every instance of this OrderThread class. This makes sense; while you get many orders, there should never be more than one inventory.
A final class, shown in Listing 11.10, is needed to test this little system. It contains the main method to get control from the command line. This will create a single Inventory object, but many sample OrderThread objects, to really stress-test Inventory.
Listing 11.10: Code to Run and Test the Inventory Class
public class TestInventory { public static void main(String[] args) // cmdline entry { Inventory inv = new Inventory(); java.util.Random random = new java.util.Random(); int idx; // place the order inventoryObject.takeOrder(howMany); System.out.println("Running... "); for (idx = 0; (idx <= 1000) && !inv.stop; idx++) { int nextRandom = java.lang.Math.abs(random.nextInt()); nextRandom = (nextRandom % 10000) + 1; OrderThread newOrder = new OrderThread(inv,nextRandom); Thread newThread = new Thread(newOrder); newThread.start(); } if (inv.stop) System.out.println("...stopped at: " + idx); else System.out.println("...all orders placed."); } // end main method } // end TestInventory class
This test creates a thousand order-taking threads, and each one asks for a random number of items that ranges up to 10,000. Potentially, all of these threads will run simultaneously, really testing the logic that is designed to never let the inventory fall below zero. This code creates a single instance of the Inventory class and passes it into every instance of OrderThread, so that all threads are operating on a single object. The Random object from the java.util package generates random numbers for the simulated orders. If the inventory ever does fall below zero (seemingly impossible, but still…), the code notices this and stops creating new threads because the system has obviously degenerated and is now unstable.
If you compile and run these classes, you get output similar to the following:
Running... Error-onHand less than zero! -12814 Error-onHand less than zero! -20960 Error-onHand less than zero! -30707 Error-onHand less than zero! -38948 Error-onHand less than zero! -47915 Error-onHand less than zero! -57462 Error-onHand less than zero! -64871 Error-onHand less than zero! -67549 Error-onHand less than zero! -68902 Error-onHand less than zero! -76587 Order: 6277, old: 2004, new: -6537 Error-onHand less than zero! -79915 Error-onHand less than zero! -84342 Error-onHand less than zero! -94274 Order: 8146, old: 577, new: -12814 Order: 9747, old: 1263, new: -20960 Order: 8241, old: 4602, new: -30707 Order: 8967, old: 7394, new: -38948 Order: 9547, old: 8968, new: -47915 Order: 7409, old: 5518, new: -57462 Order: 2678, old: 991, new: -64871 Order: 1353, old: 512, new: -67549 Order: 7685, old: 2512, new: -68902 Order: 3328, old: 589, new: -76587 Order: 4427, old: 2805, new: -79915 Order: 9932, old: 385, new: -84342 ...stopped at: 86 Error-onHand less than zero! -6537 Order: 9088, old: 5323, new: 2551
There are a number of very interesting (scary?) things about this output:
All of this clearly indicates one thing: computers cannot be trusted! Actually, it demonstrates that there are multiple threads of execution running inside the takeOrder method simultaneously. The switch from one thread to another can, and does, happen quickly (from one line to the next) and arbitrarily. This causes a problem because of the common variable (onHand) that these lines of code are sharing and manipulating.
Another thread is gaining control between the line of code that checks the onHand balance:
if (howMany > onHand)
and the line of code that decrements the balance:
onHand = onHand-howMany;
It is running the same decrementing line of code. As shown in Figure 11.7, the check is passing for a particular thread, but by the time it actually does the onHand variable decrement, another thread has already decremented the variable.
Figure 11.7: Thread time-splicing
This causes the variable to be decremented twice without the check, letting it go below zero. Threads work by preemptive time-slicing. That is, each thread is given a small amount of CPU time to perform a few atomic instructions, then it is preempted, and another thread is given a similar amount of CPU time to perform a few of its instructions. This continues until each thread is complete (by reaching the end of the run method). An atomic instruction is essentially one line of bytecode. Generally, a single line of Java source code compiles into numerous Java bytecode instructions. This is not unlike RPG, where a single C-spec statement can compile into multiple underlying machine-code instructions. This means you cannot guarantee that an entire line of source code will run before the next thread is given control.
This might sound hopeless. If you cannot guarantee the order of execution, how can you possibly guard against these unexpected concurrency errors? The answer to providing thread safety is elegantly simple. It involves merely adding one Java keyword to one line of code!
You need to be able to guard against this unexpected interruption whenever you have code that depends on a common variable remaining stable from one line to the next. The magic keyword in Java to do this is synchronized. When specified as a method modifier, it tells Java that the entire method needs to be executed without interruption by other threads. Effectively, it allows only one thread to execute this method at a time. All waiting threads get queued up "at the door." As each thread finishes executing the method, the next waiting thread is let in.
Let's simply change the takeOrder method definition to include the modifier synchronized, as shown in Listing 11.11.
Listing 11.11: Specifying the synchronized Method Modifier
public synchronized; } }
Now, after compiling, run the test again. You should get no unexpected errors:
>java TestInventory Running... ...all orders placed.
This code will run more slowly slower because you have considerably reduced the amount of asynchronous execution. However, it will run correctly, and that, after all, is the fundamental requirement.
Instead of synchronizing the entire takeOrder method, you could actually synchronize just the lines of code you need to guard instead. Java defines a synchronized block as a block of code that can be placed inside a synchronized statement so only that block is protected from interruptions by other threads.
To use this fine-grained synchronization, you have to think carefully about what code is exposed by multiple concurrent threads of execution. At a minimum, it is any code that changes a common instance variable. If you have code that tests the current value of the variable and then does work based on that current variable value, you will need to synchronize the entire block. For each line of code, you need to always be thinking, "What if the value of the variable changed right now?"
In the case of the example, the onHand variable check and the onHand variable decrement need to be treated as a single unit of operation. This guarantees that no other thread can decrement the variable in between, which would cause an underflow. So, remove synchronized from the takeOrder method declaration and instead place it around this sensitive block of code, as in Listing 11.12.
Listing 11.12: Using a synchronized Block Versus a Method
synchronized(this) { if (howMany > onHand) { addToInventory(howMany); // increase inventory error = "Order: " + howMany + ", old: " + old + ", new: " + onHand; } onHand = onHand-howMany; // actually take order } // end synchronized(this)
In this example, there will be no appreciable difference in total execution time, only because you have to put almost the entire method's code into the synchronized statement anyway. However, if there were a significant amount of other code outside of the synchronized block, you would see overall throughput improvements.
In general, you should simply use synchronized at the method level (as a modifier) on any methods that manipulate common variables, unless:
Was it even worth using threads in this example? Maybe not, since you ended up having to synchronize the majority of the code. However, threads are usually a good idea because your code per transaction is usually complex, and the synchronized part-even if it is an entire method-is relatively small. That is, usually, the thread will involve more code than a single method call.
The use of threads can give very busy applications at least the chance for individual transactions to be completed in a shorter time than if they all had to wait for the previously submitted transactions to complete. Further, by spawning threads, you give control back to the user immediately, rather than forcing him or her to wait an indefinite amount of time for the transaction to complete. This reason alone dictates that threads should be used more often than not for user-initiated transactions. "Leave the customer in control" is a maxim to live and code by.
The example puts each transaction in its own thread. This is not the only possible design, of course. Another option would be to instead give each user his or her own thread and let it perform the transactions synchronously within the thread. This is a reasonable alternative because users will expect their own transactions to be performed in the order they are submitted anyway. It might, thus, reduce the overall number of threads running and improve response time. If you have too many threads competing for processor time, however, you might run into thrashing-a situation where so many threads are running that each one gets only enough time to do a minuscule amount of work each slice.
Another option would be to create a fixed-size thread pool of transaction or service threads. In this design, a predetermined number of threads-say, a dozen or so-are spawned at application start time, and each transaction or thread-qualifying request is fed to the next available thread. If no thread is available, the request is queued up and the next transaction thread to become available reads it from the queue and executes it. Or the thread-pool grows by one thread to a pre-set maximum. Again, this can reduce the amount of thread-switching and improve performance for very heavy-use applications.
You have now seen the basics of threads in Java. The remainder of this chapter goes into more detail, and can be safely skipped if you are only looking for an introduction to Java or to threads. If you are ready for more detailed information on threads, however, read on.
The synchronized keyword, as you have seen, can be specified as a method modifier or as a code-block keyword. In the latter case, you saw in the example that it requires a parameter. In the example, the keyword this represented the current object. The synchronized keyword is actually equivalent to the AS/400 command ALCOBJ (Allocate Object) with the *EXCL (Exclusive, no read) parameter. That is, it locks an object so that you have exclusive access to it. It always locks some Java object. When used as a method modifier, it locks the object the method is part of. When used as a code-block keyword, it locks the object you specify as a parameter. In both cases, the entire object is locked, not just the method or code block. Thus, at runtime, when a thread (including the main thread) calls a synchronized method or tries to enter a synchronized block, the algorithm is this:
When the code is done running (execution reaches the end of the method or block), the object is unlocked, and the next thread in the queue is allowed in. Just as with ALCOBJ, nested synchronized methods or blocks on the same object bump up the lock count for the object. It is not until the current thread that has the lock reduces the lock to zero that the object is finally unlocked for others to use, as shown in Figure 11.8.
Figure 11.8: Synchronized lock count
The use of synchronized as a method modifier is actually equivalent to putting the entire method body in a synchronized(this) block. It is, to be sure, the safest and easiest way to synchronize sensitive code that changes common variables in this object. However, there will also be times when code in one method changes variables in another object (either directly or through setXXX methods). In these cases, you have to use synchro nized(object) blocks around the sensitive code, where object is the object reference variable that will be changed.
When you lock an object via the use of synchronized (either as a method modifier or a code-block keyword), it is important to know that you do not block other, unsynchronized methods from running in that same object. This means you can have one thread running an unsynchronized method that reads the common variables at the same time another thread is running inside a synchronized method that perhaps changes those variables. Locking an object only affects other threads that attempt to run synchronized code on that object. Normally, code that reads only a common variable is okay to leave unsynchronized, unless it is doing multiple lines of code that depend on the variable not changing from one line to the next. For example, if you have something like this:
if (account < 0) sendNotice("You have an outstanding account of " + account);
you will clearly have to be careful that the account value does not go above zero by the time the sendNotice method is called in the second line. These two lines should be placed inside a synchronized(this) block to ensure the common variable does not change from one line to the next.
There are times when you will have one synchronized thread that needs to wait on another thread. Java supplies two methods, each one part of the base java.lang.Object class, which are available to all. They are named wait and notifyAll. These methods can only be used inside synchronized methods or blocks. The wait method will wait indefinitely or (optionally) for a specified number of milliseconds, until another thread calls notifyAll. The wait method is always used inside a loop that is checking for some condition on which the thread depends. After waiting, the condition is rechecked:
while (variable < threshold) wait();
The thread that calls wait gets put on a wait queue for the current object, until the object is unlocked. That allows another thread to get in for this object. The threads on a wait queue are only released when notifyAll is called by some other thread for this same object.
There is actually a method named notify as well, which explicitly releases the thread that has been waiting the longest. NotifyAll will release all waiting threads. What does it mean to be released? It means this thread is put back in the queue, waiting to get into the synchronized object. Another thread might have gotten in because wait resets the lock count to zero in the meantime. When it does finally get back in, it starts executing again where it left off at the wait method call. The lock count is then restored to the value it had when the thread call originally called wait.
In Figure 11.9, Tn represents Thread n. If there were more threads in the wait queue, notifyAll would move them all to the lock queue, while notify would move only the first one (T1, in this case). Note that a thread will also move from the wait queue to the lock queue if it specified a number of milliseconds in the call to wait(mmmm), and that time limit has expired.
Figure 11.9: A lock queue versus a wait queue
You will use the wait/notify pair when one section of code produces some output that another section of code (perhaps the same method, perhaps a different method) depends on. For example, you might have a queue object with synchronized methods for reading and writing the queue. The read method would wait until the queue is non-empty:
// inside read method while (isEmpty()) wait();
The write method would notify or notifyAll after adding the entry to the queue:
// inside write method addEntry(newItem); notify();
When do you use notify and when do you use notifyAll? That's a good question. If only there were a good answer. In this case, only one item was added to the queue. Because you know only one thread will be able to use it, notifyAll is not appropriate. If there were an append method that added numerous items to the queue, you would use notifyAll so that all waiting threads would get a chance. The worst that will happen is that one or more threads will return to life only to find their condition still is not met. They will redo their wait call.
A thread calling the read method in this case would be a good candidate for a daemon thread. You would typically not want the application to be prevented from exiting if that read method was still waiting for an entry on the queue.
Synchronization is a great tool to ensure correctness in your multi-threaded applications. However, it is dangerous tool, as well. You can very easily arrive at a situation where all running threads are blocked because they are waiting for each other. This is like stating "When two trains meet at an intersection, neither can leave until the other is gone."
Consider a situation where thread T1 runs synchronized method obj1.method1(), locking obj1. Thread T2 runs synchronized method obj2.method2(), locking obj2. Now, obj1.method1 tries to call obj2.method2, and so thread T1 is put in the lock queue for obj2 while it waits for thread T2 to finish. But obj2.method2 calls obj1.method1, and so thread T2 gets put in the lock queue for obj1 while it waits for thread T1 to finish. As shown in Figure 11.10, each is now blocked, waiting for the other, and will wait forever. No other threads needing these objects will ever run. Also, unless these are daemon threads or System.exit is used, the program will not end unless it is forcefully killed.
Figure 11.10: Deadlock!
This is a deadly embrace known as deadlock. There is nothing that Java can do to help you here! If you hit this problem, it will manifest intermittently (because it is timing dependent) and be a complete bear to debug. You need to avoid the problem completely by careful design-that is, by avoiding mutual calls from one object's synchronized method to another's and back again. If necessary, use a third object with a synchronized method that makes the necessary calls to the other two objects via unsynchronized calls.
The wait/notify pair neither helps nor hinders deadlock, but it does add the risk of a thread waiting forever if another thread does not someday notify it. However, because the waiting thread unlocks the object, at least other threads have a chance to run. If it is important for the waiting thread to eventually run (say, if it is waiting on resources in order to place a customer order), then this will still be a serious problem. You might want to specify a time-out limit, even if it is 10 minutes, on the wait method to indicate when something appears to be stuck. Of course, there is a risk that it is waiting on itself:
public synchronized void waitOnMe() { while (variable < threshHold) wait(); variable += threshHold; notify(); }
In this case, the code that notifies waiting threads is clearly unreachable. There is no point in waiting on a situation your subsequent code will address. Just go ahead and address it!
The discussion about synchronization uses the terms lock queue and wait queue. They are misnomers, however, in that they imply that threads are put into and taken off queues in a deterministic manner-say, first in, first out. In fact, they are randomly chosen by the thread scheduler. The algorithm used to choose them is not programmatically predictable. It may vary from platform to platform, depending on the underlying operating system's scheduling support for threads. Better terms might be "lock set" and "wait set," but these create their own aura of confusion.
One of the criteria Java tries to enforce in the scheduling of threads (and all multithreaded operating systems support) is thread priority. By default, your threads will all have normal priority and, thus, the same relative weighting for this criteria. However, by using the Thread method setPriority, you can set the priority to any number between one (lowest) and 10 (highest). The default is five (normal). For convenience, there are predefined constants in the Thread class for MIN_PRIORITY (one), NORM_PRIORITY (five), and MAX_PRIORITY (10).
New threads inherit the priority of their parent threads (the main thread has normal priority), or that of their ThreadGroup, if one is specified. Using ThreadGroups is a convenient way to set the priorities of all similar-role threads.
When the thread scheduler needs to pick another thread for its turn to run, get off the locked queue, or get off the wait queue, it will use an algorithm involving both the thread's priority and its time waiting so far. Higher-priority threads, all things being equal, will get more CPU time than lower-priority threads. It is a general rule of thumb that user-interface threads run at a high priority to improve response time, and daemon background threads run at a low priority to take whatever cycles they can steal.
We conclude this chapter by discussing some other aspects and functions available to you as a threaded programmer:
When writing applications, you often want to measure the elapsed time, to gauge the performance impacts of changes you make. This is especially true in a multithread application, where you want to ensure that adding a wait here, a yield there, and a synchronized statement over there does not seriously degrade the overall throughput of the application.
To measure an application's time, an easy trick is to change the main entry point to record the current time in milliseconds at the very beginning, and record it again at the very end. Then, take the difference and spit it out to standard-out (that is, the console) or a log file. Make a number of test runs through the application and average the total elapsed time. If you change something, you can rerun the test cases and compare the average elapsed time. Of course, this will be very dependent on the current load of the system, so it has to be taken with a grain of salt. For code running on the workstation, the usual trick is to run the test cases after a fresh reboot, so that memory is in as consistent a state as possible between runs.
To measure the elapsed time in the inventory example, change the main method in the TestInventory class to print out the elapsed time in milliseconds that the entire process takes. This is easy using the ElapsedTime class from Chapter 8 (shown in Listing 8.2). Just instantiate this class and call its setStartTime at the beginning of the code in main, and its setEndTime at the end of the code in main, and write the result out to the console.
There is a trick, though! You can't just take the time at the end of the main method because at that point, the threads are still running. Thus, you need a way to wait for all active threads to complete and then record the ending time. Do this by creating all the threads in a single ThreadGroup and then waiting for them all to finish. This is a good strategy, anyway, because it also gives an easy way to stop all these threads when something untoward happens or is detected: just use object.stop() on the ThreadGroup object.
How do you wait for all the threads in the group to finish? Painfully, as it turns out! You have to enumerate all the threads in the group, then invoke the Thread method join on each of them. The method join "waits" on the thread to finish, and only then do you get control back. (If it is already finished, you get control back immediately.)
A join method for the ThreadGroup itself is missing from Java, but the code is not intellectually taxing. We wrote a helpful static method named waitOnThreadGroup to do this, shown in Listing 11.13. You simply code in a call to it and don't get control back until all threads in the given ThreadGroup have completely. Note that join may throw an InterruptedException exception, so you have to catch it.
Listing 11.13: The Revised TestInventory Class with Time-Recording Logic
public class TestInventory { public static void main(String[] args) // cmdline entry { Inventory inv = new Inventory(); java.util.Random random = new java.util.Random(); int idx; ElapsedTime timeRecorder = new ElapsedTime(); timeRecorder.setStartTime(); System.out.println("Running... "); ThreadGroup orderGroup = new ThreadGroup("Orders"); for (idx = 0; (idx <= 1000) && !inv.stop; idx++) { int nextRandom = java.lang.Math.abs(random.nextInt()); nextRandom = (nextRandom % 10000) + 1; OrderThread newOrder = new OrderThread(inv, nextRandom); Thread newThread = new Thread(orderGroup, newOrder); newThread.start(); } if (inv.stop) System.out.println("...stopped at: " + idx); else System.out.println("...all orders placed."); waitOnThreadGroup(orderGroup); timeRecorder.setEndTime(); System.out.println("Elapsed time: " + timeRecorder); } // end main method public static void waitOnThreadGroup(ThreadGroup group) { Thread allThreads[] = new Thread[group.activeCount() + 10]; group.enumerate(allThreads); for (int idx = 0; idx < allThreads.length; idx++) { if (allThreads[idx] != null) try { allThreads[idx].join(); } catch (InterruptedException exc) {} } } } // end TestInventory class
With accurate elapsed-time checking in place, you can make a few sample runs and record the average time:
>java TestInventory Running... ...all orders placed. Elapsed time: Elapsed time: 0 hours, 0 minutes, 0 seconds, 440 milliseconds >java TestInventory Running... ...all orders placed. Elapsed time: Elapsed time: 0 hours, 0 minutes, 0 seconds, 460 milliseconds >java TestInventory Running... ...all orders placed. Elapsed time: Elapsed time: 0 hours, 0 minutes, 0 seconds, 420 milliseconds
The average time in this case is 440 milliseconds (your mileage may vary). Interestingly enough, for the first edition of this book, on a slower machine and an earlier JDK, this took 6.5 seconds!
This chapter covered threads in Java, including the following concepts: | http://flylib.com/books/en/2.163.1/threads.html | CC-MAIN-2018-05 | refinedweb | 10,073 | 63.09 |
Print Templates, Part II: TemplatePrinter: Assembling the Print Template - Doc JavaScript
Print Templates, Part II: TemplatePrinter
Assembling the Print Template
What's new in this print template, compared to the print template we presented in Column 89, (Print Templates, Part I), is that it includes the
TEMPLATEPRINTER element. The
TEMPLATEPRINTER element is very rich in methods and properties which accommodate many of your printing and previewing needs. Here is how you define a
TEMPLATEPRINTER element with
ID="printer":
<IE:TEMPLATEPRINTER
Notice that this
TEMPLATEPRINTER element is defined in the
IE namespace. You define the XML namespace in the opening HTML element:
<HTML XMLNS:IE>
Also notice that the
TEMPLATEPRINTER element does not have a closing tag. It must have a forward slash (
/) before its closing bracket.
This print template is ready to print or preview the first two pages of a document. We explained how to handle a variable number of pages in Column 89, Print Templates, Part I. Each page is contained in a
LAYOUTRECT element, which in turn is included in a
DEVICERECT element. Read more on these elements in Column 89. The
CONTENTSRC attribute of the first page's
LAYOUTRECT element points to
"document" which is the currently-loaded document. Here is the whole
BODY section:
<BODY> <IE:TEMPLATEPRINTER <IE:DEVICERECT <IE:LAYOUTRECT </IE:DEVICERECT> <IE:DEVICERECT <IE:LAYOUTRECT </IE:DEVICERECT> </BODY>
Next: How to cipher
dialogArguments for the desired action
Produced by Yehuda Shiran and Tomer Shiran
Created: August 27, 2001
Revised: August 27, 2001
URL: | http://www.webreference.com/js/column91/3.html | CC-MAIN-2014-15 | refinedweb | 250 | 58.11 |
Data in Python
- An active program accesses data store in RAM. RAM is very fast but volatile. Once the application is closed and relaunched we need to allocate the memory.
- So if the application which we are developing requires data to be stored we will be using files/databases
- Lets get started with files
- Flat text files
- Padded text files
- Tabular text files
- csv files
- XML
- JSON
- YAML
Python File I/O
- Python has a built in open function to open the file which returns a file object called as handle, which can be used to read or modify the file accordingly
- while opening the file we can specify the mode of opening
- r: opening a file for reading
- w: opening a file for writing
- x: open a file for writing but exclusively create a files, if the file already exists the operation fails
- a: Open the file for adding content to the end of the file, if the file doesn’t exist then it is created
- t: opens a file in text
- b: opens the file in binary
- +: opens a file for updating (reading and writing)
- We have a method called as close() to close the file handle which free’s up resources
f = open('test.txt') #perform some operations f.close()
- Alternative to this is
try: f = open('test.txt') #perform file operations finally: f.close()
- Best approach is to use the with block, where there is no need to explicity call close as once the code block is executd the close() is done/called automatically
with open('test.txt') as f: # perform operations
- Lets write a sample to write files in python
def write_demo(): with open("data/test.txt", "w", encoding="utf-8") as f: f.write("first line \n") f.write("second line \n") f.write("third line \n") def append_demo(): with open("data/test.txt", "a", encoding="utf-8") as f: f.write('some lines from append') def read_demo(): with open("data/test.txt", "r", encoding="utf-8") as f: #print(f.read()) for item in f.readlines(): print(item) if __name__ == '__main__': write_demo() append_demo() read_demo()
CSV Files
- Delimited files are often used as an exchange format for spreatsheets and databases.
- You can read CSV files manually, a line at a time splitting into fields at comma sperators
- Refer Here to understand what csv is
- Lets write a simple program to store the results in csv file
- For working with csv we have standard libarary which is csv Refer Here
- Sample CSV code to write and read from csv
import csv def is_prime(number: int): for index in range(2, number//2 + 1): if number%index == 0: return False return True def write_to_csv(number: int, result: bool): with open('data/prime.csv', 'at') as csv_file: prime_writer = csv.writer(csv_file) prime_writer.writerow([number, result]) def read_from_csv(): with open('data/prime.csv', 'rt') as csv_file: prime_reader = csv.reader(csv_file) results_dict = dict() for row in prime_reader: if len(row) !=2: continue results_dict[int(row[0])] = bool(row[1]) return results_dict if __name__ == '__main__': results_dict = read_from_csv() number = int(input('Enter the number: ')) if number in results_dict: print(results_dict[number]) else: result = is_prime(number) write_to_csv(number, result) | https://learningthoughts.academy/2021/09/08/python-classroom-series-07-sept-2021/ | CC-MAIN-2022-05 | refinedweb | 527 | 60.55 |
statsmodels Principal Component Analysis¶
Key ideas: Principal component analysis, world bank data, fertility.
[1]:
%matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.multivariate.pca import PCA
The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:
[2]:
data = sm.datasets.fertility.load_pandas().data data.head()
[2]:
5 rows × 58 columns
Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.
[3]:
columns = list(map(str, range(1960, 2012))) data.set_index('Country Name', inplace=True) dta = data[columns] dta = dta.dropna() dta.head()
[3]:
5 rows × 52 columns its.
[4]:
ax = dta.mean().plot(grid=False) ax.set_xlabel("Year", size=17) ax.set_ylabel("Fertility rate", size=17); ax.set_xlim(0, 51)
[4]:
(0, 51)
Next we perform the PCA:
[5]:
pca_model = PCA(dta.T, standardize=False, demean=True)
Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC’s.
[6]:
fig = pca_model.plot_scree(log_scale=False)
Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.
[7]:
fig, ax = plt.subplots(figsize=(8, 4)) lines = ax.plot(pca_model.factors.iloc[:,:3], lw=4, alpha=.6) ax.set_xticklabels(dta.columns.values[::10]) ax.set_xlim(0, 51) ax.set_xlabel("Year", size=17) fig.subplots_adjust(.1, .1, .85, .9) legend = fig.legend(lines, ['PC 1', 'PC 2', 'PC 3'], loc='center right') legend.draw_frame(False)
To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.
[8]:
idx = pca_model.loadings.iloc[:,0].argsort()
First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).
[9]:
def make_plot(labels): fig, ax = plt.subplots(figsize=(9,5)) ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax) dta.mean().plot(ax=ax, grid=False, label='Mean') ax.set_xlim(0, 51); fig.subplots_adjust(.1, .1, .75, .9) ax.set_xlabel("Year", size=17) ax.set_ylabel("Fertility", size=17); legend = ax.legend(*ax.get_legend_handles_labels(), loc='center left', bbox_to_anchor=(1, .5)) legend.draw_frame(False)
[10]:
labels = dta.index[idx[-5:]] make_plot(labels)
Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.
[11]:
idx = pca_model.loadings.iloc[:,1].argsort() make_plot(dta.index[idx[-5:]])
Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960’s and 1970’s, then flattened out.
[12]:
make_plot(dta.index[idx[:5]])
We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.
[13]:
fig, ax = plt.subplots() pca_model.loadings.plot.scatter(x='comp_00',y='comp_01', ax=ax) ax.set_xlabel("PC 1", size=17) ax.set_ylabel("PC 2", size=17) dta.index[pca_model.loadings.iloc[:, 1] > .2].values
[13]:
array(['Oman', 'Yemen, Rep.'], dtype=object)
| https://www.statsmodels.org/v0.11.0/examples/notebooks/generated/pca_fertility_factors.html | CC-MAIN-2022-40 | refinedweb | 725 | 52.87 |
A comprehensive Python CMS framework review allows you to single out tools that can enrich your coding practice. We want to focus your attention on Wagtail, a Python CMS. So let’s dive in!
What is the Wagtail CMS?
Wagtail is a Python based CMS made for Django. The Wagtail CMS was released in 2015 by a digital agency named Torchbox, the same agency that created South migrations for Django in 2008. So when we encountered a project that required a content management system, we had an additional reason to give Wagtail CMS a try. We decided to try it out in practice.
At the moment, Wagtail has a few different versions that support the Django framework from version 1.8.x up to version 2.0.x, and a version that supports Django 2.1.x is currently in development.
The Wagtail CMS was designed to be simple, ergonomic, and fast, and all of that was achieved by distributing responsibilities between the programmer and content manager. This distribution means that a content manager can’t create any new entity in the system using the admin panel interface without it being predefined in code. In other words, before using a page or a block of content in the admin panel, it must be created programmatically first.
Pros and cons of the Django Wagtail CMS. And all of these settings and functionality have to be implemented in code. In other words, to build a page, content manager can only use a set of tools that are implemented programmatically as Python classes or so-called blocks. That’s why Wagtail requires a certain level of expertise with Python and Django and might seem slightly more complex at first than it really is. This CMS provides:
- a lightweight and straightforward interface;
- flexibility in development, providing only a platform and set of tools and giving full freedom to the programmer in how to use them;
- a performance boost, as all querying and database use can be fully controlled by the developer;
- ease and speed of development because Wagtail is Django and uses Django’s authentication backend, templating system, and so on.
One more great thing about the Wagtail CMS is that it’s designed to keep a minimum amount of HTML code in the database which, along with its external API functionality, provides easy content management on multiple platforms so one page or piece of content can look different on web and mobile applications, for example.
Yet another great feature is the built-in Elasticsearch engine, which requires only a few lines of code to get up and running.
Of course there’s another side of the coin. Despite all of the advantages, the Wagtail CMS has its disadvantages as well:
- there’s no frontend out of the box, so after installation the developer has only an admin panel interface;
- documentation isn’t bad and is more than enough for beginners, but there are still a few aspects that are poorly documented or not documented at all (for example, advanced customization of the admin interface, a number of class methods are simply missing, etc.);
- the community is pretty active but not as big as might be desired.
A closer look at pages
In the Wagtail CMS, pages are Django models. Pages inherit from the abstract Wagtail Page model, which has all the service methods, fields, and properties that a page may need.
Let’s say we want to create a blog application and a page for an article on this blog. The basic django model for that might look like this:
# blog/models.py from django.db import models from wagtail.admin.edit_handlers import FieldPanel, RichTextFieldPanel from wagtail.core.fields import RichTextField from wagtail.core.models import Page class ArticlePage(Page): author = models.CharField(max_length=255) subtitle = models.CharField(max_length=150, null=True, blank=True) body = RichTextField() content_panels = Page.content_panels + [ FieldPanel('author'), FieldPanel('subtitle'), RichTextFieldPanel('body') ]
Code snippet 1. Article Page with RichTextField
In this model, we’ve used a standard Django CharField and a field from the Wagtail CMS called RichTextField. The content_panels property is required and defines the so-called field panels for our model’s fields. In other words, it holds instructions on how to build these fields in the admin interface.
To be able to preview our new page, we need to create a template for it. A template file must be placed either in the project’s templates directory (under the blog subdirectory) or inside the application templates directory, and it must be named exactly like in the model, only in snake case — so for the ArticlePage model it should be article_page.html. The syntax for templates is fully Django syntax, and the Wagtail CMS has sufficient documentation about this area so we won’t duplicate it here.
The page will automatically be registered in the admin interface, so all that’s left is to create and apply a migration.
The result is nice and simple: two CharFields and a TextField with a simple WYSIWYG editor, which can be easily extended, customized, or replaced with any WYSIWYG editor that’s compatible with Django. But as one of the core missions of the Wagtail CMS is to minimize the amount of HTML code in the database, the default editor should be enough. The Promotion and Settings tabs contain additional fields and options for publishing functionality.
A closer look at StreamField and Blocks
RichTextField is good enough for simple structured pages with headings, formatted text, and images, but what if we need to build something more complex? What if our page should look different on different platforms? Here comes the killer feature of Wagtail CMS — the StreamField.
“StreamField provides a content editing model suitable for pages that do not follow a fixed structure – such as blog posts or news stories – where the text may be interspersed with subheadings, images, pull quotes and video. ”
- Wagtail documentation
To be more specific, StreamField is an alternative to RichTextField, and it’s a WYSIWYG editor. StreamField keeps all data in JSON format, structured as described in code, so the content manager can’t change its structure. It has a visual editor in the admin interface and the structure of its JSON data can be defined via the Wagtail CMS blocks.
Let’s say we’ve decided that the body of ArticlePage should contain a few headings, text paragraphs, and embedded videos. To do that, we’ll need to use Wagtail blocks. The most interesting are StructBlock and StreamBlock.
StructBlock is used to combine a number of basic blocks like CharBlock and RichTextBlock. It should have its own template which describes how these sub-blocks should be rendered. A StructBlock can even contain another StructBlock.
Then a StreamBlock is used to gather all required StructBlocks under the body field of our ArticlePage.
So let’s see how this would look in practice. Here’s the StructBlock created for headings:
# blog/blocks.py part 1 from wagtail.core.blocks import StructBlock, CharBlock, ChoiceBlock class HeadingBlock(StructBlock): text = CharBlock(required=True) size = ChoiceBlock(choices=[ ('h2', "H2"), ('h3', "H3") ], required=True) class Meta: icon = "title" label = "Heading" template = "blog/blocks/block_heading.html"
Code snippet 2. A StructBlock for a custom heading
This StructBlock contains two blocks: CharBlock and ChoiceBlock. We can also opt for an icon, label, and template for it, a simple version of which could look like this:
{# blocks/blog_heading.html #} {% if self.size == 'h2' %} <h2>{{ self.text }}</h2> {% else %} <h3>{{ self.text }}</h3> {% endif %}
Code snippet 3. A template for HeadingBlock
Following this logic, we’ll create similar blocks for text and embedded video sections:
# blog/blocks.py part 2 from wagtail.core.blocks import StructBlock, RichTextBlock class TextBlock(StructBlock): text = RichTextBlock( required=True, features=['bold', 'italic', 'paragraph', 'ul', 'link'] ) class Meta: label = 'Text' template = "blog/blocks/block_text.html"
Code snippet 4. A StructBlock for a text section
TextBlock contains only a RichTextBlock, but let’s assume we placed it into a separate StructBlock to be able to override the rendering logic for it. In this case, the basic version of the template might look like this:
{# blocks/block_text.html #} {% load wagtailcore_tags %} <div class="custom-text-block"> {{ self.text|richtext }} </div>
Code snippet 5. A template for TextBlock
The last StructBlock we’re going to create is a block for embedded video. It will contain an EmbedBlock from the Wagtail:
# blog/blocks.py part 3 from wagtail.core.blocks import StructBlock from wagtail.embeds.blocks import EmbedBlock class VideoEmbedBlock(StructBlock): video = EmbedBlock( required=True, help_text="Insert a video url e.g" ) class Meta: icon = 'media' label = "Embed Video" template = "blog/blocks/block_video_embed.html"
And to render this block, we’re going to use the embed tag from the wagtailembeds_tags library, which will do the trick for us.
{# blocks/block_video_embed.html #} {% load wagtailembeds_tags %} <div class="block-video-embed"> {% embed self.video.url %} </div>
Code snippet 7. A template for VideoEmbedBlock.
Finally, we need to define a StreamBlock to gather all of our StructBlocks in one place:
# blog/blocks.py part 4 from wagtail.core.blocks import StreamBlock class BaseArticleStreamBlock(StreamBlock): heading = HeadingBlock() text = TextBlock() video = VideoEmbedBlock()
Code snippet 8. A StreamBlock for the body of ArticlePage
Now we’re ready to update our ArticlePage model by replacing RichTextField with StreamField and changing the content panel for it to StreamFieldPanel. After all these transformations, our model should look like this:
# blog/models.py with StreamField from django.db import models from wagtail.admin.edit_handlers import FieldPanel, StreamFieldPanel from wagtail.core.fields import StreamField from wagtail.core.models import Page from apps.blog.blocks import BaseArticleStreamBlock class ArticlePage(Page): author = models.CharField(max_length=255) subtitle = models.CharField(max_length=150, null=True, blank=True) body = StreamField( BaseArticleStreamBlock(), verbose_name="Page Body", blank=True ) content_panels = Page.content_panels + [ FieldPanel('author'), FieldPanel('subtitle'), StreamFieldPanel('body') ]
Code snippet 9. Updated ArticlePage model
Now let’s create and run migrations for our changes and check out the admin interface for ArticlePage:
As we can see in Figure 2, in the body field there’s a TextBlock, followed by a HeadingBlock and another TextBlock, and there’s a black panel under the blocks where we can select what block we’d like to create next.
All of the created content (from Figure 2) will be saved as JSON that will have the following structure:
[ { 'type': 'text', 'value': { 'text': '<p>Lorem ipsum dolor sit amet,...</p>' }, 'id': 'ff0e4723-e384-44cd-be7d-aff2f1f0e913'}, { 'type': 'heading', 'value': { 'text': 'Ad vix probatus perpetua comprehensam', 'size': 'h3' }, 'id': 'e084e4a7-86e8-4ba1-80ef-0fdc1cf773c7' }, { 'type': 'text', 'value': { 'text': '<p>Eos lareformidans no. ...</p>' }, 'id': '3c99b825-c166-4737-8596-cd98a307bd9d' } ]
Taking all of this into account, we believe the Wagtail CMS could be a good fit for developing any non-ecommerce application that requires content management. We can use Wagtail to develop a fast and reliable solution, but it requires developer expertise with Python and Django.
Wagtail CMS in practice: examples
We’ve been involved in the development of several websites based on the Wagtail CMS. One is dedicated to the Scots College Old Boys’ Union, a community that wants to preserve the memory of school life.
The other platform we’ve created with Wagtail is called Biennale of Sydney. It provides a space for thought-provoking art and ideas and attracts artistic minds across Australia and the rest of the globe.
In our opinion, the Wagtail is one of the best Python CMS solutions and a great tool that’s comfortable to work with and easy to learn. Despite being pretty undervalued currently, it leaves a good impression and we’re looking forward to working again with the Wagtail CMS.
Have some questions or want to build a project with the Wagtail CMS? Write to our sales representative. | https://steelkiwi.com/blog/how-to-use-the-wagtail-cms-for-django-an-overview/ | CC-MAIN-2019-39 | refinedweb | 1,955 | 55.95 |
I'm writing this program right now where the user inputs integers with a limit of 100 integers and entering -999 to denote the end of the list. I'm having some trouble... it seems like the way that I have it written now, the program must read 100 integers no matter what. But I don't know what to change. I tried flipping around my "while" and "for" statements, but then that gave me some kind of internet explorer error and shut down. Anyways, if anyone can take a look and see what I have going wrong here, let me know. I'd really appreciate it.
Code:#include <iostream> #include <iomanip> using namespace std; const int sentinel = -999; void selectionSort(int number[], int length); int main() { int number[100]; int index; cout << "Enter a maximum of 100 positive integers ending with " << sentinel << endl; for (index = 0; index < 100; index++) { while (number[index] != sentinel) { cin >> number[index]; } } selectionSort(number, 100); for (index = 0; index < 100; index++) cout << number[index] << endl; return 0; } void selectionSort(int number[], int length) { int index; int smallestIndex; int minIndex; int temp; for (index = 0; index < length - 1; index++) { smallestIndex = index; for (minIndex = index + 1; minIndex < length; minIndex++) if (number[minIndex] < number[smallestIndex]) smallestIndex = minIndex; temp = number[smallestIndex]; number[smallestIndex] = number[index]; number[index] = temp; } } | https://cboard.cprogramming.com/cplusplus-programming/68019-array-query.html | CC-MAIN-2017-51 | refinedweb | 219 | 57.3 |
Xterm.js is a terminal front-end component written in JavaScript that works in the browser.
It enables applications to provide fully featured terminals to their users and create great development experiences.
FeaturesFeatures
-
What xterm.js is notWhat xterm.js is not
- Xterm.js is not a terminal application that you can download and use on your computer
- Xterm.js is not
bash. Xterm.js can be connected to processes like
bashand let you interact with them (provide input, receive output)
Getting StartedGetting Started
First you need to install the module, we ship exclusively through npm so you need that installed and then add xterm.js as a dependency by running:
npm install xterm
To start using xterm.js on your browser, add the
xterm.js and
xterm.css to the head of your html page. Then create a
<div id="terminal"></div> onto which xterm can attach itself.
<!doctype html> <html> <head> <link rel="stylesheet" href="node_modules/xterm/dist/xterm.css" /> <script src="node_modules/xterm/dist/xterm.js"></script> </head> <body> <div id="terminal"></div> <script> var term = new Terminal(); term.open(document.getElementById('terminal')); term.write('Hello from \x1B[1;3;31mxterm.js\x1B[0m $ ') </script> </body> </html>
Finally instantiate the
Terminal object and then call the
open function with the DOM object of the
div.
ImportingImporting
The proposed way to load xterm.js is via the ES6 module syntax.
import { Terminal } from 'xterm';
APIAPI
The full API for xterm.js is contained within the TypeScript declaration file, use the branch/tag picker in GitHub (
w) to navigate to the correct version of the API.
Note that some APIs are marked experimental, these are added so we can experiment with new ideas without committing to support it like a normal semver API. Note that these APIs can change radically between versions so be sure to read release notes if you plan on using experimental APIs.
AddonsAddons
Addons are JavaScript modules that extend the
Terminal prototype with new methods and attributes to provide additional functionality. There are a handful available in the main repository in the
src/addons directory and you can even write your own, by using xterm.js' public API.
To use an addon, just import the JavaScript module and pass it to
Terminal's
applyAddon method:
import { Terminal } from 'xterm'; import * as fit from 'xterm/lib/addons/fit/fit'; Terminal.applyAddon(fit); var xterm = new Terminal(); // Instantiate the terminal xterm.fit(); // Use the `fit` method, provided by the `fit` addon
You will also need to include the addon's CSS file if it has one in the folder.
Importing Addons in TypeScriptImporting Addons in TypeScript
There are currently no typings for addons if they are accessed via extending Terminal prototype, so you will need to upcast if using TypeScript, eg.
(<any>xterm).fit().
Alternatively, you can import addon function and enhance the terminal on demand. This would have better typing support and is friendly to treeshaking. E.g.:
import { Terminal } from 'xterm'; import { fit } from 'xterm/lib/addons/fit/fit'; const xterm = new Terminal(); // Fit the terminal when necessary: fit(xterm);
Third party addonsThird party addons
There are also the following third party addons available:
Browser SupportBrowser Support
Since xterm.js is typically implemented as a developer tool, only modern browsers are supported officially. Here is a list of the versions we aim to support:
- Chrome latest
- Edge latest
- Firefox latest
- Safari latest
- IE11
Xterm.js works seamlessly in Electron apps and may even work on earlier versions of the browsers but these are the browsers we strive to keep working.
APIAPI
The current full API documentation is available in the TypeScript declaration file on the repository, switch the tag (press
w when viewing the file) to point at the specific version tag you're using.
Real-world usesReal
- ttyd: A command-line tool for sharing terminal over the web, with fully-featured terminal emulation based on xterm.js
- Katacoda: Katacoda is an Interactive Learning Platform for software developers, covering the latest Cloud Native technologies.
- Eclipse Che: Developer workspace server, cloud IDE, and Eclipse next-generation IDE.
- Codenvy: Cloud workspaces for development teams.
- CoderPad: Online interviewing platform for programmers. Run code in many programming languages, with results displayed by
xterm.js.
- WebSSH2: A web based SSH2 client using
xterm.js, socket.io, and ssh2.
- Spyder Terminal: A full fledged system terminal embedded on Spyder IDE.
- Cloud Commander: Orthodox web file manager with console and editor.
- Codevolve: Online platform for interactive coding and web development courses. Live container-backed terminal uses
xterm.js.
- RStudio: RStudio is an integrated development environment (IDE) for R.
- Terminal for Atom: A simple terminal for the Atom text editor.
- Eclipse Orion: A modern, open source software development environment that runs in the cloud. Code, deploy and run in the cloud.
- Gravitational Teleport: Gravitational Teleport is a modern SSH server for remotely accessing clusters of Linux servers via SSH or HTTPS.
- Hexlet: Practical programming courses (JavaScript, PHP, Unix, databases, functional programming). A steady path from the first line of code to the first job.
- Selenoid UI: Simple UI for the scallable golang implementation of Selenium Hub named Selenoid. We use XTerm for streaming logs over websockets from docker containers.
- Portainer: Simple management UI for Docker.
- SSHy: HTML5 Based SSHv2 Web Client with E2E encryption utilising
xterm.js, SJCL & websockets.
- JupyterLab: An extensible computational environment for Jupyter, supporting interactive data science and scientific computing across all programming languages.
- Theia: Theia is a cloud & desktop IDE framework implemented in TypeScript.
- Opshell Ops Helper tool to make life easier working with AWS instances across multiple organizations.
- Proxmox VE: Proxmox VE is a complete open-source platform for enterprise virtualization. It uses xterm.js for container terminals and the host shell.
- Script Runner: Run scripts (or a shell) in Atom.
- Whack Whack Terminal: Terminal emulator for Visual Studio 2017.
- VTerm: Extensible terminal emulator based on Electron and React.
- electerm: electerm is a terminal/ssh/sftp client(mac, win, linux) based on electron/node-pty/xterm.
- Kubebox: Terminal console for Kubernetes clusters.
- Azure Cloud Shell: Azure Cloud Shell is a Microsoft-managed admin machine built on Azure, for Azure.
- atom-xterm: Atom plugin for providing terminals inside your Atom workspace.
- rtty: A reverse proxy WebTTY. It is composed of the client and the server.
- Pisth: An SFTP and SSH client for iOS
- abstruse: Abstruse CI is a continuous integration platform based on Node.JS and Docker.
- Azure Data Studio: A data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux.
- FreeMAN: A free, cross-platform file manager for power users
- Fluent Terminal: A terminal emulator based on UWP and web technologies.
- Hyper: A terminal built on web technologies
- Diag: A better way to troubleshoot problems faster. Capture, share and reapply troubleshooting knowledge so you can focus on solving problems that matter.
- GoTTY: A simple command line tool that shares your terminal as a web application based on xterm.js.
- genact: A nonsense activity generator.
- cPanel & WHM: The hosting platform of choice.
- Nutanix: Nutanix Enterprise Cloud uses xterm in the webssh functionality within Nutanix Calm, and is also looking to move our old noserial (termjs) functionality to xterm.js
- SSH Web Client: SSH Web Client with PHP.
- Shellvault: The cloud-based SSH terminal you can access from anywhere.
- Juno: A flexible Julia IDE, based on Atom.
- webssh: Web based ssh client.
- info-beamer hosted: Uses Xterm.js to manage digital signage devices from the web dashboard.
Do you use xterm.js in your application as well? Please open a Pull Request to include it here. We would love to have it in our list. Note: Please add any new contributions to the end of the list only.
ReleasesReleases
Xterm.js follows a monthly release cycle roughly.
The existing releases are available at this GitHub repo's Releases, while the roadmap is available as Milestones.
ContributingContributing
You can read the guide on the wiki to learn how to contribute and setup xterm.js for development.
License AgreementLicense Agreement
If you contribute code to this project, you are implicitly allowing your code to be distributed under the MIT license. You are also implicitly verifying that all code is your original work. | https://www.ctolib.com/xtermjs-xterm-js.html | CC-MAIN-2019-04 | refinedweb | 1,370 | 50.53 |
As always, one of the first things to check is the event viewer to see if an event was generated detailing the error. Additionally check the %windir%\debug for the adamsetup.log and adamuninstall.log (this last one is only created during the uninstall process). These two logs will tell you where the setup is failing and what should be checked.
It also pays to know that setup errors are written to the registry. If you cannot find the following key there was no failure as the keys are only generated if there was a failure and they are removed after a successful installation.
Registry keys:
HKLM\Software\Microsoft\Windows\CurrentVersion\ADAM_Installer_Results
If the computer is a member of a workgroup and not a domain, verify that the following registry value is 0 and reboot the machine before attempting to run setup again.
HKLM\Control\CurrentControlSet\Control\LSA\forceguest
When you are installing ADAM when not connected to the domain, check if you are trying to install the ADAM service with the Network Service (NetworkService account). If so you will need to connect to the domain to allow this account to resolve or choose a local account for the ADAM service account.
Delete the following registry key:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ADAM_Shared
When you add an ADAM user to the administrator group of the schema or configuration container you get the error "The name referenced is invalid." This error is by design. An ADAM user cannot be an administrator of the whole instance. Users are not allowed in the Configuration container and groups cannot have cross-NC membership.
This can be done during setup, but if it was not done at that time, you will have to create the partition via DSMGMT or LDP. You must be logged on with the credentials that were used to create the ADAM instance. This account became the ADAM administrator when the instance was created. Below is an example of how to do this in DSMGMT.
1. Open the ADAM command prompt.
2. Type dsmgmt.
3. Type partition management.
4. Type connections.
5. Type connect to server Where servername is the name or IP address of the server and the port of the ADAM instance.
6. Type quit.
7. Type list to list the existing partitions. Partitions cannot have the same names even if the DN type is different. The following DN types are supported C,CN,DC,L,O,OU.
8. To create a new application partition type in create NC %1 %2 %3 where %1 is the DN of the partition, %2 is the objectclass , %3 is the server:port number or type in NULL for the currently selected instance.
This can be done with Dsmgmt also. Do this on the machine that you want to hold the new replica partition. Follow Steps 1 through 6 above for adding a partition, then for Step 7 run the following command:
Add NC replica
This can happen if you have the objectclass of container with a DC=domain,DC=com style partition. This objectclass is domainDNS.
One possible cause is if the objectclass is domain instead of domainDNS.
This can happen if you choose an objectclass that does not exist. Here is a list of the types of objects and the objectclass. The first name in the DN is the objectclass that you use
DC = domainDNS
O = Organization
CN = Container
C = Country
L = Locality
OU= OrganizationalUnit
If you get this error exit out of Dsmgmt and go back in. This can occur after you try to create a partition and it fails.
You cannot create a partition with the same name but a different types. This is not allowed.
If the schema extensions were not added during setup you will need to add these with ldifde before you can add users to your ADAM instance. These are stored in the %WinDir%\ADAM folder by default.
Error: I cannot add ADAM users to the admins group for the ADAM instance "the name reference is invalid"
This is by design. ADAM users cannot be administrators of the instance and they cannot be added to the configuration container. Only the ADAM administrators can do this.
For this you must enter 2147483650 for global group or 2147483656 for universal Group. Since ADAM does not have a global catalog or domains, it does not matter which type is used.
OUs can only be created under the following type of namespaces by default DC, O, C, and OU. If you want to change this behavior you will have to add the container that you want to the possSuperiors attribute of the organizational unit in the schema.
This issue is resolved with the following hotfix:
817583 Active Directory Services does not request secure authorization over an SSL connection
Since ADAM is based on the active directory basic troubleshooting is the same. In order for the directory to replicate we must have name resolution, physical connectivity and the correct credentials to authenticate to the machine ADAM is running on.
Troubleshooting steps
1. Look at the Event log for that instance, look for replication or KCC errors.
2. Is the machine and its replication partners in a domain, workgroup, separate forests.
3 .If the machine is XP and it is in a workgroup, the following registry key must be changed to zero and the machine rebooted
HKLM\Control\CurrentControlSet\Control\LSA\forceguest
4.Use ADAM Adsiedit to connect to see which value is set for the attribute msDS-ReplAuthenticationMode in the root of the Configuration container:
A - ADAM Service accounts must be using the same name and password. Machines in a workgroup must use this value for replication to work.
B - Kerberos with failover to NTLM. This is the default setting if the machine ADAM is installed on is a domain member.
C - Kerberos only, no failover to NTLM.
As name resolution is required for replication to work DNS, NETBIOS, WINS, network broadcasts or correct entries in the HOST file are needed. Note that only host records in the DNS service are used.
Network connectivity
Required ports:
1. 389 TCP (LDAP) or TCP 686 (LDAPS) (these can vary if you are using a different port number for your ADAM instance)
2. 88 TCP/UDP (Kerberos)
3. 53 TCP/UDP (DNS)
4. 445 TCP/UDP (SMB over IP traffic)
Service Principal Names
SPNs are generated when ADAM is installed and updated, when the service starts and are created as an attribute on the User account that is running the ADAM service. If it is running under network service they get created as an attribute of the computer object. If they are not created you will receive an Event ID 2516. This event will tell you what object it tried to create them under and why it failed. You will also get an Event ID 2519 that will give you a script and its location. This script will be using repadmin /writespn to manually add the SPNs.
Check for repadmin errors by running:
1. repadmin /showrepl server:port
2. repadmin /showutdvec (shows end to end replication from the perspective of a single DSA)
3. dsdiag /v /s:server:port
ADAM Service Discovery
Service Connection Points (SCP) objects are created under the machine that hosts the ADAM service. They are created or updated when the service starts and require the ADAM service account to have Create Child rights on the computer object. If the SCP cannot be created you will receive an Event ID 2537 that will describe why it could not be created.
Note that SCPs are not required and the creation of these can be disabled.
Troubleshooting Authentication Security and Certificates
Application Unable to Authenticate with ADAM
1. Verify a user can authenticate to ADAM via LDP using the server name and port number.
2. If ADAM is running on Windows XP, verify the following registry value is set to 0:
HKLM\System\CCS\Control\LSA\forceguest
3. By default anonymous binds are disabled, so an application attempting them will fail. To enable anonymous LDAP operations in ADAM, you must set the seventh character of the dsHeuristics value to 2.
4. Verify the ADAM service is running and check the event log for errors.
5. Verify what type of user is involved - ADAM User, proxy User, local user, or Windows security principal.
6. If a proxy user or Windows security principal is being used, verify that a domain is available. Verify there is a valid secure channel with the domain for the ADAM server. Verify network access, name resolution, DNS to a domain controller. Is there a domain controller available? Can the user logon to a workstation? Is replication both ADAM and AD working (repadmin). Basic workstation/logon troubleshooting applies here.
7. If the user is an ADAM user, a simple bind is used and must be done over SSL, since the password is sent in plain text.
8. Is the ADAM user account locked out or disabled: Check the attribute on the user object msDs-userpassworexpired, msDS-UserAccountAutoLocked or msDS-UserAccountDisabled. This will default to true if you have a password policy enabled and the password is blank or does not meet the password policy requirements.
9. Are we connecting over SSL? If so can you connect over normal LDAP? Check the certificates (see the next issue).
Cannot Bind to ADAM over SSL
1. By default password changes in ADAM must be over SSL, but to do SSL we need a certificate From a Certificate Server CA, or a third party Certificate.
2. Request a server certificate for the Windows machine hosting the ADAM instance. Use the FQDN of the machine for the name of the certificate. Make sure to check the box to allow it to be exportable to the machine store.
3. Check to ensure the certificate was properly installed. Via the Certificates MMC snap-in for the computer account.
4. Allowing ADAM to use the server certificate, by adding it to the ADAM service "My store" or place it in the machine personal store and change permissions so that the ADAM service can read it. To give the ADAM service account permission to the machine certificate. Read and execute must be given to the file with the latest time stamp in the following location:
Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys
5. Set up the client to trust the rootCA and certificate path of the CA that issued the server certificate. Do this through the CA website. Export the CA certificate and certification path. Import these into the Trusted Root Store in the Certificates MMC snap-in.
SASL Bind for ADAM Security Principal
Simple LDAP binds are sent in plain text, which is why SSL should be used for security. Simple binds are the only way to bind to ADAM for an ADAM security principal. SASL binds (using Kerberos, NTLM, or Negotiate) are used by local or domain Windows security principals. Bind redirection for ADAM proxy objects use simple LDAP binds to ADAM and then a SASL bind to Active Directory to authenticate the user.
Unable to See Objects after Binding to ADAM
Is the ADAM user a member of the Readers Built in Group? By default ADAM users are placed in the Users Group which does not have any read permissions to the partition.
Unable to Bind to ADAM with an Active Directory Account or Bind Redirection with LDP
1. On the Connection menu, click Connect, and then connect to your ADAM instance on a new connection.
2. On the Options menu, click Connection Options.
3. In Option Name, in Value click LDAP_OPT_SIGN (enables/disables Kerberos signing prior to binding using the LDAP_AUTH_NEGOTIATE flag), type 1, and then click Set.
4. In Option Name, in Value click LDAP_OPT_ENCRYPT (enables/disables Kerberos encryption prior to binding using the LDAP_AUTH_NEGOTIATE flag) type 1, click Set, and then click Close. Note this does not work on Windows XP.
5. Bind to your ADAM instance with LDP by clicking Bind on the Connection menu.
6. In User, type in the distinguished name (DN) of the proxy object.
7. Make sure the Domain option is not selected.
8. In Password, type the password that is associated with the Active Directory user you specified.
Using a Different Security Principal Other Than User, Person or inetOrgPerson
Any object can be a security principal by adding the msDS-bindableobject auxiliary class and the unicodePwd attribute to the schema definition of the object class in the ADAM schema.
Using Network Load Balancing with ADAM
Follow the steps above and ensure that LDAPS is working by by binding to LDP using SSL. If this works, proceed with binding a wildcard certificate.
Unable to Use Basic Authentication with IIS to Authenticate ADAM users
By default IIS cannot use ADAM as its primary authentication for ASP.NET pages. A forms authentication mechanism that uses the ADAM instance for user verification must be used.
Outlook or Windows Address Book Failure to Logon to ADAM with error: "The specified directory service has denied access. Check the Properties for this directory service and verify that your Authentication Type settings and parameters are correct."
The client software is configured to logon with the simple name not the distinguished name.
No Security Tab in ADAM Adsiedit
All security setting within ADAM must be done through DSACLS, LDP, or using a script.
Storing Application Policies for Authorization Manager with ADAM
For this to work you must first install the AZMAN schema extension then use a tool such as ADAM-ADSIedit to create a container to hold the application policy store.
1. In AZMAN, right-click the root Authorization Manager node in the tree view and select New Authorization Store.
2. Select Active Directory as the store type and specify the LDAP distinguished name (DN) of the store object to be created or managed specifying the ADAM server name and LDAP port as follows:
servername:/cn=,CN=
Obtaining an Object Identifier
Using the Unique GUID for an ADAM Instance to Modify the Schema or Configuration Container
It is not necessary to use the unique GUID for an ADAM instance to modify the schema or configuration container. The ADAM version of Ldifde allows you to use the #schemaNamingContext and #configurationNamingContext variables for this purpose.
Error Importing LDIF File: Add error on line 1: No Such Attribute The server side error is "The parameter is incorrect." 0 entries modified successfully. An error has occurred in the program
Make sure you are using the ADAM version of LDIFDE, which is located in %windir%\ADAM by default.
Error Importing Users: Add error on line 2: Unwilling To Perform The server side error is "The modification was not permitted for security reasons."
Print | posted on Tuesday, July 5, 2011 10:02 AM |
Filed Under [
Platforms
] | http://geekswithblogs.net/marcde/archive/2011/07/05/article-adam-troubleshooting.aspx | CC-MAIN-2019-51 | refinedweb | 2,467 | 62.88 |
09 May 2011 08:09 [Source: ICIS news]
LONDON (ICIS)--Dutch specialty chemicals firm DSM said on Monday it will build a commercial scale bio-based succinic acid plant in ?xml:namespace>
The plant to be built in Cassano Spinola will have a capacity of about 10,000 tonnes/year and is scheduled to come on stream in the second half of 2012. The facility will be
Financial details of the project were not disclosed.
Succinic acid is used in the manufacture of polymers, resins, food and pharmaceuticals.
Early last year, DSM and Roquette opened a bio-based succinic acid demonstration plant in
“The new plant in Italy will allow customers in Europe, North America and Asia to make larger volume commitments to their [own] customers," said Jean-Bernard Leleu, deputy CEO at Roquette, which produces starch and starch derivatives.
“Our proprietary yeast-based fermentation process not only allows cost effective production; it also eliminates salt waste and other by-products and thus improves the overall eco-footprint of end-products,” said Rob van Leen, chief innovation officer of D | http://www.icis.com/Articles/2011/05/09/9457949/dsm-roquette-to-build-bio-based-succinic-acid-plant-in.html | CC-MAIN-2014-52 | refinedweb | 180 | 51.52 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.