1994 in sports describes the year's events in world sport.
Alpine skiing
- January 29 – death of Ulrike Maier (26), Austrian skier, who broke her neck when she crashed during a World Cup downhill race at Garmisch-Partenkirchen
- Alpine Skiing World Cup
- Men's overall season champion: Kjetil André Aamodt, Norway
- Women's overall season champion: Vreni Schneider, Switzerland
American football
- Super Bowl XXVIII – Dallas Cowboys won 30-13 over the Buffalo Bills
- October 23 – in a game in which the New Orleans Saints beat the Los Angeles Rams 37-34, Robert Bailey of the Rams sets the NFL record for the longest punt return (103 yards), while Tyrone Hughes of the Saints sets the NFL single-game records for kickoff return yards (304) and total return yards (347) and ties the single-game record for kickoff returns for touchdowns (2).
- November 13 – Drew Bledsoe sets NFL single-game records for pass attempts (70) and pass completions (45), helping the New England Patriots beat the Minnesota Vikings 26-20.
Association football
- July 2 – death of Andrés Escobar, Colombian player, who was shot dead apparently because of an own goal he had scored in a World Cup match
Athletics
- February 20 – in Boston, Massachusetts, Ireland's 41-year-old Eamonn Coghlan becomes the first man over the age of forty to run a sub-four-minute mile, clocking 3:58.15.
- August – 1994 European Championships in Athletics held at Helsinki
- August – 1994 Commonwealth Games held at Victoria, Canada
Australian rules football
- Australian Football League
- The West Coast Eagles win the 98th AFL premiership (West Coast Eagles 20.23 (143) d Geelong 8.15 (63))
- Brownlow Medal awarded to Greg Williams (Carlton)
Baseball
- January 12 – Steve Carlton, winner of 329 games and four Cy Young Awards, is elected to the Baseball Hall of Fame.
- June 22 – OF Ken Griffey, Jr. leads the Mariners to a 12-3 win over the Angels by stroking his 31st home run of the season. In doing so, Griffey Jr. breaks Babe Ruth's record for most home runs before the end of June.
- September 14 – A labor strike by Major League Baseball players results in the premature termination of the season, and the cancellation of the World Series for the first time since 1904. The Montreal Expos were the league-leading team up to the strike, with a 74-40 record.
- Mets pitcher John Franco breaks Dave Righetti's major league record for left-handers of 252 career saves.
- The Richmond Braves win the International League championship.
- The Albuquerque Dukes win the Pacific Coast League championship.
- The Indianapolis Indians win the American Association championship.
- The Winnipeg Goldeyes win the Northern League championship.
- The Yomiuri Giants win the Japan Series, and in the view of the baseball media, are World Champions.
Boxing
- January 29 – Frankie Randall hands Julio César Chávez his first defeat in 91 professional bouts, winning the WBC world junior welterweight title in the process by a split decision over 12 rounds.
- November 5 – forty-five-year-old George Foreman becomes boxing's oldest heavyweight champion when he knocks out Michael Moorer in the 10th round of a fight in Las Vegas, Nevada.
Canadian football
- Grey Cup – B.C. Lions win 26-23 over the Baltimore Stallions
- Vanier Cup – Western Ontario Mustangs win 50-40 over the Saskatchewan Huskies
Cycling
- Giro d'Italia won by Eugeni Berzin of Russia
- Tour de France - Miguel Indurain of Spain
- World Cycling Championship – Luc Leblanc of France
- Djamolidine Abdoujaparov becomes the first cyclist (and only as of 2007) to win the points classification at the Tour de France and Giro d'Italia in the same year.
Dogsled racing
- Iditarod Trail Sled Dog Race Champion –
- Martin Buser wins with lead dogs: D2 & Dave
Field Hockey
- Men's Champions Trophy: Pakistan
- Men's World Cup: Pakistan
- Women's World Cup: Australia
Figure skating
- World Figure Skating Championships –
- Men's champion: Elvis Stojko, Canada
- Ladies' champion: Yuka Sato, Japan
- Pairs' champions: Evgenia Shishkova and Vadim Naumov, Russia
- Ice dancing champions: Oksana Grishuk and Evgeny Platov, Russia
Gaelic Athletic Association
- Camogie
- Gaelic football
- All-Ireland Senior Football Championship – Down 1-12 d. Dublin 0-13
- National Football League – Meath 2-11 d. Armagh 0-8
- Ladies' Gaelic football
- Hurling
- All-Ireland Senior Hurling Championship – Offaly 3-16 d. Limerick 2-13
- National Hurling League –
Golf
Men's professional
- Masters Tournament - José María Olazábal
- U.S. Open - Ernie Els
- British Open - Nick Price
- PGA Championship - Nick Price
- PGA Tour money leader - Nick Price - $1,499,927
- Senior PGA Tour money leader - Dave Stockton - $1,402,519
Men's amateur
- British Amateur - Lee James
- U.S. Amateur - Tiger Woods becomes the youngest man ever to win the U.S. Amateur, at age 18.
- European Amateur - Stephen Gallacher
Women's professional
- Nabisco Dinah Shore - Donna Andrews
- LPGA Championship - Laura Davies
- U.S. Women's Open - Patty Sheehan
- Classique du Maurier - Martha Nause
- LPGA Tour money leader - Laura Davies - $687,201
- Solheim Cup won by the United States team who beat the European team 13 to 7.
Handball
- Men's European Championship: Sweden
- Women's European Championship: Denmark
Harness racing
- North America Cup - Cam's Card Shark
- United States Pacing Triple Crown races –
- Cane Pace - Falcons Future
- Little Brown Jug - Magical Mike
- Messenger Stakes - Cam's Card Shark
- United States Trotting Triple Crown races –
- Hambletonian - Victory Dream
- Yonkers Trot -
- Kentucky Futurity - Bullville Victory
- Australian Inter Dominion Harness Racing Championship –
- Pacers: Weona Warrior
- Trotters: Diamond Field
Horse racing
Steeplechases
- Cheltenham Gold Cup – The Fellow
- Grand National – Miinnehoma
Flat races
- Australia – Melbourne Cup won by Jeune
- Canada – Queen's Plate won by Basqueian
- France – Prix de l'Arc de Triomphe won by Carnegie
- Ireland – Irish Derby Stakes won by Balanchine
- Japan – Japan Cup won by Marvelous Crown
- English Triple Crown Races:
- 2,000 Guineas Stakes – Mister Baileys
- Epsom Derby – Erhaab
- St. Leger Stakes – Moonax
- United States Triple Crown Races:
- Breeders' Cup World Thoroughbred Championships:
Ice hockey
- June 14 - The New York Rangers won the Stanley Cup for the 1993-1994 season 4 games to 3 over the Vancouver Canucks, ending a 54-year drought.
- October 1 - The NHL locked out its players and the regular season was put on hold for the next 3 1/2 months; the season eventually began under a 48-game schedule that ran into 1995.
- Art Ross Memorial Trophy as the NHL's leading scorer during the regular season: Wayne Gretzky, Los Angeles Kings
- Hart Memorial Trophy – for the NHL's Most Valuable Player: Sergei Fedorov - Detroit Red Wings
- World Hockey Championship
Lacrosse
- The 7th World Lacrosse Championship is held in Manchester, England. The United States win and Australia is the runner-up.
- The Philadelphia Wings beat the Buffalo Bandits 26-15 in the Major Indoor Lacrosse League Championship.
- The Six Nations Chiefs win the Mann Cup.
- The Orillia Rogers Kings win the Founders Cup.
- The New Westminster Salmonbellies win the Minto Cup.
Motor racing
- Stock car racing –
- Sterling Marlin won the Daytona 500
- Jeff Gordon wins the Coca Cola 600
- Jeff Gordon wins the first Brickyard 400
- NASCAR Championship - Dale Earnhardt
- CART Racing - season championship won by Al Unser, Jr.
- Formula One - Michael Schumacher wins the Drivers' Championship.
- The season is marred when, during qualifying for the San Marino Grand Prix, Roland Ratzenberger crashes at the Villeneuve corner and dies from his injuries. The race goes ahead and Ayrton Senna is killed in a crash at the Tamburello corner.
- 24 hours of Le Mans – Yannick Dalmas / Hurley Haywood / Mauro Baldi won, driving a Porsche 962LM
- Rally racing - Didier Auriol won the World Rally Championship
- the team of Francois Delecour / Daniel Grataloup won the Monte Carlo Rally driving a Ford Escort RS Cosworth
- Drag racing - Scott Kalitta won the NHRA "Top Fuel" championship.
Radiosport
- Seventh Amateur Radio Direction Finding World Championship held in Södertälje, Sweden.
Rugby league
- April 30 - London, England: 1993-94 Challenge Cup tournament culminates in Wigan's 26-16 win over Leeds in the final at Wembley Stadium before 78,348.
- June 1 - Brisbane, Australia: 1994 World Club Challenge match is won by Wigan, 20-14 over the Brisbane Broncos at ANZ Stadium before 54,220.
- June 20 - Brisbane, Australia: 1994 State of Origin is won by New South Wales in the third and deciding game of the three-match series against Queensland at Lang Park before 40,665.
- September 25 - Sydney, Australia: 1994 NSWRL season culminates in the Canberra Raiders' 36-12 win over the Canterbury-Bankstown Bulldogs in the grand final at the Sydney Football Stadium before 42,234.
- November 15 - Leeds, England: 1994 Ashes are retained by Australia in the third and deciding game of the three-match series against Great Britain at Elland Road before 39,468.
- December 4 - Béziers, France: Australian captain Mal Meninga plays the last game of his illustrious career, leading Australia to a 74-0 victory over France and scoring the final try of the game.
Snooker
- World Snooker Championship – Stephen Hendry beats Jimmy White 18-17
- World rankings – Stephen Hendry remains world number one for 1994/95
Swimming
- Seventh FINA World Championships, held in Rome, Italy (September 1 – 11)
- Fourth European Sprint Championships, held in Stavanger, Norway (December 3 – 4)
- Germany wins the most medals (13), and the most gold medals (7)
- March 13 – Alexander Popov clocks 21.50 to break the world record in the men's 50m freestyle (short course) in Desenzano del Garda, Italy
Tennis
- Grand Slam in tennis men's results:
- Grand Slam in tennis women's results:
- Davis Cup – Sweden wins 4-1 over Russia.
- Federation Cup – In the last event to be called the "Federation Cup", Spain wins 3-0 over the USA. The following year would see the event renamed the Fed Cup.
Volleyball
- Men's World League: Italy
- Men's World Championship: Italy
- Men's European Beach Volleyball Championships: Jan Kvalheim and Bjørn Maaseide (Norway)
- Women's World Grand Prix: Brazil
- Women's World Championship: Cuba
- Women's European Beach Volleyball Championships: Beate Bühler and Danja Müsch (Germany)
Water Polo
- Men's World Championship: Italy
- Women's World Championship: Hungary
Awards
- Associated Press Male Athlete of the Year – George Foreman, Boxing
- Associated Press Female Athlete of the Year – Bonnie Blair, Speed skating
Opened 7 years ago
Closed 5 years ago
Last modified 4 years ago
#4256 closed Bugs (fixed)
boost::make_shared() may issue stack overflow while constructing large objects
Description
By default, the stack size for a Windows executable is 1 MB. The program below fails with a stack overflow exception. In debug builds, the stack overflow exception is issued with A_Size >= "stack size" / 3. In release builds, due to optimizations, the stack overflow exception is issued with A_Size >= "stack size" / 2.
#include <cstddef>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

const std::size_t A_Size = 512;

struct A
{
    char buf_[A_Size * 1024];
};

int main()
{
    boost::shared_ptr<A> pa(boost::make_shared<A>());
    //boost::shared_ptr<A> pa(new A());
    return 0;
}
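A side note, not part of the original report: the commented-out line above doubles as the usual workaround. Constructing the shared_ptr from a plain new expression allocates A on the heap only, so no large temporary is created on the stack:

boost::shared_ptr<A> pa(new A()); // heap allocation only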
Change History (12)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:4 Changed 6 years ago by
I ran into this using 1.47 yesterday.
I was in debug mode VS2010. I needed a pretty large receive buffer for a TCPReceiver.
The first enum caused a stack overflow error. Reducing the size stopped the error.
Release mode did not complain about either size.
struct TCPRawData {
    void* pParent;
    // this blew the stack:
    enum { max_length = 1048576 };
    // this was fine:
    // enum { max_length = 500000 };
    char buffer[max_length];
    int bytesReceived;
    TCPRawData(void* parent) : pParent(parent) {}
};
comment:5 Changed 6 years ago by
This was the offending code from the TCPReceiver.
TCPReadBuff = boost::make_shared<TCPRawData>(this);
comment:6 Changed 5 years ago by
This is not fixed, at least in Boost 1.50. It's reproducible in VS2008 Debug build. Please reopen.
comment:7 Changed 5 years ago by
The above example (with A_Size = 512) works for me with the latest Boost and VS2005 Debug.
comment:8 Changed 5 years ago by
It does fail with A_Size=1024 though, which is probably what you mean.
comment:9 Changed 5 years ago by
comment:10 Changed 5 years ago by
comment:11 Changed 4 years ago by
Not fixed in boost 1.55 either
comment:12 Changed 4 years ago by
Can you please tell me how to reproduce?
(In [69250]) Fix make_shared to not copy the deleter. Refs #4256. Refs #3875. | https://svn.boost.org/trac10/ticket/4256 | CC-MAIN-2017-47 | refinedweb | 356 | 72.26 |
Having Fun with YOLOKit
Enumerating collections in Objective-C is often verbose and clunky. If you're used to Ruby or have worked with Underscore or Lo-Dash in JavaScript, then you know there are more elegant solutions. That is exactly what the creators of YOLOKit thought when they created this nifty library. YOLOKit's tagline is Enumerate Foundation delightfully and they mean it.
1. Installation
Adding YOLOKit to an Xcode project is very easy with CocoaPods. Include the pod in your project's Podfile, run
pod update from the command line, and import
YOLO.h wherever you want to use YOLOKit.
If you're not using CocoaPods, then download the library from GitHub, add the relevant files to your project, and import YOLOKit's header.
2. Using YOLOKit
YOLOKit has a lot to offer, but in this quick tip I'll only focus on a few of the methods YOLOKit has in its repertoire.
Minimum and Maximum
Let's start simple with extracting the minimum and maximum value of an array. Take a look at the following code snippet to see how it works.
NSArray *numbers = @[ @(1), @(2), @(45), @(-12), @(3.14), @(384) ];

// Minimum
id min = numbers.min(^(NSNumber *n) {
    return n.intValue;
});

id max = numbers.max(^(NSNumber *n) {
    return n.intValue;
});

NSLog(@"\nMIN %@\nMAX %@", min, max);
The above code snippet results in the following output.
MIN -12
MAX 384
The syntax may seem odd and you may be wondering why
min and
max take a block, but this actually adds more power to these methods. You can do whatever you like in the block to determine what the minimum and maximum value of the array is. The following example should clarify this.
NSArray *words = @[ @"this", @"is", @"a", @"example", @"for", @"everyone" ];

// Minimum
id shortest = words.min(^(NSString *n) {
    return (NSInteger)n.length;
});

id longest = words.max(^(NSString *n) {
    return (NSInteger)n.length;
});

NSLog(@"\nSHORTEST %@\nLONGEST %@", shortest, longest);
This code snippet results in the following output.
SHORTEST a
LONGEST everyone
YOLOKit is flexible and doesn't complain about the type of the block arguments. However, to satisfy the compiler, we cast the return value of the block to
NSInteger, because that's what it expects.
Filtering Arrays
Selecting & Rejecting
There are a number of methods to filter arrays, including
select and
reject. Let's see how we can filter the array of numbers and words we created earlier.
NSArray *filteredNumbers = numbers.select(^(NSNumber *n) {
    return n.intValue > 10;
});
NSLog(@"FILTERED NUMBERS\n%@", filteredNumbers);

NSArray *filteredWords = words.reject(^(NSString *n) {
    return n.length <= 2;
});
NSLog(@"FILTERED WORDS\n%@", filteredWords);
You have to admit that this is very nice to look at. It's concise and very legible. The arrays in the above examples are simple, but note that you can use arrays that are much more complex than this. The following example illustrates this.
NSArray *people = @[ person1, person2, person3, person4, person5, person6 ];

NSArray *males = people.select(^(Person *p) {
    return p.sex == 0;
});

NSArray *females = people.reject(^(Person *p) {
    return p.sex == 0;
});
Subarrays
YOLOKit also defines
first and
last, but they don't do what you expect them to do. In other words, they're not equivalent to
NSArray's
firstObject and
lastObject methods. With
first and
last you can create a subarray from the original array. Take a look at the following example.
NSArray *subsetNumbers = numbers.first(3);
NSArray *subsetWords = words.last(2);

NSLog(@"SUBSET NUMBERS\n%@", subsetNumbers);
NSLog(@"SUBSET WORDS\n%@", subsetWords);
The above code snippet results in the following output.
SUBSET NUMBERS
( 1, 2, 45 )
SUBSET WORDS
( for, everyone )
Manipulating Arrays
Sorting
Sorting an array is trivial with YOLOKit. Let's see what it takes to sort the array of numbers we created earlier. It's that easy.
NSArray *sortedNumbers = numbers.sort;
NSLog(@"%@", sortedNumbers);
Uniquing
One of the benefits of using
NSSet is that it doesn't contain duplicate objects. However, uniquing an array of objects is trivial with YOLOKit. Let's add a few additional numbers with YOLOKit's
concat method and then unique the array with
uniq.
// Concatenate
numbers = numbers.concat(@[@1, @2, @3, @4]);
NSLog(@"CONCAT %@", numbers);

// Unique & Sort
numbers = numbers.uniq.sort;
NSLog(@"UNIQ %@", numbers);
Have you noticed I also sorted the array by chaining
uniq and
sort? The goal isn't to turn Objective-C code into Ruby or JavaScript, but I'm sure you agree that this code snippet is concise, and very easy to read and understand.
Reversing & Shuffling
// Reversing
NSArray *reversedNumbers = numbers.reverse;

// Shuffling
NSArray *shuffledWords = words.shuffle;

NSLog(@"REVERSED\n%@", reversedNumbers);
NSLog(@"SHUFFLED\n%@", shuffledWords);
The above code snippet results in the following output.
REVERSED
( 384, "3.14", "-12", 45, 2, 1 )
SHUFFLED
( for, is, everyone, example, a, this )
Other Methods
There are a lot of other methods to work with arrays, such as
rotate,
sample,
without,
set,
transpose, etc. I encourage you to browse YOLOKit on GitHub to find out more about them.
There are also methods that can be used with
NSDictionary,
NSNumber, and
NSString. The following code snippet shows you how to convert a string into an array of words.
id wordsInString = @"You only live once. Right?".split(@" ");
NSLog(@"STRING %@", wordsInString);
STRING ( You, only, live, "once.", "Right?" )
3. Considerations
Code Completion
Because of YOLOKit's odd syntax, Xcode won't be of much help when it comes to code completion. It will show you a list of suggestions for YOLOKit's methods, but that's about it. If you want to use YOLOKit, you'll have learn the syntax.
Performance
YOLOKit isn't optimized for performance as this GitHub issue shows. However, it does make your code prettier and more readable. Using a
for loop to loop over an array will be faster and more performant than YOLOKit's methods and it's important that you keep this in mind.
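For comparison, here is roughly what the earlier select() example looks like as a plain loop. This is a sketch that reuses the hypothetical Person class from the filtering section above:

// Plain fast enumeration: more verbose than select(), but there is no
// block-call overhead per element.
NSMutableArray *males = [NSMutableArray array];
for (Person *p in people) {
    if (p.sex == 0) {
        [males addObject:p];
    }
}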
Conclusion
Do I recommend YOLOKit? Yes and no. The above considerations shouldn't keep you from using YOLOKit, but make sure that you don't use YOLOKit if performance is important, because there are better options available—like the good ol'
for loop.
The long and the short of it is that you should only use YOLOKit if you feel that it adds value to your project. Also consider that your colleagues need to learn and appreciate YOLOKit's syntax. I think YOLOKit is a great project that clearly shows how incredibly expressive Objective-C can be. For me, that's the most important lesson I take away from YOLOKit.
We’ve been using the load average to see the health of the servers.
There are several drawbacks associated with using the load average:
Networking is the first thing that comes to many people’s minds when we are talking about containers.
Most container technologies use the Network Namespaces feature of the Linux Kernel. The network namespaces provide an isolated network stack in the operating system.
You can create a virtualized network stack with its…
I have a bunch of Bash scripts. I separated them into groups of three. The first group has 3 scripts, the second group has 3 scripts, the third group has…
VPC (Virtual Private Cloud) is one of the most important services of AWS. You can create a redundant network on VPC.
As you might guess, you can create a public network and a private network on AWS. …
Going serverless is the fashion these days. However, some problems are still there. Deployment!
Amazon Web Services has introduced canary release for Lambda functions, so we will be able to roll out new software versions in production slowly.
The first version of the Lambda function (index.js):
exports.handler =…
IPTraf is one of the network debug tools. You can monitor the network activity via IPTraf.
You can set up some filters on IPTraf.
Open the filters: | https://adil.medium.com/ | CC-MAIN-2021-43 | refinedweb | 219 | 67.86 |
Create a Virtual Machine In Azure And Attach An Empty Disk To It
Jul 08, 2016.
In this article, you will learn how to create a virtual machine in Azure and attach an empty disk to it.
Manage Search Schema Of Search Administration At The SharePoint Admin Center On Office 365
Apr 25, 2016.
In this article you will learn how to manage Search Schema of Search Administration at the SharePoint Admin Center on Office 365.
How To Generate A Script In SQL Server 2012
Dec 19, 2015.
In this article I will show you how to generate a script with schema and data and create a new database in SQL 2012.
About Schema In SQL Server
Nov 24, 2015.
This article will help you to know about Schema and view the Schema changes history in SQL Server.
SharePoint: Create List With Lookup, Choice, Date And Person Fields (Schema)
Oct 13, 2015.
In this article you will learn how to create a list with Lookup, Choice, Date and Person Fields in SharePoint.
Display a Message When the ListView Becomes Empty in Windows Store Apps
May 06, 2015.
If you need to display a message when a ListView no longer has any items when creating a Windows Store app, there isn't all that much to it!
Manage SQL Azure Security
May 03, 2015.
In this article you will learn how to manage SQL Azure Security: Logins, Users, Roles, Schemas and Permissions.
Retrieve Checked Out Documents From Site Collection Using Content Search Web Part
Mar 27, 2015.
This article helps in retrieving the Checked Out documents of the current user from the entire Site collection using a Content Search web part in SharePoint 2013.
How to Display the Empty GridView in case of no Records in Database
Mar 23, 2015.
In this article you will learn to display an empty GridView when there are no records in the database.
Schemas in SQL Server
Feb 03, 2015.
This article explains schemas in SQL Server.
Create Content Type Using Schema
Nov 17, 2014.
In this article we can explore how to create a content type using XML Schema.
Xml and Schema Validator
Aug 22, 2014.
This article shows how to create a simple and generic module for validating XML files with their respective schema files.
Entity Framework: Code First Data Annotations
Aug 13, 2014.
The Entity Framework Code First approach allows us to use our POCO (domain) classes as the model, and Entity Framework uses these classes for querying data, change tracking, and other update functions.
Content Result in Controller Sample in MVC: Day 9
Aug 04, 2014.
In this article we will see how to use content results in a controller in MVC.
Introducing ASP.Net Web API 2- Adding Controller: Day 2
Jul 21, 2014.
This article describes how to create an empty ASP.NET Web API 2 Controller and how, by using jQuery, we can call the Web API from a web form.
Checking For Empty or Null String in C#
Mar 10, 2014.
In this article we will look at how to simplify checking for NULL and empty strings using C#.
Memory-Optimized Tables in SQL Server 2014
Mar 03, 2014.
In this article you will learn about memory-optimized tables in SQL Server 2014.
Using Entity Framework to Work With Database
Feb 11, 2014.
This article describes the database creation with the Entity Framework. You can also learn to apply the Data Annotation attributes in the database schema creation.
Query to Find All About Database Schema
Jan 26, 2014.
In this short tip you will learn how to find information about a database schema in SQL Server.
Creating Web Forms Application Using Visual Studio 2013
Dec 26, 2013.
In this article I am creating the Web Forms Application by using the Empty Project Template in Visual Studio 2013.
Working With ASP.Net Identity in Empty Project in Visual Studio 2013
Dec 10, 2013.
In this article you will learn to work with ASP.NET Identity in an Empty Project template in Visual Studio 2013.
Working With MVC 5 in Visual Studio 2012
Nov 27, 2013.
This article describes how to work with an Empty MVC 5 Project using Visual Studio 2012.
Oracle SQL Commands : Part 3
Oct 29, 2013.
In Oracle we can create our own procedures. A procedure is a collection of SQL statements that can be called by any valid object name; to create a user-defined procedure we use the CREATE PROCEDURE command.
Show Alert And Focus On Textbox If Empty Using Knockout
Oct 10, 2013.
In today's article I will show how to display an alert and set focus on a textbox if it is found empty, using Knockout.
Copy Table Schema and Data From One Database to Another Database in SQL Server
Oct 03, 2013.
This article is all about how to copy table and its data in SQL Server using Query as well as graphically.
Let’s Understand the Levels of Document Object Model (DOM)
Sep 27, 2013.
This article describes the levels of the Document Object Model (DOM).
Validating an XML Document Programmatically
Aug 18, 2013.
This article provides an example of validating an XML document using the XmlReader and XmlSchema classes of System.Xml and System.Xml.Schema.
Introduction to Web Project Templates in Visual Studio 2013 Preview
Aug 02, 2013.
This article introduces ASP.NET Web Project development using the various project templates in Visual Studio 2013 Preview.
5 Tips to Improve Performance of C# Code: Part 3
Jul 13, 2013.
Welcome to the "C# Performance Improvement Article Series". This is my third presentation.
Filter Array Element in PHP
Jun 29, 2013.
In this article I will explain filtering array elements in PHP.
Schema Compare Using Visual Studio
May 25, 2013.
Consider a situation where we need to compare multiple databases and update one of those with latest changes. This can be easily achieved by using Schema Comparison Utility available in VS 2013/2012/2010/2008/2005, almost all versions.
The Database Principal Owns a Schema in the Database, and Cannot be Dropped
May 06, 2013.
I encountered a problem today and thought to share it with all of you, so that if you confront such an issue in the future, this article may help you.
Reverse String and Empty String in PHP
Feb 28, 2013.
In this article I describe how to reverse a string and how to identify whether a string is empty or not in PHP.
Visual Studio 2012 SQL Schema Comparison
Jan 30, 2013.
Visual Studio 2012 is very much a developer friendly and robust IDE. This article covers the enhanced schema comparison tool in terms of friendliness using self explanatory pictures/images.
Schema in SQL Server 2012
Dec 03, 2012.
In this article I describe schemas in SQL Server 2012.
Squadron - Empty List Addin
Nov 24, 2012.
In this article we can explore a new Addin inside Squadron for SharePoint 2010 tool.
DetailsView Control in ASP.Net: Part 3
Nov 16, 2012.
This is my series of articles on DetailsView Control and in this article we will discuss how to display Empty Data message with DetailsView Control using EmptyDataText and EmptyDataTemplate.
Secure WS in VB.NET
Nov 10, 2012.
This code covers the .NET (VB) implementation of the security of web services using the Microsoft “The Favorites Service” security modified schema.
Generating XML from SQL Database in VB.NET
Nov 10, 2012.
This sample shows how you can obtain a Dataset from (in this case) a SQL Server database, and then write it out to an XML Document. As an "Extra Added Bonus(tm)", it can show you how to write the schema as well.
XML Schema Validator in VB.NET
Nov 10, 2012.
The XML Schema Validator checks if a given XML document is well formed and has a valid schema model. If it finds the document is not a valid XML schema, it generates an error describing the problem in the schema.
XML TreeView in VB.NET
Nov 10, 2012.
Secure WS in VB.NET
Nov 09, 2012.
This code covers the .NET (VB) implementation of the security of webservices using the Microsoft “The Favorites Service” security modified schema.
Architecture of Search Schema in SharePoint 2013
Oct 23, 2012.
In this article we can see some very good information about the Search Schema of SharePoint 2013. The search index is one of the most important elements in search architecture. What is in our search index determines what people will find when they look for information by entering search queries or by interacting with internet or intranet pages.
The Name 'Scripts' Does Not Exist in the Current Context
Oct 16, 2012.
There is a very small bug in the MVC 4 Empty Template that can be seen when you create a new MVC 4 Application with an "Empty" template and start adding views for CRUD operations (using scaffolding). And when you run the application and try to navigate you get the error "The Name 'Scripts' Does Not Exist in the Current Context".
CSS Selectors in HTML : Part 2
Oct 10, 2012.
Today, we are going to explore CSS Selectors in HTML.
How to Localize Site Columns in SharePoint 2010 Using Visual Studio 2010
Jun 20, 2012.
In this article you will be seeing how to localize site columns in SharePoint 2010 using Visual Studio 2010.
Merge the Content of DataSets in ASP.NET
Jun 17, 2012.
This article shows the use of the Merge() method in C# to merge two or more DataSet objects that have largely similar schemas, so that they can exist in the same data container.
Entity Framework: Part 1
Jan 18, 2012.
The Entity Framework facilitates the development and conceptual use of application models for development instead of directly using relational storage schema.
TextBoxWatermarkExtender Control in AJAX
Jan 17, 2012.
XML SchemaValidator
Dec 01, 2011.
This article shows how to validate an XML document against a schema using C#. For demonstration, I have created a GUI application using Visual Studio 2005 Express Edition.
Create a Strongly Typed DataSet Using The XML Schema Definition Tool (XSD.exe)
Jun 27, 2011.
In this article we are going to see how to create a strongly typed DataSet from a XSD schema file using the XML Schema Definition Tool (XSD.exe).
Some Cool and Usefull Tips and Tricks For GridView
Apr 11, 2011.
Here are some cool and useful tips and tricks for GridView in ASP.NET.
SQL Server Schema Comparison in Visual Studio 2010
Mar 05, 2011.
This article explains the new tool in Visual Studio 2010 that lets users view a comparison report of two different database schemas (say, a development and a production database) in order to find the objects that have changed for the current release.
Schemas in SQL Server
Oct 18, 2010.
Schemas are nothing but a logical grouping of objects, whether tables, views, stored procedures, functions, or any other DB objects.
Model-First Design using ADO.NET Entity Framework 4.0
Jul 19, 2010.
In this article we are going to see the second most important part of the ADO.NET Entity Framework 4.0.
Validation in Linq to SQL and ASP.NET
May 19, 2010.
In this article, I will show how to add business rules to Linq to SQL data model default set of schemas.
Compare Two XML Files Using .Net
Mar 23, 2010.
In this tool, we validate the schema and compare the data of two XML files.
Getting a Database Schema
Mar 11, 2010.
In this article I will explain getting a Database Schema.
Creating a Guest Book in ASP.NET
Jan 29, 2010.
In this article I will explain Creating a Guest Book in ASP.NET.
Using the Data Form Wizard in ADO.NET
Dec 17, 2009.
In this article I will explain about using the Data Form Wizard in ADO.NET.
Adding Typed DataSets in ADO.NET
Dec 13, 2009.
In this article I will explain about Adding Typed DataSets in ADO.NET.
Working with DataSets
Dec 04, 2009.
In this article I will explain about working with DataSets.
Visual Studio .NET and XML Support
Dec 02, 2009.
In this article I will explain Visual Studio .NET and XML support.
An XML Document and its Items
Nov 19, 2009.
In this article I will explain an XML document and its items.
XML Namespaces
Nov 18, 2009.
In this article I will explain XML namespaces, DTDs and schemas, and Extensible HyperText Markup Language.
Parsing BizTalk Messages in .NET Components Through Orchestration
Apr 30, 2009.
This article discusses how to parse BizTalk messages in .NET Components through Orchestration.
Promoting Schemas Properties in BizTalk Server
Mar 18, 2009.
Create a SQL Server CE Database From SQL Schema File
Mar 04, 2009.
In this article, I will explain how to create a SQL Server CE .sdf database file from a database schema defined in .sql scripts.
Best Practices for Data Transfer in SQL Server 2005
Jun 23, 2008.
This article talks about some best practices and the process of data transfer in SQL Server 2005.
Biztalk Messaging Services
Mar 27, 2008.
This article is intended to illustrate the concepts of mapping in Biztalk Server 2006.
Biztalk Messaging Services: Flat File Schema
Jan 21, 2008.
This article is intended to consolidate the principles of messaging in Biztalk Server 2006.
View database structure using C#
Jan 16, 2008.
This article describes an easy approach for examining all of the tables, views, and columns in a database.
Transfer DataGrid Row to Another Empty DataGrid
Aug 14, 2006.
This article is about how to transfer a row of one DataGrid to another DataGrid.
Using XML and XSLT
Jan 18, 2006.
XML Stylesheet Transformation(XSLT) is defined as a language for converting XML documents to other document formats. This article shows how to perform the transformation using classes of .NET Framework.
How to Execute Oracle Stored Procedures Dynamically in C#
Nov 10, 2005.
In this article, I will show how we can store the schema of stored procedures in an XML file, and load and run the stored procedures from a UI application using C# and Oracle.
Using Snippets in Visual Studio 2005
Nov 10, 2005.
Visual Studio contains several built-in intellisense code snippets. Snippets are like templates and can be described completely in XML. If you understand the snippet xml schema, you can easily begin to create your own snippets.
Best Approach for Designing Interoperable Web Service
Mar 15, 2005.
This article will clarify and explain in detail the different Web Service Design Methodologies as defined by the Web Services Standardization Groups, clarify the terms, highlight their differences...
TranslateSQL
Oct 05, 2004.
TranslateSQL is a utility for generating SQL-Server SQL scripts based on an existing Oracle schema, in essence 'translating' Oracle schemas into SQL-Server databases.
In-depth Look at WMI and Instrumentation: Part I
Sep 14, 2004.
WMI provides a consistent programmatic access to management information in the enterprise. It uses the typical provider and consumer concept where you have on one side components providing this management information while on the other side management applications can subscribe and consume it.
Retrieving Schema Information Using ADO.NET and C#
Jul 06, 2004.
In this article, I will describe the ways to retrieve the database schema information using System.Data.SqlClient class and the System.Data.OleDb class.
Validating Input Xml Data Files
Jun 28, 2004.
In this article we'll discus two strategies for validating input XML data files. The validation of an XML input file could occur at various instances of processing.
SOAP Client in Windows XP
May 06, 2004.
This article covers the basics of using the SOAP Client software included with Microsoft Windows XP Professional to access Web Services using SOAP.
I'm having a huge problem trying to learn how to export classes from a DLL.
I can do some functions just fine but the classes are mangled :S
I've tried a .def file, and I've used extern "C", but when I did, it threw errors and wouldn't export the class at all.
In Code::Blocks it won't create a .lib file, so I tried linking the .a file, but that still doesn't work. I'm not sure what to do. I'd probably prefer either LoadLibrary or a .lib, but I want to learn how to do it via a .def file.
Exports.hpp
#ifndef EXPORTS_HPP_INCLUDED
#define EXPORTS_HPP_INCLUDED

#define EXPORT __declspec(dllexport)

class EXPORT Point; //Forward declaration of a class I want to export. This is all I did.

#endif // EXPORTS_HPP_INCLUDED
Systems.hpp
#ifndef SYSTEM_HPP_INCLUDED
#define SYSTEM_HPP_INCLUDED

#ifdef _WIN32_WINNT
#undef _WIN32_WINNT
#endif
#define _WIN32_WINNT 0x0500

#include <Windows.h>
#include <TlHelp32.h>
#include <iostream>
#include "Strings.hpp"
#include <Time.h>
#include <vector>
#include "Exports.hpp"

EXPORT DWORD SystemTime();
EXPORT DWORD GetTimeRunning();
EXPORT DWORD TimeFromMark(int TimeMarker);
EXPORT std::string TheTime();
EXPORT int AddOnTermination(void(*function)(void));
EXPORT std::string GetEnvironmentVariables(const std::string Variable);
EXPORT void SetTransparency(HWND hwnd, BYTE Transperancy = 0);

#endif // SYSTEM_HPP_INCLUDED
Then in the CPP file I just have the definitions of each of those functions. They don't have the EXPORT in front of them.
For classes I just forward declare them in the Exports header and put EXPORT between class and the class name.
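While reading around I also found this conventional pattern (just a sketch of what I've seen, with a made-up BUILD_DLL macro): the attribute goes on the class definition itself, not on a forward declaration, and flips between dllexport and dllimport depending on whether you are building or consuming the DLL:

#ifdef BUILD_DLL // define this only while building the DLL
    #define EXPORT __declspec(dllexport)
#else
    #define EXPORT __declspec(dllimport)
#endif

// The attribute sits on the definition:
class EXPORT Point
{
public:
    Point(int x, int y);
    int GetX() const;
    int GetY() const;
private:
    int x_;
    int y_;
};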
My Main file looks like:
#include "System.hpp" //File that includes the exports. BOOL WINAPI }
I have around 200 classes to export and about 300 functions. I don't mind exporting the functions one by one or using a def file with ordinal exportation but I have no clue how to do the classes.
Any idea? How do I write the typedef for a class? Am I exporting them right? | https://www.daniweb.com/programming/software-development/threads/427330/export-classes-and-functions-in-a-dll | CC-MAIN-2021-25 | refinedweb | 319 | 60.11 |
The function double atan2(double y, double x); returns the arc tangent of y/x, expressed in radians. Function atan2 takes into account the sign of both arguments in order to determine the quadrant.
Function prototype of atan2
double atan2(double y, double x);
- y : A floating point value representing a Y-coordinate.
- x : A floating point value representing an X-coordinate.
Return value of atan2
Function atan2 returns the principal arc tangent of y/x, in the interval [-Pi, +Pi] radians.
C program using atan2 function
The following program shows the use of atan2 function to calculate inverse tangent of y/x.
#include <stdio.h>
#include <math.h>

#define PI 3.14159

int main(){
    double Y, X, radian, degree;

    printf("Enter value of Y and X\n");
    scanf("%lf %lf", &Y, &X);

    radian = atan2(Y, X);
    /*
     * Radian to degree conversion
     * One radian is equal to 180/PI degrees.
     */
    degree = radian * (180.0/PI);

    printf("The arc tan2 of %0.4lf and %0.4lf is %0.4lf in radian\n", Y, X, radian);
    printf("The arc tan2 of %0.4lf and %0.4lf is %0.4lf in degree\n", Y, X, degree);

    return 0;
}
Output
Enter value of Y and X
5 5
The arc tan2 of 5.0000 and 5.0000 is 0.7854 in radian
The arc tan2 of 5.0000 and 5.0000 is 45.0000 in degree
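The quadrant handling can be seen directly by comparing atan2 with plain atan. The program below is an extra illustration, not part of the original example: y/x evaluates to -1 for both (1, -1) and (-1, 1), so atan alone cannot tell the two quadrants apart.

#include <stdio.h>
#include <math.h>

int main(){
    /* second quadrant: prints 2.3562 (135 degrees) */
    printf("atan2( 1.0, -1.0) = %0.4lf\n", atan2(1.0, -1.0));
    /* fourth quadrant: prints -0.7854 (-45 degrees) */
    printf("atan2(-1.0,  1.0) = %0.4lf\n", atan2(-1.0, 1.0));
    /* plain atan of -1 gives -0.7854 in both cases */
    printf("atan(-1.0)        = %0.4lf\n", atan(-1.0));
    return 0;
}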
At this point, you should have the GNU tools configured, built, and installed on your system. In this chapter, we present a simple example of using the GNU tools in an AVR project. After reading this chapter, you should have a better feel as to how the tools are used and how a
Makefile can be configured.
This project will use the pulse-width modulator (
PWM) to ramp an LED on and off every two seconds. An AT90S2313 processor will be used as the controller. The circuit for this demonstration is shown in the schematic diagram. If you have a development kit, you should be able to use it, rather than build the circuit, for this project.
The source code is given in demo.c. For the sake of this example, create a file called
demo.c containing this source code. Some of the more important parts of the code are:
iocompat.h tries to abstract away all these differences using some preprocessor #ifdef statements, so the actual program itself can operate on a common set of symbolic names. The macros defined by that file are:
- OCR: the name of the OCR register used to control the PWM (usually either OCR1 or OCR1A)
- DDROC: the name of the DDR (data direction register) for the OC output
- OC1: the pin number of the OC1[A] output within its port
- TIMER1_TOP: the TOP value of the timer used for the PWM (1023 for 10-bit PWMs, 255 for devices that can only handle an 8-bit PWM)
- TIMER1_PWM_INIT: the initialization bits to be set into control register 1A in order to set up 10-bit (or 8-bit) phase and frequency correct PWM mode
- TIMER1_CLOCKSOURCE: the clock bits to set in the respective control register to start the PWM timer; usually the timer runs at full CPU clock for 10-bit PWMs, while it runs on a prescaled clock for 8-bit PWMs
The PWM is being used in 10-bit mode, so we need a 16-bit variable to remember the current value.
This section determines the new value of the PWM.
The newly computed value is loaded into the PWM register. Since we are in an interrupt routine, it is safe to use a 16-bit assignment to the register. Outside of an interrupt, the assignment should only be performed with interrupts disabled if there's a chance that an interrupt routine could also access this register (or another register that uses TEMP); see the appropriate FAQ entry.
The function ioinit() initializes the PWM and enables interrupts.
sleep_mode() puts the processor to sleep until the next interrupt, to conserve power. Of course, that probably won't be noticeable as we are still driving a LED; it is merely mentioned here to demonstrate the basic principle.
This first thing that needs to be done is compile the source. When compiling, the compiler needs to know the processor type so the
-mmcu option is specified. The
-Os option will tell the compiler to optimize the code for efficient space usage (at the possible expense of code execution speed). The
-g is used to embed debug info. The debug info is useful for disassemblies and doesn't end up in the
.hex files, so I usually specify it. Finally, the
-c tells the compiler to compile and stop – don't link. This demo is small enough that we could compile and link in one step. However, real-world projects will have several modules and will typically need to break up the building of the project into several compiles and one link.
$ avr-gcc -g -Os -mmcu=atmega8 -c demo.c
The compilation will create a
demo.o file. Next we link it into a binary called
demo.elf.
$ avr-gcc -g -mmcu=atmega8 -o demo.elf demo.o
It is important to specify the MCU type when linking. The compiler uses the
-mmcu option to choose start-up files and run-time libraries that get linked together. If this option isn't specified, the compiler defaults to the 8515 processor environment, which is most certainly what you didn't want.
Now we have a binary file. Can we do anything useful with it (besides put it into the processor?) The GNU Binutils suite is made up of many useful tools for manipulating object files that get generated. One tool is
avr-objdump, which takes information from the object file and displays it in many useful ways. Typing the command by itself will cause it to list out its options.
For instance, to get a feel of the application's size, the
-h option can be used. The output of this option shows how much space is used in each of the sections (the
.stab and
.stabstr sections hold the debugging information and won't make it into the ROM file).
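For example, the section sizes alone can be listed with the following command (a sketch; its output is not reproduced here):

$ avr-objdump -h demo.elf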
An even more useful option is
-S. This option disassembles the binary file and intersperses the source code in the output! This method is much better, in my opinion, than using the
-S with the compiler because this listing includes routines from the libraries and the vector table contents. Also, all the "fix-ups" have been satisfied. In other words, the listing generated by this option reflects the actual code that the processor will run.
$ avr-objdump -h -S demo.elf > demo.lst
Here's the output as saved in the
demo.lst file:
demo.elf:     file format elf32-avr

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         000000d0  00000000  00000000  00000094  2**1
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .data         00000000  00800060  000000d0  00000164  2**0
                  CONTENTS, ALLOC, LOAD, DATA
  2 .bss          00000003  00800060  00800060  00000164  2**0
                  ALLOC
  3 .stab         0000075c  00000000  00000000  00000164  2**2
                  CONTENTS, READONLY, DEBUGGING
  4 .stabstr      00000d21  00000000  00000000  000008c0  2**0
                  CONTENTS, READONLY, DEBUGGING
  5 .comment      00000011  00000000  00000000  000015e1  2**0
                  CONTENTS, READONLY

Disassembly of section .text:

00000000 <__ctors_end>:
   0:   20 e0           ldi     r18, 0x00       ; 0
   2:   a0 e6           ldi     r26, 0x60       ; 96
   4:   b0 e0           ldi     r27, 0x00       ; 0
   6:   01 c0           rjmp    .+2             ; 0xa <.do_clear_bss_start>

00000008 <.do_clear_bss_loop>:
   8:   1d 92           st      X+, r1

0000000a <.do_clear_bss_start>:
   a:   a3 36           cpi     r26, 0x63       ; 99
   c:   b2 07           cpc     r27, r18
   e:   e1 f7           brne    .-8             ; 0x8 <.do_clear_bss_loop>

00000010 <__vector_8>:
#include "iocompat.h"            /* Note [1] */

enum { UP, DOWN };

ISR (TIMER1_OVF_vect)            /* Note [2] */
{
  10:   1f 92           push    r1
  12:   0f 92           push    r0
  14:   0f b6           in      r0, 0x3f        ; 63
  16:   0f 92           push    r0
  18:   11 24           eor     r1, r1
  1a:   2f 93           push    r18
  1c:   8f 93           push    r24
  1e:   9f 93           push    r25
    static uint16_t pwm;         /* Note [3] */
    static uint8_t direction;

    switch (direction)           /* Note [4] */
  20:   80 91 62 00     lds     r24, 0x0062
  24:   88 23           and     r24, r24
  26:   f1 f0           breq    .+60            ; 0x64 <__SREG__+0x25>
  28:   81 30           cpi     r24, 0x01       ; 1
  2a:   71 f4           brne    .+28            ; 0x48 <__SREG__+0x9>
            if (++pwm == TIMER1_TOP)
                direction = DOWN;
            break;

        case DOWN:
            if (--pwm == 0)
  2c:   80 91 60 00     lds     r24, 0x0060
  30:   90 91 61 00     lds     r25, 0x0061
  34:   01 97           sbiw    r24, 0x01       ; 1
  36:   90 93 61 00     sts     0x0061, r25
  3a:   80 93 60 00     sts     0x0060, r24
  3e:   00 97           sbiw    r24, 0x00       ; 0
  40:   39 f4           brne    .+14            ; 0x50 <__SREG__+0x11>
                direction = UP;
  42:   10 92 62 00     sts     0x0062, r1
  46:   04 c0           rjmp    .+8             ; 0x50 <__SREG__+0x11>
  48:   80 91 60 00     lds     r24, 0x0060
  4c:   90 91 61 00     lds     r25, 0x0061
            break;
    }

    OCR = pwm;                   /* Note [5] */
  50:   9b bd           out     0x2b, r25       ; 43
  52:   8a bd           out     0x2a, r24       ; 42
}
  54:   9f 91           pop     r25
  56:   8f 91           pop     r24
  58:   2f 91           pop     r18
  5a:   0f 90           pop     r0
  5c:   0f be           out     0x3f, r0        ; 63
  5e:   0f 90           pop     r0
  60:   1f 90           pop     r1
  62:   18 95           reti
    static uint8_t direction;

    switch (direction)           /* Note [4] */
    {
        case UP:
            if (++pwm == TIMER1_TOP)
  64:   80 91 60 00     lds     r24, 0x0060
  68:   90 91 61 00     lds     r25, 0x0061
  6c:   01 96           adiw    r24, 0x01       ; 1
  6e:   90 93 61 00     sts     0x0061, r25
  72:   80 93 60 00     sts     0x0060, r24
  76:   8f 3f           cpi     r24, 0xFF       ; 255
  78:   23 e0           ldi     r18, 0x03       ; 3
  7a:   92 07           cpc     r25, r18
  7c:   49 f7           brne    .-46            ; 0x50 <__SREG__+0x11>
                direction = DOWN;
  7e:   21 e0           ldi     r18, 0x01       ; 1
  80:   20 93 62 00     sts     0x0062, r18
  84:   e5 cf           rjmp    .-54            ; 0x50 <__SREG__+0x11>

00000086 <ioinit>:

void
ioinit (void)                    /* Note [6] */
{
    /* Timer 1 is 10-bit PWM (8-bit PWM on some ATtinys). */
    TCCR1A = TIMER1_PWM_INIT;
  86:   83 e8           ldi     r24, 0x83       ; 131
  88:   8f bd           out     0x2f, r24       ; 47
    TCCR1B |= TIMER1_CLOCKSOURCE;
  8a:   8e b5           in      r24, 0x2e       ; 46
  8c:   81 60           ori     r24, 0x01       ; 1
  8e:   8e bd           out     0x2e, r24       ; 46
#if defined(TIMER1_SETUP_HOOK)
    TIMER1_SETUP_HOOK();
#endif

    /* Set PWM value to 0. */
    OCR = 0;
  90:   1b bc           out     0x2b, r1        ; 43
  92:   1a bc           out     0x2a, r1        ; 42

    /* Enable OC1 as output. */
    DDROC = _BV (OC1);
  94:   82 e0           ldi     r24, 0x02       ; 2
  96:   87 bb           out     0x17, r24       ; 23

    /* Enable timer 1 overflow interrupt. */
    TIMSK = _BV (TOIE1);
  98:   84 e0           ldi     r24, 0x04       ; 4
  9a:   89 bf           out     0x39, r24       ; 57
    sei ();
  9c:   78 94           sei
  9e:   08 95           ret

000000a0 <main>:

void
ioinit (void)                    /* Note [6] */
{
    /* Timer 1 is 10-bit PWM (8-bit PWM on some ATtinys). */
    TCCR1A = TIMER1_PWM_INIT;
  a0:   83 e8           ldi     r24, 0x83       ; 131
  a2:   8f bd           out     0x2f, r24       ; 47
    TCCR1B |= TIMER1_CLOCKSOURCE;
  a4:   8e b5           in      r24, 0x2e       ; 46
  a6:   81 60           ori     r24, 0x01       ; 1
  a8:   8e bd           out     0x2e, r24       ; 46
#if defined(TIMER1_SETUP_HOOK)
    TIMER1_SETUP_HOOK();
#endif

    /* Set PWM value to 0. */
    OCR = 0;
  aa:   1b bc           out     0x2b, r1        ; 43
  ac:   1a bc           out     0x2a, r1        ; 42

    /* Enable OC1 as output. */
    DDROC = _BV (OC1);
  ae:   82 e0           ldi     r24, 0x02       ; 2
  b0:   87 bb           out     0x17, r24       ; 23

    /* Enable timer 1 overflow interrupt. */
    TIMSK = _BV (TOIE1);
  b2:   84 e0           ldi     r24, 0x04       ; 4
  b4:   89 bf           out     0x39, r24       ; 57
    sei ();
  b6:   78 94           sei
    ioinit ();

    /* loop forever, the interrupts are doing the rest */
    for (;;)                     /* Note [7] */
        sleep_mode();
  b8:   85 b7           in      r24, 0x35       ; 53
  ba:   80 68           ori     r24, 0x80       ; 128
  bc:   85 bf           out     0x35, r24       ; 53
  be:   88 95           sleep
  c0:   85 b7           in      r24, 0x35       ; 53
  c2:   8f 77           andi    r24, 0x7F       ; 127
  c4:   85 bf           out     0x35, r24       ; 53
  c6:   f8 cf           rjmp    .-16            ; 0xb8 <main+0x18>

000000c8 <exit>:
  c8:   f8 94           cli
  ca:   00 c0           rjmp    .+0             ; 0xcc <_exit>

000000cc <_exit>:
  cc:   f8 94           cli

000000ce <__stop_program>:
  ce:   ff cf           rjmp    .-2             ; 0xce <__stop_program>
avr-objdump is very useful, but sometimes it's necessary to see information about the link that can only be generated by the linker. A map file contains this information. A map file is useful for monitoring the sizes of your code and data. It also shows where modules are loaded and which modules were loaded from libraries. It is yet another view of your application. To get a map file, I usually add
-Wl,-Map,demo.map to my link command. Relink the application using the following command to generate
demo.map.
$ avr-gcc -g -mmcu=atmega8 -Wl,-Map,demo.map -o demo.elf demo.o
Some points of interest in the
demo.map file are:
The
.text segment (where program instructions are stored) starts at location 0x0.
The last address in the
.text segment is location
0x114 (denoted by _etext), so the instructions use up 276 bytes of FLASH.
The
.data segment (where initialized static variables are stored) starts at location
0x60, which is the first address after the register bank on an ATmega8 processor.
The next available address in the
.data segment is also location
0x60, so the application has no initialized data.
The
.bss segment (where uninitialized data is stored) starts at location
0x60.
The next available address in the
.bss segment is location 0x63, so the application uses 3 bytes of uninitialized data.
The
.eeprom segment (where EEPROM variables are stored) starts at location 0x0.
The next available address in the
.eeprom segment is also location 0x0, so there aren't any EEPROM variables.
We have a binary of the application, but how do we get it into the processor? Most (if not all) programmers will not accept a GNU executable as an input file, so we need to do a little more processing. The next step is to extract portions of the binary and save the information into
.hex files. The GNU utility that does this is called
avr-objcopy.
The ROM contents can be pulled from our project's binary and put into the file demo.hex using the following command:
$ avr-objcopy -j .text -j .data -O ihex demo.elf demo.hex
The resulting
demo.hex file contains:
:1000000020E0A0E6B0E001C01D92A336B207E1F700
:100010001F920F920FB60F9211242F938F939F93DD
:10002000809162008823F1F0813071F4809160004A
:100030009091610001979093610080936000009718
:1000400039F41092620004C08091600090916100C8
:100050009BBD8ABD9F918F912F910F900FBE0F90E6
:100060001F90189580916000909161000196909387
:100070006100809360008F3F23E0920749F721E001
:1000800020936200E5CF83E88FBD8EB581608EBD81
:100090001BBC1ABC82E087BB84E089BF78940895BA
:1000A00083E88FBD8EB581608EBD1BBC1ABC82E01B
:1000B00087BB84E089BF789485B7806885BF8895C1
:1000C00085B78F7785BFF8CFF89400C0F894FFCF3D
:00000001FF
The
-j option indicates that we want the information from the
.text and
.data segment extracted. If we specify the EEPROM segment, we can generate a
.hex file that can be used to program the EEPROM:
$ avr-objcopy -j .eeprom --change-section-lma .eeprom=0 -O ihex demo.elf demo_eeprom.hex
There is no
demo_eeprom.hex file written, as that file would be empty.
Starting with version 2.17 of the GNU binutils, the
avr-objcopy command that used to generate the empty EEPROM files now aborts because of the empty input section
.eeprom, so these empty files are not generated. It also signals an error to the Makefile which will be caught there, and makes it print a message about the empty file not being generated.
Rather than type these commands over and over, they can all be placed in a make file. To build the demo project using
make, save the following in a file called
Makefile.
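The Makefile listing itself did not survive in this copy of the page, so here is a minimal sketch reassembled from the commands shown above. The variable names are my own, and recipe lines must be indented with a tab character:

# Minimal sketch, not the original listing.
PRG    = demo
OBJ    = demo.o
MCU    = atmega8
CC     = avr-gcc
CFLAGS = -g -Os -mmcu=$(MCU)

all: $(PRG).hex

%.o: %.c
	$(CC) $(CFLAGS) -c $<

$(PRG).elf: $(OBJ)
	$(CC) -g -mmcu=$(MCU) -Wl,-Map,$(PRG).map -o $@ $(OBJ)

$(PRG).hex: $(PRG).elf
	avr-objcopy -j .text -j .data -O ihex $< $@

clean:
	rm -f $(PRG).elf $(PRG).hex $(PRG).map $(OBJ)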
This Makefile can only be used as input for the GNU version of make.
I needed to create a word cloud for a recent text analysis project I was working on. It wasn’t easy to find resources to actually just create a text cloud without a bunch of other random shit on it, so I decided to make one. Here’s a no-nonsense tutorial on how to create a word cloud in Python. I’ve simplified pages and pages of reading into just 10 lines of Python.
To follow along you’ll need to get an image, here’s the cloud image I used. You’ll also need to install the
matplotlib,
numpy, and
wordcloud libraries. You can install these with the line below in your terminal:
pip install matplotlib numpy wordcloud
Handling Imports
As we always do, the first thing we need to do is handle our imports. From the
wordcloud library we’ll import the
WordCloud function and the
STOPWORDS list. We need the
WordCloud function to create our word cloud and
STOPWORDS are words that don’t make sense to include in the word cloud.
Typical STOPWORDS include its, an, the, etc.
We’ve used
matplotlib.pyplot in some of our tutorials already, such as the one on plotting a random dataset, or checking to see if more polarizing YouTube titles get more views. We use this library for plotting. The other two libraries we’ll use here are
numpy for fast math operations and access to the
array type, and
PIL to load the image. We didn’t explicitly install
PIL above because it comes with
matplotlib.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
Creating the Word Cloud
Now let’s create our word cloud function. This function will take one parameter, the
text that we’ll make the word cloud from. The first thing we’ll do in our function is make a
set out of the
STOPWORDS we imported. Next, let’s make a mask out of the image. This frame mask will be what makes the shape of our word cloud. To make the mask, we’ll open up our image and turn it into an
np.array type object.
Once we have these set up, we can create the word cloud. All we’ll do is call the
WordCloud function and pass it some parameters. In this example, we’ve passed in the maximum number of words we want in the cloud, the mask for the shape, the stop words for the words to ignore, and the background color. After creating the word cloud, we’ll use the
imshow function from
matplotlib.pyplot to show the word cloud and not show the axis. The
interpolation option is for how we want to show the image, to learn more, read about interpolation in matplotlib.
# wordcloud function
def word_cloud(text):
    # words to ignore
    stopwords = set(STOPWORDS)

    # mask that gives the cloud its shape ("cloud.png" is a placeholder path)
    frame_mask = np.array(Image.open("cloud.png"))

    # create the word cloud
    cloud = WordCloud(max_words=200,
                      mask=frame_mask,
                      stopwords=stopwords,
                      background_color="white").generate(text)

    # show the word cloud without axes ("bilinear" is one common choice)
    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.show()
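A quick usage sketch; the file name is just a placeholder for whatever text you want to visualize:

# example usage ("speech.txt" is a placeholder path)
with open("speech.txt") as f:
    sample_text = f.read()

word_cloud(sample_text)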
For examples of word clouds, check out these word clouds about Obama’s Presidency in Headlines
Word Cloud Options/Help
Here are the parameters and options for the
WordCloud function.
" why are you using a function? Just create a constant and use that. Calling a function which always returns the same value is a code smell. That's not to say it is always wrong, but it smells a bit off. How about zero-to-many functions? E.g. you have a situation where calling function() twice might return different values. Okay, here's an example:. Hidden state is generally bad, because it makes it hard to reason about the function call, hard to reproduce results, hard to debug, hard to test. Think about the difference in difficulty in confirming that math.sin() of some value x returns the value 0.5, and confirming that random.random() of some hidden state returns a specific value: py> assert math.sin(0.5235987755982989) == 0.5 versus: py> state = random.getstate() py> random.seed(12345) py> assert random.random() == 0.41661987254534116 py> random.setstate(state) [...] >. Methods that appear to take zero arguments actually take one argument, it is just that it is written in a different place: "some string".upper() is merely different syntax for: upper("some string") with the bonus that str.upper and MyClass.upper live in different namespaces and so can do different things. So I have no problem with zero- argument methods, or functions with default values. Remember that a code smell does not mean the code is bad. Only that it needs to be looked at a bit more carefully. Perhaps it is bad. Or perhaps, like durian fruit, it smells pretty awful but tastes really good. > There are code smells that are the opposite in fact, methods with long > parameter lists are generally seen as code smell (“passing a > paragraph”). Absolutely! You'll get no disagreement from me there. -- Steven | https://mail.python.org/pipermail/python-list/2014-February/666728.html | CC-MAIN-2019-39 | refinedweb | 293 | 69.79 |
Opened 12 years ago
Closed 12 years ago
#3314 closed defect (fixed)
[patch] smart_unicode throws UnicodeDecodeError when given an instance with a UTF-8 encoded string
Description
When smart_unicode gets an instance instead of a string, it uses
unicode(str(s)) to convert it to a string, but if the instance returns a
UTF-8 encoded string you will get a
UnicodeDecodeError.
Attachments (3)
Change History (9)
Changed 12 years ago by
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
Hi Nesh--thanks for spotting this! I don't understand one thing about your patch, why do you first try to decode from ASCII and only use the DEFAULT_CHARSET when it fails? I'd simply do it like this:
def smart_unicode(s):
    if not isinstance(s, basestring):
        s = unicode(str(s), settings.DEFAULT_CHARSET)
    elif not isinstance(s, unicode):
        s = unicode(s, settings.DEFAULT_CHARSET)
    return s
comment:3 Changed 12 years ago by
Yes, that was my first idea, but I'm not sure why
str is used in the first place, so I wrapped this in the
try...except block as a quick fix.
Also, if the
str call is not essential, then we can simply use:
def smart_unicode(s):
    if not isinstance(s, unicode):
        s = unicode(str(s), settings.DEFAULT_CHARSET)
    return s
Regarding the tests, I'll try to add some and send an updated patch (with tests) during the day.
Changed 12 years ago by
smart_unicode patch
Changed 12 years ago by
test case
comment:4 Changed 12 years ago by
The patch is fixed; I also added a special case for objects that implement
__unicode__, plus the test case.
Sorry for double form_utils.diff attachment.
fast fix | https://code.djangoproject.com/ticket/3314 | CC-MAIN-2019-22 | refinedweb | 280 | 65.35 |
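For readers without access to the attachments: based on the comments above, the final smart_unicode presumably ended up roughly like this sketch (not the actual committed code):

def smart_unicode(s):
    # special case for objects that implement __unicode__ (see comment:4)
    if hasattr(s, '__unicode__'):
        return unicode(s)
    if not isinstance(s, basestring):
        s = str(s)
    if not isinstance(s, unicode):
        s = unicode(s, settings.DEFAULT_CHARSET)
    return s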
Let us use an array of characters (str) to store the string and variables nletter, ndigit, nspace and nother to count the letters, digits, white spaces and other characters, respectively.
The program given below first reads a string from the keyboard using the gets function and then uses a while loop to determine the desired counts. An if-else-if statement is used within the while loop to test the ith character and update the respective counter. Note the use of variable ch to minimize access to the array element (possibly improving its efficiency) as well as to improve the readability of the code. Finally, the program prints the counts.
/* Count letters, digits, whitespaces and other chars in a given string */
#include <stdio.h>
int main(void)
{
    char str[81];
    int nletter, ndigit, nspace, nother; /* char counts */
    int i;
    printf("Enter a line of text:\n");
    gets(str);
    /* count characters in string str */
    nletter = ndigit = nspace = nother = 0; /* init counts */
    i = 0;
    while (str[i] != '\0')
    {
        char ch = str[i];
        if (ch >= 'A' && ch <= 'Z' || ch >= 'a' && ch <= 'z')
            nletter++;
        else if (ch >= '0' && ch <= '9')
            ndigit++;
        else if (ch == ' ' || ch == '\n' || ch == '\t')
            nspace++;
        else
            nother++;
        i++;
    }
    /* print counts */
    printf("Letters: %d \tWhite spaces : %d", nletter, nspace);
    printf(" Digits : %d \tOther chars : %d\n", ndigit, nother);
    return 0;
}
The output of the program is given below.
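For example, for the input line "Hello World 123!" there are 10 letters, 2 white spaces, 3 digits and 1 other character, so a run looks like this:

Enter a line of text:
Hello World 123!
Letters: 10     White spaces : 2 Digits : 3     Other chars : 1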
Here’s a fun game written with Python Turtle Graphics. It’s called Arachnophobia, and is basically a spider version of Whack-a-Mole.
Please note spiders are mostly innocent and do not deserve to die just because you may be afraid of them. No spiders were harmed in the production of this game.
You can play the game online at this repl.it. The goal is to use the mouse to click on spiders as they appear. However, this is where we run into some of the limitations of browser-based Python programming, as the fastest time the online version can handle is 1000 ms between “frames” (You’ll see what I mean if you play it…)
Browser Based Version of Arachnophobia Python Turtle Graphics Game
Arachnophobia Python Turtle Graphics Game
You will find that running the game on a desktop or laptop will give you a much better experience. For one, you can change the speed by adjusting the
SPEED constant (try
500, as in 500 ms between frames). Also, if you install
playsound via
pip install playsound, there is a fun sound effect every time you click on a spider.
Below is the code listing for the game. As usual, I strongly encourage you to type the code in for yourself, even if there are parts you don’t understand. But if that seems like a drag, just paste it into a new file for now and save and run.
I will explain some of the details after the listing, but please note that even though we are using a module which some people consider to be for beginners at programming, there are some fairly advanced techniques used, and many of the choices I have made are the result of a considerable amount of experience writing these kinds of games.
If you want to learn more about how to write games using Python, why not get in touch using the details on our contact page for further information or to book a free 1/2 hour no-obligation lesson?
Here’s is the listing:
You will need to download the spider image and save it in the same folder as the program, naming it
spider.gif (right-click, save image as).
import turtle
import random

try:
    import playsound  # Not part of standard Library.
    SOUND = True
except ImportError:
    SOUND = False

WIDTH = 800
HEIGHT = 400
CURSOR_SIZE = 20
SQUARE_SIZE = 50
NUM_ROWS = 5
NUM_COLS = 5
BG_COLOR = "yellow"
TITLE = "Arachnophobia"
COLORS = ("red", "black")
SPEED = 500
NUM_TRIES = 20


def init_screen():
    screen = turtle.Screen()
    screen.title(TITLE)
    screen.setup(WIDTH, HEIGHT)
    canvas = screen.getcanvas()
    return screen, canvas


def create_board():
    board = []
    for i in range(NUM_ROWS):
        for j in range(NUM_COLS):
            tur = turtle.Turtle(shape="square")
            tur.setheading(90)
            board.append(tur)
            tur.penup()
            tur.shapesize(SQUARE_SIZE / CURSOR_SIZE)
            tur.color(COLORS[0] if i % 2 == j % 2 else COLORS[1])
            tur.onclick(lambda x, y, tur=tur: click(tur))
            x = -NUM_COLS / 2 * SQUARE_SIZE + j * SQUARE_SIZE + SQUARE_SIZE / 2
            y = NUM_ROWS / 2 * SQUARE_SIZE - i * SQUARE_SIZE - SQUARE_SIZE / 2
            tur.goto(x, y)
    return board


def click(tur):
    global score, high_score  # These values are modified within this function.
    if board.index(tur) == spider_pos:
        if SOUND:
            playsound.playsound("ouch2.mp3", False)
        score += 1
        if score > high_score:
            high_score = score
        update_score()


def toggle_turtle(tur):
    if tur.shape() == "square":
        tur.shape("spider.gif")
    else:
        tur.shape("square")
    # Turtles lose their onclick binding when image is used, so we have to rebind.
    tur.onclick(lambda x, y, tur=tur: click(tur))
    screen.update()


def update_score():
    pen.clear()
    pen.write(f"Score: {score}   High Score: {high_score}", font=("Arial", 16, "bold"))


def reset():
    global spider_pos, pen, score, high_score, board, counter
    # Reset screen
    screen.clear()
    screen.bgcolor(BG_COLOR)
    screen.register_shape("spider.gif")
    screen.tracer(0)  # Disable animation

    # Initialise board
    board = create_board()
    spider_pos = 0
    toggle_turtle(board[spider_pos])

    # Score
    score = 0
    pen = turtle.Turtle()
    pen.hideturtle()
    pen.penup()
    pen.goto(-119, -160)
    update_score()

    # Let's go
    counter = 0
    screen.update()
    game_loop()


def game_over():
    pen.goto(-80, -20)
    pen.color("white")
    pen.write("Game Over", font=("Arial", 24, "bold"))


def game_loop():
    global spider_pos, counter  # These values are modified within this function.
    toggle_turtle(board[spider_pos])
    spider_pos = random.randrange(NUM_ROWS * NUM_COLS)
    toggle_turtle(board[spider_pos])
    counter += 1
    if counter > NUM_TRIES:
        spider_pos = -999  # Avoid clicking in between rounds
        game_over()
        canvas.after(2000, reset)
        return  # Very important to ensure loop is not called again.
    screen.ontimer(game_loop, SPEED)


if __name__ == "__main__":
    screen, canvas = init_screen()
    high_score = 0
    reset()
    turtle.done()
Some observations about the above code:
- Constants are used to avoid “magic numbers” scattered throughout the program
- The board is based on the concept of a 2d grid of individual turtle objects.
- The turtle objects have a click-handler attached, but all click events are handled by one function due to the use of the
lambda expression (this is a fairly advanced technique).
- The
board is created using a nested FOR loop. See link for more info.
- If you are concerned about the use of global variables, please read this article
- It is a good idea to clear the screen on reset, otherwise there can be an unseen accumulation of stray turtle objects existing in memory which can cause the program to slow down as you play multiple rounds.
That’s it for now. As I said above, writing this kind of game is non-trivial, especially if you want to be able to do it entirely independently eventually. If you have any questions, go ahead and ask either in a comment or by email and I’ll get back to you with an answer.
Happy computing! | https://compucademy.net/python-turtle-graphics-game-arachnophobia/ | CC-MAIN-2022-27 | refinedweb | 918 | 63.09 |
I just made a program to calculate the distance to a rainbow...it seemed to work...but that version is what I think is called "Interactive", and the Prof wants a "Noninteractive" version. That would be with InData.txt files. Well I did that, and when I went to test the program it just said "Press any key to continue", that's it. I checked the files and there was an Output.txt file with the answers...is this what this should do? Only show the output.txt file vs. showing on the computer screen? I hope this is right, please let me know. thanks
Bryan
Here is my Noninteractive version...Please let me know if this is built correctly.
Code:
//*****************************************
//How tall is a rainbow
//This program finds out how tall a
//rainbow is with the distance of the
//rainbow already defined by the programmer
//*****************************************
#include <iostream>
#include <iomanip>
#include <fstream>
#include <cmath>

using namespace std;

float PI;
float MagicAngle;
float RadiansMa;
float TangentMa;
float RainbowDistance;
float RainbowHeight;

ifstream InData;
ofstream OutData;

int main()
{
    OutData << fixed << showpoint;
    InData.open("InData.txt");
    OutData.open("OutData.txt");
    const double PI = 3.14159265;
    const double MagicAngle = 42.3333333;
    InData >> RainbowDistance;
    RadiansMa = MagicAngle * PI/180;
    TangentMa = tan(RadiansMa);
    RainbowHeight = TangentMa * RainbowDistance;
    OutData << setprecision(4);
    OutData << "Distance is " << RainbowDistance << endl
            << "Height of rainbow is " << RainbowHeight << endl;
    return 0;
}
goto is a jump statement used to transfer program control unconditionally from one part of a function to another. I have used the word unconditionally because there is no restriction on control transfer. You can transfer program control from one position to any other position within a function. Many programmers use
goto to gain full control over their program.
Syntax of
goto statement
label: goto label;
Parts of
goto statement
goto is divided into two parts: the label definition and the
goto keyword.
- Label is a valid C identifier followed by the colon symbol :. A label specifies a control transfer location and works as a bookmark to a specific location. You are free to define any number of labels anywhere inside a function.
- goto is a keyword used along with a label name to transfer program control to the mentioned label. You can only transfer program control to a label within the same function. Cross-function control transfer using
goto is not possible and results in a compilation error.
There is no restriction on the order of
goto and label. You are free to define a label anywhere inside a function. For example, both of the examples below are valid.
Label definition above
goto statement
label1: goto label1;
Label definition below
goto statement
goto label2; label2:
Flowchart of
goto statement
Use of
goto in programming
goto makes programs less readable and error-prone, and it makes bugs hard to find, because of the unconditional control transfer from one part of a function to another. At any given point, you as a programmer cannot easily tell how program control came to a certain place in your code. Therefore, most programmers avoid the use of
goto.
Important note: You should avoid usage of
goto as much as possible. Try to replace
goto statements with a combination of if...else, switch...case and looping statements.
Consider the below example of
goto to see when and why you should avoid using
goto.
#include <stdio.h>

int main()
{
    int i = 1;

start:
    goto print;

print:
    printf("%d ", i);
    goto next;

increment:
    i++;
    goto print;

next:
    if(i < 10)
        goto increment;
    else
        goto exit;

    printf("I cannot execute.");

exit:
    return 0;
}
The above program is hard to read, and any bugs in it are hard to find. Hence, you must avoid using
goto in such situations.
Okay, a lot of programmers, including me, talk about the unreliability of
goto. However,
goto can be useful in cases such as:
- Moving out of a deeply nested loop.
- Transferring control from one nested loop level to another.
Note: You can construct any loop using a combination of
goto and
if...else, as the sketch below shows.
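As a quick illustration, here is a simple counting loop written once with while and once built only from goto and if (both print the same output):

#include <stdio.h>

int main()
{
    int i = 1;
    int j = 1;

    /* while version */
    while (i <= 5)
    {
        printf("%d ", i);
        i++;
    }

    /* the same loop built from goto and if */
loop:
    if (j <= 5)
    {
        printf("%d ", j);
        j++;
        goto loop;
    }

    return 0;
}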
Example program to demonstrate
goto statement
Many texts and other materials on the internet make use of
goto as a loop. However, I believe
goto should not be used as a replacement for loops. In the below program I will show one real use of the
goto statement.
As I said earlier, you can make use of
goto to exit from a deeply nested loop. The below program shows how to get out of a deeply nested loop using the
goto statement.
/**
 * C program to demonstrate usage of goto
 */
#include <stdio.h>

int main()
{
    /* Variable declaration */
    int i, j, k;

    /* Some sample loop */
    for(i=1; i<=10; i++)
    {
        for(j=1; j<=10; j++)
        {
            k = 1;
            while(k<=10)
            {
                /* Some condition */
                if(j==5 && k==5)
                {
                    /* Move the program control outside the loop */
                    goto out_of_loop;
                }
                printf("%d ", k);
                k++;
            }
        }
    }

    /* goto label */
out_of_loop:
    return 0;
}
What about the backend?
Remember when we discussed how React.js is a UI library?
The requirement
So we need to return a list of tweets when the UI asks for it.
The UI will make a request (“please give me a list of tweets”) and our application will respond accordingly.
This holds true whether we use React.js, Blazor, Angular or anything else for our UI.
Retrieve a list of tweets using MediatR
There are several ways we can go about implementing this application logic, but my preferred one is to use a small library by Jimmy Bogard called MediatR.
This is where we’re going to represent our Request (query) and Response (model). We’ll start by replacing the contents of List.cs with the following:
using MediatR;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

namespace MyTweets.Features.Tweets
{
    public class List
    {
        public class Query : IRequest<Model> { }

        public class Model
        {
            public List<string> Tweets { get; set; }
        }

        public class Handler : IRequestHandler<Query, Model>
        {
            public Task<Model> Handle(Query request, CancellationToken cancellationToken)
            {
                var model = new Model
                {
                    Tweets = new List<string>
                    {
                        "One tweet",
                        "Two tweets",
                        "Three tweets",
                        "Four"
                    }
                };
                return Task.FromResult(model);
            }
        }
    }
}

Note that the query itself takes no parameters (there is nothing to filter on), rather we're just saying we want all tweets.
Make sure you’ve added these using statements and you’re good to go.
using MediatR;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
That’s all we need (for now). A simple hardcoded list of tweets.
Hooking it all up
There’s just a couple more things to do if we’re going to invoke this request from our React frontend.
First we need to tell our application we’re using MediatR (so it can find and wire up all our handlers, like the one we just created above).
Head over to Startup.cs and add this anywhere in
ConfigureServices:
services.AddMediatR(Assembly.GetExecutingAssembly());
This instructs MediatR to look for any handlers in the currently executing assembly (or .NET Core API/React project).
You’ll also need to add two using statements to the top of startup.cs:
using MediatR;
using System.Reflection;
Now we have a handy query to run, which will return a list of tweets, but we have no way of running it yet.
We need a way to invoke this query from our React application, take the results and put them in our React component’s
state.
For this we can employ ASP.NET Core Web API.
Add a new controller to the Features/Tweets folder and call it anything you like (I called mine TweetController.cs).
Replace the contents with the following:
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;
using MyTweets.Features.Tweets;

[Route("[controller]")]
public class TweetController : Controller
{
    private readonly IMediator _mediator;

    public TweetController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpGet]
    public async Task<List.Model> Get()
    {
        return await _mediator.Send(new List.Query());
    }
}
This is how you can invoke any MediatR request, using the
Send method.
In this case we instantiate a new instance of our
List.Query class.
When we “send” this via MediatR, it will locate and execute our handler and return the resulting data (our list of “tweets”).
Call your API from React
Now that we can return tweets from our API, you can actually test this in the browser.
Head over to
https://<your-app-here>/tweet and you should see the raw JSON tweets data displayed in your browser.
But remember, we have a shiny new React.js component that we want to use to show these tweets, let’s put that to work.
Head over to the ClientApp/src/components/Tweets/List.js component.
Remember this code?
state = {
    tweets: [
        "One tweet",
        "Two tweets",
        "Three tweets",
        "Four"
    ]
}
We need to replace this hardcoded state with the data from a call to our API.
For this we can use the native HTTP Fetch API built into javascript (and supported by most modern browsers).
Add a
componentDidMount method below the state declaration in List.js (and above the ‘Render’ method) as follows:
async componentDidMount() {
    const response = await fetch('tweet');
    const data = await response.json();
    this.setState({tweets: data.tweets});
}
But, run this now and you may well get a nasty looking error!
So what gives?
Handling nulls
Our code in the render method is attempting to loop over the tweets array…
{this.state.tweets.map(tweet => <Tweet text={tweet}/>) }
The error occurs because, until the network fetch returns data,
this.state.tweets is not set to anything.
In this case
state.tweets is literally null.
So when we try to call
map on it, javascript complains that it can’t call
map on something which is null.
We know that
state.tweets will be set to an array of tweets once the network call finishes, but by that time React has given up waiting and already shouted its loud, angry looking error at us!
We can easily avoid this however if we simply provide some default state for the list of tweets…
export default class List extends React.Component {

    state = {
        tweets: []
    }

    async componentDidMount() {
        const response = await fetch('tweet');
        const data = await response.json();
        this.setState({tweets: data.tweets});
    }

    render() {
        return (
            <>
                <h3>Tweets</h3>
                {this.state.tweets.map(tweet =>
                    <Tweet text={tweet}/>)
                }
            </>);
    }
}
Now React will happily render our component (initially with an empty list of tweets), then update the UI once the network call completes.
Give it a spin
Run your application now; your React component will make a network call to the API, retrieve a list of tweets and render them in the browser!
Nice!
Making progress
It’s worth taking a moment to reflect on what we’ve achieved up to this point.
We now have a clean architecture which enables us to build our application’s logic (queries and commands) in a way which doesn’t box us in to any specific UI framework.
Let’s say we suddenly needed to rewrite our UI (or a part of it) using Blazor Server.
No problem; our MediatR request, response and handlers all stay the same, we would just invoke them from our Blazor Server components directly (instead of going via an API controller).
Pretty neat huh?
Next we’ll look at introducing a lightweight data store for testing so we can make our application handle new tweets. | https://jonhilton.net/react/backend/ | CC-MAIN-2022-05 | refinedweb | 952 | 66.74 |
Details
Description
This item introduces a discussion of how to reduce the time necessary to start a large cluster from tens of minutes to a handful of minutes.
Issue Links
- depends upon
HADOOP-3364 Faster image and log edits loading.
- Closed
HADOOP-3369 Fast block processing during name-node startup.
- Closed
- incorporates
HADOOP-3248 Improve Namenode startup performance
- Closed
Activity
What is the replica count in this benchmark?
I am guessing it is 3 (6 million objects / 500 nodes = 12,000 objects per node; 36,000 / 12,000 = 3 replicas).
Could you clarify?
What does "process block reports" include? Does it include the time for generation of block reports in datanode and the time for namenode to receive the block reports? Or is it only the time to process all block reports not including receiving time?
I was wondering how the numbers would be affected if you had the same number of objects but 1000 datanodes instead of 500 datanodes and 250 datanodes instead of 500 datanodes.
Do you have any guess?
Yes, replication = 3.
Block report processing in my test includes only name-node processing. No preparation time, no RPC overhead.
The processing time is proportional to the number of data-nodes (confirmed by the tests).
This is because the name-node locks namespace for each block report and processes them sequentially one after another.
How do you measure only processing time? Did you have to change the source code to add logging lines for these measurements or did you use the code as it is? The datanode records the time in log after it prepares and finishes sending the block report and the namenode reports after it leaves the safemode. I am trying to get similar benchmarks. Could you please clarify how you were able to only measure the processing time?
Note. I updated the numbers in the tables above. Due to a calculation error the number for load edits was too high.
> How do you measure only processing time?
Yes I modified code and add logging lines printing processing times. It is in the patches linked to this issue.
For measuring block report processing I use modified NNThroughputBenchmark.
Therefore, it is an estimate not the real cluster processing time, but it is sort of an upper bound for how fast the name-node can process those reports.
After the two optimizations
HADOOP-3364 and HADOOP-3369 the load time is improved by a factor of 2.
The biggest progress is achieved in saving image and block processing, each of which is almost 4 times faster.
- image saving is 4 times faster
- block processing is 4 times faster
The table below summarizes sizes and compares new and old time measurements.
This leads to an optimized startup time of 5 minutes.
I think more improvements can be made here especially in the loading part.
For edits log we should optimize ADD and CLOSE transactions as noted in
HADOOP-3364.
For image loading it is probably block processing, but that needs to be evaluated.
Leaving this issue open for now.
Estimates.
The name-node startup consists of 4 major steps: loading the image, loading the edits log, saving the new image, and processing data-node block reports.
I am estimating the name-node startup time based on a 10 million objects image.
The estimates are based on my experiments with a real cluster.
The numbers are scaled proportionally (4.3) to 10 mln objects.
Under the assumption that each file on average has 1.5 blocks, 10 mln objects
translates into 4 mln files and directories, and 6 mln blocks.
Other cluster parameters are summarized in the following table.
This leads to a total startup time of 10 minutes.
Optimization.
HADOOP-3248.
It is highly recommended to set the initial heap size -Xms close to the maximum heap size because
all that memory will be used by the name-node anyway.
In case of loading, object allocations are unavoidable, so we will need to make sure
there are no intermediate allocations of temporary objects.
In case of saving, all object allocations should be eliminated, which is done in
HADOOP-3248.
Image loading currently traverses the namespace tree starting from the root in order to find each object's parent directory and then to insert the object into it.
We can take advantage here of the fact that directory children are not interleaved with children of other directories in the image.
So we can find a parent once and then include all its children without repeating the search.
According to my experiments most of the processing time here goes into first adding all blocks into needed replication queue,
and then removing them from the queue. During startup all blocks are guaranteed to be under-replicated in the beginning.
But most of them, and in the regular case all of them, will be removed from that list.
So in a sense we create dynamically a huge temporary structure just to make sure that all blocks have enough replicas.
In addition to that, in pre-HADOOP-2606 versions (before release 0.17) the replication monitor would start processing
those under-replicated blocks, and try to assign nodes for copying blocks.
The structure works fine during regular operation because it contains only those blocks that are in fact under-replicated.
The processing of block reports goes 5 times faster if the addition to the needed replications queue is removed.
Only one lookup is necessary because the name-space is locked and there is only one thread that modifies it.
Microcontroller Monday - Adafruit FT232H - CircuitPython
This isn't the first time that this board has crossed my path, in fact it first landed on my desk in 2015!
So what is it?
A USB to serial breakout board that enables any PC to have a series of GPIO pins broken out and ready for use on a breadboard. In the past it used a Python 2 library to control the state of the pins, but now there is a CircuitPython library.
CircuitPython?
CircuitPython is a fork of MicroPython, a version of Python 3 for microcontrollers. It was created by the Adafruit team and in a short amount of time it has become quite a popular language.
But why is Python popular?
Python has seen a great surge in popularity, and right now we can use it on devices as small as a microcontroller, or as large as a data centre. We can create physical computing projects using the same language which is used to manage data centres for NASA. Python is becoming the best cross platform / project language to get the job done.
Yeah but language x is better because it is quicker / compiles / produces Unicorns. This is probably true, and these platforms offer great benefits over Python. But if I can use Python on many different devices, I reduce the amount of work required for me to create a project.
So what can FT232H with CircuitPython offer?
It is still very early days for this project, but at the time of writing we have GPIO pin access, SPI and I2C. We cannot use Neopixels :( at this time, but I am sure that this will change in future updates.
Update 15/10/2019 Neopixels via SPI!
"Nice timing @adafruit. Working well with FT232H here in the UK." pic.twitter.com/zt6BWGiE2q (biglesp, @biglesp, October 15, 2019)
Adafruit have released a Neopixel library for SPI, which means the FT232H can now use Neopixels via the SPI interface :)
The FT232H is merely an interface board between our big PC and the electronic components that we wish to control. We install CircuitPython on our PC and this in turn talks to the FT232H.
So is it easy to install?
Pretty much yeah. Follow Adafruit's instructions and you are ready to go.
So why should I use this over a typical CircuitPython device?
Now that is a good point. The FT232H is a remarkable piece of kit but it is $15. So why do we need it? Well the thing is we don't. This is a special piece of kit and those using it with CircuitPython will be creating projects that rely on buttons and LEDs and a powerful computer to back it up. Think of it as a device that enables your PC to have a small number of GPIO pins.
If we do not need the power of a full PC, and our project can be run directly for the device, then perhaps a Trinket M0 or Gemma M0 is more for you.
So what did you build Les?
First of all I made a quick flashing LED project, you press the button and the LED flashes ten times.
Here is the code!
import board
import digitalio
import time

led = digitalio.DigitalInOut(board.C0)
led.direction = digitalio.Direction.OUTPUT

button = digitalio.DigitalInOut(board.C1)
button.direction = digitalio.Direction.INPUT

while True:
    if button.value == True:
        for i in range(10):
            led.value = True
            time.sleep(0.2)
            led.value = False
            time.sleep(0.2)
Is that all you made?
Well no, I also made a quick demo of a button controlled web browser.
Press the button and a web page opens in your default browser.
import board
import digitalio
import webbrowser
import time

button = digitalio.DigitalInOut(board.C1)
button.direction = digitalio.Direction.INPUT

while True:
    if button.value == True:
        time.sleep(0.2)
        webbrowser.open_new_tab("")
So where can I buy an FT232H?
In the USA you can go direct to Adafruit and for the UK you can pick one up from Pimoroni. | https://bigl.es/microcontroller-monday-adafruit-ft232h-circuitpython/ | CC-MAIN-2020-16 | refinedweb | 672 | 75.1 |
psql command line tutorial and cheat sheet
One of my favorite moments from our recent Postgres episode of The Changelog was when Craig taught me a few
psql tricks. This tutorial is a bit like that, only way more dense and easily referenced. 👌
There was a discussion in Slack today about the recent Postgres episode on The Changelog and a mention of considering CockroachDB in order to be distributed-by-default and Postgres compatible. But why is CockroachDB Postgres compatible? Here’s a breakdown from Ben Darnell, CTO and Co-Founder of Cockroach Labs…
CockroachDB is built to be largely compatible with PostgreSQL, meaning that software written to use PostgreSQL can sometimes (often!) be used with CockroachDB without changes.

The Changelog #417
PostgreSQL aficionado Craig Kerstiens joins Jerod to talk about his (and our) favorite relational database. Craig details why Postgres is unique in the world of open source databases, which features are most exciting, the many things you can make Postgres do, and what the future might hold. Oh, and some awesome
psql tips & tricks!
I don’t know what would possess someone to build Battleship in Postgres, but here we have it.
pgx is a framework for developing PostgreSQL extensions in Rust and wants to make that process as idiomatic and safe as possible. Currently,
pgx supports Postgres v10, v11, and v12.
If this interests you, check out the examples directory that shows you how to work with arrays, errors, strings, and more.
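As a taste of the framework, a complete pgx extension function can be tiny. This sketch follows the shape of the project's examples; the function name is made up:

use pgx::*;

pg_module_magic!();

// Callable from SQL as: SELECT add_numbers(1, 2);
#[pg_extern]
fn add_numbers(a: i32, b: i32) -> i32 {
    a + b
}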
Go Time #137

I use
EXPLAIN ANALYZE infrequently enough that the time I spend squinting at the results and then searching for help interpreting them often outweighs the benefit of the query altogether. Not anymore, baby! This nifty tool increases the readability of
EXPLAIN ANALYZE by 1000%. And that’s science!
This comes from the just-announced Supabase, which is “an open source alternative to Firebase”. Oh, and if you’re thinking
NOTIFY is good enough… maybe it is, maybe it isn’t.
A general rule, but as with most things in software, there are exceptions:
Is Manual Modification of the Data Directory Ever Justified?
Sadly, I can’t answer “no” to this question. There are circumstances under which there is no reasonable alternative.
I’m definitely guilty of this and have mucked things up in the past (in dev, not prod!).
I still manually delete
postmaster.pid a few times a month as Postgres doesn’t shut down completely sometimes when I reboot my computer. That file doesn’t get cleaned up, which results in Postgres not launching after the reboot.
The title is not clickbait or hyperbole. I intend to prove that by virtue of both design and implementation that PostgreSQL is objectively and measurably a better database than anything currently available, with or without money considerations.
He goes on to detail 15(ish) reasons why Postgres stands out from the crowd. A compelling argument. I’d love to see similar write-ups by people who disagree.
Joe detects performance bottlenecks and recommends optimizations from the comfort of your Slack team.
This is a nice lessons-learned post from one engineering team making a database switch…
I like the update at the end, which emphasizes the importance of tests for making a switch of this magnitude.
This new job scheduler for Postgres is written in Go and has some seriously advanced features such as chaining tasks, mix-and-match SQL with executables, configurable repetitions, cron-style scheduling, and much more.
When your database is the source of truth, it’s often useful to inspect that truth and reuse it elsewhere in your application.
import pgStructure from "pg-structure";

async function demo() {
    const db = await pgStructure(
        { database: "db", user: "u", password: "pass" },
        { includeSchemas: ["public"] }
    );

    const table = db.get("contact");
    const columnNames = table.columns.map(c => c.name);
    const columnTypeName = table.columns.get("options").type.name;
    const indexColumnNames = table.indexes.get("ix_mail").columns;
    const relatedTables = table.hasManyTables;
}
We all know Postgres is a great relational database (you do know that, don’t you?). When it comes time for a pub/sub solution, however, we often reach for Kafka, Redis, or RabbitMQ. But did you know that Postgres is pretty well suited as a persistent pub/sub server as well?
There are very few use cases where you’d need a dedicated pub/sub server like Kafka. Postgres can easily handle 10,000 insertions per second, and it can be tuned to even higher numbers. It’s rarely a mistake to start with Postgres and then switch out the most performance critical parts of your system when the time comes.
Check the linked article for how they use Postgres in this fashion and a nice list of other benefits. For my money, the fact that I’m not adding another moving part to my infrastructure is reason enough to start with Postgres and go from there.
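For a taste of the built-in primitives you would lean on here, Postgres ships LISTEN/NOTIFY for basic publish/subscribe signaling, with ordinary tables supplying the persistence. The channel and payload names in this sketch are made up:

-- in the subscriber's session
LISTEN new_orders;

-- in the publisher's session
NOTIFY new_orders, '{"order_id": 42}';

-- or, from inside a function or trigger
SELECT pg_notify('new_orders', '{"order_id": 42}');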
Based on Derek’s now page he has ended his 7 year sabbatical and he’s taking Seth Godin’s advice to publish something every day. What Derek shared here is part of that commitment…
This week, I wrote a shopping cart to sell my books directly from my own site. So I took a couple extra hours today to put my code into public view, so anyone can play around with it.
It’s a working self-contained shopping cart / store. It’s a very concrete example of using stored procedures to keep all the data logic together in one place. You can use it from JavaScript, Python, Ruby, or any language you want, since all the functionality is in the database itself. It works.
Aquameta is a web-based IDE for full-stack web development. Developers can manage HTML, CSS, Javascript, database schema, views, templates, routes, tests and documentation, and do version control, branching, pushing, pulling, user management and permissions, all from a single web-based IDE. In theory. And mostly in practice.
Under the hood, Aquameta is a "datafied" web stack, built entirely in PostgreSQL. The structure of a typical web framework is represented in Aquameta as a big database schema with 6 PostgreSQL schemas containing ~60 tables, ~50 views and ~90 stored procedures. Apps developed in Aquameta are represented entirely as relational data, and all development, at an atomic level, is some form of data manipulation. Also in theory. And mostly in practice.
This is super experimental, but what a cool idea. Eric Hanson's been at it off-and-on for 20 years now…
From generating fake data to improving
psql‘s output, there’s a tip here for everyone. I especially appreciated the first one. I didn’t know you could search previous queries in
psql and the idea of tagging your queries via trailing comments for easier retrieval is a good one! | https://changelog.com/topic/postgresql | CC-MAIN-2020-50 | refinedweb | 1,164 | 64.2 |
Hi,
In my org we have been using Maven and have a handful of custom plugins and have decided to employ Gradle 6.4.1.
But we need to continue using the custom maven plugins.
This old, retired post is very close to my situation - but its approach does not work in 6.4.1:
( How to download and evoke a maven plugin? (reply posted by [Hans Dockter]))
I get unable to resolve class
com.mycompany.maven.plugins.Alpha
when used like
def mojo = new com.mycompany.maven.plugins.Alpha()
What could I be doing wrong?
Could you point me to related documentation (gradle ver 6.4.1)
thank you. | https://discuss.gradle.org/t/using-a-custom-maven-plugin-in-gradle-6-4-1/38306 | CC-MAIN-2020-50 | refinedweb | 108 | 69.38 |
Customers wish to look at such options to mitigate the impact from:
In a future post I'll circle back on the underlying account lockout policy discussion, so let's park that one for right now. What I do want to cover in this post is ADFS and how it can impact account lockouts should you have an aggressive lockout policy enabled.
Update 3-9-2014: Please also review this post for an issue requiring a hotfix to resolve with Extranet Account Lockout Protection
In the previous versions of ADFS there was no native mechanism within ADFS itself to prevent brute force attacks upon ADFS. If AD has a password lockout policy set, then an external entity hammering the ADFS logon page could then lockout an AD account. If an entity knew the user account name, they could access the ADFS proxy page and enter a bad password for the user account. The below is an example for ADFS 2.0 running on Windows 2008 R2.
In order to mitigate this, the external firewall in front of the ADFS server could be set to only allow HTTPS traffic to the ADFS proxy server if it is initiated from Office 365. As discussed at MEC, this will have to be a planning point for the upcoming OAuth changes in Q2 this CY. As part of the authentication changes, by default clients will connect directly to ADFS!
Apart from locking down the firewall, Windows Server 2012 R2 ADFS now adds a feature to natively address this: Extranet Lockout. Once the observation window has passed, the ADFS server automatically allows the account to retry the authentication.
Only Windows Server 2012 R2 has the Extranet Lockout feature. For this and other reasons you want to look at deploying Server 2012 for your ADFS infrastructure. Some reasons include:
As mentioned above, only ADFS 2012 R2 has the Extranet Lockout feature. Thus the ADFS infrastructure must be upgraded or installed as this version. For upgrade steps, please check out the excellent ASKPFE PLAT blog!
While the Extranet Lockout feature is enabled on the ADFS server, you must also deploy an ADFS proxy.
Traffic must hit the ADFS proxy. If you publish the ADFS server instead or your network misroutes the traffic and bypasses the proxy, the Extranet Lockout feature will not work as expected. Trust me, I’ve been there – but more on that later in a separate blog post!!
The other base ADFS requirements and prerequisites are also documented on TechNet.
As with the other articles in the recent ADFS posts, this is again in the Tailspintoys.ca lab. The ADFS namespace is adfs.tailspintoys.ca. The environment looks like the diagram below. The ADFS server is deployed on the internal corporate network and is joined to AD. The ADFS proxy is deployed in the DMZ, and is in a workgroup. Since we are using ADFS 2012 R2, the ADFS proxy uses Web Application Proxy (WAP) rather than a dedicated ADFS proxy. So what happens when bad passwords are thrown at the sign in page?
Browse to the ADFS sign in page in IE11.
And we enter a bad password 11 times…
Staying with the LAN Manager freak show, look what happened to that poor user, their account is now locked out.
In case of an attack in the form of authentication requests with invalid (bad) passwords that come through the Web Application Proxy, AD FS extranet lockout enables you to protect your users from an AD FS account lockout. In addition to protecting your users from an AD FS account lockout, AD FS extranet lockout also protects against brute force password guessing attacks.
There are three ADFS settings that we need to look at with respect to the Extranet Lockout feature.
The intent is that the ADFS administrator will define a maximum number of failed authentication requests that the ADFS proxy will allow in a certain time period. Once these authentication attempts have been used up for that specific user, then the ADFS server will go into <Seinfeld> soup Nazi -- no auth for you!!! </Seinfeld> mode. The ADFS lockout threshold must be set to a lower value than the AD DS account lockout threshold, else the AD DS account will lock out before the ADFS proxy ceases to attempt authentication, and enabling this on ADFS is pretty pointless!!
This is a global setting on the ADFS server, and the settings apply to all domains that the ADFS server can authenticate users for.
Opening up PowerShell on the ADFS server, and querying for the *Extranet* values we can see the default Extranet Lockout settings. Extranet Lockout is disabled by default.
Where is the default value for the lockout threshold coming from? Since it is disabled, 2147483647 is the maximum value in an Int32 data type. Run [int32]::maxValue in PowerShell to see.
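You can confirm that value for yourself:

PS C:\> [int32]::MaxValue
2147483647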
Let’s now configure the ADFS server so that the ADFS | Fl *extranet*
$Timespan = New-TimeSpan -Minutes 60
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 4 -ExtranetObservationWindow $Timespan
Get-AdfsProperties | Fl *extranet*
(Each command is one line, please ensure that it does not word wrap)
When I first tried to configure this feature, I ran into this wonderful error:
Huh???
As we saw above, there is definitely a property on the ADFS:
It’s always the little things that get me……
After waiting a minute for the ADFS proxy to pick up on the change, we can test to make sure this is working!
Remember that AD DS is set to lock out after 10 invalid logons, and AD FS will cease after 4 failed authentication attempts.
Again we browse to the ADFS proxy, and the account is still active. Boyashaka!
Just to prove what is in the security event log of the ADFS 'attack'.
In addition to the content and links in the previously published ADFS blog posts there is also the following:
Troubleshooting AD FS

Users attempting to federate an authentication request will not be able to do so whilst the account is in a state of Extranet Lockout. Because of this some organisations may still choose to restrict access to their ADFS proxy via firewall rules and to set "reasonable" AD account lockout policies. We can talk more next time about why locking an AD account out after 3 bad attempts is not so good…
Awesome stuff (as always), thanks a lot for sharing!

When the badPwdCount attribute is 'unset' for a particular user, they're unable to authenticate. ADFS auditing logs the following exception to the security log:
Exception details:
Microsoft.IdentityServer.Service.AccountPolicy.ADAccountLookupException: Exception of type 'Microsoft.IdentityServer.Service.AccountPolicy.ADAccountLookupException' was thrown.
at Microsoft.IdentityServer.Service.AccountPolicy.AccountLockoutPolicy.IsAccountThrottled(String userName)
at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateToken(SecurityToken token)
at Microsoft.IdentityModel.Tokens.SecurityTokenHandlerCollection.ValidateToken(SecurityToken token)
at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.GetEffectivePrincipal(SecurityTokenElement securityTokenElement, SecurityTokenHandlerCollection securityTokenHandlerCollection)
at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.Issue(RequestSecurityToken request, IList`1& identityClaimSet)
Is this a known issue? Thanks in advance.
The badPwdCount attribute showing as "unset" is fixed in a hotfix.
Thanks Brian!
That update is interesting as it talks about the GMSA - so I played with it here :
Cheers,
Rhoderick
So, how to recover if an account is 'soft locked-out' by means of ADFS? Is there an 'unlock' checkbox somewhere?
Awesome post, thanks.
Ronny - there is no unlock button. Either:
- wait the prescribed time
- reset the badPW count on the user object
- or change the password
Cheers,
Rhoderick
Thanks | http://blogs.technet.com/b/rmilne/archive/2014/05/05/enabling-adfs-2012-r2-extranet-lockout-protection.aspx | CC-MAIN-2015-18 | refinedweb | 1,207 | 53.51 |
Random list
From HaskellWiki
Revision as of 11:33, 5 November 2006 by DonStewart (Talk | contribs)
The following question was asked on #haskell:
How would I print an infinite stream of random strings, generated from an argument list?
The easy way to do this is to use an infinite list of randoms, and use that to index the argument list. This illustrates the use of lazy, infinite lists quite nicely.
import System.Random import Control.Monad import System.Environment main = do g <- newStdGen -- get a new random generator args <- getArgs -- get the arguments -- do some error checking when (null args) $ print "Usage: ./a.out [arg1 ... argn]" -- generate an infinite list of random numbers -- and now use them to generate an infinite list of strings -- print them out let ns = randomRs (0,length args-1) g strs = map (\n -> args !! n) ns mapM_ putStrLn strs
when run, produces:
$ runhaskell A.hs the quick brown fox the fox fox brown quick brown fox quick brown quick brown quick the brown brown quick brown brown quick brown fox quick quick the fox quick ... | https://wiki.haskell.org/index.php?title=Random_list&oldid=7930 | CC-MAIN-2017-26 | refinedweb | 180 | 69.01 |
Robin FriedJan 24, 2018B (-:Reply0
B (-:
Hi Robin. My guess is that 'name' and 'fieldKey' are needed because people put weird characters in column names. Despite doing a ton of programming and never putting weird characters in a column name (always English, too), I have spent tons of hours on that sort of stuff, and users don't like it. The fieldKey is generated from the name by replacing characters (like spaces) so the code doesn't break.
Sounds like a bad fix to a problem better solved by simply stating in the documentation that special characters aren't kosher in field names.
Actually, on thinking this through a bit more... Since this is still beta, a more elegant solution would be to leave field name and field key there but by default to generate the field key as follows (in pseudo-code):
if fieldName has no spaces or special characters, fieldKey is assigned fieldName exactly as entered by programmer.
else
fieldKey = fieldName minus special characters;
This would mean that in the majority of cases, when a user bases his/her query on fieldName, it would still work, because fieldName and fieldKey would be the same. In those cases where they are not same, then the user would have to refer to the documentation to see that fieldKey should be used for queries (this would be a change either way to the documentation as it is now.) This would reduce the support headache and in addition, will ensure any legacy code that assumes fieldKey and not fieldName is used for queries will not break - if that in fact is the reason for the two names. Otherwise, i would just get rid of the duplication and simply state that spaces and special characters are not supported in field names (many commercial databases have this restriction I believe - certainly Oracle if I remember correctly.)
Robin
So here's another baffling issue: after I deleted the records from the SpecialPromotions table and then synced the database, I noticed that the items I deleted were still showing up... I then noticed in the coding framework that there's an option to view the "Live" database, did that, and saw the records I deleted still there, despite my syncing effort (and sinking understanding of things). From the live view I could then press the Edit live database button. I did this and deleted the errant records. Surely this is all stuff that will be straightened out when wixcode is formally released.
Hi, I hope someone could help me on this.
I want to filter my database by owner ID (_owner).
All items with the same owner ID (_owner) should display on a repeater.
Thank you,
Geo
Hi Geo,
To find the owner id call user.id, then filter a dataset according to the results. The dataset should be connected to a repeater.
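A rough sketch of what that could look like (untested, and the dataset ID is a placeholder):

import wixUsers from 'wix-users';
import wixData from 'wix-data';

$w.onReady(function () {
    // id of the member viewing the page
    let userId = wixUsers.currentUser.id;

    // only show this member's items in the dataset behind the repeater
    $w("#myDataset").onReady(() => {
        $w("#myDataset").setFilter(
            wixData.filter().eq("_owner", userId)
        );
    });
});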
Dear Yisrael, I had tested out the code you proposed and it seemed to work perfectly. Now it seems no longer to work. Very sorry to raise this again. The site is taylorcaldwell.com. The database is synced with 2 records, one which the dates are within range and one where not. Both records appear in the resulting filter. Even if I put a date starting and ending a year earlier, both records will appear. Here is the code:
import wixData from 'wix-data';
export function promoDS_ready() {
//Add your code for this event here:
$w.onReady(function () {
$w("#promoDS").onReady( () => {
console.log("The dataset is ready");
let d = new Date();
d.setHours(0,0,0,0);
// console.log(d);
$w("#promoDS").setFilter( wixData.filter()
.lt("startDate", d)
.or (
wixData.filter().gt("endDate", d)
)
)
.then( () => {
console.log("Dataset is now filtered");
} )
.catch( (err) => {
console.log(err);
} );
});
});
}
Just a quick note: I've "hidden" the "Promotions" page, since both records appear and users should not see that until tomorrow. You can test the hidden page though.
Thanks in advance,
Robin
I realize this is an old post, but I was curious if you found a solution Robin? I am doing something similar, I am trying to get a daily menu to update based on the current date.
Hi, my requirement is the same as yours but I couldn't get it to work.
Please have a look at the code and help me.
I am new to JavaScript and made this website yesterday.
I am using a button click after I select the dates from the date pickers.
I tried to copy-paste some code and run it, but it won't work.
I have no idea where I'm wrong.
import wixData from 'wix-data';
// ...

export function button2_click(event, $w) {
    //between($w('#datePicker1').value, $w('#datePicker2').value)
    var datePicker1 = new Date($w('#datePicker1').value);
    var datePicker2 = new Date($w('#datePicker2').value);

    function between() {
        let newQuery = wixData.between("title", datePicker1, datePicker2);
        $w('#dynamicDataset').setFilter(newQuery);
    }

    wixData.query("NewspaperAnalysis")
        .between("title", datePicker1, datePicker2)
        .find()
        .then( (results) => {
            if(results.items.length > 0) {
                let items = results.items;
                let firstItem = items[0];
                let totalCount = results.totalCount;
                let pageSize = results.pageSize;
                let currentPage = results.currentPage;
                let totalPages = results.totalPages;
                let hasNext = results.hasNext();
                let hasPrev = results.hasPrev();
                let length = results.length;
                let query = results.query;
            } else {
                // handle case where no matching items found
            }
        } )
        .catch( (error) => {
            let errorMsg = error.message;
            let code = error.code;
        } );
}
Hello everyone, I am new on Wix and, to be sincere, I don't know anything about JavaScript. I need your help please.
I would like to set up a filter. Let me explain.
There are 5 or more enterprises that have the same products (let's say 5 products) but different prices.
So I want any user that comes to my website to be able to filter the prices of the products he or she wants.
So if I want to know the prices from enterprise 1 and enterprise 4 for product 1 and product 3, the filter displays only what I want. Please please please help me, guys.
Project : madwifi
Revision : 1529
Author : proski
Date : 2006-04-25 23:25:41 +0200 (Tue, 25 Apr 2006)
Log Message :
Don't include asm/page.h from compat.h
compat.h is included forcedly from all files, so asm/page.h becomes the
first Linux header to be included. There is nothing in compat.h that
needs asm/page.h. Files that need asm/page.h should include it
directly.
This patch also eliminates the dependency of hal.c on the kernel
headers. This avoids rebuilding hal.c if some kernel headers have
timestamps in the future.
Affected Files:
* trunk/include/compat.h updated
Modified: trunk/include/compat.h
===================================================================
--- trunk/include/compat.h 2006-04-25 19:40:32 UTC (rev 1528)
+++ trunk/include/compat.h 2006-04-25 21:25:41 UTC (rev 1529)
@@ -69,8 +69,6 @@
#endif
#ifdef __KERNEL__
-#include <asm/page.h>
-
#define KASSERT(exp, msg) do { \
if (unlikely(!(exp))) { \
printk msg; \
Project : madwifi
Revision : 1528
Author : proski
Date : 2006-04-25 21:40:32 +0200 (Tue, 25 Apr 2006)
Log Message :
Don't use Linux includes just to have a definition for NULL
Using any external includes can cause hal.c to be recompiled at the
second stage, replacing the correct hal.o.
This should close ticket #557.
Affected Files:
* trunk/ath_hal/hal.c updated
Modified: trunk/ath_hal/hal.c
===================================================================
--- trunk/ath_hal/hal.c 2006-04-24 11:49:36 UTC (rev 1527)
+++ trunk/ath_hal/hal.c 2006-04-25 19:40:32 UTC (rev 1528)
@@ -3,20 +3,16 @@
* there is a source for hal.o
*/
-#include <linux/config.h>
-#include <linux/version.h>
-#include <linux/module.h>
-
void ath_hal_getwirelessmodes(void) {}
void ath_hal_init_channels(void) {}
-char *ath_hal_buildopts[] = { "dummy", NULL };
-char *ath_hal_probe(void) { return NULL; }
+char *ath_hal_buildopts[] = { "dummy", (char *) 0 };
+char *ath_hal_probe(void) { return (char *) 0; }
char ath_hal_version[] = "dummy";
void ath_hal_mhz2ieee(void) {}
void ath_hal_computetxtime(void) {}
void *ath_hal_attach(int a, void *b, int c, void *d, int *status)
{
*status = 1;
- return NULL;
+ return (void *) 0;
}
void ath_hal_process_noisefloor(void) {}
After provisioning our first load balancer in the previous post using Octavia, we will now add listeners and a pool of members to our load balancer to capture and route actual traffic through it.
Creating a listener
The first thing which we will add is called a listener in the load balancer terminology. Essentially, a listener is an endpoint on a load balancer reachable from the outside, i.e. a port exposed by the load balancer. To see this in action, I assume that you have followed the steps in my previous post, i.e. you have installed OpenStack and Octavia in our playground and have already created a load balancer. To add a listener to this configuration, let us SSH into the network node again and use the OpenStack CLI to set up our listener.
vagrant ssh network
source demo-openrc
openstack loadbalancer listener create \
  --name demo-listener \
  --protocol HTTP \
  --protocol-port 80 \
  --enable \
  demo-loadbalancer
openstack loadbalancer listener list
openstack loadbalancer listener show demo-listener
These commands will create a listener on port 80 of our load balancer and display the resulting setup. Let us now again log into the amphora to see what has changed.
amphora_ip=$(openstack loadbalancer amphora list \ -c lb_network_ip -f value) ssh -i amphora-key ubuntu@$amphora_ip pids=$(sudo ip netns pids amphora-haproxy) sudo ps wf -p $pids sudo ip netns exec amphora-haproxy netstat -l -p -n -t
We now see a HAProxy inside the amphora-haproxy namespace which is listening on the VIP address on port 80. The HAProxy instance is using a configuration file in /var/lib/octavia/. If we display this configuration file, we see something like this.
# Configuration for loadbalancer bc0156a3-7d6f-4a08-9f01-f5c4a37cb6d2 global daemon user nobody log /dev/log local0 log /dev/log local1 notice stats socket /var/lib/octavia/bc0156a3-7d6f-4a08-9f01-f5c4a37cb6d2.sock mode 0666 level user maxconn 1000000 defaults log global retries 3 option redispatch option splice-request option splice-response option http-keep-alive frontend 34ed8dd1-11db-47ee-a682-24a84d879d58 option httplog maxconn 1000000 bind 172.16.0.82:80 mode http timeout client 50000
So we see that our listener shows up as a HAProxy frontend (identified by the UUID of the listener) which is bound to the load balancer VIP and listening on port 80. No forwarding rule has been created for this port yet, so traffic arriving there does not go anywhere at the moment (which makes sense, as we have not yet added any members). Octavia will, however, add our ports to the security group of the VIP to make sure that our traffic can reach the amphora. So at this point, our configuration looks as follows.
Adding members
Next we will add members, i.e. backends to which our load balancer will distribute traffic. Of course, the tiny CirrOS image that we use does not easily allow us to run a web server. We can, however, use netcat to create a “fake webserver” which will simply answer to each request with the same fixed string (I have first seen this nice little trick somewhere on StackExchange, but unfortunately I have not been able to dig out the post again, so I cannot provide a link and proper credits here). To make this work we first need to log out of the amphora again and, back on the network node, open port 80 on our internal network (to which our test instances are attached) so that traffic from the external network can reach our instances on port 80.
project_id=$(openstack project show \ demo \ -f value -c id) security_group_id=$(openstack security group list \ --project $project_id \ -c ID -f value) openstack security group rule create \ --remote-ip 0.0.0.0/0 \ --dst-port 80 \ --protocol tcp \ $security_group_id
Now let us create our “mini web server” on the first instance. The best approach is to use a terminal multiplexer like tmux to run this, as it will block the terminal we are using.
openstack server ssh \ --identity demo-key \ --login cirros --public web-1 # Inside the instance, enter: while true; do echo -e "HTTP/1.1 200 OK\r\n\r\n$(hostname)" | sudo nc -l -p 80 done
Then, do the same on web-2 in a new session on the network node
source demo-openrc openstack server ssh \ --identity demo-key \ --login cirros --public web-2 while true; do echo -e "HTTP/1.1 200 OK\r\n\r\n$(hostname)" | sudo nc -l -p 80 done
Now we have two “web server” running. Open another session on the network node and verify that you can reach both instances separately.
source demo-openrc # Get floating IP addresses web_1_fip=$(openstack server show \ web-1 \ -c addresses -f value | awk '{ print $2}') web_2_fip=$(openstack server show \ web-2 \ -c addresses -f value | awk '{ print $2}') curl $web_1_fip curl $web_2_fip
So at this point, our web servers can be reached individually via the external network and the router (this is why we had to add the security group rule above, as the source IP of the requests will be an IP on the external network and thus would by default not be able to reach the instance on the internal network). Now let us add a pool, i.e. a set of backends (the members) between which the load balancer will distribute the traffic.
pool_id=$(openstack loadbalancer pool create \ --protocol HTTP \ --lb-algorithm ROUND_ROBIN \ --enable \ --listener demo-listener \ -c id -f value)
When we now log into the loadbalancer again, we see that a backend configuration has been added to the HAProxy configuration, which will look similar to this sample.
backend af116765-3357-451c-8bf8-4aa2d3f77ca9:34ed8dd1-11db-47ee-a682-24a84d879d58 mode http http-reuse safe balance roundrobin fullconn 1000000 option allbackups timeout connect 5000 timeout server 50000
However, there are still no real targets added to the backend, as the load balancer does not yet know about our web servers. As a last step, we now add these servers to the pool. At this point, it is important to understand which IP address we use. One option would be to use the floating IP addresses of the servers. Then, the target IP addresses would be on the same network as the VIP, leading to a setup which is known as “one armed load balancer”. Octavia can of course do this, but we will create a slightly more advanced setup in which the load balancer will also serve as a router, i.e. it will talk to the web servers on the internal network. On the network node, run
pool_id=$(openstack loadbalancer pool list \ --loadbalancer demo-loadbalancer \ -c id -f value) for server in web-1 web-2; do ip=$(openstack server show $server \ -c addresses -f value \ | awk '{print $1'} \ | sed 's/internal-network=//' \ | sed 's/,//') openstack loadbalancer member create \ --address $ip \ --protocol-port 80 \ --subnet-id internal-subnet \ $pool_id done openstack loadbalancer member list $pool_id
Note that to make this setup work, we have to pass the additional parameter –subnet-id to the creation command for the members pointing to the internal network on which the specified IP addresses live, so that Octavia knows that this subnet needs to be attached to the amphora as well. In fact, we can see here that Octavia will add ports for all subnets which are specified via this parameter if the amphora is not yet connected to this subnet. Inside the amphora, the interface connected to this subnet will be added inside the amphora-haproxy namespace, resulting in the following setup.
If we now look at the HAProxy configuration file inside the amphora, we find that Octavia has added two server entries, corresponding to our two web servers. Thus we expect that traffic is load balanced to these two servers. Let us try this out by making requests to the VIP from the network node.
vip=$(openstack loadbalancer show \ -c vip_address -f value \ demo-loadbalancer) for i in {1..10}; do curl $vip; sleep 1 done
We see nicely that every second requests goes to the first server and every other request goes to the second server (we need a short delay between the requests, as the loop in our “fake” web servers needs time to start over).
Health monitors
This is nice, but there is an important ingredient which is still missing in our setup. A load balancer is supposed to monitor the health of the pool members and to remove members from the round-robin procedure if a member seems to be unhealthy. To allow Octavia to do this, we still need to add a health monitor, i.e. a health check rule, to our setup.
openstack loadbalancer healthmonitor create \ --delay 10 \ --timeout 10 \ --max-retries 2 \ --type HTTP \ --http-method GET \ --url-path "/" $pool_id
After running this, it is instructive to take a short look at the terminal in which our fake web servers are running. We will see additional requests, which are the health checks that are executed against our endpoints.
Now go back into the terminal on web-2 and kill the loop. Then let us display the status of the pool members.
openstack loadbalancer status show demo-loadbalancer
After a few seconds, the “operating_status” of the member changes to “ERROR”, and when we repeat the curl, we only get a response from the healthy server.
How does this work? In fact, Octavia uses the health check functionality that HAProxy offers. HAProxy will expose the results of this check via a Unix domain socket. The health daemon built into the amphora agent connects to this socket, collects the status information and adds it to the UDP heartbeat messages that it sends to the Octavia control plane via UDP port 5555. There, it is written into the various Octavia tables and finally collected again from the API server when we make our request via the OpenStack CLI.
This completes our last post on Octavia. Obviously, there is much more that could be said about load balancers, using a HA setup with VRRP for instance or adding L7 policies and rules. The Octavia documentation contains a number of cookbooks (like the layer 7 cookbook or the basic load balancing cookbook) that contain a lot of additional information on how to use these advanced features. | https://leftasexercise.com/2020/05/08/openstack-octavia-creating-listeners-pools-and-monitors/ | CC-MAIN-2021-04 | refinedweb | 1,717 | 55.27 |
IPython has a few magics for working with your engines.
This assumes you have started an IPython cluster, either with the notebook interface,
or the
ipcluster/controller/engine commands.
from IPython import parallel rc = parallel.Client() dv = rc[:] rc.ids
Creating a Client registers the parallel magics
%px,
%%px,
%pxresult,
pxconfig, and
%autopx.
These magics are initially associated with a DirectView always associated with all currently registered engines.
Now we can execute code remotely with
%px:
%px a=5
%px print(a)
%px a
with dv.sync_imports(): import sys
%px from __future__ import print_function %px print("ERROR", file=sys.stderr)
You don't have to wait for results. The
%pxconfig magic lets you change the default blocking/targets for the
%px magics:
%pxconfig --noblock
%px import time %px time.sleep(5) %px time.time()
But you will notice that this didn't output the result of the last command.
For this, we have
%pxresult, which displays the output of the latest request:
%pxresult
Remember, an IPython engine is IPython, so you can do magics remotely as well!
%pxconfig --block %px %matplotlib inline
%%px import numpy as np import matplotlib.pyplot as plt
%%px can also be used as a cell magic, for submitting whole blocks.
This one acceps
--block and
--noblock flags to specify
the blocking behavior, though the default is unchanged.
dv.scatter('id', dv.targets, flatten=True) dv['stride'] = len(dv)
%%px --noblock x = np.linspace(0,np.pi,1000) for n in range(id,12, stride): print(n) plt.plot(x,np.sin(n*x)) plt.title("Plot %i" % id)
%pxresult
It also lets you choose some amount of the grouping of the outputs with
--group-outputs:
The choices are:
engine- all of an engine's output is collected together
type- where stdout of each engine is grouped, etc. (the default)
order- same as
type, but individual displaypub outputs are interleaved. That is, it will output the first plot from each engine, then the second from each, etc.
%%px --group-outputs=engine x = np.linspace(0,np.pi,1000) for n in range(id+1,12, stride): print(n) plt.figure() plt.plot(x,np.sin(n*x)) plt.title("Plot %i" % n)
When you specify 'order', then individual display outputs (e.g. plots) will be interleaved.
%pxresult takes the same output-ordering arguments as
%%px,
so you can view the previous result in a variety of different ways with a few sequential calls to
%pxresult:
%pxresult --group-outputs=order
When a DirectView has a single target, the output is a bit simpler (no prefixes on stdout/err, etc.):
from __future__ import print_function def generate_output(): """function for testing output publishes two outputs of each type, and returns something """ import sys,os from IPython.display import display, HTML, Math print("stdout") print("stderr", file=sys.stderr) display(HTML("<b>HTML</b>")) print("stdout2") print("stderr2", file=sys.stderr) display(Math(r"\alpha=\beta")) return os.getpid() dv['generate_output'] = generate_output
You can also have more than one set of parallel magics registered at a time.
The
View.activate() method takes a suffix argument, which is added to
'px'.
e0 = rc[-1] e0.block = True e0.activate('0')
%px0 generate_output()
%px generate_output()
As mentioned above, we can redisplay those same results with various grouping:
%pxresult --group-outputs order
%pxresult --group-outputs engine
When you raise exceptions with the parallel exception, the CompositeError raised locally will display your remote traceback.
%%px from numpy.random import random A = random((100,100,'invalid shape'))
Remember, Engines are IPython too, so the cell that is run remotely by %%px can in turn use a cell magic.
%%px %%timeit from numpy.random import random from numpy.linalg import norm A = random((100,100)) norm(A, 2)
As of IPython 1.0, you can instruct
%%px to also execute the cell locally.
This is useful for interactive definitions,
or if you want to load a data source everywhere,
not just on the engines.
%%px --local import os thispid = os.getpid() print(thispid) | https://nbviewer.jupyter.org/github/ipython/ipython/blob/2.x/examples/Parallel%20Computing/Parallel%20Magics.ipynb | CC-MAIN-2018-47 | refinedweb | 666 | 58.28 |
Python Program to Find the Size of Primitive Data Types
Grammarly
In this tutorial, you will learn how to find the size of Primitive Data Types using the sys.getsizeof() function and print function of the python programming language.
How to Find the size of Primitive Data Types in Python?
Let’s take a look at the source code , here the values are assigned in the code and the sys.getsizeof() carries out the function.
import sys a=sys.getsizeof(12) print(a) b=sys.getsizeof('bugs') print(b) c=sys.getsizeof(('b','u','g','s')) print(c) d=sys.getsizeof(['b','u','g','s']) print(d) e=sys.getsizeof({1,2,3,4}) print(e) f=sys.getsizeof({1:'a',2:'b',3:'c',4:'d'}) print(f)
OUTPUT:
28 53 72 88 216 232
- At the start, we use the
importfunction with
syswhich allows to access certain system-specific parameters and functions.
- In the next line of code we declare the function
sys.getsizeof()with the variable
a, the integer
12as the input value, followed by a
a.
- Similarly, we declare the function
sys.getsizeof()with the variable
b, the string
('bugs')as the input, followed by the
b.
- We declare the function
sys.getsizeof()with the variable
c, the tuple
(('b','u','g','s'))enclosed in double parenthesis as the input, followed by the
c.
- We declare the function
sys.getsizeof()with the variable
d, the list
(['b','u','g','s'])enclosed in square brackets and a parenthesis as the input, followed by the
d.
- We declare the function
sys.getsizeof()with the variable
e, the set
({1,2,3,4})enclosed in curly brackets and parenthesis as the input, followed by the
e.
- We declare the function
sys.getsizeof()with the variable
f, the dict
({1:'a',2:'b',3:'c',4:'d'})enclosed in curly brackets and parenthesis as the input, followed by the
f.
NOTE:
- A byte, or eight bits, is used as the fundamental unit of measurement for data, computers use binary digits which is 0 and 1 to store data.
- The import function searches for the module initially in the local scope or area and the value returned is seen in the output.
- The sys module in Python provides various functions and variables that are used to manipulate different parts of the Python runtime environment.
- The integers are zero, positive or negative whole numbers without a fractional part.
- A string is a sequence of characters, where a character is simply a symbol. The string in enclosed in ‘ ‘ single quotes.
- Tuples are a data structure that store an ordered sequence of values, in simpler terms tuples are used to store multiple items in a single variable. Each value of the tuple is enclosed in single quotes ‘ ‘ and separated by a comma.
- Lists are used to store multiple items in a single variable, a list is created by placing all the items also called elements inside square brackets [] , separated by commas.
- A Set is an unordered collection data type that is iterable, mutable and has no duplicate elements. In simpler terms, sets are used to store multiple items in a single variable. The elements are enclosed in curly brackets and separated by commas.
- The dict() function is a constructor which creates a dictionary, an unordered collection of items where each item of a dictionary has a key/value pair. The pair of keys/values are separated by a : colon with a comma between each pair and enclosed in single quotes. | https://developerpublish.com/academy/courses/python-examples/lessons/python-program-to-find-the-size-of-primitive-data-types/ | CC-MAIN-2021-49 | refinedweb | 586 | 66.44 |
Are.
It turns out that much of the time spent building large C++ projects is effectively spent parsing the same headers again and again, over, and over, and over, and over, and over….
There are three possible solutions to this problem:
- Shred your CPU and buy a new one that’s twice as fast.
- Use C++ modules: import instead of #include. This will soon become the best solution, but it’s not standardized yet. For WebKit’s purposes, we can’t use it until it works the same in MSVCC, Clang, and three-year-old versions of GCC. So it’ll be quite a while before we’re able to take advantage of modules.
- Use unified builds (sometimes called unity builds).
WebKit has adopted unified builds. This work was done by Keith Miller, from Apple. Thanks, Keith! (If you’ve built WebKit before, you’ll probably want to say that again: thanks, Keith!)
For a release build of WebKitGTK+, on my desktop, our build times used to look like this:
real 62m49.535s
user 407m56.558s
sys 62m17.166s
That was taken using WebKitGTK+ 2.17.90; build times with any 2.18 release would be similar. Now, with trunk (or WebKitGTK+ 2.20, which will be very similar), our build times look like this:
real 33m36.435s
user 214m9.971s
sys 29m55.811s
Twice as fast.
The approach is pretty simple: instead of telling the compiler to build the original C++ source code files that developers see, we instead tell the compiler to build unified source files that look like this:
// UnifiedSource1.cpp
#include "CSSValueKeywords.cpp"
#include "ColorData.cpp"
#include "HTMLElementFactory.cpp"
#include "HTMLEntityTable.cpp"
#include "JSANGLEInstancedArrays.cpp"
#include "JSAbortController.cpp"
#include "JSAbortSignal.cpp"
#include "JSAbstractWorker.cpp"
Since files are included only once per translation unit, we now have to parse the same headers only once for each unified source file, rather than for each individual original source file, and we get a dramatic build speedup. It’s pretty terrible, yet extremely effective.
Now, how many original C++ files should you #include in each unified source file? To get the fastest clean build time, you would want to #include all of your C++ source files in one, that way the compiler sees each header only once. (Meson can do this for you automatically!) But that causes two problems. First, you have to make sure none of the files throughout your entire codebase use conflicting variable names, since the static keyword and anonymous namespaces no longer work to restrict your definitions to a single file. That’s impractical in a large project like WebKit. Second, because there’s now only one file passed to the compiler, incremental builds now take as long as clean builds, which is not fun if you are a WebKit developer and actually need to make changes to it. Unifying more files together will always make incremental builds slower. After some experimentation, Apple determined that, for WebKit, the optimal number of files to include together is roughly eight. At this point, there’s not yet much negative impact on incremental builds, and past here there are diminishing returns in clean build improvement.
In WebKit’s implementation, the files to bundle together are computed automatically at build time using CMake black magic. Adding a new file to the build can change how the files are bundled together, potentially causing build errors in different files if there are symbol clashes. But this is usually easy to fix, because only files from the same directory are bundled together, so random unrelated files will never be built together. The bundles are always the same for everyone building the same version of WebKit, so you won’t see random build failures; only developers who are adding new files will ever have to deal with name conflicts.
To significantly reduce name conflicts, we now limit the scope of using statements. That is, stuff like this:
using namespace JavaScriptCore;
namespace WebCore {
//...
}
Has been changed to this:
namespace WebCore {
using namespace JavaScriptCore;
// ...
}
Some files need to be excluded due to unsolvable name clashes. For example, files that include X11 headers, which contain lots of unnamespaced symbols that conflict with WebCore symbols, don’t really have any chance. But only a few files should need to be excluded, so this does not have much impact on build time. We’ve also opted to not unify most of the GLib API layer, so that we can continue to use conventional GObject names in our implementation, but again, the impact of not unifying a few files is minimal.
We still have some room for further performance improvement, because some significant parts of the build are still not unified, including most of the WebKit layer on top. But I suspect developers who have to regularly build WebKit will already be quite pleased.
8 Replies to “On Compiling WebKit (now twice as fast!)”
Removing unused includes and replacing #includes with forward-declares can also help. You can check with
How does this interact with -j and more importantly (top) memory usage during clean build?
I’m guessing -j is just as usual at make/ninja level, so -j8 will actually build 64 .cpp files in parallel now, due to 8 unified ones?
Is the top memory usage of clean build still about the same for same amount of compilation units (which are now much longer) or if we are memory starved, should we start looking at possibly having to reduce -j to lower than CPU core count (more than before unified builds)?
I wouldn’t change -j at all. It’s still only 8 .cpp files built in parallel, they’re just 8x larger files. That doesn’t mean you should start building only four files in parallel; that’s not going to help you.
I recommend passing -GNinja to CMake to get the Ninja generator, though, which has sane defaults and doesn’t require passing -j.
Actually, someone is currently telling me the opposite of what I just said. Unified builds increase RAM requirements, and everything gets way slower if you run out of RAM. If you’re running out of memory, then of course you should definitely reduce -j.
Yeah, I was asking in terms of Gentoo packaging and the user support afterwards (most users compile on their own system afterall) from potential more OOM’s, etc.
Chromium recently received this for us as well, but it’s an opt-in due to the increased memory requirements.
If now due to RAM starvation people will need to reduce -j to lower than their CPU count, then that sounds a bit counter-productive to the supposed lower build times and maybe in some cases we could end up with longer build times instead?
I guess I can’t really muck with automatically lowering -j for them and just continue business as usual, especially if you don’t provide an opt-out for unified build either. But that remains to be found out also while packaging it in the future.
We already make cmake use ninja in our webkit-gtk packages since webkit-gtk-2.8.3 times (most other cmake using packages default to make).
if somebody is reading this. DO NOT DO THIS.
Unified builds change the semantic of the language: e.g. the static keyword changes its meaning, anonymous namespace start to confict etc.
you really should be using precompiled headers instead. They are the real predecessor of modules.
The “Meson can do this for you automatically!” link is broken.
Thanks, fixed! | https://blogs.gnome.org/mcatanzaro/2018/02/17/on-compiling-webkit-now-twice-as-fast/ | CC-MAIN-2021-17 | refinedweb | 1,256 | 65.52 |
Christophe LEROY <christophe.leroy@xxxxxx> writes:
Le 17/09/2018 Ã 11:03, Aneesh Kumar K.V a ÃcritÂ:
Christophe Leroy <christophe.leroy@xxxxxx> writes:
Hi,
I'm having a hard time figuring out the best way to handle the following
situation:
On the powerpc8xx, handling 16k size pages requires to have page tables
with 4 identical entries.
I assume that hugetlb page size? If so isn't that similar to FSL hugetlb
page table layout?
No, it is not for 16k hugepage size with a standard page size of 4k.
Here I'm trying to handle the case of CONFIG_PPC_16K_PAGES.
As of today, it is implemented by using the standard Linux page layout,
ie one PTE entry for each 16k page. This forbids the use the 8xx HW
assistance.
Initially I was thinking about handling this by simply modifying
pte_index() which changing pte_t type in order to have one entry every
16 bytes, then replicate the PTE value at *ptep, *ptep+1,*ptep+2 and
*ptep+3 both in set_pte_at() and pte_update().
However, this doesn't work because many many places in the mm core part
of the kernel use loops on ptep with single ptep++ increment.
Therefore did it with the following hack:
/* PTE level */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
+typedef struct { pte_basic_t pte, pte1, pte2, pte3; } pte_t;
+#else
typedef struct { pte_basic_t pte; } pte_t;
+#endif
@@ -181,7 +192,13 @@ static inline unsigned long pte_update(pte_t *p,
: "cc" );
#else /* PTE_ATOMIC_UPDATES */
unsigned long old = pte_val(*p);
- *p = __pte((old & ~clr) | set);
+ unsigned long new = (old & ~clr) | set;
+
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
+ p->pte = p->pte1 = p->pte2 = p->pte3 = new;
+#else
+ *p = __pte(new);
+#endif
#endif /* !PTE_ATOMIC_UPDATES */
#ifdef CONFIG_44x
@@ -161,7 +161,11 @@ static inline void __set_pte_at(struct mm_struct
*mm, unsigned long addr,
/* Anything else just stores the PTE normally. That covers all
64-bit
* cases, and 32-bit non-hash with 32-bit PTEs.
*/
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
+ ptep->pte = ptep->pte1 = ptep->pte2 = ptep->pte3 = pte_val(pte);
+#else
*ptep = pte;
+#endif
But I'm not too happy with it as it means pte_t is not a single type
anymore so passing it from one function to the other is quite heavy.
Would someone have an idea of an elegent way to handle that ?
Thanks
Christophe
Why would pte_update bother about updating all the 4 entries?. Can you
help me understand the issue?
Because the 8xx HW assistance expects 4 identical entries for each 16k
page, so everytime a PTE is updated the 4 entries have to be updated.
What you suggested in the original mail is what matches that best isn't it?
That is a linux pte update involves updating 4 slot. Hence a linux pte
consist of 4 unsigned long? | http://lkml.iu.edu/hypermail/linux/kernel/1809.2/02275.html | CC-MAIN-2021-49 | refinedweb | 456 | 70.33 |
As I usually do after wrapping up a long-term consulting project, I recently took a long break from work to level-up my programming skills. On my list of technologies to muck around with were Angular 2, TypeScript, and RxJS.
I like to have a small, fun project to hack on when I’m learning a new framework or programming language. This time I decided to combine my love for music production and the web platform to build a rudimentary sampler in the web browser.
A sampler is a musical instrument that allows a musician to load audio recordings into memory and play them back using a bank of pressure-sensitive pads or a MIDI controller. These recordings can be simple percussive sounds — such as snare drums, kicks, hi-hats, etc. — or sections of existing songs. Once they’ve been loaded into the sampler, these recordings — or samples — can be rearranged to create entirely new songs. For example, here’s a video of a musician creating a hip-hop beat using audio samples from several sources on an Akai MPC2000 XL:
While most genres of music make use of samplers in one way or another, their use is especially prevalent in hip-hop and the myriad sub-genres of EDM.
Before Moore’s Law turned the personal computer into a full-featured music production studio, artists composed their tracks using hardware samplers such as Akai’s legendary MPC or Ensoniq’s SP1200. Modern day artists are more likely to use software samplers running on their laptops or even iPads over bulky pieces of hardware.
In this series of tutorials, we’ll build exactly that: a software sampler that runs inside your web browser. Once we’re done, our sampler will be able to load audio samples, change their playback rates and volumes, apply simple effects to them, and play them back in response to events from a MIDI controller or a computer keyboard.
We’ll use Angular 2 as our web framework, the WebAudio API to process and play back audio, and the WebMIDI API to talk to our MIDI controller.
To begin with, let’s go over a few Angular 2 and WebAudio fundamentals.
Set up an Angular 2 Project
If you have no prior experience with Angular 2, I would recommend that you at least read the official Angular quickstart and setup guide before moving forward with this tutorial.
Done? You should have ended up with a simple Hello World type app backed by a build system that automatically recompiles your code and reloads the browser every time a file is modified. Hang on to this code, we’ll build our sampler on top of it.
At this point, you might want to go into
styles.css and nuke the contents.
Play a Sound with the WebAudio API
The WebAudio API lets you play, synthesize, manipulate, and monitor audio in the browser with an incredible amount of precision and control. Instead of going into the theory behind the API, I’m going to take a code-first approach and explain parts of the API as we use them to build our sampler.
To begin with, let’s look at the simplest use case for WebAudio: loading an audio file from a URL and playing it back.
In this tutorial we’ll load a single sample from a URL and play it back when a button is clicked.
Create an AudioContext
Open
app.component.ts from the quickstart in your text editor. It should look like this:
import { Component } from '@angular/core'; @Component({ selector: 'my-app', template: `<h1>Hello {{name}}</h1>`, }) export class AppComponent { name = 'Angular'; }
In order to do anything at all with the WebAudio API, we must create a new instance of
AudioContext. We’ll talk about what
AudioContext does in a bit. For now, just follow along.
Let’s create the
AudioContext right after Angular has finished initializing our
AppComponent.
import { Component, OnInit } from '@angular/core'; @Component({ selector: 'my-app', template: ``, }) export class AppComponent implements OnInit { private audioContext: AudioContext; ngOnInit() { this.audioContext = new AudioContext(); } }
In the code above, our
AppComponent implements the
OnInit interface, which defines a single method:
ngOnInit(). This is a lifecycle hook that is invoked after the component has been initialized and all of its inputs and output bindings (more on these later) are live.
ngOnInit() is a great place to create our
AudioContext. In the code above, we declare a property called
audioContext on
AppComponent and initialize it inside
ngOnInit().
Initializing an
AudioContext is simple: we call
new AudioContext() and we’re on our way.
Lifecycle what?
From the moment it’s created to the moment it’s destroyed, each Angular 2 component has a lifecycle that’s managed by the framework itself. For each stage in its lifecycle, a component can implement a method called a lifecycle hook that gives it a chance to implement custom behavior during that stage. For a full list of lifecycle hooks that Angular invokes, read the chapter on lifecycle hooks in the documentation.
Why did we initialize
audioContext inside
ngOnInit() when we could have done it inside
AppComponent’s constructor? A component’s input and output bindings are undefined at the time its constructor is called. If your initialization code depends on the values of these bindings, it will fail if you put it inside the constructor. By the time
ngOnInit() is called, all the input and output bindings have been checked for modifications once and are ready for use.
Our simple example doesn’t declare any bindings on
AppComponent so it doesn’t matter where we put our initialization code, but putting it in
ngOnInit is good practice and you should get used to doing it early on.
Fetch Audio Data from a URL
Even though Angular comes with its own HTTP library located in the
angular2/http module, we’re not going to use it for this tutorial. As of this writing, the library does not support loading data into
ArrayBuffers without resorting to ugly hacks.
If you’re reading this in the future and the Angular team has added support for
ArrayBuffers to the HTTP library, leave a comment on this article or write to me at contact@ankursethi.in and I will update this section. For now, we’ll use a simple
fetch() to download our audio sample.
Let’s add a new method to
AppComponent:
fetchSample(): Promise<AudioBuffer> { return fetch('samples/snare.wav') .then(response => response.arrayBuffer()) .then(buffer => { return new Promise((resolve, reject) => { this.audioContext.decodeAudioData( buffer, resolve, reject ); }) }); }
We begin by fetching the audio sample from a URL using
fetch(). Then, we call
arrayBuffer() on the response object to read the response data into a new
ArrayBuffer. We now have a reference to a buffer containing binary audio data stored in
snare.wav. However,
.wav is not an audio format that the WebAudio API understands. In fact, the only format the WebAudio API understands is linear PCM. Before we can play our file, we must somehow turn it into linear PCM. But how?
We use the
decodeAudioData() method on our
AudioContext to decode the data and create a new
AudioBuffer object. This object contains the PCM data we’re looking for. In the end, our
fetchSample() method returns a
Promise that resolves with the
AudioBuffer.
Let’s load the sample when our component is initialized. At the same time, let’s add a play button to the web page. It’s a good idea to disable the play button while the sample is loading.
Change the component’s template so it looks like this:
`<button [disabled]='loadingSample'>play</button>`
The
[disabled]='loadingSample' syntax is a form of one-way data binding called property binding. This code binds the
disabled property of the play button to the value of
AppComponent instance. The direction of data flow is from the component class to the template, i.e, any change to the
loadingSample property inside the component instance will reflect in the template, but not the other way round.
Now add the
AppComponent. Also add an additional property called
audioBuffer to store a reference to our decoded audio data.
export class AppComponent implements OnInit { private audioContext: AudioContext; private loadingSample: boolean = false; private audioBuffer: AudioBuffer; ngOnInit() { this.audioContext = new AudioContext(); } fetchSample(): Promise<AudioBuffer> { return fetch('samples/snare.wav') .then(response => response.arrayBuffer()) .then(buffer => { return new Promise((resolve, reject) => { this.audioContext.decodeAudioData( buffer, resolve, reject ); }) }); } }
Finally, call
fetchSample() inside
ngOnInit():
ngOnInit() { this.audioContext = new AudioContext(); this.loadingSample = true; this.fetchSample() .then(audioBuffer => { this.loadingSample = false; this.audioBuffer = audioBuffer; }) .catch(error => throw error); }
We’re ready to play our sample!
Play a Sample
Add a new method to the component:
playSample() { let bufferSource = this.audioContext.createBufferSource(); bufferSource.buffer = this.audioBuffer; bufferSource.connect(this.audioContext.destination); bufferSource.start(0); }
We start out by creating a new instance of
AudioBufferSourceNode and setting its
buffer property to the
audioBuffer we created in the previous section. An instance of
AudioBufferSourceNode represents a source of audio in the PCM format stored inside an
AudioBuffer.
Next, we connect
bufferSource to the
destination of our
audioContext. More on this in the next section.
Finally, we tell the
bufferSource to play the audio stored in
audioBuffer immediately by calling
bufferSource.start(0).
That’s it. The only thing we need to do now is to call our
playSample() method when the play button is clicked. But first, let’s take a step back and talk about the WebAudio API.
Audio Contexts and Audio Nodes in the WebAudio API
All audio operations in the WebAudio API happen inside an
AudioContext, an object representing a graph of audio processing nodes. Each audio processing node in this graph is responsible for performing a single operation on the audio signal that flows through it. The API defines nodes for generating audio signals, controlling volume, adding effects, splitting stereo channels, analyzing audio frequency and amplitude, etc. In code, these audio processing nodes are represented by objects that implement the
AudioNode interface.
Our journey through the audio processing graph begins at an audio source node, a type of audio node that represents a source of audio. We’ve already seen one type of audio source node in this tutorial: the
AudioBufferSourceNode.
The signal from an audio source node can be piped into an instance of an
AudioDestinationNode using
AudioNode’s
connect() method. In
playSample(), we pipe the output of our
AudioBufferSourceNode to the
destination property of our
AudioContext. This property is an instance of an audio destination node that represents the computer’s currently selected audio output device. The audio processing graph created by
playSample() looks like this:
This is the simplest audio processing graph possible, consisting of only a source of audio and a destination. In reality, we’re not limited to just two audio nodes per audio context. Before our audio signals exits the graph at an audio destination node, it could pass through a number of audio effect nodes or analyzer nodes that change the nature of the audio signal in significant ways.
Here’s an example of a more complex audio graph:
As we add more features to our sampler, we’ll talk about the different kinds of audio nodes available to us and use them to construct more complex audio graphs. For now, this is all you need to know to start using the WebAudio API.
With that out of the way, let’s move on and add a click handler to our play button so we can finally hear our audio sample.
Add a Button with a Click Handler
Modify the
AppComponent template to look like this:
`<button (click)='onClick()' [disabled]='loadingSample'>play</button>`
Then, add a new method to
AppComponent:
onClick() { this.playSample(); }
The
(click)='onClick()' syntax is a form of one way binding called event binding. It’s used to declare an expression that is executed in the scope of the component when an event occurs in its view. In this case, we call the
onClick() method on
AppComponent when our button is clicked.
Refresh your browser and click the play button. Congratulations, you’ve just loaded and played an audio file using the WebAudio API!
Adjust Playback Speed
Adjusting the playback speed of the audio sample is a simple matter of changing bufferSource’s
playbackRate property. Let’s add a private variable to our component:
private playbackRate: number = 1.0;
Add a range input to the template:
<div> <button (click)='onClick()' [disabled]='loadingSample'>play</button> </div> <div> <input type='range' min='0.01' max='5.0' step='0.01' [(ngModel)]='playbackRate'> </div>
Since the
ngModel directive is part of Angular’s
FormsModule, we need to inject it into our application. At the top of
app.module.ts, add:
import { FormsModule } from '@angular/forms';
Then, edit the
imports section of the configuration passed to the
NgModule decorator so that it includes
FormsModule:
imports: [ BrowserModule, FormsModule ]
So far in this tutorial we’ve worked with two kinds of bindings: event bindings and property bindings. Both of these are forms of one-way binding.
The
[(ngModel)]='playbackRate' syntax in the code above is a form of two-way binding. Here, we’re telling Angular to update the value of our component’s
playbackRate property whenever the user moves the range input, and to update the value reflected by the range input whenever the user changes the component’s
playbackRate property.
ngModel is a built-in Angular directive that makes this possible.
Finally, change the
playSample() code so it sets the playback rate of
bufferSource before playing the sample:
playSample() { let bufferSource = this.audioContext.createBufferSource(); bufferSource.buffer = this.audioBuffer; bufferSource.playbackRate.value = this.playbackRate; bufferSource.connect(this.audioContext.destination); bufferSource.start(0); }
Now try moving the playback rate slider and clicking the play button.
Adjust Playback Volume
To control the playback volume of our sample, we’ll add an audio processing node called
GainNode between the
AudioBufferSourceNode and the destination of our
AudioContext. After we’re done, our audio processing graph will look like this:
GainNode can be used to boost or attenuate the audio signal that passes through it. A gain of 1.0 means no change in the audio signal. A gain greater than 1.0 means the signal will be boosted (often causing unwanted distortions in the signal), and a gain of less than 1.0 means the signal will be attenuated.
First, add a property called
gain to the component:
private gain: number = 1.0;
Next, add another slider to the template and bind its value to
gain:
<div> <button (click)='onClick()' [disabled]='loadingSample'>play</button> </div> <div> <label for='playbackRate'>Playback rate:</label> <input type='range' id='playbackRate' min='0.01' max='5.0' step='0.01' [(ngModel)]='playbackRate'> </div> <div> <label for='gain'>Volume:</label> <input type='range' id='gain' min='0.01' max='5.0' step='0.01' [(ngModel)]='gain'> </div>
It’s a good idea to add labels to the
gain and
playbackRate sliders so we know which one is which.
Finally, modify the audio graph in
playSample():
playSample() { let bufferSource = this.audioContext.createBufferSource(); bufferSource.buffer = this.audioBuffer; bufferSource.playbackRate.value = this.playbackRate; let gainNode = this.audioContext.createGain(); gainNode.gain.value = this.gain; bufferSource.connect(gainNode); gainNode.connect(this.audioContext.destination); bufferSource.start(0); }
Now try moving the volume slider and clicking the play button.
Next Steps
In addition to the basics of the WebAudio API, this tutorial covered lifecycle hooks, event bindings, property bindings, and two-way bindings in Angular 2.
In the next tutorial, we’ll learn how to trigger our sample using a MIDI controller or computer keyboard. | https://ankursethi.in/2016/01/13/build-a-sampler-with-angular-2-webaudio-and-webmidi-lesson-1-introduction-to-the-webaudio-api/ | CC-MAIN-2021-21 | refinedweb | 2,598 | 53.41 |
13 October 2011 14:02 [Source: ICIS news]
HOUSTON (ICIS)--KMG Chemicals’ fiscal fourth-quarter net income fell 65% year on year to $1.2m (€864,000), mainly because of higher raw materials and distribution costs, as well as costs from consolidating manufacturing operations, the US specialty chemicals producer said on Thursday.
KMG’s operating income for the three months ended 31 July fell 57% year on year to $2.7m. The company warned last month to expect lower operating and net income.
KMG’s sales rose 19% year on year to $74.2m, driven by higher sales in the company's electronic chemicals and wood treatment chemicals businesses, it said.
CEO Neal Butler said the electronics chemicals business benefited from a strong semiconductor market and global price increases.
However, KMG’s price increases did not keep up with a “rapid escalation” in costs, he said. Electronic chemicals is KMG’s largest segment by sales, accounting for 54% of total sales during the quarter.
As for KMG’s outlook, ?xml:namespace>
Absent a global recession, KMG should see organic growth in its core electronic chemicals business in 2012 and beyond, Butler said.
At the same time, KMG will look for “additional consolidating acquisitions” in electronic chemicals and wood treating chemicals to drive growth, he added.
($1 = €0.72)
For more on KM | http://www.icis.com/Articles/2011/10/13/9499952/us-kmg-chemicals-fiscal-q4-net-income-falls-65-but-sales-rise-19.html | CC-MAIN-2014-42 | refinedweb | 222 | 55.34 |
In this part two we are going to see a little about the items contained in every windows phone seven project template. Then we will see GUI designing and coding part. For new Visual studio users you can first read part one before going to part two.
Know items in WP7 solution:
Solution in visual studio is a container or folder where related projects are saved. With the help of solution explorer we can view items and perform item management tasks in a solution or a project. Solution explorer is located at the right of the IDE, if it is not there you can show it by going to view menu from the menu bar and then Solution Explorer or just Ctrl + R or Ctrl +Alt + L (These shortcuts change).
Number of downloads: 3
Let’s discuss in brief about these items you see in solution explorer.
App.xaml file;
This is an Extensible Application Markup Language (XAML) file which is an important part of Silverlight programming. In particular, developers use this file for storing resources that are used throughout the application. Herein you can define application-level resources such as colors, brushes and style objects used throughout the application. The XAML code also initialize the ApplicationLifetimeObject property for creating PhoneApplicationService class which provides access to various aspects of the application`s lifetime.
Number of downloads: 0
The App.xaml together with its code-behind file App.xaml.cs defines an instance of the application class. This class encapsulates a Silverlight for Windows Phone application and provides its entry point.
Number of downloads: 0
The App.xaml.cs also has a constructor in an Application class which has a handler for the UnhandledException event. Also there is a RootFrame property which identifies the starting page of the application
This file defines the main UI of the project. By default the designer shows the document in split view, one part shows the XAML code and the other one shows a designer of the user interface elements.
Number of downloads: 0
From the toolbox you can customize the interface by dragging its components to the designer. Initially, the interface came with the application name and title labels which you can edit or remove them.
Also this file contains its C sharp code file, MainPage.xaml.cs. This file contains a partial class named MainPage that defines the visuals you will actually see on the screen when you run the Windows Phone program.
Number of downloads: 0
ApplicationIcon.png / Background.png / SplashScreenImage.jpg;
These are picture files present in Windows Phone 7. The ApplicationIcon.png file contains the icon that identifies the application in the quick launch screen of the phone device. If you are using VS 2010 you can open it in a built-in image editor else if you have Express edition you can open it in other image editor. The SplashScreenImage.jpg is the image that display when the program is initialized. The Background.png is the image that is used when the application is pinned as an application tile.
Creating GUI and Coding:
OK, now return to MainPage.xaml where we can customize the UI. Remember the application which we have run in part one, now select the page name Text Block and go to properties then you can change the properties of any tool. But now just change the text to “DIC Application” and font size to 64. Also you can change the text of the two buttons and their names as shown below..
Number of downloads: 0
Number of downloads: 0
Fine, let me stop telling about UI because it sounds easy, and from this brief now you can do for the rest of available tools. My design will be like this for now:
Number of downloads: 0
Now double clicking on any of the tools it will open MainPage.xaml.cs file with a ready-made click event function where you can write code for it.
Number of downloads: 0
Also you can change the load event of the form by double clicking on the open space of the form. Another thing which you need to know is that if you double click any control, it will update the XAML to include that control_Click event handler.
Now in button Ok event replace this code:
private void button2_Click(object sender, RoutedEventArgs e) { MessageBox.Show("I love Dream In Code"); }
Also you can manage errors in the application by putting new page that will display the error message and you create an event handler for the unhandledException event. This event is raised whenever an exception in the application is not caught. Although your application should include proper handling for any exceptions that you can deal with. To add new page just go to solution explorer, right-click on the project node, point to add and select New Item. In the dialog window, select Windows Phone Portrait Page, give it a name say ErrorPage.xaml and then click Add.
In the ErrorPage.xaml change the code:
<!--ContentPanel - place additional content here--> <Grid x:</Grid>
To be:
<!--ContentPanel - place additional content here--> <Grid x: <Border BorderBrush="White"> <TextBlock x: </Border> </Grid>
In the file ErrorPage.xaml.cs insert the following namespace directive in the top of the file:
using System.Windows.Navigation;
Then in the error page partial class insert the following code and your class will be like:
public partial class ErrorPage : PhoneApplicationPage { public ErrorPage() { InitializeComponent(); } public static Exception Exception; protected override void OnNavigatedTo(NavigatingEventArgs ex) { ErrorText.Text = Exception.ToString(); } }
The rest of code needs C# and Silverlight knowledge and this tutorial is not about them but is about introduction to Windows Phone 7 programming in Visual Studio.
Now another thing you need to know here is about the dynamic layout. When you try to rotate the emulator the display will not change to accommodate the new orientation.
Number of downloads: 0
Number of downloads: 0
This is because, by default, Silverlight programs for WP7 run in portrait mode, and XNA programs run in landscape mode. To fix this, in the root PhoneApplicationPage in MainPage.xaml file just change the code:
SupportedOrientations="Portrait"
To:
SupportedOrientations="PortraitOrLandscape"
SupportedOrientations is a property of PhoneApplicationPage which sets member of the SupportedPageOrientation enumerator to Portrait, Landscape or PortraitOrLandscape.
Now recompile and try to rotate the emulator the display will be as shown below:
Number of downloads: 4
This was the story about coding, now let me finish this tutorial by introducing a little about compilation errors. If it happens that you try to run a code with some errors, errors will be displayed in error list window. This window displays errors, warnings and messages produced by the compiler in a list that you can double-click an item to automatically open the relevant source code file where the error occur. Then fix the error and rerun again.
Thank you a lot to those who follows this tutorial, I think it help, don’t forget to rate and credit this tutorial. See you again in other Dream In Code threads. | http://www.dreamincode.net/forums/topic/212883-creating-a-windows-phone-7-application-in-vs2010-for-non-vs-userspart/ | crawl-003 | refinedweb | 1,176 | 61.97 |
Windows Server 2008 One Year On — Hit Or Miss?
magacious writes "Friday marked a year to the day since Microsoft launched Windows Server 2008, but did it have quite the impact the so-called software giant expected, or did it make more of a little squeak than a big bang? Before its arrival on 27 February 2008, it had been five long years since the release of the last major version of Windows Server. In a world that was moving on from simple client/server applications, and with server clouds on the horizon, Windows Server 2003 was looking long in the tooth. After a year of 'Vista' bashing, Microsoft needed its server project to be well received, just to relieve some pressure. After all, this time last year, the panacea of a well-received Windows 7 was still a long way off. So came the new approach: Windows Server 2008."
Not a matter of opinion.. (Score:2, Informative)
Re:Not a matter of opinion.. (Score:5, Funny)
Second comment on the thread, and it's already been Godwin'ed. I _am_ impressed.
Re: (Score:3, Funny)
Man, have you seen the picture [photobucket.com] of his cat?
Re:Not a matter of opinion.. (Score:5, Insightful)
It's not useless, and in fact, it's the very first thing I thought to myself when I read the summary.
To further your own analogy, how seriously could you take an article that, in its first paragraph, dismissed Nazi Germany as something the world over-reacted to and never should have taken seriously?
It sets a tone, suggesting that the author's views may be badly colored.
whats it give us? (Score:5, Informative)
Re:whats it give us? (Score:4, Insightful)
2k3 just works.
Does anyone have a compelling reason to use 2k8?
Re: (Score:2)
...and can fully leverage the new GP features in Vista, assuming you chose to deploy Vista in the first place.
Yes that's true, but you can push a "Group Policy Client Side Extension" package (with WSUS if you have it set up) that gives you those features on XP and Server 2K3 as well...
RODC seems like a good idea... your AD forest has to be upgraded to 2K8's schema though, right?
Re: (Score:3, Insightful)
2k3 is good, but I hate having to restart every week or so when MS puts out updates.
Re: (Score:2)
I've been curious why, all of a sudden, several servers have announced regular routine maintenance windows during which they're unavailable... They used to be available 24/7.
Re: (Score:2, Informative)
There are multiple issues which can cause what you describe; the most common one I've encountered in the wild is the combination of a WS08 bug (for which there is a hotfix) together with McAfee.
Most likely: [microsoft.com]
Maybe (SMB2 only): [microsoft.com]
Basically: If you have issues like that, don't reboot the servers. Open a PSS case.
Re: (Score:2)
While I wouldn't automatically shriek with horror at seeing an NT 4 SP6 server in a production environment, I might sort of wonder if the people running the thing might be better off with a Win2k SP4 server.
I ran literally one of the largest NT4-backed networks in the world (military) for a while, and liked it just fine, but you have to admit it had some serious problems.
Besides, you can play games on Win2k. NT sucked for gaming.
Re:whats it give us? (Score:5, Informative)
The main things are the ability to do a "core" (minimalistic) install, Hyper-V, the terminal services enhancements as you mentioned, IIS7 (that's actually a very, very big deal for .NET shops), and souped-up Active Directory. The rest is mostly enhanced management (incremental upgrades and some new features here and there to make stuff faster/easier), incremental improvements on most things, and support for Vista-specific features. It's also decently faster overall.
The first things I mentioned are actually pretty major, if you need them, but obviously they're irrelevant if all you're using it for is a file server, of course :)
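For anyone who hasn't poked at a Core install yet: there's no GUI, so roles get added from the command line. A rough illustrative sketch (the component names are whatever oclist actually reports on your box, and they're case-sensitive):

    rem List the available/installed optional components
    oclist
    rem Install the DNS server role, for example
    start /w ocsetup DNS-Server-Core-Role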
Re: (Score:3, Insightful)
I would say a minimal install is very relevant for a file server... Who wants tons of crap on a machine that's only acting as a file server?
Re: (Score:2)
No, it handles AD just fine. I use it every day for that. To map UIDs properly you need one of: a replicated /etc/passwd file, schema extensions for AD, or an LDAP server. Depending on what you are doing, I think those are acceptable solutions for most situations, the first one being the most common for one or two file servers hanging out on a Windows domain. But, like you say, Samba 4 will eliminate the need for this and make it that much easier to integrate.
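For what it's worth, the winbind route is only a handful of lines in smb.conf. A rough sketch, with the realm, workgroup, and idmap ranges as placeholders you'd swap for your own:

    [global]
       security = ads
       realm = EXAMPLE.COM
       workgroup = EXAMPLE
       # map AD accounts into a local UID/GID range
       idmap uid = 10000-20000
       idmap gid = 10000-20000
       winbind use default domain = yes

Then a "net ads join -U Administrator" actually joins the box to the domain.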
Re:whats it give us? (Score:5, Insightful)
I don't disagree with what you're saying, but I don't think that's the main reason people should go for an NT-based solution.
I really, seriously think it's the Trained Chimp factor.
If you set up an NT network properly, lock it down, and make sure someone with a clue looks in on it every once in a while, you can have a much lower price-point trained chimp fix the day-to-day problems; sure, there will be more day-to-day problems, but your chimps are a lot cheaper, and easier to find.
Also, I had a lot of problems trying to work with earlier versions of Samba; I imagine a lot of other people did as well. It's going to take a while to get over the distrust.
Re: (Score:3, Interesting)
Actually no, I'm a busy admin and I don't have time to follow these instructions for getting Samba hooked up to Active Directory: [samba.org]
Then I have to install ACL support and headache that goes with that, hoping something doesn't scramble my file system. In most businesses, Windows Server is not terribly expensive and allows the admin to get more done in less time.
Note, there are distros that offer GUIs for getting this done but they generally cost $$$.
Re: (Score:2)
This is all true... the best admin advice I can think of was when Scotty (from Star Trek lore) fairly yelled at a young engineer "How many times do I have to tell you, the right tool for the right job!"
Re:whats it give us? (Score:5, Insightful)
I'm a busy admin too. Fortunately it doesn't take long at all to install Ubuntu Server, apt-get install likewise-open, and then type "domainjoin-cli join my.domain my-username" in the command line.
When you use being "busy" as an excuse for being ignorant of your options, you do your employer a disservice. That page you linked to hasn't had a major edit in two years or so, and it does not reflect the current best practices for setting up a simple Linux/Samba file server with AD integration. And no, no extra $$ is required for Ubuntu Server.
Re: (Score:2)
When you use being "busy" as an excuse for being ignorant of your options, you do your employer a disservice
Not necessarily. They hired a windows admin, they should expect to have a windows admin. Seeing as they made that decision, they're fine with him using Windows options that cost money, and it's almost certainly worth more to the employer to have him save time using Windows. Is it worth it to the employer to pay him to learn to use samba? Maybe, but that's their decision, not his. It's entirely possible, even likely, that when all is said and done, the cost of training and the extra support time will easily
Re: (Score:2)
Re:whats it give us? (Score:5, Insightful)
That page you linked to hasn't had a major edit in two years or so, and it does not reflect the current best practices for setting up a simple Linux/Samba file server with AD integration.
Then what the fsck is it doing on the samba.org site? Why isn't it removed if not updated? You know, this IS one of the real pitfalls of Linux, whenever you're looking for a guide you're likely to find something that's two years old and may or may not be valid. If documentation sucks, documentation re-verification on newer versions suck even more. I bet that's 99% of the reason Ubuntu got their code names down the way they do, if you search for "active directory hardy", "active directory intrepid", "active directory jaunty" you're much more likely to get relevant hits than "active directory ubuntu" or worse yet "active directory linux".
Re:whats it give us? (Score:5, Insightful)
Re: (Score:3, Insightful)
Oh please, in Debian installer, at the stage formatting disks before copying base system, installer offers you choose mount options, which include mounting with acls, anyway as far as I know, any modern distro comes with acls installed and any moder file system supports acls, you just need to enable them by adding a mount option in fstab, so I wouldn't call this the most difficult step in configuring linux file server, may be that was true 10 years ago, but not now.
Then it takes about a whole afternoon to f
Re:whats it give us? (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Yeah, powershell is slick. Though making commandlets is so simple in C# (or reusable scripts in powershell itself), that this is more of a "nice to have" than something worth paying for. Though im guessing they'll push Powershell 2.0 with it (it is in the Windows 7 beta), and now THAT is slick.
I do find it a little ironic, that of all things that Microsoft could have done better than Unix, one of the ones they wiped the floor with, is the -Shell- scripting. Go figures.
Re: (Score:2)
NTBackup in 2008 can no longer backup data located on a remote share which is a PITA (at least I can't do this, does anyone know different?). I need this because I backup several servers onto 1 backup device. So today my backups are still done on a 2003 server.
Re: (Score:2)
Backup in 2008 is "cooler in other ways" though - backup images that are made are VHDs (Hyper-Vs image format) and are bootable! How about that for bare metal restore
You can get NTBackup working on Windows Vista: [petri.co.il]
I'd imagine those instructions would work on Server 2K8 as well... they're based on the same codebase afaik
Re: (Score:2)
Remote apps instead of a full desktop - already done by X11 and citrix for many years.
Hyper-v - already done by xen, kvm, vmware and a whole load more, most linux distros already had some kind of vm shipping by default.
Re: (Score:2)
Been trying to talk them into a *nix back end with the app on a Samba share, but they ain't buyin it...
Re: (Score:2)
Some good, lots bad. (Score:2)
I expected more driver support (Score:4, Interesting)
I've installed Win2008 a few times and it always surprises me that I have to dig up the driver disks for the storage controllers... never have to do that when I install Fedora or Debian.
Re: (Score:3, Informative)
Basic Open Source versus Proprietary issue. It's a lot easier for a hardware company to get drivers added to Linux distros than to Windows install disks.
Re:I expected more driver support (Score:5, Interesting)
This is really an about face... 10 years ago, Linux was the platform you often couldn't get running due to missing hardware drivers -- you really had to be very careful about what hardware you chose.
Also, Windows 2000 was the easy-to-use OS.. Linux was the server OS with usability issues..
Is it starting to change, so that Linux is actually more usable than Windows server?
That would be the day...
Now if only we could get a true match for Windows Active Directory. So that the software on Windows Desktop machines, works EXACTLY as if the environment was powered by Windows servers, Exchange for e-mail, etc.
Re: (Score:2)
Re: (Score:2)
>This is really an about face... 10 years ago, Linux was the platform you often couldn't get running due to missing
hardware drivers -- you really had to be very careful about what hardware you chose.
Nothing has really changed here, if your hardware is not supported on Linux out the box then the chances are it won't work at all. In Windows land you expect to have to provide a driver disk, this option doesn't really exist on Linux.
This is really caused by the infrequent releases of Windows vs the yearly o
Re: (Score:3, Interesting)
Samba4 is excruciatingly close to true AD support. I'm now using it for my own network for a handful of WinXP computers. I think in about 1 year Samba4 will be ready for production.
OpenChange is also moving at a fast pace.
Re: (Score:2, Insightful)
That's a very unfair comparison. Servers need to be extremely cautious with drivers in order to provide the sort of 99.999% uptime expected for industry. Fedora and Debian are more comparable to MacOS or Windows XP this way, where it's easier to update and support oddball hardware configurations.
No, install CentOS or run Oracle or VMware servers on it, something with commercial support expected on it, and you're going to run into driver limitations because they've not had a year or more to test it under ser
Re:I expected more driver support (Score:5, Informative)
You don't get 5 nines out of a single server install, sorry. The only way you get that is with HA clustering and automatic failover.
PC hardware, even expensive stuff, is not reliable enough no matter what $VENDOR's sales pitch is.
You might get lucky and get a single reliable box, but if you deploy a non-trivial number of servers you will need to plan for hardware/software failures.
Re: (Score:2)
Nope... (Score:3, Interesting)
RHEL 5.3 still has tons more drivers than Win2k8. I know from very painful experience.
It's a natural consequence of
a) as mentioned before, the nature of the licensing, but probably more importantly...
b) the release cycle. RHEL is pretty good about timely major updates compared to eternities for MS service packs.
Re: (Score:3, Interesting)
The biggest reason for the "extra step" for a lot of drivers in Ubuntu is because of "non-free" drivers. Because of the spirit of Ubuntu, they have to make you feel guilty about using an nVidia or Broadcom driver before you go "It's on $#@!ing notebook, just install it."
Re: (Score:2)
You must have some vastly different hardware than anything I've used in years. For a concrete example; I am a devotee of Lenovo Thinkpads, especially the T series. Linux installs on these like a dream. To install Windows on them, you have to go into the BIOS, set the SATA controller to compatibility mode, do the install, get a very hard to find disk from Lenovo's site since Windows will not try to re-detect the root volume's storage controller if it changes (it will blue-screen with INACCESSIBLE BOOT DEV
Re: (Score:2)
If you are trying to install vanilla Windows XP (i.e. no applied service packs or anything) on that laptop, then I'm not surprised given that XP was released 2 years before SATA (XP in 2001, Serial ATA in 2003). Kinda hard to have DRIVER SUPPORT for something that won't be released for another 2 years.
Try creating a slipstreamed install CD with at least service pack 2 and possibly the drivers for the SATA controllers. Should save you a ton of trouble in installing a fresh XP installation.
Re: (Score:3, Informative)
The problem isn't that it's difficult to get storage drivers into Windows -- Microsoft actively solicits all the major IHV's to provide them. The problem is that the cutoff date for submission can be a year or more in advance of when Windows finally ships. This guarantees that drivers for the latest hardware won't be included.
Re:I expected more driver support (Score:5, Informative)
Can't answer your question (Score:2)
because none of the businesses I see have adopted 2008 server.
Very few have any Vista desktops either.
Re:Can't answer your question (Score:5, Interesting)
To add a voice: I'm seeing more Linux installs than Win2k8 and Vista combined. This many mean nothing, or may mean I'm seeing what the average person is seeing. Consolidation and cost are driving what I'm seeing. When you see a row of several hundred blades running RHEL (replacing Windows in some cases) it's fairly convincing.
Re:Can't answer your question (Score:5, Interesting)
The data center where my servers are is a mixed client data center. It's not the decision of a single company there. There is one company who is using Windows server 2k3 but they are not upgrading. Some of their stuff is moving to Linux/Solaris. The RHEL stuff is a different company that replaced all their Windows servers and went full on RHEL. In my area, we use a mix of Win2k3, Solaris (5.8-10), and Linux (CentOS). There is a ton of telecomms stuff in half the data center as well. I'm not seeing any growth in Windows servers, quite the opposite. That's why I thought my experience might be 'average' so to speak.
Re:Can't answer your question (Score:5, Interesting)
Your experience would be average--for low-end stuff. Generally, if you have the money to be leveraging a lot of Windows Server, you have the money (and often need) your own DC, or a sizable chunk of one.
Anybody whose cup of tea is ASP.NET should be running, not walking, to Server 2008. IIS7 is so much more useful and performant it's not even funny.
Re: (Score:2)
Generally, if you have the money to be leveraging a lot of Windows Server, you used to have the money (and often need) your own DC, or a sizable chunk of one.
there, fixed that for you
:)
to be fair, his experience also is average for the high-end. Big shops tend to run larger systems, sometimes Solaris, sometimes IBM running RHEL. If they have the money that its no object, they *still* don't tend to run Windows servers.
Re: (Score:2)
Depends on the business...
Technically oriented businesses often have a lot more linux, businesses where the primary focus is not computing related tend to have a lot of windows (often managed by external companies).
Also a lot of office related stuff is usually all windows, but backend and internet related stuff can be linux based... A lot of smallish companies who think they're 100% windows often have linux boxes and don't realise it... A huge amount of networking equipment runs linux these days.
Re: (Score:2)
Our software (Dental Office Management, Kodak, Practice Works)is certified to run on W2K for the sever and XP pro only, we are actually running on W2003 and a collection of XP pro and one XP home machines for the client and are getting away with it. I don't see W2008 happening for years and Vista will be skipped. Our last system ran on Xenix originally and later on SCO Open Server!
No news is good news (Score:5, Interesting)
Outside of removing ISA Server from the Small Business suite, I've read very few negative opinions on 2K8. If you dont need 64-Bit goodness, it might not be worth upgrading from a stable 2K3 environment.
Re:No news is good news (Score:5, Informative)
I recently setup a client of mine with two Win2k8 64-bit servers (in a larger virtual VMware setup). So far, it's worked out very well. It's fast, stable (uptime is exactly equal to the number of days since we last had to reboot for a patch), and played nice with everything already present. Active Directory and Exchange 2007 migrated from the previous Win2k/Exchange 2k setup without a hitch. In other words: no complaints at all, other than the price (which wasn't too bad, since the client received non-profit pricing - but most of what I setup is Linux or FreeBSD and I greatly prefer that pricetag!).
Things I noticed that have improved:
* The group policy editor is a bit easier to use, and less confusing.
* The Vista performance/health monitor is actually pretty good, and provides a really handy ntop-like interface for seeing which service is doing what with the network (not as fine grained as I'd like, but it's a good starting point).
* The old Services-For-Unix services are more tightly integrated, and it was very easy to get NFS up and running.
* Less is installed by default, and adding just the required services was very straightforward.
* The scheduler seems to have improved, because processes distribute over CPUs more widely, and throughput/responsiveness "feels" better.
* The new role-based manager for file serving is a bit easier to find, but is really similar.
* A couple of new diagnostic wizards have appeared, including one for Group Policy - it helped me find a couple of problems I hadn't thought about.
Items I wasn't so fond of:
* Activation. It doesn't matter if you have a charity volume license anymore - you still have to activate. That bugs me, because this server has to last for years, and I worry that if I have to restore a backup in 5 years time the activation wizard may make my life difficult.
* Volume shadow copies are STILL not configured to my liking by default.
* If you want to use some of the new active directory features, you need a pure Win2k8 domain on the server side. It works with "legacy" Win2k/2k3 systems around, but only if they aren't domain controllers.
* The start menu/icons are straight from Vista.
* License management makes less sense, since the license control tools are now hidden away - checking CAL status is a pain.
Overall, for an MS operating system it's pretty good. I don't see a compelling reason to run out and upgrade any Win2k3 systems that are working well - but for new servers, it works great.
Re: (Score:2, Funny)
uptime is exactly equal to the number of days since we last had to reboot for a patch
So... last Tuesday?
Re: (Score:2)
Re:No news is good news (Score:5, Funny)
Hello, this is Bob from Marketing here at FUD Advertising, and we've got this new account from these guys in Washington state called Microsoft.
We've decided to move them into full page adds in Technology and General Media, with short TV spots in support later. We want to go with "Movie-Style" ads: brief quotes from professionals who use the product and speak to potential buyers (Edit from Boss: scratch that
... they want us to call them "users". Sounds like drug addicts to me, but whatever. They write the checks).
We love the idea, because these short quotes are so meaningless, easy to manipulate, memorable and almost perfectly supportive. We think black background, big type with product name at the top, nice picture, and quotes with attributions below
... you know, like a movie ad in the paper.
So, this is what we have so far.
"Less confusing!"
"Pretty good!"
"A good starting point!!"
"Seems to have improved!!"
Send comments to my assistant by Friday.
Thaaaaaanks. That would be Greeaaaaaaaat.
Re: (Score:2)
No, but you can slash this dot --> .
Re: (Score:3, Informative)
"64-Bit goodness" was available with win2K3 [microsoft.com] as well so even that's not a reason to go with win2K8.
Re: (Score:2)
There's also a hard to find copy of Win2k with 64 bit enabled.
Re: (Score:3, Interesting)
ISA provided unmatched flexibility for what it did, but in the (too often) wrong hands, it was a nightmare to configure. Under any circumstance, IMHO, Sonicwall sucks. Unreliable, prone to reset under load (multiple VPNs) and just cheap garbage.
Re: (Score:2, Funny)
By the way, what were those "few negative issues" that you were referring to?
It's not made in white plastic or brushed aluminum with an Apple logo.
Does anyone use Server Core? (Score:2, Interesting)
Re: (Score:2, Informative)
We deployed internally (we're an IT consulting company).
We use it to run our DC/DNS/DHCP primary infrastructure server. Works fine. I see no advantage right now though, and wouldn't deploy such a setup at a customers site.
In WS08 R2,
.NET support will be added to Server Core. This will make it a great option for big web server farms.
Re: (Score:2)
It's not really commandline only, it loads the gui components and then runs cmd.exe instead of explorer.exe, you still have a gui, can still use the mouse and move your cmd.exe windows around, and you can still load gui based apps... It's not like the pure text consoles offered by a unix based os.
TSGateway (Score:4, Informative)
The terminal service gateway is also pretty good. A controlled way to allows TS from the Internet into the clients on the subnet.
Native Backup has improved. (Score:3, Informative)
Server 2008 has a much improved backup utility. It's easy to setup (I just make one backup job that repeats nightly), and will provide a BMR (Bare Metal Restore). The best part however, is the ability to assign multiple USB drives to a backup job. Which ever one is plugged in at the time, it will backup to it. This allows the admin or employee to swap drives before they leave office at night.
My only major gripe is that the backup utility will only do a file level backup. Exchange 2007 is not supported. In theory, you could stop the Exchange Store prior to the backups taking place, be we all know that's just not feasible. Instead, Microsoft states you *must* use a 3rd party backup program or their DPM 2007 product for backup/restore of Exchange! Damn
:(
Why do you need a special OS to run a server ?!? (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Wow dude, you're out there! First of all, there are a lot of people out there that value the straight forward setup approach that Microsoft often gives you for that high dollar. Of course when I'm running Oracle and spend many thousands on it I install it on a free OS but I certainly can't apt-get install Oracle.
Aptitude is great and all, but you're forgetting apt-get install apache-modssl, mod_mysql, php and the myriad of other things that usually have to get installed too in order to do anything useful w
Re:Why do you need a special OS to run a server ?! (Score:3, Insightful)
Actually, despite what MS will tell you, a server should be fundamentally different to a desktop, it should have a lot less software installed... MS's server versions are quite the opposite, they're basically desktops with additional server applications installed, they have a ton of desktop related functionality that is completely useless on a server sitting in a rack somewhere.
Re:Why do you need a special OS to run a server ?! (Score:4, Informative)
Yeah, I know what you mean. IME, Linux is much more valuable to me because it offers more flexibility over the life of a system. If the organisation grows and I need more concurrent users, I don't need to worry about the license. If I need to add a service on an existing server, I don't need to worry about whether Moderately Enterprisey Edition has what I need, or if I can only do it on one of the Really Quite Enterprisey Edition boxes. I can install a zillion times in different VM's, and not have to read the EULA with a fine toothed comb to know if it was legal. In many ways, I'd consider an expensive Linux preferable to a free Windows.
That said, the Windows Server thing isn't that hard to grok. It's just market segmentation, plus a decision to only bundle the server and administrative application bundle with particular variations of the OS. If you prefer, think of it as buying the application bundle, and getting a free, tuned and tweaked version of Windows that is just there to run the expensive application bundle. Net result is that you don't need to worry about compatibility between the applications and your existing OS. MS comes to the table from a proprietary mindset. That's not inherently 100% terrible. And, more important than anything else, they bring some quite good tools. You can decide those tools aren't worth the headaches that come with MS for your situation. But, if you've ever set up NIS and NFS home directories on a bunch of Linux boxes, and you've joined Windows machines to a domain... You know that joining a Windows box to a domain is a heck of a lot more convenient than deploying NIS.
I'm a UNIX admin who has worked with Windows servers, but even coming from my "UNIX 4 eva" side of the fence, I have to admit that the MS solutions make some things very convenient compared to the most analagous UNIX options. Just make sure you know which edition you need, so you install the Windows Server OS that will actually use all of your RAM.
:)
Re:Why do you need a special OS to run a server ?! (Score:5, Informative)
Where I work, a typical server costs $5,500, Windows costs around $600, physically putting the server in the datacenter costs $2,000, and labor for installing, configuring, and supporting the server costs $3,000 over the its life. At the end of the day, Windows servers cost around $11,100. Switching to Linux would save us $600, reducing our costs by 5%.
A typical server with 256GB of RAM would run about $60,000. This server would require the Enterprise editions of Windows Server, so that would run about $3,000. The other costs would remain the same and at the end of the day, the OS is still only five percent of the total.
Re: (Score:2)
you obviously need tons of servers, as they still have the 10 connection limit imposed (on ports less than 1024) on WinXP.
My .02 (Score:2, Informative)
The only new feature that I've seen is DFS and even that is broken. The UI design team moved stuff for the sake of moving stuff and made everything bigger and chunkier. It also spams new windows that have a tendency to put themselves in the background like nobody's business. Also, the new DC's are giving all kinds of DNS errors.
Now maybe the DFS and DNS problems will be worked out in tim
Re: (Score:2)
DFS isn't new, it's been around for years; the latest incarnation with delta replication appearing with 2003 R2. All 2008 adds is transparent Access Based Enumeration for DFS shares and the ability to have more than 5000 DFS targets in a single namespace.
Works well as workstation (Score:5, Interesting)
Re:Works well as workstation (Score:4, Informative)
I don't know whether to mod the parent or reply, but I second this sentiment wholeheartedly.
I am running one as a replacement for my 2003 server/domain controller at my house and also as a Vista-like workstation and game machine. I absolutely love it!
It's just like Vista except for no UAC, no DRM and no annoying slowdowns. In other words, it's everything that Vista should have been, and this is running on only $500 worth of hardware (quad 6600, 4GB RAM).
The 64-bit Vista drivers were a bit difficult to find because my motherboard "doesn't support" Server 2008, but after crossing that hurdle (loading the network driver from a different motherboard with the same chipset because Asus locks out 2008), it's been the best computer I have ever owned.
Works great as a laptop OS! (Score:3, Informative)
I've been using Server2008 x64 on my t61p laptop since it first came out.
It's great! It feels zippier than Vista. It has a smaller install footprint. (actually even wireless isn't installed by default: you have to add it manually). It's been completely rock-solid.
I even use Hyper-V when giving demos at conferences. (unfortunately Hyper-V doesn't cooperate with wireless and disables sleep/hibernate, so I can't use it routinely.)
My experience with 2008 (Score:5, Informative)
Two position statements first: 1) I'm primarily a Unix sysadmin of multiple flavours and love it, 2) I've only used Server 2008 on my test VM network.
Having setup a private network thanks to a company purchased Technet subscription, I now have two Active Directory Domain Controllers, a WSUS server and Terminal Server. My take on 2008 is that when approached the right way, it's actually a very nice operating system.
I like the new Terminal Services seamless window capability, the default policy of only installing the minimum required services, the new look Server Manager, even IIS7 looks nicely moduler. In fact, I could imagine managing a network of 2008 machines in a way that I never could with 2003. Now that might be my lack of fundamental 2003 knowledge (I can use it, but wouldn't describe myself as a "Windows System Administrator").
The reality, even for us Unix/Linux advocates, is that we're probably going to have to interop with Windows Server from time to time, and if it's Server 2008 that I'm having to work with, then I can live with that.
Windows server what? (Score:2)
I have yet to see one, and I see a lot of servers. Seems like 2k3 is good enough and people run other OSs for bigger tasks and virtualization. So... I've seen way more recent deployments of RedHat, CentOS, Ubuntu LTS and W2k3 than 2k8. Maybe it's the Vista smell, I don't know.
Anyone even bothered? (Score:2)
Seriously, we haven't bothered.
Sure we will have to someday as servers are retired and 2003 goes off MOLP but it doesn't seem like a big deal to me to start some push to do it.
More of a quiet snooze then a dramatic miss.
Advanced Firewall settings (Score:3, Interesting)
ACTIVATION?? (Score:5, Interesting)
The fact that I have to activate my OS is annoying. With 2K3, there was a volume licensing option, but with 2K8, that option is gone, and I have to either allow my server to talk to a public Microsoft activation server, or run a KMS server in house.
Sorry, Microsoft, If you don't trust me, I don't trust you.
Re: (Score:3, Funny)
I can see why that would be a terrible idea for a server.
Re:Anything like 2k3? (Score:5, Insightful)
You can mock all you want, but I find decreasing the attack vector for an out of the box install a sensible approach. Something all server intallations should do, regardless of their creators image.
Re:Anything like 2k3? (Score:5, Funny)
Yeah, I know. Thankfully a new installation is safely locked down so that you can only browse the Microsoft website. Imagine what might happen if you could browse the web freely. You might accidently end up here [samba.org] which everybody knows is a site full of trojans and malware.
Re: (Score:2)
The obscure thing you need to do is to add the site in question to your trusted sites zone.
Of course if you are trying to download firefox which sends you to a different mirror each time, it could take a few goes until you get enough firefox mirrors listed.
Re: (Score:2)
Re: (Score:2)
Hmm... Except for the part where it costs $1k for the "standard" version, or almost $500 for the "Web Server" version.
Vista Ultimate is $320, and that's retail. More like $120 more on a Dell.
So... Is it actually fast enough to justify spending hundreds or thousands of dollars on software, instead of hardware?
Or I'll just stick to Ubuntu, and spend the thousands on hardware.
Re:2008 is the 2nd best desktop MS ever made (Score:5, Funny)
Re: (Score:2)
It does beg the question, why does a "server" os need directx 10?
Re: (Score:2)
Obviously, on the world stage, that is still nontrivial money for a lot of people; but you can, easily, get a machine with 8gigs of RAM for under $500. A genuinely decent machine with 8gigs for under $1000. | http://tech.slashdot.org/story/09/02/28/1648216/windows-server-2008-one-year-on-hit-or-miss | CC-MAIN-2015-18 | refinedweb | 6,313 | 71.04 |
What is the Python Tkinter Combobox?
A special extension of Python Tkinter, the
ttk module brings forward this new widget. The Python Tkinter Combobox presents a drop down list of options and displays them one at a time. It’s good for places where visibility is important and has a modern look to it.
The Python Combobox is actually a sub class of the widget
Entry. Hence it inherits many options and methods from the
Entry class as well as bringing some new ones of it’s to the table.
ComboBox Syntax
You must specially import the ttk module to be able to use Comboboxes.
from tkinter import ttk Combo = ttk.Combobox(master, values.......)
Some important Combobox options are listed below.
Assigning a ComboBox Values
In the example we create a list of values, and then feed them to the
values option in the Combobox.
By Default the Combobox is spawned with no value selected. In other words, it’s blank. If you wish, you can avoid this using the
set() function.
from tkinter import * from tkinter import ttk root = Tk() root.geometry("200x150") frame = Frame(root) frame.pack() vlist = ["Option1", "Option2", "Option3", "Option4", "Option5"] Combo = ttk.Combobox(frame, values = vlist) Combo.set("Pick an Option") Combo.pack(padx = 5, pady = 5) root.mainloop()
If the ComboBox was not what you were looking for, it has a variant called the SpinBox which may interest you.
Retrieving Combobox values
Once the user has selected an option, we need a way to retrieve his input. For this we need a button which the User must trigger. The button calls a function that uses the
get() function to retrieve the current value of the Combobox.
from tkinter import * from tkinter import ttk def retrieve(): print(Combo.get()) root = Tk() root.geometry("200x150") frame = Frame(root) frame.pack() vlist = ["Option1", "Option2", "Option3", "Option4", "Option5"] Combo = ttk.Combobox(frame, values = vlist) Combo.set("Pick an Option") Combo.pack(padx = 5, pady = 5) Button = Button(frame, text = "Submit", command = retrieve) Button.pack(padx = 5, pady = 5) root.mainloop()
Video Code
The Code from our Video on Tkinter ComboBox Widget on our YouTube Channel for CodersLegacy.
import tkinter as tk from tkinter import ttk class Window: def __init__(self, master): self.master = master # Frame self.frame = tk.Frame(self.master, width = 200, height = 200) self.frame.pack() self.vlist = ["Option1", "Option2", "Option3", "Option4", "Option5"] self.combo = ttk.Combobox(self.frame, values = self.vlist, state = "readonly") self.combo.set("Pick an Option") self.combo.place(x = 20, y = 50) root = tk.Tk() root.title("Tkinter") window = Window(root) root.mainloop()
This marks the end of the Tkinter Combobox article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.
You can head back to the main Tkinter article using this link. | https://coderslegacy.com/python/python-gui/python-tkinter-combobox/ | CC-MAIN-2022-40 | refinedweb | 476 | 62.04 |
BlockoBlocko
Blocko is a block-based WYSIWYG editor written in ClojureScript and compiled to JavaScript. Currently, Blocko is not yet production ready, so use at your own risk.
InstallInstall
NPMNPM
- Run:
npm i blocko-editor
- Import it:
import blocko from 'blocko';
BrowserBrowser
UsageUsage
blocko.core.init({ container: '#editor', initialContent: [], onChange: (content) => { // store `content` in your database here. } });
APIAPI
container: any JS element that can be targeted via
querySelector
initialContent: a JS or JSON object representing the data
onChange: a callback function called when content changes
DevelopmentDevelopment
To develop Blocko simply run
./build.sh dev, which will then compile to
public/js/blocko.js a development version of Blocko that also auto-reloads as you make changes. After that is done, open
public/index.html in your browser and have fun!
Once you're done with development and want to get production version, then:
- To get the browser production build, run
./build.sh release-browserand check inside
dist/browserfor a brand new
blocko.jsand a
blocko.cssfile.
- To get the NPM production build, run
./build.sh release-npmand check inside
dist/npmfor a brand new
blocko.jsand a
blocko.cssfile. Note that you have to import the CSS file in your project manually. | https://www.npmjs.com/package/blocko-editor | CC-MAIN-2022-27 | refinedweb | 202 | 51.55 |
dynamic_gru¶
- api_attr
declarative programming (static graph)
paddle.fluid.layers.
dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None, origin_mode=False)[source]
Note: The input type of this must be LoDTensor. If the input type to be processed is Tensor, use StaticRNN .
This operator is used to perform the calculations for a single layer of Gated Recurrent Unit (GRU) on full sequences step by step. The calculations in one time step support these two modes:
If
origin_modeis True, then the formula used is from paper Learning Phrase Representations using RNN Encoder Decoder for Statistical Machine Translation .\[ & = u_t \odot h_{t-1} + (1-u_t) \odot \tilde{h_t}\end{aligned}\end{align} \]
if
origin_modeis False, then the formula used is from paper Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling\[ & = (1-u_t) \odot h_{t-1} + u_t \odot \tilde{h_t}\end{aligned}\end{align} \]
\(x_t\) is the input of current time step, but it is not from
input. This operator does not include the calculations \(W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}\) , Note thus a fully-connect layer whose size is 3 times of
sizeshould be used before this operator, and the output should be used as
inputhere. \(h_{t-1}\) is the hidden state from previous time step. \(u_t\) , \(r_t\) , \(\tilde{h_t}\) and \(h_t\) stand for update gate, reset gate, candidate hidden and hidden output separately. \(W_{uh}, b_u\) , \(W_{rh}, b_r\) and \(W_{ch}, b_c\) stand for the weight matrix and bias used in update gate, reset gate, candidate hidden calculations. For implementation, the three weight matrix are merged into a tensor shaped \([D, D \times 3]\) , the three bias are concatenated as a tensor shaped \([1, D \times 3]\) , where \(D\) stands for the hidden size; The data layout of weight tensor is: \(W_{uh}\) and \(W_{rh}\) are concatenated with shape \([D, D \times 2]\) lying on the first part, and \(W_{ch}\) lying on the latter part with shape \([D, D]\) .
- Parameters
input (Variable) – A LoDTensor whose lod level is 1, representing the input after linear projection. Its shape should be \([T, D \times 3]\) , where \(T\) stands for the total sequence lengths in this mini-batch, \(D\) for the hidden size. The data type should be float32 or float64.
size (int) – Indicate the hidden size. .
is_reverse (bool, optional) – Whether to compute in the reversed order of input sequences. Default False.
gate_activation (str, optional) – The activation function corresponding to \(act_g\) in the formula. “sigmoid”, “tanh”, “relu” and “identity” are supported. Default “sigmoid”.
candidate_activation (str, optional) – The activation function corresponding to \(act_c\) in the formula. “sigmoid”, “tanh”, “relu” and “identity” are supported. Default “tanh”.
h_0 (Variable, optional) – A Tensor representing the initial hidden state. It not provided, the default initial hidden state is 0. The shape is \([N, D]\) , where \(N\) is the number of sequences in the mini-batch, \(D\) for the hidden size. The data type should be same as
input. Default None.
- Returns
A LoDTensor whose lod level is 1 and shape is \([T, D]\) , where \(T\) stands for the total sequence lengths in this mini-batch \(D\) for the hidden size. It represents GRU transformed sequence output, and has the same lod and data type with
input.
- Return type
Variable
Examples
import paddle.fluid as fluid dict_dim, emb_dim = 128, 64 data = fluid.data(name='sequence', shape=[None], dtype='int64', lod_level=1) emb = fluid.embedding(input=data, size=[dict_dim, emb_dim]) hidden_dim = 512 x = fluid.layers.fc(input=emb, size=hidden_dim * 3) hidden = fluid.layers.dynamic_gru(input=x, size=hidden_dim) | https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/dynamic_gru.html | CC-MAIN-2021-04 | refinedweb | 596 | 54.22 |
Name:
[171] Florian Mayer
Member: 93 months
Authored: 7 videos
Description: I'm a student from Vienna, the federal capital of Austria. Please visit `My Blog <>`_ ...
Decorators: Introducing functools.wraps [ID:887] (3/3)
in series: Advanced Python
video tutorial by Florian Mayer, added 09/08
Name:
[171] Florian Mayer
Member:.
functools.wraps is a function that allows you to take over the docstring and the name of another function when you make a decorator return another function than the one it takes.
import warnings from functools import wraps def deprecated(func): @wraps(func) def new_func(*args, **kwargs): warnings.warn("This function is old", DeprecationWarning, 2) return func(*args, **kwargs) return new_func @deprecated def i_am_old(foo, bar): """ This is a super-old docstring """: 711 <<
Also nice, but i didnt get the point of sense of this oO maybe it has something to do with my low knowledge about python? :D
Hi Florian,
Do you practice any rehearsals before you make a screen cast?
Preliminary Drawings?
Rough Drafts?
Have a co-worker look at what you are doing?
Run it by some users before publishing?
If something in the presentation goes wrong, can you not re-record it?
It is difficult for the viewer to know what you see as correct or incorrect results or the meaning of the results. We need your explanation of what we might expect to see and when shown something, what it is that we are looking at and what this result means to us.
I found this presentation extremely difficult to follow, it seemed like you were all over the place. I didn't understand what you were trying to accomplish, I didn't understand what it was that you were trying to convey. I didn't understand your example. I didn't understand the functions you pulled in. Then when some results weren't what you were expecting, I saw a lot of movement and heard a of clicking, but didn't know what was going on. It doesn't look like very much preparation was made before producing the screen cast. It definitively needs re-recording.
I think that one technique that might help you would be to have all the text typed out ahead of time and then in your presentation just go through highlighting various portions of the code, explaining, what each piece of the code does..
Thanks,
Lots of Luck,
Bruce
: ^ >
wrapper.__doc__ = wrapped.__doc__
wrapper.__name__ = wrapped.__name__
Before returning the wrapper looks more straightforward to me. (Also copy __module__ if you need to.) But maybe it's only me: I like to know what I'm doing =)
A little bit too quick towards the end (for my taste); I didn't know where to look first in order to retrace the shown, and thus didn't get it
great series thanks
The ShowMeDo Robot says - this video is now published, thanks for adding to ShowMeDo. | http://showmedo.com/videotutorials/video%3Fname%3D3370020 | CC-MAIN-2014-42 | refinedweb | 488 | 61.46 |
Export To Germany Hot Manual Tools Blank Stunt B B Gun
US $7.5-9.0 / Piece
1 Piece (Min. Order)
German toy guns
US $1.99-4.28 / Piece
5 Pieces (Min. Order)
pressure guns, air tool, inflator gauge, pump parts
US $3-3.8 / Piece
500 Pieces (Min. Order)
Best on sales very new type germany kraft hammer drills popular rivet gun
US $29.9-49.9 / Piece
1 Piece (Min. Order)
2017 Portable hand operate hot hand tools grease gun
US $45-55 / Set
100 Sets (Min. Order)
Adjustable Safety pencil Pocket Air dust Blow Gun, Aluminum
US $0.21-0.85 / Piece
500 Pieces (Min. Order)
top quality big volume large airflow rate dust cleaning gun
US $2.15-2.3 / Piece
500 Pieces (Min. Order)
german type air duster gun tools blister package DG-10B
US $1.0-1.0 / Bags
1000 Bags (Min. Order)
CE Cheap Polyurethane Foam Manufacturer Opp Bag Pistol Gun
US $2.85-3.0 / Piece
1 Piece (Min. Order)
45kgs 45L Bucket Grease Pump,bucket grease gun,greasebucket
US $45-55 / Set
100 Sets (Min. Order)
large airflow rate air blowing dust gun
US $2.15-2.3 / Piece
500 Pieces (Min. Order)
25PCS Air Blow Gun Kit
US $3.5-4 / Set
500 Sets (Min. Order)
12L LD-608 German/American Type Repeating Air operated Grease Gun,Hand Tools Oil Gun,Professional
US $45-55 / Set
100 Sets (Min. Order)
top quality big volume blowing dust gun large airflow rate dust cleaning gun
US $2.15-2.3 / Piece
500 Pieces (Min. Order)
import export colombia german paint manufacturers prona spray gun 5501
US $7.98-9.98 / Piece
500 Pieces (Min. Order)
German gun manufacturer with good quality TF-C007-C
US $0.99-2.99 / Piece
3000 Pieces (Min. Order)
Meso Injection Gun/Mesotherapy Gun Price/Mesotherapy for Sale
US $250-2500 / Unit
1 Unit (Min. Order)
High quality germany pressol hand grease gun
US $2.9-3.8 / Piece
1200 Pieces (Min. Order)
500cc Germany type high pressure heavy butter grease gun
US $4.3-4.3 / Pieces
200 Pieces (Min. Order)
Low Price Smart Plastic Screen Pen Gun from China
US $0.08-0.13 / Piece
2000 Pieces (Min. Order)
Instant-Read Temperature Gun baby forehead thermometer gun
US $1-10 / Piece
500 Pieces (Min. Order)
Germany style nylon cable tie fanstening tool 2.4-4.8mm cable tie tensioning tool LS-600A cable tie gun
US $4.7-5 / Piece
1 Piece (Min. Order)
Easy to use and Standard level MM 281 1/35 German 3 Isolation Gun B type with etching parts Plastic model at reasonable prices
US $500-1000 / Piece
10 Pieces (Min. Order)
High Quality Outdoor Waterproof Safety Military camouflage tape gun
US $0.4-2.04 / Roll
10 Rolls (Min. Order)
Professional Aluminum Metal PU Car Wash Polyurethane Foam Spray Gun
US $4.7-5.0 / Piece
1 Piece (Min. Order)
Germany suppliers low cost greenhouse with irrigation guns/snake hose
US $1.48-6 / Piece
100 Pieces (Min. Order)
Syringe Gun Fat Injection Gun
US $20-30 / Set
2 Sets (Min. Order)
germany hand pop rivet gun
US $1.1-1.4 / Piece
600 Pieces (Min. Order)
German different kinds paint spray gun
US $345-463.2 / Piece
1 Piece (Min. Order)
25ft 50f 75ft 100FT 2017 flexible made in china hose with high pressure water guns
US $0.5-1.28 / Piece
100 Pieces (Min. Order)
Most popular cheap pressol grease gun/industrial grease gun/pressol germany grease gun
US $2-4 / Piece
1000 Pieces (Min. Order)
Manufacturer Factory Directly Cheap Top Quality Customized 15/16 degree Coil Nail Gun CN90 for Germany market
US $3.3-20.9 / Box
100 Boxes (Min. Order)
Custom logo printed microfiber gun/silver/jewelry/floor/germany/sunglasses cleaning cloth in roll
US $0.01-0.3 / Piece
2000 Pieces (Min. Order)
Professional wholesale tria laser hair removal machine price/808nm diode laser hair removal with German laser gun
US $2200.0-2900.0 / Unit
1 Unit (Min. Order)
Ep 3# Lithium Plastic Gears Grease Oil Lubricant For German Grease Gun
US $1.0-2.0 / Kilogram
1 Kilogram (Min. Order)
baby body thermometer infrared gun
US $9-11 / Unit
100 Units (Min. Order)
Personalized metal Pens low price pen gun color
US $0.4-1.4 / Pieces
500 Pieces (Min. Order)
Heavy duty grease gun for high quality grease gun
US $45-55 / Set
100 Sets (Min. Order)
- About product and suppliers:
Alibaba.com offers 7,440 german gun manufacturer products. About 7% of these are tattoo gun, 2% are grease guns, and 2% are mesotherapy gun. A wide variety of german gun manufacturer options are available to you, such as free samples, paid samples. There are 7,411 german gun manufacturer suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Pakistan, and Germany, which supply 97%, 1%, and 1% of german gun manufacturer respectively. German gun manufacturer products are most popular in Domestic Market, North America, and Oceania. You can ensure product safety by selecting from certified suppliers, including 3,883 with ISO9001, 949 with Other, and 381 with ISO13485 certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show german gun manufacturer or other products of your own company? Display your Products FREE now! | http://www.alibaba.com/showroom/german-gun-manufacturer.html | CC-MAIN-2018-05 | refinedweb | 903 | 69.89 |
src/examples/life.c
Conway’s game of life
We use a cartesian grid and the generic time loop.
#include "grid/cartesian.h" #include "run.h"
We need two fields to store the old and new states as well as a field to store the age of each cell.
scalar a[], b[], age[];
The lower-left corner is at (-0.5,-0.5) (the default box size is one) i.e. the domain spans (-0.5,-0.5) (0.5,0.5) and is discretised using 256^2 cells.
The generic
run() function implements the main time loop.
We initialise zeros and ones randomly (the
noise() function returns random numbers between -1 and 1) in a circle centered on the origin of radius 2.
event init (i = 0) { foreach() { a[] = (x*x + y*y < sq(0.2))*(noise() > 0.); age[] = a[]; } boundary({a}); }
Animation
We generate images of the age field every 5 timesteps for the first 1000 timesteps of the evolution.
We mask out dead cells (i.e. cells for which
age is zero).
scalar m[]; foreach() m[] = age[] ? 1 : -1; output_ppm (age, mask = m, n = 512, file = "age.gif", opt = "--delay 1"); }
Game of life algorithm
We count the number of live neighbors in a 3x3 neighbourhood.
int neighbors = - a[]; for (int i = -1; i <= 1; i++) for (int j = -1; j <= 1; j++) neighbors += a[i,j];
If a cell is alive and surrounded by 2 or 3 neighbors it carries on living, otherwise it dies. If a cell is dead and surrounded by exactly 3 neighbors it becomes alive.
b[] = a[] ? (neighbors == 2 || neighbors == 3) : (neighbors == 3);
The age of live cells is incremented.
Here we swap the old state (
a) with the new state (
b).
swap (scalar, a, b); }
Evolution of the age of cells | http://basilisk.fr/src/examples/life.c | CC-MAIN-2021-49 | refinedweb | 300 | 83.76 |
There is at logic 0, a chip recognizes it will be receiving or sending data), a clock signal for clocking the serial data into the device, and the serial data stream itself.
Many hobbyists use microcontrollers such as the Arduino to control and use SPI devices. Oftentimes, you just want to test the electronic device to make sure it and its associated circuitry is working properly. This Instructables will show you how to set up and program a simple proto board circuit using the Arduino Uno to drive SPI data to a peripheral circuit which, in this case, is an Analog Devices AD7376 digital potentiometer. It could be any 8-bit SPI device using this circuit.
Step 1:
Assemble the Circuit: Place the Arduino board on the proto board in a convenient location. Next insert the DIP switches with all the pins on one side connected to ground (these DIP switches will provide the 8 bits of data that will be read by the Arduino and then sent serially to the AD7376). Wire digital pins 2 through 9 of the Arduino to the other side of the DIP switch, one wire per switch element
Step 2:
Next, look at the datasheet of the device you wish to drive to find its pin-outs. Then connect Arduino pin 13 to the SPI devices Clock pin, Arduino pin 11 to the SPI device’s SDI (Serial Data In) pin, and Arduino pin 10 to SPI device’s CS (Chip Select).
Step 3:
Next, you must set a few parameters on the SPI operation – whether the LSB (Least SIgnificant Bit) or MSB (Most Significant Bit) is sent first, if the data is clocked into the peripheral on a rising or falling edge of the clock, and if you want to slow the transmit speed of the data by reducing the frequency of the clock). This data is explained well on the Arduino site () and in Wikipedia so will not be covered here. For this example, the MSB is sent first, it is clocked on a rising edge.
Step 4:
Now it is time for the software. You must download the Arduino IDE (Integrated Development Environment) from. This development tool allows you to write code that can be uploaded to the Arduino board and then executed.Insert the following code into the Arduino IDE. Note the comments in the code which tell you what each part of the code is doing.
/*
******************************************************
* SPI test (Driving a Digital Potentiometer in this case)
*
* This module uses the Arduino SPI library (comes bundled with
* the Arduino IDE) to enable communication between an
* Arduino program and an SPI enabled peripheral chip.
*
* The routine reads in 8 bit values, stores the value
* in variable “pot”, and then sends “pot” out via the SPI.
* *
* The SPI library uses pin 13 of the Arduino Uno for clock.
* Serial data is sent out on pin 11.
*
* This routine uses pin 10 as the chip select for the
* SPI device to be programmed.
*
* Pins 2-9 are used to read in values of the 8 bits for the byte
* to be sent via SPI
******************************************************
#include <SPI.h> // Links prewritten SPI library into the code
void setup()
{
pinMode(2, INPUT); // Set pins 2-9 as inputs
pinMode(3, INPUT);
pinMode(4, INPUT);
pinMode(5, INPUT);
pinMode(6, INPUT);
pinMode(7, INPUT);
pinMode(8, INPUT);
pinMode(9, INPUT);
pinMode(10, OUTPUT); // Set SPI pins to be outputs
pinMode(11, OUTPUT);
pinMode(13, OUTPUT);
digitalWrite(2, HIGH); // Set Arduino pull-up resistors active
digitalWrite(3, HIGH); // This sets an internal to the chip
digitalWrite(4, HIGH); // pull-up resistor on so an unconnected pin
digitalWrite(5, HIGH); // is reliably at logic 1
digitalWrite(6, HIGH);
digitalWrite(7, HIGH);
digitalWrite(8, HIGH);
digitalWrite(9, HIGH);
digitalWrite(10, HIGH);
digitalWrite(11, HIGH);
digitalWrite(13, HIGH);
SPI.begin(); // Initialize SPI parameters
SPI.setBitOrder(MSBFIRST); // MSB to be sent first
SPI.setDataMode(SPI_MODE3); // Set for clock rising edge
SPI.setClockDivider(SPI_CLOCK_DIV64); // Set clock divider (optional)
// See Arduino site or Wikipedia for more info on these settings
}
void loop()
{
int val = 0; // “val” is a test variable used when reading pins
int j = 0; // “j” is a variable used in data read operation
byte pot = B00000000; // Zero all bits in byte “pot”
for (int i = 0; i < 8; i++) // Loop to read each of 8 input pins
{
j = i + 2; // Add 2 to loop count to match input pins
val = digitalRead(j); // Read appropriate pin
if(val == HIGH)
{
bitSet(pot, i); // Set appropriate bit to 1 based on loop count i
} // Otherwise leave it at 0
}
}
digitalWrite(10,LOW); // Drop SPI chip-select to 0 (Arduino pin 10)
SPI.transfer(pot); // Do SPI transfer of variable pot
digitalWrite(10,HIGH); // Raise chip-select to 1
delay(10000); // Delay loop 10 seconds (pick your time frame)
} // Data will be read and sent once every 10 seconds based on this | https://duino4projects.com/using-arduino-control-test-spi-electronic-device/ | CC-MAIN-2021-10 | refinedweb | 816 | 53.14 |
2. Smart Cutebot Samples for Python
2.1. Add Python File
Download to unzip it: EF_Produce_MicroPython-master Go to Python editor
We need to add Cutebot.py for programming. Click “Load/Save” and then click “Show Files (1)” to see more choices, click “Add file” to add Cutebot.py from the unzipped package of EF_Produce_MicroPython-master.
2.2. API
CUTEBOT(object)
Create an object.
set_motors_speed(self, left_wheel_speed: int, right_wheel_speed: int)
Set the speed of both wheels:
`left_wheel_speed: int` Speed of the left: -100~100 `right_wheel_speed: int` Speed of the right: -100~100
set_car_light(self, light: int, R: int, G: int, B: int)
Set the color of the headlights:
`light`:Choose the lights `R`:channel color-255` `G`:channel color-255` `B`:channel color-255`
get_distance(self, unit: int = 0)
Get the distance from the ultrasonic sound sensor:
`unit`detecting the distances:` 0 `cm,` 1 `lnch
get_tracking(self)
Get the status from the tracking headers:
return:`00` all in white `10` left in black and right in white `01` left in white and right in black `11` all in black
set_servo(self, servo, angle)
Choose the servos and set the angles/speed:
`servo (number)`choose the servos 1,2 `angle (number)`set the angles of the servo 0~180
2.3. Samples
Sample 1: Drive the car at a full speed.
from microbit import * from Cutebot import * ct = CUTEBOT() ct.set_motors_speed(100, 100)
Result
The speed of the left and right wheels is at 100, the car moves forward at the full speed.
Sample 2: Turn the headlights on
from microbit import * from Cutebot import * ct = CUTEBOT() ct.set_car_light(left, 0, 90, 90) ct.set_car_light(right, 200, 200, 0)
Result
The two headlights light up in different colours.
Sample 3: Obstacles avoidance
from microbit import * from Cutebot import * dis = CUTEBOT() while(True): i = dis.get_distance(0) if i>3 and i<20: dis.set_motors_speed(-50, 50) sleep(500) else: dis.set_motors_speed(50, 50)
Result
The Cutebot turns its direction once it detects any obstacle ahead of it.
Sample 4: Line-tracking
from microbit import * from Cutebot import * dis = CUTEBOT() while(True): i = dis.get_tracking() if i == 10: dis.set_motors_speed(10, 50) if i == 1: dis.set_motors_speed(50, 10) if i == 11: dis.set_motors_speed(25, 25)
Result
The Cutebot drives along with the black line.
Sample 5: Control the servo
from microbit import * from Cutebot import * dis = CUTEBOT() while(True): dis.set_servo(0,180) sleep(1000) dis.set_servo(0,0) sleep(1000)
Result
The servo connecting to S1 continues driving back and forth.
2.4. FAQ
About the reported error: | https://www.elecfreaks.com/learn-en/microbitKit/smart_cutebot/cutebot-python.html | CC-MAIN-2022-27 | refinedweb | 428 | 65.73 |
No Expansion Icon When No Children (Part 1)
By Geertjan on Mar 01, 2013
Sometimes you have Nodes with an expansion icon, i.e., the "plus" sign, even though the Object doesn't have Children. Once the user tries to expand the Node, the "plus" sign disappears, and no Children are shown. Kind of misleading because the user thought that there would be Children because the expansion icon was shown. Would be nicer if there were to be no expansion icon if the Object has no Children.
Here's how to solve that, via Children.createLazy:
public class MyNode extends AbstractNode { public MyNode(NodeKey key) { super(Children.createLazy(new MyCallable(key)), Lookups.singleton(key)); setDisplayName(key.toString()); } private static class MyCallable implements Callable<Children> { private final NodeKey key; private MyCallable(NodeKey key) { this.key = key; } @Override public Children call() throws Exception { //Check, somehow, that your key has children, //e.g., create "hasChildren" on the object //to look in the database to see whether //the object has children; //if it doesn't have children, return a leaf: if (!key.hasChildren()) { return Children.LEAF; } else { return Children.create(new MyChildFactory(key), true); } } } } | https://blogs.oracle.com/geertjan/date/20130301 | CC-MAIN-2014-15 | refinedweb | 190 | 59.4 |
We would like to leverage ACT to tier data from their HCP to an HCP belonging to 3rd party service provider. In order to accomplish this, the third party provider needs to create a Tenant and user account for us. It is my understanding that they also need to enable MAPI. However, this provider does not expose MAPI to the outside world.
So the question is: Is it possible to set up ACT to a 3rd party HCP without MAPI, and how?
Yes this is possible.
Because you are not enabling MAPI you will need to create the namespace on the Service Provider side and then choose the option to use an existing bucket when creating your storage component. That should be it though and you will be all set. | https://community.hitachivantara.com/thread/11244-act-to-3rd-party-without-mapi | CC-MAIN-2018-34 | refinedweb | 130 | 80.31 |
A beginning is a very delicate time.
Shout out to my boy Frank Herbert for that one.
On May 22nd I start the Iron Yard, and a new chapter of life. It was far past time to take the plunge and man, did I take it. No money, a bunch of debt, car broke down; that’s how the Cinderella Story of Tech starts, right? Sure, it can. I can also end up selling brooms to strangers with the blind guy in Broad Ripple. That man is a treasure by the way, and we could all learn a thing or two from him.
I’ve already learned a lot though, and some very important learnings they are. I don’t know who’s going to end up reading this, but if it’s someone starting a new endeavour like this, there are some things that it might help you to know.
First, you’re not going to school to learn how to code. Sure you’ll be banging out lines and files and projects, but you’re not learning how to do it. You’re just going to do it. What you’re learning how to do is think about problems and how to solve them, how to build a toolbox and how to know what to reach for. You’re learning how to think in a new language and once you’ve got that, you’ll just start speaking it. Look at this:
public class Main {
    public static void main(String[] args) {
    }
}
I don’t know what this does, exactly. I know why it’s there, I know you need it to write functional Java, but hell if I know what a
void or an
args is. In my CS 102 class in college tests were fill in the blank, definitions, know every granular piece of information about every keyword and replicate, regurgitate. No wonder I got a lot of sleep during class.
To me, coding is isn’t a puzzle with a pile rigidly-defined pieces (there’s some of that but not exclusively, or even importantly), but it is a puzzle that needs solving, in the abstract. It’s a conversation with an idea in your head. You have phrases, expressions, methods of communicating your idea to a computer that need to be plugged in and manipulated to make your idea make sense. You wouldn’t cut pieces to fit the picture on your coffee table or use a piece from a different puzzle, but you kind of can with code, so I don’t think of this discipline as learning a programming language, but as learning the language of programming. Nailing down the rules of syntax and grammar come after learning how to think.
Michael Hartl of learnenough.com calls this technical sophistication, knowing how to think about problems, how to evaluate possible solutions, and sure, ultimately how to implement them, but definition and implementation are secondary to being able to communicate with your idea and find the right ways to translate it.
Next, this will be both easier and harder than you think. I’m 31(?). My neuroplasticity is shot. Making fundamental changes to how you approach problem-solving and learning to think like an engineer is going to be like backing an elephant up a flight of stairs. That said, the changes cascade. It’s one of those things where “it’ll just click” is a real thing. It’s why programs like the Iron Yard take a lot of pride in saying that they can take a 45 year old housewife and turn her into a software developer. There aren’t a lot of secrets in programming, it just really looks like that when you look at a wall of code that looks like a seizure made out of letters and numbers.
The hard part is going to be you, being willing to be malleable and let yourself think in ways you've never had to, and to understand that if you're not getting something, it's not because you're bad at it or incapable, but that you're fighting against X number of years of linear thinking that built some damn high walls in your brain. That wall's not coming down because you strike it once; trust your mind to adapt as you put time and effort into telling it to think this way, not that way.
Finally, don’t be afraid to use your resources, and you’ve got a hell of a lot of resources. It’s a shoulders-of-giants kind of thing: everything you want to do has been done, probably in about ten different ways. Stack Overflow, CodePen, w3schools, instructors, classmates, and a thousand different other things are out there for you. You’ll lean on these a lot even as a professional, so it behooves you to learn how to navigate them now, add to that technical sophistication we were talking about. I mean that’s cheating, but it’s not cheating. You’re not cheating, you’re fine.
Really though, there’s no reason to think that you need to figure all of this out on your own. Software development is a community effort and the name of the game is effecient use of resources. Don’t be afraid to crib a bit from the efforts of others; sometimes it’s alright to know that something works before you know why, especially when you’re learning. Making something work and figuring out how after you’ve seen it go can be a powerful educational experience.
Much of this is opinion and inference and I expect it to be challenged both by other people and by my own experience, but after I adjusted my perception of code school and how coding works as a discipline it feels a lot more manageable. I hope all this sticks for me and if you're reading this and just starting something new as well, I hope it helps you, too.
A User's Look at the Cocoon Architecture
- The Cocoon Architecture in Detail
- Advanced Sitemap Features
- Using the Command-Line Interface
- Practical Examples and Tips
- Wrapping Up the User Perspective
In Chapter 4, "Putting Cocoon to Work," you saw a simplified view of the Cocoon architecture. You built a first version of a news portal in Chapter 5, "Cocoon News Portal: Entry Version." Now that we have gone over the basics, it is time to fill in the missing pieces from a user perspective. This chapter presents additional Cocoon components and concepts you can use to build more advanced applications than the ones you have seen so far.
We will start by describing the architecture and further features of the sitemap in detail. A Cocoon-based application can become quite large. The sitemap becomes more complicated to manage as you add new pipelines. We will show you how to organize an application's structure so that it is easier to maintain. New components allow you to connect your Cocoon-based application to a database and diagnose what might be going wrong if something does not work as planned. We will also explain how Cocoon can be used without running it in a servlet engine and give some practical tips on how to tune an installation for maximum performance.
The Cocoon Architecture in Detail
Before we begin, let's look at a figure that gives an overview of the Cocoon architecture. It might help you to refer to Figure 6.1 when reading about the individual building blocks that make up Cocoon in the following sections. This figure is actually a simplified view of the architecture, because the dependencies of the components contained in Cocoon are more complicated than this figure shows. We will get into more detail as we progress through this book. Imagine that each chapter is a layer of Cocoon that you are slowly peeling away to see more and more of what is inside.
Cocoon is made up of several blocks of functionality. Starting at the top of Figure 6.1, you see Cocoon integrated into a servlet engine. This can be a standalone servlet engine, such as Apache Tomcat, or part of an application server, such as IBM WebSphere.
The Cocoon framework forms the envelope around the component-based architecture, including the different Cocoon components, such as generators and transformers, that can be used to build document pipelines, the XML and XSLT components, and any custom components built for a specific application.
As you can see from the figure, each block in the Cocoon architecture has its own configuration file. Until now, we have only talked about the central Cocoon configuration file: the sitemap. The additional configuration files we will look at in this chapter are also important, because they allow you to define and configure various aspects of a Cocoon-based application, such as how a running Cocoon should react to changes in the sitemap or whether Cocoon should cache pipelines. In general, you will need to alter something in these configuration files only when development of the application is finished and you are ready to put it into a production environment.
Figure 6.1 The big picture of Cocoon.
Cocoon is a component-based system. As such, it uses parts of Avalon, a major Apache project for component-based Java architectures. Apart from Avalon component management, Cocoon also integrates the Avalon logging architecture, as shown at the bottom of Figure 6.1.
Avalon Integrated into Cocoon
In addition to including actual software components that can be used in an application, Avalon provides a set of rules and Java interfaces that are used in Cocoon to configure components. For example, Avalon allows components to be reused via a pooling mechanism. Therefore, Avalon provides components to manage these pools and also defines how a component should be written so that it can be pooled. Cocoon components then implement these interfaces.
The Avalon project is divided into several subprojects. However, not all the subprojects are used in Cocoon. The following is a list of subprojects that are used:
The Avalon LogKit. A Java-based logging API. This logging functionality is used throughout all the Avalon-based projects and inside Cocoon. The logging configuration is very flexible, as you will see.
The Avalon Framework. The base of Avalon. It defines several concepts and interfaces for component development in Java. It defines the basics of defining, configuring, and managing software components and how to use them.
The Avalon Excalibur project. Layered on top of the Avalon Framework. It implements common reusable components and offers some component management facilities to fine-tune your installation.
This chapter looks at the possibilities Avalon provides in the context of how they are actually used inside Cocoon. For example, when we talk about logging, we give tips on how to optimize the performance of a Cocoon application. Also, for a more detailed overview of Avalon, see Chapter 8, "A Developer's Look at the Cocoon Architecture."
First, however, we'll start our configuration tour of Cocoon with the configuration file read by the servlet engine when Cocoon is started.
The Web Application Configuration
When Cocoon runs as a servlet, the servlet engine processes a configuration file during the startup phase. The servlet engine reads the web application deployment descriptor (which is located at WEB-INF/web.xml in your Cocoon context directory) and uses the parameters in this file to perform the initial configuration of Cocoon.
The web.xml file contains the startup configuration that is required to get the system running. The most important piece of information is the location of the configuration file for the Avalon-based Cocoon components. In Listing 6.1, which is a snippet from a web.xml file, the name and location of the configuration file are entered as parameters inside the init-param tag.
Listing 6.1 The Avalon Configuration Location in web.xml
<!-- This parameter points to the main configuration file for Cocoon.
     Note that the path is specified in absolute notation but it will be
     resolved relative to the servlets webapp context path -->
<init-param>
  <param-name>configurations</param-name>
  <param-value>/cocoon.xconf</param-value>
</init-param>
In a default installation of Cocoon, this file is called cocoon.xconf and is located in the Cocoon context directory. You have probably already seen it when looking for the sitemap, which is also located there by default. The cocoon.xconf file is an XML file that contains a description of the used Avalon components for Cocoon and their configuration. Configuring the name and location of this file inside web.xml allows you to choose your own name and location for the file if you wish. However, we recommend that you leave the defaults as is. From now on we will refer to this file simply as cocoon.xconf, regardless of where you place it and what name you choose.
Although the sitemap components, such as transformers and generators, are also Avalon-based components, they are not listed inside cocoon.xconf. They are listed inside the sitemap, as you saw in Chapter 4. This means that a site administrator building a Cocoon-based application does not need to know about cocoon.xconf. When designing an application, it is easier to reference only one file instead of having to view several files at once. cocoon.xconf will become important when you want to fine-tune the installation or replace any of the default components, such as the XML parser.
Configuring Components in cocoon.xconf
One of Cocoon's advantages is that it forms a flexible framework around other components that come from different projects, such as those hosted by Apache. For example, instead of being able to use only a specific XML parser, Cocoon allows you to choose which actual implementation you might want to use by allowing these components to be configured via cocoon.xconf. In addition, cocoon.xconf can be used to pass parameters to the components so that different aspects can be configured. Listing 6.2 is a brief excerpt from cocoon.xconf that shows the basics of this configuration.
Listing 6.2 An Excerpt from cocoon.xconf
<?xml version="1.0"?>
<cocoon version="2.0">
  <parser class="org.apache.cocoon.components.parser.XercesParser"/>
  <hsqldb-server class="org.apache.cocoon.components.hsqldb.ServerImpl">
    <parameter name="port" value="9002"/>
    <parameter name="silent" value="true"/>
    <parameter name="trace" value="false"/>
  </hsqldb-server>
  ...
</cocoon>
Unlike the sitemap, cocoon.xconf does not use a namespace. Each component you want to configure is defined inside the root element called cocoon using its own specific element. Listing 6.2 has two configured components: parser and hsqldb-server. These are the logical names under which Cocoon looks for a concrete implementation. The actual Java class that then implements the expected functionality is configured via the class attribute. As you can see from Listing 6.2, the default parser is the Xerces Parser from Apache. Apart from allowing different implementations to be used, cocoon.xconf allows the components to be configured using individual parameter tags. Each parameter tag consists of a name and value attribute. This lets you pass information such as the port number to the configured database. HSQLDB is an open-source database that is included in the Cocoon distribution. It is used in the practical database examples later in this chapter. We will also discuss the attributes pool-max and pool-min when we look at ways to optimize Cocoon's performance.
If you change something inside cocoon.xconf, these changes are not reflected automatically. To apply the changes, you have to reinstantiate Cocoon. One way of doing this is by restarting your servlet engine. However, this is not always an ideal solution, because you will affect other servlets also currently running in the same servlet engine. It might also take some time for the engine to restart.
Fortunately, Cocoon provides another way to force the reload of cocoon.xconf. You can directly request the root node where Cocoon is mounted (such as http://localhost:8080/cocoon/) and then add the request parameter cocoon-reload with the value true. The whole URL looks like this: http://localhost:8080/cocoon/?cocoon-reload=true
This restarts Cocoon with the changed cocoon.xconf.
Because restarting can be a time-consuming process, you should avoid it in a production environment. You can turn off this feature by setting the parameter allow-reload in the web application deployment descriptor (web.xml) to no. The default for this setting is yes, as shown in Listing 6.3.
Listing 6.3 Allowing Cocoon Reloading in web.xml
<!-- Allow reinstantiating (reloading) of the cocoon instance.
     If this is set to "yes" or "true", a new cocoon instance can be
     created using the request parameter "cocoon-reload". -->
<init-param>
  <param-name>allow-reload</param-name>
  <param-value>yes</param-value>
</init-param>
Remember, this parameter is not in cocoon.xconf. It is in the web.xml file used to control certain settings for a servlet. This parameter should be set to no in a production environment, because the default allows anyone to start the reloading of your Cocoon installation by accessing the URL just listed. If someone were to abuse this, Cocoon would spend all its time reloading the configuration files, which would prevent any other activity.
In addition to component configuration, another important piece of information contained in cocoon.xconf is the location of the sitemap. The last line of cocoon.xconf looks like this:

<sitemap file="sitemap.xmap" check-reload="yes" reload-method="asynchron"/>
This definition tells Cocoon where to look for the main sitemap and how to handle its reloading. Although you can change the file attribute by entering a different location and name, we have never needed to change this setting. So we recommend that you do not change it either.
As you might have noticed during your first steps with Cocoon, changes made to the sitemap are automatically reflected after some time without a restart of your servlet engine being necessary.
When configured appropriately, Cocoon occasionally checks the sitemap for changes. Each time a change is detected, the old sitemap is discarded and the new one is used. Cocoon detects this change using the last modification date, which is automatically set by the operating system for a file when it is saved. So even if you do not change the sitemap but save it unchanged, Cocoon assumes that it has changed and reloads it.
As explained in Chapter 4, a servlet can act only on an incoming request. So Cocoon can check for changes only when a request for a document is received. The automatic reloading can be done in a synchronous or asynchronous manner. You can set this reload method by specifying either synchron or asynchron in the attribute reload-method in cocoon.xconf for the sitemap location. The default is asynchron. (Note that this is the correct way to write these parameters: without "ous" on the end.)
In synchronous mode, the new sitemap is generated in memory from the configuration file. After this process is finished, it is used and the request is served with this new sitemap.
In asynchronous mode, the new sitemap is generated in the background, and the incoming request is served by the old sitemap. All further requests are then processed by the old sitemap until the generation is finished. From that time on, all documents are generated using the new sitemap.
Synchronous mode is very useful when you develop your application, because each change to the sitemap is reflected immediately. Asynchronous mode is more useful for a production environment in which the sitemap changes very rarely.
Although the automatic reloading of the sitemap seems to be a very useful feature, it has potential dangers. Assume that you change the sitemap to an invalid state, either by creating invalid XML or by making some other mistake that prevents Cocoon from being able to create the sitemap. The next request enters Cocoon, and the sitemap generation process is triggered.
In synchronous mode, the sitemap is generated immediately, but it fails due to the error you made beforehand. So you get a Cocoon error page, because Cocoon cannot process your request. The whole Cocoon installation is "dead" until you correct the error.
In asynchronous mode, the situation is even worse. When the request comes in, the sitemap generation process is started in the background. The current request and all further requests are processed by the old sitemap. The generation of the new sitemap fails because of the error. All further requests are then served using the old sitemap. If the changes made to the sitemap were only slight, it might take a while before anyone realizes that the old sitemap is still being used.
Cocoon provides a parameter that allows you to control whether the sitemap should be checked and reloaded. You can prevent Cocoon from reloading the sitemap by setting the attribute check-reload in cocoon.xconf to false. If you use the default, the sitemap is checked for reloading.
But what if you really changed the sitemap and you made a mistake? The first thing to do is check whether your sitemap still contains well-formed XML, so load it into your favorite XML editor and check this. If it is well-formed but still does not work, you should use the logging facilities in Cocoon to find any error you might have made.
LogKit Configuration
Cocoon is based on the Avalon logging facilities, which are very flexible and powerful. You can configure details about what should be logged and what should be done with the log messages.
Cocoon has five log levels:
DEBUG
INFO
WARNING
ERROR
FATAL_ERROR
Each component sends out log messages at one of these five levels. The LogKit then decides what should be done with this message.
Using the configuration, you can decide that only certain levels should really be logged to a file. For production sites, you will usually want to log only messages with a level of ERROR or FATAL_ERROR. In contrast, when developing your application, you will always want to see all levels. Because of the ordering of the different levels, each level contains all the following levels. Therefore, setting the level to DEBUG results in all messages being logged. Setting the level to WARNING results in all messages with a level of WARNING, ERROR, or FATAL_ERROR being logged.
The first thing you have to configure, however, is where Cocoon can find the LogKit configuration. This is done by another parameter in the web application deployment descriptor (web.xml), as shown in Listing 6.4.
Listing 6.4 The Location of the LogKit Configuration in the Web Application Deployment Descriptor
<!-- This parameter indicates the configuration file of the LogKit management --> <init-param> <param-name>logkit-config</param-name> <param-value>/WEB-INF/logkit.xconf</param-value> </init-param>
The standard place for the LogKit configuration is WEB-INF/logkit.xconf inside your Cocoon context directory. This configuration file is an XML document that describes the LogKit configuration. Listing 6.5 is a simple example.
Listing 6.5 An Excerpt from the LogKit Configuration

<logkit>
  <factories>
    <factory type="priority-filter"
             class="org.apache.avalon.excalibur.logger.factory.PriorityFilterTargetFactory"/>
    <factory type="servlet"
             class="org.apache.avalon.excalibur.logger.factory.ServletTargetFactory"/>
    <factory type="cocoon"
             class="org.apache.cocoon.util.log.CocoonTargetFactory"/>
  </factories>
  <targets>
    <cocoon id="cocoon">
      <filename>${context-root}/WEB-INF/logs/cocoon.log</filename>
      <format type="cocoon">
        %7.7{priority} %{time} [%8.8{category}] (%{uri}): %{message}\n%{throwable}
      </format>
    </cocoon>
    <priority-filter id="filter" log-level="ERROR">
      <servlet>
        <format type="cocoon">
          %7.7{priority} %{time} [%8.8{category}] (%{uri}): %{message}\n%{throwable}
        </format>
      </servlet>
    </priority-filter>
  </targets>
  <categories>
    <category name="cocoon" log-level="DEBUG">
      <log-target id-ref="cocoon"/>
      <log-target id-ref="filter"/>
    </category>
  </categories>
</logkit>
The first part of the configuration file deals with factories for the logging targets. Factories are used inside component-based architectures to allow the flexible creation of components. They remove the need to "hard-wire" specific implementations into the system. You can compare this part of the configuration file with the components section of the sitemap, where you define the available generators, transformers, and so on.
These factories define components that are to receive the log events. In this example, the cocoon factory writes log events to a file. The servlet factory logs into the servlet log, and the priority-filter filters events.
These factories are then used in the targets section to instantiate real targets. When the cocoon target is instantiated, it receives the location of the log file (the filename tag) and in what format (the format tag) the log messages should be written.
The third part of the configuration is the categories section. Each component inside Cocoon can log into different categories. Usually they all log into the root category, which is also called cocoon.
So the LogKit configuration defines this category. A category gets a log level and a set of targets. All log events with this log level (or above) are sent to all the targets. So, in this example, all log events with DEBUG or higher are sent to a target called cocoon (logging into a file) and a target called filter.
This "filter" uses the priority filter to filter the log events. In this configuration, the filter discards all messages that do not have the level ERROR or FATAL_ERROR. Messages with one of these two levels are sent to the servlet target. So they are logged into the servlet log as well.
As you can see from this example, even a simple LogKit configuration can get very complex (and therefore complicated). But in most cases, it is sufficient to change the used log level. You can do this simply by changing the log-level attribute of the cocoon category. When you use a file-based configuration like this, you also can add new targets and categories without changing the code.
In case of a problem, you should have a look at the log file and see if you can find any description of the problem in the file. If the log level is not DEBUG, you should switch it. But be careful: A change to the log level (or any other change in the LogKit configuration) is not reflected immediately. You need to reinstantiate Cocoon in order for this to happen. You can force this by specifying the parameter cocoon-reload or by changing cocoon.xconf.
Changing the level to DEBUG causes the log file to become very large. Logging is also quite a time-consuming process, so you will want to set the level as low as possible (such as to ERROR) in a production environment.
How Requests Are Processed Inside Cocoon
Whenever a request for a document is sent to Cocoon, the root sitemap is taken to respond to the request. The pipelines section of the root sitemap is then processed top-down. All map:pipeline sections marked as internal-only using the attribute internal-only are skipped. The process follows the steps described next. For the moment, we will neglect the views (they are explained in a separate section), because they would only confuse this description:
If a match directive is found, the matcher tests a value against a given pattern. If the value matches, the directives inside the matcher are executed next, and possible values from the matcher can be used by specific keys. If the value does not match, the next directive on the same level is executed next.
If an action directive is found, the action is executed immediately. If the action returns keys for value substitution, the directives inside the action are executed next. If no keys are provided, the directive on the same level is next.
If a selector directive is found, the selector performs the various test cases from top to bottom. When the value is equivalent to the first test case, the directives inside this case are executed next, and all others are ignored. If no test case matches, the default case (if it's available) defines the next directives to execute.
If a generator directive is found, it builds the starting point for the XML processing pipeline. The next directive on the same level is executed. The generator is not yet started.
If a transformer directive is found, the transformer is added at the end of the XML processing pipeline, but it is not executed yet. Then the next directive on the same level is executed.
If a serializer directive is found, the serializer builds the end of the XML processing pipeline, and the buildup pipeline is executed. The generator feeds its XML through the various transformers. The serializer produces the document, and the processing is finished.
If a reader directive is found, the reader delivers the document, and the processing is finished.
If a redirect occurs, the processing is stopped. If the redirect points to a sitemap resource, it is processed. If the redirect is an external link, the client is redirected to it. If the link is internal, a new request is processed by Cocoon, starting at the main sitemap.
If a mount for a subsitemap is found, the processing is passed on to the subsitemap. When the subsitemap processing is finished, the document is processed.
If a content aggregation directive is found, this special generator is added as the starting point of the XML processing pipeline.
If an error occurs, the error handler of the current map:pipeline is called.
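To make these steps concrete, here is a minimal pipeline of the kind the rules above would walk through (a sketch only; the match pattern and file names are illustrative):

<map:pipeline>
  <map:match pattern="news/*.html">
    <map:generate src="docs/news/{1}.xml"/>
    <map:transform src="stylesheets/news2html.xsl"/>
    <map:serialize type="html"/>
  </map:match>
</map:pipeline>

The matcher is executed immediately; the generator, transformer, and serializer are only chained up, and the pipeline runs once the serializer completes it.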
As you can see from this flow description, actions, matchers, and selectors are executed immediately when the sitemap is processed. The same applies for a reader.
But generators, transformers, and serializers are not executed immediately. They are chained to build the processing pipeline. Only when this pipeline is complete (when a serializer is added) is the whole pipeline executed.
Because the XML is processed in this created pipeline, all other sitemap components not chained in this pipeline have no access to the XML. Thus, an action, matcher, or selector cannot be influenced by this XML, nor can they influence it.
Cocoon distinguishes between two pipeline types: the event pipeline and the stream pipeline. As the name implies, the event pipeline deals with SAX events. It consists of the usual XML processing pipeline (generator and transformers) without the serializer. A stream pipeline streams the final document to the client. It consists of only a reader or of an event pipeline in combination with a serializer.
For a Cocoon user, this information is important to know in order to understand caching (which we will explain later) and the cocoon protocol.
The cocoon protocol invokes an internal request to the sitemap. The resulting document can be used, for example, as the input for a generator or transformer or for content aggregation. All these components require XML. The generator reads produced XML, the xslt transformer uses stylesheets, and the content aggregation aggregates XML documents and generates from these documents one XML document.
But the cocoon protocol calls an arbitrary pipeline, which has a serializer at the end. It could, in the best case, return XML as a stream of characters or, even worse, HTML or any other format. How does this work? As you might guess, the answer is the event pipeline.
Whenever the cocoon protocol is used, only the event pipeline is built. Remember, the event pipeline is the XML processing pipeline without the serializer. So the event pipeline directly outputs XML as SAX events. Therefore, all components requiring XML can very easily use the cocoon protocol. Obviously, the cocoon protocol must not point to a pipeline using a reader.
Now let's get on with explaining these mysterious SAX events in detail.
SAX Event Handling
XML pipelines also work internally with the SAX model. Therefore, a generator sends SAX events to the following component in the pipeline. This component sends SAX events to the next one, and so on until the final serializer gets the final SAX events, serializes them, and creates the output document.
It might seem unimportant to a Cocoon user that the SAX model is used, but it has an impact on how pipelines must be built. SAX events have only one direction: from top to bottom, if you think about how they are written in the sitemap. It is not possible to send SAX events back up the pipeline.
A transformer transforms the incoming XML stream. There are two possible categories of transformers. In the first one, a transformer transforms the document as a whole, like the xslt transformer does. The stylesheet for the xslt transformer contains all the information for each node in the XML document.
The other category is a transformer that listens for specific XML elements that it will transform. For example, the sql transformer waits for special elements that set the SQL connection and the SQL query. All other elements surrounding the SQL statements are ignored. By ignored, we mean that they are passed unchanged from the sql transformer to the next component in the pipeline, as shown in Figure 6.2.
In order to get the sql transformer working, the incoming SAX events of the previous component in the pipeline (perhaps the generator) must contain those special elements for the sql transformer. So this is the first simple rule: If a component is listening for specific information, that information must be provided by a previous component in the pipeline.
There are more transformers that act like the sql transformer. The ldap transformer is another example of a transformer that reacts to special tags. It listens for some elements and then queries an LDAP system. If you want to build complex pipelines that have more than one transformer of this category, you have to think carefully about what you really want to do.
Imagine that you want to read an XML document from the local hard drive. This XML document contains information for the sql transformer. The sql transformer fetches data from the database that is then fed into the ldap transformer.
From these requirements, you should be able to build up your XML document. It should look like Listing 6.6.
Figure 6.2 SAX event handling.
Listing 6.6 An Example of Dependent Components
<?xml version="1.0"?>
<document>
  <LDAP>
    <LDAP-INFORMATION>
      <SQL>
        <SQL-INFORMATION/>
      </SQL>
    </LDAP-INFORMATION>
  </LDAP>
</document>
The information for the sql transformer is surrounded by the elements for the ldap transformer. Because the fetched data is the input for the LDAP query, it must be contained inside the LDAP elements.
In order to make the example work, you have to define your pipeline according to your XML document. As the ldap transformer waits for information from the sql transformer, the pipeline should look like Listing 6.7.
Listing 6.7 A Pipeline of Dependent Components
<map:generate src="document.xml"/>
<map:transform type="sql"/>
<map:transform type="ldap"/>
<map:serialize/>
The sql transformer needs to come before the ldap transformer. Why is this so? The answer lies in the SAX events. As mentioned, SAX events are sent in only one direction. The ldap transformer needs information from the sql transformer, so the SQL query must be done first.
If you put the sql transformer after the ldap transformer, the statements and elements for the sql transformer would be directly used as the information for the ldap transformer. This LDAP query would then fail, and the sql transformer would never get its information.
So the second important rule is this: When building pipelines, you need to be aware of the events or data flow. In other words, you need to know the dependencies between your transformation steps. For example, if transformer A needs information from transformer B, you have to put transformer B before transformer A in the pipeline, and the elements for transformer B must be nested inside those for transformer A.
Of course, you need not stick to this simple rule. In some cases, the information delivered from one transformer cannot be used directly by another transformer. Then you should use intermediate stylesheet transformation, which converts the data of the first transformer to usable input for the second transformer.
In the preceding example, the order of the components in the pipeline would still be the same, but you could then add a stylesheet transformation between the sql transformer and the ldap transformer stage. This stylesheet would convert the response from the sql transformer into a suitable request for the ldap transformer.
Using an intermediate stylesheet is very important if you have circular dependencies. Imagine a pipeline in which you first have a SQL query, and then a dependent LDAP query, and after that a second SQL query that needs information from the LDAP transformation.
The simple approach shown in Figure 6.3 will not work. If you follow the rule we set up, you would build the structure of the commands as set out in the first block at the beginning of the chain in the figure: first the outer tags for the last sql transformer, and then the tags for the ldap transformer, and inside them the tags for the first sql transformer. However, because a sql transformer is in front of the ldap transformer, the last sql transformer never receives any of its commands, because the first sql transformer will have already processed them. There is no way to tell each sql transformer which SQL tags are for the first transformer and which are for the second.
The only solution that works in a case like this is to use an intermediate stylesheet, as shown in Figure 6.4.
The starting document containing the commands must then contain only the LDAP query with the nested SQL query for the first sql transformer. After the ldap transformer in the pipeline, you need a stylesheet transformation, which adds the SQL statement for the last sql transformer around the data fetched from the LDAP query. This can then be processed by the following sql transformer.
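Such an intermediate stylesheet can be quite small. The following sketch shows the idea; note that the element names (ldap-result, uid, sql-query) are invented for illustration, since the real tag names depend on the namespaces the two transformers listen for:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Copy everything through unchanged... -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- ...but wrap each value fetched by the LDAP query in a new
       query for the second sql transformer. -->
  <xsl:template match="ldap-result/uid">
    <sql-query>
      SELECT * FROM users WHERE uid = '<xsl:value-of select="."/>'
    </sql-query>
  </xsl:template>
</xsl:stylesheet>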
Figure 6.3 Incorrect chaining of dependent transformers.
Figure 6.4 Using an intermediate stylesheet.
As you can see from the example that uses transformers and intermediate stylesheets, pipelines can get quite complicated. You need to be aware of how things work in order to build your pipeline. However, in our experience with Cocoon, we have very rarely had such complex dependencies. It is more often the case that you need more than two transformers, but they are not dependent, so you do not need an intermediate stylesheet transformation.
This section introduced the additional files that control how Cocoon is configured. It also showed you how components in Cocoon can receive parameters through these configuration files. Cocoon components are based on design principles set out by the Apache project Avalon. Cocoon also uses the Avalon logging mechanism. We also looked at how a request is processed inside Cocoon and how the XML tags are sent through a pipeline as SAX events. After taking a user's look at the various configuration files, we can now return to the sitemap, which is the most important configuration file from a user perspective. We will look at the features not already explained in Chapter 4. | http://www.peachpit.com/articles/article.aspx?p=30037 | CC-MAIN-2019-35 | refinedweb | 5,546 | 56.25 |
I am learning functions and I am having trouble with some local variables. I keep getting "warning C4700: uninitialized local variable 'y' used", and the same for the variable x. Are my loops in combination with cin >> messing up the code? I am pretty new to programming; any help would be greatly appreciated. Here's the code:
Code:
#include <iostream>
using namespace std;

//function prototype
double calculateRetail(double, double);

int main()
{
    double x, y, z;

    while(x < 0)
    {
        cout << "This progam calculates and displays an items retail price. Enter the whole sale price...\n";
        cin >> x;
    }

    while(y < 0)
    {
        cout << "Enter the markup percentage...\n";
        cin >> y;
    }

    z = calculateRetail(x, y);
    cout << z << " is the retail price.\n";

    return 0;
}

double calculateRetail(double x, double y)
{
    return x * y * .01;
}
Finding the maximum power of a photovoltaic device.
Posted April 15, 2014 at 08:38 PM | categories: optimization, python | tags: | View Comments
Updated April 04, 2016 at 11:54 AM
A photovoltaic device is characterized by a current-voltage relationship. Let us say, for argument's sake, that the relationship is known and defined by
\(i = 0.5 - 0.5 * V^2\)
The voltage is highest when the current is equal to zero, but of course then you get no power. The current is highest when the voltage is zero, i.e. short-circuited, but there is again no power. We seek the highest power condition, which is to find the maximum of \(i V\). This is a constrained optimization. We solve it by creating an objective function that returns the negative of \(i V\), and then finding the minimum.
First, let us examine the i-V relationship.
import matplotlib.pyplot as plt
import numpy as np

V = np.linspace(0, 1)

def i(V):
    return 0.5 - 0.5 * V**2

plt.figure()
plt.plot(V, i(V))
plt.savefig('images/iV.png')
<matplotlib.figure.Figure object at 0x11193ec18> [<matplotlib.lines.Line2D object at 0x111d43668>]
Now, let us be sure there is a maximum in power.
import matplotlib.pyplot as plt
import numpy as np

V = np.linspace(0, 1)

def i(V):
    return 0.5 - 0.5 * V**2

plt.plot(V, i(V) * V)
plt.savefig('images/P1.png')
[<matplotlib.lines.Line2D object at 0x111d437f0>]
You can see that there is in fact a maximum, near V=0.6. We could solve this problem analytically by taking the appropriate derivative and solving it for zero. That still might require solving a nonlinear problem though. We will directly set up and solve the constrained optimization.
from scipy.optimize import fmin_slsqp
import numpy as np
import matplotlib.pyplot as plt

def objective(X):
    i, V = X
    return - i * V

def eqc(X):
    'equality constraint'
    i, V = X
    return (0.5 - 0.5 * V**2) - i

X0 = [0.2, 0.6]
X = fmin_slsqp(objective, X0, eqcons=[eqc])
imax, Vmax = X

V = np.linspace(0, 1)

def i(V):
    return 0.5 - 0.5 * V**2

plt.plot(V, i(V), Vmax, imax, 'ro')
plt.savefig('images/P2.png')
Optimization terminated successfully. (Exit mode 0) Current function value: -0.192450127337 Iterations: 5 Function evaluations: 20 Gradient evaluations: 5 [<matplotlib.lines.Line2D object at 0x111946470>, <matplotlib.lines.Line2D object at 0x11192c518>]
You can see the maximum power is approximately 0.2 (unspecified units), at the conditions indicated by the red dot in the figure above.
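As a quick analytic check: \(P = V (0.5 - 0.5 V^2)\), so \(dP/dV = 0.5 - 1.5 V^2 = 0\) gives \(V = 1/\sqrt{3} \approx 0.577\), \(i = 1/3\), and \(P = 1/(3\sqrt{3}) \approx 0.1925\), which agrees with the numerical solution above.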
Copyright (C) 2016 by John Kitchin. See the License for information about copying.
Org-mode version = 8.2.10 | http://kitchingroup.cheme.cmu.edu/blog/2014/04/15/Finding-the-maximum-power-of-a-photovoltaic-device/ | CC-MAIN-2018-30 | refinedweb | 450 | 62.04 |
Syntax:
#include <vector> TYPE& front(); const TYPE& front() const;
The front() function returns a reference to the first element of the vector, and runs in constant time.
For example, the following code uses a vector and the sort() algorithm to display the first word (in alphabetical order) entered by a user:
vector<string> words;
string str;

while( cin >> str )
  words.push_back(str);

sort( words.begin(), words.end() );

cout << "In alphabetical order, the first word is '"
     << words.front() << "'." << endl;
When provided with this input:
now is the time for all good men to come to the aid of their country
…the above code displays:
In alphabetical order, the first word is 'aid'. | http://www.cppreference.com/wiki/stl/vector/front | crawl-002 | refinedweb | 111 | 58.82 |
IRC log of xproc on 2010-12-16
Timestamps are in UTC.
15:57:25 [RRSAgent]
RRSAgent has joined #xproc
15:57:25 [RRSAgent]
logging to
15:57:27 [Norm]
zakim, this is xproc
15:57:27 [Zakim]
ok, Norm; that matches XML_PMWG()11:00AM
15:57:30 [ht]
ht has joined #xproc
15:57:48 [ht]
zakim, code?
15:57:48 [Zakim]
the conference code is 97762 (tel:+1.617.761.6200 tel:+33.4.26.46.79.03 tel:+44.203.318.0479), ht
15:58:44 [Zakim]
+??P26
15:58:54 [ht]
zakim, ? is me
15:58:54 [Zakim]
+ht; got it
15:59:51 [Vojtech]
Vojtech has joined #xproc
16:00:33 [Zakim]
+Alex_Milows
16:00:37 [Zakim]
+[ArborText]
16:00:57 [Norm]
zakim, who's here?
16:00:57 [Zakim]
On the phone I see +1.413.624.aaaa, ht, Alex_Milows, PGrosso
16:00:58 [Zakim]
On IRC I see Vojtech, ht, RRSAgent, Zakim, PGrosso, Norm, alexmilowski, dshabano, Liam, caribou
16:01:49 [Norm]
zakim, aaaa is me
16:01:49 [Zakim]
+Norm; got it
16:01:53 [Norm]
zakim, who's here?
16:01:53 [Zakim]
On the phone I see Norm, ht, Alex_Milows, PGrosso
16:01:54 [Zakim]
On IRC I see Vojtech, ht, RRSAgent, Zakim, PGrosso, Norm, alexmilowski, dshabano, Liam, caribou
16:02:36 [Zakim]
+??P20
16:02:50 [Vojtech]
Zakim, ?? is Vojtech
16:02:50 [Zakim]
+Vojtech; got it
16:03:25 [Norm]
Meeting: XML Processing Model WG
16:03:25 [Norm]
Date: 16 December 2010
16:03:25 [Norm]
Agenda:
16:03:25 [Norm]
Meeting: 185
16:03:25 [Norm]
Chair: Norm
16:03:25 [Norm]
Scribe: Norm
16:03:27 [Norm]
ScribeNick: Norm
16:03:31 [Norm]
Present: Norm, Henry, Alex, Paul, Vojtech
16:03:44 [Norm]
Topic: Accept this agenda?
16:03:44 [Norm]
->
16:03:56 [Norm]
Accepted.
16:04:10 [Norm]
Topic: Accept minutes from the previous meeting?
16:04:10 [Norm]
->
16:04:16 [Norm]
Accepted.
16:05:06 [Norm]
Topic: Next meeting: telcon, 6 Jan 2011?
16:05:15 [Norm]
Skipping the rest of December for seasonal festivities.
16:05:17 [Norm]
No regrets heard.
16:05:28 [Norm]
Topic: Resolve question of parsing rules for “{” and “}” in p:document-template and any other issues.
16:06:04 [Norm]
->
16:07:52 [Norm]
Henry: I want to behave as much like XQuery as possible.
16:08:03 [Norm]
Alex: I think it's XSLT not XQuery that we're copying.
16:09:23 [Norm]
Vojtech: In the first version of the document there were different rules. Now we have rules more like XQuery wrt curly braces and quotes.
16:11:55 [Norm]
Some discussion about the current rules.
16:12:03 [Norm]
Vojtech: I think we're currently consistent with what XQuery does.
16:13:29 [Norm]
Norm: I think the only question is, if you see "}" in xpath-mode, do you look for another "}" or do you end the expression?
16:14:15 [Norm]
Vojtech: No, I proposed that if you're not in XPath mode and you see "}" then you just treat it literally.
16:14:55 [Norm]
Norm: So "}}" remains "}}"?
16:15:15 [Norm]
Vojtech: No. If you see "}}" you output "}", if you see an unescaped "}", you just recover from the escaping error and output the "}"
16:15:48 [Norm]
Alex: I think the current rules could allow nested expressions in the future.
16:16:01 [Norm]
Vojtech: I'm just observing that you could recover, I'm not pushing for it.
16:17:27 [Norm]
Norm: I think it will be confusing to do error recovery, so I propose not.
16:18:43 [Norm]
Alex: We could do that, and be done with it.
16:20:02 [Norm]
Proposal: Add a new rule to "regular-mode" which states that an unquoted "}" is an error.
16:21:10 [Norm]
Accepted.
16:21:29 [Norm]
Norm: I'll also add a note to the xpath-mode section to note that "}" doesn't look for "}}", it ends "greedily"
16:22:11 [Norm]
Norm: I'll update the spec and republish it in our space, with a plan to make it an official note in January if no one sees any other problems.
16:24:01 [Norm]
Topic: Last Call comments on our processor profiles document
16:24:11 [Norm]
->
16:26:11 [Norm]
Vojtech: Does the minimum profile require parsing of namespaces?
16:26:27 [Norm]
Vojtech: Now I think it's clear in the text.
16:26:50 [Norm]
Vojtech: The other issue is the one that David Lee raised about having a much simpler profile.
16:27:03 [Norm]
Henry: That amounts to subsetting XML.
16:30:10 [Norm]
Norm: I'm very conflicted. I think what David Lee wants makes logical sense, but it's not clear that we have remit to go there.
16:33:01 [Norm]
More discussion about infosets and subsets of XML and the fact that our section 2 says we start with a namespace well-formed document.
16:33:36 [Norm]
Henry observes that if we named a smaller subset, then parsers that could handle only that subset wouldn't be able to handle any well-formed XML, which is not the case today.
16:34:39 [Norm]
Proposal: Politely decline on the basis that it wouldn't be XML. It's not illogical, but we can't go there.
16:35:04 [Norm]
Accepted.
16:35:18 [Norm]
Topic: Any other business?
16:35:46 [Norm]
Vojtech: What about my question about connecting output ports of compound steps?
16:36:39 [Norm]
->
16:37:32 [Norm]
Norm: I think 5.11 overstates what we intended.
16:37:38 [Norm]
Vojtech: Our implementation allows it.
16:38:33 [Norm]
Norm: Yeah, I guess it makes sense. I don't think my implementation allows it but that's neither here nor there.
16:38:42 [Norm]
Vojtech: I think the rule in 5.11 is quite convenient.
16:39:31 [Norm]
Norm: And allowing it is an editorial change where forbidding it would be a technical change, because 5.11 says it's currently allowed.
16:40:02 [Norm]
Proposal: Add an erratum to say that the output port of a compound step can be directly connected to any of the compound step's readable ports.
16:41:14 [Norm]
Accepted.
16:41:32 [Norm]
ACTION: Norm to write an erratum to allow the output port of a compound step to be directly connected to any of the compound step's readable ports.
16:42:27 [Norm]
Alex: Do we produce a new version with the errata merged in?
16:42:39 [Norm]
Norm: We can, but we don't have to.
16:42:43 [Norm]
Henry: It's polite to do it.
16:42:47 [Norm]
Paul: That would be a second edition.
16:43:16 [Norm]
Henry: I think it's fine to wait at least a year.
16:43:26 [Norm]
Norm: So do I.
16:43:45 [Norm]
Norm: Happy holidays and happy new year to one and all!
16:43:50 [Norm]
Adjourned.
16:43:55 [Zakim]
-Vojtech
16:43:56 [Zakim]
-PGrosso
16:43:56 [Zakim]
-Alex_Milows
16:44:00 [Norm]
rrsagent, set logs world visible
16:44:01 [Zakim]
-Norm
16:44:06 [Norm]
rrsagent, draft minutes
16:44:06 [RRSAgent]
I have made the request to generate
Norm
16:44:08 [PGrosso]
PGrosso has left #xproc
16:44:25 [ht]
norm, you need exim4_4.70 or above
16:44:31 [ht]
what distro are you running?
16:45:22 [Zakim]
-ht
16:45:23 [Zakim]
XML_PMWG()11:00AM has ended
16:45:25 [Zakim]
Attendees were +1.413.624.aaaa, ht, Alex_Milows, PGrosso, Norm, Vojtech
16:45:59 [ht]
I'm on debian lenny, and had to use backports, but that seems to have worked
18:07:41 [ht]
ht has joined #xproc
18:15:33 [dshabano]
dshabano has joined #xproc
18:15:49 [Zakim]
Zakim has left #xproc
18:58:56 [htt]
htt has joined #xproc
18:59:27 [htt]
htt has left #xproc | http://www.w3.org/2010/12/16-xproc-irc | CC-MAIN-2018-26 | refinedweb | 1,356 | 81.73 |
UNGETC(3P) POSIX Programmer's Manual UNGETC(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
ungetc — push byte back into input stream
#include <stdio.h> int ungetc(int c, FILE *stream);
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.

The ungetc() function shall push the byte specified by c (converted to an unsigned char) back onto the input stream pointed to by stream. The pushed-back bytes shall be returned by subsequent reads on that stream in the reverse order of their pushing. A successful intervening call (with the stream pointed to by stream) to a file-positioning function (fseek(), fseeko(), fsetpos(), or rewind()) or fflush() shall discard any pushed-back bytes for the stream. The external storage corresponding to the stream shall be unchanged.

One byte of push-back shall be provided. If ungetc() is called too many times on the same stream without an intervening read or file-positioning operation on that stream, the operation may fail.

If the value of c equals that of the macro EOF, the operation shall fail and the input stream shall be left unchanged.

The value of the file-position indicator for the stream after all pushed-back bytes have been read, or discarded by calling fseek(), fseeko(), fsetpos(), or rewind() (but not fflush()), shall be the same as it was before the bytes were pushed back.
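A classic use of ungetc() is a scanner that has to read one byte past the end of a token; a short sketch (illustrative only, not part of the POSIX text):

#include <stdio.h>
#include <ctype.h>

/* Read a run of decimal digits as an integer, then push the first
   non-digit back so the caller's next read still sees it. */
int read_int(FILE *fp)
{
    int c, n = 0;

    while ((c = getc(fp)) != EOF && isdigit(c))
        n = n * 10 + (c - '0');
    if (c != EOF)
        ungetc(c, fp);  /* one byte of push-back is guaranteed */
    return n;
}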
Section 2.5, Standard I/O Streams, fseek(3p), getc(3p), fsetpos(3p), read(3p), rewind(3p), setbuf(3p)
Pages that refer to this page: stdio.h(0p), fgetc(3p), fgetpos(3p), fgets(3p), fseek(3p), fsetpos(3p), stdin(3p) | http://man7.org/linux/man-pages/man3/ungetc.3p.html | CC-MAIN-2017-43 | refinedweb | 168 | 65.52 |
GWT in Action
samzenpus posted about 7 years ago | from the read-all-about-it dept.
Wow (1, Insightful)
thatskinnyguy (1129515) | about 7 years ago | (#20402343)
Re:Wow (5, Funny)
Drew McKinney (1075313) | about 7 years ago | (#20402449)
Re:Wow (0, Troll)
fimbulvetr (598306) | about 7 years ago | (#20404925)
Re:Wow (3, Interesting)
cromar (1103585) | about 7 years ago | (#20402467)
Is it just me or is anyone else glad to see a review on Slashdot without a chapter by chapter summary? One of the most pointless, pedantic mistakes in book reviews is to summarize each chapter. Huzzah for the writer!
Re:Wow (3, Funny)
hansamurai (907719) | about 7 years ago | (#20402481)
Don't Use Google Web Toolkit.
Worse than Wicket? (3, Interesting)
kisrael (134664) | about 7 years ago | (#20402741)
Re:Worse than Wicket? (1)
D-Cypell (446534) | about 7 years ago | (#20402837)
Re:Worse than Wicket? (3, Insightful)
kisrael (134664) | about 7 years ago | (#20402905)
Re:Worse than Wicket? (0)
Anonymous Coward | about 7 years ago | (#20402963)
Because I'm in Soviet Russia, you insensitive clod!
Re:Worse than Wicket? (2, Interesting)
nuzak (959558) | about 7 years ago | (#20404309)
I'm an example of the last case -- a java burnout. We never needed scalability (internal apps with maybe 50 total users), so I'm back to cobbling together perl and python and even php now. Ironically, the need to glue these apps together has led to adopting more enterprisey stuff like SOAP and ESBs, but in a very à la carte way. Useful enterprise apps remain grown, not engineered.
Re:Worse than Wicket? (1, Interesting)
Anonymous Coward | about 7 years ago | (#20404751)
One of the parts of Wicket that annoyed me the most was that if you needed reasonable URLs it offered very little abstraction over plain old servlets. The default URLs (ie.
How all this nonsense makes things easier is anyone's guess. It's easier and cleaner to write a small abstraction over the servlet API and disregard the framework altogether.
Although I don't really like Wicket, or any Java web framework for that matter, I do think well-thought-out web frameworks can increase productivity and result in cleaner code when used properly. I think Django, Rails and Seaside are reasonably good examples of this.
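For what it's worth, the "small abstraction over the servlet API" really is small. A rough sketch (the Handler interface and the route are made up for illustration):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A tiny front controller: map a readable path to a handler, nothing more.
public class Dispatcher extends HttpServlet {

    interface Handler {
        void handle(HttpServletRequest req, HttpServletResponse res)
                throws IOException;
    }

    private final Map<String, Handler> routes = new HashMap<String, Handler>();

    @Override
    public void init() {
        routes.put("/hello", new Handler() {
            public void handle(HttpServletRequest req, HttpServletResponse res)
                    throws IOException {
                res.setContentType("text/html");
                res.getWriter().println("<p>Hello</p>");
            }
        });
    }

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        Handler h = routes.get(req.getPathInfo());
        if (h == null) {
            res.sendError(HttpServletResponse.SC_NOT_FOUND);
        } else {
            h.handle(req, res);
        }
    }
}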
Re:Worse than Wicket? (3, Informative)
chillenious (1149451) | about 7 years ago | (#20404821)
Re:Worse than Wicket? (4, Insightful)
eddy the lip (20794) | about 7 years ago | (#20403559)
Re:Worse than Wicket? (2, Interesting)
kisrael (134664) | about 7 years ago | (#20403883)
The counter argument comes from people who are heavily OO in their outlook. (more specifically, I think a gap tends to emerge between people who cut their teeth on simple CGI-wrapping things, and got used to using a HashMap mentality for most client/server interaction, and people who cut their teeth on apps and applets, for whom the gap between Java code and what's physically displayed onscreen seems weird and in need of being abstracted away.) Anyway, these folks say that, look, the organization I want is OO, writing this kind of Object->screen element->response mapping is a repetitive task, so I'm willing to buy into someone else's approach to do that work. That's where something like Wicket comes in.
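For concreteness, the pitch looks roughly like this in Wicket (a sketch assuming a 1.x-style API; the class name and the wicket:id are invented for illustration):

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

// The matching template, HelloPage.html, stays plain HTML:
//   <span wicket:id="greeting">placeholder text</span>
public class HelloPage extends WebPage {
    public HelloPage() {
        // Bind a Java component to the id in the markup.
        add(new Label("greeting", "Hello, world"));
    }
}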
My recent semi-revelation for this kind of tool-kit was this: "text" is amazing. It has been the lifeblood of programming for decades, ever since we could get away from punched cards I guess, despite attempts at cutesy visual languages. It's just so powerful, concise, easy to manipulate, easy to automate that manipulation, and in its own way visual (hence the wars over the one true brace style etc) that it will be around for a long long time. But text is, by the way we use it, rather procedural, and while you can do OO in it, that's a new level. But sacrificing the ability to adjust HTML and CSS and Javascript in TEXT on the altar of Objects just seems like too big of a sacrifice to me.
Of course, taking an anti-OO stance is a heresy (see [geocities.com]) and you have to be careful so you don't seem like a crazy nut job who hasn't worked on large-scale projects. But my latest take on it is this: "People and Programs are both best defined by what they DO, not what they ARE".
Another way of looking at it: in Java, when stuff goes wrong, what's your stack trace like? There's always that boilerplate JVM/Server engine at the bottom, but if the core of what's at the top ain't your code, your debugging is going to be a lot harder, because your mistake is in how you set up the other guy's objects to do the work, not you doing the work yourself. (Assuming the other guy's stuff is solid.)
Re:Worse than Wicket? (3, Insightful)
joshv (13017) | about 7 years ago | (#20404073)
Re:Worse than Wicket? (1)
kisrael (134664) | about 7 years ago | (#20404255)
Re:Worse than Wicket? (1)
eddy the lip (20794) | about 7 years ago | (#20404919)
hash map for dispatching requests, and now I'm almost completely OO. I don't think it's the only way to do things, but I'm more comfortable with it. I certainly like the freedom to drop into procedural or functional when I think it fits better, but my bias is definitely for objects. I try not to be a zealot about it. That just doesn't have any place in a field with pretty well defined parameters. I like to leave that for the theologians.
Of course, after committing many of my own sins of over (or just poor) engineering, I have a very low tolerance for it in third party stuff. I was recently working in something that had six levels of inheritance put together by someone who obviously didn't understand composition. I didn't sleep well a few nights.
Things like HTML generating classes just for the sake of staying in a comfortable paradigm boggle my mind. Like you said, ain't nuthin' wrong with text. I'd rather learn something new than shoehorn it into The One True Way (whichever way that happens to be.) I find there's a lot more flexibility and power in letting each technology be itself, and adapting to it, instead.
I don't imagine we've seen the last of this kind of thing by a long shot. I'm convinced, though, that sticking to fundamentals is what's helped me keep up on new technologies, and keep myself relevant. If everyone else is on a bandwagon, it just means it's quieter where I am.
(Liked the site linked in your sig, by the way. Good stuff.)
Re:Worse than Wicket? (1)
pebs (654334) | about 7 years ago | (#20404971)
That's why I like to have the source code for my 3rd party libraries that I use. That way I can learn the other guy's code. I can link the code in with my IDE so that when a problem occurs I can trace through the code in those 3rd party libs.
With Java, I don't use any 3rd party libraries that aren't open source. And some of the key libraries I use (though not all), I am familiar with the source code and how it works. If I am not familiar, I can always become familiar with it when a problem occurs. It's generally good to pick 3rd party projects based on how clean their source code is (among other things), so when you do have to go dig in, its not as difficult.
I think this situation can give you an advantage over writing your own.
Re:Worse than Wicket? (2, Insightful)
poot_rootbeer (188613) | about 7 years ago | (#20403971) natively developed code in each area.
Re:Worse than Wicket? (1)
eddy the lip (20794) | about 7 years ago | (#20405007) with one of the other tools in your kit.
I can see the appeal of staying in your own paradigm, and maybe there's even places where it's the right way to go. But I'm pretty sure that knowing each individually gives me better results faster. Might not be true for all.
Re:Worse than Wicket? (2, Informative)
chillenious (1149451) | about 7 years ago | (#20404645)
Sorry to hear it doesn't work for you. A little OO-obsessed though? Because the framework takes a stance to actually try to provide a real OO programming model where other frameworks simply don't?
I agree that there is a lot to say for simplicity. Especially if your problem is simple. However, my experience is that model 2 just doesn't cut it. I shudder to think back to the horrible instances I've seen of code duplication you get when you can't properly reuse widgets and the tons of hacks I've seen to get around the statelessness of web applications. And then scripting in templates, making them hard to sync with changed designs/ hire a designer to work on them and simply hard to track where logic is put in the first place. I don't know about you, but I never had much fun refactoring JSP and Velocity templates.
Wicket focusses on OO as that facilitates reuse and lets you better cope with complexity. Wicket enforces clean templates so that you won't get yourself into maintenance hell. But it may or may not work for you. It does for me and many others [eweek.com] , but so many people, so many tastes. In the end it is a trade off that can be annoying in the short term, but should save you trouble in the long run.
If what you are trying to do fits the request/ response paradigm, that's fine. I for one, prefer to reason about screens that have panels, forms, fields, tabs and buttons on them, and I don't want to rewrite half of my pages just because I decide to put a wizard in a tab, or move a pageable list to another page or whatever.
If you can use Ajax all the way, a simpler approach like using HTML + JS and maybe DWR should work fine, though a library like GWT should help you avoid all those nasty browser issues etc, AND let you write strongly typed (maintainable) code.
Re:Worse than Wicket? (1)
CryBaby (679336) | about 7 years ago | (#20405939) sweet spot is complex applications (esp. with a lot of Ajax) that *happen* to be served over http. iow, it's probably not a great fit for simple, high-volume public sites (although Wicket devs may disagree by pointing out some of the new features in 1.3 that offer excellent support for stateless pages).
2) You won't like Wicket if any time you see a decorator or composite pattern you either a) don't recognize it or b) refer to it as an "over-engineered" solution. Of course, if that's the case, you probably shouldn't be using Java in the first place.
In general, if you enjoy programming in Java and you like the idea of writing web UI code in more or less the same way you write domain code, you'll probably prefer Wicket to just about any other framework. To back that statement up a bit, I'll mention that *all* of the programmers on my team now write Wicket UI code as well as middle tier code even though some of them have never written a web app before in their life. That was not the case when we used Tapestry because, as nice as that framework is, you still have to think in terms of the request/response cycle to do anything complex and you have to learn a lot about it by rote, as opposed to Wicket where you can learn most of the framework simply by exploring it from within your IDE.
Sure, the "purity" of GWT's approach is undeniably appealing, but I'm reluctant to give up the simplicity of HTML templates, regular Java code that runs on a real JVM, and regular, non-generated JavaScript. As far as being irritated by having to keep your ultra-simple Wicket HTML templates in sync with your straightforward, plain Java code, I'll just say that it's much better than having to keep templates with embedded logic in sync with an abstract, runtime enhanced class along with a bunch of XML configuration. So, coming from Tapestry 4, code/template synchronization in Wicket has been a complete non-issue for me.
I'm not sure if you're advocating the abandonment of all web frameworks or if you just don't see the advantage of a component-based framework as opposed to an action-based framework. The problem with a "bare-bones, no framework" approach is that it usually translates into a custom framework for each application and/or a big mess of copy-and-paste spaghetti code that can only be deciphered by the original author.
The advantage of component-based frameworks is that components naturally correspond to objects whereas action-based frameworks lend themselves to a more procedural approach. I also think that web pages themselves make more sense when viewed as a collection of potentially reusable components (even when I wrote PHP apps, I wrote them in a component-oriented style), but I guess that's a matter of opinion.
Regarding that last statement about "writing HTML in HTML and writing Javascript in Javascript", I would just add "writing Java in Java" and point out that that's exactly why I prefer Wicket over something like GWT or Tapestry. When I'm writing Java, whether it's business or presentation logic, I want it to be "just Java". I want to subclass and compose objects in the normal way without having to understand lots of magical or abstract behavior that the framework is going to invisibly add to my code. When I'm writing HTML, I don't want to mix in logic via JSP tags or pseudo-components like @If -- I'd rather concentrate on layout, CSS, and other purely visual issues. Same with JavaScript -- when I need more JS behavior than what's already built into Wicket (which is essentially just Ajax - think tightly integrated DWR) I can write plain JavaScript using whatever libraries or JS frameworks I prefer. Wicket *allows* you to wrap your JS in Java classes if you want to, but you can also choose to write your JavaScript completely outside of the framework and/or use Ajax callback hooks to push more behavior into pure JS. In my experience, there are good use cases for both approaches.
I don't mean to suggest that Wicket is the ideal solution for every web app. I just wanted to relate a very different experience than you've had and one that I think is more representative of developers who use Wicket on a regular basis. To sum up, I think Wicket lies somewhere between the 100% JavaScript approach of GWT and the "old-fashoined" MVC action-based frameworks. For me, my team, and our applications that's turned out to be the right mix.
Re:Wow (3, Informative)
doofusclam (528746) | about 7 years ago | (#20403177) you need to write anything with a rich web gui and a large amount of interaction between this and a backend. I can use Eclipse to single step through my Java code, with breakpoints and all that stuff. GWT also takes all the browser specific hassles away, for example with the differing rich text area implementations used by all the browsers. Without these niceties i'd have given up on web development before i'd even started.
Oh and the google group for GWT is great - i've had problems answered within ten minutes. The Google team are very visible and are obviously proud of their product. Rightly so, too.
Re:Wow (1)
matto11 (1133407) | about 7 years ago | (#20403263)
Re:Wow (1)
trondotcom (1148541) | about 7 years ago | (#20402903)
Re:Wow (0)
Anonymous Coward | about 7 years ago | (#20403085)
Don't buy this book, sir.
Re:Wow (2, Interesting)
Angostura (703910) | about 7 years ago | (#20403483)
Re:Wow (1)
thatskinnyguy (1129515) | about 7 years ago | (#20403633)
Re:Wow (1)
PoliTech (998983) | about 7 years ago | (#20403869)
Google Web Toolkit
Who'd a thunk?
The more I learn about JavaScript... (4, Insightful)
SanityInAnarchy (655584) | about 7 years ago | (#20402385):The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20402535)
Re:The more I learn about JavaScript... (1)
SanityInAnarchy (655584) | about 7 years ago | (#20402671)
I know, for one thing, that JavaScript started out as a horrible little language, and within a year or so, it became a beautiful little language. I imagine that we could do better, were the same thing attempted today.
Re:The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20403445)
Re:The more I learn about JavaScript... (0)
misleb (129952) | about 7 years ago | (#20403629)
What more do you expect to get from JavaScript? The simple fact is that it lacks some very basic modern OO language constructs such as interfaces, abstract classes, and threads. And by the time you add those in, you'll just have reinvented Java. So what would be the point?
-matthew
Re:The more I learn about JavaScript... (2, Informative)
Gospodin (547743) | about 7 years ago | (#20404219):The more I learn about JavaScript... (1)
misleb (129952) | about 7 years ago | (#20404761)
-matthew
Re:The more I learn about JavaScript... (1)
Gospodin (547743) | about 7 years ago | (#20405117):The more I learn about JavaScript... (1)
fredrik70 (161208) | about 7 years ago | (#20405925)
Re:The more I learn about JavaScript... (1)
Octopus (19153) | about 7 years ago | (#20402537)
Re:The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20402619)
Someone did build a webserver platform around JavaScript.
It was a product from Borland called Intrabuilder.
Circa 1997:
Re:The more I learn about JavaScript... (2, Funny)
jmyers (208878) | about 7 years ago | (#20402719)
duh... IIS and asp
@ Language=JavaScript
Re:The more I learn about JavaScript... (2, Interesting)
AKAImBatman (238306) | about 7 years ago | (#20402735). Which means that Javascript doesn't have to be slow.
I figure you could create a JSHttpServlet object and map a file of your own extension (it can be *.js if you want it to be) to that servlet. The servlet then creates a JS environment that maps HttpRequest and HttpResponse as global variables. You can even map HttpSession and a PrintWriter output stream if you want. (That gets pretty close to JSP territory.
the point of GWT (2, Informative)
davido42 (956948) | about 7 years ago | (#20402739)
(Disclaimer: I evaluated GWT and decided that it was cool for larger web apps, but for a smallish website I think it is overkill. So.. I'm not a GWT user, sorry.)
When I was developing my site (which you will visit now [bitworksmusic.com] and purchase lots of album downloads), I had to deal with the fact that browsers do not implement things consistently. In particular, IE seems to do things differently than every other browser. The idea of GWT is to do all the hard browser bug workarounds and compatibility work for you, so that you write some code in Java and poof! Your web app will look and behave the same across all browsers everywhere. Among the downsides, you have to learn GWT of course, and your resulting code is almost guaranteed to be less efficient and slower to load than if you just code directly in Javascript/HTML/etc.
In the end, I ditched GWT in favor of simplicity, dealing with IE issues as they arose (my native development platform is Firefox). Then again, my site has very limited functionality. YMMV.
david
Re:the point of GWT (1, Interesting)
Anonymous Coward | about 7 years ago | (#20405157)
Maybe take another look at it. The learning curve is pretty minimal if you a) know java and b) have exposure to any other windowing toolkit, ala tcl/tk, swt, swing, etc. Couple hours tops, honestly. Secondly, the obfustication tech. makes the resulting javascript much smaller than hand-coded javascript, much the same way that a C compiler can write more efficient assembly than you or I could. Finally, I actually found the opposite about app size when I have used it with my customers -- for small apps it is perfect because you can produce a really slick interface very quickly. If you app has a lot of distinct functionality, and maybe parts of it aren't yet ported to GWT (for example) GWT actually gets in your way.
Re:The more I learn about JavaScript... (1)
RManning (544016) | about 7 years ago | (#20402769)
Not to date myself, but I remember using Netscape LiveWire. It sucked. I was so glad to move to Perl/CGI and then even happier when we moved to J2EE.
Anyway, there's been a number of server side JavaScript implementations [wikipedia.org] .
Re:The more I learn about JavaScript... (2, Interesting)
fireboy1919 (257783) | about 7 years ago | (#20402819) a lot of the modern pure-javascript libraries (prototype especially is known for breaking existing javascript libraries, for example).
Re:The more I learn about JavaScript... (1)
Bluesman (104513) | about 7 years ago | (#20403553)
You can work around this by writing object-oriented Javascript, like this: The only problem you run into is if you want to call a member function as a result of an event. I haven't found a good way to do this without using some sort of global variable or function, because of the brain-dead way "this" works in Javascript.
But the vast majority of namespace problems in Javascript can be solved with this approach.
Re:The more I learn about JavaScript... (1)
larry bagina (561269) | about 7 years ago | (#20405169)
Re:The more I learn about JavaScript... (1)
Bluesman (104513) | about 7 years ago | (#20405305)
Re:The more I learn about JavaScript... (1)
majorbugger (1149409) | about 7 years ago | (#20402863)
Re:The more I learn about JavaScript... (1)
Abcd1234 (188840) | about 7 years ago | (#20403923) former).
Re:The more I learn about JavaScript... (1)
larry bagina (561269) | about 7 years ago | (#20405259)
Re:The more I learn about JavaScript... (1)
ZeroConcept (196261) | about 7 years ago | (#20402877)
- Differences in browsers object models makes it counter-intuitive to write cross-browser JavaScript code, GWT can take care of these differences for you.
Re:The more I learn about JavaScript... (1)
anomalous cohort (704239) | about 7 years ago | (#20404179):The more I learn about JavaScript... (1)
multipartmixed (163409) | about 7 years ago | (#20402881) slower or faster than any similar functional or imperitave language I can think of (Pascal, Object-Oriented Turing, Miranda, LISP) for comparable jobs. Although LISP probably kicks its ass if your job is heavily recursive.
> But then, you still have that problem, even if it's JavaScript generated from Java code.
I don't know what you're talking about here. JavaScript is completely unreladed to Java.
That said, there is a JavaScript interpreter written in Java called Rhino. But if you were looking for speed, I would certainly look at a C implementation (like spidermonkey or NJS) first.
Re:The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20402995)
Do you even know what this article is about? Google Web Toolkit works by compiling Java code into Javascript and HTML. [google.com] ::The more I learn about JavaScript... (1)
AKAImBatman (238306) | about 7 years ago | (#20403113):The more I learn about JavaScript... (1)
AKAImBatman (238306) | about 7 years ago | (#20403161)
Re:The more I learn about JavaScript... (1)
multipartmixed (163409) | about 7 years ago | (#20403651) concern myself, first and foremost, with correctness and ease-of-coding. Spidermonkey's guts are a little heady but not really all that bad. Design-type comments would be nice, though.
I may change my tune later this month, however, when I try to write multithreaded JavaScript programs.
Have you ever seen C libs embedded in Rhino, maybe built with gcj? I'm no Java guy but I have written enough JNI crap that I could probably cobble together an interface between Rhino and my backend C libs.
Re:The more I learn about JavaScript... (1)
AKAImBatman (238306) | about 7 years ago | (#20404557), make a JNI wrapper. Before you do that, though, make sure there isn't a Java library that already does what you want. There are very few things above the level of a system driver that Java does not have a cross-platform library for.
Re:The more I learn about JavaScript... (1)
larry bagina (561269) | about 7 years ago | (#20405339)
I think the spidermonkey code is a mess... Try taking a look at kjs/WebKit JavaScriptCore [webkit.org] . It's much cleaner and (imo) easier to add new native functions/classes.
You seem quite confused. (1, Informative)
Anonymous Coward | about 7 years ago | (#20404017)
Re:The more I learn about JavaScript... (1)
Bluesman (104513) | about 7 years ago | (#204034 was thinking about how much work it would take to implement something like that, and I realized that Javascript has just about every one of those features, although some are implemented in odd ways. The scoping rules are a bit strange, and the "this" operator is handled poorly, IMHO, but everything else is almost exactly what I'd want.
I think Javascript gets a bad rap because you have to worry about browser compatibility. But if you ever use it while only targetting a single browser, it's a dream to work with, and all of the annoyances go away.
And it's much more powerful than I used to think, before I started working on my now half-finished Javascript app [sourceforge.net] . (Shameless plug there.)
Its called pike. (1, Interesting)
Anonymous Coward | about 7 years ago | (#20403731)
Having a C-like syntax would be good for people who are used to C or Perl and don't want to learn about s-expressions. Automatic memory management is a must."
You just described pike (although it has both static and dynamic typing). [ida.liu.se]
Re:The more I learn about JavaScript... (1)
multipartmixed (163409) | about 7 years ago | (#20403863)
> REALLY don't like is the way it handles inheritance and class hierarchies. I think Smalltalk did a better job there.
I suppose constants and macros would be features worth adding, too... but then, some of the macros I have seen in C make me wish they had never been invented.
But the S-expression-like syntax in JavaScript (var a = {prop: value, prop2: value2}) totally rocks. As do the scoping rules, WITH THE EXCEPTION of it's idiot rule of "oh-its-not-initialized-so-lets-make-it-a-global"
Oh, and vian looks neat! Ever thought about writing something like dialog/xdialog/cdialog for the browser? You could use your same screen interface. I think that would be awesome.
Re:The more I learn about JavaScript... (1)
Bluesman (104513) | about 7 years ago | (#20404343)
I think the coolest thing potentially is being able to send the commands to a real unix server and then pipe the output back to the Javascript shell. Obviously this has potentially serious security implications, but with a secure setup could be extremely useful.
Re:The more I learn about JavaScript... (1)
Abcd1234 (188840) | about 7 years ago | (#20403865) programmer can be a little more lazy. But, IMHO, the benefits don't outweigh the risks.
Personally, I much prefer something like Smalltalk (no, not Ruby... Ruby is just a weak attempt to replicate Smalltalk, and they did a crappy job of it). And transforming Smalltalk into a scripting language would be pretty easy, I think...
Re:The more I learn about JavaScript... (1)
misleb (129952) | about 7 years ago | (#20403419)
Um, you understand that GWT is not a language, it is a framework, right? Java is the language. And JavaScript is most CERTAINLY not more powerful than Java.
Don't give in to the temptation because it would make you look quite foolish.
-matthew
Re:The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20405371)
Err, think again.
Where are the closures in Java? Anonymous functions?
Re:The more I learn about JavaScript... (1)
plams (744927) | about 7 years ago | (#20403441)
Re:The more I learn about JavaScript... (0)
Anonymous Coward | about 7 years ago | (#20403657)
Re:The more I learn about JavaScript... (1)
skidv (656766) | about 7 years ago | (#20404211)
Java 7 Kernel will actually make javascript faster to execute? Sun claims that it will because it only loads the classes it needs. Care to comment?
Re:The more I learn about JavaScript... (1)
AKAImBatman (238306) | about 7 years ago | (#20404461)
I suppose it depends on whether the price of orange juice directly affects the outcome of soccer games played in the Miami Orange Bowl.
Re:The more I learn about JavaScript... (1)
mpcooke3 (306161) | about 7 years ago | (#204044:The more I learn about JavaScript... (2, Informative)
ispeters (621097) | about 7 years ago | (#20404571). The resulting code is obfuscated, which reduces download time, and has all the dead code eliminated, which reduces download time. Also, for broken interpreters, like the one in IE, a smaller script executes faster, on top of downloading more quickly.
I guess what I'm saying is that GWT brings a compiler to Javascript and that has many benefits. For the GWT compiler, Java is to Javascript as C is to assembler in a C compiler. Generally speaking a C compiler produces better machine code than someone writing assembler by hand. The same is true for the GWT compiler--its output runs faster than a comparable programme hand-written in Javascript.
Ian
Just when GWT 1.4 comes out of beta (0)
Anonymous Coward | about 7 years ago | (#20402479)
I Still don't know what GWT is (4, Informative)
chad_r (79875) | about 7 years ago | (#20402759)
It's so simple, but... (1, Troll)
C10H14N2 (640033) | about 7 years ago | (#20403967)
Hell. No.
Re:It's so simple, but... (3, Informative)
Eponymous Bastard (1143615) | about 7 years ago | (#20404745) at all (that's the reason why we didn't pick it up for our UI, our company does business logic in C#).
Since it is also a widget library you should be able to have complex widgets with multiple DOM elements and access them as a unit. You could write your own javascripts objects to access the DOM, but then it's basically a GUI library.
As far as I can tell, the whole point is to write both client and server with a single language and interface, which is very useful.
For example, in our case, the pages are generated with ASP.Net in C#. If we want to disable a button when the page is served we do buttonSubmit.Enabled=false; but if we want to reenable it without a postback, we have to add javascript code to find the DOM node for the button and then enable it. Imagine how messy it gets when you want to add a row to a datagrid. After a while you can't tell all the interactions in the GUI handling logic. I'd love to have something that allows me to write an event handler where I can just write "buttonSubmit.Enabled=true;" and let the compiler work out all the DOM walking code.
So you have three choices:
- Serve the full page (formatted in server side language) and then tweak it in javascript, relying on knowledge of the widget library's rendering.
- Serve a bare bones page and do all the GUI in Javascript, both initial and updates. Let the server handle business logic only. I see some posters have suggested moving the initial javascript rendering to the server.
- Serve a full page and let something like GWT convert the client side code from working with the same objects as the server to a working Javascript/DOM implementation. This is the approach GWT is taking.
Of course, I only looked at it for a couple hours, so someone will probably tell me all the ways I'm wrong.
Re:I Still don't know what GWT is (1)
jilles (20976) | about 7 years ago | (#20404229)
The IDE part is integrated into eclipse and offers all the advanced development features that Java programmers are used to like incremental compilation, refactoring, auto-completion etc. To the developer, GWT looks like just another Java library that builds on all the idioms and patterns they know already. Of course you can debug your application as well either before or after compiling to javascript (i.e. set breakpoint in java code, launch browser to test and trigger the breakpoint, then inspect java code). Basically GWT fixes a lot of stuff that makes AJAX development suck so much: you work in a advanced IDE instead of a crappy text editor that doesn't have a clue about the semantics of what you type; you debug in one of the best debuggers around; browser compatibility issues are taken care off during compilation; you'll know if there's a type error because Java is statically typed; you can do unit testing; etc. In short, GWT is about bringing lots of good development practices from server side Java to browser client side. There's not that many tools around that do this to the extent that GWT does.
GWT is part of a growing set of web development related technology coming from Google. Other nice stuff is the Guice dependency injection serverside component framework for Java and the Gears offline web application component. Presumably, Google eats their own dogfood so the stuff is demonstrably scalable and quite capable as well. Also interesting to learn for the Java hating crowd on slashdot is that the world's largest internet company is a Java shop.
Beyond hope (5, Insightful)
k-zed (92087) | about 7 years ago | (#20402801)
Come on, doesn't anybody read these?
Re:Beyond hope (1)
drdanny_orig (585847) | about 7 years ago | (#20403403)
Re:Beyond hope (4, Funny)
Bluesman (104513) | about 7 years ago | (#20403485)
Just when you need mod points (1)
WarwickRyan (780794) | about 7 years ago | (#20403561)
Re:Just when you need mod points (1)
owlstead (636356) | about 7 years ago | (#20404687)-ins, and it does the parsing and code completion exceptionally well. Anyway, I've to spend most of my time on Eclipse - and that's the one that *I* like - so it seems I got the better deal.
Re:Beyond hope (1)
ceoyoyo (59147) | about 7 years ago | (#20404171)
Re:Beyond hope (1)
owlstead (636356) | about 7 years ago | (#20404737)
GWT *and* Java (4, Informative)
owlstead (636356) | about 7 years ago | (#20402861)
Before somebody grills me, the version of WindowBuilder Pro that I am using is a bit unstable and crashes Eclipse now and then. Lots of memory is also recommended (then again, if you are a developer, you need lots of memory anyway).
Re:GWT *and* Java (2, Informative)
owlstead (636356) | about 7 years ago | (#20402919)
Nah, didn't think so
Why should GWT be on my radar screen? (2, Insightful)
scottsk (781208) | about 7 years ago | (#20403243)
Re:Why should GWT be on my radar screen? (1)
myatmpinis1234 (697897) | about 7 years ago | (#20404231)
Re:Why should GWT be on my radar screen? (1)
nuzak (959558) | about 7 years ago | (#20404549)
It is a servlet, so it works with servlet containers. Like tomcat, resin, jetty, or those big ol JavaEE containers.
I'm not sure that tells you anything about why to use it though.
Oops. (1)
Limburgher (523006) | about 7 years ago | (#20403377)
I want to write desktop apps with JS/GWT/whatever (2, Interesting)
joe_n_bloe (244407) | about 7 years ago | (#20403393)
JS is a great language and GWT is a great tool, especially the hosted development environment. But it will never reach its potential until it is a general purpose application programming language.
Very nice toolkit with some problems (3, Informative)
plams (744927) | about 7 years ago | (#20404077):Very nice toolkit with some problems (1)
owlstead (636356) | about 7 years ago | (#20405741) JavaScript is an interesting one though.
"Also, developing on an AMD64 based Linux I discovered that the hosting environment just doesn't work running from a 64bit JVM."
Weird. Just for my information; I assume there are some native parts involved? I could not find any info on that.
Google DB/LDAP API? (1)
Doc Ruby (173196) | about 7 years ago | (#20404477)
Shameless plug - chess board diagram composer (3, Interesting)
SashaM (520334) | about 7 years ago | (#20404593)] .
Here's the point of GWT... (0)
Anonymous Coward | about 7 years ago | (#20405059)
Anyone who has maintained a pile of Javascript for a few years understands the problem with hand coding javascript and/or the alternative javascript frameworks on the market right now -- lack of good IDEs/debuggers/tools like for other languages, among other problems. GWT counters this by allowing you to work with Java, and providing tools that integrate with your current toolset -- Intellij, Eclipse, NetBeans, etc.
Personally, I will be surprised if this model isn't the way we are all doing browser development within the next 5-10 years -- write your code in one high-level language and have it compiled into a given target (IE: turned into js, css, html, etc). This is exactly what has happened in the evolution of computing languages over the past 50 years, why shouldn't it happen in the web? | http://beta.slashdot.org/story/89553 | CC-MAIN-2014-41 | refinedweb | 6,313 | 70.43 |
UriInfo always null when using @Contextslowdive Aug 20, 2012 4:20 PM
I did a little bit of searching about this issue, and most places seem to mention RESTEasy. Since this application I am working on needs to be deployable to both Weblogic and JBoss, I am hoping to be able to use java/javax libraries. This is my code:
@Context
protected UriInfo uriInfo;
@Context
protected HttpHeaders headers;
The headers variable is not null, but uriInfo is... Also, this gets populated correctly when deploying to Weblogic. I am using JBoss AS 7.1.1Final and my deployment is an EAR containing a RAR and 3 WARs.
1. Re: UriInfo always null when using @ContextNicklas Karlsson Aug 21, 2012 2:09 AM (in response to slowdive)
Is the @Context UriInfo available if used as a method parameter?
2. Re: UriInfo always null when using @Contextslowdive Aug 21, 2012 11:46 AM (in response to Nicklas Karlsson)
Thanks a bunch! Making it a method parameter solved the problem. Strange that it wouldn't work as a member variable though.
3. Re: UriInfo always null when using @ContextNicklas Karlsson Aug 21, 2012 2:53 PM (in response to slowdive)
Don't know. 5.2.2 of the spec says
An instance of UriInfo can be injected into a class field or method parameter using the @Context annotation
4. Re: UriInfo always null when using @ContextNicklas Karlsson Aug 23, 2012 2:17 AM (in response to Nicklas Karlsson)
Tried a simple
@Path("/status")
public class Status
{
@Context
UriInfo info;
and put a breakpoint in one my called methods and I did manage to get a non-null instance on 7.1.1 so it's not a direct bug, I think
5. Re: UriInfo always null when using @Contextslowdive Aug 24, 2012 11:40 AM (in response to Nicklas Karlsson)
Well, I thought this was resolved, but now I am seeing inconsistent results. In summary
- We are using the CXF JAX-RS implementation
- If I use the context annotation for a member variable, it is always null
- If I use it as a method parameter, sometimes it is null, and sometimes it is not. I could not determine a clear pattern for what is causing it to be one way or another.
- Either way works fine in Weblogic | https://community.jboss.org/thread/204237 | CC-MAIN-2015-32 | refinedweb | 384 | 59.33 |
multikdf
Project description
pymultikdf
- PBKDF2
- bcrypt
- scrypt
What is a Key Derivation Function?
From wikipedia ():
In cryptography, a key derivation function (or KDF) derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudo-random function.[1][2] KDFs.
What is PBKDF2?
PBKDF2 (Password-Based Key Derivation Function 2) is a key derivation function that is part of RSA Laboratories’ Public-Key Cryptography Standards (PKCS) series, specifically PKCS #5 v2.0, also published as Internet Engineering Task Force’s RFC 2898. It replaces an earlier standard, PBKDF1, which could only produce derived keys up to 160 bits long.
See:
What is bcrypt?
bcrypt is a key derivation function for passwords BSD and other systems including some Linux distributions such as SUSE Linux.[2] The prefix “$2a$” or “$2b$” (or “$2y$”) in a hash string in a shadow password file indicates that hash string is a bcrypt hash in modular crypt format.[3] The rest of the hash string includes the cost parameter, a 128-bit salt (base-64 encoded as 22 characters), and 184 bits of the resulting hash value (base-64 encoded as 31 characters).[4] The cost parameter specifies a key expansion iteration count as a power of two, which is an input to the crypt algorithm.
See:
What is scrypt?]
See:
Relationship to existing packages
Existing python packages for PBKDF2, bcrypt, scrypt
- pip install fastpbkdf2
- pip install bcrypt
- pip install scrypt
Why a new module?
Sometimes one wants to try or use MULTIPLE different Key Derivation Functions. In such cases, instead of installing MULTIPLE SEPARATE python, packages, just this single module can be installed and used.
This may also be a convenience when porting your code to run under ‘Python For Android ()
Are there any differences?
Exactly and ONLY the following C functions have been wrapped
From fastpbkdf2:
fastpbkdf2_hmac_sha1 fastpbkdf2_hmac_sha256 fastpbkdf2_hmac_sha512
From bcrypt:
bcrypt_kdf
From scrypt:
crypto_scrypt
The following methods should be exactly equivalent to the corresponding methods in the existing python wrappers:
--------------------------------------------------------------- Module.method Identical to --------------------------------------------------------------- multikdf.fastpbkdf2.pbkdf2_hmac fastpbkdf2.pbkdf2_hmac multikdf.bcrypt.kdf bcrypt.kdf multikdf.scrypt.hash scrypt.hash ---------------------------------------------------------------
Test code
See multikdf.test (test.py under the multikdf module directory)
import os from .fastpbkdf2 import pbkdf2, algorithm as hash_algorithms from .bcrypt import bcrypt_kdf from .scrypt import scrypt_kdf min_passwd_len = 8 max_passwd_len = 10 min_pbkdf_rounds = 1000 max_pbkdf_rounds = 5000 step_pbkdf_rounds = 200 min_bcrypt_rounds = 2 max_bcrypt_rounds = 8 min_scrypt_r = 7 max_scrypt_r = 8 min_scrypt_p = 1 max_scrypt_p = 2 min_scrypt_n = 13 max_scrypt_n = 14 def test_pbkdf2(s): for l in range(min_passwd_len, max_passwd_len + 1): i = os.urandom(l) for r in range(min_pbkdf_rounds, max_pbkdf_rounds + 1, step_pbkdf_rounds): for h in hash_algorithms.keys(): print('Testing pbkdf2: l=%d, r=%d, h=%s' % (l, r, h)) pbkdf2(i, s, r=r, kl=kl, h=h) def test_bcrypt(s): for l in range(min_passwd_len, max_passwd_len + 1): i = os.urandom(l) for r in range(min_bcrypt_rounds, max_bcrypt_rounds + 1): print('Testing bcrypt: l=%d, r=%d' % (l, r)) bcrypt_kdf(i, s, r=r, kl=kl) def test_scrypt(s): for l in range(min_passwd_len, max_passwd_len + 1): i = os.urandom(l) for r in range(min_scrypt_r, max_scrypt_r + 1): for p in range(min_scrypt_p, max_scrypt_p + 1): for n in range(min_scrypt_n, max_scrypt_n + 1): print('Testing scrypt: l=%d, r=%d, p=%d, n=%d' % ( l, r, p, n)) scrypt_kdf(i, s, r=r, p=p, n=n, kl=kl) s = os.urandom(64) kl = 64 test_pbkdf2(s) test_bcrypt(s) test_scrypt(s)
INSTALLING:
From github directly using pip:
pip install 'git+'
From github after downloading / cloning:
python setup.py install
From pypi:
pip install multikdf
LICENSE
The files under multikdf/c/fastpbkdf2 are from ctz and are copied unchanged from These files under the terms of the CC0 1.0 Universal License - see the file named LICENSE under multikdf/c/fastpbkdf2
The files under multikdf/c/py-bcrypt are from py-bcrypt (automatically exported from code.google.com/p/py-bcrypt) and imported unchanged. These files under the terms of the ISC/BSD licence. See the file named LICENSE under multikdf/c/py-bcrypt
The files under multikdf/c/scrypt are from Tarsnap and are copied unchanged from The files under multikdf/c/scrypt/lib are licensed under the terms of the 2-clause BSD license. See the file named README.md under the directory multikdf/c/scrypt/lib.
The files under multikdf/c/scrypt/libcperciva are licensed under the terms of the license specified in the file multikdf/c/scrypt/libcperciva/COPYRIGHT.
All remaining files in this package are licensed under the GNU General Public License version 3 or (at your option) any later version. See the file LICENSE-GPLv3.txt for details of the GNU General Public License version 3.
Documentation (pydoc)
Package multikdf
PACKAGE CONTENTS
bcrypt fastpbkdf2 libmultikdf scrypt test
FUNCTIONS
getbuf(l)
multikdf.fastpbkdf2
FUNCTIONS
pbkdf2(i, s, r=1000, kl=64, h='SHA512') i-->bytes: input data (password etc) s-->bytes: salt r-->int: rounds kl-->int: desired key length in bytes h-->str: hash function (name) Returns-->bytes: pbkdf2_hmac(h, i, s, r, kl=None) Should be identical to original fastpbkdf2.pbkdf2_hmac h-->str: hash function (name) i-->bytes: input data (password etc) s-->bytes: salt r-->int: rounds kl-->int: desired key length in bytes Returns-->bytes:
DATA
algorithm = {'sha1': None, 'sha256': None, 'sha512': None}
multikdf.bcrypt
FUNCTIONS
bcrypt_kdf(i, s, r=10, kl=64) i-->bytes: input data (password etc) s-->bytes: salt (os.urandom) r-->int: rounds kl-->int: desired key length in bytes Returns-->bytes: (rounds * PerSec) = Machine-specific constant kdf(password, salt, desired_key_bytes, rounds) Should be identical to original bcrypt.kdf password-->bytes: input data (password etc) salt-->bytes: salt desired_key_bytes-->int: desired key length in bytes rounds-->int: rounds Returns-->bytes:
multikdf.scrypt
FUNCTIONS
hash(i, s, N=16384, r=8, p=1, buflen=64) Should be identical to scrypt.hash i-->bytes: input data (password etc) s-->bytes: salt N-->int: General work factor. Should be a power of 2 if N < 2, it is set to 2. Defaults to 16384 r-->int: Memory cost - defaults to 8 p-->int: Compuation (parallelization) cost - defaults to 1 buflen-->int: Desired key length in bytes Returns-->bytes: scrypt_kdf(i, s, r=8, p=1, n=14, kl=64) i-->bytes: input data (password etc) s-->bytes: salt (os.urandom) r-->int: Memory cost - defaults to 8 p-->int: Compuation (parallelization) cost - defaults to 1 n-->int: General work factor. passed to scrypt as 2^n if n < 1, it is set to 1. Defaults to 14 (scrypt n=16384) Returns-->bytes: (r * p) should be < 2^30 see pydoc scrypt.hash (2^n) * r * p * PerSec = Machine-specific constant
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/multikdf/ | CC-MAIN-2019-47 | refinedweb | 1,135 | 53.61 |
Hi,
are there plans to move away from DTDs and adopt RelaxNG instead?
As an example, I want to extend the forrest document schema with some
special tags to support in-lining of publications. In a project page, I
want to add publications related to this project.
For that it would be convenient to have the schema support any tag from
an external namespace and change the stylesheet to interpret the tag
accordingly.
Because of the rigidity to DTDs, the only thing I managed to come up
with (for the publication feature) is to add some special attributes for
the <p> tag and change the stylesheet accordingly. That's a bit ad-hoc.
<p publication="" project="forrest"/> is a valide entry in a forrest
document and the stylesheet interprets it via a <xsl:template.
Another way would be to define my own DTD for documents. But this is
annoying because I need to change it whenever the forrest document DTD
changes. With RelaxNG, I can define my schema as an extension of the
forrest schema (I just say that the main content model can contain
<publication> entries) and I am all set.
I haven't looked at the code of forrest in terms of validation. But
RelaxNG comes with a suite of open source Java tools that could be used
for validation. James Clark has also developed an amazing mode for emacs
with incremental validation of documents against RelaxNG schemas.
regards,
Arnaud | http://mail-archives.apache.org/mod_mbox/forrest-dev/200402.mbox/%3C4032A8A6.8040504@lucent.com%3E | CC-MAIN-2014-23 | refinedweb | 242 | 61.46 |
Exegesis_03 - Operators
Damian Conway <damian@conway.org>
Maintainer: Larry Wall <larry@wall.org>
Date: 3 Oct 2001
Last Modified: 29 May 2006
Number: 3
[Update: For instance, despite the beautiful lyrics above, diamond does not live, tilde is now the concatenate operator, and star as a prefix operator has mutated into the
[,] reduce operator. (Though
* in a signature still means "slurpy".)]
In Apocalypse 3, Larry describes the changes that Perl 6 will make to operators and their operations. As with all the Apocalypses, only the new and different are presented -- just remember that the vast majority of operator-related syntax and semantics will stay precisely as they are in Perl 5.
To better understand those new and different aspects of Perl 6 operators, let's consider the following program. Suppose we wanted to locate a particular data file in one or more directories, read the first four lines of each such file, report and update their information, and write them back to disk.
We could do that with this:
sub load_data ($filename ; $version, *@dirpath) {
[Update: Optional args are now marked with a
? suffix or a default assignment.]
    $version //= 1;
    @dirpath //= @last_dirpath // @std_dirpath // '.';

    @dirpath ^=~ s{([^/])$}{$1/};
[Update: Hyper smartmatch is now
»~~«.]
    my %data;
    foreach my $prefix (@dirpath) {
[Update: Now spelled:
for @dirpath -> $prefix {
]
        my $filepath = $prefix _ $filename;
[Update: Concatenation is now
~.]
        if (-w -r -e $filepath and 100 < -s $filepath <= 1e6) {
            my $fh = open $filepath : mode=>'rw'
                or die "Something screwy with $filepath: $!";
            my ($name, $vers, $status, $costs) = <$fh>;
[Update: iterating a filehandle is now
@$fh or
=$fh.]
            next if $vers < $version;
            $costs = [split /\s+/, $costs];
            %data{$filepath}{qw(fh name vers stat costs)}
                = ($fh, $name, $vers, $status, $costs);
[Update:
qw() would now be a function call. In general you'd use
<...> instead.]
        }
    }

    return %data;
}

my @StartOfFile is const = (0,0);
[Update: Now you'd say
constant @StartOfFile = (0,0);
or
my @StartOfFile is readonly = (0,0);
]
sub save_data (*%data) {
    foreach my $data (values %data) {
        my $rest = <$data.{fh}.irs(undef)>;
[Update a constant hash subscript would now be
.<fh> instead. The
irs property is now
newline.]
        seek $data.{fh}: *@StartOfFile;
        truncate $data.{fh}: 0;

        $data.{fh}.ofs("\n");
        print $data.{fh}: $data.{qw(name vers stat)}, _@{$data.{costs}}, $rest;
[Update: instead of underline,
prefix:<~> is now the string context operator.]
    }
}

my %data = load_data(filename=>'weblog', version=>1);

my $is_active_bit is const = 0x0080;

foreach my $file (keys %data) {
    print "$file contains data on %data{$file}{name}\n";
    %data{$file}{stat} = %data{$file}{stat} ~ $is_active_bit;
[Update: Since
~ is concatenation, numeric XOR is now
+^ instead.]
    my @costs := @%data{$file}{costs};

    my $inflation;
    print "Inflation rate: " and $inflation = +<>
        until $inflation != NaN;

    @costs = map  { $_.value }
             sort { $a.key <=> $b.key }
             map  { amortize($_) => $_ }
             @costs ^* $inflation;
[Update: These closure arguments now require a comma after them. (And a single-arg sort routine will do the Schwartzian Transform for you automatically, but you can still do it this way.)]
    my sub operator:∑ is prec(\&operator:+($)) (*@list : $filter //= undef) {
[Update: Syntax for declaring such an operator would now be:
my sub prefix:<∑> is equiv(&prefix:<+>) (*@list, +$filter) {
However, there's a built-in
[+] reduce operator that already does sums.]
        reduce {$^a+$^b} ($filter ?? grep &$filter, @list :: @list);
[Update:
??:: is now
??!!. But the syntax is illegal--you can't have a lower-precedence comma inside a tighter-precedence
??!!.]
} print "Total expenditure: $( ∑ @costs )\n";
[Update: General interpolation is now just a closure, and print with a newline is usually done with
say, so you'd just write it:
say "Total expenditure: { [+] @costs }";
or just:
say "Total expenditure: ", [+] @costs;
]
print "Major expenditure: $( ∑ @costs : {$^_ >= 1000} )\n";
[Update: An adverbial block may not have spaces between the colon and the block. Also,
$^_ is really just
$_.]
print "Minor expenditure: $( ∑ @costs : {$^_ < 1000} )\n"; print "Odd expenditures: @costs[1..Inf:2]\n";
[Update: Now written
1..Inf:by(2) or
1..*:by(2).]
}

save_data(%data, log => {name=>'metalog', vers=>1, costs=>[], stat=>0});
The first subroutine takes a filename and (optionally) a version number and a list of directories to search:
sub load_data ($filename ; $version, *@dirpath) {
Note that the directory path parameter is declared as
*@dirpath, not
@dirpath. In Perl 6, declaring a parameter as an array (i.e.
@dirpath) causes Perl to expect the corresponding argument will be an actual array (or an array reference), not just any old list of values. In other words, a
@ parameter in Perl 6 is like a
\@ context specifier in Perl 5.
To allow
@dirpath to accept a list of arguments, we have to use the list context specifier -- unary
* -- to tell Perl to "slurp up" any remaining arguments into the
@dirpath parameter.
This slurping-up process consists of flattening any arguments that are arrays or hashes, and then assigning the resulting list of values, together with any other scalar arguments, to the array (i.e. to
@dirpath in this example). In other words, a
*@ parameter in Perl 6 is like a
@ context specifier in Perl 5.
[Update: This flattening now happens lazily.]
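Slurpy parameters survived into the final language essentially as described here. A minimal modern-Raku sketch (the body and the arguments are invented purely for illustration):

sub load_data ($filename, *@dirpath) {
    say "searching {+@dirpath} directories for $filename";
}

load_data('weblog', 'lib/', 'data/', '.');   # trailing list slurped into @dirpath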
In Perl 5, it's not uncommon to see people using the
||= operator to set up default values for subroutine parameters or input data:
$offset ||= 1;
$suffix ||= $last_suffix || $default_suffix || '.txt';
# etc.
Of course, unless you're sure of your range of values, this can go horribly wrong -- specifically, if the variable being initialized already has a valid value that Perl happens to consider false (i.e. if
$suffix or
$last_suffix or
$default_suffix contained an empty string, or the offset really was meant to be zero).
So people have been forced to write default initializers like this:
$offset = 1 unless defined $offset;
which is OK for a single alternative, but quickly becomes unwieldy when there are several alternatives:
$suffix = $last_suffix    unless defined $suffix;
$suffix = $default_suffix unless defined $suffix;
$suffix = '.txt'          unless defined $suffix;
Perl 6 introduces a binary 'default' operator --
// -- that solves this problem. The default operator evaluates to its left operand if that operand is defined, otherwise it evaluates to its right operand. When chained together, a sequence of
// operators evaluates to the first operand in the sequence that is defined. And, of course, the assignment variant --
//= -- only assigns to its lvalue if that lvalue is currently undefined.
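The defined-or operator proved useful enough that it was eventually back-ported: it has been a standard part of Perl 5 since 5.10. A small sketch of the difference (this works the same in Perl 5.10+ and in modern Raku):

my $offset = 0;    # zero is a perfectly valid offset
$offset ||= 1;     # oops: 0 is false, so $offset becomes 1

my $cursor = 0;
$cursor //= 1;     # fine: 0 is defined, so $cursor stays 0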
The symbol for the operator was chosen to be reminiscent of a
||, but one that's taking a slightly different angle on things.
So
&load_data ensures that its parameters have sensible defaults like this:
$version //= 1;
@dirpath //= @last_dirpath // @std_dirpath // '.';
Note that it will also be possible to provide default values directly in the specification of optional parameters, probably like this:
sub load_data ($filename ; $version //= 1, *@dirpath //= @std_dirpath) {...}
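That is roughly what the final design settled on, except that the default is introduced with a plain = sign. A modern Raku sketch (in Raku, the bare ... really is a valid stub body):

sub load_data ($filename, $version = 1, *@dirpath) { ... }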
As if it weren't broken enough already, there's another nasty problem with using
|| to build default initializers in Perl 5. Namely, that it doesn't work quite as one might expect for arrays or hashes either.
If you write:
@last_mailing_list = ('me', 'my@shadow');

# and later...

@mailing_list = @last_mailing_list || @std_mailing_list;
then you get a nasty surprise: In Perl 5,
|| (and
&&, for that matter) always evaluates its left argument in scalar context. And in a scalar context an array evaluates to the number of elements it contains, so
@last_mailing_list evaluates to
2. And that's what's assigned to
@mailing_list instead of the actual two elements.
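Here's a short Perl 5 demonstration of the gotcha (the variable contents are invented):

my @last_mailing_list = ('me', 'my@shadow');
my @std_mailing_list  = ('everyone');

my @mailing_list = @last_mailing_list || @std_mailing_list;
print "@mailing_list\n";    # prints "2" -- the count, not the addresses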
Perl 6 fixes that problem, too. In Perl 6, both sides of an
|| (or a
&& or a
//) are evaluated in the same context as the complete expression. That means, in the example above,
@last_mailing_list is evaluated in list context, so its two elements are assigned to
@mailing_list, as expected.
The next step in
&load_data is to ensure that each path in
@dirpath ends in a directory separator. In Perl 5, we might do that with:
s{([^/])$}{$1/} foreach @dirpath;
but Perl 6 gives us another alternative: hyper-operators.
Normally, when an array is an operand of a unary or binary operator, it is evaluated in the scalar context imposed by the operator and yields a single result. For example, if we execute:
$account_balance   = @credits + @debits;
$biblical_metaphor = @sheep - @goats;
then
$account_balance gets the total number of credits plus the number of debits, and
$biblical_metaphor gets the numerical difference between the number of
@sheep and
@goats.
That's fine, but this scalar coercion also happens when the operation is in a list context:
@account_balances   = @credits + @debits;
@biblical_metaphors = @sheep - @goats;
Many people find it counter-intuitive that these statements each produce the same scalar result as before and then assign it as the single element of the respective lvalue arrays.
It would be more reasonable to expect these to act like:
# Perl 5 code...
@account_balances   = map { $credits[$_] + $debits[$_] }
                          0..max($#credits,$#debits);
@biblical_metaphors = map { $sheep[$_]   - $goats[$_]  }
                          0..max($#sheep,$#goats);
That is, to apply the operation element-by-element, pairwise along the two arrays.
Perl 6 makes that possible, though not by changing the list context behavior of the existing operators. Instead, Perl 6 provides a "vector" version of each binary operator. Each uses the same symbol as the corresponding scalar operator, but with a caret (
^) dangled in front of it. Hence to get the one-to-one addition of corresponding credits and debits, and the list of differences between pairs of sheep and goats, we can write:
@account_balances   = @credits ^+ @debits;
@biblical_metaphors = @sheep ^- @goats;
[Update: Hyper operators are now written with
»...« quotes.]
This works for all unary and binary operators, including those that are user-defined. If the two arguments are of different lengths, the operator Does What You Mean (which, depending on the operator, might involve padding with ones, zeroes or
undef's, or throwing an exception).
If one of the arguments is a scalar, that operand is replicated as many times as is necessary. For example:
@interest = @account_balances ^* $interest_rate;
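As the Update above notes, the notation that finally shipped uses French quotes rather than a caret. A modern Raku sketch of the same examples (if my reading of the arrow rules is right, the one-sided »*» form is what lets the scalar on the right be recycled):

my @credits  = 110, 220, 330;
my @debits   =  10,  20,  30;

my @balances = @credits »+« @debits;    # (120, 240, 360)
my @interest = @balances »*» 0.05;      # (6, 12, 18)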
Which brings us back to the problem of appending those directory separators. The "pattern association" operator (
=~) can also be vectorized by prepending a caret, so we can apply the necessary substitution to each element in the
@dirpath array like this:
@dirpath ^=~ s{([^/])$}{$1/};
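The vectorized-substitution idiom itself didn't survive in this form. One modern-Raku way to get the same effect -- offered only as a sketch, not as the official translation -- is an ordinary map:

my @dirpath = <lib data/ .>;
@dirpath = @dirpath.map({ $_ ~~ / '/' $ / ?? $_ !! $_ ~ '/' });
say @dirpath;    # [lib/ data/ ./]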
Having ensured everything is set up correctly,
&load_data then processes each candidate file in turn, accumulating data as it goes:
my %data;
foreach my $prefix (@dirpath) {
The first step is to create the full file path, by prefixing the current directory path to the basic filename:
my $filepath = $prefix _ $filename;
And here we see the new Perl 6 string concatenation operator: underscore. And yes, we realize it's going to take time to get used to. It may help to think of it as the old dot operator under extreme acceleration.
Underscore is still a valid identifier character, so you need to be careful about spacing it from a preceding or following identifier (just as you always have with the
x or
eq operators):
# Perl 6 code                  # Meaning
$name = getTitle _ getName;    # getTitle() . getName()
$name = getTitle_ getName;     # getTitle_(getName())
$name = getTitle _getName;     # getTitle(_getName())
$name = getTitle_getName;      # getTitle_getName()
In Perl 6, there's also a unary form of
_. We'll get to that a little later.
[Update: Changing to
~ for these solved the identifier problem.]
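With tilde -- the operator that actually shipped -- the whitespace trap disappears, and the unary form simply imposes string context:

my $name = 'Exegesis' ~ 3;    # 'Exegesis3'
say ~42 eq '42';              # True: prefix ~ stringifies its argument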
Of course, we only want to load the file's data if the file exists, is readable and writable, and isn't too big or too small (say, no less than 100 bytes and no more than a million). In Perl 5 that would be:
if (-e $filepath && -r $filepath && -w $filepath
    and 100 < -s $filepath && -s $filepath <= 1e6) {...
which has far too many
&&'s and
$filepath's for its own good.
In Perl 6, the same set of tests can be considerably abbreviated by taking advantage of two new types of operator chaining:
if (-w -r -e $filepath and 100 < -s $filepath <= 1e6) {...
First, the
-X file test operators now all return a special object that evaluates true or false in a boolean context but is really an encapsulated
stat buffer, to which subsequent file tests can be applied. So now you can put as many file tests as you like in front of a single filename or filehandle and they must all be true for the whole expression to be true. Note that because these are really nested calls to the various file tests (i.e.
-w(-r(-e($filepath)))), the series of tests are effectively evaluated in right-to-left order.
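Interestingly, this particular idea flowed back the other way: stacked file test operators have been legal in Perl 5 since 5.10, with the same right-to-left nesting described above:

# Perl 5.10 or later
my $filepath = 'weblog';
if (-w -r -e $filepath) {
    print "exists, and is readable and writable\n";
}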
The test of the file size uses another new form of chaining that Perl 6 supports: multiway comparisons. An expression like
100 < -s $filepath <= 1e6 isn't even legal Perl 5, but it Does The Right Thing in Perl 6. More importantly, it short-circuits if the first comparison fails and will evaluate each operand only once.
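Chained comparisons made it into the final language unchanged. In modern Raku:

my $size = 4096;
say "just right" if 100 < $size <= 1e6;    # chains, and short-circuits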
Having verified the file's suitability, we open it for reading and writing:
my $fh = open $filepath : mode=>'rw' or die "Something screwy with $filepath: $!";
The
: mode=>'rw' is an adverbial modifier on the
open. We'll see more adverbs shortly.
The
$! variable is exactly what you think it is: a container for the last system error message. It's also considerably more than you think it is, since it's also taken over the roles of
$? and
$@, to become the One True Error Variable.
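That consolidation largely came to pass: in modern Raku, $! holds the most recent error as a full exception object rather than a plain string. A tiny sketch:

try die 'something screwy';
say $!.message;    # something screwy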
Contrary to earlier rumors, the "diamond" input operator is alive and well and living in Perl 6 (yes, the Perl Ministry of Truth is even now rewriting Apocalypse 2 to correct the ... err ... "printing error" ... that announced
<> would be purged from the language).
[Update: The Ministry of Truth was caught in its Big Lie, and
<> is now a
qw//.]
So we can happily proceed to read in four lines of data:
my ($name, $vers, $status, $costs) = <$fh>;
Now, writing something like this is a common Perl 5 mistake -- the list context imposed by the list of lvalues induces
<$fh> to read the entire file, create a list of (possibly hundreds of thousands of) lines, assign the first four to the specified variables, and throw the rest away. That's rarely the desired effect.
In Perl 6, this statement works as it should. That is, it works out how many values the lvalue list is actually expecting and then reads only that many lines from the file.
Of course, if we'd written:
my ($name, $vers, $status, $costs, @and_the_rest) = <$fh>;
then the entire file would have been read.
[Update: It works a bit differently from that now, but has the same effect. Lists are evaluated lazily by default, so the assignment only ever ends up demanding however many lines it needs from the iterator. But it's misleading to say that "It works out how many values the lvalue list is expecting" as if that were a separate step in advance.]
Apart from the new sigil syntax (i.e. hashes now keep their % signs no matter what they're doing), the remainder of &load_data is exactly as it would have been if we'd written it in Perl 5.
We skip to the next file if the current file's version is wrong. Otherwise, we split the costs line into an array of whitespace-delimited values, and then save everything (including the still-open filehandle) in a nested hash within %data:
next if $vers < $version;
$costs = [split /\s+/, $costs];
%data{$filepath}{qw(fh name vers stat costs)} =
    ($fh, $name, $vers, $status, $costs);
} }
Then, once we've iterated over all the directories in @dirpath, we return the accumulated data:
return %data;
}
Perl 6 variables can be used as constants:
my @StartOfFile is const = (0,0);
which is a great way to give logical names to literal values, but ensure that those named values aren't accidentally changed in some other part of the code.
When the data is eventually saved, we'll be passing it to the &save_data subroutine in a hash. If we expected the hash to be a real hash variable (or a reference to one), we'd write:
sub save_data (%data) {...
But since we want to allow for the possibility that the hash is created on the fly (e.g. from a hash-like list of values), we need to use the slurp-it-all-up list context asterisk again:
sub save_data (*%data) {...
We then grab each datum for each file with the usual foreach ... values ... construct:
foreach my $data (values %data) {
and go about saving the data to file.
[Update: Now "for %data.values -> $data {...}".]
Because the Perl 6 "diamond" operator can take an arbitrary expression as its argument, it's possible to set a filehandle to read an entire file and do the actual reading, all in a single statement:
my $rest = <$data.{fh}.irs(undef)>
The variable $data stores a reference to a hash, so to dereference it and access the 'fh' entry, we use the Perl 6 dereferencing operator (dot) and write: $data.{fh}. In practice, we could leave out the operator and just write $data{fh}, since Perl can infer from the $ sigil that we're accessing the hash through a reference held in a scalar. In fact, in Perl 6 the only place you must use an explicit . dereferencer is in a method call. But it never hurts to say exactly what you mean, and there's certainly no difference in performance if you do choose to use the dot.
The .irs(undef) method call then sets the input record separator of the filehandle (i.e. the Perl 6 equivalent of $/) to undef, causing the next read operation to return the remaining contents of the file. And because the filehandle's irs method returns its own invocant -- i.e. the filehandle reference -- the entire expression can be used within the angle brackets of the read.
[Update: The use of parameterized methods for object modifiers is deprecated in favor of the but operator. However, this sort of thing should be set on the filehandle object outside the loop in any event.]
A variation on this technique allows a Perl program to do a shell-like read-from-filename just as easily:
my $next_line = <open $filename or die>;
or, indeed, to read the whole file:
my $all_lines = < open $filename : irs=>undef >;
[Update: Make it:
my $all_lines = slurp $filename;
]
Having grabbed the entire file, we now rewind and truncate it, in preparation for writing it back:
seek $data.{fh}: *@StartOfFile;
truncate $data.{fh}: 0;
You're probably wondering what's with the asterisk ... unless you've ever tried to write:
seek $filehandle, @where_and_whence;
in Perl 5 and gotten back the annoying "Not enough arguments for seek" exception. The problem is that seek expects three distinct scalars as arguments (as if it had a Perl 5 prototype of seek($$$)), and it's too fastidious to flatten the proffered array in order to get them.
It's handy to wrap the magical 0,0 arguments of the seek in a single array (so we no longer have to remember this particular incantation), but to use such an array in Perl 5 we would then have to write:
seek $data->{fh}, $StartOfFile[0], $StartOfFile[1];   # Perl 5
In Perl 6 that's not a problem, because we have * -- the list context specifier. When used in an argument list, it takes whatever you give it (typically an array or hash) and flattens it. So:
seek $data.{fh}: *@StartOfFile;   # Perl 6
massages the single array into a list of two scalars, as seek requires.
[Update: Now use [,] to "reduce with comma".]
Oh, and yes, that is the adverbial colon again. In Perl 6, seek and truncate are both methods of filehandle objects. So we can either call them as:
$data.{fh}.seek(*@StartOfFile);
$data.{fh}.truncate(0);
Or use the "indirect object" syntax:
seek $data.{fh}: *@StartOfFile;
truncate $data.{fh}: 0;
And that's where the colon comes in. Another of its many uses in Perl 6 is to separate "indirect object" arguments (e.g. filehandles) from the rest of the argument list. The main place you'll see colons guarding indirect objects is in print statements.
[Update: We still use an indirect object colon, but it is no longer construed as an adverbial colon. Also, the examples above would require parens around the indirect object.]
Finally, &save_data has everything ready and can write the four fields and the rest of the file back to disk. First, it sets the output field separator for the filehandle (i.e. the equivalent of Perl 5's $, variable) to inject newlines between elements:
$data.{fh}.ofs("\n");
Then it prints the fields to the filehandle:
print $data.{fh}: $data.{qw(name vers stat)}, _@{$data.{costs}}, $rest;
Note the use of the adverbial colon after $data.{fh} to separate the filehandle argument from the items to be printed. The colon is required because it's how Perl 6 eliminates the nasty ambiguity inherent in the "indirect object" syntax. In Perl 5, something like:
print foo bar;
could conceivably mean:
print {foo} (bar); # Perl 5: print result of bar() to filehandle foo
or
print ( foo(bar) ); # Perl 5: print foo() of bar() to default filehandle
or even:
print ( bar->foo );   # Perl 5: call method foo() on object returned by
                      #         bar() and print result to default filehandle
In Perl 6, there is no confusion, because each indirect object must be followed by a colon. So in Perl 6:
print foo bar;
can only mean:
print ( foo(bar) ); # Perl 6: print foo() of bar() to default filehandle
and to get the other two meanings we'd have to write:
print foo: bar;    # Perl 6: print result of bar() to filehandle foo()
                   #         (foo() not foo, since there are no
                   #         bareword filehandles in Perl 6)
and:
print foo bar: ;   # Perl 6: call method foo() on object returned by
                   #         bar() and print result to default filehandle
In fact, the colon has an even wider range of use, as a general-purpose "adverb marker"; a notion we will explore more fully below.
The printed arguments are: a hash slice:
$data.{qw(name vers stat)},
[Update: Now generally written: $data<name vers stat>.]
a stringified dereferenced nested array:
_@{$data.{costs}},
[Update: Now written: ~@($data<costs>).]
and a scalar:
$rest;
The new hash slice syntax was explained in the previous Apocalypse/Exegesis, and the scalar is just a scalar, but what was the middle thing again?
Well, $data.{costs} is just a regular Perl 6 access to the 'costs' entry of the hash referred to by $data. That entry contains the array reference that was the result of splitting $costs in &load_data.
So to get the actual array itself, we can prefix the array reference with a @ sigil (though, technically, we don't have to: in Perl 6 arrays and array references are interchangeable in scalar context).
That gives us @{$data.{costs}}. The only remaining difficulty is that when we print the list of items produced by @{$data.{costs}}, they are subject to the output field separator. Which we just set to newline.
But what we want is for them to appear on the same line, with a space between each.
Well ... evaluating a list in a string context does precisely that, so we could just write:
"@{$data.{costs}}" # evaluate array in string context
But Perl 6 has another alternative to offer us -- the unary underscore operator. Binary underscore is string concatenation, so it shouldn't be too surprising that unary underscore is the stringification operator (think: concatenation with a null string). Prefixing any expression with an underscore forces it to be evaluated in string context:
_@{$data{costs}} # evaluate array in string context
Which, in this case, conveniently inserts the required spaces between the elements of the costs array.
Now that the I/O is organized, we can get down to the actual processing. First, we load the data:
my %data = load_data(filename=>'weblog', version=>1);
Note that we're using named arguments here. This attempt would blow up badly in Perl 5, because we didn't set &load_data up to expect a hash-like list of arguments. But it works fine in Perl 6, for two reasons: because we declared &load_data with named parameters; and because the Perl 6 => operator isn't in Kansas anymore.
In Perl 5, => is just an up-market comma with a single minor talent: It stringifies its left operand if that operand is a bareword.
In Perl 6, => is a fully-fledged anonymous object constructor -- like [...] and {...}. The objects it constructs are called "pairs" and they consist of a key (the left operand of the =>), and a value (the right operand). The key is still stringified if it's a valid identifier, but both the key and the value can be any kind of Perl data structure. They are accessed via the pair object's key and value methods:
my $pair_ref = [1..9] => "digit";

print $pair_ref.value;    # prints "digit"
print $pair_ref.key.[3];  # prints 4
So, rather than getting four arguments:
load_data('filename', 'weblog', 'version', 1); # Perl 5 semantics
&load_data gets just two arguments, each of which is a reference to a pair:
load_data( $pair_ref1, $pair_ref2); # Perl 6 semantics
When the subroutine dispatch mechanism detects one or more pairs as arguments to a subroutine with named parameters, it examines the keys of the pairs and binds their values to the correspondingly named parameters -- no matter what order the paired arguments originally appeared in. Any remaining non-pair arguments are then bound to the remaining parameters in left-to-right order.
So we could call &load_data in any of the following ways:
load_data(filename=>'weblog', version=>1);   # named
load_data(version=>1, filename=>'weblog');   # named (order doesn't matter)
load_data('weblog', 1);                      # positional (order matters)
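Since any leftover non-pair arguments are bound positionally, the two styles can also be mixed. A small sketch (not from the original article):

load_data('weblog', version=>1);   # 'weblog' binds positionally to filename,
                                   # the pair binds by name to version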
There are numerous other uses for pairs, one of which we'll see shortly.
Having loaded the data, we go into a loop and iterate over each file's information. First, we announce the file and its internal name:
foreach my $file (keys %data) {
    print "$file contains data on %data{$file}{name}\n";
[Update:
for %data.kv -> $file, $entry {
    say "$file contains data on $entry<name>";
}
]
Then we toggle the "is active" status bit (the eighth bit) for each file. To flip that single bit without changing any of the other status bits, we bitwise-xor the status bitset against the bitstring 0000000010000000. Each bit xor'd against a zero stays as it is (0 xor 0 --> 0; 1 xor 0 --> 1), while xor'ing the eighth bit against 1 complements it (0 xor 1 --> 1; 1 xor 1 --> 0).
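For instance, with a hypothetical status value (not from the example program), the arithmetic works out like this:

    0000000011000001      # status: bits 1, 7 and 8 set
xor 0000000010000000      # the is-active mask
  = 0000000001000001      # bit 8 flipped; all other bits unchanged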
But because the caret has been appropriated as the Perl 6 hyper-operator prefix, it will no longer be used as bitwise xor. Instead, binary tilde will be used:
%data{$file}{stat} = %data{$file}{stat} ~ $is_active_bit;
This is actually an improvement in syntactic consistency since bitwise xor (now binary ~) and bitwise complement (still unary ~) are mathematically related: ~x is (-1 ~ x).
[Update: Symbolic XORs and NOTs now consistently use ^ rather than ~.]
Note that we could have used the assignment variant of binary ~:
%data{$file}{stat} ~= $is_active_bit;   # flip only bit 8 of status bitset
[Update: Is now +^= for the numeric XOR assignment operator.]
but that's probably best avoided due to its confusability with the much commoner "pattern association" operator:
%data{$file}{stat} =~ $is_active_bit;   # match if status bitset is "128"
By the way, there is also a high precedence logical xor operator in Perl 6. You guessed it: ~~.
[Update: No, that's now the smart-match operator, just to avoid the =~ confusion. High precedence XOR is ^^ instead.]
This finally fills the strange gap in Perl's logical operator set:
    Binary (low) | Binary (high) | Bitwise
    _____________|_______________|________
         or      |      ||       |    |
         and     |      &&       |    &
         xor     |      ~~       |    ~
And it will also help to reduce programmer stress by allowing us to write:
$make = $money ~~ $fast;
instead of (the clearly over-excited):
$make = !$money != !$fast;
In both Perl 5 and 6, it's possible to create an alias for a variable. For example, the subroutine:
sub increment { $_[0]++ }   # Perl 5
sub increment { @_[0]++ }   # Perl 6
works because the elements of @_ become aliases for whatever variable is passed as their corresponding argument. Similarly, one can use a for to implement a Pascal-ish with:
for my $age ( $person[$n]{data}{personal}{time_dependent}{age} ) {
    if    ($age < 12) { print "Child" }
    elsif ($age < 18) { print "Adolescent" }
    elsif ($age < 25) { print "Junior citizen" }
    elsif ($age < 65) { print "Citizen" }
    else              { print "Senior citizen" }
}
Perl 6 provides a more direct mechanism for aliasing one variable to another in this way: the := (or "binding") operator. For example, we could rewrite the previous example like so in Perl 6:
my $age := $person[$n]{data}{personal}{time_dependent}{age};
[Update: Make that:
my $age := $person[$n]<data><personal><time_dependent><age>;
]
if    ($age < 12) { print "Child" }
elsif ($age < 18) { print "Adolescent" }
elsif ($age < 25) { print "Junior citizen" }
elsif ($age < 65) { print "Citizen" }
else              { print "Senior citizen" }
Bound aliases are particularly useful for temporarily giving a conveniently short identifier to a variable with a long or complex name. Scalars, arrays, hashes and even subroutines may all be given less sesquipedalian names in this way:
my @list := @They::never::would::be::missed::No_never_would_be_missed;
our %plan := %{$planning.[$planner].{planned}.[$planet]};
temp &rule := &FulfilMyGrandMegalomanicalDestinyBwahHaHaHaaaa;
In our example program, we use aliasing to avoid having to write @%data{$file}{costs} everywhere:
my @costs := @%data{$file}{costs};
An important feature of the binding operator is that the lvalue (or lvalues) on the left side form a context specification for the rvalue (or rvalues) on the right side. It's as if the lvalues were the parameters of an invisible subroutine, and the rvalues were the corresponding arguments being passed to it. So, for example, we could also have written:
my @costs := %data{$file}{costs};
(i.e. without the @ dereferencer) because the lvalue expects an array as the corresponding rvalue, so Perl 6 automatically dereferences the array reference in %data{$file}{costs} to provide that.
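A small sketch of what that auto-dereferencing buys us (the data values are hypothetical):

%data{$file}{costs} = [1050, 200, 3000];   # the entry holds an array ref
my @costs := %data{$file}{costs};          # @costs now aliases that array
@costs[0] = 999;                           # also updates %data{$file}{costs}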
More interestingly, if we have both lvalue and rvalue lists, then each of the rvalues is evaluated in the context specified by its corresponding lvalue. For example:
(@x, @y) := (@a, @b);
aliases @x to @a, and @y to @b, because @'s on the left act like @ parameters, which require -- and bind to -- an unflattened array as their corresponding argument. Likewise:
($x, %y, @z) := (1, {b=>2}, %c{list});
binds $x to the value 1 (i.e. $x becomes a constant), %y to the anonymous hash constructed by {b=>2}, and @z to the array referred to by %c{list}. In other words, it's the same set of bindings we'd see if we wrote:
sub foo($x, %y, @z) {...}
foo(1, {b=>2}, %c{list});
except that the := binding takes effect in the current scope.
And because := works that way, we can also use the flattening operator (unary *) on either side of such bindings. For example:
(@x, *@y) := (@a, $b, @c, %d);
aliases @x to @a, and causes @y to bind to the remainder of the rvalues -- by flattening out $b, @c, and %d into a list and then slurping up all their components together.
Note that @y is still an alias for those various slurped components. So @y[0] is an alias for $b, @y[1..@c.length] are aliases for the elements of @c, and the remaining elements of @y are aliases for the interlaced keys and values of %d.
When the star is on the other side of the binding, as in:
($x, $y) := (*@a);
[Update: Now [,]@a instead.]
then @a is flattened before it is bound, so $x becomes an alias for @a[0] and $y becomes an alias for @a[1].
The binding operator will have many uses in Perl 6 (most of which we probably haven't even thought of yet), but one of the commonest will almost certainly be as an easy way to swap two arrays efficiently:
(@x, @y) := (@y, @x);
Yet another way to think about the binding operator is to consider it as a sanitized version of those dreaded Perl 5 typeglob assignments. That is:
$age := $person[$n]{data}{personal}{time_dependent}{age};
is the same as Perl 5's:
*age = \$person->[$n]{data}{personal}{time_dependent}{age};
except that it also works if $age is declared as a lexical.
Oh, and binding is much safer than typeglobbing was, because it explicitly requires that $person[$n]{data}{personal}{time_dependent}{age} evaluate to a scalar, whereas the Perl 5 typeglob version would happily (and silently!) replace @age, %age, or even &age if the rvalue happened to produce a reference to an array, hash, or subroutine instead of a scalar.
We should also note that the binding of the @costs array:
my @costs := @%data{$file}{costs};
shows yet another case where Perl 6's sigil semantics are much DWIM-mier than those of Perl 5.
In Perl 5 we would probably have written that as:
local *costs = \ @$data{$file}{costs};
and then spent some considerable time puzzling out why it wasn't working, before realising that we'd actually meant:
local *costs = \ @{$data{$file}{costs}};
instead.
That's because, in Perl 5, the precedence of a hash key is relatively low, so:
@$data{$file}{costs}
# means:
#   @{$data}{$file}{costs}
# i.e. (invalid attempt to) access the 'costs'
# key of a one-element slice of the hash
# referred to by $data
# problem is: slices don't have hash keys
whereas:
@{$data{$file}{costs}}
# means:
#   @{ $data{$file}{costs} }
# i.e. dereference of array referred to by
# $data{$file}{costs}
The problem simply doesn't arise in Perl 6, where the two would be written quite distinctly, as:
%data{@($file)}{costs}
# means:
#   (%data{@($file)}).{costs}
# (still an error in Perl 6)
and:
@%data{$file}{costs}
# means:
#   @{ %data{$file}{costs} }
# i.e. dereference of array referred to by
# %data{$file}{costs}
respectively.
[Update: You now have to write @(%...) instead. @% would be construed as an illegal sigil. You can also write it using a .@ postfix.]
One of the perennial problems with Perl 5 is how to read in a number. Or rather, how to read in a string...and then be sure that it contains a valid number. Currently, most people read in the string and then either just assume it's a number (optimism) or use the regexes found in perlfaq4 or Regexp::Common to make sure (cynicism).
Perl 6 offers a simpler, built-in mechanism.
Just as the unary version of binary underscore (_) is Perl 6's explicit stringification specifier, so too the unary version of binary plus is Perl 6's explicit numerifier. That is, prefixing an expression with unary + evaluates that expression in a numeric context. Furthermore, if the expression has to be coerced from a string and the string does not begin with a valid number, the numerification operator returns NaN, the not-a-number value.
That makes it particularly easy to read in numeric data reliably:
my $inflation;
print "Inflation rate: " and $inflation = +<>
    until $inflation != NaN;
The unary + takes the string returned by <> and converts it to a number. Or, if the string can't be interpreted as a number, + returns NaN. Then we just go back and try again until we do get a valid number.
Note that these new semantics for unary + are a little different from its role in Perl 5, where it is just the identity operator. In Perl 5 it's occasionally used to disambiguate constructs like:
print ($x + $y) * $z;    # in Perl 5 means: ( print($x+$y) ) * $z;
print +($x + $y) * $z;   # in Perl 5 means: print( ($x+$y) * $z );
To get the same effect in Perl 6, we'd use the adverbial colon instead:
print ($x + $y) * $z;    # in Perl 6 means: ( print($x+$y) ) * $z;
print : ($x + $y) * $z;  # in Perl 6 means: print( ($x+$y) * $z );
Another handy use for pairs is as a natural data structure for implementing the Schwartzian Transform. This caching technique is used when sorting a large list of values according to some expensive function on those values. Rather than writing:
my @sorted = sort { expensive($a) <=> expensive($b) } @unsorted;
and recomputing the same expensive function every time each value is compared during the sort, we can precompute the function on each value once. We then pass both the original value and its computed value to sort, use the computed value as the key on which to sort the list, but then return the original value as the result. Like this:
my @sorted =                              # step 4: store sorted originals
    map  { $_.[0] }                       # step 3: extract original
    sort { $a.[1] <=> $b.[1] }            # step 2: sort on computed
    map  { [$_, expensive($_)] }          # step 1: cache original and computed
    @unsorted;                            # step 0: take originals
The use of arrays can make such transforms hard to read (and to maintain), so people sometimes use hashes instead:
my @sorted =
    map  { $_.{original} }
    sort { $a.{computed} <=> $b.{computed} }
    map  { {original=>$_, computed=>expensive($_)} }
    @unsorted;
That improves the readability, but at the expense of performance. Pairs are an ideal way to get the readability of hashes but with (probably) even better performance than arrays:
my @sorted =
    map  { $_.value }
    sort { $a.key <=> $b.key }
    map  { expensive($_) => $_ }
    @unsorted;
Or in the case of our example program:
@costs =
    map  { $_.value }
    sort { $a.key <=> $b.key }
    map  { amortize($_) => $_ }
    @costs ^* $inflation;
Note that we also used a hyper-multiplication (^*) to multiply each cost individually by the rate of inflation before sorting them. That's equivalent to writing:
@costs =
    map  { $_.value }
    sort { $a.key <=> $b.key }
    map  { amortize($_) => $_ }
    map  { $_ * $inflation }
    @costs;
but spares us from the burden of yet another map.
More importantly, because @costs is an alias for @%data{$file}{costs}, when we assign the sorted list back to @costs, we're actually assigning it back into the appropriate sub-entry of %data.
Perl 6 will probably have a built-in sum operator, but we might still prefer to build our own for a couple of reasons. Firstly, sum is obviously far too long a name for so fundamental an operation; it really should be ∑. Secondly, we may want to extend the basic summation functionality somehow. For instance, by allowing the user to specify a filter and only summing those arguments that the filter lets through.
Perl 6 allows us to create our own operators. Their names can be any combination of characters from the Unicode set. So it's relatively easy to build ourselves a ∑ operator:
my sub operator:∑ is prec(\&operator:+($)) (*@list) {
    reduce {$^a+$^b} @list;
}
We declare the ∑ operator as a lexically scoped subroutine. The lexical scoping eases the syntactic burden on the parser, the semantic burden on other unrelated parts of the code, and the cognitive burden on the programmer.
The operator subroutine's name is always operator:whatever_symbols_we_want. In this case, that's operator:∑, but it can be any sequence of Unicode characters, including alphanumerics:
my sub operator:*#@& is prec(\&operator:\) (STR $x) {
    return "darn $x";
}

my sub operator:† is prec(\&CORE::kill) (*@tIHoH) {
    kill(9, @tIHoH) == @tIHoH or die "batlhHa'";
    return "Qapla!";
}

my sub operator:EQ is prec(\&operator:eq) ($a, $b) {
    return $a eq $b          # stringishly equal strings
        || $a == $b != NaN;  # numerically equal numbers
}

# and then:

warn *#@& "QeH!" unless † $puq EQ "Qapla!";
Did you notice that cunning $a == $b != NaN test in operator:EQ? This lovely Perl 6 idiom solves the problem of numerical comparisons between non-numeric strings.
In Perl 5, a comparison like:
$a = "a string";
$b = "another string";
print "huh?" if $a == $b;
will unexpectedly succeed (and silently too, if you run without warnings), because the non-numeric values of both the scalars are converted to zero in the numeric context of the ==.
But in Perl 6, non-numeric strings numerify to NaN. So, using Perl 6's multiway comparison feature, we can add an extra != NaN to the equality test to ensure that we compared genuine numbers.
[Update: Now you'd just use === to compare two values within their type's definition of value equality.]
Meanwhile, we also have to specify a precedence for each new operator we define. We do that with the is prec trait of the subroutine. The precedence is specified in terms of the precedence of some existing operator; in this case, in terms of Perl's built-in unary +:
my sub operator:∑ is prec( \&operator:+($) )
[Update: Now done with "is equiv".]
To do this, we give the is prec trait a reference to the existing operator. Note that, because there are two overloaded forms of operator:+ (unary and binary) of different precedences, to get the reference to the correct one we need to specify its complete signature (its name and parameter types) as part of the enreferencing operation. The ability to take references to signatures is a standard feature in Perl 6, since ordinary subroutines can also be overloaded, and may need the same kind of disambiguation when enreferenced.
If the operator had been binary, we might also have had to specify its associativity (left, right, or non), using the is assoc trait.
Note too that we specified the parameter of operator:∑ with a flattening asterisk, since we want @list to slurp up any series of values passed to it, rather than being restricted to accepting only actual array variables as arguments.
The implementation of operator:∑ is very simple: we just apply the built-in reduce function to the list, reducing each successive pair of elements by adding them.
Note that we used a higher-order function to specify the addition operation. Larry has decided that the syntax for higher-order functions requires that implicit parameters be specified with a $^ sigil (or @^ or %^, as appropriate) and that the whole expression be enclosed in braces.
So now we have a ∑ operator:
$result = ∑ $wins, $losses, $ties;
but it doesn't yet provide a way to filter its values. Normally, that would present a difficulty with an operator like ∑, whose *@list argument will gobble up every argument we give it, leaving no way -- except convention -- to distinguish the filter from the data.
But Perl 6 allows any subroutine -- not just the built-ins -- to take adverbial arguments, and that's exactly what ∑ needs here.
A subroutine's adverbs are specified as part of its normal parameter list, but separated from its regular parameters by a colon:
my sub operator:∑ is prec(\&operator:+($)) ( *@list : $filter //= undef) {...
This specifies that operator:∑ can take a single scalar adverb, which is bound to the parameter $filter. When there is no adverb specified in the call, $filter is default-assigned the value undef.
We then modify the body of the subroutine to pre-filter the list through a grep, but only if a filter is provided:
reduce {$^a+$^b} ($filter ?? grep &$filter, @list :: @list);
}
The ?? and :: are the new way we write the old ?: ternary operator in Perl 6. Larry had to change the spelling because he needed the single colon for marking adverbs. But it's a change for the better anyway -- it was rather odd that all the other short-circuiting logical operators (&& and || and //) used doubled symbols, but the conditional operator didn't. Well, now it does. The doubling also helps it stand out better in code, in part because it forces you to put space around the :: so that it's not confused with a package name separator.
[Update: We've changed :: to !! to reduce that confusion, and because of the ? vs ! symbology of true? vs false? that pervades the rest of Perl 6.]
You might also be wondering about the ambiguity of ??, which in Perl 5 already represents an empty regular expression with question-mark delimiters. Fortunately, Perl 6 won't be riddled with the nasty ?...? regex construct, so there's no ambiguity at all.
Adverbial semantics can be defined for any Perl 6 subroutine. For example:
sub mean (*@values : $type //= 'arithmetic') {
    given ($type) {
        when 'arithmetic': { return sum(@values) / @values }
        when 'geometric':  { return product(@values) ** (1/@values) }
        when 'harmonic':   { return @values / sum( @values ^** -1 ) }
        when 'quadratic':  { return (sum(@values ^** 2) / @values) ** 0.5 }
    }
    croak "Unknown type of mean: '$type'";
}
Adverbs will probably become widely used for passing this type of "out-of-band" behavioural modifier to subroutines that take an unspecified number of data arguments.
[Update: Nowadays any named parameter may be specified adverbially.]
OK, so now our ∑ operator can take a modifying filter. How exactly do we pass that filter to it?
As described earlier, the colon is used to introduce adverbial arguments into the argument list of a subroutine or operator. So to do a normal summation we write:
$sum = ∑ @costs;
whilst to do a filtered summation we place the filter after a colon at the end of the regular argument list:
$sum = ∑ @costs : sub {$_ >= 1000};
or, more elegantly, using a higher-order function:
$sum = ∑ @costs : {$^_ >= 1000};
Any arguments after the colon are bound to the parameters specified by the subroutine's adverbial parameter list.
[Update: Now you'd probably just write:
$sum = ∑ @costs, :filter{$_ >= 1000};
or just
$sum = [+] grep {$_ >= 1000}, @costs;
]
Note that the example also demonstrates that we can interpolate the results of the various summations directly into output strings. We do this using Perl 6's scalar interpolation mechanism ($(...)), like so:
print "Total expenditure: $( ∑ @costs )\n";
print "Major expenditure: $( ∑ @costs : {$^_ >= 1000} )\n";
print "Minor expenditure: $( ∑ @costs : {$^_ < 1000} )\n";
Finally (and only because we can), we print out a list of every second element of @costs. There are numerous ways to do that in Perl 6, but the cutest is to use a lazy, infinite, stepped list of indices in a regular slicing operation.
In Perl 6, any list of values created with the .. operator is created lazily. That is, the .. operator doesn't actually build a list of all the values in the specified range, it creates an array object that knows the boundaries of the range and can interpolate (and then cache) any given value when it's actually needed. That's useful, because it greatly speeds up the creation of a list like (1..Inf).
Inf is Perl 6's standard numerical infinity value, so a list that runs to Inf takes ... well ... forever to actually build. But writing 1..Inf is OK in Perl 6, since the elements of the resulting list are only ever computed on demand. Of course, if you were to print(1..Inf), you'd have plenty of time to go and get a cup of coffee. And even then (given the comparatively imminent heat death of the universe) that coffee would be really cold before the output was complete. So there will probably be a warning when you try to do that.
But to get an infinite list of odd indices, we don't want every number between 1 and infinity; we want every second number. Fortunately, Perl 6's .. operator can take an adverb that specifies a "step-size" between the elements in the resulting list. So if we write (1..Inf : 2), we get (1,3,5,7,...). Using that list, we can extract the oddly indexed elements of an array of any size (e.g. @costs) with an ordinary array slice:
print @costs[1..Inf:2]
You might have expected another one of those "maximal-entropy coffee" delays whilst the slice works through the infinitely many undef's that theoretically exist after @costs' last element, but slices involving infinite lists avoid that problem by returning only those elements that actually exist in the list being sliced. That is, instead of iterating the requested indices in a manner analogous to:
sub slice is lvalue (@array, *@wanted_indices) {
    my @slice;
    foreach $wanted_index ( @wanted_indices ) {
        @slice[+@slice] := @array[$wanted_index];
    }
    return @slice;
}
infinite slices iterate the available indices:
sub slice is lvalue (@array, *@wanted_indices) {
    my @slice;
    foreach $actual_index ( 0..@array.last ) {
        @slice[+@slice] := @array[$actual_index]
            if any(@wanted_indices) == $actual_index;
    }
    return @slice;
}
(Obviously, it's actually far more complicated -- and lazy -- than that. It has to preserve the original ordering of the wanted indexes, as well as cope with complex cases like infinite slices of infinite lists. But from the programmer's point of view, it all just DWYMs).
[Update: Now we write that 1..* to better indicate that the top bound is not Inf but "Whatever".]
By the way, binding selected array elements to the elements of another array (as in: @slice[+@slice] := @array[$actual_index]), and then returning the bound array as an lvalue, is a neat Perl 6 idiom for recreating any kind of slice-like semantics with user-defined subroutines.
And so, lastly, we save the data back to disk:
save_data(%data, log => {name=>'metalog', vers=>1, costs=>[], stat=>0});
Note that we're passing in both a hash and a pair, but that these still get correctly folded into &save_data's single hash parameter, courtesy of the flattening asterisk on the parameter definition:
sub save_data (*%data) {...
It's okay if your head is spinning at this point.
We just crammed a huge number of syntactic and semantic changes into a comparatively small piece of example code. The changes may seem overwhelming, but that's because we've been concentrating on only the changes. Most of the syntax and semantics of Perl's operators don't change at all in Perl 6.
So, to conclude, here's a summary of what's new, what's different, and (most of all) what stays the same.
Unchanged:
++ and --
unary !, ~, \, and -
[Update: ~ is now +^ or ~^, and \ now builds a Capture object that degenerates to reference semantics.]
**
=~ and !~
[Update: Smartmatch is now ~~.]
*, /, and %
+ and -
<< and >>
[Update: Now +< or ~< and +> or ~>.]
& and |
[Update: & is now +& or ~&, while | is now +| or ~|.]
=, +=, -=, *=, etc.
,
not
and, or, and xor

Changed:
-> (dereference) becomes .
. (concatenate) becomes _
[Update: ~ instead.]
+ (identity) now enforces numeric context on its argument
^ (bitwise xor) becomes ~
[Update: No, it becomes +^ or ~^.]
=> becomes the "pair" constructor
? : becomes ?? ::
[Update: ?? !! now.]
.. becomes even lazier
[Update: All lists are lazy by default now.]
<, >, lt, gt, ==, !=, etc. become chainable
-r, -w, -x, etc. are nestable
<> input operator is more context-aware
[Update: prefix:<=> is now the iterator iterator.]
&& and || operators propagate their context to both their operands
x repetition operator no longer requires listifying parentheses on its left argument in a list context.
[Update: The list-repeating form is now xx instead.]

New:
_ is the explicit string context enforcer
[Update: ~.]
~~ is high-precedence logical xor
[Update: ^^.]
* is a list context specifier for parameters and an array-flattening operator for arguments
[Update: Use [,] for arguments.]
^ is a meta-operator for specifying vector operations
[Update: »op« now.]
:= is used to create aliased variables (a.k.a. binding)
// is the logical 'default' operator
Opened 10 years ago
Closed 8 years ago
#510 closed defect (wontfix)
ST_Estimated_Extent doesn't work with views
Description
If a GIS-enabled view is set up with metadata in the geometry_columns table, the ST_Estimated_Extent command throws an error:
select st_estimated_extent('public','myview','the_geom');

ERROR: LWGEOM_estimated_extent: couldn't locate table within current schema
SQL state: XX000
even though the relation exists:
SELECT count(*)=1
FROM pg_catalog.pg_namespace nm
JOIN pg_catalog.pg_class c ON nm.oid = c.relnamespace
JOIN pg_catalog.pg_attribute a ON c.oid = a.attrelid
WHERE nm.nspname = 'public'
  AND c.relname = 'myview'
  AND attname = 'the_geom'
This error appears to affect the FDO PostGIS Provider, since I see this error in the logs near crashes in AutoCAD Map 3D.
Change History (4)
ST_EstimatedExtent only works with real tables, for which stats exist. Estimating the extent of a query would be a pretty wild guess.
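(For reference, the standard fallback for a view — an exact extent computed by scanning the relation, so slower but always available — is something like the following, using the names from the ticket:)

SELECT ST_Extent(the_geom) FROM public.myview;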
As a workaround for FDO users that use the PostGIS Provider, you can try replacing one of the functions:
This workaround has made AutoCAD Map3D 2008 and 2010 more stable (fewer crashes) when loading layers and generally working with the FDO PostGIS Provider.
Hello,
I'm playing around with Joints. I can create a Joint using the JointFactory with no problem. However, I don't have an actual "Joint" class to use so I can instantiate my joints and reference them later. Here are my includes:
using;
I'm using Farseer 2.1.3 with XNA. I imagine I'm either trying to go about this the wrong way, not referencing something correctly, or just doing something dumb. :) I'm open to suggestions.
Solution:
Had to include:
using FarseerGames.FarseerPhysics.Dynamics.Joints;
I had an old project I started working on again. The last time I worked on it was when I had C# Express 2010; now I have 2012. I went to add a new Windows Form the same way I normally do, but now I am getting an error. I think something changed in 2012, and I am not sure what I need to do to fix it. All the old Windows Forms still work normally.
Error: "'ARCode' could not be found (are you missing a using directive or an assembly reference?)"
problem fixed, it was the namespace.
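(For anyone hitting the same error: the usual cause is that the newly added form was generated under a different namespace than the rest of the project. A sketch of the fix, assuming the project's root namespace is ARCode and the form name is illustrative:)

namespace ARCode   // must match the namespace the rest of the project uses
{
    public partial class NewForm : Form
    {
        public NewForm()
        {
            InitializeComponent();
        }
    }
}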
I started taking a C++ class two weeks ago, so I'm relatively new to this. I'm trying to grasp the fundamentals of arrays, but I'm absolutely stuck with this problem.
I'm trying to write a program that has a declaration in main() to store the string "Vacation is near"
into an array named 'message'. It should have a function call to 'display()' that accepts 'message' in a parameter
named 'strng' and then display the first eight elements (by 'elements', does that include spaces?) of the 'message' array.
I'm not exactly sure as to what to put inside the 'for' loop.
If you could help clarify, I'd appreciate any help!
My program is here:
Code:
#include <iostream>
using namespace std;

// function prototype
void display(char[]);

int main()
{
    char message[] = "Vacation is near";

    // call display() function
    display(message);

    system("pause");
    return 0;
}

void display(char strng[])
{
    for (int i = 0; i < 8; i++)
    {
        // I'm completely confused as to what to put here?
    }
    return;
}
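(In case it helps anyone searching later: each element of a char array is one character, and spaces do count as elements. A minimal completion of the loop body would be:)

for (int i = 0; i < 8; i++)
{
    cout << strng[i];   // prints the i-th character; the first 8 spell "Vacation"
}
cout << endl;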
A Beginner's Python Tutorial/Exception Handling
If you haven't seen them before, you're not trying hard enough. What are they? Errors. Exceptions. Problems. Know what I'm talking about? I got it with this program:
- Code Example 1 - buggy program
def menu(list, question):
    for entry in list:
        print 1 + list.index(entry),
        print ") " + entry
    return raw_input(question) - 1

# running the function
# remember what the backslash does
answer = menu(['A','B','C','D','E','F','H','I'],\
              'Which letter is your favourite? ')

print 'You picked answer ' + (answer + 1)
This is just an example of the menu program we made earlier. Appears perfectly fine to me. At least until when I first tried it. Run the program, and what happens?
Bugs - Human Errors
The most common problems with your code are of your own doing. Sad, but true. What do we see when we try to run our crippled program?
- Code Example 2 - error message
Traceback (most recent call last): File "/home/steven/errortest.py", line 10, in -toplevel- answer = menu(< I'll snip it here >) File "/home/steven/errortest.py", line 6, in menu return raw_input(question) - 1 TypeError: unsupported operand type(s) for -: 'str' and 'int'
Say what? What Python is trying to tell you (but struggling to find a good word for it) is that you can't join a string of letters and a number into one string of text. Let's go through the error message and have a look at how it tells us that:
File "/home/steven/errortest.py", line 10, in -toplevel- tells us a couple of things. File "/home/steven/errortest.py" tells us which file the error occurred in. This is useful if you use lots of modules that refer to each other. line 10, in -toplevel- tells us that it is in line # 10 of the file, and in the top level (that is, no indentation).
answer = menu(['A','B','C','D','E','F','H','I'],'Which letter is your favourite? ') duplicates the code where the error is.
- Since this line calls a function, the next two lines describe where in the function the error occurred.
TypeError: unsupported operand type(s) for -: 'str' and 'int' tells you the error. In this case, it is a 'TypeError', where you tried to subtract incompatible variables.
There are multiple file and code listings for a single error, because the error occurred with the interaction of two lines of code (e.g. when using a function, the error occurred on the line where the function was called, AND the line in the function where things went wrong).
Now that we know what the problem is, how do we fix it. Well, the error message has isolated where the problem is, so we'll only concentrate on that bit of code.
- Code Example 3 - calling the menu function
answer = menu(['A','B','C','D','E','F','H','I'],\ 'Which letter is your favourite? ')
This is a call to a function. The error occurred in the function in the following line
- Code Example 4 - Where it went wrong
return raw_input(question) - 1
raw_input always returns a string, hence our problem. Let's change it to input(), which, when you type in a number, it returns a number:
- Code Example 5 - Fixing it
return input(question) - 1
Bug fixed!
Exceptions - Limitations of the Code[edit]
Okay, the program works when you do something normal. But what if you try something weird? Type in a letter (lets say, 'm') instead of a number? Whoops!
- Code Example 6 - Another error message
Traceback (most recent call last): File "/home/steven/errortest.py", line 10, in -toplevel- answer = menu(< I'll snip it here >) File "/home/steven/errortest.py", line 6, in menu return input(question) - 1 File "", line 0, in -toplevel- NameError: name 'm' is not defined
What is this telling us? There are two code listings - one in line 10, and the other in line 6. What this is telling us is that when we called the menu function in line 10, an error occurred in line 6 (where we take away 1). This makes sense if you know what the input() function does - I did a bit of reading and testing, and realised that if you type in a letter or word, it will assume that you are mentioning a variable! so in line 6, we are trying to take 1 away from the variable 'm', which doesn't exist.
Have no clue on how to fix this? One of the best and easiest ways is to use the try and except operators.
Here is an example of try being used in a program:
- Code Example 7 - The try operator
try:
    function(world, parameters)
except:
    print world.errormsg
This is an example of a really messy bit of code that I was trying to fix. First, the code under try: is run. If there is an error, the compiler jumps to the except section and prints world.errormsg. The program doesn't stop right there and crash, it runs the code under except: then continues on.
Let's try that where the error occurred in our code (line 6). The menu function now is:
- Code Example 8 - testing our fix
def menu(list, question):
    for entry in list:
        print 1 + list.index(entry),
        print ") " + entry
    try:
        return input(question) - 1
    except NameError:
        print "Enter a correct number"
Try entering a letter when you're asked for a number and see what happens. Dang. We fixed one problem, but now it has caused another problem further down the track. This happens all the time. (Sometimes you end up going around in circles, because your code is an absolute mess). Let's have a look at the error:
- Code Example 9 - Yet another error message
Traceback (most recent call last): File "/home/steven/errortest.py", line 12, in -toplevel- print 'You picked answer', (answer + 1) TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
What has happened this time is that the menu function has returned no value - it only printed an error message. When, at the end of the program, we try to print the returned value plus 1, what is the returned value? There is no returned value? So what is 1 + ... well, we have no clue what we are adding 1 to!
We could just return any old number, but that would be lying. What we really should do is rewrite the program to cope with this exception. With what? try and except!
- Code Example 10 - yet another solution
# from when we finish defining the function
answer = menu(['A','B','C','D','E','F','H','I'],\
              'Which letter is your favourite? ')
try:
    print 'You picked answer', (answer + 1)
    # you can put stuff after a comma in the 'print' statement,
    # and it will continue as if you had typed in 'print' again
except:
    print '\nincorrect answer.'
    # the '\n' is for formatting reasons. Try without it and see.
Problem solved again.
Endless Errors
The approach that we used above is not recommended. Why? Because apart from the error that we know can happen, except: catches every other error too. What if this means we never see an error that could cause problems down the track? If except: catches every error under the sun, we have no hope of controlling what errors we deal with, and the other ones that we want to see, because so far we haven't dealt with them. We also have little hope of dealing with more than one type of error in the same block of code. What should one do, when all is hopeless? Here is an example of code with such a situation:
- Code Example 11 - The Problem We Face
print 'Subtraction program, v0.0.1 (beta)'
a = input('Enter a number to subtract from > ')
b = input('Enter the number to subtract > ')
print a - b
Okay, you enter your two numbers and it works. Enter a letter, and it gives you a NameError. Let's rewrite the code to deal with a NameError only. We'll put the program in a loop, so it restarts if an error occurs (using continue, which starts the loop from the top again, and break, which leaves the loop):
- Code Example 12 - Dealing With NameError
print 'Subtraction program, v0.0.2 (beta)'
loop = 1
while loop == 1:
    try:
        a = input('Enter a number to subtract from > ')
        b = input('Enter the number to subtract > ')
    except NameError:
        print "\nYou cannot subtract a letter"
        continue
    print a - b
    try:
        loop = input('Press 1 to try again > ')
    except NameError:
        loop = 0
Here, we restarted the loop if you typed in something wrong. In line 12 we assumed you wanted to quit the program if you didn't press 1, so we quit the program.
But there are still problems. If we leave something blank, or type in an unusual character like ! or ;, the program gives us a SyntaxError. Let's deal with this. When we are asking for the numbers to subtract, we will give a different error message. When we ask to press 1, we will again assume the user wants to quit.
- Code Example 13 - Now, dealing with SyntaxError
print 'Subtraction program, v0.0.3 (beta)'
loop = 1
while loop == 1:
    try:
        a = input('Enter a number to subtract from > ')
        b = input('Enter the number to subtract > ')
    except NameError:
        print "\nYou cannot subtract a letter"
        continue
    except SyntaxError:
        print "\nPlease enter a number only."
        continue
    print a - b
    try:
        loop = input('Press 1 to try again > ')
    except (NameError, SyntaxError):
        loop = 0
As you can see, you can have multiple except uses, each dealing with a different problem. You can also have one except to deal with multiple exceptions, by putting them inside parentheses and separating them with commas.
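Here is a compact sketch of both forms side by side (this snippet is illustrative, not from the subtraction program):

try:
    n = input('Enter a number > ')
except NameError:
    print "Letters are not numbers"
except (SyntaxError, TypeError):
    # one handler shared by several error types
    print "Please enter something sensible"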
Now we have a program that is very difficult for an end user to crash. As a final challenge, see if you can crash it. There is one way I have thought of - if you read the section on human error carefully, you might know what it is.
For doing things like
setTimeout(function () {
    ...
    setTimeout(arguments.callee, 100);
}, 100);
I need something like arguments.callee. I found information at javascript.info that arguments.callee is deprecated:
This property is deprecated by ECMA-262 in favor of named function expressions and for better performance.
But what should then be used instead? Something like this?
setTimeout(function myhandler() {
    ...
    setTimeout(myhandler, 100);
}, 100);
// has a big advantage that myhandler cannot be seen here!!!
// so it doesn't spoil namespace
BTW, is arguments.callee cross-browser compatible?
Yes, that's what, theoretically, should be used. You're right. However, it doesn't work in some versions of Internet Explorer, as always. So be careful. You may need to fall back on arguments.callee, or, rather, a simple:
function callback() {
    // ...
    setTimeout(callback, 100);
}
setTimeout(callback, 100);
Which does work on IE.
minitech's answer is quite good, but it is missing one more scenario. You declare a function called callback, which means two things: first, the function is an object in memory, and second, the function name is only a reference to that object. If you, for any reason, break the link between these two, the proposed code will not work either.
Proof:
function callback() {
    // ...
    setTimeout(callback, 100);
}
setTimeout(callback, 100);

var callback2 = callback; // another reference to the same object
callback = null;          // break the first reference
callback2();              // callback in setTimeout now is null.
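(Worth noting: a named function expression avoids this particular breakage — the name is bound inside the function's own scope, so no outer reassignment can reach it. A sketch, setting aside the old-IE quirks mentioned above:)

setTimeout(function tick() {
    // ...
    setTimeout(tick, 100);   // 'tick' resolves inside the function itself
}, 100);
// There is no outer binding named 'tick' to reassign or null out.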
From the developer.mozilla.org page, the description is:
Warning: The 5th edition of ECMAScript (ES5) forbids use of arguments.callee() in strict mode. Avoid using arguments.callee() by either giving function expressions a name or use a function declaration where a function must call itself.
Obviously this is the first example of the workaround ("by either giving function expressions a name"), but let's see how we can deal with "or use a function declaration where a function must call itself" and what that will bring:
function callback(){
    //...
    setTimeout(innercall(), 100);
    function innercall(){
        //innercall is safe to use in callback context
        innercall.caller(); //this will call callback();
    }
}
Then we are safe to do whatever we want with the callback reference:
var callback2 = callback;
callback = null;
callback2(); //will work perfectly.
But what should be then used instead? Something like this?
Yes, you answered your own question. For more information, see here:
Why was the arguments.callee.caller property deprecated in JavaScript?
It has a pretty good discussion about why this change was made.
Department of Computational Social Science, George Mason University
What's better than awesome data? Awesome data on a map. I loved Rolf Fredheim's tutorial on mapping GDELT in R, so I decided to try and replicate (some of) it in Python.
For the basics of working with GDELT, check out my previous tutorial, Rolf's, or John Beieler's.
# Some code to style the IPython notebook and make it more legible.
# CSS styling adapted from
from IPython.core.display import HTML
styles = open("Style.css").read()
HTML(styles)
The key Python package you need for this is the Basemap toolkit for Matplotlib. I found it to be non-trivial to set up on my Mac. Before you can install it, you need the GEOS library. The Mac binaries helped me, but your mileage may vary.
(Here's one advantage that R has over Python; the packages seem to work much easier out of the box, and you get slightly prettier maps at the end.)
import datetime as dt
from collections import defaultdict

import numpy as np  # needed for the summary statistics below
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
# Set this variable to the directory where the GDELT data files are
PATH = "GDELT.1979-2012.reduced/"
We quickly refresh ourselves on the column names and indices; for this analysis, we care about the geolocation data: the lat and long coordinates for each actor, and for the event itself.
with open(PATH + "2010.reduced.txt") as f:
    col_names = f.readline().split("\t")

for i, col_name in enumerate(col_names):
    print i, col_name
0 Day 1 Actor1Code 2 Actor2Code 3 EventCode 4 QuadCategory 5 GoldsteinScale 6 Actor1Geo_Lat 7 Actor1Geo_Long 8 Actor2Geo_Lat 9 Actor2Geo_Long 10 ActionGeo_Lat 11 ActionGeo_Long
data = []
for year in range(1979, 2013):
    f = open(PATH + str(year) + ".reduced.txt")
    for raw_row in f:
        row = raw_row.split("\t")
        actor1 = row[1][:3]
        actor2 = row[2][:3]
        both = actor1 + actor2
        if "RUS" in both:
            data.append(raw_row)

print "Russia-related records:", len(data)
Russia-related records: 3928349
Next, we iterate through all the rows and count the events by coordinates. We'll use a defaultdict keyed on a (lat, long) tuple to store the counts. Then we'll run some basic summary statistics on the counts to get an idea for the distribution; are there many points with roughly similar event counts, or a few with high event counts and many with low ones?
point_counts = defaultdict(int)  # Defaultdict with (lat, long) as key

for row in data:
    row = row.split("\t")
    try:
        lat = float(row[10])
        lon = float(row[11])
        point_counts[(lat, lon)] += 1
    except:
        pass

# Get some summary statistics
counts = np.array(point_counts.values())
print "Total points:", len(counts)
print "Min events:", counts.min()
print "Max events:", counts.max()
print "Mean events:", counts.mean()
print "Median points:", np.median(counts)
Total points: 52603 Min events: 1 Max events: 1086076 Mean events: 73.4356215425 Median points: 2.0
These numbers suggest a distribution with very high variance; the number of events at most points are small, but some get very, very large. In fact, we might be dealing with a power law. Work with social-scientific or complex data for any length of time, and you'll start seeing power laws everywhere. There are some good reasons for this, but we don't need to worry about them in order to make a cool map.
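One quick way to eyeball that suspicion is to histogram the log of the counts — a sketch, reusing the counts array from the previous cell:

plt.figure()
plt.hist(np.log10(counts), bins=50)
plt.xlabel("log10(events per point)")
plt.ylabel("number of points")
plt.show()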
So why do we care about the distribution right now? Because we need to know how to plot the points. Make the smallest ones too small, and we won't be able to see them; make the largest ones too large, and it will overwhelm the rest of the map. A log-scale is probably a good call here; a point with 10x as many events will be twice the size on our map.
Putting data on maps is quite literally more art than science. It took me some tweaking to find something that looked good. I ended up settling on taking the log of the event count for each point + 1 (since $log(1) = 0$ ), and multiplying it by 2 to get a size in points. Take my word for it, or play around with it to make it better.
def get_size(count):
    '''
    Convert a count to a point size. Log-scaled.
    '''
    scale_factor = 2
    return np.log10(count + 1) * scale_factor
Now we need to draw the actual map. We do this in several steps; as I understand it, each step is drawing a different 'layer' onto our image.
First, we define a Basemap object. We pick a projection for it, choose a resolution (we should go with low, since we're looking at a big chunk of the world) and set a bounding box.
Once we have the Basemap, we call a few of its built-in methods to draw the coastlines, the country boundaries, fill the landmass with color, and draw the box around the map.
Finally, once we've done all that, we'll iterate over each of our points and draw them on the map, one at a time. Calling the map object itself will convert the lat and long coordinates to y and x on our map image (and remember that lat = y, long = x). The plot method will draw the point at the map itself. For each point, we'll use the get_size function above to convert the event count to a size. Finally, we'll fix point transparency (alpha) at 0.3: enough to see, without completely overwhelming the map, and providing a nice layering effect when many points cluster together. But again, more art than science; tweak it and see what looks good to you!
# Note that we're drawing on a regular matplotlib figure, so we set the
# figure size just like we would any other.
plt.figure(figsize=(12,12))

# Create the Basemap (constructor arguments reconstructed to match the
# 10-100E / 20-70N bounding box used later; the projection is assumed)
event_map = Basemap(projection='merc', resolution='l',
                    llcrnrlon=10, llcrnrlat=20,
                    urcrnrlon=100, urcrnrlat=70)

# Draw the coastlines, country borders, landmass and map boundary
event_map.drawcoastlines()
event_map.drawcountries()
event_map.fillcontinents(color='0.8')  # Light gray
event_map.drawmapboundary()

# Draw the points on the map:
for point, count in point_counts.iteritems():
    x, y = event_map(point[1], point[0])  # Convert lat, long to y,x
    marker_size = get_size(count)
    event_map.plot(x, y, 'ro', markersize=marker_size, alpha=0.3)
Next we'll map interactions; to keep things manageable and reduce map clutter, we'll only look at 2012 events. For each event, we get the lat and long coordinates for Actor 1 (columns 6,7) and Actor 2 (columns 8,9).
We'll count again using a defaultdict, but this time the keys will be pairs of points.
# Defaultdict with ((lat, long), (lat,long)) as key interaction_counts = defaultdict(int) for row in data: row = row.split("\t") # Skip row if not in 2012 if row[0][:4] != '2012': continue try: lat_1 = float(row[6]) lon_1 = float(row[7]) lat_2 = float(row[8]) lon_2 = float(row[9]) interaction_counts[((lat_1, lon_1), (lat_2, lon_2))] += 1 except: pass # Check point data: counts = np.array(interaction_counts.values()) print "Total point-pairs:", len(counts) print "Min events:", counts.min() print "Max events:", counts.max() print "Mean events:", counts.mean() print "Median points:", np.median(counts)
Total point-pairs: 27785 Min events: 1 Max events: 34160 Mean events: 7.24218103293 Median points: 1.0
Instead of points, here we'll be drawing lines between points. So instead of size, let's scale line transparency. Again, it took some tweaking to find a scaling that worked well, but I ended up settling on log-scaling relative to the maximum count (since transparency is always between 0 and 1), scaled by 0.25. Play around with it and see what works for you.
max_val = np.log10(counts.max()) def get_alpha(count): ''' Convert a count to an alpha val. Log-scaled ''' scale = np.log10(count) return (scale/max_val) * 0.25
The first step of the map drawing will be the same. Create the Basemap and draw the key geographic features.
Next, we'll draw a line for each point. Specifically, we'll draw a great circle segment, which has a nice Basemap function, is the actual 'straight line' on a globe, and looks pretty cool to boot. Confusingly, unlike for drawing points, we don't need to convert lat/long coordinates to x/y in order to draw the great circle line.
# Draw the basemap like before plt.figure(figsize=(12,12))') event_map.drawmapboundary() # Draw the lines on the map: for arc, count in interaction_counts.iteritems(): point1, point2 = arc y1, x1 = point1 y2, x2 = point2 # Only plot lines where both points are on our map: if ((x1 > 10 and x1 < 100 and y1 > 20 and y1 < 70) and (x2 > 10 and x2 < 100 and y2 > 20 and y2 < 70)): line_alpha = get_alpha(count) # Draw the great circle line event_map.drawgreatcircle(x1, y1, x2, y2, linewidth=2, color='r', alpha=line_alpha)
So there you have some cool visualizations with GDELT. Of course, pretty pictures != actual analysis. But sometimes visualizations can help kickstart your analysis by showing you things you might not have noticed otherwise. And once you have good analysis, visualizations can be one of the best ways to convey your point quickly and concisely.
So what are you waiting for? Get to work! | http://nbviewer.jupyter.org/github/dmasad/GDELT_Intro/blob/master/GDELT_Mapping.ipynb | CC-MAIN-2016-30 | refinedweb | 1,504 | 73.98 |
Build the ActionScript side of your extension into a SWC
file. The SWC file is an ActionScript library — an archive file
that contains your ActionScript classes and other resources, such
as its images and strings.
When you package a native extension, you need both the SWC file
and a separate library.swf file, which you extract from the SWC
file. The SWC file provides the ActionScript definitions for authoring
and compilation. The library.swf provides the ActionScript implementation
used by a specific platform. If different target platforms of your
extension require different ActionScript implementations, create
multiple SWC libraries and extract the library.swf file separately
for each platform. A best practice, however, is that all the ActionScript
implementations have the same public interfaces. (Only one SWC file
can be included in the ANE package.)
The SWC file contains a file called library.swf. For more information,
see The SWC file and SWF files in the ANE package.
Use one of the following ways to build the SWC file:
Use Adobe Flash Builder to create a Flex library project.
When
you build the Flex library project, Flash Builder creates a SWC
file. See Create Flex library projects.
Be
sure to select the option to include Adobe AIR libraries when you
create your Flex library project.
Ensure that the SWC is
compiled to the correct version of the SWF format. Use SWF 11 for
AIR 2.7, SWF 13 for AIR 3, and SWF 14 for AIR 3.1. You can set the SWF
file format version in the project’s properties. Select ActionScript Compiler
and enter this Additional Compiler Argument:
-swf-version 13
Use the command-line tool acompc to build a Flex library
project for AIR. This tool is the component compiler provided with
the Flex SDK. If you are not using Flash Builder, use acompc directly.
See Using compc, the component compiler.
For
example:
acompc -source-path $HOME/myExtension/actionScript/src
-include-classes sample.extension.MyExtensionClass sample.extension.MyExtensionHelperClass
-swf-version=13
-output $HOME/myExtension/output/sample.extension.myExtension.swc
The SWF version
specified when compiling the ActionScript library is one of the factors,
along with extension descriptor namespace, that determines whether the
extension is compatible with an AIR application. The SWF version
of the extension cannot exceed the SWF version of the main application
SWF file.
Compatible AIR application version
ANE SWF version
Extension namespace
3.0
10-13
ns.adobe.com/air/extension/2.5
3.1
14
ns.adobe.com/air/extension/3.1
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/air/extensions/WS99209310cacd98cc2d13931c1300f2c84c7-8000.html | crawl-003 | refinedweb | 431 | 50.63 |
pthread_attr_setguardsize()
Set the size of the thread's guard area
Synopsis:
#include <pthread.h> int pthread_attr_setguardsize( pthread_attr_t* attr, size_t guardsize );
Since:
BlackBerry 10.0.0
Arguments:
- attr
- A pointer to the pthread_attr_t structure that defines the attributes to use when creating new threads. For more information, see pthread_attr_init().
- guardsize
- The new value for the size of the thread's guard area.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The pthread_attr_setguardsize() function sets the size of the thread's guard area in the attribute structure attr to guardsize.
If guardsize is 0, threads created with attr have no guard area; otherwise, a guard area of at least guardsize bytes is provided. You can get the default guardsize value by specifying _SC_PAGESIZE in a call to sysconf().
The guardsize attribute controls the size of the guard area for the thread's stack. This guard area helps protect against stack overflows; guardsize bytes of extra memory is allocated at the overflow end of the stack. If a thread overflows into this buffer, it receives a SIGSEGV signal.
The guardsize attribute is provided because:
- Stack overflow protection can waste system resources. An application that creates many threads can save system resources by turning off guard areas if it trusts its threads not to overflow the stack.
- When threads allocate large objects on the stack, a large guardsize is required to detect stack overflows.
Returns:
- EOK
- Success.
- EINVAL
- Invalid pointer, attr, to a pthread_attr_t structure, or guardsize is invalid.
Classification:
Caveats:
If you provide a stack (using attr's stackaddr attribute; see pthread_attr_setstackaddr()), the guardsize is ignored, and there's no stack overflow protection for that thread.
The guardsize argument is completely ignored when using a physical mode memory manager.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_attr_setguardsize.html | CC-MAIN-2018-05 | refinedweb | 320 | 57.87 |
How often have you coded up a simple Windows Form application to clear up the difference between similar events? Where does TextChanged get fired in the sequence KeyDown, KeyPress, KeyUp. When exactly does Load get fired in relation to other events on a complex control?
ControlInspector is designed to answer these and many other questions by hooking all events on an arbitrary windows form control; in a user control; or in a complete windows form. It will recurse through the controls collection and hook events on every sub control, and has special handling for context menus and main menus on forms to make sure that these don't get excluded.
In summary; Control Inspector is like a native .net version of Spy++ for .net events
This article is intended to accompany ContronInspector and give some insight into the techniques it uses. If you just want to use ControlInspector to diagnose your own applications, or to understand Windows Form events better then just download the compiled version of the software, and don't worry about the source!
When you first open Control Inspector you will be presented with a blank screen. You can use the File/Open option to open an arbitrary assembly (.net exe or dll file); you will then be presented with a list of Windows Forms Controls and Forms that are part of the assembly. The entry you select will be loaded into memory and instantiated either by hosting it in a form, or if it is a form it will be constructed directly. You can also use the File/Windows Forms option to display a list of available controls from the System.Windows.Forms namespace. Note that you won't be able to construct some of these controls (eg ButtonBase) because although they derive from Control they are not directly usable.
If the control is hosted in a form, you will see the ControlHostForm above. It has been written to display a red grid around your control so you can see where the control you are analysing ends.
The first tab of the event viewer, "All Events" shows a complete list of events trapped by ControlInspector in the that they sequence. You can look at individual controls by clicking on their individual tab. When you have a particular control in focus, you are able to use the property editor to make on the fly changes to the control (useful to see what effect events are fired by this, and in what order). For example, if you enable anchoring you then can see the resize events generated by resizing the hosting form.
The events are hooked before the control is displayed, so you will get all the initialisation events; and if you close the hosting form, then you will see the events fired until the control dies.
You are able to uncheck particular events to handle either by focusing on the control and using the checked list box, or by right clicking on a particular event in the event view and selecting "stop tracking this event". This option will only stop tracking events for this individual control. There are options to stop tracking groups of events for all controls (for example, all mouse movement related events) to stop your event list getting over populated.
If you aren't interested in ControlInspector under the covers, I suggest you stop reading now!
Last week I attended a Guerilla .NET course hosted by DevelopMentor. I can highly recommend this training company as the whole week was inspiring, and the instructors highly knowledgeable: You know who you are guys!
The instructors issued a challenge for the the class to come up with the best program that they could using any of the techniques that they had learnt during the week. The challenge would be judged on Thursday, so I had just a few days to get busy.
One of the topics covered on the course was Reflection and I decided to use this to discover information about Windows Forms controls and hook on to their events.
ControlInspector also has to use Reflection.Emit to generate a function and delegate that exactly corresponds to the event type to allow it to hook on to arbitrary events. It can only hook on to events that following the function prototype:
void eventName(object sender, eventargstype x)
where eventargstype derives from the EventArgs type. All the standard WindowsForms events do; and your events should also use this structure so there should be no problems with this approach.
The main part of the code is split between MainForm.cs which contains the UI and code for hooking on to events, and GenerateEventAssembly.cs which generates the IL for a function which matches a given delegate. Lets talk about the way the events are hooked in the first place:
void HookEvents(object o, string name) {
Type t = o.GetType();
...
Using Reflection, we step trhough all the events on a particular type. The EventHandlerType will be the type of the delegate required to hook on to this event; eg: void EventHandler(object sender, EventArgs e)
void EventHandler(object sender, EventArgs e)
foreach(EventInfo ei in t.GetEvents())
{
// Discover type of event handler
Type eventHandlerType = ei.EventHandlerType;
// eventHandlerType is the type of the delegate
// (eg System.EventHandler)
// what we need, is to find the type of the second parameter of the
// delegate, eg System.EventArgs
MethodInfo mi = eventHandlerType.GetMethod("Invoke");
ParameterInfo[] pi = mi.GetParameters();
Now comes the magic. The function GetEventConsumerType generates an a class dynamically that has a method "HandleEvent" of exactly the right types. This class is derived from ControlEvent, which contains a function void GenericHandleEvent(object sender, object eventargs) so the generated code is kept to a minimum (I can't write IL for toffee: I wrote a class in C# which did the required type conversion, ran ILDASM on it, then used that as a basis to automatically generate these arbitrary types).
GetEventConsumerType
HandleEvent
void GenericHandleEvent(object sender, object eventargs)
// Get a class derived from ControlEvent which has a HandleEvent method
// taking the appropriate parameters
ControlEvent ce
= GenerateEventAssembly.Instance.GetEventConsumerType(pi[1].ParameterType);
// Hook onto this control event to get the details of all events fired
ce.ControlName = name;
ce.EventName = ei.Name;
ce.EventTrackInfo = trackInfo;
ce.EventFired += new EventHandler(eventFired);
controlEventList.Add(ce);
// Wire up the event handler to our new consumer
Delegate d = Delegate.CreateDelegate(eventHandlerType, ce, "HandleEvent");
ei.AddEventHandler(o, d);
}
...
Finally, if this is a control type, we recurse through sub-controls
if (o is Control) {
Control c = (Control) o;
if (c.Controls != null) {
foreach(Control subControl in c.Controls) {
HookEvents(subControl, name + "/" + ControlName(subControl));
}
}
...
}
The code to do the IL generation is quite well commented, so I won't go into greater detail about that here.
void AddEventsToTreeView(ControlEvent ce, TreeView treeView,
bool includeControlName)
Because of the generic way in which events are hooked up, there is a test user control contained in the ControlInspector.exe - UserControlTest. This contains a button which fires off a user-defined event just to prove that everything is working correctly.
Removing the tab pages from the form when a new control is loaded exhibits a strange bug in the 1.0 framework which I have tried my best to work around. My thanks to instructor Ian Griffiths who helped me with this issue.
I didn't win the Thursday Challenge!
The current project has been tested under VS.NET 2002 and VS.NET 2003, and it works fine on both of them. The downloadable file is a VS.NET 2002 project, but works fine if you upgrade it. I will be releasing a new version with some more changes soon.
1.0 Initial release
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Reflection.Emit
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/3317/ControlInspector-monitor-Windows-Forms-events-as-t?msg=855358 | CC-MAIN-2017-43 | refinedweb | 1,330 | 62.88 |
I need add my java program with a picture.
I need add my java program with a picture. Good evng Frnds
Friends i created 1 jar file in my desktop. I need add this program with 1 picture. Whenever the user double clicking on that picture The program must start instead
hibernate
hibernate hi
what are the necessary jar files that we need to set in the classpath to execute a hibernate application
Add Jar in JPA project
Add Jar in JPA project
In this section, you add all required jar files.
Right...:
Then open a "Add JAR/Folder window. Select all
Hibernate code problem - Hibernate
://
Thanks. Add all the dependent jar files in ur...; Hi friend,
I thinks, add hibernate-annotation.jar
if you have any...Hibernate code problem Hi
This is Raju.I tried the first example
What XML JavaObject-XML transformation tools to use? JDOM, Dom4J, XOM, XStream, JAXB, JiBX, PojoXml
What XML JavaObject-XML transformation tools to use? JDOM, Dom4J, XOM, XStream, JAXB, JiBX, PojoXml I just need some simple and stable xml tools to transform java objects to XML and back from XML to Java Objects. Provided.
a problem during add jar file javax.annotation.Resource
a problem during add jar file javax.annotation.Resource when i use this jar file in my application i got this problem pls tell me about it
Access restriction: The type Resource is not accessible due to restriction on required
How to use JAR file in Java
in JAR files, then you just need to download one single file and Run it.
When you....
JAR files also add portability as this file can be downloaded anywhere and run...JAR which stands for Java ARchive allows a programmer to archive or collect
jar file
jar file how to run a java file by making it a desktop icon i need complete procedur ..through cmd
need a jar file to come out of sleep mode
need a jar file to come out of sleep mode Hi
I need a jar file... mode .So I need a jar file to run and be hidden and check the incoming SMS... explain what for I need to this file .
there is an application program with name
Hibernate - EJB
hibernate ejb hibernatepersistence jar Need to know about hibernate ejb hibernatepersistence jar
While creating a jar how to add MySQL database to project
While creating a jar how to add MySQL database to project Hi,
Please tell me how to attach MySQL database to the Java project while building a jar
or their is any other process
I need Oracle connector jar file. where will i download?
I need Oracle connector jar file. where will i download? I need Oracle connector jar file. where will i download
JAR Generation
JAR Generation Hi,
I have done this code.
Can u pls tell me how to create a jar for this in eclipse, this is only a single java file?
package...);
frame.getContentPane().add(jb);
jb2=new JButton("Generate");
jb2.setBounds(20
java - Hibernate
java tell me how to configure hibernate in eclipse,where i have to add jar files and how can i get the jar files please tell me the clear...://
Thanks
how can i add hibernate plugin to eclipse?
how can i add hibernate plugin to eclipse? how can i add hibernate plugin to eclipse
Adding Jar into Eclipse
Adding Jar into Eclipse Hi,
Please provide Step by step procedure to add jar, tld files and configurations in Eclipse Helios version and i am using Jboss5.
Thanks&Regards,
Shiva s
Adding Spring and Hibernate Capabilities
to add the spring and
hibernate libraries and configuration files to the web... Libraries
Now we will download hibernate libraries and add it to the project...Adding Spring and Hibernate Capabilities
hibernate
hibernate how to impot mysql database client jar file into eclipse for hibernate configuration
Hi Friend,
Please visit the following link:
creating JAR - Java Beginners
installed SQL and Tomcat.I created a JAR of my java project and included... and tried to update the library of the web project.Unfortunately, when i add the new JAR to the lib, the WEBAPPlication libraries doesnot reflect the new class added
Hibernate
Hibernate hi sir i need hibernate complete tutorial for download
Hi Friend,
Please visit the following link:
Hibernate Tutorials
Thanks
HIBERNATE IN CONSOLE & SERVLET
HIBERNATE IN CONSOLE & SERVLET
( part-3...;
In this continuation of the earlier tutorial on Hibernate( July-2005) , the author gives a demo for using Hibernate in a console application & a servlet.
Need of ORM
Need of ORM Why do you need ORM tools like hibernate............... goodevining. I am using hibernate on eclipse, while connection to database Orcale10g geting error .........driver
ARNING... wat to do?? and provide me jar lib
hibernate - Hibernate
hibernate is there any tutorial using hibernate and netbeans to do a web application add,update,delete,select
Hi friend,
For hibernate tutorial visit to :
First Hibernate 4 Example with Eclipse
installed in your
computer properly. Next you will need
jar files of Hibernate...') now to
create a Hibernate example add the jar files of Hibernate. To add... project which you want to add the Hibernate's jar files
then Right click ->
How to Access MS Access in jar.
How to Access MS Access in jar. how do i access my Ms-Access file placed in the same jar file where my application code/class file r present??? Want to access it via Code or is their any alter-native?? Do i need any Driver to do
Download Hibernate 4
/files/
When you will download the hibernate 4 jar file you should be care about... will see a folder
lib\required into which the basic jar file
"hibernate...
classmate-0.5.4.jar
dom4j-1.6.1.jar
javassist-3.12.1.GA.jar
hibernate-commons
Jar file creation - Swing AWT
Jar file creation i have developed a swing application using Net beans IDE..i am also using 3 rd party JAR files...
my application is a serail port... the application i gets a JAR file a lib file is also created in dist folder
Hibernate Performing the usual DB operations
HIBERNATE-BASICS Hibernate Performing the usual DB operations... outlines the basic features of the Hibernate environment and the code for performing... hibernate2.jar. Besides this, we find a number of jar files in the
c:\hibernate2
including jar file in maven pom.xml
including jar file in maven pom.xml Hi,
I want to include a jar file from the WEB-INF/lib folder. Please let's know how to to do this?
Thanks....
Add the following code into your pom.xml file and the error will go away
How to add dynamically rows into database ?Need help pls
How to add dynamically rows into database ?Need help pls Hi everyone,
I really have a problem of insert multiple rows into the database.Now i can...;
<input type="button" value="Add Row" onclick="addRow();" />
<input to resolve the following Exception
HTTP Status 500 -
type Exception report
Creating JAR File - Java Beginners
Creating JAR File Respected Sir,
Thankyou very much for your reply, I had tried what you told me.
The same error is coming back again to me... to change the contents of my manifest file. I read somewhere regarding i have to add
Hibernate Training
;
Recommendations for Persistent Entities
Add Hibernate Tags ...
Hibernate Training
Hibernate
Training Course Objectives
I want to build sessionfactory in hibernate 4. Need help.
I want to build sessionfactory in hibernate 4. Need help. Hello,
I want to build sessionfactory in hibernate 4. Need help
NEED A PROG
NEED A PROG whats the program to add,delete, display elements of an object using collecions. without using linked list
Hi Friend,
Try... of new element to add: ");
int in=input.nextInt();
list.add
need code
need code Create Vehicle having following attributes: Vehicle No., Model, Manufacturer and Color. Create truck which has the following additional attributes:loading capacity( 100 tons?).Add a behavior to change the color
Need help with this!
Need help with this! Can anyone please help me...("Option 1: Add a student");
System.out.println("Option 2: Modify a student... of students you wish to add");
numberOfStudentsString = dataIn.readLine
Hibernate criteria conjunction..
of bellow example-
Steps-
1.Copy library (jar file) of hibernate into your lib folder... Hibernate criteria conjunction.. What is criteria conjunction in Hibernate?
In Hibernate, Criteria Conjunction works as logical
Hibernate - Hibernate
Hibernate pojo example I need a simple Hibernate Pojo example ...){ System.out.println(e.getMessage()); } finally{ } }}hibernate mapping <class name... information, it will help u
add XMl to JTable
add XMl to JTable Hi..
i saw the program of adding add XMl to JTable using DOM parser,but i need to do that in JAXB ,is it possible to do? help me
Hibernate 4 Annotations
;yourProjectName -> Finish
Add Hibernate's jar files (to read how to add jar... be required to download additionally).
Hibernate annotation supported jar files.
hibernate-commons-annotations-4.0.1.Final.jar
hibernate-jpa-2.0
add imageview to uiview
:[UIImage imageNamed:@"imageName.png"]];
You need to add a UIImageView to UIView
Add an Image or Image URL to your ImageView.
Code:
UIImageView...add imageview to uiview i want to add an image to UIView background
Add Date Time together
Add Date Time together I want to add datetime with time and result must be datetime. i am unable to do please help me in php mysql but i need... second text box and add both the text box means datetime with time
I need an example of sessionfactory
I need an example of sessionfactory Hi,
I need an example of session factory in hibernate. If you can provide me one with, that would be great...Thanks
struts hibernate how to integrate struts and hibernate ?is need any plugin ?programmer manually create that plguin?
Hi.../struts-hibernate/
Thanks
add
Java Example to add two numbers Java Example to add two numbers Here is a java example that accepts two integer from the user and find their sum.
import java.util.*;
class AddNumbers
{
public static void main
Hibernate Annotations
.
Download and add the Hibernate-Annotations jar file
in the project...Hibernate Annotations
You have already familiar with hibernate so, here you
will learn only the Hibernate
PHP add parameter to url - PHP
PHP add parameter to url need an example of parse_url in PHP.
Thanks in Configuration File
relationship. <mapping></mapping> tag is
used to add the hibernate... add Hibernate mapping file programmatically using
the addResource() method...Hibernate Configuration File
In this section we will read about the hibernate
add
Hibernate-HQL subquery - Hibernate
Hibernate-HQL subquery Hi,
I need to fetch latest 5 records from...) where ROWNUM <=5;
--------------------------
I need an equivalent query... for more details
Downloading Struts & Hibernate
is extracted in the code
directory.
5. Now we will add the hibernate...:
dist:
[jar] Building jar: C:\Struts-Hibernate-Integration\dist...
Downloading Struts & Hibernate
Listing the Main Attributes in a JAR File Manifest
Listing the Main Attributes in a JAR File Manifest
Jar Manifest: Jar Manifest file is the main
section of a jar file. This file contains the detailed information about
Hibernate How To
to add ordering ability to in Hibernate
Application when using...Hibernate How To
Adding the ordering ability in your
Hibernate... to serach the result using Hibernate Criteria
Finding Unique ResultIn
Hibernate- Oracle connection - Hibernate
Hibernate- Oracle connection In Eclipse I tried
Windows --> Open perspective--> other
in that Database Development
Right click on databaseconnection --> New
Oracle
Added ojdbc14 Jar file path
UID
Hibernate 4 Annotations Tutorial
displays all the required jar files:
Here are the names of Hibernate 4 required jar...-annotations-4.0.2.Final.jar
hibernate-core-4.2.6.Final.jar
hibernate-jpa-2.0...Video Tutorial: Hibernate 4 Annotations Tutorial
In this video tutorial I
Hibernate 4.3 Hello World: Example
and add the
Hibernate libraries files. You will also have to add the MySQL JDK...Developing Hibernate 4 Hello World Example
In this section you will learn how to develop your first Hibernate 3.4 Hello
World example.
Now this version
need a login applet in java
need a login applet in java i'm java beginner. Can some java master... RENTAL");
lab.setBounds(100,20,200,20);
add(lab);
label1 = new Label("Select Boat: ");
label1.setBounds(20,50,180,20);
add(label1);
combo=new JComboBox
Java date add day
Java date add day
In this tutorial, you will learn how to add days to date.
Sometimes, there is a need to manipulate the date and time (like adding days... to add few days to current date and return the date of that day. For this,
we
hibernate projection
hibernate projection why projection are used and in what situation do we need to use projections
Hibernate Book
to develop yourself.
Hibernate in Action carefully explains the concepts you need... need to start working with Hibernate now.
The book focuses on the 20% you need 80% of the time. The pages saved are used to introduce you to the Hibernate
need to generate ID
need to generate ID hai,
i need to generate ID i.e when i select... of empID stored in database is 4, the next time after i select add employ option...);
JLabel lab=new JLabel("Select Name:");
final String st[] = {"Add Employee
Hibernate Collection Mapping
Hibernate Collection Mapping
In this tutorial you will learn about the collection mapping in Hibernate.
Hibernate provides the facility to persist... the persistent collection-value
fields. Hibernate injects the persistent collections based
Regarding Hibernate
Regarding Hibernate Both JDBC and Hibernate are used to connect to database then whats the need of going to hibernate? What are the main differences? And could you let me know please
Hibernate application
Hibernate application Hi,
I am using Netbeans IDE.I need to execute a **Hibernate application** in Netbeans IDE.can any one help me to do | http://roseindia.net/tutorialhelp/comment/93411 | CC-MAIN-2014-42 | refinedweb | 2,323 | 56.66 |
Sep 05, 2017 08:11 PM|claudiogc|LINK
Hi!
I'm newbie in asp.net and mvc so I don't understand why i'm getting: /Claudio/Novo
RouteConfig.cs
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Routing; namespace Ftec.Cadastro.Site { public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Claudio", action = "Index", id = UrlParameter.Optional } ); } } }
ClaudioController.cs
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using Ftec.Cadastro.Site.Models; namespace Ftec.Cadastro.Site.Controllers { public class ClaudioController : Controller { // GET: Claudio public ActionResult Index() { return View(); } //[HttpPost] If i uncomment this line i get that error. public ActionResult Novo() { return View(); } } }
Novo.cshtml
@{ ViewBag. <input type="submit" /> </form> </div>
Index.cshtml
@{ ViewBag.Novo</a>
How can i fix this error?
Contributor
6227 Points
Sep 05, 2017 08:19 PM|ryanbesko|LINK
The Novo view is posting to \Claudio\Index. You do not have an action called Index that is decorated with [HttpPost]. Create that method, do your updates and then do RedirectToAction("Index").
All-Star
47480 Points
Sep 05, 2017 08:19 PM|a2h|LINK
Make sure you have the Novo.cshtml View file inside the Claudio folder in solution
EDIT : Sorry, I test the code with HTTPPost commented out. ryanbesko solution is the correct one
Second Edit: An Alternative solution is to use Action Selectors and add a Get Method like below
//Your Existing method [HttpPost] public ActionResult Novo() { return View(); } //New Method [HttpGet] [ActionName("Novo")] public ActionResult GetNovo() { return View(); }
Sep 06, 2017 06:17 AM|Velen|LINK
Hi claudiogc,
claudiogc//[HttpPost] If i uncomment this line i get that error.
According to your code, the following code in Index.cshtml, you access the Novo action method in Claudio controller with a GET method:
<a href="\Claudio\Novo">Novo</a>
But the the Novo action method would only allow the POST method if you decorate the action method with [HttpPost] (you said uncomment the line above). So that's why you get the 404 Not Found error. The solution is to create a new Nova action method decorated with [HttpGet] to receive the GET request.
Or you could remove the [HttpPost] decorator and check the http method wtihin the current method so you don't have to create a new one:
//no decorator here, allow both GET and POST method
public Action Nova()
{
if (Request.HttpMethod == "GET") { //actions for GET method ... } else if (Request.HttpMethod == "POST") { //actions for POST method... }
}
ryanbeskoThe Novo view is posting to \Claudio\Index. You do not have an action called Index that is decorated with [HttpPost]. Create that method, do your updates and then do RedirectToAction("Index").
Actually, if we do not use the ActionMethodSelectorAttribute to decorate an action method. It would allow both POST method or GET method. In that case, we need to check the http method manually like above.
If you have any other questions, please feel free to contact me any time.
Best Regards
Velen
3 replies
Last post Sep 06, 2017 06:17 AM by Velen | https://forums.asp.net/t/2128128.aspx?+HttpPost+error+ | CC-MAIN-2017-39 | refinedweb | 536 | 52.05 |
Python3: Mutable, Immutable… everything is object!
Object Oriented Programming
Object Oriented Programming (OOP) is a programming paradigm in which the relevant real world concepts for solving a problem are modeled through classes and objects; and under this concept, the programs consist of a series of interactions between these objects.
Object
To understand this paradigm we first have to understand what is a class and what is an object. An object is an entity that groups together a related state and functionality. The state of the object is defined through variables called attributes, while the functionality is modeled through functions that are known by the name of object methods.
An example of an object could be a car, in which we would have attributes such as the brand, the number of doors or the type of fuel and methods such as starting and stopping. Or any other combination of attributes and methods depending on what is relevant to our program.
Class
A class, on the other hand, is nothing more than a generic template from which to instantiate objects; template that is the one that defines what attributes and methods will have the objects of that class. This is why everything in Python is an object.
Continuing with the example: in the real world there is a set of objects that we call cars that have a set of common attributes and a common behavior, this is what we call class. However, my car is not the same as my neighbor’s car, and although they belong to the same class of objects, they are different objects.
Unlike other programming languages where the language supports objects, in Python everything is really an object, including integers, lists, and even functions.
One way to verify this is by using the built-in function
isinstance(object, classinfo), which returns True if the specified object is of the specified type, and False otherwise.
>>> isinstance(1, object)
True>>> isinstance(False, object)
Truedef my_func():
return "hello"
>>> isinstance(my_func, object)
True
As we could verify everything in Python is an object; therefore, all data in Python code is represented by objects or by relationships between objects. And the most commonly used standard composite data types to represent objects, we can see them in the following table:
These objects can be classified as:
- Mutable Objects: because their content (or that value) can be changed at runtime.
- Immutable Objects: because their content (or that value) cannot be changed at runtime.
Type
The simplest way to check the type of object we are working with in Python is to use the built-in
type() function. This will allow us to see that everything can be treated in the same way, as an object -or instance- of the class to which they belong.
>>> x = 42
>>> typex
<class 'int'>>>> y = 24.5
>>> type(y)
<class 'float'>>>> def f(x):
... return (x+1)
...
>>> type(f)
<class 'function'>>>> import math
>>> type(math)
<class 'module'>
With these examples, we can verify that all objects are treated in the same way.
Identity
Another of the built-in Python functions is
id() which returns the address of an object in memory.
>>> x = 1
>>> id(x)
10105088
We create an object with the name of
x and assign it the value of
1. Then we use
id(x) to see that the object is at memory address
10105088.
This allows us to check interesting things about Python. Let’s say we create two variables in Python, one named
x and one named
y, and assign them the same value. For example here:
>>>>> y = "Holberton"
We can use the equality operator (==) to verify that they do indeed have the same value in Python’s eyes:
>>> x == y
True
But are these the same object in memory? In theory, there can be two very different scenarios here.
A scenario (A) in which we actually have two different objects, one with the name of x and one with the name of y, which happen to have the same value. And a scenario (B) where only one object is stored, which has two names that refer to it.
We can use the function
id() presented above to verify this:
>>>>>>> x == y
True
>>> id(x)
139798528064800
>>> id(y)
139798528064872
So, as we can see, Python’s behavior matches Scenario (A) described above. Although
x == y in this example (that is,
x and
yy have the same values), they are different objects in memory. This is because
id(x)!= id(y), as we can explicitly verify:
>>> id(x) == id(y)
False
There is a shorter way to do the above comparison, and that is to use Python’s is operator. Checking if x is y is the same as checking
id(x) == id(y), which means if
x and
y are the same object in memory:
>>> x == y
True
>>> id(x) == id(y)
False
>>> x is y
False
This allows us to see the important difference between the equality operator
== and the identity operator
is. As you can see from the example above, it is entirely possible that the two names in Python (
x and
y) are subject to two different objects (and therefore
x is y is
iFalse), where these two objects have the same value (so that
x == y is
True).
How can we create another variable that points to the same object that
xpoints to? What is called aliasing, which is when two or more variables refer to the same object. We can simply use the assignment operator
=, like this:
>>>>> z = x
To verify that they actually point to the same object, we can use the is operator:
Of course this means they have the same address in memory, as we can explicitly check using
id:
>>> id(x)
139798528064944
>>> id(z)
139798528064944
And of course they have the same value, so we also expect
x == z to return
True:
>>> x == z
True
Mutable and Immutable Objects
Mutable and Immutable Objects
As we indicated, in Python everything is an object, however, there is an important distinction between objects. Some objects are mutable while others are immutable.
Immutable objects
For some types in Python, once we have instantiated those types, they never change. They are immutable. For example,
int objects are immutable in Python. What will happen if we try to change the value of an
int object?
>>> x = 37598
>>> x
37598
>>> x = 37599
>>> x
37599
Well, it seems that we changed successfully. But is this really going to be so? What exactly happened under the hood here? To find out, let’s use
id to investigate further:
>>> x = 37598
>>> x
37598
>>> id(x)
139798529290128
>>> x = 37599
>>> x
37599
>>> id(x)
139798528694160
So we can see that by assigning
x = 37599, we don’t change the value of the object that
x had been linked to earlier. Rather, we create a new object and bind the name
x to it. So after assigning
x = 37598 to
x by using
x = 37598, we had the following state:
And after using
x = 37599 we create a new object and bind the name
x to this new object. The other object with the value of
37598 is no longer accessible by
x (or any other name in this case):
Whenever we assign a new value to a name (in the example above
x) that is bound to an
int object, we actually change the binding of that name to another object.
The same also applies to tuples, strings (str objects), and booleans. In other words, int (and other types of numbers such as float), tuple, bool, and str are immutable objects. Let’s test this hypothesis. What happens if we create a tuple object and then give it a different value?
>>> my_tuple = (1, 2, 3)
>>> id(my_tuple)
139798528063096
>>> my_tuple = (3, 4, 5)
>>> id(my_tuple)
139798528064608
Like an int object, we can see that our assignment actually changed the object that the name
my_tuple is bound to. What happens if we try to change one of the elements of
my_tuple?
>>> my_tuple[0] = 'a new value'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
As we can see, Python doesn’t allow us to modify the content of the
my_tuple object, because it is immutable.
The value of objects of immutable type cannot change without changing the identity of the object. Therefore, whenever we change the value that a variable refers to, we are actually changing the reference object of that variable to a new one. Python keeps an internal counter on how many references an object has. Once the counter reaches zero, which means that no reference is made to the object, the garbage collector in Python removes the object, thus freeing up memory.
Mutable Objects
Some types in Python can be modified after creation and are called mutables. For example, we know that we can modify the content of a
list object:
>>> my_list = [1, 2, 3]
>>> my_list[0] = 'new value'
>>> my_list
['new value', 2, 3]
Does that mean we actually create a new object by assigning a new value to the first element of
my_list? Again we can use id to check:
>>> my_list = [1, 2, 3]
>>> id(my_list)
139798488481416
>>> my_list
[1, 2, 3]
>>> my_list[0] = 'new value'
>>> id(my_list)
139798488481416
>>> my_list
['new value', 2, 3]
Thus, our first assignment
my_list = [1, 2, 3] creates an object at address
139798488481416, with values of
1,
2, and
3:
Then we modify the first element of this list object using
my_list[0] = “new value", that is, without creating a new list object:
Now, let’s create two names,
xand
y both linked to the same list object. We can verify that by using is, or by explicitly checking its ids:
>>> x = y = [1, 2]
>>> x is y
True
>>> id(x)
139798488480520
>>> id(y)
139798488480520
>>> id(x) == id(y)
True
What happens now if we use
x.append(3)? That is, if we add a new element
(3) to the object with the name of
x?
Will
x change? and
y? Well, as we already know, they are basically two names of the same object:
Since this object has changed, when we check their names we can see the new value:
>>> x.append(3)
>>> x
[1, 2, 3]
>>> y
[1, 2, 3]
Note that
x and
y have the same
id as before, as they are still bound to the same
list object:
>>> id(x)
139798488480520
>>> id(y)
139798488480520
Why is it important and how does Python handle mutable and immutable objects?
It is important to know how Python handles mutable and immutable objects to avoid errors or modifying data when that is not the wish. Let’s look at an example.
Next, we define a list (mutable object)
my_list, and tuple (immutable object)
my_tuple. what will happen when we try to execute each of the following statements?
>>> my_list[0][0] = 'Changed!' (1)
>>> my_tuple[0][0] = 'Changed!' (2)
In statement (1), what we are trying to do is change the first my_list element, that is, a tuple. Since a tuple is immutable, this attempt is bound to fail:
>>> my_list[0][0] = 'Changed!'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
Note that what we were trying to do is not change the list, but change the content of its first element. Let’s consider statement (2). In this case, we are accessing the first element of
my_tuple, which happens to be a list, and we modify it. Let’s review this case further and look at the addresses of these elements:
>>> my_tuple = ([1, 1], 2, 3)
>>> id(my_tuple)
139798528063096
>>> type(my_tuple[0])
<class 'list'>
>>> id(my_tuple[0])
139798520723336
When we change
my_tuple[0][0], we don’t really change
my_tuple at all! In fact, after the change, the first element of
my_tuple will still be the object whose memory address is
139798520723336. However, we change the value of that object:
>>> my_tuple[0][0] = 'Changed!'
>>> id(my_tuple)
139798528063096
>>> id(my_tuple[0])
139798520723336
>>> my_tuple
(['Changed!', 1], 2, 3)
Both
id(my_tuple) and
id(my_tuple[0]) remain the same after change.
Since we only modify the value of
my_tuple[0], which is a mutable object of type list, Python allowed this operation.
How arguments are passed to functions and what does that imply for mutable and immutable objects
Its important to know the increment(n):
... n += 1
...
>>> a = 9
>>> increment(a)
>>> a
9
>>> def increment(l):
... l += [4]
...
>>> l = [1, 2, 3]
>>> increment(l)
>>> l
[1, 2, 3, 4]
Preallocation in Python
Now a homework for you:
1. create two variables with values between -5 and 256 and then check if they reference to the same object.
2. Do the same but using values for the variables out of the range above.
What happened?
In Python, upon startup, Python3 keeps an array of integer objects, from -5 to 256. For example, for the int object, macros called NSMALLPOSINTS and NSMALLNEGINTS are used.
What does this mean? This means that when you create an int from the range of -5 and 256, you are actually referencing to the existing object.
This is made to avoid to create again objects that are commonly used and because in that way you can represent any ASCII character. | https://2120.medium.com/python3-mutable-immutable-everything-is-object-80f53327a588?source=post_internal_links---------2---------------------------- | CC-MAIN-2022-33 | refinedweb | 2,211 | 65.96 |
In today's world wide web, Single Sign On for multiple web applications is a common requirement, and it is not an easy thing to implement when these web applications are deployed under different domains. Why? Because user authentication and the maintenance of a user's "logged on" status in web applications (especially in ASP.NET applications) is totally dependent on HTTP cookies, and two web applications cannot simply share a single cookie if they are deployed under different domains.
A previous article has a detailed discussion on user authentication in ASP.NET and its internal implementation strategy. It also has a thorough analysis of some Single Sign On implementation approaches in ASP.NET, along with their pros and cons. Take a look at it if you haven't already.
Yes, I have built a sample SSO application based on the proposed model. It's not just another "Hello world"; it's a working application that implements SSO across three different sites under three different domains. The hard work is already done, and the outcome is simple: you just need to extend a class to make an ASPX page "Single Sign On" enabled in your ASP.NET application. You, of course, have to set up an SSO site and configure your client applications to use the SSO site; that's all (merely ten minutes of work).
The SSO implementation is based on the following high level architecture:
There may be an unlimited number of client sites (in our example, three sites) which can participate under a "Single Sign On" umbrella, with the help of a single "Single Sign On" server (call this the SSO site, www.ssosite.com). As described in the previous article, the browser will not store an authentication cookie for each different client site in this model. Rather, it will store an authentication cookie only for the SSO site (www.ssosite.com), which will be used by the other sites to implement Single Sign On.
In this model, each and every request to any client site (which takes part in the SSO model) is internally redirected to the SSO site (www.ssosite.com) for setting and checking the existence of the authentication cookie. If the cookie is found, the authenticated page of the client site (that is currently requested by the browser) is served to the browser, and if not found, the user is redirected to the login page of the corresponding site.
Initially, the browser doesn't have any authentication cookie stored for www.ssosite.com. So, hitting any authenticated page in the browser for www.site1.com, www.site2.com, or www.site3.com results in an internal redirection to www.ssosite.com (for checking the authentication cookie and retrieving the user Token) and then in serving the login page in the browser output.
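Conceptually, the round trip for a single page hit looks like the following sketch; the URLs and the ReturnURL parameter name are illustrative (only Token is used literally in the sample code):

//1. Browser:  GET http://www.site1.com/Page.aspx
//2. Site1:    302 redirect to http://www.ssosite.com/Authenticate.aspx?ReturnURL=...
//3. SSO site: reads (or sets) its authentication cookie, then issues a
//             302 redirect back to http://www.site1.com/Page.aspx?Token=...
//4. Site1:    serves the authenticated page, or the login page if no Token came back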
I have developed a sample Single Sign On application that incorporates three different sites (www.site1.com, www.site2.com, and www.site3.com) and an SSO server site (www.ssosite.com). The sample SSO implementation code is available for download with this article. You just need to download and set up the sites as instructed in the next section. Once you are done with that, you can test the implementation in different scenarios.
The following section has step by step instructions to test the Single Sign On functionality in different scenarios, and each testing scenario has Firebug network traffic information that depicts the total number of requests (including the lightweight redirect requests) and their sizes. The number of redirect requests and their sizes are marked in green for easy understandability.
Hit the home page URL (Page.aspx) of each of the three sites (www.site1.com, www.site2.com, and www.site3.com) in three different tabs of the same browser window.
Three different login screens will be presented, one in each tab, for each different site:
For presenting the login screen, in total, four requests are sent to the servers, among which three are redirect requests (marked in green). The redirect request sizes are very small (in terms of bytes), and are negligible even considering network latency.
Use any one of the sample credentials in any one of the login screens to log on. Let's log onto www.site1.com with user1/123.
After login, the following screen will be presented for user1 on www.site1.com.
For login, in total, three requests are sent to the servers, among which two are redirect requests (marked in green). The redirect request sizes are very small (in terms of bytes), and are negligible even considering network latency.
As user1 has logged on to www.site1.com, he should be logged onto the other remaining sites, www.site2.com and www.site3.com, at the same time, if those sites are browsed in the same window or in different tabs of the same window. Hitting an authenticated page on www.site2.com or www.site3.com should not present a login screen.
Let's just refresh the current page at www.site2.com and www.site3.com in their corresponding tabs (currently, the login screen is being shown in the browser):
You will see that, instead of the login screen, the authenticated home page is shown. So, user1 is logged onto all three sites: www.site1.com, www.site2.com, and www.site3.com.
Each home page shows a "Go to Profile Page" link which you can click to navigate to another page. This demonstrates that clicking on hyperlinks and navigating to other pages in the application also works without any problem.
For browsing authenticated pages after login, in total, 3 requests are sent to the servers, among which 2 are redirect requests (marked in green). The redirect request sizes are also very small (in terms of bytes), and are negligible even considering network latency.
As expected, the user's "Sign on" status should only be valid for the current session ID, and any authenticated page URL hit to any one of the three sites will be successful if the URL is hit in the same browser window or in a different tab of the same browser window. But, if a new browser window is opened, and an authenticated URL is hit there, it should not be successful and the request should be redirected to the login page (because that is a different browser session).
To test this, open a new browser window and hit any URL of the three sites that points to an authenticated page (you can copy and paste the existing URL addresses). This time, instead of showing the page output, you will see the request being redirected to the login page as follows (assuming that you hit a URL of www.site1.com):
For hitting an authenticated page on a different session, in total, 4 requests are sent to the servers, among which 3 are redirect requests (marked in green). The redirect request sizes are very small (in terms of bytes), and are negligible even considering network latency.
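This session scoping comes from the way the SSO site creates its cookie: a cookie with no explicit Expires value is a non-persistent cookie that dies with the browser session. A minimal sketch of the idea (the actual SetAuthCookie() body in Authenticate.aspx may differ slightly):

//No Expires is set, so the browser keeps the cookie only for the
//current session; a freshly opened browser window carries no cookie
//and therefore lands on the login page
HttpCookie authCookie = new HttpCookie(AppConstants.Cookie.AUTH_COOKIE, Token);
Response.Cookies.Add(authCookie);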
To log out of the sites, click on the "Log out" link on the home page of www.site1.com. The system will log user1 out of the site and will redirect to the login screen again:
For logging out, in total, 4 requests are sent to the servers, among which 3 are redirect requests (marked in green). The redirect request sizes are very small (in terms of bytes), and are negligible even considering network latency.
As user1 is logged out of the site, he should be logged out from www.site2.com and www.site3.com at the same time. So, hitting any authenticated page URL of www.site2.com or www.site3.com should now redirect to their corresponding login screens.
To test this, refresh the current pages of www.site2.com and www.site3.com. Instead of refreshing the page, the system will now redirect the requests to their login pages:
Same as login.
The sample SSO implementation has been developed using Visual Studio 2010 and the .NET 4.0 Framework, and tested on IIS 7 under a Windows Vista machine. However, it doesn't use any 4.0 Framework specific technology or class library, and hence it can be converted to a lower Framework version without much effort, if required.
Follow these steps to set up the example SSO implementation on your machine:
Download and extract the sample implementation code. You will find a separate folder for each of the three client sites and one for the SSO site; as the names imply, each folder contains the web application of the corresponding site. Create the four sites in IIS, pointing each one to its folder, as follows:
Right click on "Sites" and click on "Add Web Site...":
Provide the necessary inputs in the following input form and click "OK":
The site "" will be created in IIS. After creating the site, the site might be shown with a red cross sign in IIS Explorer, indicating that the site is not started yet (this happens in my IIS in Windows Vista Home Premium). In this case, you need to select the site and click on the Restart icon to make sure it starts (the Restart icon is available in the right-middle portion of the screen in IIS Explorer).
Make sure all application pools are running under .NET Framework 4.0 (as the web application has been built in Framework 4.0). To do that, right click on the corresponding application pools (that have the same names as the site names) and select the .NET Framework version in the form:
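If you prefer the command line, the same change can be made with the appcmd tool that ships with IIS 7 (the application pool names here assume they match the site names):

%windir%\system32\inetsrv\appcmd set apppool "www.site1.com" /managedRuntimeVersion:v4.0
%windir%\system32\inetsrv\appcmd set apppool "www.site2.com" /managedRuntimeVersion:v4.0
%windir%\system32\inetsrv\appcmd set apppool "www.site3.com" /managedRuntimeVersion:v4.0
%windir%\system32\inetsrv\appcmd set apppool "www.ssosite.com" /managedRuntimeVersion:v4.0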
Add the following entries to your hosts file (C:\Windows\System32\drivers\etc\hosts) so that the four host names resolve to your local IIS:

127.0.0.1 localhost
127.0.0.1 www.site1.com
127.0.0.1 www.site2.com
127.0.0.1 www.site3.com
127.0.0.1 www.ssosite.com
If things are correctly done, you should be able to run the sites and test them as shown above. Otherwise, please verify whether there is anything missing or misconfigured by reviewing the steps from the start.
Good question. The sample SSO implementation works fine. But, as a developer, you would likely be more interested in how to implement SSO in your own ASP.NET sites using the things developed. While implementing the SSO model, I tried to make a pluggable component (SSOLib.dll) so that it requires minimal programmatic change and configuration. Assuming that you have some existing ASP.NET applications, you need the following steps to implement "Single Sign On" across them:

Add a reference to SSOLib.dll in each client application, and put the following configuration entries in each client site's web.config file:
<!--Configuration section for SSOLib-->
<configSections>
<sectionGroup name="applicationSettings"
type="System.Configuration.ApplicationSettingsGroup, System,
Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
<section name="SSOLib.Properties.Settings"
type="System.Configuration.ClientSettingsSection, System,
Version=4.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089"
requirePermission="false" />
</sectionGroup>
</configSections>
<applicationSettings>
<SSOLib.Properties.Settings>
<setting name="SSOLib_Service_AuthService" serializeAs="String">
<!--URL of the authentication web service hosted on the SSO site-->
<value>http://www.ssosite.com/AuthService.asmx</value>
</setting>
</SSOLib.Properties.Settings>
</applicationSettings>
<appSettings>
<add key="SSO_SITE_URL"
value="{0}" />
<add key="LOGIN_URL" value="~/Login.aspx" />
<add key="DEFAULT_URL" value="~/Page.aspx" />
</appSettings>
<!--End Configuration section for SSOLib-->
Note: Modify the configuration values according to the SSO site URL of your setup and your application specific needs.
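For reference, SSOLib reads these values through the standard .NET configuration API. The following sketch shows how the appSettings keys might be consumed from inside a page; the ReturnURL parameter name is an illustrative assumption, not something mandated by the library:

using System.Configuration;
using System.Web;

//Read the configured values (the keys must match the appSettings above)
string loginUrl = ConfigurationManager.AppSettings["LOGIN_URL"];
string ssoSiteUrlFormat = ConfigurationManager.AppSettings["SSO_SITE_URL"];

//Build the SSO redirection URL by injecting the query string into the
//{0} placeholder of SSO_SITE_URL
string ssoUrl = string.Format(ssoSiteUrlFormat,
    "ReturnURL=" + HttpUtility.UrlEncode(Request.Url.AbsoluteUri));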
Modify the code-behind classes of the authenticated ASPX pages so that they extend SSOLib.PrivatePage instead of System.Web.UI.Page.
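For example, a typical authenticated page's code-behind changes like this (Profile is a hypothetical page name):

//Before: a regular authenticated page
public partial class Profile : System.Web.UI.Page
{
}

//After: the same page, now participating in Single Sign On
public partial class Profile : SSOLib.PrivatePage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        //By the time Page_Load runs, PrivatePage.OnLoad() has already
        //verified the user's logged in status against the SSO site
    }
}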
That's it! You should be done with your SSO implementation.
OK, there is a high chance that you already have a base class which is extended by the code-behind classes in your ASP.NET applications. If that is so, integrating the SSOLib.PrivatePage may become even easier for you.
Let's say there is already a BasePage class which is extended by the code-behind classes of the authenticated pages (pages which are accessible only to authenticated users) in one of your applications. In this case, instead of modifying the code-behind classes of all the ASPX pages, you might just need to modify the BasePage so that it extends SSOLib.PrivatePage, and you are done.
BasePage
class BasePage : SSOLib.PrivatePage
{
...
}
Another alternative is to modify SSOLib.PrivatePage to extend the existing BasePage (you have the source code, you can do it) and modify all the existing aspx.cs classes of the authenticated pages to extend SSOLib.PrivatePage as suggested. That is:
class PrivatePage : BasePage
{
...
}
If there is any conflicting code or method between the existing BasePage class and the SSOLib.PrivatePage class, you might need to modify some code in these two classes. It would be preferable not to change the code of SSOLib.PrivatePage unless any bug is discovered, and it would be better to change the existing BasePage code as required. But, feel free to change the code of SSOLib.PrivatePage if you really need to, it's all yours!
Good question. In the ideal case, using this example SSO model, you won't have to write a single line of Sign On oriented code to implement SSO in your ASP.NET applications (except some configuration and inheritance changes). How is this possible? Who is managing all the dirty SSO stuff?
SSOLib and the SSO site are the two magicians doing all the tricks. SSOLib is a DLL which is used by each client site to carry out the following things:
The following diagram depicts the role of SSOLib in the SSO model:
The most important thing inside SSOLib is the PrivatePage class which is inherited by the code-behind pages of the authenticated classes. This class inherits the System.Web.UI.Page class, and overrides the OnLoad() method, as follows:
PrivatePage
OnLoad()
public class PrivatePage : Page
{
protected override void OnLoad(EventArgs e)
{
//Set caching preferences
SetCachingPreferences();
//Read QueryString parameter values
LoadParameters();
if (IsPostBack)
{
//If this is a postback, do not redirect to SSO site.
//Rather, hit a web method to the SSO site
//to know user's logged in status
//and proceed based on the status
HandlePostbackRequest();
base.OnLoad(e);
return;
}
//If the current request is marked not
//to be redirected to SSO site, do not proceed
if (SessionAPI.RequestRedirectFlag == false)
{
SessionAPI.ClearRedirectFlag();
base.OnLoad(e);
return;
}();
}
base.OnLoad(e);
}
}
Basically, OnLoad() is called whenever a Page object is loaded as a result of a URL hit in the browser, and the core SSO logic is implemented inside this method. All the codes is self descriptive and documented to depict what is going on.
Page
More on the SSOLib functionality in the following sections.
The SSO site has the following two important functionalities:
Following is the core functionality that is performed by Authenticate.aspx. The codes is self-descriptive and documented for easy understandability.
protected void Page_Load(object sender, EventArgs e)
{
//Read request paramters and populate variables
LoadRequestParams();
if (Utility.StringEquals(Action, AppConstants.ParamValues.LOGOUT))
{
//A Request paramter value Logout indicates
//this is a request to log out the current user
LogoutUser();
return;
}
else
{
if (Token != null)
{
//Token is present in URL request. That means,
//user is authenticated at the client site
//using the Login screen and it redirected
//to the SSO site with the Token in the URL parameter,
//so set the Authentication Cookie
SetAuthCookie();
}
else
{
//User Token is not available in URL. So, check
//whether the authentication Cookie is available in the Request
HttpCookie AuthCookie =
Request.Cookies[AppConstants.Cookie.AUTH_COOKIE];
if (AuthCookie != null)
{
//Authentication Cookie is available
//in Request. So, check whether it is expired or not.
//and redirect to appropriate location based upon the cookie status
CheckCookie(AuthCookie, ReturnUrl);
}
else
{
//Authentication Cookie is not available
//in the Request. That means, user is logged out of the system
//So, mark user as being logged out
MarkUserLoggedOut();
}
}
}
}
Another good question. The core SSO logic seems pretty straightforward. That is:
If current request is a PostBack,
If this is a PostBack in Login page (For Login)
Do Nothing
Else
Do not redirect to SSO site. Rather, invoke a web service at SSO site
to know user's logged in status, using the User *Token.
If user is not logged out
Proceed the normal PostBack operation.
Else
Redirect to login page
Else
If current request is not redirected from the SSO Site,
Redirect it to SSO site with setting ReturnUrl with
the current Request URL and parameters.
Else
Get user's Logged in status on SSO Site
by invoking a web service with user *Token
If user is logged out there,
Redirect to Login page
If current request is a page refresh,
Redirect to SSO site with ReturnUrl
Else
Redirect to the originally requested URL
End If
End If
*User token is a hash code of a GUID that identifies a user's login onto the SSO site uniquely. Each time a user is logged onto the SSO site, the token is generated at the SSO site, and this token is used later to set the authentication cookie and to retrieve the user object by the client sites.
But there are some obvious issues that were needed to be handled to implement the SSO logic. These are marked in bold in the above logic:
Implement "Redirect to login page" and "Redirect to the originally requested URL"
SSOLib.PrivatePage redirects to the SSO site, or redirects to the currently requested page at the client site, based upon the situation. But, there is a problem if SSOLib.PrivatePage redirects to a page of the current site. As each authenticated page extends the SSOLib.PrivatePage class, a redirect to a page in the current site from SSOLib.PrivatePage would redirect to itself again and again, and will cause an infinite redirect loop.
To solve this issue, an easy fix could be to add a request parameter (say, Redirect=false) to indicate that the request should not be redirected any further. But, this would allow the user to see the Request parameter and allow the user to "hack" the system by altering its value. So, instead of using a Request parameter, I used a Session variable to stop further redirection, before redirecting to any URL of the current site from SSOLib.PrivatePage. In OnLoad(), I check the Session variable and reset it and return as follows:
Redirect=false
Request
OnLoad
()
//If the current request is marked not to be redirected to SSO site, do not proceed
if (SessionAPI.RequestRedirectFlag == false)
{
SessionAPI.ClearRedirectFlag();
return;
}
Detect whether "Current request is not redirected from the SSO Site", and whether "current request is a page refresh"
SSOLib.PrivatePage redirects to the SSO site for setting or checking the authentication cookie. After the SSO site is done with its work, it redirects back to the calling site using the URL that is set in ReturnUrl.
ReturnUrl
This also creates a scenario where the client site might again redirect to the SSO site and the SSO site again redirects to client site and creates an infinite redirection loop. Unlike the previous situation, this time, a Session variable could not be used because the redirection is occurring from the SSO site, and the client site and the SSO site have different Session states. So, a Request parameter value should be used to prevent further redirect to the SSO site once the SSO site redirects to the client site.
But again, using a Request parameter to prevent redirection would allow the user to alter it and break the normal functionality. To work-around this, the Request parameter value is set with a hash of a GUID (RequestId=Hash(New GUID)), and this is appended from the SSO site before redirecting back to the client-site URL.
(RequestId=Hash(New GUID)
The redirect request executes the OnLoad() method of SSOLib.PrivatePage again, and this time, it finds the RequestId, and this indicates that this request is redirected back from the SSO site and hence this should not be redirected to the SSO site further.
RequestId
But, what if the user alters the value of the RequestId in the query string and hits the URL, or the user just refreshes the current page?
As each different request is to be redirected to the SSO site (except the postback hits), in this case, this request should be redirected to the SSO site as usual. But, the request URL already contains a RequestId, and despite this, the request should be redirected to the SSO site. So, how should SSOLib.PrivatePage understand this?
There is only one way. A specific RequestId should be valid for each particular redirect from the SSO site only, and once the RequestId is received at the client site from the Request parameter, it should expire instantly so that even if the next URL hit contains the same RequstId, or if the next URL contains an invalid value, it redirects to the SSO site.
RequstId
The following logic has been used to handle this scenario:();
}
//And,
UserStatus userStatus = AuthUtil.Instance.GetUserStauts(Token, RequestId);
if (!userStatus.UserLoggedIn)
{
//User is not logged in at SSO site. So, return the Login page to user
RedirectToLoginPage();
return;
}
if (!userStatus.RequestIdValid)
{
//Current RequestId is not valid. That means,
//this is a page refresh and hence, redirect to SSO site
RedirectToSSOSite();
return;
}
if (CurrentUser == null || CurrentUser.Token != Token)
{
//Retrieve the user if the user is not found
//in session, or, the current user in session
//is not the one who is currently logged onto the SSO site
CurrentUser = AuthUtil.Instance.GetUserByToken(Token);
if (CurrentUser.Token != Token || CurrentUser == null)
{
RedirectToSSOSite();
return;
}
}
On the other hand, before redirecting to the client site, the SSO site generates a RequestId, appends it with the query string, and puts it in Application using the RequestId as the key and value. Following is how the SSO site redirects back to the client site:
Application
/// <summary>
/// Append a request ID to the URl and redirect
/// </summary>
/// <param name="Url"></param>
private void Redirect(string Url)
{
//Generate a new RequestId and append to the Response URL.
//This is requred so that, the client site can always
//determine whether the RequestId is originated from the SSO site or not
string RequestId = Utility.GetGuidHash();
string redirectUrl = Utility.GetAppendedQueryString(Url,
AppConstants.UrlParams.REQUEST_ID, RequestId);
//Save the RequestId in the Application
Application[RequestId] = RequestId;
Response.Redirect(redirectUrl);
}
Note that, before redirection, RequestId is stored in the Application scope to mark that this RequestId is valid for this particular response to the client site. Once the client site receives the redirected request, it executes the GetUserStatus() Web Service method, and following is how the GetUserStatus() web method clears the RequestId from the Application scope so that any subsequent requests with the same RequestId or any request with an invalid RequestId can be tracked as an invalid RequestId:
GetUserStatus()
/// <summary>
/// Determines whether the current request is valid or not
/// </summary>
/// <param name="RedirectId"></param>
/// <returns></returns>
[WebMethod]
public UserStatus GetUserStauts(string Token, string RequestId)
{
UserStatus userStatus = new UserStatus();
if (!string.IsNullOrEmpty(RequestId))
{
if ((string)Application[RequestId] == RequestId)
{
Application[RequestId] = null;
userStatus.RequestIdValid = true;
}
}
userStatus.UserLoggedIn =
HttpContext.Current.Application[Token] == null ? false : true;
return userStatus;
}
The GetUserStauts() Web Service method returns the user's status inside a UserStatus object, which has two properties: UserLoggedIn and RequestIdValid.
GetUserStauts()
UserStatus
UserLoggedIn
RequestIdValid
Once a user is logged onto the SSO site via the Authenticate Web Service method, it generates a User Token (hash code of a new GUID) and stores the user Token inside an Application variable using the Token as the Key:
Authenticate
/// <summary>
/// Authenticates user by UserName and Password
/// </summary>
/// <param name="UserName"></param>
/// <param name="Password"></param>
/// <returns></returns>
[WebMethod]
public WebUser Authenticate(string UserName, string Password)
{
WebUser user = UserManager.AuthenticateUser(UserName, Password);
if (user != null)
{
//Store the user object in the Application scope,
//to mark the user as logged onto the SSO site
//Along with the cookie, this is a supportive way
//to trak user's logged in status
//In order to track a user as logged onto the SSO site
//user token has to be presented in the cookie as well as
//he/she has to be presented in teh Application scope
HttpContext.Current.Application[user.Token] = user;
}
return user;
}
When the user logs out of the system from any client site, the authentication cookie is removed, and also the user object is removed from the Application scope (inside Authenticate.aspx.cs in the SSO site):
/// <summary>
/// Logs out current user;
/// </summary>
private void LogoutUser()
{
//This is a logout request. So, remove the authentication Cookie from the response
if (Token != null)
{
HttpCookie Cookie = Request.Cookies[AppConstants.Cookie.AUTH_COOKIE];
if (Cookie.Value == Token)
{
RemoveCookie(Cookie);
}
}
//Also, mark the user at the application scope as null
Application[Token] = null;
//Redirect user to the desired location
//ReturnUrl = GetAppendedQueryString(ReturnUrl,
// AppConstants.UrlParams.ACTION, AppConstants.ParamValues.LOGOUT);
Redirect(ReturnUrl);
}
So, without redirecting to the SSO site, it is possible to know the user's logged in status just by checking the user's presence in the Application scope of the SSO site. The client sites invoke the Web Service method of the SSO site, and the SSO site returns the user's logged in status inside the UserStatus object.
This method of knowing the user's logged in status is handy because when a postback occurs, the client sites would not want to redirect to the SSO site (because, if they do that, the postback event methods cannot be executed).
In such cases, they invoke the web method to know the user's logged in status, and if the user is not available at the SSO site, the current request is redirected to the login page. Otherwise, the normal postback event method is executed.
True. Once a user is authenticated, he/she is stored in the Application scope to mark as logged in. But, the Application scope is a global scope irrespective of the site and user sessions. So, there is a risk that the user might also get marked as logged in for all browser sessions.
This sounds risky. But, this is handled with care so that the user object of a particular browser session is not available to other browser sessions. Let us now see how this has been handled.
Once a user logs onto the SSO site, the user is stored in the Applicationscope against the user Token, which is valid only for a particular user Login session.
Token
If some direct request is hit in a new window (hence with a new Session) with the user Token (with or without the RequestId) by copying the URL from the address bar, the system will not let the URL request bypass the login screen. Why? Because the authentication cookie that is set by the SSO site is a "non-persistent" cookie, and hence this cookie is sent by the browser to the SSO site only if subsequent requests are hit in the same browser session (from the same browser window or different tabs in the same window). That means, if a new browser window is opened, it does not have any authentication cookie to send to the SSO site, and naturally, the request is redirected to the login page of the client site. So, even if a user is stored in the Application scope in the SSO site, that user object is stored against a different user Token as a key, that can never be accessed for any new request in the new session, because this request does not know about the existing user Token, and once the user logs onto this new browser session, it gets a new user Token which never matches with the existing ones.
Session
The web.config of the SSO site has configuration options for configuring the cookie timeout value and for enabling/disabling the sliding expiration of the cookie.
<appSettings>
<add key="AUTH_COOKIE_TIMEOUT_IN_MINUTES" value="30"/>
<add key="SLIDING_EXPIRATION" value="true"/>
</appSettings>
The cookie timeout value can be configured in the web.config of the SSO site and the timeout value applies to all client sites under the SSO. That is, if the cookie timeout value is specified in the web.config as 30 minutes and if user1 logs onto, the cookie is available for the next 30 minutes in the browser, and hence user1 is signed on the other two sites for this 30 minutes, unless user1 is logged out of the site.
Now, how is this cookie timeout implemented? Simple, by setting the cookie expiration time, of course.
Unfortunately, I couldn't do that. Why? Because, by default, when a cookie is set in the Response, it is created as a non-persistent cookie (the cookie is stored only in the browser's memory for the current session, not in the client's disk). If the expiry date is specified for the cookie, ASP.NET runtime automatically instructs the browser to store the cookie as a persistent cookie.
Response
In our case, we don't want to create a persistent cookie, because this will let the other sessions to also send the authentication cookie to the SSO site and eventually mark the user as logged in. We do not want that to happen.
But, the expiration datetime has to be set somehow. So, I stored the expiration value in the cookie's value, along with appending to the user's Token, as follows:
/// <summary>
/// Set authentication cookie in Response
/// </summary>
private void SetAuthCookie()
{
HttpCookie AuthCookie = new HttpCookie(AppConstants.Cookie.AUTH_COOKIE);
//Set the Cookie's value with Expiry time and Token
int CookieTimeoutInMinutes = Config.AUTH_COOKIE_TIMEOUT_IN_MINUTES;
AuthCookie.Value = Utility.BuildCookueValue(Token, CookieTimeoutInMinutes);
//Appens the Token and expiration DateTime to build cookie value
Response.Cookies.Add(AuthCookie);
//Redirect to the original site request
ReturnUrl = Utility.GetAppendedQueryString(ReturnUrl,
AppConstants.UrlParams.TOKEN, Token);
Redirect(ReturnUrl);
}
/// <summary>
/// Set cookie value using the token and the expiry date
/// </summary>
/// <param name="Value"></param>
/// <param name="Minutes"></param>
/// <returns></returns>
public static string BuildCookueValue(string Value, int Minutes)
{
return string.Format("{0}|{1}", Value,
DateTime.Now.AddMinutes(Minutes).ToString());
}
Eventually, when the cookie is received at the SSO site, its value is retrieved as follows:
/// <summary>
/// Reads cookie value from the cookie
/// </summary>
/// <param name="cookie"></param>
/// <returns></returns>
public static string GetCookieValue(HttpCookie Cookie)
{
if (string.IsNullOrEmpty(Cookie.Value))
{
return Cookie.Value;
}
return Cookie.Value.Substring(0, Cookie.Value.IndexOf("|"));
}
And, the expiration date time is retrieved as follows:
/// <summary>
/// Get cookie expiry date that was set in the cookie value
/// </summary>
/// <param name="cookie"></param>
/// <returns></returns>
public static DateTime GetExpirationDate(HttpCookie Cookie)
{
if (string.IsNullOrEmpty(Cookie.Value))
{
return DateTime.MinValue;
}
string strDateTime =
Cookie.Value.Substring(Cookie.Value.IndexOf("|") + 1);
return Convert.ToDateTime(strDateTime);
}
If SLIDING_EXPIRATION is set to true in the web.config, the cookie expiration date-time value is increased with each request, with the minute value specified in AUTH_COOKIE_TIMEOUT_IN_MINUTES in the web.config. The following code does that:
SLIDING_EXPIRATION
true
AUTH_COOKIE_TIMEOUT_IN_MINUTES
/// <summary>
/// Increases Cookie expiry time
/// </summary>
/// <param name="AuthCookie"></param>
/// <returns></returns>
private HttpCookie IncreaseCookieExpiryTime(HttpCookie AuthCookie)
{
string Token = Utility.GetCookieValue(AuthCookie);
DateTime Expirytime = Utility.GetExpirationDate(AuthCookie);
DateTime IncreasedExpirytime =
Expirytime.AddMinutes(Config.AUTH_COOKIE_TIMEOUT_IN_MINUTES);
Response.Cookies.Remove(AuthCookie.Name);
HttpCookie NewCookie = new HttpCookie(AuthCookie.Name);
NewCookie.Value =
Utility.BuildCookueValue(Token, Config.AUTH_COOKIE_TIMEOUT_IN_MINUTES);
Response.Cookies.Add(NewCookie);
return NewCookie;
}
Yes! It surely can be used, but before that, some security and other cross-cutting issues have to be addressed. This is just a basic implementation, and I didn't verify the model with a professional Quality Assurance process (though I did some basic acceptance testing myself). Also, this authentication does not offer the full flexibility and powers that Forms authentication provides. Additionally, it does not have the built-in authorization mechanism of Forms authentication, and hence you might need to write some more customization on the current SSO implementation, based upon your specific requirements.
However, I'll try to update the SSO model to enrich it with more features and make it robust so that this could be used in commercial systems without requiring any customization.
Any suggestion or feedback is highly welcome. Ad. | http://www.codeproject.com/Articles/114484/Single-Sign-On-SSO-for-cross-domain-ASP-NET-applic?fid=1589995&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2015-14 | refinedweb | 5,105 | 52.19 |
The SQL Server Mobile data provider classes in the Microsoft.Data.SqlServerCe namespace provide programmatic access to SQL Server Mobile databases from a managed application running on a supported device. The classes are similar to the classes in the .NET data provider for SQL Server. They let you connect to a SQL Server Mobile database, execute commands, retrieve result sets, refresh result sets, work with data offline, and synchronize local updates with the database. The data provider for SQL Server Mobile does not support batch queries or nested transactions.
The following subsections provide examples that show how to use the SQL Server Mobile classes and include descriptions of the classes. You need a reference to the System.Data.SqlServerCe assembly to compile and run the examples. To add the reference, select Microsoft SQL Mobile from the .NET tab of the Add Reference dialog box in Visual Studio 2005.
This example creates a database named TestDb.sdf:
using System;
using System.Data.SqlServerCe;
class Program
{
static void Main(string[] args)
{
SqlCeEngine engine = new SqlCeEngine(
"data source=TestDb.sdf; database password=password;");
engine.CreateDatabase( );
engine.Dispose( );
Console.WriteLine("Press any key to continue.");
Console.ReadKey( );
}
}
Running the example creates the mobile database (.sdf file) in the bin\Debug folder (if you compile a debug version of the example).
You can connect to this database in SQL Server Management Studio. From the main menu, select View Registered Server Types SQL Server Mobile
. Right-click SQL Server Mobile Edition Databases in the Registered Servers window and select Server Registration from the context menu to open the New Server Registration dialog box. Complete the Database file field with the full path to the TestDb.sdf file and the Password field with the password. Click the Save button to register the mobile database.
The local connection string that can be specified either in the SqlCeEngine
class constructor or using the LocalConnectionString
property has properties described in Table 21-1.
Property
Description
autoshrink threshold
Percent of free space allowed in the database before autoshrink starts. The default value is 60. A value of 100 disables autoshrink.
data source
Name of the SQL Server Mobile database file (.sdf) and, optionally, specifies the absolute path.
database password
Database password up to 40 characters long. If not specified, the default is no password.
A database password cannot be recovered if lost.
default lock timeout
Length of time, in milliseconds, that a transaction will wait for a lock. The default value is 2000.
default lock escalation
Number of locks a transaction will acquire before escalating from row to page or from page to table. The default value is 100.
encrypt database
Boolean value specifying whether the database is encrypted. You must specify a password to enable database encryption. The default value is false.
If the database password is lost, the data cannot be retrieved.
flush interval
Interval before all committed transactions are committed to disk, in seconds. The default value is 10.
locale identifier
Locale ID (LCID) to use with the database.
max buffer size
Largest amount of memory, in kilobytes, that SQL Server Mobile can use before it starts flushing data changes to disk. The default value is 640.
max database size
Maximum size of the database file, in megabytes. The default value is 128.
mode
Specifies how the database is opened. The options are:
Read Write
Opens the database so that other processes can open and modify the database
Read Only
Opens a read-only copy of the database
Exclusive
Opens the database so that other processes cannot open or modify the database
Shared Read
Opens the database so that other processes are allowed read-only access to the database
The default mode is Read Write.
temp file directory
Location of the temporary database. The data source is used for temporary storage if a temporary database is not specified.
temp file max size
Maximum size of the temporary database file, in megabytes. The default value is 128.
The classes used to manage SQL Server Mobile
databases and access data in a SQL Server Mobile database are described in Table 21-2. The data access classes are similar to those for the SQL Server data provider. Corresponding classes are prefixed by SqlCe instead of Sqlfor example, SqlCeConnection instead of SqlConnection.
Class
SqlCeCommand
T-SQL statement to execute against a database.
SqlCeCommandBuilder
Automatically creates single-table commands based on a SELECT query. Also used to update a database with changes made to a DataTable or DataSet object using a data adapter.
SqlCeConnection
Connection to the SQL Server Mobile database.
SqlCeDataAdapter
Used to fill a DataTable or DataSet object and subsequently update the database with changes made offline.
SqlCeDataReader
Provides access to a result set as a forward-only stream of data rows.
SqlCeEngine
Represents the SQL Server Mobile Database Engine. Used to create, modify, and manage a SQL Server Mobile database.
SqlCeError
Information about a specific SqlCeException object returned by the SQL Server Mobile data provider.
SqlCeErrorCollection
Collection of all errors generated by the SQL Server Mobile data provider.
SqlCeException
The exception raised when the provider returns a warning or error from the SQL Server Mobile database.
SqlCeFlushFailureEventArgs
Data for a flush failure (FlushFailure) event.
SqlCeFlushFailureEventHandler
The method that handles the FlushFailure event.
SqlCeInfoMessageEventArgs
Data for a warning (InfoMessage) event from the database.
SqlCeInfoMessageEventHandler
The method that handles the InfoMessage event.
SqlCeLockTimeoutException
The exception raised when a lock timeout occurs.
SqlCeParameter
A parameter to a SQL command (SqlCeCommand).
SqlCeParameterCollection
A collection of parameter (SqlCeParameter) objects and their mappings to columns.
SqlCeRemoteDataAccess
A remote data access instance.
SqlCeReplication
A replication instance.
SqlCeResultSet
An updateable, bindable, scrollable cursor.
SqlCeRowUpdatedEventArgs
Data for the row updated (RowUpdated) event that occurs when a row in the database is updated using a data adapter.
SqlCeRowUpdatedEventHandler
The method that handles the RowUpdated event.
SqlCeRowUpdatingEventArgs
Data for the row updating (RowUpdating) event that occurs before a row in the database is updated using a data adapter.
SqlCeRowUpdatingEventHandler
The method that handles the RowUpdating event.
SqlCeTransaction
A SQL transaction.
SqlCeTransactionInProgressException
The exception raised when an attempt is made to modify a database while a transaction is in progress.
SqlCeUpdatableRecord
A row of updateable data from the database. The SqlCeResult set contains a collection of SqlCeUpdatableRecord objects.
The SqlCeEngine
class public properties and methods used to create and manage SQL Server Mobile
databases are described in Table 21-3.
Constructor
Takes an optional argument specifying the connection string to the SQL Server Mobile database.
LocalConnectionString
The connection string to the SQL Server Mobile database. The connection string properties are described in Table 21-1.
Methods
Compact( )
Reclaims space in the database file and changes properties of the database specified in the local connection string.
CreateDatabase( )
Creates a new database.
Repair( )
Attempts to repair a corrupted database.
Shrink( )
Reclaims space in the database file.
Verify( )
Verifies that the database is not corrupted.
The examples in this section show how to maintain a SQL Server Mobile
database using the SqlCeEngine class.
This example verifies that a database is not corrupted. If the database is corrupted, it is repaired.
using System;
using System.Data.SqlServerCe;
class Program
{
static void Main(string[] args)
{
// connect to the database
SqlCeEngine engine = new SqlCeEngine(
"data source=TestDb.sdf; database password=password;");
// check if the database is corrupted and repair if it is
if (!engine.Verify( ))
{
engine.Repair(null, RepairOption.RecoverCorruptedRows);
Console.WriteLine("Database repaired.");
}
Console.WriteLine("Press any key to continue.");
Console.ReadKey( );
}
}
This example connects to the SQL Server Mobile database created in the preceding example. The Verify( ) method of the SqlCeEngine class checks the checksum for each database page to determine whether the database file is corrupt. A corrupt database file returns false and should be repaired using the Repair( ) method of the SqlCeEngine class. Repair( ) takes a single argument from the RepairOption enumerationeither DeleteCorruptedRows or RecoverCorruptedRows. The RecoverCorruptedRows option causes the engine to try to recover data from corrupted pages. However, the data is not guaranteed to be free of corruption. The DeleteCorruptedRows option results in data that is free of corruption, but because corrupt data is discarded, significant data can be lost.
The internal structure of a SQL Server Mobile database can become fragmented over time, resulting in wasted space. You can use the Shrink( ) or Compact( ) method of the Engine class to reclaim the space:
engine.Shrink( );
The Shrink( ) method of the SqlCeEngine class is used to reclaim wasted space in the .sdf file. The Compact( ) method is described in the following subsection.
You can configure the database to automatically shrink when a fragmentation threshold is exceeded by setting the autoshrink threshold property (described in Table 21-1) in the LocalConnectionString property of the SqlCeEngine object.
The Compact( ) method of the SqlCeEngine class reclaims space in the database just as the Shrink( ) method does, but also lets you change database connection settings by specifying them in an optional argument. For example, the following statement changes the database password to newPassword:
engine.Compact("database password=newPassword;");
SQL Server Mobile
is file based, so you can perform some common database tasks using the filesystem. You can back up a database by closing all open connections to it and copying the .sdf database file. Similarly, you can restore the database by copying the backup .sdf file to its original location.
You drop a database by closing all connections to it and deleting the .sdf file using the filesystem APIs. For example, the following statement deletes the database named TestDb.sdf created at the beginning of this section:
System.IO.File.Delete("TestDb.sdf");
Because SQL Server Mobile does not support SMO, you create a table by executing T-SQL DDL commands using the ExecuteNonQuery( ) method of the SqlCeCommand class. This example creates a table named TestTable containing two columns:
using System;
using System.Data.SqlServerCe;
class Program
{
static void Main(string[] args)
{
SqlCeConnection conn = new SqlCeConnection(
"data source=TestDb.sdf; database password=password;");
conn.Open( );
SqlCeCommand cmd = new SqlCeCommand(
"CREATE TABLE TestTable(ID int, Description nvarchar(100))",
conn);
cmd.ExecuteNonQuery( );
conn.Close( );
Console.WriteLine("Press any key to continue.");
Console.ReadKey( );
}
}
The example uses SqlCeConnection and SqlCeCommand objects to execute the CREATE TABLE T-SQL command against the SQL Server Mobile database. This is similar to how you would accomplish the same task in SQL Server using SqlConnection and SqlCommand objects.
This example adds two rows to the SQL Server Mobile table named TestTable created in the preceding example. The example then reads the new rows from the database and outputs them to the console.
You execute queries against a SQL Server Mobile database by using the SQL Server Mobile database classes similarly to using the SQL Server data provider against a SQL Server 2005 database. This example uses a SqlCeDataAdapter
object to do the following:
Retrieve the contents of the table named TestTable into a DataTable object. Because TestTable has no rows, the DataTable object will have no rows.
Add two rows to the DataTable object.
Update the SQL Server Mobile database with the new rows.
The example then uses a SqlCeDataReader object to display the rows added to the table from the database.
using System;
using System.Data;
using System.Data.SqlServerCe;
class Program
{
static void Main(string[] args)
{
// create a data adapter and configure a command builder
// to update the database
SqlCeDataAdapter da = new SqlCeDataAdapter(
"SELECT * FROM TestTable",
"data source=TestDb.sdf; database password=password;");
SqlCeCommandBuilder cb = new SqlCeCommandBuilder(da);
// retrieve the results from the database into a DataTable
DataTable dt = new DataTable( );
da.Fill(dt);
// add two rows to the DataTable
dt.Rows.Add(new object[] { 1, "Row 1 description" });
dt.Rows.Add(new object[] { 2, "Row 2 description" });
// update the database with the new rows
da.Update(dt);
// create a connection for the data reader
SqlCeConnection conn = new SqlCeConnection(
"data source=TestDb.sdf; database password=password;");
conn.Open( );
// create the data reader
SqlCeCommand cmd = new SqlCeCommand(
"SELECT * FROM TestTable", conn);
SqlCeDataReader dr = cmd.ExecuteReader( );
// output the rows to the console
while (dr.Read( ))
Console.WriteLine(dr["ID"] + ", " + dr["Description"]);
// clean up
dr.Close( );
conn.Close( );
Console.WriteLine(Environment.NewLine + "Press any key to continue.");
Console.ReadKey( );
}
}
The console output is shown in Figure 21-1.
A SqlCeException
object is created when a data provider for SQL Server mobile encounters an error. These exceptions are handled in a typical manner. The following example catches a SqlCeException object, raised because a nonexistent table is queried, and returns details about the exception:
using System;
using System.Data.SqlServerCe;
class Program
{
static void Main(string[] args)
{
SqlCeConnection conn = new SqlCeConnection(
"data source=TestDb.sdf; database password=password;");
conn.Open( );
SqlCeCommand cmd = new SqlCeCommand("SELECT * FROM Table1", conn);
try
{
SqlCeDataReader dr = cmd.ExecuteReader( );
}
catch (SqlCeException
ex)
{
foreach (SqlCeError sce in ex.Errors)
{
Console.WriteLine("HResult = {0:X}", sce.HResult);
Console.WriteLine("Message = {0}", sce.Message);
Console.WriteLine("NativeError = {0:X}", sce.NativeError);
Console.WriteLine("Source = {0}", sce.Source);
Console.WriteLine( );
}
}
finally
{
conn.Close( );
}
Console.WriteLine("Press any key to continue.");
Console.ReadKey( );
}
}
The console output is shown in Figure 21-2.
The SqlCeException class inherits from the Exception class and adds the several properties described in Table 21-4.
Errors
A collection of SqlCeError objects, each containing details about an exception generated by the SQL Server Mobile data provider.
HResult
The hrESULTa numeric value that corresponds to a specific exception. This corresponds to the value of the HResult property for the first SqlCeError object in the SqlCeErrorCollection collection returned by the Errors property.
InnerException
Inherited from Exception class.
The description for the first SqlCeError object in the SqlCeErrorCollection collection returned by the Errors property.
NativeError
The native error number for the first SqlCeError object in the SqlCeErrorCollection collection returned by the Errors property.
Source
The name of the provider that caused the exception. This corresponds to the value of the Source property for the first SqlCeError object in the SqlCeErrorCollection collection returned by the Errors property.
StackTrace | http://www.yaldex.com/sql_server/progsqlsvr-CHP-21-SECT-3.html | crawl-003 | refinedweb | 2,311 | 50.63 |
I am having a problem with the use of a class member that does not seem to be constructed.
I have a name space called "DogActivity".
In this name space there are two classes " DogSounds and Bark".
In the Bark class I have a private: instance of DogSounds.
Now in the constructor of the DogSounds class there is a private: int variable called NumSounds that is initialized to 0.
The problem is if i create an instance of DogSound in my main program the constructor is called and the NumSounds variable is set to 0 as planned.
But
If i create an instance of Bark in the main program the constructor of DogSounds is never called and NumSounds is never initialized.
here is an example of my namespace with classes.
namespace DogActivity { class DogSounds { public: DogSounds() { NumSonds = 0;} private: int NumSounds; }; class Bark{ private: DogSounds dogSounds; }; }
Now here is how i would use in main
using namespace DogActivity; int main ( int argc, char * argv[] ) { Bark bark; // Do work with bark bellow }
But when I create the instance of Bark the privat: dogSounds' variable "NumSounds" is never set to 0. The Constructor of dogSounds is never called, why is this? shouldn't the constructor of dogSounds be called as soon as i create an instance of Bark? what might i be doing wrong here?
Edited by greenzone, 26 April 2014 - 01:20 PM. | http://www.gamedev.net/topic/655931-private-instance-in-class-with-in-namespace-question/ | CC-MAIN-2016-40 | refinedweb | 232 | 70.43 |
ICS312. LEX Set 25. LEX. Lex is a program that generates lexical analyzers Converting the source code into the symbols (tokens) is the work of the C program produced by Lex. This program serves as a subroutine of the C program produced by YACC for the parser. Lex. Any character (or string of characters) except those (called metacharacters) which have a special interpretation, such as () [] {} + * ? | etc.
For instance the string “if” in a regular expression will match the identical string in the source code.
[01a-z]
A character class matches a single symbol in the source code that is a member of the class.
For instance [01a-z] matches the character 0 or 1 or any lower case alphabetic character
For instance [0-9]+ matches any sequence of digits in the source code.
5. A "*" following a regular expression denotes 0 or more occurrences of that expression.
6. A “?" following a regular expression denotes 0 or 1 occurrence of that expression.
For instance [a-z]+|9 matches either a lower case alphabetic or the digit “9”.
For example:
(a|b)+ matches e.g. abba
while
a|b+ match a or a string of b’s.
9. Regular expressions can be concatenated
For instance:
[a-zA-Z]*[0-9]+[a-zA-Z]
matches any sequence of 0 or more letters, followed by 1 or more digits, followed by 1 letter
10. If you want to include one of these symbols in a regular expression simply as a character, you can either use the c escape symbol “\” or double quotes.
For example: [0-9]”+”[0-9] or [0-9]\+[0-9]
match a digit followed by a plus sign, followed by a digit
What kinds of strings can be matched by the regular
What kinds of strings can be matched by the regular
From highest to lowest
Concatenation
Closure (*)
Alternation ( OR )
Examples:
a | bcf means the symbol a OR the string bcf
a( bcf* ) is the string abc followed by 0 or more repetitions of the
symbol f. Note: this is the same as (abcf*)
Consider the set of strings (ie. language)
{an b an | n > 0}
A context-free grammar that generates this language is:
S -> b
b -> a b a
However, as we will show later, it is not possible to construct
a regular expression that recognizes this language.
It’s not relevant to this course, but you may be interested to know that it is, in turn, not possible to construct a context-free grammar for a language whose definition is a simple extension of that given above:
{an b an bn an | n > 0}
NOTE. when using a macro name as part of a regular expression, you need to enclose the name in curly parentheses {}.
assigns macro name signed_int to
an optional sign followedby an integer
assigns the macro name number to a signed_int followed by an optional fractional part followed by an optional exponent part
assigns the macro name alpha to the character class given by a-z and A-Z
assigns the macro name identifier to an alpha character followed by the alternation of either alpha characters or digits, with 0 or more repetitions.
Using the regular expression for an identifier
on the previous slide, what would be the first
token of the following string?
MAX23= Z29 + 8
Lex picks as the "next" token, the longest
string that can be matched by one of it regular
expressions.
In this case, MAX23 would be matched as an identifier,
not just M or MA or MAX
/* A standalone LEX program that counts identifiers and commas */
/* Definition Section */
%{
int nident = 0; /* # of identifiers in the file being scanned */
int ncomma = 0; /* # of commas in the file */
%}
/* definitions of macro names*/
digit [0-9]
alph [a-zA-Z]
%%
/* Rules Section */
/* basic of patterns to recognize and the code to execute when they occur */
{alph}({alph}|{digit})* {++nident;}
"," {++ncomma;}
. ;
%%
/* subroutine section */
/* the last part of the file contains user defined code, as shown here. */
main()
{
yylex();
printf( "%s%d\n", "The no. of identifiers = ", nident);
printf( "%s%d\n", "The no. of commas = ", ncomma);
}
/* LEX calls this function when the end of the input file is reached */
yywrap(){}
%{ /* ARITH.Y Yacc input for a arithmetic expression evaluator */
#include <stdio.h> /* for printf */
#define YYSTYPE int
int yyparse(void);
int yylex(void);
void yyerror(char *mes);
%}
%token number
%%
program : expression {printf("answer = %d\n", $1);}
;
expression : expression '+' term {$$ = $1 + $3;}
| term
;
term : term '*' number {$$ = $1 * $3;}
| number
;
%%
void main() {
printf("Enter an arithmetic expression\n");
yyparse();}
/* prints an error message */
void yyerror(char *mes) {printf("%s\n", mes);}
%{
/* lexarith.l lex input for a arithmetic expression evaluator */
#include “y.tab.h”
#include <stdlib.h> /* for atoi */
#define YYSTYPE int
extern YYSTYPE yylval;
%}
digit [0-9]
%%
{digit}+ {yylval = atoi(yytext); return number; }
(" "|\t)* ;
\n {return(0);} /* recognize Enter key as EOF */
. {return yytext[0];}
%%
int yywrap() {} | https://www.slideserve.com/kuniko/ics312 | CC-MAIN-2018-39 | refinedweb | 813 | 56.69 |
1.7.8
---
- Windows Example: Added slideshow capabilities with Page Up and Page Down.
- Windows Example: Added mipmap functionality to 0-9 keys (0 goes to main image).
- Added Mathematica interface project.
- Added the ilur - commandline ILU frontend.
- Added IL_NO_GAMES define.
- Updated documentation.
- Added new ILU images to the manual; set the manual license to GFDL.
- Fixed iluBuildMipmaps (only generated one).
- Redid iluBuildMipmaps to use iluScale functions instead of nearest.
- Added an option to describe image formats in configure script.
- Fixed PNM loading bug.
- Added support for DDS g16b16, g32b32, r16, r32, a2r10g10b10 and a2b10g10r10 images.
- Added support for VTF files with 64-byte headers.
- Fixed some errors in image type conversion.
- Added IL_VTF_COMP define to control the output format of VTF files.
- Added French translation of errors.
- Updated libraries.txt with new external libraries.
1.7.7
---
- Redefined clamping values in il.h.
- Added 64-bit integer types.
- Fixed bug in iRegisterLoad ().
- Added WBMP support (loading and saving).
- EXR files can now be loaded as file streams and lumps.
- Changed iNeuQuant to take number of colors in palette.
- Compiled MNG support back in.
- Added Sun Raster file reading.
- Better Linux configuration scripts.
- Added libsquish and nVidia Texture Tools support - accessed through ilCompressDXT.
- Added IL_ALPHA support.
- Fixed a bug dealing with 16-bit luminance .psd images.
- Added ilutGLSetTex2D and ilutGLSetTex3D.
- Added support for BGRA4444 .vtf images.
- Changed support for cubemaps to be separate from animation lists.
- Fixed possible buffer overruns in .hdr and .pnm loading code.
- Changed GIF loading code to better load files with local palettes in each frame.
- Added support for environment maps and animations to .vtf loading.
- Added ilClampNTSC function.
- Added TIFF saving to file streams.
- Added JPEG 2000 saving.
- Added support for more .jp2 formats.
- Improved the internal DXT compressor.
- Added OpenEXR saving.
- Reenabled PhotoCD support.
- Fixed SoftImage PIC support.
- Updated MSVC++ x64 projects.
- Readded DXT extension functions.
- Added IFF loading.
- Fixed pixel shift issue with iluRotate.
- Added code in ilSaveL to return the number of bytes required by the output file data.
- Added code to determine the filetypes of JP2, MDL, XMP and EXR from their headers.
- Changed the return values of ilSaveF and ilSaveL to integers telling the number of bytes written (see the sketch at the end of this section).
- Added checks for malformed .psd files.
- Added Windows Mobile projects.
- Added DevIL.Net project.
- Added TPL format loading.
- Updated ilutConvertToSDLSurface.
- Rearranged how mipmaps are stored internally and accessed.
- Fixed bug loading malformed RLE Targa images.
- Added support in ilCopyPixels when copying from indexed images.
- Added Doxygen documentation.
- Updated main documentation.
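
A minimal C sketch of the in-memory saving described in the two ilSaveL entries above. The pattern of passing a NULL lump to query the required size is an assumption (the changelog only says the size is now reported), so treat this as illustrative rather than definitive; error handling is mostly omitted.

    #include <IL/il.h>
    #include <stdlib.h>

    int main(void)
    {
        ILuint img, size;
        void  *lump;

        ilInit();
        ilGenImages(1, &img);
        ilBindImage(img);

        /* Allocate a small 2x2 RGBA image so there is something to save */
        ilTexImage(2, 2, 1, 4, IL_RGBA, IL_UNSIGNED_BYTE, NULL);
        ilClearImage();

        /* Assumed: a NULL lump makes ilSaveL report the bytes required */
        size = ilSaveL(IL_PNG, NULL, 0);
        lump = malloc(size);
        if (lump != NULL)
            size = ilSaveL(IL_PNG, lump, size); /* returns bytes written */

        free(lump);
        ilDeleteImages(1, &img);
        return 0;
    }
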
1.7.5
---
- Added check in iluScale for parameters of 0, which would cause a crash.
- Cleaned up the tons of extra lines in il_dds.c.
- Added .vtf support.
- Fixed bug in file caching if the buffer was too small.
- Fixed crash saving .tga files if no author name string present.
- Fixed crash in ilActive* if a number too large is specified.
- Added support for alpha-only formats (IL_ALPHA).
- Better conversion from lower bpp to higher bpp data.
1.7.4
---
- Added German translations of error codes in ILU.
- Added ilutGLSubTex back to ilut.h.
- Added 64-bit Windows configurations to projects.
- Fixed various 64-bit bugs.
- Fixed dependency of ILUT on MSVCRT DLL.
- Redefined ILsizei as size_t.
- Changed allocation functions to ILsizei from ILuint.
- Removed ILvoid, since it is now illegal in GCC 4.2.
- Started rewriting Windows HD Photo support from scratch.
- Started DirectX 10 code.
- Changed ilSaveCHeader's second argument to char*.
- Changed ilutGLScreenie to use Unicode if needed.
- Fixed various Unicode problems with ilSetString.
- Added progressive saving of .jpg with IL_JPG_PROGRESSIVE flag.
- Added more to the Fortran wrapper.
- Fixed TIF loading/saving (#define LZW_SUPPORT not enabled in libtiff).
- Changed MSVC++ to use MSVCRT libraries again.
- Added IL_BLIT_BLEND to influence alpha blending in ilBlit/ilOverlayImage (see the sketch below).
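
A sketch of how the IL_BLIT_BLEND state mentioned in the last entry might be used. That it is toggled through ilEnable/ilDisable like other IL states, and that blending is on by default, are assumptions; the sketch only illustrates the idea.

    #include <IL/il.h>

    /* Blit 'src' onto the currently bound image at (x, y).
       Assumption: with IL_BLIT_BLEND disabled, pixels (including
       alpha) are copied verbatim instead of alpha-blended. */
    void blit_without_blending(ILuint src, ILint x, ILint y,
                               ILuint w, ILuint h)
    {
        ilDisable(IL_BLIT_BLEND);
        ilBlit(src, x, y, 0, 0, 0, 0, w, h, 1);
        ilEnable(IL_BLIT_BLEND); /* restore the assumed default */
    }
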
1.7.3
---
- ilGenImage now returns ILuint instead of ILint.
- Added pkgconfig devil.pc.
- Applied bugfixes from Richard Sims (mainly .psd).
- Added new Xcode framework.
- Added Fortran support.
- Changed MSVC++ 2005 project to link with static MSVCRT library.
- Added patch by SF user robin_charlton for Windows BMP support in ILUT.
- Updated Cmake files.
- Added preliminary Microsoft HD Photo support.
- Added better Unicode support.
- Changed .jp2 loading so that file streams can now be loaded.
- Changed limit/clamp in il.h to IL_LIMIT/IL_CLAMP.
- Added .hdr saving.
- Added Dutch/Spanish/Arabic/Japanese translations of error codes in ILU.
- Added ilut_directxm.c (DirectX Mobile support) from Vincent Richomme.
- Fixed a possible buffer overrun in iClipString.
1.7.2
---
- Added preliminary OpenEXR support.
- Fixed crash in ilSwapColours with images having bits-per-channel greater than 8.
- Fixed ilClearImage not clearing images properly with bpc greater than 8.
- Added MSVC++ 8 projects back.
- Added MSVC++ Unicode projects.
1.7.1
---
- Added Mac OS X .icns support
- Fixed a bug with non-standard .ico files that have PNG compression and palettes.
- Added JPEG 2000 support with JasPer.
- Added support for 256x256 and 512x512 JPEG 2000 encoded .icns.
1.7.0
---
- Fixed a Windows padding problem with bitmaps (thanks to robin_charlton).
- Added ILUT-X11 thanks to Jesse Maurais's patch.
- Fixed a PNM loading problem if the image contained 0x20 (a gray) at the beginning of the image.
- Added internal asserts to pedantically check consistency in debug builds.
- Fixed SSE3 check.
- Added support for Lua with SWIG.
- From now on, the format of an OpenGL image can be GL_ALPHA too (only for OpenGL, not usable anywhere else). It's more a hack than a solution.
- New versions of ilReadRLE8Bmp and ilReadRLE4Bmp from Björn Ganster.
- Applied submitted patch #1645286: reading X offset and Y offset from TIFF files.
- Fixed lump reading bug (from Björn Ganster).
- Fixed GIF loading (patch from Björn Ganster).
- Fixed bug #1643309: wrong buffer size on ilSaveL with IL_BMP.
- Fixed some iluCompareImage bugs (patch from James Kirkpatrick).
- Fixed many security holes (patches from Björn Ganster).
- Fixed bug #1637588: iMemSwap bugs.
- Fixed bug #1634099: infinite loop on corrupt JPEG.
- Applied submitted patch #1632474: GIF loading fails on incomplete images.
- Applied patch #1612477: _UNICODE/UNICODE fixes; more Win32 Unicode fixes.
- TIFF compression method changed to LZW.
- Added the DEPRECATED macro and deprecated iluGenImage and iluDeleteImage.
- glCompress2DARB now works with OS X too.
- Incorporated patches from Debian.
- Added support for non-power-of-2 texture loading (if available).
- Fixed bug #1609417: bug in iluGetImageInfo.
- iGetIntegervImage(IL_IMAGE_CHANNELS) now returns a correct value.
- Fixed bug #1561642: ILstring is poorly implemented (w.r.t. const and others).
- Fixed bug #1562955: 16-bit PSD.
- ilInit and ilShutdown can now be called more than once, and ilShutdown can be called even when not initialized, without crashing anything.
- Fixed bug #662903: ilBlit result incorrect when source is not (0,0).
- Internal clamp functions are now macros.
- ilSetAlpha now returns a boolean value, since it can fail.
- Fixed an error in the configure script that was breaking compilation if libpng-config wasn't in the path.
- Applied submitted patch #1550471: missing const in il_tiff.c.
- Fixed bug #1554447: broken BMP loading with fewer than 256 colors and a palette.
- Applied submitted patch #1554358: header fixes.
- Added MSVC++ Express 2008 projects and removed MSVC++ 6 projects.
- Added support for loading Windows Vista icons.
1.6.8 RC2
-----
- Applied submitted patch #1539074: fixes some bugs with paletted images, adds iluScaleAlpha (from jbitnet).
- Fixed OpenGL loading.
- AltiVec code is now merged inside the IL sources, not in a separate directory.
- More DDS fixes.
- The returned number of layers, mipmaps or subimages is now correct (computed at every call).
- Added ilGenImage() and ilDeleteImage().
- Fixed iluMirror/ilMirror duplicate code.
- Added _mm_malloc/_mm_free for vectorized code memory management.
- Fixed precedence for memory management variants.
- AltiVec, SSE, SSE2 and SSE3 checks completed.
- Applied submitted patch #1483941.
- Applied submitted patch #1476315.
- Applied submitted patch #1085415: DXT3 and DXT5 save is broken.
- Applied submitted patch #1504388 (by Hans de Goede).
- Fixed bug #1411053: lump load parameters are now const pointers.
- Fixed bug #1211071: PCX files whose padding header is not zero-filled will now be loaded.
- Fixed bug #1092521: PNG save memory leak.
- Fixed bug #1183924: iluScale now checks the return values of ilTexImage.
- Fixed bug #1173264: iluInvertAlpha was inverting the wrong channels.
- Fixed bug #1119508: il_tiff high-order bit garbage when using uint32 as shorts.
- ilTypeFromExt is now exported.
- ILstring is now (const char*). Typecasts are no longer necessary to pass ILstrings.
- iluFlipImage is now 3x faster and doesn't use any memory allocation.
- IL_LUMINANCE and IL_LUMINANCE_ALPHA are now correctly handled; corrected function: iluInvertAlpha [iluScaleColours needs testing].
- iluScaleColours will return an error when passed images that are not BYTE or UNSIGNED_BYTE; needs enhancements to enable all the types.
- Finalized the Unicode support for Windows.
- Fixed inlining for small functions (e.g. endianness, rounding).
- Added to the configure script the ability to configure the path for the libraries.
1.6.8 RC1
-----
- Fixed a lot of minor bugs in various image format (especially TIFF)
- Rewritten documentation
- Dev-Cpp support
- Added ilSetAlpha (see the sketch at the end of this section).
- Added ilModAlpha.
- Removed ilSetSharedPal
- Fixed a common TARGA/TIFF/PNG memory leak.
- Fixed bug 785053 in il_header.c
- Fixed bug 785178 in il_header.c
- Fixed stack problem for name reuse
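
A sketch of the ilSetAlpha/ilModAlpha calls added above. The normalized ILdouble parameter and the set-versus-modify semantics are assumptions based on the function names, not on documented signatures.

    #include <IL/il.h>

    /* Fade the currently bound image.  Assumptions: ilSetAlpha
       assigns a constant normalized alpha to every pixel, and
       ilModAlpha adjusts the existing alpha by the given amount. */
    ILboolean fade_bound_image(void)
    {
        if (!ilSetAlpha(0.5))    /* assumed: constant 50% opacity */
            return IL_FALSE;
        return ilModAlpha(0.25); /* assumed: further alpha adjustment */
    }
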
1.6.7
-----
- Added .hdr image format.
1.6.6
-----
- Added .gif support back in.
- Fixed some DDS problems.
- Added DirectX 9 support.
1.6.5
-----
- Created a stress test application.
- Found out that you have to set PNG_NO_STDIO when compiling libpng on Win32,
or else it will try to write via fprintf on an error, causing a crash.
- Fixed a similar problem with libjpeg, though you don't have to recompile.
- Fixed a problem converting palette'd images to higher than IL_UNSIGNED_BYTE.
- Changed 1-bit .bmp files to use 0 and 1 instead of 0 and 255 (using a palette).
- Fixed a divide by 0 in iluPixelize() if the pixel size was 0.
- Fixed some problems with iluEqualize().
- Found and fixed three memory leaks using Paul Nettle's memory manager.
- Changed the filters to properly work on images with higher than 8-bit channels.
- Fixed some uninitialized values when using ilConvertImage on palette'd images.
- Fixed a problem saving .dds mipmaps.
- Fixed a bug saving .dds files with blocks of all the same colour.
- Added support for the IL_LUMINANCE_ALPHA format.
- Fixed a bug loading corrupt .gif files.
- Removed .gif support, to comply with the Unisys patent (though the code is still there).
- Added preliminary region checking in ILU.
1.6.1
-----
- Fixed a bug loading 8-bit .psp files.
1.6.0
-----
- Removed the NeuQuant samples limit of 15.
- Fixed conversion to IL_FLOAT and IL_DOUBLE types.
- Rewrote a lot of the Endian conversion routines.
- Added missing Endian conversion to .gif and .ico files.
- Removed the unneeded il(u/t)_error.h files.
- Added the devil_internal_exports.h file and moved much of il(u/t)_internal.h into it.
- Fixed a bug converting from IL_BGR to IL_LUMINANCE on IL_INT and IL_UNSIGNED_INT types.
- Went through the code and fixed many possible problems if ialloc failed.
- Fixed a possible memory leak in iGetPaddedData().
- Removed a function not being used from the .sgi code.
- Fixed several leaks and double allocations in iConvertPal().
- Fixed a possible leak in iluColoursUsed().
- Added IL_LOAD_EXT and IL_SAVE_EXT functionality to ilGetString().
- Merged in Ryan Butterfoss's changes to the DDS saving code.
- Moved ilSetPal() from il_devil.c to il_pal.c.
- Fixed potential memory leaks while loading a corrupt .gif file.
- Added the IL_FILE_WRITE_ERROR #define.
- Fixed double flipping in ilCopyPixels() and ilSetPixels(), along with some crashes.
- Fixed a crash calling iluColoursUsed() when a 1x1 image was bound.
- Fixed a leak if .dds mipmap reading failed.
- Fixed leaks in ilRegisterMipNum() and ilRegisterNumImages() if they failed.
- Sped up several aspects of loading and saving .sgi files.
- Added ILUT_D3D_POOL for ilutSetInteger() and ilutGetInteger().
- Changed ilGetString() to return an ILstring.
- Added IL_BGRA support back in to ilApplyProfile().
- Removed double flipping when saving .pcx and .jpg files.
1.5.6
-----
- Fixed a problem loading .psd files when IL_MEM_SPEED_HINT is set to IL_FASTEST.
- Fixed a bug loading corrupted .pcx files.
- Fixed some bugs using ilLoadL.
1.5.5
-----
- Temporarily removed .pcd support, since no .pcd images were loading correctly.
- Fixed problems with greyscale .bmp, .jpg and .pcx files.
- Made ilLoadFromJpegStruct and ilSaveFromJpegStruct exportable even if IL_NO_JPG is defined.
- Fixed a bug loading .gif files with "local palettes".
- Fixed a problem loading .ico/.cur files with ColoursUsed set to 0 in their headers.
- Fixed a bug loading 4-bit .bmp images.
- Added a check to see if a .jpg is missing data at the end, so it does not loop infinitely.
- Fixed the Delphi headers.
- Added a check for extensions in GIF87A files.
- Added .psd saving.
- Optimized ilutGetPaddedData a bit.
- Added some checks to prevent corrupt .psd files from crashing the library when loading.
- Fixed a problem using ilLoadL with the IJL.
- Updated the Win CE project files.
- Added .plt support.
1.5.1
-----
- Fixed several problems with the internal file routines that caused many bugs.
- Fixed a simple problem loading .gif files.
1.5.0
-----
- Removed the printfs that somehow got left in il_sgi.c.
- Added ILUT_D3D_MIPLEVELS to ilutSetInteger().
- Updated the static library workspace.
- Updated ilAddAlpha, ilAddAlphaKey and ilRemoveAlpha in il_convert.c to deal
with differing bits per channel.
- Changed the registered loading/saving function handler so that they can return errors.
- Changed ILU and ILUT to use DevIL's memory handler.
- Removed the last of the malloc/free calls.
- Changed the file/lump reading functions to set IL_FILE_READ_ERROR automatically
when one occurs.
- Added IL_EOF for use with other languages.
- Fixed a small memory leak in the .psp loading code.
- Fixed a bug where images with types larger than short were not saving as .sgi correctly.
- Added checks for all file reads to see if they succeeded.
- Added recognition for the .pdd extension (.psd renamed).
- Fixed some leaks in the .dcx code.
- Added ilutFreePaddedData().
- Fixed some possible leaks in ilTexImage().
- Fixed a problem with loading some .dds volume textures (LinearSize field appears to be
incorrect on some images).
- Fixed a problem loading some .tga files.
- Added more checks to determine if a .dds file has a volume texture in it.
- Added ilGetDXTCData().
- Fixed some RLE .sgi loading bugs.
- Added direct S3TC/DXTC to OpenGL code.
- Added ILUT_DIRECT3D8 for use with ilutRenderer.
- Added direct DXTC to Direct3D 8 code.
- Added IL_DXTC_DATA_FORMAT for use with ilGetInteger(v).
- Added support for more formats with IL_TYPE_UNKNOWN.
- Added .pix support.
- Added .pxr support.
- Added .xpm support.
- Fixed MSVC++ post-build, thanks to Nick Marley.
- Changed the load order to check for popular image formats first.
1.4.2
-----
- Fixed a crash with the .psp code and a large memory leak.
- Fixed .gif loading, even loading animations better than I did with libungif.
- Added the static library workspace to /projects/msvc/static.
- Fixed another Mac OS X compilation problem.
- Removed references to libungif.
1.4.0
-----
- Fixed .dds volume texture support.
- Wrote the VolTex test application that writes out the individual slices of a volume texture.
- Changed the internal iConvertImage to make things much nicer.
- Fixed iluRotate so that it works on colour-indexed images.
- Added ilGetAlpha().
- Added iluInvertAlpha().
- Fixed Halo palette loading (didn't close the file handle after reading).
- Added .act palette support (Adobe Color Table).
- Added support for the Paint Shop Pro file format (.psp).
- Added the last bit of code to load TIFF files from memory buffers.
- Fixed the out-of-date Intel JPEG Library support.
- Rewrote .gif support.
- Fixed some problems with the Mac OS X compilation.
- Added Sam's fixes for .jpg and .bmp handling.
1.3.1
-----
- Fixed a huge ilLoadPal() bug.
- Disabled SDL (hopefully temporarily) to get rid of their main() mangling.
- Fixed greyscale .bmp saving bug.
- Removed the iluScaleTest() declaration.
1.3.0
-----
- Reworked sources tree.
- Updated *nix support, now uses autoconf/automake.
- Optimized ilCopyPixels and ilSetPixels if the destination and source formats/types are the same (suggestion by Kenneth Hurley).
- Fixed an ilSetPixels memory leak.
- Fixed the Big Endian versions of GetLittleFloat and GetLittleDouble.
- Changed iprintf to ilprintf to fix an error compiling with Cygwin.
- Changed iCurImage to iluCurImage and ilutCurImage in ILU and ILUT, respectively, to help with static libs.
- Removed ilShutDown call from ilInit (gave an erroneous error).
- Changed return type of ilSaveF and ilSaveL to return how many bytes were written.
- Added colour-indexed support to .tif saving.
- Fixed colour-indexed support with .bmp saving.
- Added MSVC++ 7 solutions and projects.
- Fixed an ilConvertImage bug when a colour profile was present in the image.
- Upgraded to LittleCMS 1.0.8.
- Updated some of the DDS loading to load more images.
- Added .psd support.
- Added some more Endian swap routines.
- Added offsets to internal image struct and added IL_IMAGE_OFFX and IL_IMAGE_OFFY to il.h.
- Added some more permissable Modes to ilSetInteger.
- Changed a malloc to ialloc in il_profiles.c.
- Added Kenneth Hurley's contributions to the DDS code.
- Added NeuQuant code and defines for controlling quantization.
- Fixed PowerBasic test files.
- Added GDI+ and MFC static library interfaces.
- Added more Modes for ilSetString, along with setting the C Header output string.
- Added DDS saving.
- Added preliminary (not functional) .psp support.
- Added a config.h generator for MSVC++.
1.2.4
-----
- Moved ilInit to il_istack.c.
- Changed iFreeMem to ilShutDown, and made it external.
- Rewrote much of the DDS support.
- Made a "Dynamic" project in the MSVC++ workspace, where it delayloads some external libraries for a smaller memory footprint.
1.2.2
-----
- Removed saving support for .bmp files with negative heights.
- Added ilApplyProfile for colour profile support.
- Moved several projects from the 'TestIL' directory to 'Examples'.
- Fixed the DDS cyan bug.
- Added DCX support.
- Fixed the iluScaleAdvanced return value, thanks to Rune Kock.
1.2.0
-----
- When .bmp files don't have biColorsUsed set, DevIL now uses a default value when a palette is present.
- Added FreeBSD makefiles, thanks to Wojciech Sobczuk.
- Added .dds loading support.
- Updated the manual.
- Fixed some bugs in the documentation.
1.1.9
-----
- Removed .oil support -- it wasn't used anyway.
- Better Linux makefiles, thanks to Ismo K�rkk�inen.
- Fixed a problem compiling in Linux.
- Renamed all DevIL files with il_*, to make it easier to compile as a static library (no name conflicts).
- Added IL_STATIC_LIB if you want to compile as a static library (no pragma options in MSVC++).
1.1.8
-----
- Added the new scaling features to ILU.
- Fixed crash when saving .tif files.
- Added full Windows CE support.
- Fixed iBindImageTemp in ILUT, correcting several functions that used this.
- Fixed ILUT's GL functions when the window did not have a width of 4x.
- Added TexImage and Resize to the ilImage C++ wrapper.
- Corrected Big Endian support in the data (all data was shown with blue and red swapped, and the alpha was in the wrong position).
1.1.5
-----
- Made ilSetPixels accept negative offsets.
- Cleaned up rle.c some and added credits to the top.
- Fixed .gif animation loading.
- Made memset and memcpy intrinsic -- removed ilMemSet and ilMemCpy.
- Fixed a bug loading .jpg files with overrided versions of the loading functions.
- Fixed a bug loading ASCII .pnm files that do not end with an endline.
- Fixed a loading problem with .png files with bit depths less than 8.
- Fixed a loading problem with .png files of type PNG_COLOR_TYPE_GRAY_ALPHA.
- Changed .png loading to use the gamma values on PCs.
- Removed .lbm support.
- Added several new examples.
- Fixed the RGB order of jpeg loading/saving.
- Enabled ilTypeFunc functionality.
- Removed the bit filters from ILU.
1.1.3
-----
- Added IL_FLOAT and IL_DOUBLE support to ilConvertImage().
- Fixed a large bug when writing to "lumps".
- Added ilSetString() and implemented behaviour in states.c for customized strings.
- Changed writing to use a const void* buffer instead of a void* buffer.
- Fixed a bug in reading non-compressed .oil files.
- Fixed lots of problems with mipmaps.
- Added ilSetMemory() and callbacks.
1.1.1
-----
- Added palette support to iluEqualize().
- Added palette support to ilu convolution filters.
- Fixed iluEnlargeImage(), which was erroring out.
- Fixed iluScale(), which was not setting the origin of some images correctly.
- Changed all function parameters from char* to const char*.
- Removed WinMain from the GLTest examples but used a linker setting to get rid of the console window.
- Added preliminary .mng support, thanks to libmng.
- Removed anal debug memory messages.
- Added ilutD3D8Texture() and ilutD3D8VolumeTexture().
- Fixed crash with some palette'd .png images.
- Added ilutD3D8TexFromFile(), ilutD3D8VolTexFromFile() and several more D3D8 functions.
- Removed png_.h from /OpenIL to get rid of any libpng conflicts.
- Added interlace support for .png saving.
- Fixed some problems with origins (notably targa loading).
- Wrote OpenILUT/BeOS.cpp.
- Fixed .bmp saving.
- Fixed .pnm loading when the file is of zero length.
- Fixed ilClearImage(), which was only setting the first byte of each pixel.
- Fixed an iluNoisify() bug where it would crash when the tolerance was too low, and also another bug where the output is garbled.
- Removed the last of the ilutOgl function declarations and the GL compressed functions.
- Fixed the JPEG blockiness problems with some rare JPEGs.
- Removed the two poor gamma correction functions.
- Changed all OpenIL references to NeoIL.
- Added ilBlit().
- Fixed ilBindImage() behaviour when a new image was requested.
- Added footer writing to .tga saving.
- Added .tga extension support.
- Fixed .sgi RLE writing.
- Added iGetFlipped() internally to speed up temporary flipping of an image (usually for saving).
- Added origin correction to .oil saving.
- Started using libpng 1.0.11.
- Fixed 1-bit .bmp loading (previously assumed luminance when supposed to be textured).
- Added gif.c.
- Removed the glext.h dependency.
- Added ilSet() and ilutSet().
- Fixed a bug saving 32-bit .bmp files.
2.1.4b
------
- Fixed crash in iluGammaCorrectCurve() when the image had a palette.
- Non-Windows systems no longer call ilutCompGLInit().
- Moved #include <windows.h> in alloc.c inside the #ifdef _WIN32.
- Fixed so much with multiple bpc's and image volumes.
- Fixed some quirks in ilConvertImage().
- Found and fixed a memory leak in 4-bit .pcx and 16-bit .tga support.
2.1.3b
------
- Started the major work needed to make OpenIL use multiple bytes per channel.
- Created convbuff.c and converted ilConvertBuffer() to convert between types instead of just between formats (extremely exhausting!).
- Finished 2 bpc support for .sgi files.
- Finished 2 bpc support for .png files.
- Found and fixed some problems in ilCopyPixels().
- Changed iluNoisify to accept a float parameter.
- Added ilApplyPal() and iluReplaceColour().
- Fixed a bug in all saving functions where it would crash if a file had not been read beforehand.
- Fixed iLoadDataInternal()'s problem with origins.
- Added iluLoadImage().
- Fixed a problem where ilutGLBuildMipmaps() was flipping images that did not need to be flipped.
- Fixed .png-saving of palette'd images where the colours were in the wrong order (bgr instead of rgb in some cirumstances).
- Fixed a problem saving palettes in the .oil format.
- Added batch conversion to WindowsTest.
- Created openilu/bit.c and moved the bit filters over there.
- Fixed ilutGetPaddedData() where it was not flipping the image and swapping colours.
- Made ilConvertImage() not perform conversions when unnecessary.
- Added seeking and telling to the file writing functions.
- Fixed memory leak in ilCopyPixels(), thanks to m|G-21.
- Added 4-bit .pcx support.
- Added support for the Intel Jpeg Library (IJL).
- Fixed a bug in ilCopyPixels() when the buffer was larger than the image itself.
- Removed IL_IMAGE_DATA from il/il.h.
- Added .lif support.
- Moved #pragmas for external libs into internal.h from il/il.h (fixed SDL linking problem).
- Updated the .oil specs to include lzo compression.
- Added pause / resume to AnimTest.
- Fixed 3d bilinear filtering in iluScale().
- Fixed .tif-saving orientation problem.
- Changed most functions using ILfloats as parameters to use ILclampf instead.
2.1.1b
------
- Added back lzo support to oil.c.
- Added alloc.* to help find memory leaks in debug mode in Windows.
- Found and fixed two small memory leaks:
- In iLoadOilInternal(), in oil.c, the directory was not being freed.
- In iConvertPalette(), in convert.c, NewImage's palette was not being freed.
- Added key colour support (not thoroughly tested yet).
- Fixed a problem in AnimTest/WindowsTest where the openfilename buffer was not large enough.
- Fixed a bug in ilFixCur() in convert.c where it was setting the type as the format of the image.
- Rewrote ilutGLSetTex().
2.1.0b
------
- Renamed ilSetDefaultCallbacks() to ilResetRead() and ilSetFileCallbacks() to ilSetRead().
- Created ilResetWrite() and ilSetWrite().
- Added IL_SEEK_XXX #defines.
- Fixed .sgi loading (was reading too much per channel).
= Added ilGetLumpPos().
- Added ilSaveF and ilSaveL.
- Revamped a lot of the internal file routines (especially saving).
- Fixed .bmp saving and loading (both padding issues).
- Fixed .pcx saving and added support for truecolour .pcx files (including 32-bit).
- Changed ilutSetWinClipboard() to convert images to bgr format before sending to the clipboard.
- Rewrote bitfile routines to use ILHANDLEs.
- Rewrote most saving routines to utilize saving to file streams and memory lumps.
- Wrote ilutWinLoadUrl().
- Created an internal file buffer in files.c.
- Created ilutOglBindCompressed() (untested so far) and ilutOglMipCompressed().
- Fixed colour quantization.
- Fixed bugs in ilSetPixels() that caused incorrect copies.
- Added drag-and-drop capability to WindowsTest.
- Fixed a rare case of ilCopyPixels flipping the image when it wasn't supposed to.
- Fixed WindowsTest not working correctly in Windows 2000.
- Wrote oil.c, oil.h and the Oil Gen project to utilize the new .oil format (tentative name).
- Added OS/2-style .bmp loading.
- Created AnimTest to test animation (mainly .oil).
2.0.9b
------
- Fixed a problem reading some 8-bit .bmp files where the palette was read incorrectly.
- Changed iluScale and the filter functions to convert palette'd images to their truecolour counterparts. I will change them back at the end when I fix the problems with colour quantization.
- Added palette support to iluGammaCorrectCurve.
- Fixed iluSwapColours() where iCurImage was NULL.
- ilut's OpenGL functions now resize a texture before sending it to OpenGL if the texture is too large.
- Fixed a problem when sending large images to OpenGL (now supports extremely large images).
- Fixed a padding bug when loading low-bit .bmp images.
- Changed il.h's ILAPIENTRY and ILAPI #define's.
- Removed the #pragma from the top of WindowsTest.
- Changed the ilActive* functions to reset to the base image when 0 is used as the parameter.
- Fixed ilLoadPal where it was reading extensions incorrectly.
- Added .wal support.
- Fixed a problem in iGetActiveNum, where it was trying to access a NULL pointer's Next pointer.
2.0.8b
------
- Changed the ilActive* functions to use the current image, not the base image.
- Added the IL_CUR_IMAGE #define to get the current image name via ilGetInteger().
- Modified WindowsTest to preserve the original image.
2.0.7b
------
- Added SDL timing to the Benchmark project, though I cannot get it to link correctly.
- Added colour quantization, thanks to romka.
- Changed the WindowsTest icon.
- Included the debug libs and dlls in the full Windows installer.
- Modified ilGenImages()/ilBindImage()/ilDeleteImages() behaviour so that you can bypass ilGenImages() and call ilBindImage() directly. This probably requires more thorough testing but appears to be stable.
2.0.6b
------
- Fixed some linker problems, so I'm uploading it as 2.0.6b.
2.0.5b
------
- Changed ilNewImage() to set the format and type of an image.
- Fixed an error in the ilGetInteger() documentation.
- Added some new features to the DDrawTest project that were already present in WindowsTest.
- Added loading functions to the API-specific ilut functions for easier loading.
- Added ilutSetHBitmap() and made ilutGetWinClipboard() utilize it.
- Added ilutOglSetTex(), which doesn't work yet.
- Added saving functions to the API-specific ilut functions.
- Added the IL_USE_KEY_COLOUR #define and ilKeyColour(), which do not actually do anything yet.
- Fixed a problem in several OpenGL functions where it was not using the resized bitmap.
- Fixed a bug in iluSharpen() where it was using iCurImage instead of CurImage and another where it was using a depth of 0.
- Added iluDeleteImage() and iluGenImage().
- Added iluGetImageInfo().
- Got the GL_RGB8 and GL_RGBA8 stuff working in the OpenGL functions when ILUT_OPENGL_CONV is enabled.
- Fixed problems in iConvertPal() when converting from IL_PAL_BGR32 or IL_PAL_RGB32 to IL_PAL_BGRA32 and IL_PAL_RGBA32 palette formats.
- Renamed ilutOglBuildMipmaps() to ilutOglBindMipmaps() and created a new ilutOglBuildMipmaps().
2.0.3b
------
- Added subimage support to ilCopyImageAttr().
- Added the IL_ACTIVE_XXX #define's for use with ilGetInteger/v().
- Added ilCloneCurImage().
- Modified Windows Test to work with subimages.
- Fixed a problem loading some (very few that I've found...) .pcx images.
- Fixed the Windows Test window width problem.
- Added format and type mode setting.
- Added 4-bit rle .bmp support.
- Updated ilAddAlphaKey().
- Fixed a bug in ilSaveSgi() where it wasn't changing from bgr(a) to rgb(a).
- Modified the MSVC++ project settings to generate separate debug dlls and libs, so you don't have to change the directories everytime you switch from release -> debug or vice-versa.
- Added iluEnlargeImage().
- Changed ilSaveTarga() to preserve the current image's palette.
- Made an ilConvertBuffer() function that iConvertImage() now uses.
- Changed ilFlipImage() to use memcpy() instead of a for loop.
- Fixed a problem in ilSetPixels() and made ilSetPixels() and ilCopyPixels() use the new ilConvertBuffer() function.
- Changed ilOverlayImage() to use ilConvertBuffer().
2.0.0b
------
- Added resources to ilu and ilut.
- Got rid of ilVersion and put its functionality in ilGetInteger. I did the same for ilu and ilut, too.
- Moved ilFlipImage and ilSwapColours to ilu.
- Made ilAddAlpha, ilAddAlphaKey and ilRemoveAlpha internal to il.
- Made all of rle.c internal to il. There was no need to make it public.
- Added .pic, .pnm and .sgi validity functions.
- Changed all the *F functions (e.g. ilLoadTargaF) to restore the file stream's previous state before the call.
- Added pattern.c and everything within.
- Added rawdata.c and everything within.
- Changed the functions in raw.c to accept "normal" parameters.
- Changed internal functions to prevent from using ilGetState (which was rather nasty to use) and removed ilGetState.
- Removed dependency on OpenGL.
- Removed using the paletted texture extension in ilut for OpenGL.
- Added ilGetPalette().
- I don't know how it happened, but I inadvertently left out two very important lines in iLoadJpegInternal() that cleaned-up after libjpeg, so now they're in.
- Changed iLoadTiffInternal() a lot.
- Fixed a possible leak in iLoadPngInternal and streamlined it some.
- Hopefully fixed all problems displaying palette'd images.
- Added ilGetExtension.
- Hopefully finally figured the formula for padding for .bmp files.
- Disabled checking for extensions in all the ilLoadXXX functions so they can be forced to load that specific type of image no matter what.
- Added more support for converting palettes via the IL_CONV_PAL mode.
- Added better registration in register.c.
- Fixed problems with libtiff, but I had to override using memcmp, because that is where it would fail in the libtiff library - Also overrided the warning and error functions of libtiff.
- Got rid of ilSetDoomPal() and made a ilSetSharedPal() function to use by stupid file formats that don't have a palette but are colour-indexed nonetheless.
- Made unified ilLoad and ilSave functions that use enums.
- Changed the image stack size to 1024 (but it can be enlarged...).
- Changed ilut around. Users need to change any ilut code they've used previously.
- Added compression control features to ilHint().
- Added boolean values to ilGetInteger() and vice-versa.
- Added the iluBitFilter functions.
- Added interpolation to the iluScaleNd() functions.
- Got rid of the unused ilAlphaFunc().
- Removed ilOverlayImage().
- Started overriding the error/warning handlers for libjpeg in jpeg.c.
- Moved bitfile functions into the private OpenIL sector.
- Began lbm support.
- Included new Delphi headers and the all-new Linux makefiles.
- Added a unified ilLoadPal() and got rid of the specific palette-loading functions.
- Fixed 1 bpp and 4 bpp .bmp loading.
- Fixed 8-bit rle .bmp loading.
- Got lbm support working, but it only works with one image I have and none of the PSP-generated ones.
- Made all utility.c functions private. Equivalent functionality is in ilGetInteger().
- Added iluImageParameter() to control filtering in iluScale() and placement in iluEnlargeCanvas().
- Unified ilCopyPixels1D, ilCopyPixels2D and ilCopyPixels3D into an ilCopyPixels.
- Combined ilCrop2D and ilCrop3D into an ilCrop.
- Changed ilSetError() to handle stack overflows in the error stack more gracefully.
- Used IL_MEM_SPEED_HINT in targa.c, bmp.c and pcx.c to decode quicker if IL_FASTEST is set.
- Got OpenIL* compiling and running under DJGPP again.
- Attempted to do ilutSetWinClipboard() and ilutGetWinClipboard() (but failed miserably!).
- Added 3d mipmap generation in ilu (totally untested).
- Changed ilTexSubImage() to ilSetData().
- Moved ilCompareImage() and ilColoursUsed() to ilu.
- Removed ilDefaultStates() from public scope.
- Removed all the extension stuff.
- Changed ilSaveJascPal() to ilSavePal() and added support for saving palettes to ilSave().
- Changed ilDeleteImages() to use a quicker algorithm (not having to check against the linked list).
- Removed ilSetPixel() due to objections.
- Renamed ilCreateDefaultTex() to ilDefaultImage().
- Removed ilutGetState().
- Changed OpenIL* project settings to optimize for size instead of speed.
- Removed ilutOglIsExtensionSupported() from ilut.
- Changed the IL_OPENGL, IL_ALLEGRO and IL_DIRECTX #defines to ILUT_OPENGL, ILUT_ALLEGRO and ILUT_DIRECTX, respectively.
- Mapped ilu and ilut errors to their corresponding OpenIL errors.
- Fixed ilPushAttrib, ilPopAttrib and their ilut counterparts.
- Moved file handling from internal.c/.h to the new files.c/.h.
- Added IL_IMAGE_DEPTH and IL_IMAGE_SIZE_OF_DATA #defines.
- Looked at libjpeg docs and jdatasrc.c to figure how to make it use all the input types OpenIL supports.
- Started 1-bit .pcx support.
- Added iluEmboss() and iluEmbossDark().
- Made iluScaleColours() work with palette'd images.
- Added the ilFilters class to the C++ wrapper.
- Added an iluNoisify() function.
- Optimized the iluScalexD_() functions a lot.
- Multiplied the image size by a correction factor of 4/3 when using IL_FASTEST with several formats. This is mostly for poor compression schemes that can result in larger compressed images than the uncompressed versions (e.g. RLE).
- Fixed a bug in ilGenImages when an image name is being reused (was NULL after deletion but never recreated).
- Reinstated .ico support. All icons I've passed to this have worked so far.
- Added ilSetPixels().
- Fixed a severe problem in ilCopyPixels().
- Created the "3d Test" and the "3d Targa Gen" projects to test 3d images.
- Fixed problems in several functions that would crash when a given parameter was NULL.
- Made ilCopyPixels() and ilSetPixels() heed set origins.
- Commented the hell out of some of the test apps.
- Renamed WindowTest to GLTest.
- Created a Windows Test (Windows-specific code).
- Fixed a buttload of little bugs and similar stuff that were found when using the Windows Test App.
- Fixed a serious bug in ilCopyImage().
- Fixed a bug in iluSharpen() where the image was sharpened with a flipped version of the image.
- Added IL_COLOR_INDEX -> IL_LUMINANCE conversion in ilConvertImage().
- Fixed ilConvertImage() with several conversions and optimized it a lot.
- Renamed pattern.c as io.c (loading and saving, along with determining and verifying functions).
- Changed the MSVC++ proejct setting to be a little more friendly.
- Fixed the problem with ilSavePal() sometimes saving 0-length .pal files.
- Created the GdiTest project.
- Finally got around to writing ilSaveTiff().
- Added the IL_LIB_TIFF_ERROR #define.
- Added ilRegisterMipNum() and ilRegisterNumImages().
- Added ilutLoadResource().
- Fixed ilutSetWinClipboard() and ilutGetWinClipboard().
- Worked some on iluRotate() and got it working.
- Added back ilOverlayImage() (crude but working...) with alpha blending.
- Redid iluColoursUsed() with a hash table.
- Added ilutSetHPal(().
- Made a neat installer with NullSoft's SuperPiMP installer dev kit.
- Fixed a bug in ilSaveJpeg() when saving images with alpha channels (libjpeg doesn't accept alpha channels).
- Started on matrix.c.
- Added iluGammaCorrectCurve().
- Added preliminary support for Half-Life's model format skins.
1.6.0b
------
- Decided to up OpenIL to a beta status.
- Fixed a potentially harmful bug if too many images were used in iEnlargeStack().
- Changed ilBindImage(), ilGenImages() and ilDeleteImages() to be more like their OpenGL counterparts.
- Fixed ilConvertPal().
- Added ilCompareImage().
- Added ilSetPixel().
- Fixed ilSaveBitmap() and ilLoadBitmap().
- Fixed many small bugs and changed some small things around that weren't noteworthy enough to document, imo.
- Included the first (yet incomplete) documentation.
1.5.9a
------
- Added the ilIsImage() definition to il.h. I quite obviously had forgotten to when I made the function, so it has just been sitting there in many releases...oh well.
- Looked at the GIMP file associations and noticed PNM, which collectively describes pbm/pgm/ppm, so I decided to change some things in ppmpgm.c, such as renaming it to pnm.c and renaming functions.
- Fixed two pretty major .bmp bugs. One stemmed from the fact that I misread the .bmp documentation and thought it was word-aligned, but it was dword-aligned. The other was the the biColoursUsed member of the .bmp header isn't filled-out correctly half the time, so now it's always calculating a 256-entry palette.
1.5.8a
------
- Fixed a major bug in ilTexSubImage2D_().
- Renamed ilTexSubImage2D() and ilTexSubImage2D_() to ilTexSubImage() and ilTexSubImage_(), respectively.
- ilClearImage now uses the correct error code.
- Fixed a bug in ilTexImage_() that sets IL_OUT_OF_MEMORY, even when there is plenty of memory, thanks to ABee.
- Fixed bugs in the ilCopyPixels family that dealt with not calculating the offsets correctly and added error-checking to them to make sure the caller wasn't requesting dimensions too large.
- Replaced all the 4x4 filter matrices with 3x3 filter matrices.
- Added ilColoursUsed().
- Added iluPixelize().
- Changed ilColoursUsed() to use a hash table, but it's still pretty damn slow...
- Added ilHint().
- Added ilutGetHPal().
- Added ilClearColour() and changed ilClearImage() to use the values passed to it.
- Optimized ilColoursUsed() by using a totally different algorithm (quicksort).
- Fixed ilConvertImage(GL_LUMINANCE) when the source was a bgr(a) image.
1.5.7a
------
- Added some support for 1-bit .bmp's.
- 16-bit targa files are now converted to 24-bit on-the-fly.
- Fixed a bug in iReadUnmapTga() where it only read 24-bit targas.
- Added iluVersion() and ilutVersion().
- Added ILUT_OPENGL_CONV to ilut to be enforced in ilutOglFormat() in ilut/opengl.c.
- Started enforcing IL_CONV_PAL (automatically converts palette'd images to unmapped images).
- Added checks for IL_ORIGIN_SET to more image formats.
- Moved ilMirrorImage() and ilNegativeImage() to ilu from il.
- Added iluEnlargeCanvas().
- Fixed a bug in ilCopyPixels2D() and ilCopyPixels3D() when the destination was not the same size as the source.
- Added iluCropImage2D() and iluCropImage3D().
- Fixed a stupid bug in iluNegativeImage().
- Changed a lot of ilu function names from iluxxxImage to iluxxx - the Image part was sorta redundant, as this *is* an image lib.
- Added filter.c and everything within.
1.5.2a
------
- Started differentiating between SizeOfPlane and SizeOfData somewhat.
- Updated header.c to output the depth.
- Fixed a bug in ilCopyImage_() where it was using iCurImage instead of Image.
- Updated ilMirrorImage() to work with depths of other than just 1.
- Updated ilFlipImage() to work with depths of other than just 1.
- Changed ilCopyPixels() to ilCopyPixels2D() and created 1D and 3D versions.
- Got rid of the now-obsolete IL_IMAGE_1D, IL_IMAGE_2D and IL_IMAGE_3D #define's and removed Target params from ilActiveImage(), ilActiveMipmap() and ilBindImage().
- Added new scaling functions (iluScaleImage1D and iluScaleImage3D) to ilu.
- Added new rotation function to ilu.
- Renamed ilTexImage2D() to ilTexImage().
- Fixed where I accidentally left ILUT_USE_ALLEGRO #define'd in ilut.h.
1.5.0a
------
- Removed ilRealloc()/ilRecalloc() from being global...it's only used in istack.c now.
- Changed the ILTargaSave struct in il.h to use a 255-char array instead of a char pointer for ID, as VB didn't like it, and I also removed IDLen.
- Added ilVersion() and the IL_VERSION #define in il.h. This is to check to see if your executable was compiled with a different version of OpenIL than what is on the user's system.
- Removed ilGetFloat()/ilGetFloatv() from il.h.
- Updated the Cpp Wrapper project by adding an ilRender class.
- Fixed ilSaveBitmap() from unnecessarily swapping the colours.
- Updated ilSwapColours() to work with palettes better.
- Added an option in the OpenIL states to automagically convert palette'd images on loading, which is not used yet.
- Commented the butt out of OpenIL.
- Added image identification to ilLoadImage().
- Changed comments at the beginning of some exported functions to use //! instead of just //, so if you want to create documentation with DOxygen () if desired.
- Added ilMirrorImage().
- Added ilNegativeImage().
- Changed ilutConvertToAlleg() and ilutConvertToHBitmap() of ilut to be exported.
- Changed iSetInputFile() to use itell instead of ftell.
- Created a new iDefaultEof() function that will work on implementations that don't have one.
- Moved all the default file-reading functions to il.h and exported them.
- Removed all references to fEofProc().
- Changed ilGenImages() to use ilNewImage(1, 1, 1, 1) instead of ilNewImage(0, 0, 0, 0) to prevent division by 0 errors.
- Added ilSetDefaultFileCallbacks().
- Added Visual Basic stuff, thanks to Timo.
1.4.7a
------
- Fixed a potentially fatal error in ilConvertImage if converting from a palette'd image.
- Fixed ReadProc to take 4 parameters instead of just 3...it was causing problems with iread, which takes two size parameters instead of just one.
- Started preliminary support for 1 and 4-bit .bmp's.
- Fixed 8-bit .bmp loading.
- Rewrote iFreeMem() and a little of ilDeleteImages() in istack.c to fix a rare but very harmful bug when an image isn't loaded completely.
- Added support for more palettes in ilConvertPal().
- Updated png.c to use the new cross-language file-reading. This is the only lib that I could get to use the cross-language stuff for operating on already-opened files.
1.4.6a
------
- Updated openil.def to include ilSetFileCallbacks, ilRegisterFormat and ilRemoveFormat.
- Changed all the new portable file functions to use the __stdcall convention (ILAPIENTRY) and created default functions that use __stdcall, as the stdio.h file functions use __cdecl.
1.4.5a
------
- Changed ilut's DllMain().
- Added support for the .jpeg and .jpe extensions instead of just .jpg.
- Fixed iSeekFile().
- Added complete support for using your own file routines (so other languages can use the ilLoadxxxF file routines).
- Added register.c and register.h in and appropriate typedefs and function declarations in il.h.
- Removed the "else" in front of all the tests in ilLoadImage and ilSaveImage...they were unnecessary.
- Rewrote ilutOglScreen() to not close the current image.
- Rewrote ilConvertImage() to not close the current image.
- Updated djgpp.mak to use register.c.
1.4.2a
------
- Moved all il*.h out of /OpenIL* and into /include/il.
- Changed ABee's e-mail address where relevant.
- Updated the openil.def file, which didn't include the jpeg functions.
- Updated the Delphi headers to use the correct default IL_NO_XXX #define's.
- #define'd _IL_BUILD_LIBRARY in the internal.h files of ilu and ilut. I'm so surprised this warning didn't pop up earlier, but oh well, it's a Microsoft product I'm compiling with. ;-)
- Updated the out of date djgpp.mak (not tested).
1.4.1a
------
- Changed png.h to png_.h
- Added internal but exported functions to the .def files.
- Better Delphi support from Alexander Blach, plus a lovely test app in the /Delphi/Test folder.
- Changed the readme.txt file some.
1.4.0a
------
New goodies, mostly thanks to Alexander Blach (ABee).
- Delphi headers in the /Delphi directory created by ABee.
- .Def files in the /Def directory created by ABee.
- .Def files added to the projects.
1.3.6a
------
Not really much in this release...just trying to make the library more bearable to use.
- Several fixes in the project files
- Changes to the readme.
- Fixed the icons, which got corrupted in previous releases without my knowledge.
- Replaced the #flipCode logo, which had gotten corrupted, too.
1.3.5a
------
Just bugfixes mostly.
- ilutStartup() doesn't call ilutOglInit() anymore, due to problems with OpenGL not being initialized beforehand.
- If image loading doesn't succeed, a future call to ilDeleteImages should not fail anymore.
1.3.4a
------
- Changed targa.c in iReadUnmapTga() by commenting-out the line that used GL_UNSIGNED_SHORT_5_5_5_1_EXT. It required glext.h.
- Restructured project files from Lightman.
- Cpp wrapper compiles into a .lib.
1.3.3a
------
- Added IL_IMAGE_DATA, IL_PALETTE_BPP and IL_PALETTE_NUM_COLS to il.h and used them in states.c.
- Updated the extremely out-of-date readme.txt.
- Changed all the local variables in iConvertImage to static.
- Started on converting from GL_COLOR_INDEX in iConvertImage().
- In ilConvertPal(), if the dest and src format are the same, it now returns GL_TRUE instead of erroring.
- Modified ilut's MakeGLCompliant() to use il's ilConvertImage().
- Shortened all source filenames to 8.3 character format for systems without long filenames.
- Moved the globals from istack.c to istack.h.
- Added image validation for targas to tga.c.
- Combined BMPHEAD and BMPINFO together in bmp.h and bmp.c as BMPHEAD.
- Added image validation for bitmaps to bmp.c.
- Created ilSaveJascPal().
- Added all the IL_NO_* #define's to il.h and commented them out (except for IL_NO_GIF).
- Started on ilSavePcx() - doesn't work yet.
- Fixed ilSaveSgi() to save in rgb(a) format instead of bgr(a) format.
- Added IL_VERSION_1_3_3 to il.h
1.3.1a
------
- Completely rewrote rle.c and added it back into the project.
- ilRleCompress() added to rle.c.
- Changed targa.c to utilize rle compression.
- Started ilSaveSgi() in sgi.c.
- Changed all the SaveLittle* and SaveBig* functions in endianness.c where I had copied them from LoadLittle* and LoadBig*, respectively, so they were not swapping the right value.
- sgi.c's ilSaveSgi() works, except for saving rle-compressed files.
- Changed .jpg's origin to upper-left.
- Changed .pcx's origin to upper-left.
- Edited bmp.c to read in a pad pixel instead of just a pad byte when the image's width is of an uneven dimension.
- Implemented ilSaveBitmap() in bmp.c in its entirety.
1.2.8a
------
- Finished the majority of the Cpp Wrapper project.
- Found out how to initialize OpenIL at startup in gcc at, so changed main.c of each lib.
- Actually got Allegro to utilize OpenIL with some minor reworking. This should be automagic in the next version.
1.2.7a
------
- Got rid of the TARGA struct in targa.h since it wasn't being used and modified function parameters accordingly.
- Added several functions to endianness.c and started using them. I won't be able to read whole structs from files at once anymore so that I can support both little and big endian processors. Oh well, that's the price I pay for portability. =/ I didn't change pal.c, because it doesn't use iread yet...will be changed in the next release. I can't answer for libs I am utilizing, but OpenIL should be fairly portable to big-endian systems now.
- Added support for saving .png files.
1.2.6a
------
- Added preliminary support for DirectX (ack) in ilut's directx.c (and .h).
- Did everything in mipmap.c from scratch.
- Changed iCurrentImage to iCurImage...just nicer-looking and easier to type. =]
- Changed the Next and Mipmaps data members of ILimage to be of type ILimage...a whole lot easier to use than having to cast GLvoid*. Also added NumNext and NumMips members - not sure if they are necessary though. They may just add bloat to an already large struct. Also added SizeOfPlane to the struct (will help with 3d texture volumes).
- Streamlined ilReadUncompBmp() in bmp.c a lot.
- The targa functions now skip over the image id instead of allocating memory for it, reading it, then immediately freeing the memory.
- Implemented ilTexImage2D and ilTexSubImage2D functions.
- Changed all loading functions and functions that update iCurImage. The snippet of code that did this previously was like this:
Image = ilNewImage(Width, Height, Depth, Bpp);
if (Image == NULL) {
ilSetError(IL_OUT_OF_MEMORY);
return GL_FALSE;
}
ilCloseImage(iCurImage);
ilSetCurImage(Image);
Now the code is like:
if (!ilTexImage2D(Width, Height, Depth, Bpp, GL_RGB, NULL)) {
ilSetError(IL_OUT_OF_MEMORY);
return GL_FALSE;
}
- Fixed .bmp support to correctly skip padding.
- Found out .pcx support is pretty shoddy...I will rewrite it soon.
- Rewrote the .pcx reading function iUncompressPcx from scratch...works perfectly. =] Sometimes it's just best to give something a fresh approach.
- Added .pcx validity test functions.
- Changed .raw functions to take Depth as an parameter.
- Changed from absolute to relative paths for the test .exe's in the MSVC++ project settings.
- Added back in MakeGLCompliant() to opengl.c of ilut. Finally got WindowTest to display images with dimensions that are not powers of 2, as MakeGLCompliant() automagically converts the texture to the appropriate dimensions. The only foreseeable problem is if the texture is greater than 256x256, because the VooDoo series of cards may choke and die. Is there an elegant way around this? Maybe I could introduce a new ilDisable/ilEnable() enum. I'm using glGetIntegerv(GL_MAX_TEXTURE_SIZE, &MaxTexW); as a temporary hack right now.
- Worked some more on iConvertImage() and performed the first test - converting from rgb to luminance, and it works. =] The values I used for converting are based on the NTSC values for television and were obtained from, section 6.
- Changed the #define IL_ILLEGAL_PARAM to IL_INVALID_PARAM. Maybe I should just use IL_INVALID_VALUE?
- Added #pragma comment(linker, "/NODEFAULTLIB:libc") to openil\internal.h to get rid of that damn warning.
- Cleaned-up openilut\opengl.c a little bit and got rid of all those erraneous commented-out functions. Also wrapped wglGetProcAddress() in an #ifdef _WIN32/#endif pair. Also flips the texture if the origin is different than the current OpenIL origin to match (will if the user sets it correctly) OpenGL's origin.
- Added ilGetPalBaseType() in utility.c.
- Added support for Dr. Halo palettes (always output with .cut files).
- Added back in png.c and png.h to the project.
- Changed dll.c to main.c and wrapped the DllMain in an #ifdef/#endif pair.
- Made a makefile for Djgpp. It's missing the .c files that require an external library to operate, but they can be added back in easily (not using dos edit!). It has been tested to create appropriate output, but has yet to be tested in an actual program.
- Added an \objs folder for each OpenIL* dir for Djgpp compile.
- Found out IL_PACKSTRUCT needs to come before the name of the struct instead of after...changed it in all files that use IL_PACKSTRUCT. =]
- Fixed an extremely harmful memory problem in sgi.c in iReadRleSgi() where I was malloc'ing only Head->ZSize instead of Head->ZSize * sizeof(GLubyte*). Took me a few hours to find that one...the VC++ debugger didn't help much at all. =/ Rewrote iGetScanLine() to not use iExpandScanLine() and got rid of iExpandScanLine().
- Started on iff.c, my new image format (oh gawd, not another one =), but then I found out later that .iff is already a graphics format from the Amiga. Need to find a new extension...
- Changed png.c to use libpng's png_set_read_fn(). Added validity-checking functions for png's.
1.1.9a
------
- Changed the screenshot functions in ilut.
- Changed the WindowTest around some.
1.1.8a
------
- Added back a few loading formats that were inadvertedly removed in the 1.01a release.
- Worked some on the empty iConvertImage function and wrote an ilConvertImage function. There's a lot more work to do on iConvertImage.
- Replaced false and true with GL_FALSE and GL_TRUE, respectively.
- Added back in .jpg support.
- Added in .tif support with the help of libtiff.
- Added in .col support.
- Improved .ppm and .pgm support considerably - also added .pbm support (btw, psp4 does not output proper binary .pbm files, so I couldn't test it well =/ ).
- ilSaveImage added.
- .pic, .pcd and .cut loading added.
- Changed .pcx to be in rgb order (had it as bgr for the palette).
- Added utility.cpp and functions in it.
- Updated ilu's error strings.
- Updated ilCloseImage to take heed of mipmaps, extra and chained images (none of which are used yet but may be soon enough...).
1.0.1a
------
Restructured lots of the library so that it now uses an image stack (in imagestack.*). Now images have to be bound before being used. All loading functions now load directly into the stack.
0.0.1a - First release
---------------------
Released in sorta bad condition with little documentation. | https://sources.debian.org/src/devil/1.7.8-6/NEWS/ | CC-MAIN-2019-43 | refinedweb | 8,523 | 62.44 |
NAME
feature_test_macros - feature test macros
DESCRIPTION
Feature test macros allow the programmer to control the definitions that are exposed by system header files when a program is compiled.
NOTE: In order to be effective, a feature test macro must be defined before including any header files. This can be done either in the compilation command (cc -DMACRO=value) or by defining the macro within the source code before including any headers. The requirement that the macro must be defined before including any header file exists because header files may freely include one another. Thus, for example, in the following lines, defining the _GNU_SOURCE macro may have no effect because the header <abc.h> itself includes <xyz.h> (POSIX explicitly allows this):
#include <abc.h> #define _GNU_SOURCE #include <xys.h>.:
_POSIX_C_SOURCE >= 200112L
in the feature test macro requirements in the SYNOPSIS of a man page, it is implicit that the following has the same effect:
_XOPEN_SOURCE >= 600
_POSIX_C_SOURCE >= 200809L
in the feature test macro requirements in the SYNOPSIS of a man page, it is implicit that the following has the same effect:
_XOPEN_SOURCE >= 700:
:
If __STRICT_ANSI__ is not defined, or _XOPEN_SOURCE is defined with a value greater than or equal to 500 and neither _POSIX_SOURCE nor _POSIX_C_SOURCE is explicitly defined, then the following macros are implicitly defined:
operations into references before 2.10; 199506L in glibc versions before 2.5; 199309L in glibc versions before 2.1) and _XOPEN_SOURCE with the value 700 (600 in glibc versions before 2.10; 500 in glibc versions before 2.2). In addition, various GNU-specific extensions are also exposed..
If _FORTIFY_SOURCE is set to 1, with compiler optimization level 1 (gcc -O1) and above, checks that shouldn’t change the behavior of conforming programs are performed. With _FORTIFY_SOURCE set to 2, some more checking is added, but some conforming programs might fail.
Some of the checks can be performed at compile time _SOURCE are not defined by default.
If _POSIX_SOURCE and _POSIX_C_SOURCE are not explicitly defined, and either __STRICT_ANSI__ is not defined or _XOPEN_SOURCE is defined with a value of 500 or more, then
•
Multiple macros can be defined; the results are additive.
CONFORMING TO
POSIX.1 specifies _POSIX_C_SOURCE, _POSIX_SOURCE, and _XOPEN_SOURCE.
_XOPEN_SOURCE_EXTENDED was specified by XPG4v2 (aka SUSv1), but is not present in SUSv2 and later. _FILE_OFFSET_BITS is not specified by any standard, but is employed on some other implementations.
_BSD_SOURCE, _SVID_SOURCE, _DEFAULT.
EXAMPLE
The program below can be used to explore how the various feature test macros are set depending on the glibc version and what feature test macros are explicitly set. The following shell session, on a system with glibc 2.10, shows some examples of what we would see:
$
Program source
/* _ISOC11_SOURCE printf("_ISOC11 _DEFAULT_SOURCE printf("_DEFAULT); }
SEE ALSO
The section "Feature Test Macros" under info libc.
/usr/include/features.h
COLOPHON
This page is part of release 5.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://man.cx/feature_test_macros(7) | CC-MAIN-2020-29 | refinedweb | 506 | 54.52 |
When to Use a Logarithmic Scale
- Sep 19 • 9 min read
- Key Terms: log scale
In this tutorial, I'll explain the importance of log scales in data visualizations and provide a simple example.
Simply put, log scales can help visualize between large descrepancies of values on a single axis - such as if you wanted to compare net worth of individuals worth \\(40,000 and \\)800,000,000.
Import Modules
import matplotlib.pyplot as plt from matplotlib import ticker import matplotlib.ticker as tick from matplotlib.ticker import ScalarFormatter
Linear Scale
x_values = list(range(1, 1001)) y_values = list(range(1, 1001))
In this plot below, I plot a simple function of
y=x. So, for every input value of
x, you get the same output value regarded as
y. Here's the relationship of the first few values detailed in a table.
A linear scale assigns equal horizontal or vertical distances to axes values. Take note of the sequential x-axes and y-axes values that each increase by 200.
plt.figure(figsize=(8, 8)) plt.plot(x_values, y_values) plt.title("y=x Function On a Y-Axis Linear Scale");
Log Scale
First off, what are logarithms? Logarithms help us answer the question: how many of one number do we multiply to get another number?
For example, how many 3s do we multiply to get 9? The answer is 3 x 3 = 9 so we had to multiple 3 twice to get 9.
This logic is powerful in helping us build a new scale to easily compare small and large values on a chart.
The number line scale below by Math is Fun helps visualize the differences between a linear scale and logarithm scale.
Going back to our earlier example, below is the function
y=x with the y-axis on a logarithmic scale.
All the same data points from above are plotted; however, notice how the y-axis tick values jump from
1 to
10 to
100 to
1K. With each y-axis tick value, there's an exponential increase.
plt.figure(figsize=(8, 8)) plt.plot(x_values, y_values, label='linear scale'); plt.yscale('log') plt.title("y=x Function On a Y-Axis Log Scale") ax = plt.gca() ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
Real-Life Example: Visualizing Net Worth of People
I attended the University of Michigan for college.
Below, I randomly generated fake net worth data for eight individuals. Since I went to Michigan, I also found the actual net worth data for three extremely wealthy alumni of the university including: Stephen M. Ross, Bobby Kotick and Tom Brady.
data ={'net_worth_us_dollars': [40000, 14000, 120000, 8300, 3200, 3500, 28000, 120000, 150000, 7000000000, 7700000000, 180000000], 'name': ['Joe Smith', 'Jill Brown', 'Mark James', 'Sean Gopher', 'Mary Blake', 'Paul George', 'Melanie Smith', 'Joe Gold', 'Bill Brew', 'Bobby Kotick', 'Stephen M. Ross', 'Tom Brady']} df = pd.DataFrame(data)
Below is a printout of the net worth of these 11 individuals sorted from most wealthy to least wealthy.
The wealthiest individual has a net worth of \\(7,700,000,000 and the least wealthy individual has a net worth of \\)3,200.
df.sort_values(by='net_worth_us_dollars', ascending=False)
High Net Worth Individuals Bar Chart - Linear Scale
Here is a horizontal bar chart of the names of individuals and their net worth on a linear scale.) ax = plt.gca() ax.xaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values))
It's glaringly obvious that we cannot see the net worth of the 8 least wealthy individuals. This is a big problem as it makes this graph uninterpretable.
High Net Worth Individuals Bar Chart - Log Scale
Here is a horizontal bar chart of the names of individuals and their net worth on a logarithmic scale.
df.set_index('name')['net_worth_us_dollars'].sort_values().plot(kind='barh', figsize=(12, 8), logx=True) plt.xlabel("Net Worth [$]", labelpad=16) plt.ylabel("Name", labelpad=16) plt.title("Net Worth of a Sample of University of Michigan Alumni", y=1.02, fontsize=20) ax = plt.gca() ax.xaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
Look closely at how the scale on the x-axis changed.
This visualization is much better! We can now easily interpret the net worth of all 11 individuals on this visualization.
Real-Life Example: Tesla Inc. (TSLA) Stock Price Over Time
Tesla is a company best known for their electric vehicles. They IPOed on June 29, 2010. Since then, their stock has been trading on the NASDAQ as the symbol TSLA.
In recent years, the Tesla stock has surged upwards despite a lot of volatility.
df_tesla = pd.read_csv('TSLA.csv')
df_tesla.head()
df_tesla['date_datetime'] = pd.to_datetime(df_tesla['Date']) df_tesla['date_month_day_year'] = df_tesla['date_datetime'].dt.strftime('%b %-d, %Y')
Tesla Stock Price Over Time - Linear Scale
On this linear scale below, we can see the huge spike around April 2013. However, before that, at a glance, the stock looks fairly stable. TSLA seemed like a rather boring holding early on.
df_tesla.set_index('date_month_day_year')['Close'].plot(kind='line', figsize=(12, 8), rot=30) plt.ylabel("Close Price", labelpad=16) plt.xlabel("Date", labelpad=16) plt.title("Tesla Inc. (TSLA) Stock Price Over Time", y=1.02, fontsize=20);
Tesla Stock Price Over Time - Log Scale
The visualization below shows the trend of the Tesla stock price over time on a log scale.
The linear scale above was a bit deceiving. Now, it's much easier to see that in the first ~3 years (until April 2013) after the IPO, the stock significantly increased from ~
18 to ~
37 - doubling in price. That would be a great return for investors! Yet, nowadays I'd be hard-pressed to find investors touting the first 3 years of Tesla's stock performance.
ax = df_tesla.set_index('date_month_day_year')['Close'].plot(kind='line', figsize=(14, 8), rot=30, logy=True) plt.ylabel("Close Price", labelpad=18) plt.xlabel("Date", labelpad=18) plt.title("Tesla Inc. (TSLA) Stock Price Over Time", y=1.02, fontsize=20) for axis in [ax.yaxis]: axis.set_major_formatter(ScalarFormatter()) ax.set_yticks([25, 50, 75, 100, 125, 150, 175, 200, 250, 300, 350, 400]);
| https://dfrieds.com/data-visualizations/when-use-log-scale | CC-MAIN-2019-26 | refinedweb | 1,004 | 58.99 |
Does anyone here live in the bay (Gloaming Hill area) with VDSL and what speeds are you getting?
Also, is there a way to find the cabinets without driving the streets?
use the chorus map:
and see what others nearby are getting on VDSL, just type in near by addresses and it shows the speed they are conected, someone near by should have VDSL. Not 100% accurate but it gives an idea, things like cable quality come into play
see where the nearest cabinet is there, but you will need to use it inconjunction with this one too:
as it shows you the approximate boundaries for the cabinets/exchanges. find your area, then match it to where the cabinet is. its not 100% but its pretty close
also remember the cables doesnt always take the direct route
When looking to move into a place, its always a helpful idea to physically ask for an ISP to run a Prequal for you. Gives you a more reasonable expectation of what your likely to get.
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
04fuxake:
AKLWestie:
And remember to have a master filter installed
Is that something I should specify or something that he tech should know? We will be ditching the landline and going naked.
Depends on your situation. You can always get away with running a single connection from the ETP.
Point of installing a master filter, is to isolate the POTS network from your DSL. Generally you also use CAT5E or better for your dsl jackpoint to help the signal along that bit more.
So if your line is purely a single run (IE no joins off it - this does count jackpoints which are connected but not in use) then your already likely getting the best case.
This picture essentially summarizes what i have said above.
Added advantage of having a master filter, is in the case of a fault it can often help with troubleshooting.
Do remember, if your flatting. Speak to your landlord, they should have no issue at all once explained clearly and in the end it is a bonus for their future clients but its best to be upfront in saying something like 'Hey, we are keen to move in, however we would like to get vdsl installed which involves a master filter installation done by a trained professional to ensure the best connectivity.'
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have. | https://www.geekzone.co.nz/forums.asp?forumid=49&topicid=193858 | CC-MAIN-2019-04 | refinedweb | 436 | 66.27 |
Turbo C - Pointer to an Array
Here is a simple example on how to declare and use pointer variables that is used to point to an array in C Programming Language.
Pointers are used to point to a memory location. In Turbo C, size of pointer variable in 2 bytes since it is a 16 bit applications where as in visual c++ it would be 4 bytes since it 32 bit applilcation. Today 64 bit compilers will allocate 8 bytes for a pointer variable.
The following is a good example illustrating the concept of pointer that is pointing to an array. p is an pointer that points to integer array called myarray. Initially when p is assigned, it is pointing to the first location of the array. When it is incremented in the loop, it is pointing to next element in the array.
Source Code
#include <stdio.h>
void main()
{
int i;
int myarray[] = { 100, 200, 300, 400, 500 };
int *p = myarray;
int numelements = sizeof(myarray) / sizeof(myarray[0]);
for(i = 0; i < numelements; i++)
{
printf("Element[%d] = %d\n", i + 1, *p);
p++;
}
}
Output
Element[1] = 100
Element[2] = 200
Element[3] = 300
Element[4] = 400
Element[5] = 500 | http://www.softwareandfinance.com/Turbo_C/Pointer_To_An_Array.html | CC-MAIN-2016-44 | refinedweb | 200 | 60.95 |
For many computer programs it is necessary to round numbers. For example an invoice amount should only have two decimal places and a tool for time management often does not have to be accurate to the millisecond. Fortunately you don‘t have to write a method for that yourself. In Java or JavaScript you can use Math.round, Python has a built-in function for rounding and the Kotlin Standard Library also contains a method for this purpose. Anyway some of these functions have a few surprises in store and violate the principle of least astonishment. The principle of least astonishment was first formulated by Geoffrey James in his book The Tao of Programming. It states that a program should always behave in the way the user expects it to, but it can also be applied to source code. Thus a method or a class should have a name that describes its behavior in a proper way.
So, what would you expect a method with the name round to do? The most common way to round numbers is the so called round half up method. It means that half-way values are always rounded up. For example 4.5 gets rounded to 5 and 3.5 gets rounded to 4. Negative numbers get rounded in the same way, for example -4.5 gets rounded to -4. In fact the Math.round functions in Java and JavaScript use this kind of rounding and thus behave in a way most people would expect.
But in other programming languages this can be different. Actually I used the Python built-in rounding function for some time without recognizing it does not always round half-way values up. For example round(3.5) results in 4 as you would expect, but round(4.5) also returns 4. That‘s because Python uses the so called round half to even method for rounding values. This means that half-way values are always rounded to the nearest even number. The advantage in this kind of rounding is that if you add mulitple rounded values the error gets minimized, so it can be beneficial for statistical calculations. If you still want to round half-way values up in Python, you can implement your own rounding function:
def round_half_up(number, decimals: int):
    rounded_value = int(number * (10**decimals) + 0.5) / (10**decimals)
    if rounded_value % 1 == 0:
        rounded_value = int(rounded_value)
    return rounded_value

round_half_up(4.5, decimals=0)  # results in 5
A different way in Python to round half-way values up is to use the decimal module, which contains different rounding modes:
from decimal import *

Decimal("4.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # results in 5
It should be noted that the ROUND_HALF_UP mode in this module does not actually use the round half up method as explained above, but the also very common round half away from zero method. So for positive numbers the results are the same, but -4.5 does not get rounded to -4, but -5.
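A short sketch of the difference, following the definitions above:

from decimal import Decimal, ROUND_HALF_UP

# "Round half away from zero": -4.5 moves away from zero to -5
print(Decimal("-4.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # -5

# The built-in round() uses round half to even instead: -4 is the nearest even number
print(round(-4.5))  # -4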
Python is by the way not the only programming language that uses the round half to even method. For example Kotlin and R round half-way values to the nearest even number, too. However for Kotlin there are several easy ways to round half-way values up: you could use the methods roundToInt or roundToLong from the standard library or the Math.round method from Java instead of the method round.
It should also be noted that the methods for rounding explained here are not the only ones. Instead of rounding half-way values up you could also use the round half down method, so rounding 3.5 would result in 3. And instead of rounding half to even you could use the round half to odd method, where 4.5 would get rounded to 5, as would 5.5. There are some more methods, and every one of them has its use case, so you should always choose carefully.
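For example, a minimal round-half-down helper for positive numbers could look like this (an illustrative sketch, not from any standard library):

import math

def round_half_down(number):
    # Half-way values are rounded down: 3.5 -> 3, but 3.6 -> 4
    return math.ceil(number - 0.5)

print(round_half_down(3.5))  # 3
print(round_half_down(4.5))  # 4
print(round_half_down(3.6))  # 4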
To sum it up, rounding is not as easy as it seems. Although most programming languages have a method for rounding in their standard library you should always take a closer look and check if the rounding function you want to use behaves in the way you expect and want it to.
Sometimes you will be surprised. | https://schneide.blog/2021/04/05/rounding-numbers-is-not-that-easy/ | CC-MAIN-2022-27 | refinedweb | 714 | 73.58 |
Re: Sharing Code
From: Peter Foot [MVP] (feedback_at_nospam-inthehand.com)
Date: 02/07/05
Date: Mon, 7 Feb 2005 23:04:21 -0000
I have come across what I think is the same error when referencing a device
dll project from a desktop visual basic exe project (they weren't in the
same solution).
What I found would happen is that when the device dll was built visual
studio would copy a number of compact framework assemblies into the debug or
release folder along with the dll (usually Microsoft.WindowsCE.Forms.dll and
System.Windows.Forms.dll).
In the desktop project when you add a reference to the dll, it sees these
additional dlls and sets the path up as a default reference path for the
desktop project. This means your link to System.Windows.Forms.dll now
incorrectly points to the device version. This explains the reference to AGL
because this is a namespace within the compact framework
System.Windows.Forms assembly - which has a drastically different
architecture from the desktop version.
What cured it for me was to go into the project properties for the desktop
application, manually remove the reference path, and delete the
System.Windows.Forms.dll from the output folder of the device dll project.
However occasionally it still decides to automatically change it back for me
:-(
Peter
--
Peter Foot
Windows Embedded MVP

"Rafael Sancho" <RafaelSancho@discussions.microsoft.com> wrote in message
news:B5F80B66-F966-45E8-BE4A-F2A08CE1C208@microsoft.com...
> I'm not sure if you understand my problem. I've created 2 diferent form
> projects, one smart device and one desktop, but they're in the same
> Solution. In this solution are another 2 srmart device projects (class
> library projects).
>
> The 2 form projects have references to this class library projects. If I
> select the smart device form project as start up project I have no
> problem,
> but if I select the desktop project as startup project, after I build the
> solution, the forms do not open anymore (the message I wrote before
> appears).
> I've read the page you've said before I started, that's why I've created 2
> diferent form projects.
>
> I've tryed to create 2 diferent solutions (one all Smart Device and one
> with
> desktop forms ans sharing the code), but when I add the files to the
> desktop
> solution a copy of the files is created in the solution folder, and if I
> change one of the files in the smart device solution the changes do not
> appear in the desktop solution.
>
> Am I doing something wrong?
>
> Thans for your attention,
>
> Rafael Sancho
>
>
> "Daniel Moth" wrote:
>
>> You can share code with compilation constants but that doesn't work well
>> with forms (because the code is autogenerated for you and the resource
>> files
>> are incompatible). Personally, I have different forms for the two
>> platforms
>> and share the rest of the non-UI code. For some cases where the UI is
>> identical, you can design it in the Smart Device Project but *do not*
>> open
>> the form in the desktop project.
>>
>> For your specific problem, "show all files" in solution explorer, and
>> delete
>> the resx file under the problematic form.
>>
>> For more on sharing code between platforms read this:
>>
>>
>> Cheers
>> Daniel
>> --
>>
>>
>>
>> "Rafael Sancho" <Rafael Sancho@discussions.microsoft.com> wrote in
>> message
>> news:0144E8C3-4079-43A8-91C5-F8D3B2B7BF68@microsoft.com...
>> > Hi,
>> >
>> > I'm trying to share code between a WCE application and a W32
>> > application.
>> > I've created a Solution and created 4 projects (one Windows app, one
>> > Smart
>> > Device App - Selected as an Windows App, two Smart Device App -
>> > Selected
>> > as
>> > Class Library).
>> >
>> > I've developed all the WCE app without problem, but when I tryed to
>> > develop
>> > the W32 app, I have problems after compiling the first time. The forms
>> > do
>> > not
>> > open, and the following error apear:
>> >
>> > "An error occurred while loading the document. Fix the error, and try
>> > loading the document again. The error message follows:
>> >
>> > An exception occurred while trying to create an instance of
>> > System.Windows.Forms.Form. The exception was 'Unable to load DLL
>> > (AGL)'".
>> >
>> > I've read some stuff about sharing code, but any of them describes this
>> > problem. Does anyone knows what is this error and how to solve it? Am I
>> > doing
>> > something wrong?
>> >
>> > Thanks in advance.
>>
>>
kaaedit 0.21.0
kaa - console text editor.
Kaa is a small and easy CUI text editor for console/terminal emulator environments.
Contents
- Overview
- Setup
- Command line options
- Terminal setting
- Usage
- Customization
- Hacking
- Acknowledgement
- Links
- Version history
Overview
Kaa is an easy yet powerful text editor for console user interface, providing numerous features like
- Macro recording.
- Undo/Redo.
- Multiple windows/frames.
- Syntax highlighting.
- Grep.
- Python debugger.
- Open source software(MIT).
- More to come!
See the project site for more screen shots.
Kaa is easy!
Kaa is very easy to learn in spite of its rich functionality. The only thing you need to remember is "To display the menu, hit the F1 key or alt+'/' key". Most of kaa's basic features can be reached from this menu.
Command line options
To start kaa, type kaa at the command prompt. The command line options are:

usage: kaa [-h] [--version] [--no-init] [--init-script INIT_SCRIPT]
           [--palette PALETTE] [--term TERM]
           [file [file ...]]

positional arguments:
  file

optional arguments:
  -h, --help            show this help message and exit
  --version             show version info and exit
  --no-init             skip loading initialization script
  --init-script INIT_SCRIPT
                        execute file as initialization script instead of
                        default initialization file
  --palette PALETTE     color palette. available values: dark, light.
  --term TERM, -t TERM  specify terminal type.

Terminal setting

With Terminal.app you can set 256 color mode:
- Select Preferences menu.
- Open the Settings tab.
- Select xterm-256color for "Declare terminal as" field.
For iTerm2, you can:
In the replace dialog, back-reference characters in the replace string will be replaced with the substring matched by the corresponding group in the search string. For example, when the search string is '(a+)(b+)' and the replace string is '\2\1', the matched string 'aabb' will be replaced with 'bbaa'.
Grep dialog
The Grep dialog has three input fields. Search is a plain text or regular expression string to search for. Directory is the directory in which to start searching. If the Tree button is checked, files are searched recursively. Filenames is a space-separated list of file specs in shell-style wildcards (e.g., *.txt *.py *.doc). The up arrow key displays the history of each input field.
In the grep result window, use the F9 and F10 keys to traverse matches forward and backward.
Python console
Unlike Python's interactive console, Python console in kaa does not execute Python script until you hit alt+Enter key. Until then you can edit Python script as if you are with editors without worrying about newlines and indentations.
When the alt+Enter key is hit, all the text in the window is executed as a Python script and the value of the expression is printed to the console window. If the script contains a print expression, that text will also be printed to the console window. If a part of the text in the console window is selected, only the text in the selected region will be executed.
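For example (any small script works; this one is purely illustrative):

total = sum(range(10))
print(total)        # hit alt+Enter: "45" is printed to the console window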
Python debugger

The kaa Python debugger is highly experimental. Run the kaadbg package in the Python interpreter to connect the target program to the kaa debugger. kaadbg is a Python package installed as a part of kaa. To use a Python interpreter other than the one kaa was installed with, you can install kaadbg separately:
$ sudo pip install -U kaadbg
Customization
Kaa executes a Python script file at ~/.kaa/__kaa__.py on startup. You can write Python script to customize kaa as you like.
Sample - Show line numbers
from kaa.filetype.default import defaultmode

defaultmode.DefaultMode.SHOW_LINENO = True
defaultmode.DefaultMode is the base class of all text file types. The line number is displayed if DefaultMode.SHOW_LINENO is True. If you want to show line numbers for particular file types, you can update the SHOW_LINENO attribute of each file type class.
# Show line number in HTML mode
from kaa.filetype.html import htmlmode

htmlmode.HTMLMode.SHOW_LINENO = True
Sample - Customize key binding
Assign same keyboard shortcut of splitting windows command as Emacs.
from kaa.keyboard import *
from kaa.filetype.default.defaultmode import DefaultMode

DefaultMode.KEY_BINDS.append({
    ((ctrl, 'x'), '2'): 'editor.splithorz'  # Assign C-x 2
})
In this example, key sequence C-x 2 (control+x followed by 2) is assigned to 'editor.splithorz' command.
Sample - Change color palette
Change color palette to light.
import kaa

kaa.app.DEFAULT_PALETTE = 'light'  # Use `light' palette. Default is `dark'
Hacking
You can get the recent source code from github.
$ git clone
To run the tests, you need to install py.test:
$ pip-3.3 install -U pytest
$ cd kaa
$ py.test
Acknowledgement
I really appreciate your help.
Links
Version history
0.21.0 - 2013.12.15
- Respect encoding declaration on loading/saving file in HTML/Python mode.
- Paste from OS clipboard didn't work on Mac.
Past versions
0.20.0 - 2013.12.13
- Save clipboard history to disk.
- Python debugger: Display status of target process.
0.19.0 - 2013.12.11
- Support system clipboard.
0.18.0 - 2013.12.10
- Optimizations. Kaa responds quicker than previous version.
- Error highlighting javascipt attribute in html mode was fixed.
- White space characters inserted by auto-indent are automatically removed if cursor moved to another position without entering a character.
- reStructuredText Mode: Non-ASCII punctuation was not recognized as a separator of inline mark-ups.
0.17.0 - 2013.12.06
- reStructuredText Mode: Recognize non-ASCII punctuation as a separator of inline mark-ups.
- Author: Atsuo Ishimoto
- License: MIT License
I'm using VC++ 2008
I've spent my day trying to learn about dll's and how to use them but there's a surprising lack of resources available on the internet.
I've been following a tutorial and it's given this code.
within my dll project i have a file called dllmain.cpp that says this:

Code:
//; }

I have another .cpp file with the same name as my project in which I copied this from the tutorial:

Code:
#include "jkhg.h"

// This is an example of an exported function.
int MyDLLFunc2(char *TEXT)
{
    MessageBox(NULL, TEXT, "", MB_OK);
    return true;
}

jkgh.h contains:

Code:
int MyDLLFunc2(void);

And I added a .def file that contains:

Code:
LIBRARY JKHG
DESCRIPTION This is my DLL file!
EXPORTS
    MyDLLFunc2 @1

The application that calls the dll contains this precompiled header:

Code:
#pragma once
#include <windows.h>

and the main cpp file contains this:

Code:
#include "stdafx.h"

typedef UINT (CALLBACK* importFunc1)(char *TEXT);

HINSTANCE hDLL;
importFunc1 MyFunc2;
UINT uReturnVal;

using namespace System;

int main(array<System::String ^> ^args)
{
    hDLL = LoadLibrary("c:\\jkhg.dll");
    if (hDLL != NULL)
    {
        MyFunc2 = (importFunc1)GetProcAddress(hDLL, "MyDLLFunc2");
        if (!MyFunc2)
        {
            //Show error message
            FreeLibrary(hDLL);
            return false;
        }
        else
        {
            uReturnVal = MyFunc2("HELLO WORLD!");
        }
    }
    return true;
}

I know this is a long post but I don't know which code is critical and which isn't.

And now to my problem. I successfully load the dll from c:\jkhg.dll but no matter what I do MyFunc2 always remains empty and this block of code in viI.cpp always executes:

Code:
if (!MyFunc2)
{
    //Show error message
    FreeLibrary(hDLL);
    return false;
}

I'm grateful for any amount of help or light-shedding.
sorry about the lousy names, I tend to bang on the keyboard during tutorial readings. | https://cboard.cprogramming.com/windows-programming/103201-explicit-dll-calls.html | CC-MAIN-2017-13 | refinedweb | 365 | 66.94 |
NFC on BlackBerry 10 - Reading and Writing Tags using native APIs
Introduction
This article is part of a series intended to help developers wishing to exploit NFC in their BlackBerry® 10 applications. Readers of this article should have some pre-existing knowledge of the fundamental architecture of NFC systems and be familiar with C++. It would be beneficial to have read at least the first parts of the corresponding article from the BlackBerry® 7 series of articles entitled: "Reading and Writing NFC Tags" since this covers tag concepts which are as applicable to BlackBerry 10 as they are to BlackBerry 7. The BlackBerry 7 article can be found here.
This article will explain how to use the BlackBerry 10 C++ native API to develop software that can read and write NFC tags.
This is a revised version of the article that addresses the updated BlackBerry Dev Alpha device running version 10.0.09 (AKA beta 3) of the BlackBerry 10 platform.
The Authors
This article was co-authored by Martin Woolley and John Murray both of whom work in the RIM Developer Relations team. Both Martin and John specialize in NFC applications development (amongst other things).
About NFC Tags
NFC tags are essentially contactless memory cards that store a message in a standard format. When a tag comes into proximity with an NFC reader, the content of the tag is transferred across the contactless interface to the reader from where it is usually dispatched to an appropriate application for processing. The standard format used for messages stored on NFC tags is called the NFC Data Exchange Format or “NDEF” for short.
The NFC Forum defines four types of tag, known as Type 1, Type 2, Type 3 and Type 4. The differences between the types include their maximum storage capacity and supported protocols. Types 1-3 only allow communication at layer 3 of the ISO14443 NFC protocol stack whereas a Type 4 tag supports layer 4 and ISO7816-4 APDUs can be used to communicate with such a tag.
Physically, tags come in a variety of shapes and sizes. Figure 1 shows three tags, one type 1 tag in the form of a wrist band, one type 1 tag in the form of a key fob and the third, a type 2 tag in the form of a card. A useful format is for the tag to be embedded in a self-adhesive paper sticker so that it can be easily attached to surfaces.
Figure 1 - Tags
NDEF messages stored on a tag contain one or more NDEF records. Various types of record are defined including a selection of commonly used types defined by the NFC Forum as “well known types”. Well known types include Text, URI and Smart Poster.
On BlackBerry 10, developers can inform the system that their application is interested in NFC tags containing records of one or more types and to receive and process tag related data when encountered by the BlackBerry device. Additionally, the native APIs allow applications to detect the proximity of a tag and write an NDEF message to it. Both these use cases are described in this article in the following sections.
NFC Tool - The Sample Application
Figure 2 - NFC Tool's home screen
To accompany this article, we've written an application called "NFC Tool" which demonstrates how to read and write tags. The application was written in C++ using the BlackBerry 10 NDK. We've used the Cascades™ framework for the user interface along with the powerful Qt framework for certain other aspects. "NFC Tool" contains functionality beyond tag reading and writing; those additional features are the subject of other articles in this series.
In the sections that follow, we’ll use code fragments from NFC Tool to illustrate exactly how to proceed in your own code. We’ve released the application code in full in our GitHub® repository, details of which can be found at the end of this article.
NFC Tool - Design and Implementation
Before we get into the NFC aspects of NFC Tool, let’s review the basic design and implementation approach taken for this application. Figure 3 gives an overview of the basic flow between pages. Note that not all pages are shown.
Figure 3 - NFC Tool user interface overview
For the purposes of this article, we'll look at the following functions that NFC Tool offers, each of which is selectable from the menu page:
- Read – Read an NFC tag
- Write URI – Write a tag with well-known type “U”
- Write Smart Poster – Write a tag with well-known type “Sp”
- Write Text – Write a tag with well-known type “T”
- Write Custom – Write a tag with “external” TNF and a custom type value
- About – Show the about page with information about NFC Tool
Common to functions 1-5 is an event log page that displays information about tag detection, reading or writing activity as it takes place. The purpose of this page is to provide visibility of the key events occurring during execution of your selected function so as to aid understanding of the process.
The basic UI design is simple. The home screen provides a list of menu items that you can select by touching. We chose a list oriented approach because we intend to add further menu items in the future and a scrollable list is easily extended in this way.
The implementation follows a fairly simple structure too. We used Cascades QML for all presentation layer aspects of the application. Control logic was implemented in C++ using Qt Signals and Slots. For NFC operations, we call BlackBerry 10 platform APIs and wrap the NFC operations in a class called NfcManager which acts like a facade for other classes to use. We also have a class called NfcWorker that allows us to execute NFC operations in their own thread. This is important because reading NFC event objects using the BlackBerry Platform Services library functions involves making a blocking call and we do not want this to interfere with our user experience.
Selecting an item from the menu causes a new page to slide in from the right. This behavior is a consequence of the Cascades component we used for the main menu page and I’ll come to this shortly. The great thing about Cascades is that “implicit animations”, such as this sliding behavior, are completely free of charge. We had to do no explicit coding to create this attractive UI effect.
Since the user interface was created using Cascades, we have QML documents for each page. The main page, main.qml uses the NavigationPane component. The NavigationPane component provides your UI with a stack oriented or drill down structure. Navigating to the next page involves pushing a page object onto a stack. The item at the top of the stack is displayed to the user, usually with an automatic, implicit animation, as described.
All pages apart from the main menu feature one or more buttons at the foot of each page. This too is a feature of NavigationPane. The back button is provided by default and requires no coding unless you want to do something non-standard. By default, the back button pops the current page from the top of the stack so that the user “goes back to” the previous page. You can add your own buttons too of course.
Figure 4 shows part of the main.qml page. Note that we load details of the items that the menu list contains from an XML file. The contents of this file are shown in Figure 5.
NavigationPane {
    id: nav
    objectName: "navPane"
    Page {
        id: menuListPage
        content: Container {
            background: Color.create("#262626")
            preferredWidth: 768
            layout: DockLayout {
            }
            Container {
                layout: DockLayout {
                    topPadding: 2
                    bottomPadding: 2
                }
                ListView {
                    id: menuList;
                    objectName: "list"
                    dataModel: XmlDataModel {
                        source: "models/menumodel.xml"
                    }
                    listItemComponents: [
                        ListItemComponent {
                            type: "menuitem"
                            MenuItem {
                            }
                        }
                    ]
                }
            }
        }
    }
    onTopChanged: {
        if (page == menuListPage) {
            // Clear selection when returning to the menu list page.
            menuList.clearSelection();
        }
    }
}
Figure 4 - QML for the main menu page
<root>
    <menuitem title="Read" image="asset:///images/read.png" file="reader.qml" itemName="item_read"/>
    <menuitem title="Write URI" image="asset:///images/uri.png" file="write_uri.qml" itemName="item_uri"/>
    <menuitem title="Write Smart Poster" image="asset:///images/sp.png" file="write_sp.qml" itemName="item_sp"/>
    <menuitem title="Write Text" image="asset:///images/text.png" file="write_text.qml" itemName="item_text"/>
    <menuitem title="Write Custom" image="asset:///images/custom.png" file="write_custom.qml" itemName="item_custom"/>
    <menuitem title="Send vCard (SNEP)" image="asset:///images/snep.png" file="snep_vcard.qml" itemName="item_snep_vcard"/>
    <menuitem title="Emulate Tag" image="asset:///images/tag.png" file="emulate_sp.qml" itemName="item_emulate_tag"/>
    <menuitem title="ISO7816 APDU" image="asset:///images/iso7816.png" file="" itemName="item_iso7816"/>
    <menuitem title="About" image="asset:///images/about.png" file="about.qml" itemName="item_about"/>
</root>
Figure 5 - Main menu XML data
Data from the XML file is bound to the UI component with which it is associated and individual attributes can be referenced using the ListItemData alias and the appropriate attribute name from the XML. Figure 6 shows part of the MenuItem.qml file that specifies the markup for an individual menu item in the list.
ImageView {
    // The image is bound to the data in models/recipemodel.xml image attribute.
    imageSource: ListItemData.image
    leftMargin: 3
}
Label {
    // The title is bound to the data in models/recipemodel.xml title attribute.
    text: ListItemData.title
    leftMargin: 30
    textStyle {
        base: SystemDefaults.TextStyles.TitleText
        color: Color.Black
    }
    layoutProperties: StackLayoutProperties {
        verticalAlignment: VerticalAlignment.Center
    }
}
Figure 6 - Referencing XML attributes in QML
We implemented the greater majority of our control logic in C++ and made good use of the excellent Qt “Signals and Slots” capability. Each QML page has a corresponding C++ object that implements required slots and takes care of one or two other issues. In Figure 7 you can see how we “wired up” some of our signals and slots in the MainMenu.cpp class. The essence of the connect statement is that it means, “if this object emits this SIGNAL, then execute the following SLOT function owned by this object”.
Figure 7 sets up the SIGNAL and SLOT connections that relate to the selection of specific menu items and the resultant calling of the related object’s show() function. Figure 8 shows how to make use of the selectionChanged signal which our ListView component in the main.qml page will emit whenever the user selects a new item.
Figure 9 shows how we emit the appropriate signal from within some conditional logic in onListSelectionChanged in MainMenu.cpp. The signals emitted here correspond to the signal/slot connections shown in Figure 7. Between them, Figures 7, 8 and 9 show how we navigate to the next page from the main menu.
QObject::connect(this, SIGNAL(read_selected()), _eventLog, SLOT(show()));
QObject::connect(this, SIGNAL(write_uri()), _writeURI, SLOT(show()));
QObject::connect(this, SIGNAL(write_sp()), _writeSp, SLOT(show()));
QObject::connect(this, SIGNAL(write_text()), _writeText, SLOT(show()));
QObject::connect(this, SIGNAL(write_custom()), _writeCustom, SLOT(show()));
QObject::connect(this, SIGNAL(send_vcard_selected()), _sendVcard, SLOT(show()));
QObject::connect(this, SIGNAL(emulate_tag_selected()), _emulateSp, SLOT(show()));
QObject::connect(this, SIGNAL(iso7816_selected()), _apduDetails, SLOT(show()));
QObject::connect(this, SIGNAL(about_selected()), _about, SLOT(show()));
Figure 7 – Main Menu Signals and Slots
QObject::connect(listView, SIGNAL(selectionChanged(const QVariantList, bool)), this, SLOT(onListSelectionChanged(const QVariantList, bool)));
Figure 8 - Connecting to the ListView selectionChanged signal
// only part of this function is shown here
void MainMenu::onListSelectionChanged(const QVariantList indexPath, bool selected) {
    if (selected) {
        // We use the sender to get the list view for accessing the data model and then the actual data.
        if (sender()) {
            ListView* menuList = dynamic_cast<ListView*>(sender());
            DataModel* menuModel = menuList->dataModel();
            QVariantMap map = menuModel->data(indexPath).toMap();
            if (map["itemName"].canConvert(QVariant::String)) {
                QString item = map["itemName"].toString();
                qDebug() << "selected item name=" << item;
                if (item.compare("item_read") == 0) {
                    qDebug() << "Read Tag was selected!";
                    startListening();
                    _eventLog->setMessage("Hello");
                    emit read_selected();
                } else if (item.compare("item_uri") == 0) {
                    qDebug() << "Write URI was selected!";
                    emit write_uri();
                } else if ..........................
Figure 9 - Handling list selection changes
As mentioned, class NfcManager presents a simple, facade-like interface, which other classes can use to initiate the various NFC operations. Figure 10 shows the public functions as defined in the header file. As you may have guessed from the presence of a static function called getInstance(), NfcManager is a singleton, making it easy to locate and use from anywhere in the application.
public:
    static NfcManager* getInstance();
    void startEventProcessing();
    void stopNdefListener();
    void writeUri(QString* uri);
    void writeSp(QString* sp_uri, QString* sp_text);
    void writeText(QString* text);
    void writeCustom(QString* domain, QString* type, QString* payload);
    void stopNfcWorker();
Figure 10 - NfcManager public functions
NfcWorker makes use of QtConcurrent::run to run code in a background thread. Java® developers should find this pattern familiar as it is not dissimilar to the Java Runnable interface. It has some nice additional features however; including the ability to associate a QFutureWatcher with the thread. QFutureWatcher monitors thread execution and through signals and slots can report progression through the thread’s various states to whatever functions you choose to connect to the signals it can emit. Figure 11 shows how we used this in one part of NfcWorker.
void NfcManager::startEventProcessing() {
    _future = new QFuture<void>;
    _watcher = new QFutureWatcher<void>;
    _workerInstance = NfcWorker::getInstance();

    *_future = QtConcurrent::run(_workerInstance, &NfcWorker::startEventLoop);
    _watcher->setFuture(*_future);

    QObject::connect(_watcher, SIGNAL(finished()), this, SLOT(workerStopped()));
    QObject::connect(_workerInstance, SIGNAL(message(QVariant)), this,
            SLOT(message(QVariant)), Qt::QueuedConnection);
    QObject::connect(_workerInstance, SIGNAL(clearMessages()), this,
            SLOT(clearMessages()), Qt::QueuedConnection);
}
Figure 11 - QtConcurrent and QFutureWatcher in NfcManager
The BlackBerry 10 Invocation Framework
In BlackBerry 10 OS, the Invocation Framework (IF), which was introduced in the 10.0.06 Dev Alpha release, provides the ability for the user to perform an action on an item identified as content. This framework enables the client (applications, service or viewers) to send a message to a target (applications, service or viewers) to perform a particular action. The framework also offers the capability to discover what targets are available on the device.
Many of the capabilities of NFC on the device have been integrated with the Invocation Framework in order to allow the developer to focus on the business logic of his application rather than having to be concerned with lower level aspects of how NFC works.
In particular, the reading of NFC Tags has been integrated with the Invocation Framework and in what follows we’ll show how to read NFC Tags using this framework. The diagram at the end of this section illustrates the basic relationship between the Invocation Framework and tag reading. "iF" is short hand for "Invocation Framework" by the way.
By integrating NFC with the Invocation Framework it means that simpler integration of NFC capabilities with the higher level APIs in Qt and QML become possible.
It is not the intent of this article to explore these aspects, which will be the subject of future articles.
Reading NFC Tags
Introduction
BlackBerry 10 allows developers to register an interest with the Invocation Framework in particular types of NFC NDEF data. This is achieved through an entry in the bar-descriptor.xml file as will be presented below. Reading an NFC tag then involves the Invocation Framework delivering an InvokeRequest object to the application and other APIs then being used to extract the NDEF messages and records that the InvokeRequest object contains.
A recipe for reading NFC tags from the BlackBerry 10 NDK
Reading a tag is accomplished in four steps, which we present below as a kind of standard “recipe”.
Figure 12 - Recipe for reading NFC tags
We’ll now proceed to examine each of the four steps at a code level.
Step 1 – Register for NDEF message types through Invocation Framework
This is achieved through adding a stanza in the bar-descriptor.xml file associated with you application. The information in this stanza is parsed and incorporated into your application’s BAR file as meta-data when you package and sign it. When your application is installed on the device this information is used to register your application’s interest in being notified when a number of NFC NDEF events occur.
<invoke-target>
    <type>APPLICATION</type>
    <filter>
        <action>bb.action.OPEN</action>
        <mime-type>application/vnd.rim.nfc.ndef</mime-type>
        <property var="uris" value="ndef://1/Sp,ndef://1/T,ndef://1/U"/>
    </filter>
</invoke-target>
Figure 13 - Registering with IF
Let’s look at this stanza in a bit more detail. The “<invoke-target ..>” tag identifies our application as being the target of one or more invocation framework events.
The next point to notice is that there is a “<filter>” element. This defines the type of invocation framework event and MIME type in which we’re interested as well as some additional URI based filters.
The MIME type “application/vnd.rim.nfc.ndef” is a RIM custom MIME type that identifies as type of NFC NDEF tag. The <property> element goes on to specify URI filters and this is how we indicate which particular NDEF types we're interested in receiving into our application. In our example we have included a list of three URI values, separated by commas. Sp means Smart Poster, T means Text and U means URI. You must use the syntax exactly as shown here.
The “<action>” tag has the value of “bb.action.OPEN”. This means that when the invocation framework is presented with an object of the MIME types we’ve registered for, our application will be asked to “OPEN” the associated data. That is, it will be presented with the contents of the NDEF tag that has just been detected and read by the NFC stack in the handset. Our application is then responsible for parsing the content and doing something with it; in our case we simply display it on the screen.
Step 2 – Create a bb::system::InvokeManager object
bb::system::InvokeManager * _invokeManager;
......
_invokeManager = new bb::system::InvokeManager();
Figure 14 - Creating an InvokeManager object
The APIs include the InvokeManager class. This is a useful class for working with the invocation framework, and we use it in NFC Tool to make reading (in fact *receiving*) NFC tag data really easy. So one of our first steps, in the MainMenu.cpp class, is to create an instance of this class.
Step 3 – Connect InvokeManager invoked signal to your slot
QObject::connect(_invokeManager, SIGNAL(invoked(const bb::system::InvokeRequest&)), this, SLOT(receivedInvokeRequest(const bb::system::InvokeRequest&)));
Figure 15 - Connecting InvokeManager signal/slot
InvokeManager uses Qt signals and slots. In NFC Tool we connect the "invoked" signal to a method called receivedInvokeRequest in the MainMenu class, which we have designated as a slot. As you can see, it takes a parameter of type InvokeRequest.
Step 4 - Process NDEF messages in InvokeRequest objects
Receive an InvokeRequest object from the InvokeManager via a call to our slot method;
Extract the request data from the InvokeRequest object;
Interpret the request data as an NDEF message;
for each NDEF record in this NDEF message {
    extract NDEF record attributes according to the record type;
}
Figure 16 - Tag reading event loop in pseudo code
The final step involves extracting the NDEF data from the InvokeRequest object we received from the invocation framework. Figure 16 expresses the basic algorithm in pseudo code. We’ll take a look at an example C++ implementation next.
void MainMenu::receivedInvokeRequest(const bb::system::InvokeRequest& request) {
    QByteArray data = request.data();
    if (request.mimeType().compare("application/vnd.rim.nfc.ndef") == 0) {
        emit launchEventLog();
        _nfcManager = NfcManager::getInstance();
        _nfcManager->handleTagReadInvocation(data);
    }
    ........
}
Figure 17 – Slot method which receives InvokeRequest from the framework
Figure 17 shows the relevant parts of our receivedInvokeRequest method. This method is the slot connected to the InvokeManager's "invoked" signal, so whenever the invocation framework has data we're interested in, it calls this method and passes the data as an InvokeRequest object. Our job is to extract the contents of the InvokeRequest object and transform it into NDEF data from the tag that was read. As you can see from the code fragment, we proceed by extracting the request's data payload, checking the mime type and assuming it's what we expect it to be, we then call another method to handle the transformation of our payload byte array into an NDEF message containing one or more NDEF records. Ultimately, this takes place in the NfcWorker class. Let's take a look at the primary aspects of this.
void NfcWorker::handleTagReadInvocation(const QByteArray data) {
    nfc_ndef_message_t *ndefMessage;
    CHECK(nfc_create_ndef_message_from_bytes(
            reinterpret_cast<const uchar_t *>(data.data()), data.length(), &ndefMessage));
    [...]
    // For each NDEF record in the message, extract its attributes according
    // to its type. For a Text ("T") record, that means the language code and
    // the text payload (this middle section was lost in extraction).
    [...]
    emit message(QString("Language: %1").arg(language));
    emit message(QString("Text: %1").arg(text));
    [...]
}
Figure 18 – Processing NFC NDEF data from an InvokeRequest object
Figure 18 shows the steps involved in processing the NFC tag data which was passed to us inside an InvokeRequest object from the invocation framework. Per the pseudo code description in Figure 16, we work our way from a byte array which we extracted from the InvokeRequest object that the invocation framework sent us, then through the NDEF messages, each of which contains one or more NDEF records, and for each record we extract the record's attributes according to its type. Your code will differ according to whatever it is you're doing, but hopefully this will get you started. Take a look at the nfc/nfc.h, nfc/nfc_ndef.h and nfc/nfc_types.h header files for the full list of NFC tag related functions in the API.
Writing NFC Tags
Now that we understand how to read an NDEF tag let’s consider how to write a tag. There are four common tag types that are useful to learn how to write since they demonstrate all of the NFC C++ APIs that you will need to construct others. They are:
- A URI Tag – this consists of a single URI such as
- A Text Tag—this consists of a single string of readable text such as “Hello, NFC!”
- A Smart Poster Tag – this consists of a URI, like in a URI tag, and one or more text annotations describing the URI that can be in different languages
- A Custom Tag – this consists of a unique domain (usually a DNS domain like “my.domain.com” ) that identifies the tag as being specific to this organization and a type ( like “myownrecord” ). Together these identify a unique namespace to prevent clashes with tags from other organizations. Lastly, an arbitrary payload completely determined by the use you wish to make of the tag.
The process for writing a tag can be described by the following 4 step recipe:
Recipe for Writing a Tag
Figure 19 - Recipe for writing NFC tags
Step 1 – Initialize the BlackBerry Platform Services (BPS) library
When writing a tag, we use the BlackBerry Platform Services (BPS) APIs. Our first job therefore is to initialize BPS. Initializing the BPS library requires a single function call only as you can see from Figure 20. Note that you must remember to include the bps/bps.h header file in your class. This function must be the first BPS function you call for a thread when you want to use BPS library functions.
Step 2 – Request NFC events from the BPS
#include <bps/bps.h>
#include <bps/nfc_bps.h>
......
rc = bps_initialize();
rc = nfc_request_events();
Figure 20 - Requesting NFC events
Similarly, requesting the delivery of NFC events via BPS is also achieved with a single function call. Note that you need to include bps/nfc_bps.h for this function to be available.
The last step is similar to that for reading an NDEF tag. The only difference is that we need to detect and process a BPS NFC event of type NFC_TAG_READWRITE_EVENT, which represents the presentation of a tag to the handset.
The delivery of this event uses the same BPS framework used to deliver Invocation Framework events and the processing is almost identical. This has already been explained earlier in this article so let’s concentrate on the item that is different: Step 3.
In preparing to read a tag, we registered with the Invocation Framework for specific NDEF Message types such as URI, Text and Smart Poster. In preparing to write a tag, we're only interested in being notified when a suitable target is presented to the device, since we're going to write new NDEF data to it.
You need to make a decision here. There are three types of NFC Target detection events that we can register for:
- ISO_14443_3 – in this case, you will receive notification when a target is presented to the device that uses the ISO 14443-3 protocol and you wish to interact with the target using the protocol that's defined at this level.
- ISO_14443_4 – in this case, you receive notification when a target is presented to the device that uses the ISO 14443-4 protocol and you wish to interact with the target using the protocol that's defined at this level. This would typically be by constructing APDUs, sending them to the tag, and receiving APDUs in response (a sketch of what a raw APDU looks like follows after this list).
- NDEF_TAG – in this case, you receive notification when a target is presented to the device that has been formatted for use as an NDEF tag and you wish to interact with the target using NDEF messages.
In the case of this application, we’re interested only in NDEF formatted tags so the simplest approach is to select NDEF_TAG as the target detection event in which we’re interested. That’s not to say that you couldn’t use, say, ISO_1444_4. If you did you would have to be prepared to test each detection event yourself to determine whether there was an NDEF structure on the tag that could be written to. It’s a case of whether you want the NFC framework to check this for you or whether you want to do it yourself. In this case, we use NDEF_TAG and let the framework do the work for us.
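To make the ISO 14443-4 option concrete, "constructing APDUs" means assembling raw ISO 7816-4 command bytes yourself. A minimal, purely illustrative sketch (the AID value below is made up for the example):

// CLA   INS   P1    P2    Lc    followed by Lc bytes of data (here, an AID)
uchar_t selectApdu[] = { 0x00, 0xA4, 0x04, 0x00, 0x07,
                         0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01 };
// 0xA4 is the ISO 7816-4 SELECT instruction; the 7 data bytes name the
// application (AID) on the tag that the reader wants to talk to.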
Right, let’s look at the four cases of URI, Text, Smart Poster and Custom Tag writing in turn. Since the process is very similar in each case, I’ll spend more time in the first example and then highlight any significant differences in the other cases.
Writing a URI Tag
Here’s what’s presented to the user when he wishes to write a URI tag using NFC Tool.
Figure 20 - NFC Tool : Writing a URI tag
Preparation for Writing a URI Tag
When the user identifies that he wishes to write a URI NDEF Message to a tag when it’s presented to the device the function: prepareToWriteNdefUriTag() of the NfcWorker class is called in response to the “Write” button in the UI to prepare to write the URI when a tag is presented at a later time.
void NfcWorker::prepareToWriteNdefUriTag(QString uri) {
    [...]
    emit message(QString("Preparing to write URI: %1").arg(uri));
    _ndefUri = uri;
    CHECK(nfc_register_tag_readerwriter(NDEF_TAG));
}
Figure 21 - Preparing to write a URI tag
Some code has been removed from the actual example to highlight the three main points.
- Firstly, a signal called message() is emitted that explains what is happening. In fact this SIGNAL will be connected to a SLOT in the event log class to show log this event on the screen;
- Secondly, the actual uri to be written is saved to be used later when a tag is presented;
- And finally nfc_register_tag_readerwriter() is used to register for NFC events. The CHECK() macro is just a convenience to be able to handle the return code from the NFC API calls.
Handling the Target Detected Event for Writing a URI Tag
Once the application has been registered to receive NDEF Target detection events these events will start to be presented in the main listen handler of the application in exactly the same way as already described in the section on reading an NDEF tag.
These are handled in the function called handleNfcWriteUriTagEvent(), the highlights of which are shown below.
void NfcWorker::handleNfcWriteUriTagEvent(bps_event_t *event) {
    [...]
    nfc_event_t *nfcEvent;
    nfc_target_t* target;
    nfc_ndef_record_t* myNdefRecord;
    nfc_ndef_message_t* myNdefMessage;

    if (NFC_TAG_READWRITE_EVENT == bps_event_get_code(event)) {
        rc = nfc_get_nfc_event(event, &nfcEvent);
        rc = nfc_get_target(nfcEvent, &target);

        myNdefRecord = makeUriRecord(Settings::NfcRtdUriPrefixNone, _ndefUri);
        CHECK(nfc_create_ndef_message(&myNdefMessage));
        CHECK(nfc_add_ndef_record(myNdefMessage, myNdefRecord));
        CHECK(nfc_write_ndef_message_to_tag(target, myNdefMessage, false));
        CHECK(nfc_delete_ndef_message(myNdefMessage, true));

        emit message(QString("Tag Type Written URI: %1").arg(_ndefUri));
    }
    [...]
}
Figure 22 - Handling the target detected event
The main points to note are:
- As a defensive measure, the event type that is being handled is check to ensure it’s of the correct type, namely: NFC_TAG_READWRITE_EVENT. We’re assured that an NDEF target has been presented that can be written to.
- An NDEF Record is constructed using a function called makeUriRecord() which is shown below.
- A pointer to an empty NDEF Message is obtained from the framework into which the URI NDEF Record is inserted.
- The NDEF Message is written to the tag and then deleted.
nfc_ndef_record_t* NfcWorker::makeUriRecord(uchar_t prefix, QString uri) {
    nfc_ndef_record_t* record = 0;
    int len = uri.length();
    uchar_t payload[len + 1];

    payload[0] = prefix;
    memcpy(&payload[1], uri.toUtf8().constData(), len);

    CHECK(nfc_create_ndef_record(NDEF_TNF_WELL_KNOWN, "U", payload, len + 1, 0, &record));
    return record;
}
Figure 23 - Creating the URI type NDEF record
The makeUriRecord() function uses a standard NFC API function to create an NDEF Record of TNF “Well Known” type and value “U”, and constructs a payload of two parts:
- The URI that will be used
- And a prefix byte that allows common prefixes such as "http://www." to be encoded efficiently in the scarce space on a small tag; the standard prefix codes are sketched below.
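The prefix codes are defined by the NFC Forum URI RTD specification; the first few are listed here for reference (an illustrative comment block, not taken from the application's source):

// NFC Forum RTD URI abbreviation codes (first few only):
//   0x00  no abbreviation - the URI is stored in full
//   0x01  "http://www."
//   0x02  "https://www."
//   0x03  "http://"
//   0x04  "https://"
// Settings::NfcRtdUriPrefixNone presumably maps to 0x00, as its name suggests.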
Successful writing of the URI tag is communicated back to the user in the event log screen.
Figure 24 - Event log during URI writing process
Writing a Text Tag
Writing a Text NDEF message to an NDEF tag is almost identical to writing a URI NDEF Message. The only significant difference is that we use a different function to build the NDEF Text record. The makeTextRecord() function uses a standard NFC API function to create an NDEF Record of TNF “Well Known” type and value “T”, and constructs a payload of two parts:
- The Text value that will be used
- And a language field consisting of a status byte that encodes both the length of the language code ( e.g. “en” for “English” would be length 2 ) and an indication of the encoding of the Text value. In this case it’s UTF-8.
nfc_ndef_record_t* NfcWorker::makeTextRecord(QString language, QString text) {
    [...]
    nfc_ndef_record_t* record = 0;
    int textLen = text.length();
    int languageLen = language.length();
    int totalLen = textLen + languageLen + 1;
    uchar_t payload[totalLen];
    int offset = 0;

    // set status byte. Since text is UTF-8 and RFU must be 0, bits 7 and 6
    // are 0 and therefore the entire status byte value is the language code length
    payload[offset] = languageLen; // including encoding indication for UTF-8
    offset += 1;
    memcpy(&payload[offset], language.toUtf8().constData(), languageLen);
    offset += languageLen;
    memcpy(&payload[offset], text.toUtf8().constData(), textLen);

    CHECK(nfc_create_ndef_record(NDEF_TNF_WELL_KNOWN, "T", payload, totalLen, 0, &record));
    return record;
}
Figure 25 - creating a Text type NDEF record
Writing a Smart Poster Tag
Writing a Smart Poster NDEF Message to a tag may appear to be a more complex task than writing a URI or a Text tag since it will contain multiple NDEF records. However, the NFC NDEF API makes this a much simpler task by providing a number of functions that allow you to construct a Smart Poster tag without needing to know about the detailed structure of the tag layout itself. In fact we only need to look at the differences in the handleNfcWriteSpTagEvent() method in the sample application to understand how to do it.
The code fragment below shows the main aspects of how to build the Smart Poster Tag. In essence:
- Create an empty NDEF Record of “Well Known” type with value “Sp” for a Smart Poster
- Use nfc_set_sp_uri() to set the value of the URI associated with the Smart Poster NDEF Message into the empty NDEF Record we’ve just created.
- Use nfc_add_sp_title() to add the text to be associated with the URI in the language “en” for English. Notice that you can add additional text records each for a different language to describe the single URI record.
- Create an empty NDEF Message using nfc_create_ndef_message() and add the NDEF record we've been populating with URI and Text information to that NDEF Message.
- Then just write the NDEF Message to the tag as before.
That’s all there is to it! Easy!
void NfcWorker::handleNfcWriteSpTagEvent(bps_event_t *event) {
    [...]
    uint16_t code = bps_event_get_code(event);
    nfc_target_t* target;
    nfc_ndef_record_t* spNdefRecord;
    nfc_ndef_message_t* myNdefMessage;

    if (NFC_TAG_READWRITE_EVENT == code) {
        CHECK(nfc_get_target(event, &target));
        spNdefRecord = makeSpRecord();
        CHECK(nfc_create_ndef_message(&myNdefMessage));
        CHECK(nfc_set_sp_uri(spNdefRecord, _ndefSpUri.toUtf8().constData()));
        CHECK(nfc_add_sp_title(spNdefRecord, "en", _ndefSpText.toUtf8().constData(), false));
        CHECK(nfc_add_ndef_record(myNdefMessage, spNdefRecord));
        CHECK(nfc_write_ndef_message_to_tag(target, myNdefMessage, false));
        CHECK(nfc_delete_ndef_message(myNdefMessage, true));
        emit message(QString("Tag Type Sp Written: %1 %2").arg(_ndefSpUri).arg(_ndefSpText));
    }
    [...]
}

nfc_ndef_record_t* NfcWorker::makeSpRecord() {
    nfc_ndef_record_t* record = 0;
    uchar_t payload[0];
    CHECK(nfc_create_ndef_record(NDEF_TNF_WELL_KNOWN, "Sp", payload, 0, 0, &record));
    return record;
}
Figure 26 - Creating a Smart Poster NDEF record
Writing a Custom Tag
A Custom tag comprises three sets of data encoded into an NDEF message as described previously. The sample application presents the following information to the user when he wants to write such a message to a tag.
Figure 27 - Writing a custom tag
The process is very similar to the process for writing Text and URI tags since there aren’t any functions that allow you to build one simply like in the case of the Smart Poster Tag. In fact the only difference is in the function makeCustomRecord() used to build the NDEF Record from the Domain, Type and Text attributes.
nfc_ndef_record_t* NfcWorker::makeCustomRecord(QString domain, QString type, QString text) {
    [...]
    nfc_ndef_record_t* record = 0;
    int textLen = text.length();
    QString domain_plus_type = domain.append(":");
    domain_plus_type = domain_plus_type.append(type);
    int totalLen = textLen;
    uchar_t payload[totalLen];
    int offset = 0;

    memcpy(&payload[offset], text.toUtf8().constData(), textLen);

    CHECK(nfc_create_ndef_record(NDEF_TNF_EXTERNAL, domain_plus_type.toUtf8().constData(),
            payload, totalLen, 0, &record));
    return record;
}
Figure 28 - Creating a custom NDEF record
The key points to note are:
- A Custom NDEF Record uses a TNF Type of “External” rather than the type “Well Known” as in the case of the URI, Text and Smart Poster tags.
- The actual value of the record type associated with the TNF "External" is a concatenation of the domain value and the private type using a colon as the join character. So, if my domain was: "foo.com" and my private type was: "splat", then the record type would be: "foo.com:splat".
- The payload of the NDEF record just contains the content that the end user specified.
Summary
We hope that this article helps you exploit the BlackBerry 10 NFC APIs for tag reading and writing, and that our exploration of the more general design and implementation aspects of our sample application "NFC Tool" is useful in helping you get started with BlackBerry 10 application development using C++, Qt and Cascades.
You can download NFC Tool, including its full source code from:
NFC Tool was written for the BlackBerry 10 "Dev Alpha" device and requires the following versions of the NDK and device software to build and run:
- BlackBerry 10® Native SDK 10.0.9
- BlackBerry® Dev Alpha Device Software 10.0.9
You can find details of other NFC related articles and sample applications written by Martin and John at:
NFC Article and Code Index
You can contact Martin or John either through the BlackBerry support forums or through Twitter®: | https://supportforums.blackberry.com/t5/tkb/articleprintpage/tkb-id/Cascades@tkb/article-id/49 | CC-MAIN-2017-13 | refinedweb | 5,925 | 51.68 |
Python has a reasonably good standard library module for handling dates and times, but it can be a little confusing to a beginner, probably because the first code they encounter will look something like the below with very little explanation.
import datetime

print("Running on %s" % (datetime.date.today()))
myDate = datetime.datetime(2018, 6, 18, 16, 13, 0)
Why is it datetime.datetime? It is a simple explanation but one I’ve rarely seen included.
All of Python's classes for handling dates and times are in the module called datetime (naturally enough). This module contains a class for dates with no time element (datetime.date), a class for times (datetime.time) and a class for when you need both, called unsurprisingly (but a little unfortunately) datetime.datetime, hence the code above.
It also contains 2 more classes: datetime.timedelta, which is the interval between two dates / datetimes (the result of subtracting one datetime from another), and tzinfo, short for time zone info, which is used to handle timezones in the time and datetime classes.
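For example, subtracting two datetimes yields a timedelta:

import datetime

d1 = datetime.datetime(2018, 6, 18, 16, 13, 0)
d2 = datetime.datetime(2018, 6, 20, 10, 0, 0)
delta = d2 - d1                    # a datetime.timedelta
print(delta)                       # 1 day, 17:47:00
print(delta.days, delta.seconds)   # 1 64020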
To add to the confusion, if you want to get the date / time / datetime as of now, there is no standard across the three; datetime uses the now() method, date uses the today() method and time does not have one! You have to use datetime and get the time part as below.
import datetime

# Get the date and time as of now as a datetime
print(datetime.datetime.now())

# Get the date as of now (today)
print(datetime.date.today())

# Get the time as of now - have to use datetime!
print(datetime.datetime.now().time())
The confusion does not end there. If you want to format the date / time / datetime in a particular way you can use the strftime() method, probably short for "string format time". The same method exists in all classes. Why it is called time and not date or something more generic is beyond me; datetime.date.strftime() makes little sense.
If you are reading in strings and need them parsed into a date / time / datetime there is strptime() method – probably short for string parse time – but this only exists in the datetime class. So you have to use a similar trick as above and create a datetime and extract just the date or time part.
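For example:

import datetime

d = datetime.datetime.strptime("2018-06-18 16:13", "%Y-%m-%d %H:%M")
print(d.date())                # 2018-06-18 - just the date part
print(d.time())                # 16:13:00 - just the time part
print(d.strftime("%d %B %Y"))  # 18 June 2018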
Once you get past the quirks above, you should find the datetime module straightforward to use. However, if you do find yourself needing a library with more power, try the dateutil library. It can be installed with the usual pip install python-dateutil command.
Qt Quick Components provides a set of QML components for building user interfaces. The components allow the interface to accept user input and provide feedback indicators.
This documentation covers Qt Quick Components for Symbian. The Qt Quick Components for MeeGo 1.2 Harmattan documentation covers MeeGo 1.2 Harmattan-specific Qt Quick Components.
Regarding Qt Quick and Qt Quick Components, note the following:
Qt Quick Components consists of three component sets: one for the Symbian platform, one for the MeeGo 1.2 Harmattan platform, and the Extras set that is common for Symbian and MeeGo 1.2 Harmattan platforms. Each set has a different import statement as follows:
import com.nokia.symbian 1.1 // Symbian components import com.nokia.meego 1.1 // MeeGo 1.2 Harmattan components import com.nokia.extras 1.1 // Extras
Do try out the example applications that are supplied with full source code. Each of example has a walkthrough describing how the application is constructed and how it functions. See the Examples and demos page for details. | http://doc.qt.digia.com/qtquick-components-symbian-1.1/ | CC-MAIN-2015-06 | refinedweb | 171 | 53.07 |
I am trying to read a text file from the hard disk of my computer, and define every line of it as an element of a string array. I can read the file and separate the lines, I just do not know how to put them inside an array. I would be very thankful for your help!
I would personally use a List<string> to add the lines to (because we don't know how many lines there will be), and then use the Linq ToArray() method to convert to an array of strings.
(Alternatively, you could keep using the List<string> instead using the array, but it depends on what you're doing with it).
using System.Linq;

List<string> myList = new List<string>();
...
// after getting line from file
...
myList.Add(lineFromFile);
...
// Once whole file read in to list, convert to array.
string[] lines = myList.ToArray();
Herbie
thanks a lot for answering. it's a very great idea. Liked it.
I myself tried the following code to do the job, but your code is far simpler and better:
using System;
using System.IO;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            StreamReader myreader = new StreamReader(@"D:\ekhtera\textFile.txt");
            string[] array = new string[arrayDimension(myreader)];
            StreamReader reader = new StreamReader(@"D:\ekhtera\textFile.txt");
            string line = "";
            int index = 0;
            while (line != null)
            {
                line = reader.ReadLine();
                if (line != null)
                    array[index] = line;
                index++;
            }
            Console.WriteLine(array[4]);
            reader.Close();
            Console.ReadKey();
        }

        private static int arrayDimension(StreamReader newStream)
        {
            string line = "";
            int i = 0;
            while (line != null)
            {
                line = newStream.ReadLine();
                if (line != null)
                    i++;
            }
            newStream.Close();
            return i;
        }
    }
}
@rezaElc87: It's definitely worth learning about the collection classes in .NET :
Herbie
In addition, .Net actually has a method for this, File.ReadAllLines:
string[] lines = System.IO.File.ReadAllLines("filename.txt");
Just be careful with this if the file can be very large, because the whole array must fit in memory. If you need to process a large file line by line it's better to use StreamReader or File.ReadLines instead without storing the entire file contents in memory.
@Sven Groot: That is the ultimate answer. I was actually looking for an answer as simple as this one.
thanks a lot
best regards!
Thread Closed
This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums,
or Contact Us and let us know. | http://channel9.msdn.com/Forums/Coffeehouse/StreamReader-class | CC-MAIN-2014-15 | refinedweb | 407 | 68.67 |
Everyone, At the risk of igniting another long-drawn-out flamewar over themin()/max() macros, I have an idea. There is a section in the config dialogs labeled "Kernel hacking."Under it there is the SysRQ option. Why don't we put an entry under thatdialog and label it "Use new min()/max() macros" and make it a y/n field.Then we can add dozens of warnings to the help dialog about it, and allowthe user/hacker to select the macro they want. In any code which uses the macros, you can simply do this:#include <linux/config.h>....#ifdef CONFIG_USE_NEW_MINMAX minimum = min(int, number[0], number[1]);#else minimum = min(number[0], number[1]);#endif This way, some hackers can use the two-arg min()/max() inside an #ifdef block,other hackers can use the three-arg min()/max() inside an #ifdef block, and people who don't care can select either. Comments, flames, suggestions, anyone? If the output is good, I'llpublish a patch which will add the Config.in option and default it toCONFIG_USE_NEW_MINMAX=y, since that was the decree of the Great Penguin Overlord ;)Brad=====Brad ChapmanPermanent e-mail: kakadu_croc@yahoo.comCurrent e-mail: kakadu@adelphia.netAlternate e-mail: kakadu@netscape.net__________________________________________________Do You Yahoo!?Make international calls for as low as $.04/minute with Yahoo! Messenger unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2001/8/24/123 | CC-MAIN-2017-47 | refinedweb | 248 | 59.4 |
All of dW
-----------------
AIX and UNIX
Information Mgmt
Lotus
Rational
Tivoli
WebSphere
-----------------
Java technology
Linux
Open source
SOA & Web services
Web development
XML
-----------------
dW forums
-----------------
alphaWorks
-----------------
All of IBM
Generate code with Eclipse's Java Emitter Templates
Document options requiring JavaScript are not displayed
Connect to your technical community
Help us improve this content
Level: Intermediate
Adrian Powell, Senior Software Developer, IBM
27 Apr 2004
Eclipse's Java™ Emitter Templates (JET) is an open source tool for generating code within the Eclipse Modeling Framework (EMF). JET is similar to Java Server Pages, but is powerful and flexible enough to generate Java, SQL, and any other languages, including JSPs. This article covers how to create and configure JET and deploy it in a variety of environments.
Overview of Java Emitter Templates (JET)
Developers commonly use tools which generate repetitive code. Eclipse users are familiar with standard tools to generate for(;;) loops, main() methods, and accessor methods for selected attributes. Automating these simple, mechanical tasks speeds up development and makes our lives easier. In some cases, such as generating deploy code for J2EE servers, the code generation may save us time and it may hide implementation-specific complexity which is what makes it possible to deploy to different J2EE servers. Code generation isn't just for large tool vendors, but can be used effectively within many projects. Eclipse's Java Emitter Templates (JET), which is packaged as a part of the Eclipse Modeling Framework (EMF), is a simple and functional way to add code generation to your project. In this article, we will explore how JET can be used in a variety of environments.
for(;;)
main()
What is JET?
Java Emitter Templates are very similar to Java Server Pages (JSPs). Both JETs and JSPs use the same syntax, and are compiled to Java behind the scenes. Both are used to separate the responsibility for rendering pages from the model and controller. Both accept objects passed into them as an input argument, both allow inserting string values within code ("expressions"), and allow direct use of Java code to perform loops, declare variable, or perform logical flows ("scriptlets"). Both are good ways of representing the structure of a generated object (web page, Java class, or file) while supporting customization of the details.
JETs differ from JSPs in a few key ways. In a JET, the structure of the markup may be changed to support generating code in different languages. Typically the input to a JET will be a configuration file and not user input (though there is nothing forbidding this). And also typically, JET processing will take place only once for a given workflow. These are not technical limitations, and you may find uses for JETs which are quite different.
Starting out
Creating a Template
To work with JETs, create a new Java Project, JETExample and set the source folder to be src. To enable JET for this project, right click and select Add JET Nature. This will create a templates directory off the root of your new project. The default JET configuration uses the project root as the destination for the compiled Java files. To straighten this out, open the properties window for the project, select JET Settings, and set the source container to be src. When the JET compiler runs, it will output the JET Java files into the correct source folder.
JETExample
src
templates
Now we are ready to create the first JET. The JET compiler creates a Java source file for each JET, so the convention is to name the template NewClass.javajet, where NewClass will be the name of whatever class will be generated. This isn't enforced, but it helps to avoid confusion.
NewClass.javajet
NewClass
Start by creating a new file in the templates directory called GenDAO.javajet. You will get a dialog box warning you of compile errors on line 1 column 1 of your new file. If you look closely at the details, it is telling you "The jet directive is missing". This is technically correct as we have just created an empty file, but it can be confusing and misleading. Click 'OK' to close the warning and 'Cancel' to clear the New File dialog (the file is already created). To avoid this problem from coming up again, our first task is to create the jet directive.
GenDAO.javajet
jet
Every JET must start with the jet directive. This tells the JET compiler what the compiled Java template will look like (not what the template generates, just what the compiled template class looks like; the terminology is confusing, so bear with me). It also gives some of the standard Java class information. For example, we'll use the following:
<%@ jet
package="com.ibm.pdc.example.jet.gen"
class="GenDAO"
imports="java.util.* com.ibm.pdc.example.jet.model.*"
%>
Listing 1 is really self-explanatory. When the JET template is compiled, it will create a Java file GenDAO in com.ibm.pdc.example.jet.gen which will import the given packages. Again, this is just what the template will look like, not what the template will generate -- that comes next. Notice that the Java file name for the JET output is defined in the jet declaration, and is unrelated to the filename. If two templates declare the same class name, then they will interfere each others' changes with no warning. This can happen if you copy and paste template files without properly modifying all of the jet declaration. Because there are warnings when you try to create new files in the template directory, copy and paste is common, so stay on your guard.
GenDAO
com.ibm.pdc.example.jet.gen
Like JSPs which get their information via pre-declared variables like session, error, context, and request, JETs use pre-declared variables to pass information into the template. JETs use only two implicit variables: stringBuffer of type StringBuffer (surprise) which is used to build the output string when generate() is called; and the argument, handily called argument of type Object. The first line of a typical JET template will be to cast this to a more appropriate class, as shown in Listing 2.
stringBuffer
StringBuffer
generate()
argument
Object
<% GenDBModel genDBModel = (GenDBModel)argument; %>
package <%= genDBModel.getPackageName() %>;
As you can see, the default syntax for JETs is identical to JSPs, with <%...%> used to escape code or "scriptlets", and <%= ... %> used to print the value of an expression. Like JSPs, judicious use of <% ... %> tags will allow you to add any logical loops or constructs, just as you would be able to do in any Java method. For example:
Welcome <%= user.getName() %>!
<% if ( user.getDaysSinceLastVisit() > 5 ) { %>
Whew, thanks for coming back. We thought we'd lost you!
<% } else { %>
Back so soon? Don't you have anything better to do?
<% } %>
When you have completed defining your JET, save it and right click on it in the Package Explorer. Select
Compile Template. If everything goes well, a new class GenDAO will be created in the com.ibm.pdc.example.jet.gen package. It just has one method on it, public String generate(Object argument) (see Listing 4), the result of which will be whatever you have defined in the javajet template file.
public String generate(Object argument)
javajet
package com.ibm.pdc.example.jet.gen;
import java.util.*;
public class GenDAO
{
protected final String NL = System.getProperties().getProperty("line.separator");
protected final String TEXT_1 = NL + "Hello, ";
protected final String TEXT_2 = NL + "\t ";
public String generate(Object argument)
{
StringBuffer stringBuffer = new StringBuffer();
stringBuffer.append(TEXT_1);
stringBuffer.append( argument );
stringBuffer.append(TEXT_2);
return stringBuffer.toString();
}
}
Breaking out common code
After writing a few templates, you might notice some common elements being repeated, for instance something as simple as adding a copyright declaration to all of your generated code. As with JSPs, this is handled by the include declaration. Place any elements you wish to include in a file, say 'copyright.inc' and then, in your javajet template, add the statement <%@ include file="copyright.inc" %>. The include file will be added completely into the compiled output, so it can reference any variables which have been declared up until that point. The extension .inc can be whatever you choose, just don't pick anything ending with jet or JET will try to compile your include file with understandably poor results.
include
<%@ include file="copyright.inc" %>
.inc
Customizing the JET compilation
If an include file is not sufficient and you want to add additional methods or customize the generation, the simplest way is to create a new JET skeleton. A skeleton file is a template which describes what the compiled JET template will look like. The default skeleton looks like Listing 5.
public class CLASS
{
public String generate(Object argument)
{
return "";
}
}
All of the import statements will go at the top, CLASS will be replaced with the name of the class that we set in the class attribute of the jet declaration, and the body of the generate() method will be replaced with the code to do all of the generation. So, to change what the compiled template code looks like, we just have to create a new skeleton file and perform whatever customization we want, but still leave these basic elements in place.
CLASS
class
To create a custom skeleton, create a new file in the templates directory called custom.skeleton as shown in Listing 6.
custom.skeleton
public class CLASS
{
private java.util.Date getDate() {
return new java.util.Date();
}
public String generate(Object argument) {
return "";
}
}
Then in any JET template which you want to use this custom skeleton, add the attribute skeleton="custom.skeleton" to the jet declaration in the javajet file.
skeleton="custom.skeleton"
Alternatively, you could have this extend a baseclass as public class CLASS extends MyGenerator, and add all necessary helper methods in the base class. This is a little cleaner, as it keeps the common code common, and it makes development easier as the JET compiler doesn't always give the nicest error messages.
public class CLASS extends MyGenerator
Custom skeletons also allows you to change the method name and argument list for the generate() method, so a sufficiently perverse developer can make very customized templates. I was slightly inaccurate when I said that JET replaces the body of generate() with the code to generate. It actually just replaces the body of the last method declared in the skeleton, so careless code changes to the skeleton can be a good way to hurt yourself and confuse your coworkers.
Working with CodeGen
As you can see, once the template has been compiled, it is a standard Java class. To use this in an application, you need only distribute the compiled template class and not the javajet template. Alternatively, you may wish to give the user the ability to make changes to the template and at startup time automatically recompile the template. The Eclipse Modeling Framework (EMF) does this, so anyone with the need or interest can go into plugins/org.eclipse.emf.codegen.ecore/templates and change how the EMF generates their model or editor.
plugins/org.eclipse.emf.codegen.ecore/templates
If you only wish to only distribute the compiled template class, the build process may be automated. So far, we've only seen how to compile the JET templates using the JET Eclipse plugin, but we can script this or do the generation as an ANT task.
Runtime template compilation
To give the end users the power to customize your templates (and the frustration of debugging them), you can choose to compile your templates at runtime. There are several ways of doing this and for the first pass we'll use the utility class org.eclipse.emf.codegen.jet.JETEmitter which abstracts away some of the details for us. The obvious (but generally wrong) code is quite simple, as shown in Listing 7.
org.eclipse.emf.codegen.jet.JETEmitter
String uri = "platform:/templates/MyClass.javajet";
JETEmitter jetEmitter = new JETEmitter( uri );
String generated = jetEmitter.generate( new NullProgressMonitor(), new Object[]{argument} );
You'll find the first problem if you try to run this in a standard main() method. The generate() method will throw a NullPointerException because JETEmitter assumes that it is being called by a plugin. In its initialization, it calls CodeGenPlugin.getPlugin().getString(), which will fail as CodeGenPlugin.getPlugin() will be null.
NullPointerException
JETEmitter
CodeGenPlugin.getPlugin().getString()
CodeGenPlugin.getPlugin()
The simple solution of turning this code into a plugin will work, but not completely. The current implementation of JETEmitter creates a hidden project called .JETEmitters which will contain the generated code. However, JETEmitter does not add the classpath of the plugin to this new project, so the generated code will not compile if it references any objects outside of the standard Java library. The early builds for version 2.0.0 appear to be addressing this issue, but as of early April, they still don't have this fully implemented. To work around this problem, you must extend the JETEmitter class to override the initialize() method and add in your own classpath entries. Remko Popma has written a good example jp.azzurri.jet.article2.codegen.MyJETEmitter(see Resources) which will handle this until JET adds this feature properly. The modified code looks like Listing 8.
.JETEmitters
initialize()
jp.azzurri.jet.article2.codegen.MyJETEmitter
String base = Platform.getPlugin(PLUGIN_ID).getDescriptor().getInstallURL().toString();
String uri = base + "templates/GenTestCase.javajet";
MyJETEmitter jetEmitter = new MyJETEmitter( uri );
jetEmitter.addClasspathVariable( "JET_EXAMPLE", PLUGIN_ID);
String generated = jetEmitter.generate( new NullProgressMonitor(),
new Object[]{genClass} );
Command line
Happily compiling a JET from the command line isn't troubled by the classpath issues which make compilation from a main() method so difficult. In the case above, the difficulty wasn't compiling the javajet to Java code but compiling this Java code to .class. From the command line, we have much more control over the classpath so breaking the steps up makes everything smooth and easy. The only trick is that we need to run Eclipse in a "headless" (without the user interface) mode, but even this has been taken care of. To compile a JET, look at plugins/org.eclipse.emf.codegen_1.1.0/test. This directory contains sample scripts for Windows and Unix, and a sample JET to verify.
.class
plugins/org.eclipse.emf.codegen_1.1.0/test
As an ANT task
There is an ANT Task, jetc, which may take either a single template attribute, or a fileset for multiple templates. Once you configure the classpath of the jetc task, compilation of the template will be as smooth as with standard Java classes. See the Resources for more information on how to acquire and use the task.
jetc
template
fileset
Customizing JET to generate JSPs
As a default, JET uses "<%" and "%>" to markup their template, but this is the same markup that JSPs use. If you wish to generate JSPs, you will have to change the delimiters. You do this in the jet declaration at the head of the template, using the startTag and endTag attributes, as in Listing 9. In this case, I've used "[%" and "%]" for the start and end delimiters, and as you can see, the "[%= expression %]" is treated properly, just like "<%= expression %>" before.
startTag
endTag
<%@ jet
package="com.ibm.pdc.example.jet.gen"
class="JspGen"
imports="java.util.* "
startTag = "[%"
endTag = "%]"
%>
[% String argValue = (String)argument; %]
package [%= argValue %];
Tying it all together
It's an unfortunate truth that much code is reused through copy-and-paste, on the big scale and the small. Many times the solution isn't obvious, and even object-oriented languages may not help. In the cases where the same basic code pattern is repeated, but with small implementation changes, placing the common code in a template and then using JET to generate the variations is an excellent way to save mechanical time and effort. JSPs have forged this path already, so JET borrows heavily from their success. JETs use the same basic layout and semantics as JSPs, but allow greater customization. Templates may be precompiled for greater control, or distributed and compiled at runtime for greater flexibility.
In the next article, we will look at making the generated code ready for Prime Time by allowing users to customize the code and still allow regeneration by integrating our changes on a field-by-field or method-by-method basis, or even more fine-grained levels. We will also bundle it all up in a plugin to show one way of integrating code generation into your development process.? | http://www.ibm.com/developerworks/library/os-ecemf2/index.html | crawl-002 | refinedweb | 2,742 | 54.63 |
Content-type: text/html
asctime, asctime_r, ctime, ctime_r, gmtime, gmtime_r, localtime, localtime_r, mktime - Converts time units
Standard C Library: (libc.so, libc.a)
#include <time.h>
char *asctime(
const struct tm *timeptr) ;
char *asctime_r(
const struct tm *timeptr,
char *buffer) ;
char *ctime(
const time_t *timer) ;
char *ctime_r(
const time_t *timer,
char *buffer) ;
struct tm *gmtime(
const time_t *timer) ;
struct tm *gmtime_r(
const time_t *timer,
struct tm *result) ;
struct tm *localtime(
const time_t *timer ) ;
struct tm *localtime_r(
const time_t *timer,
struct tm *result) ;
time_t mktime(
struct tm *timeptr) ;
[Digital] The following functions are supported in order to maintain backward compatibility with previous versions of the operating system. You should not use them in new designs.
int asctime_r(
const struct tm *timeptr,
char *buffer,
int len) ;
int ctime_r(
const time_t *timer,
char *buffer,
int len) ;
int gmtime_r(
const time_t *timer,
struct tm *result) ;
int localtime_r(
const time_t *timer,
struct tm *result) ;
Interfaces documented on this reference page conform to industry standards as follows:
asctime_r(), ctime_r(), gmtime_r(), localtime_r(): POSIX.1c
asctime(), ctime(), gmtime(), localtime(), mktime(): XPG4, XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Points to a type tm structure that defines space for a broken-down time value. Points to a variable that specifies a time value in seconds. Points to a character array that is at least 26 bytes long. This array is used to store the generated date and time string. Specifies an integer that defines the length of the character array.
The asctime, ctime, gmtime, localtime, mktime, and tzset functions convert time values between tm structures, time_t type variables, and strings.
[POSIX] The asctime_r, ctime_r, gmtime_r, and localtime_r functions in libc_r.a are threadsafe because they do not return pointers to static data.
The tm structure, which is defined in the <time.h> header file, contains the following elements:
A time_t variable, also defined in <time.h>, contains the number of seconds since the Epoch, 00:00:00 UTC 1 Jan 1970.
A string used to represent a time value has a five-field format. For example:
Tue Nov 9 15:37:29 1993\n\0
The asctime function converts the tm structure pointed to by the timeptr parameter to a string with this five-field format. The function uses the following members of the tm structure: tm_wday tm_mon tm_mday tm_hour tm_min tm_sec tm_year
The ctime function converts the time_t variable pointed to by the timer parameter to a string with the five-field format. Local timezone information is set as though the tzset function had been called. This function is equivalent to asctime(localtime(timer)).
The gmtime function converts the time_t variable pointed to by the timer parameter to a tm structure, expressed as GMT (Greenwich Mean Time).
The localtime function converts the time_t variable pointed to by the timer parameter to a tm structure, expressed as local time. This function corrects for the local timezone and any seasonal time adjustments. Local timezone information is set as if the tzset function had been called.
The mktime function converts the tm structure pointed to by the timeptr parameter to a time_t variable. The function uses the following members of the tm structure: tm_year tm_mon tm_mday tm_hour tm_min tm_sec tm_isdst The values of these members are not restricted to the ranges defined in <time.h>. The range for tm_sec is increased to [0-61] to allow for an occasional leap second or double leap second.
A positive value for tm_isdst tells the mktime function that Daylight Saving Time is in effect. A zero (0) value indicates that Standard Time is in effect. A negative values directs the mktime function to determine whether Daylight Saving Time is in effect for the specified time. Local timezone information is set as if the tzset function had been called.
On successful completion of the call, values for the timeptr->tm_wday and timeptr->tm_yday members of the structure are set. The other members are set to specified times, but have their values forced to the ranges indicated previously. The final timeptr->tm_mday is not set until the values of the members timeptr->tm_mon and timeptr->tm_year are determined. If member tm_isdst is given as a negative number, it is set to 0 or 1 by mktime, depending on whether Daylight Saving Time is in effect at the specified time.
The asctime, ctime, gmtime, and localtime functions are not supported for multithreaded applications.
[POSIX] Instead, their reentrant equivalents, asctime_r, ctime_r, gmtime_r, and localtime_r, should be used with multiple threads.
When any of the asctime, ctime, gmtime, or localtime functions complete successfully, the return value may point to static storage, which may be overwritten by subsequent calls to these functions. On error, these functions return a null pointer and errno is set to a value indicating the error.
Upon successful completion, the asctime, asctime_r, ctime, and ctime_r functions return a pointer to a character string that expresses the time in a fixed format.
Upon successful completion, the gmtime and gmtime_r functions return a pointer to a tm structure containing converted GMT time information.
Upon successful completion, the localtime and localtime_r functions return a pointer to a tm structure containing converted local time.
Upon successful completion, the mktime function returns the specified time since the Epoch as a value of type time_t. If the time since the Epoch cannot be represented, mktime returns the value (time_t)-1 to indicate the error.
[Digital] In addition to returning (time_t)-1 when the time since the Epoch cannot be represented, the mktime function also sets errno to the value ERANGE. This extension is provided to support times prior to the Epoch (that is, negative time_t values); in which case, the value (time_t)-1 may also correspond to the time 23:59:59 UTC 31 December 1969 (one second before the Epoch). For applications supporting pre-Epoch times, it is therefore necessary to check both the return value and the value of errno to reliably determine whether an error occurred. Note that this extension is not a standard feature and may not be portable to other UNIX platforms.
[Digital] Upon successful completion, the obsolete versions of the asctime_r, ctime_r, gmtime_r, and localtime_r, functions return a value of 0 (zero). Otherwise, -1 is returned and errno is set to indicate the error.
With the exception of mktime(), if any of these functions fails, errno may be set to the following value: [Digital] The buffer, timer, or timeptr parameter is null, the len parameter is less than 1.
If mktime() is not able to represent the time since the Epoch, it returns the value (time_t)-1 and sets errno to the following value: [Digital] The time since the Epoch cannot be represented by mktime.
Functions: difftime(3), getenv(3), strftime(3), time(3), timezone(3)
Standards: standards(5) delim off | http://backdrift.org/man/tru64/man3/localtime_r.3.html | CC-MAIN-2017-09 | refinedweb | 1,136 | 51.28 |
Hazelcast 1.4: Distributed Events
What is new:
- Add, remove and update events for queue, map, set and list
- Distributed Topic for pub/sub messaging
- Integration with J2EE transactions via JCA complaint resource adapter
- ExecutionCallback interface for distributed tasks
- Cluster-wide unique id generator
Hazelcast documentation covers all these new features already but lets go over the Distributed Topic feature together. No configuration is needed to run the following code. Just download the zip, add the hazelcast.jar into your project, and run the following code on 5 JVM instances. You have cluster of 5 JVMs for pub/sub messaging! No config, no nothing...
import com.hazelcast.core.Topic; import com.hazelcast.core.Hazelcast; import com.hazelcast.core.MessageListener; public class Sample implements MessageListener { public static void main(String[] args) { Sample sample = new Sample(); Topic topic = Hazelcast.getTopic ("default"); topic.addMessageListener(sample); topic.publish ("my-message-object"); } public void onMessage(Object msg) { System.out.println("Message received = " + msg); } }
I hope the new Topic is simple and functional enough. Remember that topic operations are cluster-wide. If you add a MessageListener from member M, you will receive all messages published by any member in the cluster, including the new members joined after you added the listener.
Listeners will process the events/messages in the order they are actually fired/published. If event A occurred before event B on cluster member M, then it is guaranteed that all of the listeners of these events in the cluster will process event A before B.
Documentation at hazelcast.com covers all these new stuff with code samples so please visit the site for details.
What is next? Extending the Queue implementation to java.util.concurrent.BlockingQueue and some other cool stuff. Complete list of planned features can be found here. Got interesting feature in mind, let me know.
Website:
Weblog:
Group :
- Login or register to post comments
- 1630 reads
- Printer-friendly version
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) | http://java.dzone.com/announcements/hazelcast-14-distributed-event | crawl-002 | refinedweb | 340 | 50.02 |
A.
There are a couple of different ways that you can use to describe languages, the most popular (and my favorite) being BNF, but for our purposes, that's certainly overkill. Instead, let's just do this informally.
First, we'll allow all existing Groovy operators and control structures - there are good reasons to strip those out on occasion, but this isn't one of them. We'll cover how to do that at a later date. What this means is that structures like while (condition) {block} and if (condition) {block} will work just fine. We'll also be able to create methods inline, just as you can with Groovy scripts. And standard comments (i.e., // and /* */) will also work as expected.
We'll need the following commands to create an environment that allows for turtle graphics:
- Turtle / Movement Commands
- forward number
- move the turtle forward by number of steps
- back number
- move the turtle backward by number of steps
- right number
- turn the turtle number of degrees to the right (clockwise)
- left number
- turn the turtle number of degrees to the left (counter clockwise)
- return the turtle to the starting position, without drawing any lines
- Pen / Drawing Commands
- pendown
- set the pen to draw when the turtle moves
- penup
- set the pen to not draw when the turtle moves
- pencolor Color
- change the pen color to Color
- Animation / Demo Commands
- show
- show the turtle icon
- hide
- hide the turtle icon
- speed number
- change the speed at which the turtle draws, larger is faster
double, and since Groovy will auto convert
intto
double, we're done with that. Color is a little more problematic - ideally, we'd make it easy to use and understand, and have a whole bunch of code to do that, but since we're currently just making a sample program, we'll just use
javafx.scene.paint.Color.
org.netdance.napili.Turtle. I'll talk about how to turn those methods into a DSL in the next post. For now, here's a sample program that you can run using these commands, along with the standard stuff that Groovy provides:
def circle(size) {
45.times {
forward size
right 8
}
}
speed 3
penup
forward 250
pendown
pencolor Color.PURPLE
12.times {
circle 10
right 30
}
hide
This program draws a series of interlocking circles, like you'd get from the child's toy Spirograph. We define a method (circle), set the speed a bit higher to make it all draw quickly, move forward without drawing to center the design, set the pen color to purple (PURPLE is one of many predefined values in javafx.scene.paint.Color), then draw 12 circles, turning 30 degrees after each. At the end, we park the turtle in the home position and then hide it. If you're unfamiliar with Groovy,
Integer.times(Closure) is a method that Groovy places on all values of Integer (and primitives are auto boxed to their class type in Groovy) - it just executes the closure as many times as the value of the integer - remember, parenthesis are optional for methods with arguments, so the circle method is really the equivalent of
Integer.valueOf(45).times({forward size; right 8;}) - I trust you'll agree that the format I used is actually easier to read, once you're used to it.
See you next time, where I'll talk about how to turn the methods on Class Turtle into the DSL above.
(As usual, this article is cross posted from my main blog site.)
- Login or register to post comments
- Printer-friendly version
- driscoll's blog
- 4442 reads | https://weblogs.java.net/node/891227/atom/feed | CC-MAIN-2015-32 | refinedweb | 602 | 63.63 |
So I'm making a Pascal's triangle and I can't figure out why this code isn't working. It prints out something like this
[]
[1]
[1, 2]
[1, 3, 3]
[1, 4, 6, 4]
[1, 5, 10, 10, 5]
[1, 6, 15, 20, 15, 6]
[1, 7, 21, 35, 35, 21, 7]
[1, 8, 28, 56, 70, 56, 28, 8]
[1, 9, 36, 84, 126, 126, 84, 36, 9]
def triangle(rows):
for rownum in range (rows):
newValue=1
PrintingList = list()
for iteration in range (rownum):
newValue = newValue * ( rownum-iteration ) * 1 / ( iteration + 1 )
PrintingList.append(int(newValue))
print(PrintingList)
print()
I would change
PrintingList = list() to
PrintingList = [newValue].
triangle(10) then gives you the following:
]
Which is a valid Pascal's triangle. | https://codedump.io/share/EbnNP2E7MXL8/1/pascal39s-triangle-in-python | CC-MAIN-2018-13 | refinedweb | 124 | 60.18 |
SuggestedStories
This page had been hijacked by wiki-spammers. I rolled back this page to the most recent version that contained non-spam posts, and deleted the irrelevant stuff. Randolph Peters.
Ideas include:
I like the idea of a test container - they could actually be JVM instances that are persistant, controlled by the FitNesse server via RMI or a socket protocol, and could receive commands from the server to run tests or suites. It would speed up launching tests, support suite-level object sharing, and running multiple tests in parallel.
'We do this at PrintSoft? by putting (by hand) "!define FITNESSE_ROOT {this is my directory}" in the content.txt file in the FitNesse root directory'
- .KenHorn
The handling of SetUp and TearDown? pages isn't really correct since a possible hierarchy isn't handled.
StefanRoock, email: stefanATstefanroockDOTde
This function sould be easy to implement based on the previous story suggestion ("Support debugging")
StefanRoock, email: stefanATstefanroockDOTde
MarkEnsign?
Even a pointer on how to code it ourselves could be helpful.
Thanks,
keithDOTadeneyATprintsoftDOTde
where produce() does the test, and pages()/imgs()/ppm() just return cached test results.
Great tool thanks,
Keith
-
|word| definition|
when searching certain words are found, and certain word on the very same page are simply not found.
-
-JjL
-
--
[.FrontPage] [.RecentChanges]
- Transcending SetUp/TearDown? inheritance. (+1 - IljaPreuss)
- When I include a page of classpaths they don't seems to be picked up by child pages, it this a bug?
- It would be nice if we could make nested collapsible sections, for instance, we're using Fitnesse's symbolic links to let each programmer run tests against their own classpaths, we collapse the path definitions together but it would be nice to have sub-collapsable sections for jars that are shared by all users and the jars that are unique to the specific environment or for separting jar paths by relavence. Nested collapsible sections would also be valuable for separating variables and fixtures into groups. And because things are easier to remember when you hear them three times in a row, don't forget to implement nested collapsible sections!
- tag !include ^SubPage have to inclued subpages. Currently it tells "Page include failed because the page ^SubPage does not exist." though displays valid link.
- add a special variable that is the name of the current page like $ {CURRENT_PAGE}, this would be useful when running a suite of tests and trying to match a test with a section of a generated log file
- add the ability to jump to the first error after a suite of tests has run or collapse all included tests, except ones that contain errors - some of these suites are getting long Cooper Jager
- reiterating the PAGE-LEVEL TOC request below. Fitnesse is simply a great wiki, FIT aside, but something almost all regular wikis have that is missing is a !pagecontents (or !pagecontents 1-2 for only headers1-2) that inserts an indented TOC of the headers on the page. Very useful! I would think you should be able to reuse some of the 'contents' fixture code that does the same thing for the entire wiki site.
- FitLibraryForDotNet?
- FAQs
- Cookbook recipes (Java and .NET)
- SuggestedStories.ParameterizedVersionControl
- When archiving test results (out of CommandLineTestRunner?, etc), the pages look rather ugly with no style sheets. The style sheet structure that is used does not lend very well to archiving test results. It would be nice to have the link statement read: &lt;link href="./files/css/fitnesse.css" ... instead of &lt;link href="/files/css/fitnesse.css" ... this would allow you to have a non-fitnesse directory that will still interpret the style sheets directly. All that you need to do then is have the "files" tree in the same folder as your output, tailor the css sheets appropriately, and the archival pages should look the same as they do when run directly out of fitnesse.
- Escape wiki word syntax. When you add text to a page that LooksLikeWikiWord? you can use !-ThisSyntax-! to get rid of the question mark. However, if you use LooksLikeWikiWord? all over the page, you have to use !-ThisSyntax-! all over the page. It would be a nice convenience to have some syntax that would identify a string as a WordThatLooksLikeWikiWordButShouldNotBeTreatedAsSuch for the entire page.
- Extended error keyword. Ability to assert a specific type of exception or message. Something like error("This is the expected message") or error[MyCustomException?].
- Data Table Fixture for .NET (like a row fixture but sitting on top of a .NET DataTable) would be useful if requirements specify state of data separately from the application view of that data.
- User-Defineable Suites - It's great to organize your pages in a certain way, and then have the suites run them by that hierarchy, but sometimes you want a few different ways of looking at your tests. For instance, I could organize my stories by task, and then I could run a suite that is focused on that task... but if I wanted to look at the stories from the first iteration, I have to manually pick and choose the tests out. It would be great if I could make a wiki page that has a list of the tests I want to run, and then they would be run like a suite.
- Reread Password File every time - I'd like to be able to change somebody's password (or add a new account) without having to restart FitNesse (availability is very important to me) -Stephen Starkey
- Group-level restrictions - Just like UNIX. I'd like to be able to put users into groups and limit certain parts of pages only to folks in that group (i.e. allow read access to one group, and read-write access to another, more special group) -Stephen Starkey
- User-level restrictions - Just like UNIX. I'd like to be able to limit certain functions only to specific users. -Stephen Starkey (that should just about do it..hehe)
- If a user is not yet authenticated, ask for a password when the click "Edit" - not after they have made edits and clicked "Save".
- The alias form doesn't work with a url (Rick)
- What about adding some markup elements for managing and tracking stories?
- Exceptions like ClassNotFoundException? are shown in a very small font. Use a larger font. Precede the stack trace with a message meaningful for customers like "technical problem; contact your programmers" (But that's a Fit issue)
- It would be nice to have a special version of !contents that is able to list all pages in wiki (see)
- Fix italics markup for single-character strings.
- Make the content type of the generated HTML pages "text/html; charset=utf-8".
- .RecentChanges filter per Sub-wiki. A project team mostly wants to know how their project sub-wiki is changing. markW.
- Limit the number of *.Zip files to 5. They are Tribbles. (Note the use of Metaphor) markW.
- SuiteOfSuites?
- !=text=! for monofont literals.
- Compare different versions of pages. 8 hours
- In-page hyperlinks
- Anchors are generated for headers (!1, etc.)
- A table of contents function creates a bullet list of all anchors on the current page
- Enable external linking to anchors
- Ability to call ant tasks. Perhaps this is a reach -- but the framework already supports wikis and fit. Maybe a better approach is that we can write custom ant fixtures instead.
- Duplicate the buttons at both the top and bottom of the page.
- Not all lines starting with a digit are outline levels. In particular, " * 1 some text here" should not render as a bullet, followed by a numbered outline level, followed by "some text here".
- If it isn't a big change, it would be nice to have valid xml (xhtml transitional or something like that) so that the pages can be styled or converted to pdf etc.
- It'd be nice if we could use the email/IM/wiki standard of asterisk *foo* to represent bold instead of the bizzare three-single-quotes. Likewise for _italics_
- A "Back" button on the Edit page would be nice (i don't always trust my browser)
- How about a "preview" button when editing which would show how the edited page would look without persisting a new version of the page until you click "save". Pages which change often that you want to look "just right" currently generate too many versions. KevinWilliams++ AndrewMcDonagh
- Sometimes FitNesse starts "acting funny"(technical term), it would be nice if there were a clean way to stop the process without using system utilities.
- Recognize newline characters from standard out and replace them with break read in the generated wiki/html page after a test or suite is executed. -ChrisWilliams?
- I'm using FitNesse in German. So we have some Umlaute which can't be typed, either in the normal way or with html syntax. It would be nice to just be able to use them. -.DanielFlueck
- 'Umlaute' work nicely for me, maybe a Browser Issue? -Stefan.Haslinger@gmx.at
- What about a short list of the important Wiki Formatting at the end of the Editbox? Something like the MoinMoinWiki is doing. -.DanielFlueck
- It would help allot if there were a testlist tag that would work like the contents tag but only list pages that are tests or test suites. Adding each test to a parent page manually just to avoid having the header, footer, errorlog, etc. show up in the contents is a real pain when continually adding tests to a page.
- Identify a FIT table as distinct from a normal table. Inside a FIT table, do *not* process wiki words -- treat all text inside tables as if surrounded with bang-dash.
- Option to put the button links (test/edit/etc.) on the bottom as well as the top. This is useful for long pages so you don't have to scroll so far.
- HTML co-existence with WikiML. You could deactivate it by default, I suppose, requiring an switch to be set in an XML pref file. Those that dont like it dont have to change anything and those who do use it can benefit from the robustness of HTML while using FitNesse. (Isn't it possible to simply let the &lt;HTML&gt; by-pass the parser?) I for one could use more nicely formatted tables, more color, more fonts, a little Javascript, include some pre-existing editable web pages etc. A more secure and traditional wiki environment could be maintained by simply not switching html on.
- An address-bar command: ?contains with the functionality of its counterpart !contains so you can quickly see which pages exist in a directory without having to add it to a page then remove it or drill down in the OS to hava a look... Something quick.
- WYSIWYG. Its trivial to implement this today and much easier for business-types who don't care to learn either WikiMarkUP or HTML. Here's a live demo of an Open Source WYSIWYG editor. It's free, all HTML and works with IE and Mozilla and was designed for the same text area box that FitNesse is now using. Seed Wiki uses a similar editor on their site. This same editor can be demoed at the developers site. Microsoft explains how to build one for IE complete with evolutive demos. I heard (unconfirmed) that Netscape also supports the same technology. Wards 1st wiki is what, 8 years old now? This seems like a natural evolution for Wiki. WYSIWYG was a lot harder to do back then in the days before the internet boom. Now, if you can edit email, you'd know how to edit a WYSIWYG FitNesse wiki. This would help the same audience that the Excel button benefitted.
- Remove the 2nd BIG gauge from the .FrontPage. Its a nice looking graphic, true, but after the 100th visit to the page you begin to wonder if the little one isn't enough...
- Click the gauge to navigate back to the FrontPage.
- Allow the !contents element to list its elements within the left sidebar to avoid long lists that push page content down. Something like !contentsSB SB=SideBar?
- `Alternative` to Wiki Words. Make anything between backticks ` a wiki reference [ ie: `link` ] because WikiWords? arent always the most natural choice. In addition to standard WikiWords? you could have onesThatDontExactlyConform or HaveRepeatingCAPS or One2or3Numbers? or AccentsIncluded? or AnyCOMBOyouWant2 use. The only constraint being as to whether or not the folder that would be created would make the OS happy. Include a wiki-way compliance switch somewhere that can be over-ridden for people who don't care whether or not wiki-way happy-collisions ever occur.
- A PREVIEW button in the edit area to make edit tweaking quicker while at the same time : Move the Paste Table From Excel below Save button then put time reducing the ZipFileTribbleProblem?. Put the Preview button to the right of Save and add a Cancel button to the right of it to lessen mouse movement to the navigation arrows.
- Limiting the .zip file count. I second the Tribble allusion above. Even 10 or 20 would be much better than the hundreds that I now have to deal with in many directories. (You DID limit it to some number, right?) No matter how cute those rascals are, they're getting hard to manage.
- Orphaned pages - all pages without any reference to them
- Provide an RSS feed of changes. If not RSS then maybe email is fine.
- Put back the Total Suite Execution Time feature. Seems to have gotten lost in one or another revision, and we like that metric.. :-). -Stephen Starkey
- Brian Marick and Ward Cunningham's ^NotesOnErrorMessages?.
- Daniel Parker's notes on ^SimpleDateFormat?.
- Configurable text heights for table especially in the stack output - when debugging tests its almost impossible to read the output so each time the Font height must be made larger and then smaller when running the test.
- Could you make the FitNesse logo clickable and linked to the .FrontPage?
- Could you please include a stylesheet? We'd like to change the style of the textarea, for example, but there's no way to do that without altering the code and recompiling.
- files with spaces barf in the files/
- Wonderfull, but it is not working with characters like (text corrupted - please fix), etc. What am I doing wrong? I'm hosting this WIKI on a Red Hat Linux 9. Thank's -- Leandro
- Support navigation from test result page to editor. This is crucial for large test pages. If a test fails, I'd like to click on the error to jump to the WikiML code for the line.
- State of Tests: We often have to deal with the state of tests, like test planned, tester in progress, test definition finished, test succeeds, test obsolete. The state may differ per project. Therefore configurable states would be useful (e.g. via a state page like TestStates?). The fitnesse users can choose states arbitrarily. It is simply for the organization of tests without predefined semantics. The ! contents command could be parameterized with states like !contents test-planned (StefanRoock)
- It would be nice to archive past test runs in a similar way we now archive past wiki page contents. keithDOTadeneyATprintsoftDOTde
Suggested Refactorings to Add To Refactor Page
- MakeSubPage - from a page, click on Refactor, click on MakeSubPage.
- MoveTree - &lt;B style="color:black;background-color.#A0FFFF "&gt;Move a page&lt;/b&gt; and all its subpages to another branch of the Wiki.
An HTML element ID for the fit.Summary tableI am trying to use Ant to run FitNesse tests in a headless manner. I am using HTMLUnit to run my suite and I want to verify that the "total count" of the suite shows 0 wrong, 0 ignored, 0 exceptions. It is much easier to locate an element of an HTML page if it has an ID (the id attribute).
Command line switch to turn on the UpdaterI don't want the Updater to run unless I explicitly ask it to run, which I would do each time I update my FitNesse code base. So I want a command line switch which would cause it to run on startup. I don't care if the server stays up, or just prints a "succeeded" message and exits.
Looping Action FixtureMM&gt; LoopingActionFixture?.... It sounds like a cool idea. Its definately.
Test ContainerI've created a class I call TextContext? which my SetUp page instantiates. This context contains all my page-level testing globals, such as a connection to my server that I'm testing. I'd like to have the ability for objects to be available down through a suite of pages, so I can connect to my server, run a suite of tests, then disconnect - rather than connecting/disconnecting on each page.
Ideas include:
- an object registry (I'm guessing like an RMI registry)
- proxying objects out of the server (so socket connections can stay open)
- A "test container" which can run multiple pages or a suite in one JVM, under the direction/control of the main FitNesse server
Special VariablesA special variable like ${FITNESSE_ROOT} that contains the full pathname of the Fitnesse root directory. There are probably other such variables like FITNESSE_PORT. It might also be a good idea to find a way to access environment variables. Perhaps a syntax like this $${environment_variable}
'We do this at PrintSoft? by putting (by hand) "!define FITNESSE_ROOT {this is my directory}" in the content.txt file in the FitNesse root directory'
- Allow alternative labels in fixture tables -- I don't want the users to see the method names - yes they should be able to cope, but it would help acceptance.
- To avoid full Wiki editing, allow an edit mode where only the single table is editable, either in wiki src form (with the rows only being available, and the excel import tidying (possibly add an 'open in excel option'), or in a table form where all the cell contents are editable. Again this is about user usability
- I'd need some way of capturing the tests / current wiki state into CVS, so I can track / tag it for releases. Since everything evolves, I'd want to be able to rebuild the source from a single point, including the acceptance tests at that point. This may mean each project uses a distinct fitnesse wiki and I just check the whole thing into CVS. It might be nice however (thinking aloud here) to be able to resolve a single test into a salf-contained unit (with all inherited Classpaths) (this could be a help thing to show what command would be used for this test (maybe this already exists?)
- Does Refactoring's rename allow the movement of a page around the wiki? Got a NPE when I tried, not sure if it's a bug of a feature :o)
- Not sure if this is a fitnesse (i think it is) or a FIT issue (Ward's site is down) - I can't see any mention of test lifecycle in the docs - Are multiple tests on a single page run as if from a single controlling test method? Just trying to determine both from a threading point of view and a VM one - ie is the VM run command invoked once per page? Are there plans to support classloader style tricks (for static state) in order to speed up test times? (a la jUnit)
Date format problemfound in brazil when trying to rename a page.
!|java.text.ParseException: Unparseable date: "Mon, 09 Jun 2003 15:23:34 GMT"| | at java.text.DateFormat.parse(Unknown Source)| | at fitnesse.responders.FileResponder.setNotModifiedHeader(Unknown Source)| | at fitnesse.responders.FileResponder.prepareFileResponse(Unknown Source)| | at fitnesse.responders.FileResponder.makeResponse(Unknown Source)| | at fitnesse.FitnesseServer.makeResponse(Unknown Source)| | at fitnesse.FitnesseServer.serve(Unknown Source)| | at fitnesse.socketservice.SocketService$ServerRunner.run(Unknown Source)| | at java.lang.Thread.run(Unknown Source)| ||?
Standard Out needs to be "formatted" as textIf a Fixture (or an application) generates output on StdOut, it shows up at the top of the test page. However, newlines
Support debuggingIt is easy to debug your tests using the FileRunner? from FIT. But you the tests in HTML and not the internet WIKI format of Fitnesse. The following code is a sketch of the solution:
public class CostumizedFitnesseRunner { !|private static final String TEARDOWN = "TearDown";| |private static final String SETUP = "SetUp";| |private static final String TMP_SRC_FILE_PREFIX = "FitnesseTest_";| |private static final String HTML_EXTENSION = ".html";| |private static final String RESULT_PREFIX = "Result_";| !|public static void main(String[] args)| |{| ||String path = null;| ||String testName = null;| !||if (args.length != 2)| ||{| |||System.out.println("Usage: java fitnesse.debug.CostumizedFitnesseRunner &lt;path&gt; &lt;testname&gt;");| ||}| ||else| ||{| |||path = args[0];| |||testName = args[1];| !|||try| |||{| ||||WikiPage wikiPage = FileSystemPage.makeRoot(path, testName);| ||||HtmlWikiPage htmlWikiPage = new HtmlWikiPage(wikiPage.getData());| ||||String html = htmlWikiPage.testableHtml();| !||||if (html.length() == 0) {| |||||System.out.println("Wiki page not found: " + path + "/" + testName);| |||||System.exit(-1);| ||||}| !||||WikiPage setUpPage = FileSystemPage.makeRoot(path, SETUP);| ||||String setUpHtml = new HtmlWikiPage(setUpPage.getData()).testableHtml();| !||||WikiPage tearDownPage = FileSystemPage.makeRoot(path, TEARDOWN);| ||||String tearDownHtml = new HtmlWikiPage(tearDownPage.getData()).testableHtml();| !||||File tmpSrcFile = File.createTempFile(TMP_SRC_FILE_PREFIX, HTML_EXTENSION);| ||||String tmpDstFileName = RESULT_PREFIX + tmpSrcFile.getName();| ||||FileOutputStream fos = new FileOutputStream(tmpSrcFile);| ||||PrintStream ps = new PrintStream(fos);| ||||ps.print(setUpHtml);| ||||ps.print(html);| ||||ps.print(tearDownHtml);| ||||fos.close();| !||||FileRunner runner = new FileRunner();| !||||runner.run(new String[] { tmpSrcFile.getAbsolutePath(), tmpDstFileName });| |||}| |||catch (Exception e)| |||{| ||||e.printStackTrace();| |||}| ||}| |}| }
The handling of SetUp and TearDown? pages isn't really correct since a possible hierarchy isn't handled.
StefanRoock, email: stefanATstefanroockDOTde
HTML dump of Fitnesse-WIKIIntegrate a function into fitnesse to generate a html dump of the wiki pages in Fitnesse. These pages can then be versioned with CVS.
This function sould be easy to implement based on the previous story suggestion ("Support debugging")
StefanRoock, email: stefanATstefanroockDOTde
Access to local files
Widget to sum up a column in a table.Thanks!
MarkEnsign?
Return Additional Metrics DataWe are considering using FitNesse for our test suites, however we need a way of additionally displaying test metrics (number of pages printed, number of images ripped, pages generated per minute, ...). Some of these metrics are like additional test checks i.e. number of pages printed, others need to be plotted over time i.e. pages generated per minute.
Even a pointer on how to code it ourselves could be helpful.
Thanks,
keithDOTadeneyATprintsoftDOTde
where produce() does the test, and pages()/imgs()/ppm() just return cached test results.
Great tool thanks,
Keith
Graceful GettersMost of our objects use getters. It would be nice if FitNesse could drop the gets out on the row and column fixtures, so I could say "account balance?" instead of "get account balance?"
Smart !fixture directiveThe !fixture directive should be aware of the !path directives in force for at least the page it appears on. This would allow for simpler !fixture specifications. It would be even nicer if it were aware of those in affect on the page it was being used on (i.e. in the drop-down list). The significance of this last statement is if a classpath element that contains a fixture is added on a sub-wiki then the fixture name could potentially be shortened when editting that sub-wiki page.
Secure the new Shutdown feature-The new orderly shutdown feature is a security hole and needs to have password protection added. As a convenience feature on my local Fitnesse server I have added a table that is comprised of a single cell that contains the shutdown URL labled as "Shutdown FitNesse Server" to the top level PageHeader. Clicking this "button" on any page shuts down the server as desired. Sweet. The problem is I could add the same information to any public FitNesse server (i.e. this one or butunclebob.com) and then anybody could easily shutdown the server for the entire world (assuming you're using the latest jar). Of course nobody has to have modification permissions to shutdown a FitNesse server. All it really takes is crafting the correct URL in your browser and the server goes down! Password protection would help prevent this.- Withdrawn. This feature is already secured. I had failed to recognize this as I was always operating logged into my server whenever I exercised my button.
Restore Properties property to pages such as /PageHeader in the distribution.-It is annoying to have to manually edit the XML properties file to restore this feature so that modifications such as that described in the preceding item may be made.- Withdrawn after reading the Properties bullet on .FitNesse.MarkupPageAttributes
-
Searching
- Search to support searching in subpages only instead of whole Wiki
- Search not to miss words. In the test case, I have now like 500 words in subpages in a format like this:
when searching certain words are found, and certain word on the very same page are simply not found.
-
Variable names with punctuationIt would be nice if variable names would support punctuation. Especially since Java System properties are implicitly defined as variables and they all use a period for separating words. Currently how would you access a System property like user.dir? I can't do something like !path ${user.dir} currently.
- trim space from page names when doing a rename and move refactoring
- consider prepopulating the refactor field with the current page name (some people may not want this, though)
-
Arbitrary variable assignment within tablesIt would be nice to be able to assign a variable in any table cell that could then be used in any other, regardless of the fixtures used. Syntactically, this could use ${VAR_NAME}= to assign whatever value would normally be rendered in that cell to that variable. For instance:
--
Server side scripting languagesWould it be hard to use a page on a server as a fixture? FitNesse would then just call a server-side script (in PHP, ASP, ...) and gets the answer from that call. This would enable FitNesse to work with all web server scripts in any language. - WillemBogaerts?.
[.FrontPage] [.RecentChanges] | http://fitnesse.org/SuggestedStories | crawl-001 | refinedweb | 4,335 | 64.81 |
Hey guys, so ive jsut started teaching my self C++ and so far ive been doing ok. Today i worked on coding my own original prototype based of the example used. Code Blocker kicked back a lot of errors and i cant figure out why or what i am doing wrong. please help if you guys can.
thanks guys. glad to be a part of the community. and hopefully this code is nice and clean enough for you guys. let me know if i should post differently next time.
Josh
Code:#include <iostream> using namespace std; int age ( int x, int y ); int main() { int x; int y; cout<<"Please input the month you were born (number form): "; cin>> int x; cin.ignore(); cout<<"Please input the year of your birth: "; cin>> int y; cin.ignore(); cout<<"your age is:"<< age ( x, 2011 - y ) <<"\n"; cin.get(); } int age ( int x, int y ); { return x, 2011 - y } | http://cboard.cprogramming.com/cplusplus-programming/138784-first-prototype-gone-bad.html | CC-MAIN-2014-15 | refinedweb | 157 | 91.51 |
Theming UI refers to the ability to perform a change in visual styles in a consistent manner that defines the “look and feel” of a site. Swapping color palettes, à la dark mode or some other means, is a good example. From the user’s perspective, theming involves changing visual styles, whether it’s with UI for selecting a theme style, or the site automatically respecting the user’s color theme preference at the OS-level. From the developer’s perspective, tools used for theming should be easy-to-use and define themes at develop-time, before applying them at runtime.
This article describes how to approach theming with Mimcss, a CSS-in-JS library, using class inheritance—a method that should be intuitive for most developer as theming is usually about overriding CSS property values, and inheritance is perfect for those overrides.
Full discloser: I am the author of Mimcss. If you consider this a shameless promotion, you are not far from the truth. Nevertheless, I really do believe that the theming technique we’re covering in this article is unique, intuitive and worth exploring.
General theming considerationsGeneral theming considerations
Styling in web UI is implemented by having HTML elements reference CSS entities (classes, IDs, etc.). Since both HTML and CSS are dynamic in nature, changing visual representation can be achieved by one of the following methods:
- Changing the CSS selector of an HTML element, such as a different class name or ID.
- Changing actual CSS styling for that HTML element while preserving the selector.
Depending on the context, one method can be more efficient than another. Themes are usually defined by a limited number of style entities. Yes, themes are more than just a collection of colors and fonts—they can define paddings, margins, layouts, animations and so on . However, it seems that the number of CSS entities defined by a theme might be less than a number of HTML elements referencing these entities, especially if we are talking about heavy widgets such as tables, trees or code editors. With this assumption, when we want to change a theme, we’d rather replace style definitions than go over the HTML elements and (most likely) change the values of their
class attributes.
Theming in plain CSSTheming in plain CSS
In regular CSS, one way theming is supported is by using alternate stylesheets. This allows developers to link up multiple CSS files in the HTML
<head>:
<link href="default.css" rel="stylesheet" type="text/css" title="Default Style"> <link href="fancy.css" rel="alternate stylesheet" type="text/css" title="Fancy"> <link href="basic.css" rel="alternate stylesheet" type="text/css" title="Basic">
Only one of the above stylesheets can be active at any given time and browsers are expected to provide the UI through which the user chooses a theme name taken from the values of the
<link> element’s
title attribute. The CSS rule names (i.e. class names) within the alternative stylesheets are expected to be identical, like:
/* default.css */ .element { color: #fff; } /* basic.css */ .element { color: #333; }
This way, when the browser activates a different stylesheet, no HTML changes are required. The browser just recalculates styles (and layout) and repaints the page based on the “winning” values, as determined by The Cascade.
Alternate stylesheets, unfortunately, are not well-supported by mainstream browsers and, in some of them, work only with special extensions. As we will see later, Mimcss builds upon the idea of alternate stylesheets, but leverages it in a pure TypeScript framework.
Theming in CSS-in-JSTheming in CSS-in-JS
There are too many CSS-in-JS libraries out there, and there is no way we can completely cover how theming works in CSS-in-JS in a single post to do it justice. As far as CSS-in-JS libraries that are tightly integrated with React (e.g. Styled Components), theming is implemented on the
ThemeProvider component and the Context API, or on the
withTheme higher-order component. In both cases, changing a theme leads to re-rendering. As far as CSS-in-JS libraries that are framework-agnostic, theming is achieved via proprietary mechanisms, if theming is even supported at all.
The majority of the CSS-in-JS libraries—both React-specific and framework-agnostic—are focused on “scoping” style rules to components and thus are mostly concerned with creating unique names for CSS entities (e.g. CSS classes). In such environments, changing a theme necessarily means changing the HTML. This goes against the alternative stylesheets approach described above, in which theming is achieved by just changing the styles.
Here is where Mimcss library is different. It tries to combine the best of both theming worlds. On one hand, Mimcss follows the alternate stylesheets approach by defining multiple variants of stylesheets with identically named CSS entities. On the other hand, it offers the object-oriented approach and powerful TypeScript typing system with all the advantages of CSS-in-JS dynamic programming and type safety.
Theming in MimcssTheming in Mimcss
Mimcss is in that latter group of CSS-in-JS libraries in that it’s framework-agnostic. But it’s also created with the primary objective of allowing everything that native CSS allows in a type-safe manner, while leveraging the full power of the TypeScript’s typing system. In particular, Mimcss uses TypeScript classes to mimic the native CSS stylesheet files. Just as CSS files contain rules, the Mimcss Style Definition classes contain rules.
Classes open up the opportunity to use class inheritance to implement theming. The general idea is that a base class declares CSS rules used by the themes while derived classes provide different style property values for these rules. This is very similar to the native alternative stylesheets approach: activate a different theme class and, without any changes to the HTML code, the styles change.
But first, let’s very briefly touch on how styles are defined in Mimcss.
Mimcss basicsMimcss basics
Stylesheets in Mimcss are modeled as Style Definition classes, which define CSS rules as their properties. For example:
import * as css from "mimcss" class MyStyles extends css.StyleDefinition { significant = this.$class({ color: "orange", fontStyle: "italic" }) critical = this.$id({ color: "red", fontWeight: 700 }) }
The Mimcss syntax tries to be as close to regular CSS as possible. It is slightly more verbose, of course; after all, it is pure TypeScript that doesn’t require any plug-ins or pre-processing. But it still follows regular CSS patterns: for every rule, there is the rule name (e.g.
significant), what type of rule it is (e.g.
$class), and the style properties the rule contains.
In addition to CSS classes and IDs, style definition properties can define other CSS rules, e.g. tags, keyframes, custom CSS properties, style rules with arbitrary selectors, media, @font-face, counters, and so on. Mimcss also supports nested rules including those with pseudo classes and pseudo-elements.
After a style definition class is defined, the styles should be activated:
let styles = css.activate(MyStyles);
Activating styles creates an instance of the style definition class and writes the CSS rules to the DOM. In order to use the styles, we reference the instance’s properties in our HTML rendering code:
render() { return <div> <p className={styles.significant.name}> This is a significant paragraph. </p> <p id={styles.critical.name}> This is a critical paragraph. </p> </div> }
We use
styles.significant.name as a CSS class name. Note that the
styles.significant property is not a string, but an object that has the
name property and the CSS class name. The property itself also provides access to the CSS Object Model rule, which allows direct rule manipulation; this, however, is outside of the scope of this article (although Louis Lazaris has a great article on it).
If the styles are no longer needed, they can be deactivated which removes them from the DOM:
css.deactivate(styles);
The CSS class and ID names are uniquely generated by Mimcss. The generation mechanism is different in development and production versions of the library. For example, for the
significant CSS class, the name is generated as
MyStyles_significant in the development version, and as something like
n2 in the production version. The names are generated when the style definition class is activated for the first time and they remain the same no matter how many times the class is activated and deactivated. How the names are generated depends on in what class they were first declared and this becomes very important when we start inheriting style definitions.
Style definition inheritanceStyle definition inheritance
Let’s look at a simple example and see what Mimcss does in the presence of inheritance:
class Base extends css.StyleDefinition { pad4 = this.$class({ padding: 4 }) } class Derived extends Base { pad8 = this.$class({ padding: 8 }) } let derived = css.activate(Derived);
Nothing surprising happens when we activate the
Derived class: the
derived variable provides access to both the
pad4 and the
pad8 CSS classes. Mimcss generates a unique CSS class name for each of these properties. The names of the classes are
Base_pad4 and
Derived_pad8 in the development version of the library.
Interesting things start happening when the
Derived class overrides a property from the base class:
class Base extends css.StyleDefinition { pad = this.$class({ padding: 4 }) } class Derived extends Base { pad = this.$class({ padding: 8 }) } let derived = css.activate(Derived);
There is a single name generated for the
derived.pad.name variable. The name is
Base_pad; however, the style is
{ padding: 8px }. That is, the name is generated using the name of the base class, while the style is taken from the derived class.
Let’s try another style definition class that derives from the same
Base class:
class AnotherDerived extends Base { pad = this.$class({ padding: 16 }) } let anotherDerived = css.activate(AnotherDerived);
As expected, the
anotherDerived.pad.name has the value of
Base_pad and the style is
{ padding: 16px }. Thus, no matter how many different derived classes we may have, they all use the same name for the inherited properties, but different styles are assigned to them. This is the key Mimcss feature that allows us to use style definition inheritance for theming.
Creating themes in MimcssCreating themes in Mimcss
The main idea of theming in Mimcss is to have a theme declaration class that declares several CSS rules, and to have multiple implementation classes that are derived from the declaration while overriding these rules by providing actual styles values. When we need CSS class names, as well as other named CSS entities in our code, we can use the properties from the theme declaration class. Then we can activate either this or that implementation class and, voilà, we can completely change the styling of our application with very little code.
Let’s consider a very simple example that nicely demonstrates the overall approach to theming in Mimcss.: a theme simply defines the shape and style of an element’s border.
First, we need to create the theme declaration class. Theme declarations are classes that derive from the
ThemeDefinition class, which itself derives from the
StyleDefinition class (there is an explanation why we need the
ThemeDefinition class and why themes should not derive directly from the
StyleDefinition class, but this is a topic for another day).
class BorderTheme extends css.ThemeDefinition { borderShape = this.$class() }
The
BorderTheme class defines a single CSS class,
borderShape. Note that we haven’t specified any styles for it. We are using this class only to define the
borderShape property type, and let Mimcss create a unique name for it. In a sense, it is a lot like a method declaration in an interface—it declares its signature, which should be implemented by the derived classes.
Now let’s define two actual themes—using
SquareBorderTheme and
RoundBorderTheme classes—that derive from the
BorderTheme class and override the
borderShape property by specifying different style parameters.
class SquareBorderTheme extends BorderTheme { borderShape = this.$class({ border: ["thin", "solid", "green"], borderInlineStartWidth: "thick" }) } class RoundBorderTheme extends BorderTheme { borderShape = this.$class({ border: ["medium", "solid", "blue"], borderRadius: 8 // Mimcss will convert 8 to 8px }) }
TypeScript ensures that the derived classes can only override a property using the same type that was declared in the base class which, in our case, is an internal Mimcss type used for defining CSS classes. That means that developers cannot use the
borderShape property to mistakenly declare a different CSS rule because it leads to a compilation error.
We can now activate one of the themes as the default theme:
let theme: BorderTheme = css.activate(SquareBorderTheme);
When Mimcss first activates a style definition class, it generates unique names for all of CSS entities defined in the class. As we have seen before, the name generated for the
borderShape property is generated once and will be reused when other classes deriving from the
BorderTheme class are activated.
The
activate function returns an instance of the activated class, which we store in the
theme variable of type
BorderTheme. Having this variable tells the TypeScript compiler that it has access to all the properties from the
BorderTheme. This allows us to write the following rendering code for a fictional component:
render() { return <div> <input type="text" className={theme.borderShape.name} /> </div> }
All that is left to write is the code that allows the user to choose one of the two themes and activate it.
onToggleTheme() { if (theme instanceof SquareBorderTheme) theme = css.activate(RoundBorderTheme); else theme = css.activate(SquareBorderTheme); }
Note that we didn’t have to deactivate the old theme. One of the features of the
ThemeDefinition class (as opposed to the
StyleDefintion class) is that for every theme declaration class, it allows only a single theme to be active at the same time. That is, in our case, either
RoundBorderTheme or
SquareBorderTheme can be active, but never both. Of course, for multiple theme hierarchies, multiple themes can be simultaneously active. That is, if we have another hierarchy with the
ColorTheme declaration class and the derived
DarkTheme and
LightTheme classes, a single
ColorTheme-derived class can be co-active with a single
BorderTheme-derived class. However,
DarkTheme and
LightTheme cannot be active at the same time.
Referencing Mimcss themesReferencing Mimcss themes
In the example we just looked at, we used a theme object directly but themes frequently define elements like colors, sizes, and fonts that can be referenced by other style definitions. This is especially useful for separating the code that defines themes from the code that defines styles for a component that only wants to use the elements defined by the currently active theme.
CSS custom properties are perfect for declaring elements from which styles can be built. So, let’s define two custom properties in our themes: one for the foreground color, and one for the background color. We can also create a simple component and define a separate style definition class for it. Here is how we define the theme declaration class:
class ColorTheme extends css.ThemeDefinition { bgColor = this.$var( "color") frColor = this.$var( "color") }
The
$var method defines a CSS custom property. The first parameter specifies the name of the CSS style property, which determines acceptable property values. Note that we don’t specify the actual values here; in the declaration class, we only want Mimcss to create unique names for the custom CSS properties (e.g.
--n13) while the values are specified in the theme implementation classes, which we do next.
class LightTheme extends ColorTheme { bgColor = this.$var( "color", "white") frColor = this.$var( "color", "black") } class DarkTheme extendsBorderTheme { bgColor = this.$var( "color", "black") frColor = this.$var( "color", "white") }
Thanks to the Mimcss (and of course TypeScript’s) typing system, developers cannot mistakenly reuse, say, the
bgColor property with a different type; nor they can specify values that are not acceptable for a color type. Doing so would immediately produce a compilation error, which may save developers quite a few cycles (one of the declared goals of Mimcss).
Let’s define styles for our component by referencing the theme’s custom CSS properties:
class MyStyles extends css.StyleDefinition { theme = this.$use(ColorTheme) container = this.$class({ color: this.theme.fgColor, backgroundColor: this.theme.bgColor, }) }
The
MyStyles style definition class references the
ColorTheme class by calling the Mimcss
$use method. This returns an instance of the
ColorTheme class through which all its properties can be accessed and used to assign values to CSS properties.
We don’t need to write the
var() function invocation because it’s already done by Mimcss when the
$var property is referenced. In effect, the CSS class for the
container property creates the following CSS rule (with uniquely generated names, of course):
.container { color: var(--fgColor); backgroundColor: var(--bgColor); }
Now we can define our component (in pseudo-React style):
class MyComponent extends Component { private styles = css.activate(MyStyles); componentWillUnmount() { css.deactivate(this.styles); } render() { return <div className={this.styles.container.name}> This area will change colors depending on a selected theme. </div> } }
Note one important thing in the above code: our component is completely decoupled from the classes that implement actual themes. The only class our component needs to know about is the theme declaration class
ColorTheme. This opens a door to easily “externalize” creation of themes—they can be created by third-party vendors and delivered as regular JavaScript packages. As long as they derive from the
ColorTheme class, they can be activated and our component reflects their values.
Imagine creating a theme declaration class for, say, Material Design styles along with multiple theme classes that derive from this class. The only caveat is that since we are using an existing system, the actual names of the CSS properties cannot be generated by Mimcss—they must be the exact names that the Material Design system uses (e.g.
--mdc-theme--primary). Thankfully, for all named CSS entities, Mimcss provides a way to override its internal name generation mechanism and use an explicitly provided name. Here is how it can be done with Material Design CSS properties:
class MaterialDesignThemeBase extends css.ThemeDefinition { primary = this.$var( "color", undefined, "mdc-theme--primary") onPrimary = this.$var( "color", undefined, "mdc-theme--on-primary") // ... }
The third parameter in the
$var call is the name, which is given to the CSS custom property. The second parameter is set to
undefined meaning we aren’t providing any value for the property since this is a theme declaration, and not a concrete theme implementation.
The implementation classes do not need to worry about specifying the correct names because all name assignments are based on the theme declaration class:
class MyMaterialDesignTheme extends MaterialDesignThemeBase { primary = this.$var( "color", "lightslategray") onPrimary = this.$var( "color", "navy") // ... }
Multiple themes on one pageMultiple themes on one page
As mentioned earlier, only a single theme implementation can be active from among the themes derived from the same theme declaration class. The reason is that different theme implementations define different values for the CSS rules with the same names. Thus, if multiple theme implementations were allowed to be active at the same time, we would have multiple definitions of identically-named CSS rules. This is, of course, a recipe for disaster.
Normally, having a single theme active at a time is not a problem at all—it is likely what we want in most cases. Themes usually define the overall look and feel of the entire page and there is no need to have different page sections to use different themes. What if, however, we are in that rare situation where we do need to apply different themes to different parts of our page? For example, what if before a user chooses a light or dark theme, we want to allow them to compare the two modes side-by-side?
The solution is based on the fact that custom CSS properties can be redefined under CSS rules. Since theme definition classes usually contain a lot of custom CSS properties, Mimcss provides an easy way to use their values from different themes under different CSS rules.
Let’s consider an example where we need to display two elements using two different themes on the same page. The idea is to create a style definition class for our component so that we could write the following rendering code:
public render() { return <div> <div className={this.styles.top.name}> This should be black text on white background </div> <div className={this.styles.bottom.name}> This should be white text on black background </div> </div> }
We need to define the CSS
top and
bottom classes so that we redefine the custom properties under each of them taking values from different themes. We essentially want to have the following CSS:
.block { backgroundColor: var(--bgColor); color: var(--fgColor); } .block.top { --bgColor: while; --fgColor: black; } .block.bottom { --bgColor: black; --fgColor: white; }
We use the
block class for optimization purposes and to showcase how Mimcss handles inheriting CSS styles, but it is optional.
Here is how this is done in Mimcss:
class MyStyles extends css.StyleDefinition { theme = this.$use(ColorTheme) block = this.$class({ backgroundColor: this.theme.bgColor, color: this.theme.fgColor }) top = this.$class({ "++": this.block, "--": [LightTheme], }) bottom = this.$class({ "++": this.block, "--": [DarkTheme], }) }
Just as we did previously, we reference our
ColorTheme declaration class. Then we define a helper
block CSS class, which sets the foreground and background colors using the custom CSS properties from the theme. Then we define the
top and
bottom classes and use the
block class. Mimcss supports several methods of style inheritance; the
styles.top.name is
"top block" where we’re combining the two CSS classes (the actual names are randomly generated, so it would be something like
"n153 n459").
"++"property to indicate that they inherit from the
"++"property simply appends the name of the referenced class to our class name. That is, the value returned by the
Then we use the
"--"property to set values of the custom CSS variables. Mimcss supports several methods of redefining custom CSS properties in a ruleset; in our case, we just reference a corresponding theme definition class. This causes Mimcss to redefine all custom CSS properties found in the theme class with their corresponding values.
What do you think?What do you think?
Theming in Mimcss is intentionally based on style definition inheritance. We looked at exactly how this works, where we get the best of both theming worlds: the ability to use alternate stylesheets alongside the ability to swap out CSS property values using an object-oriented approach.
At runtime, Mimcss applies a theme without changing the HTML whatsoever. At build-time, Mimcss leverages the well-tried and easy-to-use class inheritance technique. Please check out the Mimcss documentation for a much deeper dive on the things we covered here. You can also visit the Mimcss Playground where you can explore a number of examples and easily try your own code.
And, of course, tell me what you think of this approach! This has been my go-to solution for theming and I’d like to continue making it stronger based on feedback from developers like yourself.
I’ve been using TypeScript with other CSS in JS libraries for a couple of years and the experience has been fantastic, especially when it comes to passing design tokens around.
A core concept of components is that CSS, HTML, and any JS that drives display logic all represent the same concern. Arbitrarily styling a component tree from the top is a violation of of those concerns. A component might change and accidently invalidate its relationship to the top level theme. (This implicit relationship is what makes the cascade so brittle.)
One of the challenges with scoped components is responsive design, e.g. changing the order of named grid areas based on screen width, because parent elements can’t reach into their children and make changes via class selectors, and child components have no concept of their placement on the page. (You can cheat by providing hooks from the child component, but that can easily be abused. Shadow DOM largely prevents this by design.) Custom properties mitigate this issue by narrowing the surface area of a style change to a single property value instead of an entire class. Changing a custom property’s value via a media query will cascade down through your components, which are hopefully using the custom property as intended.
The problem with custom properties is that they’re just as brittle as the rest of the cascade: they can be overwritten with arbitrary values, are subject to naming collisions, and there’s no way to validate them before they’re used. There’s also a performance penalty because resolving custom properties has to be done at runtime. If fonts and colors don’t have to change at runtime (just refresh the page when the user changes themes), there’s no reason to bind them to custom properties.
If you’re using a CSS in JS solution, your JS can track variables much more explicitly and validate them before they’re used by the component. For example, you can make sure a color variable both exists and matches an expected value before the component actually uses it. With SASS, you can do all of this at compilation, so there’s no performance penalty at runtime, and just render the component with a different CSS class assignment.
Still, having statically typed CSS is amazing. I can’t overstate how easy it is to refactor and how fluid writing CSS is when your IDE auto-completes things like which breakpoints you can use. | https://css-tricks.com/defining-and-applying-ui-themes-using-the-mimcss-css-in-js-library/ | CC-MAIN-2022-21 | refinedweb | 4,246 | 54.52 |
Abstract
Development notes

AMG: Below this point I will jot down errata and other things that will contribute to the next iteration of the Brush project. Please bear with my rambling, for I wish to retain not only my conclusions but also the illogic and missteps needed to reach them.
Summary of possible changes

AMG: Since this page is long, I'll go ahead and list here very briefly all the substantial changes to the paper I'm considering. I'm not committed to anything yet, though.
- Allow $ in math expressions to be omitted for variables starting with non-ASCII characters
- Fix the precedence of & and | relative to && and ||
- int() behaves like Tcl entier()
- real() behaves like Tcl double()
- Support for complex numbers and vector math
- / is real division, // is integer division, regardless of operand type
- Though / won't force result to be real if it's a whole number and both arguments are integers
- % works for real numbers as well as integers
- [incr] works for reals
- {x:} is the empty list immediately after index x (was before in the paper)
- {:x} is the empty list immediately before index x ({:x} was not allowed in the paper)
- New [loop] and [collect] commands take the place of [for], [foreach], [while], and [dict for].
- Change the string representation of references so that late-bound indexes say $... instead of &...@ or any such complicated thing
- Have $&...^ notation instead of &...^ for late-bound indexes
- &...^ is incorrect because it is (eventually) taking a value, hence should start with $
- $...^ is incorrect because it treats the variable as containing the reference, rather than making a reference to the variable
- $&...^ is ugly, but it's in fact shorthand for...
- $[: &...]^ which is even uglier and unreasonable to type
- Create a [lambda] command which compiles its lambda argument and binds its local variables to the locals of its caller
- Drop the superfluous $ from reference composition and allow ^ in more places
- Turn garbage collection inside-out
- Rename the create subcommands to new, e.g. [list new]
- Change [set] to handle sets (was lots) rather than assign variables (probably won't actually do this)
- Cache the hash values in the Tcl_Obj rather than the hash table (probably won't actually do this)
- Get constant-time dict element removal via an improved, lazy algorithm
- Require parentheses around expressions containing spaces used in list indexes when more than one index is given
- Allow end prefixes in math expressions, but retain end in the string representation of the result
- Index arguments aren't arbitrary expressions, instead revert to pre-TIP 176 [1] Tcl behavior
- Add - for don't-care assignments in extended [set]
- No, call it / because Reasons.
- Offer customizable behavior for multi-variable [set] being given too few values
- Actually this becomes a powerful, general-purpose data structure unpacking mini-language which also can be used with the [loop] and [collect] commands
- Let [proc]'s parameter list syntax accept a very similar notation
- Tweak [proc] parameter list syntax to make special characters be a separate list element
- Be strict about metacharacters in contexts where Tcl currently treats them as literal
- Be lenient about whitespace between backslash and newline (maybe not)
- The [take] command, perhaps
- Fix creative writing problem with namespace variables
- Typographic fixes
Math improvements
Automatic $ in math expressions

AMG: The restrictions on page 10 can be loosened up a bit. Right now the dollar sign on variables in math expressions can be skipped if the name follows C identifier rules (alphanumerics and underscores only, doesn't start with a number), doesn't use indexing or dereferencing, is named literally, and isn't named "eq", "ne", "in", or "ni" (or "end"... need to add that to the list).

The "alphanumerics and underscores" part can be relaxed to any characters other than significant metacharacters and operators. So if your variable is called مینڈک, that would work too. (Assuming Brush supports right-to-left script, that is.)
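For instance, under the relaxed rule (hypothetical Brush syntax, since none of this is implemented):

    set (&count &مینڈک) (3 4)
    : $(count + مینڈک)    # 7; neither name needs a dollar sign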
Fixing operator precedence

AMG: There's no good reason to keep the precedence of & and | near && and ||. Put the bitwise operators somewhere after comparison. This was just for C compatibility, which is not required. Tcl compatibility would be nice, but compatibility is an anti-requirement of Brush.
Generalization to real numbers

AMG: I'm mystified about why certain things in Tcl are limited to integers until you switch to a different, more complicated syntax. Brush ought to clean this up.
int() and double()

AMG: I don't like the name double() because it's tied too closely to the implementation. Instead I prefer real(), which describes what's happening at a higher level and doesn't require the programmer to know C to understand the name. Seriously, double() sounds like it should be returning double (two times) its argument!

As for int(), I don't like how Tcl masks to the machine word size. The non-masking Tcl function is entier(), which is a bizarre name. So Brush will adopt entier()'s behavior but name it int(). Want to mask to the machine word size? Use & to do it.

Hmm, renaming double() to real() puts me in mind of complex numbers. I hadn't considered that before. I don't see why complex number notation couldn't be added, along with imag() and conj() and other such operations. Maybe vector math would be nice too.
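For comparison, today's Tcl behavior, which the proposal would repartition: int() acting like entier(), with word-size masking left to an explicit & (real Tcl, on a 64-bit 8.6 build):

    % expr {entier(1e20)}
    100000000000000000000
    % expr {int(1e20)}
    7766279631452241920
    % expr {entier(1e20) & 0xFFFFFFFFFFFFFFFF}
    7766279631452241920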
Integer and real division

AMG: My main reason for needing double() (or real(), as it may now be known) is to force real division even when the numbers may appear to be integers. I think a better solution is to make integer and real division distinct operators, as Python has done. I'm already breaking compatibility with the Tcl [expr] notation in several ways, why not one more? So let's have / be real division and // be integer division.

Real division isn't quite the same thing as forcing either argument to be a real. The conversion is only done when both arguments are integers but the numerator is not a multiple of the denominator. And yes, this gotcha just bit me for the millionth time. That's why I want this change.
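Under the proposed semantics (hypothetical Brush):

    : $(7 / 2)     # 3.5; real division, no coercion tricks needed
    : $(7 // 2)    # 3; integer division
    : $(8 / 2)     # 4; both integers and evenly divisible, so the result stays an integer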
Real modulus

AMG: In the same vein, % should work the same as fmod() when at least one argument is not an integer.
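So (hypothetical Brush) % would subsume what Tcl spells fmod():

    : $(7.5 % 2)    # 1.5, same result as Tcl's [expr {fmod(7.5, 2)}]
    : $(7 % 2)      # 1, all-integer arguments keep the integer behavior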
[incr] and reals

AMG: [incr] should be defined to support real numbers as well as integers.
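That is, something like this should just work (hypothetical Brush; [incr] takes a reference):

    set &x 1.0
    incr &x 0.25    # x is now 1.25
    incr &x -0.25   # back to 1.0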
Empty list range notation
First revision

AMG: On page 29, I never was happy with the fact that [set &x{end+1:} z] is required to append to the list in x. (By the way, that should be (z), not z.) I'd much rather be able to just say "end".

But what I have is necessary for consistency, because the stride denotes the range from just before the first element of the stride (end, let's say) up to just after the second element of the stride (omitted in this case, indicating an empty stride). That empty stride (consisting only of the space before the final element) is then replaced with a list consisting of z.

Here's the superior solution. Allow the first stride index to be omitted, keeping the second. Since one of the two indexes is omitted, the stride is empty, so that's one important property maintained. Since the second index is specified, the stride denotes the space after the indexed element, so that's the other property I want.

How does it look? [set &x{:end} (z)]. I'll take it!

Also, I think I'll change the example to append two elements, not just one. That shows the utility of this feature, since appending a single element is already accomplished by [set &x{end+1} a], as shown in a previous example on the same page.
Second revision

AMG: When I first wrote the Brush paper, I envisioned only leaving out the second index to signify an empty list range, with the second value defaulting to -1. Thus, &a{5:} is a reference to the empty list range immediately before the element with index 5 in variable a.

Later I became annoyed at having to write &a{end+1:}, which refers to the empty list range immediately before the element after the last. Too convoluted. I wanted to be able to instead refer to the empty list range immediately after the last element. This is of course the same list range, but explained more easily. So I came up with &a{:end}.

Now that I look at it, I think it would make more sense to reverse the behavior of omitting indexes so that the colon serves as a visual mnemonic as to whether the referenced empty list range is before or after the stated index. This looks much better to me. Here's a simple example:
    set &a (c e g)          # c e g
    set &a{0:} (d)          # put d after c
    set &a{:0} (a b)        # put a b before c
    set &a{:end} (f)        # put f before g
    set &a{end:} (h i)      # put h i after g
    : $a                    # a b c d e f g h i

AMG: When using this new empty list notation, it's no longer the case that the list begins immediately before or ends immediately after the first and second indexes, respectively. It doesn't matter how the omitted index is defined. So instead describe empty list notation as a special case: {x:} is the empty list immediately after index x, and {:x} is the empty list immediately before index x. It's that simple.
Rethinking loops

AMG: I like [foreach] a lot, whereas [for] bothers me as being unintuitive and nothing more than a copy of C. While thinking this over last night, I took inspiration from [lcomp] and Ada and came up with a way to aggregate all looping into a general form: the [loop] command.

Here's a table summarizing and comparing the possible syntax. You may be familiar with some of these forms from the [lcomp] page. Combining everything like this is a bit more than simple sugar because it allows the various forms to be combined in new ways, including those not demonstrated in the table.

The do "keyword" in the following is optional when it is the second-to-last argument. I include it to be explicit, but it can be left out the same way then and else are optional in Tcl's [if] command.

The over and unpackover forms are very special. Instead of putting each list value in the variable, they store references to the list values. This allows in-place modification of the list without having to keep track of indexes or generate a new list. I really look forward to this feature.

Now take this a step further. I also envision a [collect] command that takes all the above forms plus one or more initial arguments (before the [loop]-style arguments) which are expressions to be evaluated after each iteration of the loop (before the step clause, if there is one). The results of these evaluations are collected into a list which is returned as the result. For [collect], it'll often make sense to leave off the do clause entirely, though it may occasionally be useful to set variables used in the result expressions.

Also I should mention that Brush expressions allow the leading dollar sign of variable names to be omitted in simple cases.

For example, here are all the Python PEP 202 [2] and PEP 274 [3] examples written in Brush:
    set &nums (1 2 3 4)
    set &fruit (Apples Peaches Pears Bananas)
    proc &range (count) {collect i init {set &i -1} count $count {incr &i}}
    proc &zip (l1 l2) {collect {(i1, i2)} for &i1 in $l1 and &i2 in $l2}
    collect i for &i in [range 10]
    collect i for &i in [range 20] if {i % 2 == 0}
    collect {(i, f)} for &i in $nums for &f in $fruit
    collect {(i, f)} for &i in $nums for &f in $fruit if {[string index $f 0] eq "P"}
    collect {(i, f)} for &i in $nums for &f in $fruit if {[string index $f 0] eq "P"} if {i % 2 == 0}
    collect i for &i in [zip $nums $fruit] if {$i{0} % 2 == 0}
    collect i {[format %c $i]} for &i in [range 4]
    collect k v for (&k &v) in $dict
    collect {[string tolower $x]} 1 for &x in $list_of_email_addrs
    proc &invert (d) {collect v k for (&k &v) in $d}
    collect {(k, v)} {k + v} for &k in [range 4] for &v in [range 4]

I intend to allow nesting of C-style for loops within a single invocation of [loop] or [collect]:
    collect ($i $j) init {set &i 0} while {i < 3} step {incr &i}\
            init {set &j 0} while {j < 3} step {incr &j}

will return {0 0} {0 1} {0 2} {1 0} {1 1} {1 2} {2 0} {2 1} {2 2}.

If the while or until clause is left out of a C-style for loop, it will loop forever or until interrupted. However, if it is iterating in parallel with a Tcl-style foreach loop via the and clause, it will stop when its parallel loop does.
    collect ($i $j) init {set &i 0} step {incr &i} and &j in (0 1 2)

will return {0 0} {1 1} {2 2}.

Another possible feature is the else clause, which supplies code to execute if [break] is never called. This is useful for searches to implement the failing case, thereby avoiding the need for a success/failure flag or double-checking the iteration counter against the limit. But perhaps else isn't the best name for it (that's what Python uses) since that name would easily confuse its interaction with the if clause, whose purpose is merely to conditionally skip nested loops. last might be better because it's only executed after the last iteration, upon failure of the while or until conditional or running out of items in the in or over or unpack or unpackover lists. I'd prefer to avoid finally because, in the [try] command, the finally clause is executed no matter what, even in the face of break or analogous.

Here's an example that prints "match" if two lists have at least one corresponding element that's numerically equal:
    loop for &x in $list1 and &y in $list2 do {
        if {$x == $y} {
            puts match
            break
        }
    } last {
        puts mismatch
    }

Not recommended, but this code could instead be written:
    loop for &x in $list1 and &y in $list2 if {$x == $y} do {
        puts match
        break
    } last {
        puts mismatch
    }

If I were to change last back to else, you might expect it to print "mismatch" repeatedly rather than only at the end of the loop. That's why I want to avoid the term else.

I could further reduce the opportunity for confusion by forbidding if immediately before do, so that it is only legal for conditionally skipping nested loops. Likewise I could forbid it as the first clause. Thus it would only be valid when sandwiched between loops. For example, the above table would no longer show "Conditional iteration" except on the "Conditional combinatorial" line.

On second thought, in the case of [collect], if can indeed be worthwhile after the last loop, before the optional do. If the if fails, no result entry is generated, which is useful for filtering. For example, this would return a list of all even numbers returned by the [numbers] command:
    collect x for &x in [numbers] if {!(x & 1)}

Does last make any sense for [collect]? I don't think so. It's not going to contribute to the result. But perhaps allow it anyway in case someone needs to put in side effects that only happen when [break] isn't used. I doubt anyone ever will want this.
Example

AMG: Here's Tcl code that sets all real number elements in a table, which is a list of lists of lists of cell values, to have a certain [format]-style precision.
    for {set i 0} {$i < [llength $table]} {incr i} {
        for {set j 0} {$j < [llength [lindex $table $i]]} {incr j} {
            for {set k 0} {$k < [llength [lindex $table $i $j]]} {incr k} {
                if {[string is double -strict [lindex $table $i $j $k]]} {
                    lset table $i $j $k [format $precision\
                        [lindex $table $i $j $k]]
                }
            }
        }
    }

Here's the equivalent Brush code:
    loop for &section over &table for &row over &section^ for &cell over &row^ {
        if {[string is real $cell@]} {
            set $cell [string format $precision $cell@]
        }
    }

Notes:
- Use of over avoids all use of [lindex], [lset], [llength], [incr], and loop counter variables.
- This is my major motivation.
- Only one invocation of [loop] is needed since it permits nesting by repeated for arguments.
- This cuts down on indentation.
- Nested invocation is permitted if style or other considerations demand it, but it's not required.
- &section^ and &row^ use late-bound reference composition.
- Saying $section and $row would fail because these variables don't have values before [loop] executes.
- Saying &section and &row would fail because the iteration is over the referenced lists, not the references themselves.
- &section^ composes a reference to whatever $section refers to at the moment that &section^ is dereferenced. Likewise &row^.
- $cell doesn't contain the cell value but rather a reference to the element.
- [string is real] instead of [string is double -strict].
- I think "real" is a better term than "double" for checking for real numbers.
- I want -strict to be the default mode of operation because empty string is not a number even though appending characters could make it into one.
- Heck, [string is double 1.2e] returns 0 even though appending a digit would make it valid, so non-strict doesn't even satisfy the use case of partial input validation.
- Get rid of it.
- [string format] instead of [format].
- Might not do this, but I think that would be the correct ensemble for [format].
- We have [binary format] and [binary scan], why not [string format] and [string scan]?
    loop init {set &i 0} while {i < [list length $table]} step {incr &i} {
        loop init {set &j 0} while {j < [list length $table{i}]} step {incr &j} {
            loop init {set &k 0} while {k < [list length $table{i j}]} step {incr &k} {
                if {[string is real $table{i j k}]} {
                    set &table{i j k} [string format $precision $table{i j k}]
                }
            }
        }
    }

This version, while verbose and not preferred, demonstrates the fact that inside expressions and list indexes, many variable substitutions don't require dollar signs. Context makes it clear that a variable's value is being taken. This version also shows off that [lindex] and [lset] are not needed for reading and writing list elements.
Alternative to unpack

AMG: Instead of the unpack and unpackover keywords, I wish to expand the syntax of the variable reference list. Let the variable reference argument (the one preceding in or over) be a single variable reference or a list, like before. But change what each element of the list can be. Let each element either be a reference (as in the existing proposal) or be any of the other forms allowed by the [=] command (except pass, or whatever I end up calling it). This will allow for unpacking more complex structures.

First, compare the existing examples:

Now consider new possibilities:
    = (&w &h) 2 4
    = &rects{end+1} ((20 30) red -outline orange)
    = &rects{end+1} ((25 12) green -stipple gray12)
    loop for (((&x &y) &color (* &options))) in $rects {
        .canvas create rectangle $(x-w/2) $(y-h/2) $(x+w/2) $(y+h/2)\
            -fill $color {*}$options
    }

Here's how it would be done in Brush without this change:
    # ...
    loop for &elem in $rects {
        = ((&x &y) &color (* &options)) $elem
        # ...
    }

Literal Tcl 8.5+ transcription:
    lassign {2 4} w h
    lappend rects {{20 30} red -outline orange}
    lappend rects {{25 12} green -stipple gray12}
    foreach rect $rects {
        lassign [lindex $rect 0] x y
        set options [lassign [lrange $rect 1 end] color]
        .canvas create rectangle [expr {$x-$w/2}] [expr {$y-$h/2}]\
            [expr {$x+$w/2}] [expr {$y+$h/2}] -fill $color {*}$options
    }

This feature gets even fancier in combination with the other assignment modes I'm adding to the [=] command. Basically, the argument following for will accept the same syntax as the first argument to [=], so complex data structure unpacking operations become possible far beyond what the now-rejected unpack keyword would have accomplished.
Single-command nested loops

AMG: There's a complication when nesting loops within a single command, e.g. collect {(a, b)} for &a in (1 2) for &b in (3 4). This case is fine, but what if the inner loop operates on the variable set by the outer loop? It can't be done directly because the substitution is performed before the command is executed, not upon each iteration of the outer loop.

One solution is to use references so that the dereferencing is deferred:
    % collect {(a, $b@)} for &a in ((1 2) (3 4)) for &b over &a
    {{1 2} 1} {{1 2} 2} {{3 4} 3} {{3 4} 4}

Some cases can be written in terms of the expanded assignment capabilities:
    % collect {$b@} for &a in ((1 2) (3 4)) for &b over &a
    1 2 3 4
    % collect b for ((* &b)) in ((1 2) (3 4))
    1 2 3 4

Of course, traditional nesting is still available. The syntax is very clumsy for [collect], though. [lmap] does it better.
    % collect {[collect {(a, b)} for &b in $a]} for &a in ((1 2) (3 4))
    {{{1 2} 1} {{1 2} 2}} {{{3 4} 3} {{3 4} 4}}

Compare with Tcl:
    % lmap a {{1 2} {3 4}} {lmap b $a {list $a $b}}
    {{{1 2} 1} {{1 2} 2}} {{{3 4} 3} {{3 4} 4}}

For [loop] there's no issue:
    % loop for &a in ((1 2) (3 4)) {loop for &b in $a {= &result{end+1} ($a $b)}}
    % : $result
    {{1 2} 1} {{1 2} 2} {{3 4} 3} {{3 4} 4}

Though do be aware these examples give subtly different results depending on whether the output of each inner loop is collected into a single list element or if it's all flattened into a single list.
Numeric ranges

AMG: The major use case driving C-style loops is iterating over numeric ranges. I think this should be made more straightforward. Previously I was thinking about a range command that can be used to generate the list being iterated over, but making the list would be a waste of time and memory. It makes more sense to incorporate the iteration parameters directly into the loop/collect command.
    % collect x for &x from 0 to 10
    0 1 2 3 4 5 6 7 8 9 10
    % collect x for &x from 0 to 10 step 2
    0 2 4 6 8 10
    % collect x for &x from 0 to 9 step 2
    0 2 4 6 8
    % collect x for &x from 0 until 10
    0 1 2 3 4 5 6 7 8 9
    % collect x for &x from 10 to 0
    % collect x for &x from 10 to 0 step -1
    10 9 8 7 6 5 4 3 2 1 0
    % collect x for &x from 10 until 0 step -1
    10 9 8 7 6 5 4 3 2 1
    % collect x y for &x from 0 to 2 for &y from 6 to 8
    0 6 0 7 0 8 1 6 1 7 1 8 2 6 2 7 2 8
    % collect x y for &x from 0 to 2 and &y from 6 to 8
    0 6 1 7 2 8

Of course I intend the loop forms to work too, but collect makes for easier examples.

Real numbers present a challenge. Stepping by a number whose denominator is not a power of two likely won't land exactly on the final to value. However, this is far from a new problem.

As discussed above, it's tricky to nest loops in a single command if the inner iteration parameters depend on the outer iteration variables. Late-bound references won't help because the arguments to from, to, until, and step are expected to be numbers. The fix is to make them not be numbers but rather expressions which are evaluated as late as possible. This change makes all the above examples continue to work, plus makes this work:
    % collect x y for &x from 0 to 2 for &y from 0 to x
    0 0 1 0 1 1 2 0 2 1 2 2

But on the downside, this becomes dangerous due to double substitution:
    % collect x for &x from $start to $end

The solution is to ensure all substitution is in the hands of the expression engine. Don't let the interpreter do any substitution of its own. Either protect $ with braces (as with Tcl [expr]) or avoid it entirely (using Brush expression syntax's implied variable substitution):
    % collect x for &x from {$start} to {$end}
    % collect x for &x from start to end

I wonder if the to/until and step expressions should be evaluated in each iteration of the loop (like they would be in the C-like loops) or if they should be evaluated once at the beginning of each loop, at the same time as the from expression.
Concerns about references

AMG: After thinking about it, these two concerns are not new problems. They exist in Tcl already, but in different forms. They're fundamental, and we're already happily ignoring them or successfully working around them.
Security

AMG: References (as defined by Brush), just like C pointers, break the security of the language, allowing unrestricted access to any variable from anywhere else in that same interpreter. You can't obtain such a raw reference by writing &123, where 123 is the numeric ID of the variable, since that would be interpreted as legally making a reference to a (presumably new) variable that's actually named 123. But you can use quoting: \&123. The resultant string won't initially be seen as a reference, but it can easily shimmer to reference: [set \&123 value].

Numeric IDs could be randomized to make this kind of thing far harder to pull off in a useful sort of way, but that's expensive and not guaranteed to work. This would be like having C map variables far apart from each other and in random order, different every time a program is run, with no predictable layouts like the stack. C would have to keep array and struct elements adjacent, similar to how Brush would (of course) keep list and dict elements together in a single variable.

While initially designing Brush, I had the thought that confining access to be within the same interpreter was enough security. After all, Tcl is actually the same way: at every point in the program, you have unrestricted read/write access (modulo traces) to all variables in your local stack frame, in all your callers' frames, and all global and namespace variables within the interpreter. Just use [uplevel] or [global] or [variable] or :: notation.

So I guess this isn't a new problem after all, and I shouldn't feel too badly that my alternative for [uplevel]/[global]/[variable]/:: has one of the same problems they do. I should probably point this out in the paper.

But let's say I wanted to fix this anyway. I could do so by forbidding shimmering to reference. This is an apparent violation of EIAS, but it is theoretically possible. Having this restriction would greatly simplify reference counting of variables, so there is some temptation to do this.
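To make the Tcl status quo concrete, here's plain Tcl (runs today) where a proc reaches out and clobbers a global it was never given:

    proc poke {} {
        upvar #0 secret s    ;# grab any global by name, no permission needed
        set s overwritten
    }
    set secret original
    poke
    puts $secret             ;# prints "overwritten"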
Fragility

AMG: Even though Brush references have string representations in EIAS fashion, those string representations don't have meaning outside the interpreter in which they were created. They can't be written to disk then loaded later to pick up where the interpreter left off. Tcl's analog of Brush references (names of variables) has the same limitation (only make sense inside the interpreter) but to a lesser extent (that interpreter does have the opportunity to resume execution using data it read from disk).

Not sure how big an issue this is, or if it's an issue at all. I mean, Brush doesn't stop you from putting variable names, as opposed to references, in data structures. However, those names won't have the effect of keeping the variables from being destroyed by the garbage collector, but that is true of Tcl as well. And it's not worth complaining about an overactive gc when we're already talking about terminating and restarting the program, thereby invoking the second biggest garbage collector of them all (process termination... the first biggest being a system reboot).

Compare with C. Brush references are analogous to C pointers. C pointers are never suitable for data interchange formats. The same goes for Brush references. Take the case of shared memory segments. In C, it's preferable to put array indices in shared memory rather than pointers. This is like putting variable names rather than references into Brush data structures which are shared with the outside world.

So I guess it's not that big a deal, or at least it's not a new problem.
Late-bound indexes
Round one

AMG: Page 27 appears to have an error.
    set (&x &i) ((a b c) 1)
    set &j &i            # &124
    set &r1 &x{&i^}      # &123(&124@)
    set &r2 &x{$j^}      # &123(&124@)
    : ($r1@ $r2@)        # b b
    set &i 2
    : ($r1@ $r2@)        # c c

The string representations show dict indexes (parentheses), but list indexes (braces) were used to construct.

Also, more explanation is needed here that in the string representation of a reference, list and dict indexes can contain references involving @ and other indexing, and they will automatically be dereferenced.

Is that really the notation I want to use? Yes, because all references start with &, and the references in the example are to two variables, not just one. No, because it's not merely a reference but also an instruction to perform substitution. How confusing! But since this is the generated string representation rather than what the programmer types, I can bend rules if I must. However, I would prefer as close a parallel between the string representation and the source script as possible.

So look at set &r1 &x{&i^}. The programmer types this directly, yet it uses a reference as an index. Guess that establishes precedent, but I really ought to have explained it. The idea was to make it look at the value of i at time of dereference, not immediately, without having to go through the trouble of putting the reference into another variable, like in the set &r2 &x{$j^} line.

I'm tempted to write $&i^ instead, but this has the problem of not making it clear which indexing and dereferencing operators are bound to the construction of the inline reference &i and which are bound to the substitution $&i. Does it matter? I don't believe it does, since either way every layer of indexing that gets built up when constructing the reference has to be traversed again when dereferencing it. Rewriting the example:
  set (&x &i) ((a b c) 1)
  set &j &i         # &124
  set &r1 &x{$&i^}  # &123{$&124@}
  set &r2 &x{$j^}   # &123{$&124@}
  : ($r1@ $r2@)     # b b
  set &i 2
  : ($r1@ $r2@)     # c c

This looks less like a typo, since it's clear references aren't being used directly as list indexes. That's bad enough because references aren't legal integers and therefore shouldn't be legal indexes. But it's worse when using dict indexes because references are valid strings, therefore valid dict indexes, though I guess backslashes would help.

Now we have a new $&name notation. What does it mean? It's the same as &name but, when used in a reference index and in combination with ^ (both of which should be mandatory), is clearly taking a value, not just making a reference.

It's a sort of shorthand. This:
  set &r1 &x{$&i^}  # &123{$&124@}

is functionally equivalent to this:
  set &ir &i        # &124
  set &r1 &x{$ir^}  # &123{$&124@}

without the need for an intermediate variable. Or write:
  set &r1 &x{$[: &i]^}

which I guess is the most straightforward way to demonstrate that it is shorthand.

The question I brought up earlier can now be posed as: of the indexing between the variable name and the first ^, does it matter which goes immediately before the ] and which goes immediately after? I still don't think it does.

Here's a crazy example designed to tease out whether or not there's a difference:
  set &x ((a b c) (d e f))
  set &y ((A B C) (D E F))
  set &iv (1 0)
  set &jv (2 1 0)
  set &xy (lower &x upper &y)
  set &i 1
  set &j 2
  set &k lower
  set &r &xy($&k^){$&iv{$&i^}^ $&jv{$&j^}^}
  : $r@   # b... wait, no: a
  set (&i &k) (0 upper)
  : $r@   # D

Are all three of the following lines equivalent?
  set &r &xy($&k^){$&iv{$&i^}^ $&jv{$&j^}^}
  set &r &xy($[: &k]^){$[: &iv{$[: &i]^}]^ $[: &jv{$[: &j]^}]^}
  set &r &xy($[: &k]^){$[: &iv]{$[: &i]^}^ $[: &jv]{$[: &j]^}^}

Yeah, I'd say so. So long as sequencing is maintained, it shouldn't matter whether a particular index is baked into a reference or applied upon dereference.

Something else came to mind. What does this mean?
  set &r &xy($&k^){$iv{$&i^} $jv{$&j^}}

In the construction of the reference, it's trying to index into the current value of iv and jv, but the index is late-bound. In this case I think "late" means "now", since late binding means it's deferred until the index is actually needed, which is right away. In a compiled language, this would be a good thing to warn about. But Tcl and Brush have no facility for warnings. I don't see how it could be an error, so save it for Nagelfar.

This line of thought needs more time to simmer.

What's the good of late-bound indexes, anyway? Well, even though the Brush paper says otherwise, late binding is needed elsewhere in the language, namely for reference composition, so we have it. But I originally specified it only for indexes in references so that you could make a reference which automatically looks at an iterator to find the value being considered for the current iteration of the loop.
Round two

AMG: I'm still on the fence about whether or not I even want this feature. The motivating use case was being able to modify a list while iterating over it without having to reference the list by name all the time:
  set &alphabet [list split abcdefghijklmnopqrstuvwxyz {}]
  set &ref &alphabet{$&i^}
  for {set &i 0} {&i < [list size $alphabet]} {incr &i} {
      set $ref [string toupper $ref@]
  }

But maybe there's a more direct way: a [foreach] whose iterator variables reference, instead of copy, the list elements. That would do the job. Dereference the iterators to get their values, or pass them directly to [set] (or [unset]) to modify the list.

There's a design problem! [foreach] takes one or more list values as arguments, so there's no requirement for the lists to be in variables. It's impossible to make references to a value, only to a variable. Though consider: what is the sense of modifying a value that's not in a variable? In such a case, this [foreach] variant would not be used, and the values would be placed directly in the iterator variables, no references involved.

It does, however, make sense to mix and match. You might want to iterate by reference over one list variable in parallel with iterating by value over another list value, maybe for the purpose of incrementing the elements of the first by the elements of the second.

How would this be specified? I wish I could just say that passing a reference to [foreach] where a list value is expected causes it to iterate by reference, or just pass a list value directly to iterate by value. However, this doesn't work because every reference can be interpreted as a single-element list. The Brush [set] command relies on the fact that lists with non-unit length are distinct from all possible references. This trick doesn't help here, though, since it's certainly valid to pass [foreach] a list with length one. How unfortunate. Consequently, an option is needed to explicitly say that a particular list is being iterated by reference. I don't want to complicate the common, iterate-by-value case though.

How about iterating over a subset of a list? I think reference composition can be used for this purpose. Rework the above example to only capitalize the first five letters:
  set &alphabet [list split abcdefghijklmnopqrstuvwxyz {}]
  foreach -reference &i &alphabet@{0:4} {
      set $i [string toupper $i@]
  }

This actually looks a lot clearer than the late-bound index method. So maybe I keep this and drop the other, thereby eliminating the need for $&...^ notation as well as reference nesting inside of the string representations. That simplifies everything quite a lot.

But I still have late-bound dereferencing. Funny though, it's now used for a completely different purpose than was envisioned in the Brush paper.

AMG, later: These dilemmas are resolved by the introduction of the [loop] and [collect] commands.
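For comparison, the same first-five-letters rewrite in present-day Tcl, a sketch using [lindex]/[lset] rather than references:

  set alphabet [split abcdefghijklmnopqrstuvwxyz {}]
  for {set i 0} {$i < 5} {incr i} {
      lset alphabet $i [string toupper [lindex $alphabet $i]]
  }
  puts [join $alphabet {}]   ;# ABCDEfghijklmnopqrstuvwxyz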
Round three

AMG: I keep going back and forth on this. Forbidding late-bound dereferencing in indexes isn't as simple as disallowing this $&...^ syntax I mentioned, since the same effect can certainly be had by other means. Like I said, it's just shorthand for $[: ...]^, which is legal. I would have to check for and restrict any use of late-bound dereferencing other than reference composition. But that seems artificial to me, and I am shooting for generality. Maybe keep it after all, even though it's complicated and rarely needed.

I also have the option of changing the string representation further. Don't say $&...@ but rather simply $... (leave out the & and the @), where ... is the numeric ID corresponding to the variable being referenced. It's fine to make this change because it'll still be recognized as a reference on account of living inside a reference value (a value starting with &... and otherwise conforming to this specification). That sounds good to me.
  set &r1 &x{$&i^}  # &123{$124}

In addition, I have half a mind to not support writing $&i^ to get this behavior but rather forcing the programmer to write it out long: $[: &i]^.
Automatic closures

AMG: Brush, as specified, implements closures by manually bringing the captured variables into scope as references. Fully automatic capture is impossible, but I'll describe it anyway. First, the manual version:
  proc &accum_gen ((val? 0)) {
      : (lambda ((valref= &val) (inc? 0)) {
          set $valref $($valref@ + inc)
      })
  }

Here, the returned lambda captures the caller's &val as a hidden argument. That's all fine and dandy, but what about:
  proc &accum_gen_automatic ((val? 0)) {
      : (lambda ((inc? 0)) {
          incr &val $inc
      })
  }

This maps very closely onto Paul Graham's Common Lisp version:
  (defun foo (n) (lambda (i) (incf n i)))

Using his argument names and ditching the defaulted values:
  proc &foo (n) {: (lambda (i) {incr &n $i})}

Very cool! But why is it impossible? It's because by the time Brush discovers that the return value of [foo] is in fact a lambda, the enclosing scope is long gone. All it'll do is return $i with no memory that there once was a variable called n.

How to fix? Eagerly compile the lambda, at least sometime before [foo] returns. But this is contrary to the design of Tcl, therefore Brush, since everything is a string and the types of values are manifest in their use, not their declaration. And use happens too late.

How to fix anyway? The reference to n absolutely must be created while [foo] is still running, which is why [accum_gen] is written the way it is. And there you have it.

Damn it, you say, let's have automatic closure anyway! There could be a [lambda] command that eagerly compiles its argument as a script body, capturing its calling environment immediately, then returning the equivalent lambda value that contains references to variables in the enclosing scope along with some magic to bind them to local names. And should [proc] also be given this property? It would virtually eliminate the need for [variable] and [global]. I'm not sure. I actually kind of like how Tcl forces you to explicitly bring in globals.

How would that look? Let's put it side-by-side with Paul Graham's version:
  (defun foo (n) (lambda (i) (incf n i)))
  proc &foo (n) {lambda (i) {incr &n $i}}

Holy cow, that's a close resemblance! Plus, the parentheses around n and i can be omitted because their listhood is dictated by context.

More thought needed. I must admit that I am sorely tempted.

One thing I can do is add a & notation to the parameter list specification. The currently allowed forms are required (name), optional (name?), defaulted ((name? default)), bound ((name= value)), and catchall (name*). But I can add & to required, defaulted, and bound arguments to create reference arguments. (It doesn't make sense for optional and catchall arguments.) For reference arguments, the local variable will be linked to the variable referenced by the value, with no need to explicitly dereference using @.
  proc &foo (n) {: (lambda ((n&= &n) i) {incr &n $i})}

That's a step in the right direction. All that's needed is a [lambda] command that compiles the script argument, identifies the non-local variable names present in both the script and [lambda]'s calling environment, and augments the formal parameter list with "(n&= &n)"-like constructs for each.
  proc &foo (n) {lambda (i) {incr &n $i}}

[lambda] returns this three-element list:

  lambda {{n&= &123} i} {incr &n $i}

where &123 is the reference that would be obtained by typing &n.
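For contrast, the closest present-day Tcl gets without references is capture by value via [apply]; a sketch showing why value capture isn't enough for the accumulator:

  proc accum_gen {{val 0}} {
      # capture by value: the current val is burned into the command prefix
      list apply {{val inc} {expr {$val + $inc}}} $val
  }
  set acc [accum_gen 10]
  puts [{*}$acc 5]   ;# 15
  puts [{*}$acc 7]   ;# 17, not 22: there is no shared mutable n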
Restrictions

AMG: However, this magical [lambda] command won't know to bind variables whose names are computed, either by $"..."/&"..." notation or by generated code. This also means extension commands taking script arguments or otherwise looking up variables will need to advertise to the bytecode compilation engine what variables they reasonably expect to use when executed. I hope this limitation isn't too onerous.
Namespaces

AMG: It's been tacitly understood (by me, anyway) that [lambda] and the string/list representation thereof would have an optional third argument containing the namespace name. I should make this explicit. The reason I've avoided this so far is so I could avoid thinking about whether I really want to keep Tcl namespace syntax, which some have complained about.
Reference composition

AMG: In the paper I call it additive references, but I don't think that's as good a name as compositional. Also, the notation shouldn't be "&$name@" because the $ dollar sign isn't helping. Get rid of that. A more general explanation for the notation exists.

Also, allow late binding (^) not only inside index computations but also anywhere following the reference variable name, with the restriction that early binding (@) can't happen after (to the right of) late binding.

Here's a convoluted example.
  set &var (a (0 1 2) b (3 4 5))
  set &ref (&var(a) &var(b))
  set &ref2 (&ref &var)
  set &ref3 (x &ref2{0} y &ref2{1})

Using the above code as given, here are a whole bunch of possible combinations of early and late binding. For each I show the notation entered by the programmer, the equivalent string representation (assuming &100 is &var, etc.), and the value obtained by dereferencing the reference.

Also, the mnemonic: @ is an archery target that's been struck, whereas ^ is an arrow still in motion.

AMG: It's quite likely most, if not all, of the practical use cases supporting late binding are obsoleted by the improved looping capabilities, particularly the ability to loop "over" a variable's value, even if the variable's value is a complex data structure. So there's no need to run a loop counter and access a funky reference variable whose value changes according to the value of said loop counter, just so the funky reference variable can be used to modify the structure that's effectively being iterated over.

Compare:
  = &data (a b c d e f g)
  = &ref &data{&i^}
  loop for &i from 0 until [list length $data] {
      = $ref [string toupper $ref@]
  }

With:
  = &data (a b c d e f g)
  loop for &i over &data {
      = $i [string toupper $i@]
  }

It's no contest. The latter is vastly superior. Even though I designed it, I can't even be sure I got the syntax right on the former.

Unless I can find a compelling reason for late-bound references, I'm going to drop them entirely.

By the way, the following would also work for this example:
  = &data [collect {[string toupper $x]} for x in $data]

Or exploit EIAS and the fact that [string toupper] doesn't modify the list grouping and delimiting characters:
  = &data [string toupper $data]

A more complex example:
  = &data ((Douglas Adams) (Terry Pratchett))
  loop for (((&first &last))) over &data {
      = ($first $last) ([string toupper $last@] $first@)
  }

With late-bound references:
  = &data ((Douglas Adams) (Terry Pratchett))
  = &first &data{&i^ 0}
  = &last &data{&i^ 1}
  loop for &i from 0 until [list length $data] {
      = ($first $last) ([string toupper $last@] $first@)
  }

And in current Tcl:
  set data {{Douglas Adams} {Terry Pratchett}}
  for {set i 0} {$i < [llength $data]} {incr i} {
      lset data $i [list [string toupper [lindex $data $i 1]]\
          [lindex $data $i 0]]
  }

AMG: Here's an example of reference composition in action. The idea is to extract selected columns from a table. The table is in a file with a newline between each row, whitespace between each value, and values containing no list-significant metacharacters. $indexes is a list of zero-based column indexes to be extracted, and $output is a list of lists, each of which is one of the columns pulled from $input.
  loop while {[$chan get &line] >= 0}\
       for &column over &output and &index in $indexes {
      = &column@{end+1} $line{index}
  }

Of interest is the word &column@{end+1}. The intended first argument to [=] is a reference into $output with the vectored index {end+1}. Typing $column would give a reference to $output with no indexing, and typing $column{end+1} would try to index $column rather than $output.

Instead, a new reference is constructed by composing an existing reference $column with the index {end+1}. The syntax for this is & because a new reference is being formed, column because that's the variable to initially consider, @ to dereference the item referenced so far and to apply the remainder of the word to the reference found within, and finally {end+1} to apply a vectored index denoting the list element after the final element.
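For comparison, a present-day Tcl version of the column extractor is sketched below; it assumes the same $chan and $indexes as above and relies on [lset] accepting end+1 to append:

  set output [lrepeat [llength $indexes] {}]
  while {[gets $chan line] >= 0} {
      set i 0
      foreach index $indexes {
          # append this row's value to column list number $i
          lset output $i end+1 [lindex $line $index]
          incr i
      }
  }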
Traces

AMG: Do I want Brush to have variable traces? Traces can be supremely powerful and helpful, but they also complicate variable access and interfere with optimizations, for instance common subexpression elimination in math expressions. Tcl traces on arrays and elements are quite nice and are the reason to use arrays instead of dicts, but Brush doesn't have arrays, only dicts. How would I get that functionality back?

Similarly, what would it mean for a trace to be applied to an indexed reference? That seems way too troublesome for me. Perhaps only support traces on the entire variable, but give the trace callback the reference path that was used. Let it decide what special handling, if any, the referenced component needs.

[Tcl_LinkVar]() has been very, very useful to me at work, and it would be a shame not to have something like it in Brush. Other kinds of traces might not be as big a deal, but I wouldn't know for sure. I haven't used them.

AMG: One way to go about indexed traces is to maintain a list of indexes for which a trace has been registered, then every time the value of the variable is changed, check if it changed in a way that affects any of the traced indexes. "Changing" a value such that its new value equals its old still counts. This gets very tricky when a variable has both {vectored} and (keyed) index traces. Also, efficiency may be an issue. Probably should also forbid late-bound ^dereferencing, or else things will be really unmanageable.

It's not advisable to put trace flags directly in the value structure. Values are shared, and traces are bound to variables, not values.

AMG, years later: I think it's probably best to not support variable traces. The theoretical existence of traces severely limits the possibilities for optimization, and incrementing an epoch counter for stuff that can be done to local variables seems like it would defeat the purpose of optimization. As for the practical usability of traces, they obscure the actual program flow, certainly not a good thing. What of GUIs and linked variables? GUIs can work like they do in compiled languages: provide accessor methods. Linked variables have been very useful to me, but it's quite likely they're almost never used in the wild, since they had a bug which disallowed zero or negative float values, and no one noticed until I came along.

But what of Tequila? That seems like a great use for traces. I guess the GUI solution is still needed: manually run a command to sync a variable or collection of variables.
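For reference, the Tcl facility being weighed here looks like this today (a minimal sketch of an array-element write trace):

  proc logWrite {name1 name2 op} {
      puts "write to ${name1}($name2)"
  }
  array set config {width 80 height 24}
  trace add variable config write logWrite
  set config(width) 132   ;# prints: write to config(width)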
Complete rewrite of garbage collection

AMG: The garbage collection scheme described in the Brush paper is very expensive. I tried my best to make sure it is only needed very rarely, then gave advice for how the programmer can further reduce its need. Whenever variables have nonzero refcount after removing their stack frame, the interpreter needs to confirm whether they're surviving because they're externally reachable (e.g. a reference was returned or placed in a global variable) or because they're merely self-referential. To tell the difference, it searches everywhere it can think of for external references.

There's a back door approach that's much faster. Instead of searching the whole world for external references, look inside the variables' values themselves to see if they account for the entirety of the remaining refcount.

I'll illustrate by example. Take this code:
  proc &tricky () {
      set &a &b
      set &b (&a &a)
      set &c &d
      set &e &f
      set &f &c
      : &c
  }

This returns a reference to a variable whose value is a reference to an unset variable. Most of the code just burns CPU. Silly, I know, but I'm trying to hammer the gc. Could have just said: proc &trickyfast () {set &c &d; : &c}.

At the end of [tricky], its local variable table will have entries for six variables: a through f. Let's say their reference values come out to be &100 through &105, correspondingly. Now, once the local variable table is blown away, the reference counts for all six variables will be decremented, resulting in the total refcounts in the following table (reconstructed from the code above):

  a (&100)  2  referenced twice by the value of b
  b (&101)  1  referenced by the value of a
  c (&102)  2  referenced by the value of f and by the interpreter result
  d (&103)  1  referenced by the value of c
  e (&104)  0  referenced by nothing
  f (&105)  1  referenced by the value of e

Note that the refcount for c is two because it's referenced not only by the value of f but also by the value in the interpreter's result field. Since e now has zero refcount, its value is ignored in all further processing.

Next, analyze the values of all remaining variables to find references to other local variables. If a variable x's value references another variable y, x is a possible savior for y. If the number of references to a variable x found amongst the values of all remaining local variables is less than the total refcount of x, then x is external. If a variable is external or otherwise saved, it is saved along with all variables it is capable of saving. And if a variable is neither external nor saved by an external variable, it is deleted.

In this case, c and d are saved. Of course, they are no longer called that because their names existed only in the local variable table that was just deleted. They are now only accessible through the reference that was returned by [tricky]. Though some code somewhere might decide to bind the reference to a local variable name again.

What if [tricky]'s caller ignores its return value? The very next command will overwrite the interpreter result. When that result goes away, the refcount of the variable formerly known as c drops to zero, and it is deleted. Its value is likewise deleted because it's no longer referenced by any variable. That in turn decrements to zero the refcount of the variable once called d, and it is also deleted. It is unset and therefore has no value, so processing stops.
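The externality test itself is simple arithmetic; a toy sketch (hypothetical helper, not Brush internals) applied to the refcounts above. It only performs the external/internal comparison, not the subsequent save-propagation step:

  proc external {total internal} {
      set result {}
      dict for {var n} $total {
          set i 0
          if {[dict exists $internal $var]} {set i [dict get $internal $var]}
          # more total references than internally-found ones => externally reachable
          if {$n > $i} {lappend result $var}
      }
      return $result
  }
  # total refcounts vs. references found in surviving locals' values
  puts [external {a 2 b 1 c 2 d 1 f 1} {a 2 b 1 c 1 d 1 f 1}]   ;# c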
Name of create subcommand

AMG: Do I really want my list/dict/lot creation subcommands to be called create? For such a common operation, a shorter name would do, e.g. cons or make or new. But then again, Brush already has a far shorter name: just enclose the list/dict/lot in parentheses.

I'd say it'd still be useful to have [list new] and [dict new] and [lot new] for functional contexts, but shorthand exists for that too. All three are essentially [: ({*}$args)]. The only functional difference is in the non-canonical and error cases. In the non-canonical case for dict and lot, keys are duplicated, and the true [dict] and [lot] commands would omit all but the final duplicate of each key. The only error case is giving [dict new] an odd number of arguments, but [: ({*}$args)] would accept this, though eventually the returned value will be used as a dict, at which time the error would happen anyway. I'm perfectly okay with this. Also, I like "new" better than "create".
Reclaiming [set]?

AMG: Probably not going to go with this, but another option for naming [lot] is to in fact call it [set], changing the traditional [set] command to [let] or [var] or [get]/[put] or [=]. I bet everyone would hate this, me included. But it's a thought, so I'm recording it here.

AMG: The Tcl "changes" file says the [set] command used to be named [var] [4], so there is precedent for using the name "var" to get/set variable values. (To be more accurate, the precedent is exactly against this, but let's not split hairs, haha.) The quote is: 4. "Var" command has been changed to "set".

AMG: If Brush is going to have an [:] command, i.e. a command whose name is a symbol rather than a word, then that opens the door for it to have an [=] command to use for variable lookup and assignment. It's a possibility. Symbols are cool because their meanings are more open to interpretation than words. The pronunciation would be "equal", meaning "let the referenced variable be equal to a value, or query what value the referenced variable is equal to".
  = &a 5
  = &b 6
  : $(a + b)

versus
  set a 5
  set b 6
  expr {$a + $b}

AMG: This creates an interesting symmetry between single-argument [:] and single-argument [=]. Single-argument [:] returns its first argument directly, and single-argument [=] returns the first argument's referent. In general, : $var has the same result as = &var, and : $ref@ has the same result as = $ref.

AMG: Instead of calling it the "equal" command, implying comparison, pronounce it "assign" or "let".
Cache the hash

AMG: The string representation's hash value can be cached in the Tcl_Obj (or whatever name) structure to avoid repeated computation and to speed up string equality tests. Reserve hash value 0 to mean the hash hasn't been computed since the last time the string representation changed. If the hash actually evaluates to 0, replace it with 1 or ~0 or such.

The actual hash algorithm need not change, and it has been shown to work enviably well for the kind of load Tcl experiences [5]. JO's "Ousterhash" H(S(1..n),i=n) is simply defined:
- H(S,i>0) = (H(S,i-1)*9 + S(i)) & ~0U
- H(S,i=0) = 0
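Transcribed into Tcl itself (a sketch; the 0xffffffff mask stands in for C's unsigned wraparound):

  proc ousterhash {s} {
      set h 0
      foreach c [split $s {}] {
          # h = h*9 + code point, modulo 2**32
          set h [expr {($h * 9 + [scan $c %c]) & 0xffffffff}]
      }
      return $h
  }
  puts [format 0x%08x [ousterhash hello]]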
Amortized constant-time dict element removal

AMG: Linear-time dict element removal sucks, and the only reason Brush has it is to maintain the list representation. Instead, remove the elements from the hash table and mark the list as stale. An old email of mine to tcl-core describes some machinery for managing the relationship between list and dict, and it already has the ability to track which list elements aren't present in the hash table because they correspond to duplicate keys. This facility can probably be reused to quickly find elements that have been deleted while lazily updating the list representation. Also, maybe periodically update the list when it gets too far out of sync with the dict so as to avoid keeping more than a small amount of garbage in memory.

The sticky wicket is managing object lifetimes. If a dict contains the last reference to something that needs to be cleaned up, that cleanup should happen as soon as the reference is removed from the dict. It would not do for the reference to be held in a tombstone garbage slot in the list, only to be cleaned up in response to a list operation being performed on the dict. Brush's internal data model and optimizations should have no effect on the order of operations visible at the script level. I believe the trick is to check if the removed keys or values contain references, then update the list eagerly if so.

Actually, a compromise presents itself. When removing references from a dict, don't collapse the list, thereby incurring linear-time performance, but instead set the Tcl_Obj pointers in the list to NULL. Actually, might as well do that all the time, not just when removing keys and values containing references! Really, all that's being done is avoiding the slow memmove() needed to eagerly keep the list compacted.

So now I describe a hybrid eager/lazy method. Removed dict elements are eagerly finalized, whatever that entails, but the slots they occupy in the backing list are merely set to NULL. Compaction is deferred until the list needs to be treated as a list, or the index of a key is requested, e.g. using [lot search]. Maybe some heuristic will force compaction if the list is more than 75% NULL or something like that.
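Here's a toy script-level model of that hybrid scheme, with illustrative names (lot-remove, a GONE sentinel standing in for the C-level NULL pointer) and the 75% compaction heuristic:

  proc lot-remove {lotVar key} {
      upvar 1 $lotVar lot
      set slot [dict get $lot index $key]
      dict unset lot index $key
      # tombstone the slot instead of compacting right away
      set l [lreplace [dict get $lot list] $slot $slot GONE]
      # heuristic: compact once more than 75% of the slots are tombstones
      if {4 * [llength [lsearch -all -exact $l GONE]] > 3 * [llength $l]} {
          set l [lsearch -all -inline -not -exact $l GONE]
          set index {}
          set i 0
          foreach k $l {dict set index $k $i; incr i}
          dict set lot index $index
      }
      dict set lot list $l
  }
  set s {index {a 0 b 1 c 2 d 3} list {a b c d}}
  lot-remove s b
  puts [dict get $s list]   ;# a GONE c d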
Using parentheses to make dicts

AMG: Here's a positive consequence of the dict/list merge in Brush.

First I'll explain using Tcl command names. In Brush, there's no performance advantage to creating a dict with [dict create] versus [list]. In Tcl, [dict create] is faster because it avoids creating a temporary list representation which will shimmer to dict the first time it is used as such. But in Brush, the two types are unified, and the only performance difference between the two approaches is whether the dict hash table is created eagerly or lazily. Either way, it's created (at most) once. In fact, the [list] approach means it might never have to be created, if the value is never used as a dict. (There's a non-performance difference: with [list], error detection [i.e. missing value to go with key] is also performed lazily.)

Now I'll use Brush notation. Brush's [list] command doesn't make lists; rather, it's an ensemble to collect list-related subcommands. I anticipate there being a [list create] command which works like Tcl [list]. Obviously, this is a lot to type, so Brush has a very convenient shorthand: (parentheses).

So, put these two thoughts together. If explicitly constructing as a dict versus a list offers no performance benefit, one might as well use whichever form is easier to type. And in Brush that would be list construction using (parentheses).

There's one gotcha, but Brush offers an answer to it too. That gotcha is that [dict create], etc. are commands, and many times you need a command, but (parentheses) are merely syntax for constructing a word, same as "quotes" and {braces}. So how do you use (parentheses) to make a list or dict in a context where you need a command? You could explicitly call [list create] or [dict create], but that's pretty clumsy to type. Brush's solution is the [:] command. When given a single argument, it simply returns it. That single argument could well have been constructed using (parentheses).

Let me summarize with an example. In Tcl, you'd write:
  dict create $key1 $val1 $key2 $val2

But in Brush it's:
  : ($key1 $val1 $key2 $val2)

The only difference in return values is the intrep and the fact that Brush got its result faster due to deferring construction of the hash table until the value is actually used as a dict. Again, the drawback is that error message generation is also deferred.
Collision between indexes-are-exprs and multiple list indexing

AMG: Multiple list indexing (as shown by one example on page 18) doesn't mix terribly well with the fact that all list (and string, really any numerical) indexes are interpreted as expressions, not just integers or end-integer. Space is the delimiter between indexes, yet whitespace is also permitted within an expression, and each index is an expression. This isn't an irreconcilable conflict, but it can get goofy and difficult to read. For example, $x{1 + 2 3} means $x{1 + 2}{3}.

In the interest of promoting readability, I want to require parentheses for expressions containing spaces when more than one index is provided.

Alternately, don't allow arbitrary expressions as indexes. I definitely don't like this "solution" though. It forces another level of symbols which I find unwelcome, and there's no benefit. There's already sufficient context for the interpreter to know to expect a math expression, so requiring the programmer to say it in two ways is just a waste and does not contribute to readability.
Double substitution creeps back in

AMG: Again, since numerical indexes are expressions, not just constant numbers, substitutions need to be done by the expression engine and not the interpreter. Same problem as with [expr] in Tcl, but in a lot more places. My fix was to allow variables in expressions to not have dollar signs, though this doesn't work for all possible variable names. I guess in such cases, braces are required. But they would have been required anyway had [expr] been used.

Take [string index] as an example. Its first argument is the string from which to retrieve the character, and its second argument is the index of the character to retrieve. That second argument, being an index, would now be treated as some combination of expression and the magic prefix end. In the simple, common cases of the index being a literal integer, there is no problem. These cases are the same as current [string index] usage where no substitution takes place.

Now, what about indexes that are the result of variable substitution? Well, try it and see:
  set &index 4
  string index foobar $index

Here, [string index] receives the expression 4 as its index. Parsing and evaluating 4 as an expression yields the number 4, which seems alright, but what if $index contained... something else?
  set &index {[exit]}
  string index foobar $index

This would terminate the program. Bad news. Correct usage would be...?
  string index foobar {$index} ;# mirrors current correct [expr] usage
  string index foobar index    ;# alternative thanks to implied $ in expressions

This bothers me. One thing I had wanted to do with Brush is have the programmer explicitly identify variable names (& prefix) even when not taking their value ($ prefix). Yet here the name is being given as a literal string in order to avoid a security issue, and I had also wanted to make the easy thing also be correct, safe, and fast. It's perhaps not so easy to remember that it's required to leave off all prefixes in order to be safe. foobar isn't the name of a variable or any such; it's the actual string being indexed. But index is the name of the variable containing the numerical index. So more thought is required here.

Another case! Now let's have the index come from a script substitution.
  proc &command () {: 4}
  string index foobar [command]

(The proc name needs an & prefix because it's a variable name, and procs are simply lambdas stored in variables. () is the preferred constructor for an empty list, though "" and {} and [] would have also worked. The : 4 is basically shorthand for return 4.)

Here, [command] returns 4, which is taken as the expression giving the index. Same concerns as before. What if...
  proc &command () {: {[exit]}}
  string index foobar [command]

This also exits. Correct usage would be...
  string index foobar {[command]}

Yup, the need to always brace your expr-essions has crept back in. Sigh...

What's the solution? Well, my solution before to the bracing problem is to provide a three-character $(...) shorthand for performing expression substitution, which is significantly easier than the nine-character [expr {...}] locution we know and hate. But here we're getting into trouble again because every command that takes an index argument now actually takes an [expr] argument and is therefore subject to the bracing requirement. I had thought I'd be mitigating that by additionally making $ variable prefixes optional inside expressions in the overwhelmingly common case that there's no possibility for ambiguity. But that only made sense when the parser could easily tell that it's in an expression context (i.e. inside $(...)). Here, that cue is lost because indexes are expressions, and any command argument could be an index.

So fix it by not allowing arbitrary expressions as indexes? This means explicit usage of $(...) in more places, but that should be okay because it's still a tremendous savings over the current Tcl situation of needing [expr {...}] in all those same places.

But there remains a performance drawback when the end prefix is considered, since that implies slow string concatenation and reparsing. I had solved this by making the index notation innately understand both expressions and end prefixes so they'd all fit inside the same intrep or bytecode with no need for shimmering to and from string. So add end prefixes to the expression syntax but only allow their use where an index is required? Maybe that's what's needed. Hmm!

Still not happy with this. Consider another place where indexes are used: variable list indexing. The interpreter already knows (or could know) that it's in expression territory because it saw the braces following a variable name ($ or & prefix) and whatever else, so there's no danger of double substitution here. But there is another danger: inconsistency. That is, unless I can talk it away. :^) Having those indexes allow expressions with no need for $(...) would conflict with indexes passed as arguments requiring $(...).

What do I mean when I say, talk it away? I could instead say (1) yea verily, end prefixes are legal in expressions whose results are used as indexes, and (2) variable list indexing takes expressions, not just simple indexes. Formulated this way, there is no inconsistency, but I've added a level to the concept hierarchy.

Aside from the dirty feeling of having had to resort to smoke and mirrors, I've also made it questionable whether the string end or any other string with an end prefix is acceptable as an index. I don't like having the same thing in two places, so for me to put end prefixes in the expression syntax (albeit only legal when the expression ends up being used as an index) makes me want to remove it from the index syntax and instead require writing end as, say, $(end).

Now for the killer. What is the string representation of $(end)? If I were to [puts] it, what would arise? It's not being used as an index, so that's illegal. In other words, EIAS is violated. Fatal, right?

Well, yes, but not to the entire concept, only rather to my misguided notion that I need to remove end prefixes from index notation if I want to add them to expression notation. The string representation is easily in my grasp: keep the end prefix! So puts $(end) would print end, and puts $(end-5*2) would print end-10.

But surely something remains illegal.
That would be attempting to use a string starting with end, no matter how that string was derived, where an ordinary number is expected. It would only fly where an index or general string is expected. This is actually current Tcl behavior, so there is no change.

Need more time before I actually decide on anything... I just wanted to write down my reasoning so I could revisit later and see how dumb I was, so I can avoid repeating old mistakes as I go on to make bigger and better mistakes.

AMG: Revisiting after much time has passed. Okay, so what I seem to have decided is that security concerns require that commands taking index arguments not interpret those arguments as arbitrary expressions to be evaluated. This restriction means the caller must explicitly use $(...) notation to evaluate the index expression prior to passing it as an argument. Use of $(...) avoids the security problem. To make this possible even when the end prefix is used, $(...) must be capable of internally understanding end and leaving end in the result. All that is acceptable but requires explanation.

Like Tcl expressions, Brush expressions yield string values, and those strings are grouped into type categories according to how they are used. The new type being introduced is the index, whose value can be an integer, end, end-integer, or end+integer. The end prefix comes from the original index expression notation. Just because indexes can be integers doesn't mean that indexes are always valid where integers are expected.

Let me demonstrate. Here's the setup:
  set &index 4             # or maybe: set &index {[exit]}
  proc &command () {: 4}   # or maybe: proc &command () {: {[exit]}}

And here are the command examples. Passing a quoted expression used to be safe but is now invalid because expressions are no longer indexes. Passing the result of a substitution was valid but unsafe and is now safe because indexes are not subjected to further substitution.

Some more complex examples:
  set &a 1
  set &b 3
  string index foobar $(a+b)
  string index foobar $(end-a-b)
  set &i $(end-a-b)
  string index foobar $i

Another thing to mention is that this new proposal closely matches current Tcl behavior.
Impact of brace counting changes

AMG: Brush significantly redesigns brace counting in order to avoid comments resulting in mismatched braces, for instance in this Tcl code:
  proc moo {x} {
      # if {$x > 5} {
      if {$x > 20} {
          puts $x
      }
  }

A human reading this would expect the first [if] line to be commented out, no questions asked. That would be true if only [proc] was even allowed to execute. But it won't get that chance, because the Tcl interpreter never finds the closing brace for [proc]'s final argument. Comments have zero meaning to the brace counter, which only cares about braces and backslashes.

Brush's brace counter is also mindful of comments, and it declines to count braces that appear inside of what would be a comment if the braced text were to be a Brush script.

Brush also skips braces that appear inside double-quoted words, making code like the following possible:
  proc moo {x} {
      if {$x} {
          puts "{"
      } else {
          puts "}"
      }
  }

This code actually doesn't work. Try it and see for yourself. The Tcl fix is to quote the braces with backslashes instead of, or in addition to, double quotes. But the Brush fix is to recognize the double quotes and skip braces appearing inside.

Of course, all this means Brush has to keep track of where pound signs and double quotes are actually capable of starting a comment or quoted word. Plus Brush has #{block comment notation}# for comments which can span multiple lines or appear within a single line, and its #regular line comments can begin at any word, not just the first of a command.

So, what's the impact? This makes canonical quoting rules more complicated. To the best of my knowledge, Tcl's canonical quoting rules are:
- Backslash-quote words ending with an odd number of backslashes or containing backslash-newline or mismatched braces.
- Brace-quote empty string and words starting with pound sign or containing whitespace, brackets, dollar signs, matched braces, double quotes, or backslashes.
- Do not quote words that aren't described by the above.
Under Brush's comment-aware brace counting, those rules no longer suffice. For example:

  puts $chan {#include <stdio.h>}

The above doesn't work since the closing brace is thought to be part of a comment. Instead, one of the following must be used:
  puts $chan "#include <stdio.h>"
  puts $chan \#include\ <stdio.h>
  puts $chan {#include <stdio.h>
  }

The following will not work. Even though it does successfully inhibit the comment behavior for the brace counter, it will emit the backslash, which is not valid C.
  puts $chan {\#include <stdio.h>}

Understandably, I'm not happy about this, and I will need to think on it further.

AMG: Okay, I think I'm willing to accept this because a thoroughly decent workaround exists (use quotes). In Tcl it's already the case that not every string can be brace-quoted, so I'm not going to be able to fix that. Multiple quoting modes exist so any string can be encoded if necessary.

AMG: Need to come up with an answer to Lars H's 2006-08-04 post on Why can I not place unmatched braces in Tcl comments.
Extended [set] command

AMG: Brush defines [set] to take the place of [lassign], exploiting the fact that it's always possible to distinguish between a reference and a list with non-unit length. When given a list with length two or greater as its first argument, Brush treats it as a list of references to variables into which elements from its second argument will be assigned, with the leftover elements returned, similar in fashion to [lassign]. If the list has length two, its second element can be empty string, and it is treated as a list of only one reference, to be processed as described in the preceding sentence.

I don't always care about every element in a list. My current practice is to assign to a dummy variable called _, then ignore its value. Brush can do the same with &_, but maybe we can do better than that. If an element of [set]'s first argument list is - (let's mix it up a bit, shall we?), that means to throw away the corresponding element of its second argument list. This should provide a marginal performance improvement, avoid cluttering the namespace with unused variables, and (perhaps most importantly) would be a documented, officially-supported idiom rather than an ad-hoc practice subject to variation and misinterpretation.

AMG: This wouldn't prevent having variables actually named - since - is distinct from &{-}. The first argument to two-argument [set] is either a reference or a list. If a reference, the second argument is assigned into the variable or element thereof identified by said reference. If a list, each element is a reference (into which a list element of the second argument is assigned) or - (to inhibit assignment). If a two-element list, the final element may be empty string, in which case it is treated as a single-element list containing only the first element.

AMG: A further idea would be to allow the elements of [set]'s first argument to be two-element lists like the parameter list argument to Tcl's [proc] command. Variables (references, actually) that would otherwise go unassigned will instead get the default value supplied by the second element. If no second element is given (i.e. it's a reference and not a two-element list), the referent is unset. This would be a change from the current Brush paper, which says it's an error to have too few values for assignment.

Going further, maybe adopt the full Brush [proc] parameter list syntax, except using references instead of names. For this to work well, the [proc] parameter list syntax should be changed to not append special characters to the parameter names, but I think that's a worthy improvement anyway.

AMG: Actually, supporting the parameter list format would largely remove the need for returning the excess elements because it would be possible to write stuff like:
  = ((&elem ? ()) (&list *)) $list  # set list [lassign $list elem]
  = (($elem ?) (&list *)) $list     # idem, but unset elem if list is empty
  = (&elem (&list *)) $list         # idem, but error if list is empty

Even more fun:
  = ((&list *) (&elem ? ())) $list  # set elem [lindex $list end]; set list [lrange $list 0 end-1]
  = ((&list *) &elem) $list         # idem, but error if list is empty
  = (&beg (&list *) &end) $list     # move first and last elements from list to beg and end, respectively
pass keyword to return excess elements

AMG: Just in case there really is a situation where it's necessary to return the excess elements, the parameter list definition could be extended to support a pass keyword, reminiscent of the - keyword described above. pass can be used at most once, and not in combination with a * parameter. When used, it collects all unassigned inputs in the manner of a * parameter, and [=] returns the collected inputs as a list.
  = (&a &b &c pass) $list              # lassign $list a b c
  = (pass &a &b &c) $list              # lassign [lrange $list end-2 end] a b c; lrange $list 0 end-3
  list size [= (&a &b &c pass) $list]  # llength [lassign $list a b c]

This eliminates the need for the special empty string hack described in the paper, which was used to force a single reference to be interpreted as a list. Instead of writing (&a ()), write (&a pass).

If pass is not used, what should extended [=] return? Empty string is probably the best choice. The other option is to return its second argument, but this will hurt performance in the case of [=] being used to take elements from a list, modifying the list variable in the process. Returning the original value of the list variable would require that a copy be maintained, though the copy would almost never be used. If the caller really wanted that copy, [:] is available for that purpose:
: $list [= (&a &b &c (&list *)) $list]
Never mind, forget about pass

AMG: I'm having a hard time getting comfortable with pass meaning "pass through". Perhaps result would work better, since it directly means the [=] command's result.

AMG: Actually, I'm having a hard time getting comfortable with having the feature at all. What is it good for? Everything it does is more clearly expressed in terms of assigning to a variable through the other modes of operation. I only came up with it as a way to fully implement [lassign] in terms of [=], but is this a worthwhile goal? I mean, this is possible, explicit, and more flexible:
  = (&a &b &c (&result *)) $list; : $result

As for its use to avoid the empty string hack, what is the empty string hack accomplishing anyway? That came from an earlier design for [=] (then named [set]) in which it was a direct replacement for [lassign], always returning the unassigned elements. That feature was only really desirable in combination with popping elements off a list, but that is now better achieved by catchall-assigning back into the list variable with the extended notation introduced here. So once again, it's not needed. Going forward, let's say that [=] always returns empty when doing list assignment.

The remaining use case for the empty string hack is assigning the first element of a list into a variable. That can be done more clearly with [= &a $list{0}]. And if the list value isn't contained in a variable, functional substitution can be used: [= &a $[command]{0}]. Or (highly unlikely) if it's the result of non-command shenanigans like concatenation and list construction, use [:] as a shim to make it be a command: [= &a $[: (a b c)]{0}], but at that point you're really just trying to make things hard.

A new need for the empty string hack arises below, though for a different purpose than previously envisioned. I'm completely changing the syntax though, so it looks less like a hack and more consistent with the rest of the language.
On third thought, maybe keep it

AMG: One good use for pass-like functionality is an option parser. While there remain unprocessed items in the command line list, pass the first to [switch], with the side effect of moving that item out of the unprocessed list.

The name is still no good. But now I'm messing around with having symbols instead of names, so perhaps that opens up a possibility. How about :? That symbol already is the name of a command which returns its first argument, so it's natural to use the symbol to denote stuffing into the return value.
  loop while {[list length $argv]} {
      switch [= (: (* &argv)) $argv] (
          -a {stdout put "got -a"}
          -b {stdout put "got -b [= (: (* &argv)) $argv]"}
          default {stderr put "bad option"}
      )
  }

As shown above, the locution for popping the first item from a list variable "argv" is [= (: (* &argv)) $argv].

It's probably desirable to print the unrecognized argument in the bad option error message. To handle that, make : accept an alternate form in which it's the first element of a list whose second element is the variable (or nested variable assignment definition) into which its value will also go.
  loop while {[list length $argv]} {
      switch [= ((: &option) (* &argv)) $argv] (
          -a {stdout put "got -a"}
          -b {stdout put "got -b [= (: (* &argv)) $argv]"}
          default {stderr put "bad option: $option"}
      )
  }

The above could also be written:
  loop while {[list length $argv]} {
      = (&option (* &argv)) $argv
      switch $option (
          -a {stdout put "got -a"}
          -b {stdout put "got -b [= (: (* &argv)) $argv]"}
          default {stderr put "bad option: $option"}
      )
  }
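The same shape in present-day Tcl, for comparison (a sketch; [lassign] does the popping):

  while {[llength $argv]} {
      set argv [lassign $argv option]
      switch -- $option {
          -a      {puts "got -a"}
          -b      {set argv [lassign $argv value]; puts "got -b $value"}
          default {puts stderr "bad option: $option"}
      }
  }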
Change order of special list elements passed to [=]

AMG: Parsing is much easier when the * and ? modifiers are the first list elements, instead of the second. This lets humans and computers read left-to-right with no need to backtrack:
  = ((? &elem ()) (* &list)) $list  # set list [lassign $list elem]
  = ((? $elem) (* &list)) $list     # idem, but unset elem if list is empty
  = (&elem (* &list)) $list         # idem, but error if list is empty
  = ((* &list) (? &elem ())) $list  # set elem [lindex $list end]; set list [lrange $list 0 end-1]
  = ((* &list) &elem) $list         # idem, but error if list is empty
  = (&beg (* &list) &end) $list     # move first and last elements from list to beg and end, respectively
Allow nesting

AMG: Why not let the syntax be recursive? In combination with letting [loop]/[collect] use the same syntax as [=], this removes the need for unpack and unpackover. Data structures nest, so it's useful to nest the list of references into which data structure elements are assigned.

Additionally, I'd like to extend the * catchall to support multiple references, to be assigned in the manner of Tcl [foreach]; see the sketch after the examples below.

One more change: Instead of - inhibiting assignment, let's use /. My reasoning is I wish to maintain commonality with [proc]'s second argument (the parameter list), yet I am considering extending [proc] to have inhibited assignment, if it's not there already. The challenge is that [proc] must be able to produce a syntax summary from its parameter list if called with wrong arguments, but what should [proc] say for inhibited assignment? If it's told -whatever instead of -, it would know to print whatever, so that solves the problem. But doing so introduces the problem of making it appear like -whatever is an option. Hence I want a different character, and / is available, so instead say /whatever and all is well.

As mentioned above, the need for the empty string hack returns. Even though it may not be useful at the top level, it can come in handy with nested lists. Without it, a list element would be assigned. With it, the list element would be required to itself be a single-element list, and that single element would be assigned. But the syntax sucks. Instead I want to leverage the notation already established, that being having the first element of the list indicate how it should be interpreted. The ' character is handy, and it is vaguely reminiscent of Lisp's quote operator, which it also abbreviates as '. Plus ' looks like a tiny 1, reinforcing the fact that it is to be used in cases where (semantically speaking) the list has one element.

AMG, updating much later: added (/ comment), :, and (: nest).

Here's a syntax summary: (* &var) assigns a list of elements to var, so (* (&v1 &v2)) should likewise assign lists to v1 and v2.

There is a difficult interaction between (* ...) and (? ...). Data elements are not directly assigned to references recursively nested within (* ...), but rather these references are set to lists of elements encountered. (? ...), when not given a default value, unsets the referenced variable or element when it does not have a corresponding data element to assign. These two rules result in ambiguity when some elements are assigned and others not. The reason is that lists in Brush are dense, not sparse; there's no way to put a "hole" in a list, other than at the end. Unsetting a list element means deleting it, and its existence can't later be tested.

The chosen solution is to add one level of list encapsulation to all elements of all references assigned to non-defaulted (? ...) when recursively nested within (* ...). This provides a sparse list implementation in which each element is a zero-length list if absent or a single-element list if present, and that element is the data. For example, {{hello world}} {} {{how are you}} is such a list, as is {} 1 2. Other solutions are possible, but this seems to be the simplest.
  # concatenation and flattening =
  # input must be a single-element list
  = (' &x) 1
  : $x   # 1
  = (' &x) ()      # error: too few elements when assigning to {' &x}
  = (' &x) (1 2)   # error: excess elements when assigning to {' &x}
  = (' &x) ((1 2))
  : $x   # {1 2}

  # input must be a list of single-element lists
  = (* (' &x)) (1 2 3 4 5 6 7 8)
  : $x   # 1 2 3 4 5 6 7 8
  = (* (' &x)) ((1 2) (3 4) (5 6) (7 8))   # error: excess elements when assigning to {&x}
  = (* (' &x)) (((1 2)) ((3 4)) ((5 6)) ((7 8)))
  : $x   # {1 2} {3 4} {5 6} {7 8}

  # multiple variables in combination with *
  = (* &x &y) (1 2 3 4 5 6 7 8)
  : $x   # 1 3 5 7
  : $y   # 2 4 6 8
  = (* &x &y) (1 2 3 4 5 6 7)   # error: too few elements when assigning to {* &x &y}
  = (* &x &y) ((1 2) (3 4) (5 6) (7 8))
  : $x   # {1 2} {5 6}
  : $y   # {3 4} {7 8}
  = (* (&x &y)) (1 2 3 4 5 6 7 8)   # error: too few elements when assigning to {&x &y}
  = (* (&x &y)) ((1 2) (3 4) (5 6) (7 8))
  : $x   # 1 3 5 7
  : $y   # 2 4 6 8
  = (* (&x &y)) (((1 2) (3 4)) ((5 6) (7 8)))
  : $x   # {1 2} {5 6}
  : $y   # {3 4} {7 8}

  # ? in combination with *
  = (* &x (? &y)) (1 2 3 4 5 6 7)
  : $x   # 1 3 5 7
  : $y   # 2 4 6 {}
  = (* &x (? &y)) ((1 2) 3 (4 5) (6 7) (8 9) () (10 11))
  : $x   # {1 2} {4 5} {8 9} {10 11}
  : $y   # 3 {{6 7}} {{}} {}
  = (* &x (? &y _)) ((1 2) 3 (4 5) (6 7) (8 9) () (10 11))
  : $x   # {1 2} {4 5} {8 9} {10 11}
  : $y   # 3 {6 7} {} _
  = (* ((? &x) (? &y))) ((1 2) 3 (4 5) (6 7) (8 9) () (10 11))
  : $x   # 1 3 4 6 8 {} 10
  : $y   # 2 {} 5 7 9 {} 11
  = (* ((? &x) &y)) ((1 2) 3 (4 5) (6 7) (8 9) () (10 11))   # error: too few arguments when assigning to {{? &x} &y}
  = (* ((? &x) &y)) ((1 2) 3 (4 5) (6 7) (8 9) (10 11))
  : $x   # 1 {} 4 6 8 10
  : $y   # 2 3 5 7 9 11
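As promised above, the multi-reference catchall corresponds to Tcl's multi-variable [foreach]; a sketch of the first (* &x &y) example in today's Tcl:

  set x {}
  set y {}
  foreach {a b} {1 2 3 4 5 6 7 8} {
      lappend x $a
      lappend y $b
  }
  puts $x   ;# 1 3 5 7
  puts $y   ;# 2 4 6 8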
Make [proc] parameter list special characters be separate list elements

AMG: Instead of writing:
  proc &p (a b? (c? xxx) d (e= yyy) f* g? h) {...}

Write:
  proc &p (a (b ?) (c ? xxx) d (e = yyy) (f *) (g ?) h) {...}

This avoids trouble if the parameter (variable) name actually ends in a special character, plus it better suits the proposed change to extended [set]. With this change, here are all the possible forms:

The one I call out as "explicit" is useful if the variable name is very strange and may be interpreted as a list.

AMG: This change makes room for more features. I'm considering named arguments in the manner of options. However, this might not be the best level at which to implement the feature. Tk, for instance, has an option database which can supply defaults for things not passed as command option arguments, and this is integrated with each widget instance's [configure] and [cget] subcommands.

But then again, maybe it is right, since there is nothing stopping the parameter list to [proc] from being the return value of a command which works out all the defaults in a consistent way. Though it's also possible to wrap [proc] itself with a command that does this and more. Or have a common parsing command which each proc can invoke on its catchall argument, just like current Tcl practice, so that no one option scheme is given preferential treatment by the language. Brush's relaxation of Tcl's restriction that the catchall argument be the final one makes this a lot easier to do.

What I have already defined is (I think) a mostly minimal subset needed to implement anything else. I say "mostly" because required and optional arguments could be defined in terms of the catchall argument. But bound arguments and references are special in that they're supplied when the command is defined, not when it is invoked.

AMG: A further refinement is to swap the order of elements. Additionally, support the nested modes of operation allowed by [=], [loop], and [collect]. Plus, require / to name its formal argument for the sake of usage/error message generation. Like so:

The "explicit" mode is needed if the name resembles a list which would otherwise be interpreted as a nested construct. The syntax change is required to avoid ambiguous cases. The nested modes of (= ...) and (& ...) are intended to accommodate val and ref being programmatically generated structures. (= ...) and (& ...) themselves cannot be used in a nested construct.

Lines marked [*] denote features present in [proc] formal argument lists that significantly differ from [=]/[loop]/[collect] assignment lists. [*] or no, in all cases the syntax differs in that the variables are given as names and not references, because the names apply inside the [proc] body (to be executed later) and not the context in which they are written and originally encountered.

The (/ name) mode is changed to require that a name be given. This is needed for when the generated proc is invoked with a wrong number of arguments and must produce an error message describing its correct usage.

Quite a lot of symbols are involved, so it's definitely appropriate to justify their choice and provide a mnemonic.
Be strict

AMG: Tcl tries to be generous with stray $dollar signs and close [brackets]. If it's unable to use them in a substitution, it just passes them through literally. In my experience, this tends to hide bugs or obscure their origin. For example, set chan [open file]] is legal, and no error happens until $chan is passed to any file I/O commands, which will fail to find the channel due to the trailing close bracket. To force programmer intent to be expressed more clearly, and to detect errors as early as possible, I want to require \backslashes or braces around all dollar signs and close brackets that aren't intended to be part of a substitution.

Tcl is also generous with {braces} and "double quotes" that aren't at the start of the word. They have no special meaning unless they're at that privileged position. This is a major departure from Unix shells, which allow quoting to start, stop, and change styles anywhere within a word, so there is potential for user confusion. Requiring quoting for these special characters when not at word start would help to more quickly educate the user. Brush allows words to be quoted with parentheses as well as braces and double quotes; for consistency, (parentheses) would also need this strict treatment. Literal parentheses could be quoted with backslashes, braces, or double quotes.

One consequence of this last change would be to more quickly flush out code that omits the leading &ampersand when constructing a variable reference. Detecting this error early would be helpful because adding the requirement for & is a significant difference between Tcl and Brush. For example, a(b) would be illegal; one must write &a(b), $a(b), {a(b)}, "a(b)", or a\(b\). In all cases, it is manifestly clear whether the programmer intends to make a reference, take a value, or simply have a literal string.

Lastly, Tcl is generous with the #pound sign, which only has special meaning when it appears where the first character of a command name could appear. Brush allows comments in more places, but still they can't start in the middle of a word. (I'm considering changing that last aspect, but I'm not committed yet.) I want to require that non-comment pound signs be quoted the same as any of the other special characters I just discussed. A lot of Tcl code already (unnecessarily) backslash-quotes pound signs, for instance in combination with [uplevel], so this wouldn't be a surprising change, but rather one in line with (admittedly naïve) programmer expectation.

Basically, I want the rule to be that special characters always require quoting to disable their special interpretation, even in contexts where their special interpretation isn't applicable. This should reduce surprise, encourage consistency, detect errors early, and maximize possibility for future expansion.

AMG: Also, I want it to be an error to pass too many arguments to [format].
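To see how stock Tcl hides the error today, consider this sequence (the exact channel name varies with interpreter state; "file6" is illustrative):

set chan [open file]]    ;# legal: the stray ] is kept as a literal character
puts $chan               ;# prints something like file6] -- note the trailing bracket
gets $chan line          ;# error: can not find channel named "file6]"

Under the proposed strict rule, the first line would fail immediately at parse time, pointing straight at the stray bracket.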
Be lenient

AMG: Wait, I thought we were going to be strict here? Well, there's one case where I think leniency is called for: extra whitespace in line continuations. Compare the following two Tcl code snippets.
puts\
hello

and
puts\ 
hello

The first works, but the second dies with invalid command name "puts ". Trouble is that the whitespace at the end of the second snippet's first line, a space right after the backslash (bet you didn't even know it was there), causes the backslash to not be interpreted as a line continuation.

I propose to change backslash-newline-whitespace to instead be backslash-whitespace-newline-whitespace. If a backslash is followed by any amount of linear whitespace, then one newline, then any amount of linear whitespace, then upon execution, the entire sequence is instead treated as a word separator.

AMG: On second thought, this is dangerous. What if the last argument to a command legitimately needs to be a whitespace character? Normally it would be quoted with braces, but perhaps other characters in that same argument can't be quoted with braces, e.g. the argument contains mismatched braces, in which case backslashes must be used. What about double quotes? Sure, they'd solve the problem, but they're not going to be emitted by [list] or the Brush list constructor (parentheses), since double quotes don't nest as well as braces. I want Brush lists to always be usable as commands when generating code.

Sorry, this feature seems like it may be a liability.
The [take] command
Motivation

AMG: One use for the [K] command is to improve performance in cases where Tcl's sharing semantics incur cost with no benefit [8].
set x [lreplace $x 2 3 foo bar quux]

[lreplace] is forced to create and operate on a copy of $x, even though the original $x will be thrown away very soon after [lreplace] completes. Of course it can't know this in advance, especially if there is any chance of error. However, [lreplace] can run much faster if its first argument is unshared (e.g. it is not stored in a variable). The current, most efficient Tcl way to do this is:
set x [lreplace $x[set x {}] 2 3 foo bar quux]

This is efficient but confusing to read. It can also be written using [K], though there is little improvement to readability:
set x [lreplace [K $x [set x {}]] 2 3 foo bar quux]
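For reference, [K] is the classic combinator that returns its first argument and discards the second; its conventional Tcl definition is a one-liner:

proc K {a b} {return $a}

In [K $x [set x {}]], the embedded [set] empties the variable while [K] hands the old value through, so by the time [lreplace] sees that value, it is no longer shared with any variable.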
Design approach

AMG: I suggest a new [take] command which sets a variable to empty string, then returns its old value, which has a higher chance of being unshared at that point. Its argument would be a reference to the variable, with normal Brush indexing permitted. I don't think [take] should always force its return value to be unshared, since whatever uses the value is certainly capable of making its own unshared copies if need be. All it needs to do is take the value away from the variable. It shouldn't unset the variable, because that is expensive, and this is meant as a performance optimization.

Here's a theoretical Brush rendition of the above:
set &x [list replace [take &x] 2 3 foo bar quux]

By the way, in Brush this particular example could have been written without the [take]; I just copied it from the [K] page. Brush has more direct syntax for replacing parts of lists:
set &x{2:3} (foo bar quux)

Now combine the two for an interesting point. After running the above, [take &x{2:3}] would return "foo bar". It would also behave like [set &x{2:3} ()], which is the same as [unset &x{2:3}] and removes the two requested elements from the list. It does not replace each taken element with empty string. The reason is that the operation is being done on a list range, not individual elements.

Oh yes, another thing to mention: interactive (shell) use of Tcl or Brush implies that the [history] mechanism also keeps references to all the old result values. This also affects performance, but maximal performance in an interactive environment is less important. In Tcl, this can be inhibited by forcing commands to result in empty string by ending them with ";list". In Brush, the construction is ";:", since the [list] command does something different than in Tcl.
Implementation of [take]

AMG: Speaking of [:], here's a Brush implementation of [take]:
proc &take (var) { : $var@ [set $var {}] }

Let's go through it in excruciating detail to review the similarities and differences with Tcl. [proc]'s three arguments are:
- The reference to the locally scoped variable in which the procedure's lambda will be stored.
- The list of argument names (not references). Because this is a list, (parentheses) notation is recommended. Special syntax for things like defaults is supported but not used by this example.
- The script to execute when the procedure is called.
Within the script body itself:

- $var@. The $ introduces it as a variable substitution, then var means to get the value of the local variable called "var". The @ says to interpret that value as a reference and get the value of the referenced variable. The caller is supposed to have passed a reference to a variable, so this works out perfectly. Brush references are valid across any number of scopes, plus have other interesting properties. This gives them many advantages over the Tcl equivalent of passing around variable names. Anyway, this value becomes the result value of [:], which in turn becomes the return value of the proc as a whole, no matter what side effects may take place in the generation of [:]'s subsequent arguments.
- [set $var {}]. This is a script substitution, also known as command substitution. [:] ignores all arguments but the first, so the fact that this evaluates to empty string is inconsequential. The important thing is that it has a side effect.
- $var. [set]'s first argument is a reference to the variable it is to access and (optionally) modify. It doesn't matter where that reference came from. In this case it was put forth by [take]'s caller, but who knows who originally generated it? Doesn't matter. A variable will continue to exist for as long as references to it exist, so this reference could even be to a variable that was a local in a procedure that terminated long ago. It would still work. Anyway, there's no @ here because [set] wants the reference, not the value.
- {}. This is the value [set] will put in the variable. In this case, it's empty string. It could also have been written [], (), or "". All four are equivalent, but {} is the canonical representation of empty string.
Automatic [take]

Okay, back to the concept of [take]. After thinking about it a bit more, it occurred to me that it may be worthwhile, and not altogether impossible, to automatically detect situations where [take] would be helpful. When a variable is about to be overwritten (barring an exception/error, ugh), it need not contribute to its value's reference count. If the interpreter can somehow look ahead and see that the variable(s) contributing to an argument will be overwritten, it can speculatively decrement its (or their) reference count(s).
set &x [list replace $x 2 3 foo bar quux]

Here, [list replace] doesn't know where its return value will go, nor does it know where its arguments came from. Clearly it's not capable of doing any of the optimizations I describe. Does the interpreter have a broad enough perspective to recognize the opportunity for optimization?

So, what does the interpreter know? It knows it's invoking some command and passing it the value of a variable as an argument. It also knows it's passing the return value of that command as an argument to another command, which also takes a reference to that same variable as its other argument.

That's not enough information. The interpreter doesn't know what any of the commands are actually doing, so it doesn't know the second command will overwrite the referenced variable. It can't even be sure the second command will ever be called; the first command might have an exceptional return code such as [break] or [error].

What's needed is for the commands to advertise their behavior to the interpreter. Actually, Tcl already has this! It's called bytecode. Surely a complete Brush implementation would have something similar.

If x doesn't exist (has no value), is shared, or has a write trace, this optimization does not apply. Just let normal execution run its course. By the way, write traces imply sharing, because the trace command needs to access both the old and new value of a variable. Read traces are okay.

The next step is for [list replace] (or whatever command) to look before it leaps. Before making any modifications whatsoever to the variable, it must check for every possible error condition. In this case, the only errors that can happen are for the value to not be a list or to be too short. Once it's certain the command will complete successfully, and that x's value is unshared and has no write trace, the command is free to operate directly on x, achieving optimal performance. In this case, the only thing [set] does is inform the interpreter of the desired semantics, being that x is intended to be overwritten as soon as the command returns.

Actually, there's another way to look at it for this particular example. The bytecode compiler, [list replace], and [set] can cooperate to transform to:
set &x{2:3} (foo bar quux)

And now there's no question about sharing or copies.

Switching gears a bit: just going to jot this down so I don't forget. [take] (or an automatic version thereof) is actually useful in cases where the manipulation isn't done via command. For example, appending a newline:
set &x $x\n

Ideally this would be just as fast as the [append] version. But in current Tcl, it involves loading the value of $x onto the stack, appending to the value on the top of the stack (thereby creating a new, unshared copy), then popping that value from the stack and into the variable (thereby freeing the original, if unshared). That's a wasted copy. Calling [append] skips the stack and therefore enables in-place modification.
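The wasted copy is easy to observe in stock Tcl with [time]; a rough sketch (absolute timings vary by machine and Tcl version, but the gap is large):

set x [string repeat y 10000000]
puts [time {set x $x\n} 100]     ;# copies the ~10 MB string on every iteration
set x [string repeat y 10000000]
puts [time {append x \n} 100]    ;# appends in place while x stays unshared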
A simpler perspective

AMG: Seeing the above two examples:
set &x{2:3} (foo bar quux)
set &x $x\n

makes me think the [take] functionality would be better had another way: pass variable arguments in the traditional manner, and instead focus on optimizing stuff that's bytecoded inline. The first example shows that Brush's indexing capabilities make it rare to need a command for doing the kinds of operations that could have benefited from [take]. The second shows a case where in current Tcl no command is needed at all, other than [set], yet which could stand to be optimized.

In both cases, the bytecode compiler ought to implement the [set] command (a.k.a. [=] as proposed elsewhere on this page) with inline bytecodes, not as a command invocation. The bytecode compiler is also responsible for whatever manipulations may be necessary to index into the variable and modify its value. So if it takes the long view rather than looking through a microscope all the time, it may realize that the duplication can be skipped in certain circumstances.

The first case (set &x{2:3} (foo bar quux)) is easiest, since $x is never substituted, so there is no temptation to duplicate. However, it's still extremely interesting because it's equivalent to set &x [list replace $x 2 3 foo bar quux]. Therefore, writing the latter should produce the same optimized result. This establishes the nature of the relationship between [list replace] (or any optimized command) and the bytecode compiler. But it's quite complex, much more than needs to be specified now.

The second case (set &x $x\n) is trickier because a straightforward stack-based implementation will briefly have x's original value both in the (not yet modified) variable and on the stack. Modifying the stack then triggers a wasteful duplication. Here is where the bytecode compiler needs to look ahead and see the potential for an in-place modification.
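For those wanting to confirm what the stack-based implementation does today, Tcl 8.6 can dump its compiler output via an unsupported introspection command; the disassembly of the second case shows $x being pushed onto the stack before the store:

::tcl::unsupported::disassemble script {set x $x\n}    ;# Tcl 8.6+, unsupported API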
Creative writing

AMG: Adopt TIP 278 [9]. See also: Dangers of creative writing.
Typographic issues
Wrong phone number

AMG: The "[email protected]" example on page 18 confuses phone numbers 555-1235 and 555-0216.

Wrong error message

AMG: The "wrong # args" error message on page 16 says "e" when it should say "d".

Simply simply

AMG: The word "simply" is used in two sentences in a row on page 8.

[accum_gen] example

AMG: On page 38, change "accums(a) 0" to just "accums(a)" so as to demonstrate the default value feature.

Substitution summary

AMG: On page 23, in the Tcl column, change $simple_name(index) to $name(index). Change "Not Easily Available" to "Not Available In One Line" or similar.

Reference before variable

AMG: On page 28, instead of saying "the reference exists before the variable is made", say it exists before the variable is given a value.

Not enough arguments

AMG: On page 17, instead of saying "not enough arguments", say "not enough list elements for all variables", or something to that effect.
Examples and motivation

AMG: Incorporate examples and motivational discussion from What Languages Fix.

AMG: Here's a real-life case study from some of my CNC work. This shows a side-by-side comparison of Tcl and Brush, one line at a time, performing a few very basic data structure manipulations. The cuts variable is a list of cuts being made into the material, where each cut is a two-key dict:
- lines: List of first and last zero-based input file line numbers.
- vertices: List of alternating X/Y coordinates.
Tcl code:

dict set zips $zipFile extract [concat [dict get $zips $zipFile extract] [list $asciiFile]]

Brush code:
= &zips($zipFile extract){end+1} $asciiFile

http://wiki.tcl.tk/37931 | CC-MAIN-2017-04 | refinedweb | 18,097 | 60.65
With OneAgent Operator version 0.8.0 there is built-in support for automated container injection via webhook for application-only monitoring at runtime based on a new custom resource of type "OneAgentAPM" and a set of dedicated labels that can be provided for the app container.
My question is this: Does this deployment method conflict in any way with the traditional operator-managed and dockerized fullstack OneAgent?
Say, for example, we decide to add a custom resource of type OneAgentAPM for an operator that is already managing dockerized OneAgents, and we add the label oneagent.dynatrace.com/instance: <OneAgentAPM object name> to an application container running on a node that is already monitored via a dockerized OneAgent. Is this supported, and will there be any limitations with regard to fullstack monitoring of the app container?

The reason I'm asking is that it seems this would allow applications more control and flexibility over the OneAgent version that gets injected into their container, since they can use the label oneagent.dynatrace.com/installer-url to install a specific OneAgent version into the container.
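For concreteness, these labels go in the application pod's metadata; a minimal sketch of what that might look like (the resource names and installer URL below are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                                              # placeholder
  labels:
    oneagent.dynatrace.com/instance: my-oneagentapm         # name of the OneAgentAPM object
    oneagent.dynatrace.com/installer-url: https://example.test/installer.sh   # optional version pin
spec:
  containers:
  - name: app
    image: my-app:1.0                                       # placeholder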
Hi there. I will publish a blog on this shortly. The short story is that you cannot, yet, use this with the fullstack approach. This is coming in a later release. So, for now, you must choose one or the other.
In a few months, you'll be able to control versions as you say, and do some interesting pipeline integrations as well. You could poll the deployment API, see when the latest version is available, download the agent as a docker image, run it through your pipeline, tag it, and push it when you're ready.
So. Think of this as phase 1. It's application only, and it has the advantage of central control (rather than editing Dockerfiles or deployment YAML). Phase 2 will provide full agents as docker images from the cluster via Docker pull. Phase 3 is the whole solution, with a "full stack" approach, proper licensing, logging, admission controller injection, and heap dumps.
Incidentally, we'll also have a containerized ActiveGate at that time, which will allow you to connect to the K8s API in one shot via the same Operator, rather than the two-step dance you need to do today.
So. Lots of goodness coming, but this particular release is a bit limited.
Best.
Hey Matt,
Thanks for the quick feedback - interesting stuff!
What about using the dockerized OneAgent in infrastructure mode together with a selective app-only webhook integration of individual application pods/containers, basically giving users the possibility of "selective fullstack" monitoring per namespace/container/pod? Is this a scenario that will be possible now or in a later phase and if so, how would the licensing work out for such cases?
I think it would be really useful for cases where we only want a small fraction of containers to be deep-monitored and where the fullstack licensing per node is disproportionately expensive compared to app-only monitoring...
The issue here is with the host IDs. Your host agents will see unknown processes that are actually instrumented, but the entity model will be confusing, since each container is reported as its own host.
Understood. So is this something that will be addressed in "phase 3" or is this a fundamental restriction that will hold for the foreseeable future?
Phase 3 will give you what you want. Entity model will work, licensing, all of it.
Also, I just learned that there is some injection in "IM" mode as of 196, which means that agent will override the "app only" agent. Another reason you'll have to wait for phase 3.
Where are we currently with respect to "phase 3"? I realize the new Dynatrace Operator is available but still doesn't have support for automated app-only injection. Also, I would like to know how (or if) it's currently possible to control or override the injected code-module version when full-stack monitoring is deployed, and whether the entity model supports infrastructure-only monitoring of the worker nodes combined with automated app-only monitoring of the containers. By "support" I mean that the relationship between deep-monitored containers and the underlying worker nodes is auto-detected and used accordingly for performing automated RCA.
# 16
Commentary by Dr. Chuck Baynard
Q16. How did God create angels?
Answer: God created all the angels spirits, immortal, holy, excelling in knowledge, mighty in power, to execute his commandments, and to praise his name, yet subject to change.

References: Col. 1:16; Ps. 104:4; Mt. 22:30; Mt. 25:31; 2 Sam. 14:17; Mt. 24:36; 2 Thess. 1:7; Ps. 103:20-21; 2 Pet. 2:4.
Angels have always captured the imagination of man, and volumes have been written about them. However, there isn't that much disclosed about them in the Holy Writ that is not covered in this statement by the Westminster Divines. John Calvin taught that there were some things that God had not disclosed, and that to enter into speculation about such matters was unlawful for man and sin. Paul makes a similar statement when he speaks of being snatched away to the third heaven, where he heard things that were not lawful for man to utter. Paul spoke these words in the context of visions and revelations of the Lord (2 Corinthians 12:4). So it is, I think, that we need to tread lightly concerning angels in general, and where Scripture remains silent, let us not enter into vain superstitions and speculations.
One of the things that caught my eye on first reading was that the angels are subject to change. When speaking of mankind, the choice of word is subject to fall. This is one of the places one could wish to have sat with the Divines as these two words were discussed. I see no indication here that it is impossible for there to be a future rebellion in the heavenly host, with other angels by reason of sin being transformed from angels of light to those of darkness. Yet this doesn't seem to fit the whole of Scripture, and we need more light to understand what the Divines were saying in this question.
The same term of "election" is applied to both
angels and men in Reformed theology. We could in a sense then
even claim that the "Unconditional Election" of the
TULIP applied to angels. Can we find other points that transcend
these two areas of creation? I dont think it is much of a
reach to see the bright hues of the fifth petal shining over both
realms, and those whom God has "elected" both of angels
and men resting safely in the decrees of God. As we begin to look
at this question in the light of the TULIP, I think we can get a
glimmer of why the choice of change for angels and fall for men.
If petals two and five apply, what of the other three. We see the
fourth petal fade when moved to the heavenly abode of angels, for
there is no offer of grace to "changed" angels as there
is to fallen man. We also see that without the
"archetypical" representative having fallen into sin,
that the first petal cannot be applied as it is with man who is
conceived and born in sin. The difference? All angels are of
immediate creation and have not a "federal"
representative from which sin can be imputed. Haven fallen from
the holy estate in which they were created, there is no offer of
a savior and thus the third petal too grows dim when trying to
transcend the spiritual realm of Gods angels.
While this isn't exactly a precise or orthodox method, I think when we look at the Scriptures and the doctrines derived therefrom in the light of the TULIP, we will see much understanding of doctrines that on the surface appear to have no connection to the classic points of Calvinism. As long as we don't have to stretch or twist Scripture in the process, using the TULIP as a hermeneutical tool will help us stay on solid ground theologically. For as we develop one doctrine, it cannot contradict nor diminish another part of the whole. As Scripture itself is most beautiful and majestic in part because of its unity from cover to cover, so our total theology must have the same unity. If we accept, then, the five points of Calvinism as the heart of Reformed theology, we must at least see if we harm this vital organ of our faith as we develop individual tenets of this theology.
What we can know for sure about angels that is of any import to us, then, is contained in this statement of the Divines, containing only twenty-seven words. While there are other areas or duties we can find support for in Scripture, I find this statement most complete and sufficient to be taught as doctrine of the Church. We can rest assured that the Divines were well aware of Calvin's words concerning angels, and though Calvin ventures a Sabbath day's journey further, so to speak, he too limits conclusions about this heavenly host to a very short segment of his "Institutes." Lest we wander into some strange place, I think we do well to emulate these fathers of the Reformed faith. One can then wonder where such volumes of work concerning the angels, and the myriad of icons depicting these unseen citizens of heaven, come from. While I would hesitate to go so far as to declare them satanic in origin, they are at best the vain ramblings of uncontrolled imaginations of men. Such is to be guarded against, lest we too fall into idolatry as some of our ancient fathers did when venturing beyond the revelation of the Scriptures.
Dr. Chuck Baynard - 246 Rainbow Circle, Clover, SC 29710 - October 1997
I am new to C++ and programming in general, and am working on an upcoming homework assignment. I have run into several problems and would greatly appreciate assistance. The assignment is to write a program to determine area and volume in C++, using a while loop. I am stuck with the while statement; I am not sure how to state when I want it to stop. Below is what I have so far.
#include <iostream>
using namespace std;

int area (int l, int w);           // function prototype
int volume (int l, int w, int d);  // function prototype

int main (int argc, char* argv[])
{
    int length, width, depth;  // parameters for the areas and volumes
    int result;                // result is output
    cout << endl;

    // read in the values of length, width, and depth
    While (?)
    {
        cout << "If you want the result of the calculation to be an area,";
        cout << endl;
        cout << "place a 0 in the depth parameter ";
        cout << endl << endl;
        cout << "Enter a length (postive integer): ";
        cin >> length;
        cout << "Enter a width (postive integer): ";
        cin >> width;
        cout << "Enter a depth (postive integer): ";
        cin >> depth;
        cout << endl;

        if (depth == 0)
        {
            result = area (length, width);
            cout << "With a length of " << length;
            cout << " and a width of " << width;
            cout << " the area is " << result << endl;
        }
        else
        {
            result = volume (length, width, depth);
            cout << "With a length of " << length;
            cout << " a width of " << width;
            cout << " and a depth of " << depth;
            cout << " the volume is " << result << endl;
        } // end if
    } // end loop

    return 0;
} // end main function

int area (int l, int w)
{
    int result;
    result = area(l, w);
    return result;
} // end area function

int volume (int l, int w, int d)
{
    int result;
    result = volume(l, w, d);
    return result;
} // end volume function
Thank you,
bmgee13 | https://www.daniweb.com/programming/software-development/threads/76079/c-homework-assistance | CC-MAIN-2017-09 | refinedweb | 284 | 56.66 |
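A minimal sketch of one way to complete this: use a sentinel value to control the loop (here assuming an entered length of 0 means quit), and have area and volume compute their results directly; as written above, they call themselves and would recurse forever:

#include <iostream>
using namespace std;

int area (int l, int w);           // returns l * w
int volume (int l, int w, int d);  // returns l * w * d

int main ()
{
    int length, width, depth;

    cout << "Enter a length (positive integer, 0 to quit): ";
    cin >> length;

    while (length != 0)  // sentinel-controlled loop: stops when the user enters 0
    {
        cout << "Enter a width (positive integer): ";
        cin >> width;
        cout << "Enter a depth (0 for an area calculation): ";
        cin >> depth;

        if (depth == 0)
            cout << "The area is " << area (length, width) << endl;
        else
            cout << "The volume is " << volume (length, width, depth) << endl;

        cout << "Enter a length (positive integer, 0 to quit): ";
        cin >> length;
    }

    return 0;
}

int area (int l, int w)
{
    return l * w;      // no recursion: just the formula
}

int volume (int l, int w, int d)
{
    return l * w * d;  // likewise
}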
Toffee Base Library (libToffee.fx)
The Toffee Base Library (contained in libToffee.fx and libToffee.a) is a small library that can be optionally used in Elements projects for the Cocoa platforms. It provides some of the core types, classes and functions that are needed for more advanced language features of Oxygene, C# and Swift, such as generic arrays and dictionaries, LINQ support, and more.
Source Code
The Toffee Base Library is open source and implemented fully in Oxygene. You can find the source code on GitHub, and contributions and pull requests are more than welcome.
Note that you will usually need the very latest compiler to rebuild libToffee yourself, and in some cases, the git repository might even contain code that requires a later compiler than is available. Refer to the commit messages for details on this, and check out an older revision, if necessary.
Namespaces
The Toffee Base Library is divided into these namespaces:
See Also
- Toffee Base Library in the Cocoa section
- Toffee Base Library on GitHub
Version Notes
libToffee.fx was called libNougat.fx in Elements 8.3 and below. Projects will automatically be migrated to use the libToffee.dll reference when opened in Fire or Visual Studio.