text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Let’s say you’d like to add a CSS Modules Stylesheet to your project. You can find Create React App’s guidance here, but essentially — and as the guidance states — CSS Modules let you use the same CSS selector in different files without worrying about naming clashes. This works because each HTML element in your file that you want to style is automatically given a unique class name.
This can seem quite confusing at first, but really the process to implement CSS Modules can be simplified to just 4 steps, as demonstrated in the below example.
Adding modular CSS to a simple <Link /> component
- A feature of React is that CSS Modules are “turned on” for files ending with the
.module.cssextension. Create the CSS file with a specific filename in the following format:
Link.module.css
2. Import styling to your component:
import styles from ‘../styling/components/Link.module.css’
3. The styles in your CSS file can follow your preferred naming convention, for instance:
.bold { font-weight: bold;}
4. The style is applied to the HTML element as follows:
className={styles.bold}
And that’s it!
Photo credit: Adrian Swancar on Unsplash | https://www.freecodecamp.org/news/how-to-add-a-css-modules-stylesheet-to-your-react-component-in-4-simple-steps/ | CC-MAIN-2021-43 | refinedweb | 195 | 64.61 |
Dove Tales Newsletter Carramar Campus
6 July 2012
in this issue
From the Principal
From the Principal
M
Whole School St Stephen’s School on Facebook iPad Case Protection ‘Back to the 80s’ Musical Memorabilia Request Canteen News Cybersafety Seminar Online Reporting About Telethon Speech & Hearing
Primary Coffee Connect Congratulations Holiday Fun at the RSPCA WA Active After School Communities Program
Secondary Reporting Day Year 12 Parents International Success Indonesian Visitors Congratulations Pupil Free Days Year 12s 2013 Maths XTC Badge Winners Homeroom Connect: Forgiveness in Charis Burundi Fundraising Upper School Physical Recreation Shine Program for Girls LevelUp Program for Boys Year 11 & 12 Exams Semester 2 Exam Schedule Music Notices Holiday Programs Free School Holiday Body Image Art Workshop Parenting Program
School Information Term Dates 2012 & 2013 Office Hours / Contact Numbers Uniform Shop Parent Connect Act Belong Commit
Dove DoveTales Tales
Issue 10
usic is all about emotion. It is a unique opportunity to share how you feel with somebody else in a way that words alone cannot convey. Words are often about what we think, about ideas and thoughts. When we put them in the format of poetry or lyrics, words can convey some sense of feeling. However, when words are put together with music, we connect our thoughts with our feelings in a most uniquely human way – we sing what we feel and we feel what we sing. People who listen to the same music often feel the same emotions. Whether it is a love song, a national anthem, or a sports song, the emotions felt are common to those who experience them. When people listen to a major key they feel happier, yet when they listen to a minor key they feel sadder. There is something genetic, something wired into our DNA, that makes us feel the same things when we listen to the same music. Music is a uniquely human experience that communicates uniquely human emotions. Emotions are essential to being human. Those without emotion may be described as psychopathic. We need to love, cry, care, feel. Yet, emotion and passion when unfettered and unrestrained can lead to devastating effects – anger, violence, and sometimes bloodshed. Singing, however, offers a unique opportunity to inform our emotions. When we put words or lyrics to music we combine ideas with passion, thoughts with emotions. In doing so we can align particular ideas and thoughts with particular passions and emotions. We learn how to feel. Songs give voice to our hearts. Music stirs the soul and the spirit, while words give our minds ideas and thoughts. When music is combined with the right lyrics, songs can invoke nationalistic pride, Christian praise, joyous celebration, and the melancholic loss of loved ones. Singing connects our rational centre of the brain (the prefrontal cortex) with our emotional centre of the brain (the limbic system). The more we connect and lay down neuronal pathways between our prefrontal cortex and our limbic system the more we are capable of pro-social behaviours, self-restraint, and human maturity, of thoughtful and considerate passion. St Stephen’s School is blessed to have an amazing Music program, especially singing. I delight in the wonderful talents of our singers and musicians, and in our teachers who grow and nurture their talents. To further develop and encourage singing and music at St Stephen’s School (as well as dance and drama – I will need to write of our amazing dramatists in another article!) we are currently looking to appoint a Director of Performing Arts to lead the Performing Arts from Years K to 12 across all Carramar and Duncraig campuses. Let us pray that God will continue to bless our School’s Performing Arts and provide us with a Director of Performing Arts who will lead our growing of people in music, singing, drama and dance. We can all think, we can all feel, we can all sing. Tony George Principal
DUNCRAIG
C ARRAMAR
100 Doveridge Drive, Duncraig WA 6023 (Year 3 -12) 50 St Stephens Crescent, Tapping WA 6065 9 Brookmount Ramble, Padbury WA 6025 (ELC) (K-12) PO Box 68, Greenwood, WA 6924 PO Box 246, Joondalup, WA 6919 Ph: +61 8 9243 2100 Fax: +61 8 9243 2490 Ph: +61 8 9306 7100 Fax: +61 8 9306 7101
Issue 10, 6 July 2012
1.
Whole School Information Back to Menu Important Dates St Stephen’s School on Facebook St Stephen’s School has its own Facebook site. Visit and ‘Like” our page! You can also follow us on Twitter
Friday 6 July End of Term 2 Monday 23 July Pupil Free Day Academic Staff return Tuesday 24 July Pupil Free Day Wednesday 25 July Students return
iPad Case Protection It is difficult to recommend iPad cases as there are so many, and personal preference or style plays an important role. From an educational perspective within a school, it is essential that the iPad is protected from physical damage. The glass touch screen can be easily broken if the iPad is dropped, knocked or crushed. If the iPad is under warranty, Apple will replace a broken or cracked screen for $279 by replacing the whole iPad. Students treasure their iPads and generally take good care of them but accidents do happen. Visit many2one.com.au and then many2one.com.au/ ipads/protect/ to see our suggested models to protect an iPad at school. Stephen Corcoran
Friday 3 August Year 3 - 6 Interschool Cross Country Thu 9 to Sat 11 August Year 7 - 12 Musical Production Back to the 80s Friday 10 August Primary Athletics Carnival Wednesday 15 August Cybersafety Seminar 7.00 pm Friday 17 August 40 Hour Famine
Director of iEducation
St Stephen’s School Carramar Presents...
Below: Primary Captains’ Lunch Arts & Smarts - Music (pic 2 & 3)
Thursday 9 - Saturday 11 August 2012 7.00 pm start (11 August, 1.00 pm matinee) Sports and Learning Centre (Gymnasium) St Stephen’s School Carramar 50 St Stephens Crescent, Tapping Contact: 9306 7100
‘Back to the 80s’ Musical - 9 to 11 August Tickets now available from Main Administration.
Memorabilia Request Do you have any old 80s memorabilia? Records? Cassettes? Posters? Knick-knacks? Leg warmers? Puffy sleeved taffeta ball gowns? If you’ve any 80s relics - from clothing, to props, posters & LPs - that you’d be happy to donate or loan to the Drama department for our upcoming musical, we would be most appreciative. Many thanks! Alison Hignett
Canteen News High School House Athletic Carnival – 3 August 2012. Please note the canteen will be open this day for students to order lunch. Parents are also able to purchase lunch, snacks and hot and cold drinks on the day. A big thanks to all our lovely volunteers for your help this term, well look forward to seeing you next term.
Dove Tales
Issue 10, 6 July 2012
2.
Whole School Back St Stephen’s School Introduces
CYBERSAFETY SEMINAR
Online Reporting
Introduction to the Dangerous, Sleazy and Unsafe Life of Social Networking - What parents can do Just when you think it couldn’t become more complicated, the social networking scene has! The newest craze is called Tumblr and is described as a Web Blog. As of 8 June 2012, Tumblr has over 58.9 million blogs and more than 24.7 billion total posts.Tumblr posts can be anonymous which means if the privacy settings are not chosen, anyone can write and say whatever they like. Additionally, the site has become one of protest, unhappiness, bad language, disrespect and unhealthy venting. Sadly, it also provides a cauldron of hate mail which cooks away in your homes. Tumblr is popular among the counter–culture, that is, those who self harm, have eating disorders and are prone to substance abuse. Even those not involved in these activities will be familiar with the sites of which we write.
Track your child’s academic progress at any time via the School’s online reporting service known as ‘EdInfo’. All published academic reports, including selected subject grades and marks for the current academic year, will be offered online for your child attending Secondary School. You will soon receive a letter describing this service, along with a secure username and password. When you have this information visit and login with your username and password to access your children’s results.
The site is causing unrest and trouble in our community and parents really need to make informed decisions about whether or not it is in the best interests of their children to be using Tumblr. St Stephen’s School will take action against those in our community who intimidate, threaten and belittle others wherever it happens.
Venue:
Technology Hub St Stephen’s School Carramar - 50 St Stephens Crescent, Tapping
Date:
Wednesday 15 August 2012 Presentation 7.00 – 8.30 pm
RSVP:
Please RSVP by Monday 6 August at
If you experience any difficulty please contact the School by phone or email providing full details of the issue: Carramar: 9306 7100 or email admincarramar@ststephens.wa.edu.au Duncraig: 9243 2100 or email adminduncraig@ststephens.wa.edu.au
Inspired learning that transforms and empowers lives
Dove Tales
Issue 10, 6 July 2012
3.
Primary Back to Menu Coffee Connect
Holiday Fun at the RSPCA WA
Please note that as from Term 3, Coffee Connect will take place every Tuesday in the Primary Forum from 8.45 am to 10.00 am.
RSPCA WA has a huge range of holiday boredom busters that kick off this July. There are old favourites - Be a Volunteer, Talk to the Animals, Doggie Delights & Radical Reptiles.
All parents and younger siblings welcome!
Congratulations Flynn O’Neill (6C) recently achieved the following results in Karate competitions: 2012 State Karate Championships Boys Kata 10/11 years - Bronze Medal 2012 Margaret River Tournament of Champions Boys Kata 10/11 years - First Place 2012 Karate Weset Shobukan Open Championships Boys Kata 10/11 years - First Place Boys Kumite 10/11 years - Second Place
We have many new exciting sessions: Birds of Prey for all ages, Be an Inspector, Be a Volunteer Junior and Intermediate and joint sessions with the Perth Zoo “Wild about Animals” and a new session with Landsdale Animal Farm. There’s something for all kids aged 4-17 years. All sessions involve animal interaction, a shelter tour and are reasonably priced. Visit to find out more and call 9209 9325 for bookings.
Well done Flynn!
Active After School Communities Program - Term 3 We will commence the AASC program for Term 3 in Week 3, on Wednesday 8 August 2012. This will run for a 7 week period, finishing in Week 9, Thursday 20 September. Your child or children, from Pre-Primary to Year 6, are welcome to come along and participate on a Wednesday or Thursday afternoon, (or both if they’d like). The afternoon will run from 3.30 pm to 4.30 pm. AASC’s aim is to encourage children to actively engage in sporting activities rather than sitting in front of the computer or TV screen all afternoon. The sessions are free, but please note that it is not a childcare service! All are required to become actively engaged in the sessions and the use of computer games etc. is not accepted. At each AASC session there is a fruit break where we provide assorted seasonal fruit and vegetables such as apples, oranges, pears, carrots and cheese etc. for your child. If they are unable to eat these fruits and vegetables or require a bit more sustenance for the afternoon, please ensure that they have another healthy snack to eat. Attached is a form to complete for Term 3. This form needs to be returned to the Primary School Office before your child can participate in the program for Term 3. Mrs Sharon Burnett and Mrs Lorna Crabb are the AASC Coordinators. If you have any queries they can be contacted on 0432 921 627 or chat to them on the green the days of the sessions. If the weather is not cooperating, the session is then moved to the Primary Forum. Tracey and Tina are also available to assist in the Primary School Office, or you can call them on 9306 7111 during the school hours. Thank you for your support.
Dove Tales
Issue 10, 6 July 2012
4.
Secondary Back to Menu Reporting Day
Indonesian Visitors
Parents are advised that the Year 7 to 9 Student / Parent / Teacher Reporting Day will be held on Tuesday, 24 July. Interview schedules, details and reports will be sent home with the student via homeroom shortly to make booking times. Reports will be printed the last week of term.
Year 12 Parents Year 12 students are encouraged to check their personal details and enrolments with the School Curriculum and Standards Authority online at If any of the details are incorrect, please inform Student Services and they will make the changes with the Authority. A link to this facility has been posted on the Authority’s Facebook page at facebook.com/SCSAWA. Students may find this a convenient way to access the site.
International Success
Each day our visitors toured the School and observed how the Hearing Impaired students were supported in their mainstream classes. They saw the note taking and tuition support given to the Secondary students, classroom support given in the Primary setting, as well as a one on one session given in the Hearing Impaired Room.
Phoebe Moses, Year 8 represented Australia this month at the Indiana University, USA. As the National Junior Scenario Writing Champion in 2011, she was invited to compete in the futuristic short story writing section of the Future Problem Solving Program International Conference.
Their response to what they saw and their excitement after speaking with our students was a reminder that the TSH School Support Programs are highly successful in helping our students reach their full potential in mainstream schools.
At the conference, under exam conditions, Phoebe wrote a story based on ethical practice in the pharmaceutical industry of the future. Her international team with representatives from Australia, Florida, California and Alabama was awarded third place in the competition. Phoebe’s insightful individual story was awarded fourth place out of the twenty seven countries and USA states represented in her division. At the colourful Opening Ceremony, Phoebe processed with all the international flagbearers in costumes representing their nation. She proudly carried the Western Australian flag and was dressed in a surf-life saving costume. Other students in the Western Australian affiliate were from Wesley College and Kensington Primary School. All the Australian competitors sat together at the opening ceremony and there was a great feeling of comraderie. Prior to the Opening Ceremony, over a thousand international students chatted outside the auditorium and exchanged stick pins from their country. Phoebe’s black swans, kangaroos and Australian accent proved to be very popular. The students and teachers attending enjoyed sharing ideas at the beautiful university campus with its impressive limestone architecture, maple and sycamore trees and wild squirrels and rabbits. Throughout the week there was a wonderful atmosphere of intercultural exchange. The Conference itself was evidence of the role international collaborative thinking and co-operation plays in creating a positive future.
Dove Tales
Last month, a group of teachers, hospital staff and government officials from East Java visited the St Stephen’s Carramar Outpost. They were in Perth to study the current training and program facilities for Hearing Impaired children in W.A. as Indonesia is moving towards the inclusive education of children with disabilities.
The visit from our Indonesian colleagues was a wonderful experience for all involved. Once again our students did us proud as they responded to the many questions of our visitors. Also to be commended is the staff and students of St. Stephen’s Carramar, who were most gracious in participating in the hosting of our guests. Josie Hawkins, Teacher of the Deaf, St Stephen’s Carramar Outpost.
Congratulations Karan Sandhoo was recently selected for Regionals in the U13 Boys North Metro Regional Hockey Team. Well done Karan. Sam Jones, Jordan Visser and Matthew De Beer from the Carramar campus, and Damien Cilliers from Duncraig campus, on their selection in the Under 14 North Regional Squad for the upcoming Rugby Union State Championships. Well done gentlemen!
Pupil Free Days A reminder that Monday 23 July and Tuesday 24 July are Pupil Free Days. We look forward to seeing all students return Wednesday 25 July.
Year 12s 2013 This is a reminder that the 2013 Year 12 students will commence their Academic year on Monday 29 October (Week 3, Term 4). This gives them a week to rest after their exams before having to attend school.
Issue 10, 6 July 2012
5.
Secondary Back to Menu Maths XTC Badge Winners Congratulations to the following students who were awarded XTC badges for outstanding performance in their mathematics classes in Term 1: Year 7: Kieran Fitzgerald, Charli Morrison, Harrison Hyde, Andrea Lwin, Khushi Kamani Year 8: Dennis Mader, Rachel Wordsworth, Elle Warren, Phoebe Moses, Megan Gray, Tegan Simonsen Year 9: Michael Wells, Stuart Purdie, Aimee D'Cruz, James Rimmer, Brodie Knox, Nathan Thompson Year 10: Daniella Radaelli, Samuel Brown, Scott Sinnott,William Stock, Kelsey Dootson Year 11: Lauren Smith, Ryan Carter, Jye Welch, Jessica Saunders Year 12: Kooper Delacy, Alex Wilson, Sri Unnikrishnan, Nicholas Roland, Max Dickson, Aaron Dickson, Chris Moore, Troy Campbell (pictured bottom right) XTC Extreme (4th year): Nirali Parikh (Year 10), Nick Blanchard (Year 11) - pictured right. XTC Extreme squared (5th year): Jessica Bartels (Year 12) - pictured top right Students were entered into the Australian Informatics Competition and we would like to congratulate these 6 students who achieved a distinction, placing them in the top 10 % of Australia: Nick Blanchard, Nisarg Thakrar, Lauren Smith, Stuart Purdie, Jason Smith, Kooper Delacy
Homeroom Connect: Forgiveness in Charis This term Charis house has been looking at the meaning of forgiveness in Homeroom connect. Homeroom Connect is a new initiative aimed at bringing students closer together to form a greater understanding of the messaging in the bible. Students have been exploring their understanding of forgiveness. How easy is it to forgive? Why does God forgive our sins? Why should we forgive? Students answered these questions and other like them, together led by a Year 10, 11 or 12 student leader. Students were then asked to work together as a group to create a visual representation of forgiveness and also a short poem or story. Some of these will be framed and put on display in the Charis house forum. For if you forgive others their trespasses your heavenly Father will also forgive you - Matthew 6:14
Burundi Fundraising St Stephen’s Secondary School is fundraising to help build a school in Burundi. Burundi is a country in the Great Lakes region of Eastern Africa. The future of Burundi is somewhat bleak as less than 50% of children attend school and HIV/AIDS is almost out of control. In addition, basic foods and medicines are in short supply and St Stephen’s School is aiming to raise $10,000 to build a school for the children. On Thursday 21 June, Makaria House Council held a sausage sizzle for staff and students. It was an extremely successful outcome, raising $524.50! We apologise to those who missed out on receiving a sausage sizzle are running another one today, 6 July!
Dove Tales
In other news, Makaria is holding their yearly ‘Walkathon’ on 17 August, in which we ask families and friends to sponsor students giving generously to reach our Secondary School goal of $10,000! Thank you for your on going support in helping us to achieve our goal. Emily Parfitt Year 12 Makaria House Council Captain
Issue 10, 6 July 2012
6.
Secondary Back to Menu Upper School Physical Recreation
Year 11 & 12 Exams
We all know the benefits of exercise to an overall healthy lifestyle – we feel better about ourselves, we reduce our risk of heart disease, and there is strong evidence that suggests our thinking and cognitive processes actually work better as well when we exercise regularly. This is one of the main reasons we value continued physical activity throughout high school years and encourage an active healthy lifestyle beyond. Not only are we more likely to live a longer and healthier life, we are more productive as well. To this end we are in the process of making some changes to our Year 11 and 12 Physical Recreation program.
Expressions of Interest are requested for 30 parents to assist us with the invigilation of the Year 11 & 12 Semester 2 Examinations. Held in October, the Examinations will be conducted from Monday 8 to Friday 19 October (second week of the October school holidays and Term 4 Week 1). If you have half or full days available during this period, and would like to work off some 2012 Family Commitment hours, then please contact Miss Simone Robinson as soon as possible: simone.robinson@ststephens.wa.edu.au
As of Term 3 2012, we will be moving the program to school based activities. There will still be offered a range of activities for the students to choose from, everything from Zumba and yoga through to more traditional sports such as Basketball, Netball and Volleyball,and as the program expands we are looking at other possibilities. We feel bringing this program back to this base is the perfect way to continue to model and encourage a healthy, active lifestyle, and focus on those positive social aspects that sport can provide. As the students come to the end of their schooling and they leave us as young men and women, we feel it’s important to reinforce the safe health message we deliver in the lower school health program. The final term of the year in this course we will be looking at health and safety issues relevant to our young people as they prepare for the next stage of their lives. This will be class room based and will involve a variety of approaches, including guest speakers.
Please include your preferred phone contact details, your availability, and your child’s name & year group. Please note the times: morning sessions 8.30 am – 12.45 pm and afternoon sessions from 12.30 pm (with differing finish times - between 3.45 and 4.45 pm).
Semester 2 Exam Schedule As previously advised, students on the Year 11 French Tour will have a discreet examination timetable that continues into Week 2, Term 4. Year 10 Exams will be held Monday 17 September – Friday 21 September (Week 9, Term 3)
Music Notices
We are excited about the direction this program is heading and look forward to working with the students in its continued development.
Shine Program for Girls The popular ShineGirl program returns and begins on Friday, 10 August. ShineGirl equips Year 7 - 10 girls to: • • • •
Identify themselves as valuable with much to contribute Build confidence, self esteem and self worth Develop decision making skills Understand they can have a positive influence in the world
The program is run on Fridays by outside facilitators. The girls will miss the last 30 minutes of Period 3 and lunch each Friday for Term 3, therefore any work missed will be the girls’ responsibility to catch up. There is no cost to families. Parents are asked to nominate their daughters through Ms Louise Desimone on 9306 7177 as soon as possible. Limited to 20 places.
LevelUp Program for Boys Relationships Australia returns to offer this program of Dealing With the Hassles of Life for Year 7 to 10 boys. This program will run through Term 3 after school on Wednesdays in the Library. Limited to 15 boys and led by two trained, experienced counsellors, the boys will be taught: • Self discipline • Dealing with anger • Developing self confidence • Building negotiation, cooperation and communication skills. Parents are asked to nominate their sons through Ms Louise Desimone on 9306 7177. Dove Tales
Thank you to all the students who participated in our annual Interhouse Music Festival and congratulations to the winning House Parresia who were glad to receive the 2012 Music Shield. Parresia 119 Points, Makaria 84 points and Charis 63 points. Our annual Battle of the Bands will be held in Week 6, 28 - 31 August. Student bands will fiercely battle out for the title of Winning Band 2012, this year will also feature the 2011 winning band Revelry and Requiem. Parents and friends are welcome to attend the event which will be held in the Tech Hub lunch times (1.25 – 1.50 pm). Instrumental/Vocal lessons and rehearsals for Term 3 will recommence Week 1 unless otherwise advised by your Music teacher. Academic copies of Sibelius 7, Pro-Tools 10 and Auralia 4 are available for students to purchase. This includes four years of free Sibelius upgrades all for $324. We also have brand new guitar packages for students; this includes a classical guitar with soft case, cleaning cloth, learner CD and tuning pitch pipe. Guitar sizes for all ages PP – 12, prices range from $85 to $106 per guitar package. Contact Mr de Bie to secure your order for these amazing student deals: shannon. debie@ststephens.wa.edu.au Applications for instrumental and vocal lessons are open all year round. Please collect these from either front admin or our Music administrator Mr James Kros (james.kros.@ststephens.wa.edu.au)
Issue 10, 6 July 2012
7.
Community News Back to Menu Holiday Programs Click on the following headings for more information on the flyers below: * ECU Kids Holiday Program * Bouncers * Fremantle Football Club * Fun4Kids - C3 Church *
Free School Holiday Body Image Art Workshop WOMEN’s Healthworks is running art workshops focused on improving body image over the school holidays at the Blender Gallery, Central Walk, Joondalup; the dates are 12 July and 19 July between 10.30 am and 1.30 pm, light refreshments will be provided. There are 10 places per group - please contact Project Officer Jemma Caswell at WOMEN’S Healthworks on 9300 1566 or email the LYT program email: gabrielle@womenshealthworks.org.au. You can also visit Facebook.
Parenting Program Tuning in to Kids - Emotionally Intelligent Parenting. Also suitable for teachers, social workers, psychologists, psychiatrists, paediatricians, occupational therapists, speech language therapists, guidance officers, child care workers and others working with children and families. The course is being held on two full days,Thursday 18 & Friday 19 October at Curtin University. For further information, including costing, visit:
5th Annual Kids Mega Market Sale
Dove Tales Dove Tales
Issue 10, 6 July 2012
8.
Community Back to Menu
Keep physically active for health and happiness Remember the last time you took a brisk walk in the park, kicked the footy with mates or took a dive into the cool water of the Indian Ocean? What about the relaxation that came from tending to your garden? Didn’t you feel great? With such a diverse environment and climate in Western Australia, we have plenty of options for finding an enjoyable physical activity that suits our lifestyle. And the good news is being active helps us keep mentally healthy too.
How does being physically active keep us mentally healthy? Even a small amount of physical activity alters the levels of mood chemicals in our brain. This reduces anxiety, gives us a feeling of calmness and increases our general wellbeing. [1, 2] We don’t need to run a marathon or climb Mt Kosciuszko to get the mental health benefits of keeping active. Simply setting small, achievable physical activity goals and challenges for ourselves can increase our confidence and self esteem.[3] When we achieve these goals we gain a sense of accomplishment which helps keep us mentally healthy. Being physically active can also take our mind off daily problems and unpleasant thoughts. We relax and unwind which puts us in a more positive frame of mind to deal with any challenges in our everyday lives. Our concentration and memory also improve which helps our performance at work and the quality of our social relationships. [3]
To achieve the benefits of being active, the National Physical Activity Guidelines for Australian Adults recommends we: • think of movement as an opportunity not an inconvenience • be active every day in as many ways as we can • put together at least 30 minutes of moderate intensity physical activity on most, preferably all, days of the week • if we can, also enjoy some regular vigorous activity for extra health benefits. [4]
Be active with others Being active is great but being active with others has extra benefits. Physical activity in groups keeps us motivated, allows us to catch up with friends and widens our social networks. This creates a sense of togetherness that makes communities safer and more enjoyable places in which to live, work and play. [5]
Dove Tales
Issue 10, 6 July 2012
9.
Community Back to Menu
How can I fit more physical activity into my day? We can all find ways to fit a little more physical activity into our lives and enjoy the physical and mental health benefits. Try to think of easy ways that you can incorporate physical activity into your daily routine. Here are some ideas… • Leave the car keys on the hook and walk or cycle to the shops. • Get outside and play a game of hopscotch or kick a ball in the park with your kids (or someone else’s kids). • Turn up the music, sing along and dance while you vacuum. • Hop off the bus, train or tram one stop early and walk the rest of the way. • Swap your office chair for a fit ball to tone your core muscles. • Plan active outings for the family such as swimming or bush walking. • Invite your work mates for a stroll outside on your lunch break. • Join a walking group. • Wherever you can, challenge yourself to take the stairs instead of a lift or escalator. • If you work in an office, take a break from your desk and walk over to speak to your colleague instead of emailing them.
Physical activity makes us feel happier, energised and more confident. So go on, stay active and keep mentally healthy! It’s as easy as A-B-C: Act-Belong-Commit. Can assist with…
Contact details
Website
Check out the Find Thirty every day campaign website. The campaign aims to increase the number of West Australian adults who are sufficiently active for good health. The site includes information on the campaign and how to Find Thirty® every day, as well as information for professionals working in physical activity.
The Physical Activity Taskforce’s be active wa site offers information for Western Australians on how to be active, as well as information for government and community agencies on how to promote and support physical activity.
®
Find a local sport and recreation club using the Department of Sport and Recreation online directory! Search by location or activity to find something that suits you in just a few clicks.
(08) 9492 9700
1. Paluska SA, Schwenk TL. Physical activity and mental health. Sports Medicine, 2000. 29(3): p.14. 2. Australian Bureau of Statistics. Participation in sports and physical recreation 2005-06. [Internet] 2005 [cited 2009 May 6]; (ABS cat. no. 4177.0). Available from: 3. Department of Sport and Recreation (WA). Benefits of physical activity [Internet]. Perth: Government of Western Australia; 2008 [updated 2008 Sept 18; cited 2009 May 6]. Available from: 4. Department of Health and Ageing (AU). Physical activity guidelines. [Internet]. Canberra: Australian Government; 2009 [updated 2009 March 23; cited 2009 May 23]. Available from: 5. Better Health Channel (AU). Physical activity - choosing the one for you. The State of Victoria (Australia); 2008 [updated 2008 March; cited 2009 May 28]; Available from: pages/Physical_activity_what’s_right_for_you?OpenDocument
Dove Tales
Issue 10, 6 July 2012
10.
School Information Back to Menu Term Dates 2012 Term 2 Teachers return Students return Term ends
Term Dates 2013 Monday 23 April (Staff Retreat 23 and 24 April) Thursday 26 April Friday 6 July
Term 1 Teachers return Tuesday 29 January Commissioning Service Students return Friday 1 February Term ends Friday 19 April
Term 3 Teachers return Monday 23 July (Reporting Day Tuesday 24 July) Students return Wednesday 25 July Friday 28 September Term ends
Term 2
Term 4 Staff return Students return Students leave Teaching staff leave
Term ends Monday 15 October Tuesday 16 October Friday 7 December Thursday 13 December
Staff Conference
Thursday & Friday 30 - 31 May Reporting Day (Yrs 7-9) Tuesday 7 May*2 Students return (Primary) Tuesday 7 May*3 Students return (Secondary) Wednesday 8 May Friday 5 July
Term 3 Teachers return
Monday 22 July
Reporting Day (Primary) Tuesday 23 July
Students return (Secondary) Tuesday 23 July*4
Office Hours / Contact Numbers
Students return (Primary) Wednesday 24 July
Secondary Office:
9306 7100
Term ends
Primary Office:
9306 7111
Term 4
Hours:
8.00 am – 4.00 pm
Teachers return
Monday 14 October
Students return
Tuesday 15 October
Students leave
Friday 6 December
Teachers leave
Thursday 12 December
Uniform Shop Normal Trading Hours Carramar Tuesdays:
12.30 pm - 4.00 pm
Thursdays:
8.00 am - 11.00 am
*1
Friday 27 September
*1 changed from 6-7 May *2 was Term 3 *3 was Wed 8 May *4 was Wed 24 July
Canteen Act Belong Commit
Phone: 9306 7132
Parent Connect
Parent Connect is a school-based network of parents, teachers and friends of the St Stephen’s School community. Parent Connect aims to: • Encourage parent and community participation through social activities and school events • Promote the recreation and welfare of students and families • Assist in providing facilities and equipment for the School through fundraising activities • Organise events for year groups and wider community based events such as school discos, school fetes, busy bees and guest speakers Please visit the Parent Connect page of the school website to find out latest news and more information about coming events.
Dove Tales
Issue 10, 6 July 2012
11. | https://issuu.com/ststephensschool/docs/dove_tales_carramar_issue_10 | CC-MAIN-2017-09 | refinedweb | 5,670 | 60.35 |
So I've set up the if else and loop for the program. I've hit a brick wall with the conversions. Let's say i enter 5 feet and 4 inches. I need it to say 1 meters and 62.56 centimeters and vice versa when i do the other option. How do I do coding for the necessary calculations for the output?? Please help.. I am a beginner and just started taking a course at school.
Code:#include<iostream> using namespace std; void main() { int choice; do { float ft,in,m,cm; cout<<"Choose Task:"<<endl; cout<<"1.Convert Feet and Inches to Meters and Centimeters."<<endl; cout<<"2.Convert Meters and Centimeters to Feet and Inches."<<endl; cout<<"Enter 0 to end program."<<endl; cin>>choice; if (choice==1) { cout<<"Enter Feet: "; cin>>ft; cout<<"Enter Inches: "; cin>>in; cout<<"The total feet and inches in meters and centimeters is "<<()<<"m and "<<()<<"cm."<<endl; cout<<" "<<endl; } else if (choice==2) { cout<<"Enter Meters: "; cin>>m; cout<<"Enter Centimeters: "; cin>>cm; cout<<"The total meters and centimeters in feet and inches is "<<()<<" ft and "<<()<<" in."<<endl; cout<<" "<<endl; } } while(choice!=0); if(choice==0) cout<<"Program Ended."<<endl; } | http://cboard.cprogramming.com/cplusplus-programming/128282-help-cplusplus-feet-inches-meters-centimeters-vice-versa-program.html | CC-MAIN-2014-15 | refinedweb | 200 | 69.07 |
In Chapter 7, we pointed out that Scala doesn't have many built-in control abstractions, because it gives you the ability to create your own. In the previous chapter, you learned about function values. In this chapter, we'll show you how to apply function values to create new control abstractions. Along the way, you'll also learn about currying and by-name parameters.
All functions are separated into common parts, which are the same in every invocation of the function, and non-common parts, which may vary from one function invocation to the next. The common parts are in the body of the function, while the non-common parts must be supplied via arguments. When you use a function value as an argument, the non-common part of the algorithm is itself some other algorithm! At each invocation of such a function, you can pass in a different function value as an argument, and the invoked function will, at times of its choosing, invoke the passed function value. These higher-order functions—functions that take functions as parameters—give you extra opportunities to condense and simplify code.
One benefit of higher-order functions is they enable you to create control abstractions that allow you to reduce code duplication. For example, suppose you are writing a file browser, and you want to provide an API that allows users to search for files matching some criterion. First, you add a facility to search for files whose names end in a particular string. This would enable your users to find, for example, all files with a ".scala" extension. You could provide such an API by defining a public filesEnding method inside a singleton object like this:
object FileMatcher { private def filesHere = (new java.io.File(".")).listFilesThe filesEnding method obtains the list of all files in the current directory using the private helper method filesHere, then filters them based on whether each file name ends with the user-specified query. Given filesHere is private, the filesEnding method is the only accessible method defined in FilesMatcher, the API you provide to your users.
def filesEnding(query: String) = for (file <- filesHere; if file.getName.endsWith(query)) yield file }
So far so good, and there is no repeated code yet. Later on, though, you decide to let people search based on any part of the file name. This is good for when your users cannot remember if they named a file phb-important.doc, stupid-phb-report.doc, may2003salesdoc.phb, or something entirely different, but they think that "phb" appears in the name somewhere. You go back to work and add this function to your FileMatcher API:
def filesContaining(query: String) = for (file <- filesHere; if file.getName.contains(query)) yield fileThis function works just like filesEnding. It searches filesHere, checks the name, and returns the file if the name matches. The only difference is that this function uses contains instead of endsWith.
The months go by, and the program becomes more successful. Eventually, you give in to the requests of a few power users who want to search based on regular expressions. These sloppy guys have immense directories with thousands of files, and they would like to do things like find all "pdf" files that have "oopsla" in the title somewhere. To support them, you write this function:
def filesRegex(query: String) = for (file <- filesHere; if file.getName.matches(query)) yield fileExperienced programmers will notice all of this repetition and wonder if it can be factored into a common helper function. Doing it the obvious way does not work, however. You would like to be able to do the following:
def filesMatching(query: String, method) = for (file <- filesHere; if file.getName.method(query)) yield fileThis approach would work in some dynamic languages, but Scala does not allow pasting together code at runtime like this. So what do you do?
Function values provide an answer. While you cannot pass around a method name as a value, you can get the same effect by passing around a function value that calls the method for you. In this case, you could add a matcher parameter to the method whose sole purpose is to check a file name against a query:
def filesMatching(query: String, matcher: (String, String) => Boolean) = {In this version of the method, the if clause now uses matcher to check the file name against the query. Precisely what this check does depends on what is specified as the matcher. Take a look, now, at the type of matcher itself. It is a function, and thus has a => in the type. This function takes two string arguments—the file name and the query—and returns a boolean, so the type of this function is (String, String) => Boolean.
for (file <- filesHere; if matcher(file.getName, query)) yield file }
Given this new filesMatching helper method, you can simplify the three searching methods by having them call the helper method, passing in an appropriate function:
def filesEnding(query: String) = filesMatching(query, _.endsWith(_))The function literals shown in this example use the placeholder syntax, introduced in the previous chapter, which may not as yet feel very natural to you. Thus, here's a clarification of how placeholders are used in this example. The function literal _.endsWith(_), used in the filesEnding method, means the same thing as:
def filesContaining(query: String) = filesMatching(query, _.contains(_))
def filesRegex(query: String) = filesMatching(query, _.matches(_))
(fileName: String, query: String) => fileName.endsWith(query)Because filesMatching takes a function that requires two String arguments, however, you need not specify the types of the arguments. Thus you could also write (fileName, query) => fileName.endsWith(query). Since the parameters are each used only once in the body of the function, and since the first parameter, fileName, is used first in the body, and the second parameter, query, is used second, you can use the placeholder syntax: _.endsWith(_). The first underscore is a placeholder for the first parameter, the file name, and the second underscore a placeholder for the second parameter, the query string.
This code is already simplified, but it can actually be even shorter. Notice that the query gets passed to filesMatching, but filesMatching does nothing with the query except to pass it back to the passed matcher function. This passing back and forth is unnecessary, because the caller already knew the query to begin with! You might as well simply remove the query parameter from filesMatching and matcher, thus simplifying the code as shown in Listing 9.1.
object FileMatcher { private def filesHere = (new java.io.File(".")).listFiles
private def filesMatching(matcher: String => Boolean) = for (file <- filesHere; if matcher(file.getName)) yield file
def filesEnding(query: String) = filesMatching(_.endsWith(query))
def filesContaining(query: String) = filesMatching(_.contains(query))
def filesRegex(query: String) = filesMatching(_.matches(query)) }
This example demonstrates the way in which first-class functions can help you eliminate code duplication where it would be very difficult to do so without them. In Java, for example, you could create an interface containing a method that takes one String and returns a Boolean, then create and pass anonymous inner class instances that implement this interface to filesMatching. Although this approach would remove the code duplication you are trying to eliminate, it would at the same time add as much or more new code. Thus the benefit is not worth the cost, and you may as well live with the duplication.
Moreover, this example demonstrates how closures can help you reduce code duplication. The function literals used in the previous example, such as _.endsWith(_) and _.contains(_), are instantiated at runtime into function values that are not closures, because they don't capture any free variables. Both variables used in the expression, _.endsWith(_), for example, are represented by underscores, which means they are taken from arguments to the function. Thus, _.endsWith(_) uses two bound variables, and no free variables. By contrast, the function literal _.endsWith(query), used in the most recent example, contains one bound variable, the argument represented by the underscore, and one free variable named query. It is only because Scala supports closures that you were able to remove the query parameter from filesMatching in the most recent example, thereby simplifying the code even further.
The previous example demonstrated that higher-order functions can help reduce code duplication as you implement an API. Another important use of higher-order functions is to put them in an API itself to make client code more concise. A good example is provided by the special-purpose looping methods of Scala's collection types.[1] Many of these are listed in Table 3.1 in Chapter 3, but take a look at just one example for now to see why these methods are so useful.
Consider exists, a method that determines whether a passed value is contained in a collection. You could of course search for an element by having a var initialized to false, looping through the collection checking each item, and setting the var to true if you find what you are looking for. Here's a method that uses this approach to determine whether a passed List contains a negative number:
def containsNeg(nums: List[Int]): Boolean = { var exists = false for (num <- nums) if (num < 0) exists = true exists }If you define this method in the interpreter, you can call it like this:
scala> containsNeg(List(1, 2, 3, 4)) res0: Boolean = falseA more concise way to define the method, though, is by calling the higher-order function exists on the passed List, like this:
scala> containsNeg(List(1, 2, -3, 4)) res1: Boolean = true
def containsNeg(nums: List[Int]) = nums.exists(_ < 0)This version of containsNeg yields the same results as the previous:
scala> containsNeg(Nil) res2: Boolean = falseThe exists method represents a control abstraction. It is a special-purpose looping construct provided by the Scala library rather than being built into the Scala language like while or for. In the previous section, the higher-order function, filesMatching, reduces code duplication in the implementation of the object FileMatcher. The exists method provides a similar benefit, but because exists is public in Scala's collections API, the code duplication it reduces is client code of that API. If exists didn't exist, and you wanted to write a containsOdd method, to test whether a list contains odd numbers, you might write it like this:
scala> containsNeg(List(0, -1, -2)) res3: Boolean = true
def containsOdd(nums: List[Int]): Boolean = { var exists = false for (num <- nums) if (num % 2 == 1) exists = true exists }If you compare the body of containsNeg with that of containsOdd, you'll find that everything is repeated except the test condition of an if expression. Using exists, you could write this instead:
def containsOdd(nums: List[Int]) = nums.exists(_ % 2 == 1)The body of the code in this version is again identical to the body of the corresponding containsNeg method (the version that uses exists), except the condition for which to search is different. Yet the amount of code duplication is much smaller because all of the looping infrastructure is factored out into the exists method itself.
There are many other looping methods in Scala's standard library. As with exists, they can often shorten your code if you recognize opportunities to use them.
In Chapter 1, we said that Scala allows you to create new control abstractions that "feel like native language support." Although the examples you've seen so far are indeed control abstractions, it is unlikely anyone would mistake them for native language support. To understand how to make control abstractions that feel more like language extensions, you first need to understand the functional programming technique called currying.
A curried function is applied to multiple argument lists, instead of just one. Listing 9.2 shows a regular, non-curried function, which adds two Int parameters, x and y.
scala> def plainOldSum(x: Int, y: Int) = x + y plainOldSum: (Int,Int)Int
scala> plainOldSum(1, 2) res4: Int = 3
By contrast, Listing 9.3 shows a similar function that's curried. Instead of one list of two Int parameters, you apply this function to two lists of one Int parameter each.
scala> def curriedSum(x: Int)(y: Int) = x + y curriedSum: (Int)(Int)Int
scala> curriedSum(1)(2) res5: Int = 3. Here's a function named first that does in spirit what the first traditional function invocation of curriedSum would do:
scala> def first(x: Int) = (y: Int) => x + y first: (Int)(Int) => IntApplying 1 to the first function—in other words, invoking the first function and passing in 1—yields the second function:
scala> val second = first(1) second: (Int) => Int = <function>Applying 2 to the second function yields the result:
scala> second(2) res6: Int = 3These first and second functions are just an illustration of the currying process. They are not directly connected to the curriedSum function. Nevertheless, there is a way to get an actual reference to curriedSum's "second" function. You can use the placeholder notation to use curriedSum in a partially applied function expression, like this:
scala> val onePlus = curriedSum(1)_ onePlus: (Int) => Int = <function>The underscore in curriedSum(1)_ is a placeholder for the second parameter list.[2] The result is a reference to a function that, when invoked, adds one to its sole Int argument and returns the result:
scala> onePlus(2) res7: Int = 3And here's how you'd get a function that adds two to its sole Int argument:
scala> val twoPlus = curriedSum(2)_ twoPlus: (Int) => Int = <function>
scala> twoPlus(2) res8: Int = 4
In languages with first-class functions, you can effectively make new control structures even though the syntax of the language is fixed. All you need to do is create methods that take functions as arguments.
For example, here is the "twice" control structure, which repeats an operation two times and returns the result:
scala> def twice(op: Double => Double, x: Double) = op(op(x)) twice: ((Double) => Double,Double)DoubleThe type of op in this example is Double => Double, which means it is a function that takes one Double as an argument and returns another Double.
scala> twice(_ + 1, 5) res9: Double = 7.0
Any time you find a control pattern repeated in multiple parts of your code, you should think about implementing it as a new control structure. Earlier in the chapter you saw filesMatching, a very specialized control pattern. Consider now a more widely used coding pattern: open a resource, operate on it, and then close the resource. You can capture this in a control abstraction using a method like the following:
def withPrintWriter(file: File, op: PrintWriter => Unit) { val writer = new PrintWriter(file) try { op(writer) } finally { writer.close() } }Given such a method, you can use it like this:
withPrintWriter( new File("date.txt"), writer => writer.println(new java.util.Date) )The advantage of using this method is that it's withPrintWriter, not user code, that assures the file is closed at the end. So it's impossible to forget to close the file. This technique is called the loan pattern, because a control-abstraction function, such as withPrintWriter, opens a resource and "loans" it to a function. For instance, withPrintWriter in the previous example loans a PrintWriter to the function, op. When the function completes, it signals that it no longer needs the "borrowed" resource. The resource is then closed in a finally block, to ensure it is indeed closed, regardless of whether the function completes by returning normally or throwing an exception.
One way in which you can make the client code look a bit more like a built-in control structure is to use curly braces instead of parentheses to surround the argument list. In any method invocation in Scala in which you're passing in exactly one argument, you can opt to use curly braces to surround the argument instead of parentheses.
For example, instead of:
scala> println("Hello, world!") Hello, world!You could write:
scala> println { "Hello, world!" } Hello, world!In the second example, you used curly braces instead of parentheses to surround the arguments to println. This curly braces technique will work, however, only if you're passing in one argument. Here's an attempt at violating that rule:
scala> val g = "Hello, world!" g: java.lang.String = Hello, world!
scala> g.substring { 7, 9 } <console>:1: error: ';' expected but ',' found. g.substring { 7, 9 } ^
Because you are attempting to pass in two arguments to substring, you get an error when you try to surround those arguments with curly braces. Instead, you'll need to use parentheses:
scala> g.substring(7, 9) res12: java.lang.String = wo
The purpose of this ability to substitute curly braces for parentheses for passing in one argument is to enable client programmers to write function literals between curly braces. This can make a method call feel more like a control abstraction. Take the withPrintWriter method defined previously as an example. In its most recent form, withPrintWriter takes two arguments, so you can't use curly braces. Nevertheless, because the function passed to withPrintWriter is the last argument in the list, you can use currying to pull the first argument, the File, into a separate argument list. This will leave the function as the lone parameter of the second argument list. Listing 9.4 shows how you'd need to redefine withPrintWriter.
def withPrintWriter(file: File)(op: PrintWriter => Unit) { val writer = new PrintWriter(file) try { op(writer) } finally { writer.close() } }
The new version differs from the old one only in that there are now two parameter lists with one parameter each instead of one parameter list with two parameters. Look between the two parameters. In the previous version of withPrintWriter, shown here, you see ...File, op.... But in this version, you see ...File)(op.... Given the above definition, you can call the method with a more pleasing syntax:
val file = new File("date.txt")In this example, the first argument list, which contains one File argument, is written surrounded by parentheses. The second argument list, which contains one function argument, is surrounded by curly braces.
withPrintWriter(file) { writer => writer.println(new java.util.Date) }
The withPrintWriter method shown in the previous section differs from built-in control structures of the language, such as if and while, in that the code between the curly braces takes an argument. The withPrintWriter method requires one argument of type PrintWriter. This argument shows up as the "writer =>" in:
withPrintWriter(file) { writer => writer.println(new java.util.Date) }What if you want to implement something more like if or while, however, where there is no value to pass into the code between the curly braces? To help with such situations, Scala provides by-name parameters.
As a concrete example, suppose you want to implement an assertion construct called myAssert.[3] The myAssert function will take a function value as input and consult a flag to decide what to do. If the flag is set, myAssert will invoke the passed function and verify that it returns true. If the flag is turned off, myAssert will quietly do nothing at all.
Without using by-name parameters, you could write myAssert like this:
var assertionsEnabled = trueThe definition is fine, but using it is a little bit awkward:
def myAssert(predicate: () => Boolean) = if (assertionsEnabled && !predicate()) throw new AssertionError
myAssert(() => 5 > 3)You would really prefer to leave out the empty parameter list and => symbol in the function literal and write the code like this:
myAssert(5 > 3) // Won't work, because missing () =>By-name parameters exist precisely so that you can do this. To make a by-name parameter, you give the parameter a type starting with => instead of () =>. For example, you could change myAssert's predicate parameter into a by-name parameter by changing its type, "() => Boolean", into "=> Boolean". Listing 9.5 shows how that would look:
def byNameAssert(predicate: => Boolean) = if (assertionsEnabled && !predicate) throw new AssertionError
Now you can leave out the empty parameter in the property you want to assert. The result is that using byNameAssert looks exactly like using a built-in control structure:
byNameAssert(5 > 3)A by-name type, in which the empty parameter list, (), is left out, is only allowed for parameters. There is no such thing as a by-name variable or a by-name field.
Now, you may be wondering why you couldn't simply write myAssert using a plain old Boolean for the type of its parameter, like this:
def boolAssert(predicate: Boolean) = if (assertionsEnabled && !predicate) throw new AssertionErrorThis formulation is also legal, of course, and the code using this version of boolAssert would still look exactly as before:
boolAssert(5 > 3)Nevertheless, one difference exists between these two approaches that is important to note. Because the type of boolAssert's parameter is Boolean, the expression inside the parentheses in boolAssert(5 > 3) is evaluated before the call to boolAssert. The expression 5 > 3 yields true, which is passed to boolAssert. By contrast, because the type of byNameAssert's predicate parameter is => Boolean, the expression inside the parentheses in byNameAssert(5 > 3) is not evaluated before the call to byNameAssert. Instead a function value will be created whose apply method will evaluate 5 > 3, and this function value will be passed to byNameAssert.
The difference between the two approaches, therefore, is that if assertions are disabled, you'll see any side effects that the expression inside the parentheses may have in boolAssert, but not in byNameAssert. For example, if assertions are disabled, attempting to assert on "x / 0 == 0" will yield an exception in boolAssert's case:
scala> var assertionsEnabled = false assertionsEnabled: Boolean = falseBut attempting to assert on the same code in byNameAssert's case will not yield an exception:
scala> boolAssert(x / 0 == 0) java.lang.ArithmeticException: / by zero at .<init>(<console>:8) at .<clinit>(<console>) at RequestResult$.<init>(<console>:3) at RequestResult$.<clinit>(<console>)...
scala> byNameAssert(x / 0 == 0)
This chapter has shown you how to build on Scala's rich function support to build control abstractions. You can use functions within your code to factor out common control patterns, and you can take advantage of higher-order functions in the Scala library to reuse control patterns that are common across all programmers' code. This chapter has also shown how to use currying and by-name parameters so that your own higher-order functions can be used with a concise syntax.
In the previous chapter and this one, you have seen quite a lot of information about functions. The next few chapters will go back to discussing more object-oriented features of the language.
[1] These special-purpose looping methods are defined in trait Iterable, which is extended by List, Set, Array, and Map. See Chapter 17 for a discussion.
[2] In the previous chapter, when the placeholder notation was used on traditional methods, like println _, you had to leave a space between the name and the underscore. In this case you don't, because whereas println_ is a legal identifier in Scala, curriedSum(1)_ is not.
[3] You'll call this myAssert, not assert, because Scala provides an assert of its own, which will be described in Section 14.1. | https://www.artima.com/pins1ed/control-abstractionP.html | CC-MAIN-2018-22 | refinedweb | 3,894 | 60.55 |
ubuntuone-login crashed with ValueError in call_async(): Unable to guess signature from an empty dict
Bug Description
This error happens because Ubuntu SSO cannot be contacted by ubuntuone-login process. Eventually the latter gives up and throws a DBus error.
If you are experiencing this issue please check whether you have a custom python installation used by default:
which python
Expected output: /usr/bin/python
If you have e.g. another python binary in /usr/local/bin and `which python` shows it then ubuntu-sso-client will fail to start leading to this issue.
If you don't have another python installation please leave a comment with the output of:
/usr/lib/
Related branches
- Manuel de la Peña (community): Disapprove on 2012-05-28
- Alejandro J. Cura (community): Needs Fixing on 2012-05-24
- Roberto Alsina (community): Approve on 2012-05-23
- Diff: 42 lines (+8/-3)1 file modifiedubuntuone/platform/credentials/linux.py (+8/-3)
- Manuel de la Peña (community): Approve on 2012-05-28
- dobey (community): Approve on 2012-05-25
- Diff: 79 lines (+34/-3)2 files modifiedtests/platform/credentials/test_linux.py (+26/-0)
ubuntuone/platform/credentials/linux.py (+8/-3)
- Brian Curtin (community): Approve on 2012-07-23
- Diff: 78 lines (+23/-9)2 files modifiedtests/syncdaemon/test_sync.py (+15/-7)
ubuntuone/syncdaemon/sync.py (+8/-2)
Get a UbuntOne crash at each and every login since upgrading Maverick => Natty beta.
Happens on all 3 different machines out of 3 machines on which I have tried.
The traceback appears to be entirely inside dbus-python so moving to there.
I tried to use Ubuntu One, suddenly the error appeared.
This seems to be back as it occurred right after upgrading to 11.10Alpha of July 10.
11.10 beta. I get this error during logining in.
while attempting to register tomboy for sync
I get this bug with 12.04 right after login.
I just updated 12.04 and UbuntuOne crashed on login to the desktop and brought me here. This is regression, U1 has been working fine on precise so far. Also cannot log in to U1 client: red writing on the client states: "There was a problem while retrieving the crede" and then cuts off. Unable run u1sdtool to stop and start the daemon. Going to log a separate bug about that issue.
So U1 is completely dead
I got the same bug on login - after today's upgrade.
Still occurs.
Ditto, new U1 client now states "File Sync error. (auth failed (AUTH_FAILED))
My user details are present and I have not changed my password
no one has been attending to this as it was marked incorrectly. Changed to ubuntuone
psypher, what makes you think this was marked incorrectly? The traceback from the original report is all entirely in dbus-python. And your error does not sound at all anything like this. Why did you move it back to u1-client?
Marked as affecting both packages until someone works out for certain which is at fault.
I've also seen it at login on a freshly updated 12.04 system (upgraded from 11.10).
This is the error details I get when trying to start the U1 client. It's similar, but note that this traceback *does* go through ubuntuone-client.
CredentialsError
DBusException(
Here's the u1 frame pulled out:
File "/usr/lib/
reply_
Status changed to 'Confirmed' because the bug affects multiple users.
@Rodney, I was told by the guys one the IRC channel that it was marked incorrectly as dbus and hence no-one from u1 saw it and wasn't working on it
Zakaria, can you please explain why you changed this from Confirmed to Opinion in dbus-python?
amen
Presumably this happens if dbus-python doesn't manage to introspect the credentials interface, so it doesn't know how to marshal the empty dictionary.
I'm not sure why this would happen for some people and not others, but we could probably work around it by passing dbus.Dictionary({}, signature='ss') to find_credentials() instead (or whatever the signature is meant to be).
Hello,
any update on a fix for this bug?
Regards
Wim
Adding ubuntuone-
As instructed I ran:
/usr/lib/
And this produced the following output:
Traceback (most recent call last):
File "/usr/lib/
from ubuntu_sso.qt.main import main
ImportError: No module named ubuntu_sso.qt.main
Was asked to run three commands, output is attached.
Those who experience this issue - do you happen to have a custom Python installation?
What is the output of `which python` ?
A follow up to the previous message.
In case you do have a custom Python installation then this is what breaks ubuntu-sso-client since the packaged version uses /usr/bin/env python which in turn causes your custom interpreter to be started which may not know about ubuntuone modules at all. The task to switch to /usr/bin/python interpreter for various Ubuntu One-related applications is LP:984089.
The comment in #25 is an example of custom python installations that gets used by default.
This bug was fixed in the package ubuntuone-client - 3.99.0-0ubuntu1
---------------
ubuntuone-client (3.99.0-0ubuntu1) quantal; urgency=low
* New upstream release.
- Use dbus.Dictionary to pass empty dicts. (LP: #711162)
- Ignore IN_CLOSE_WRITE for directories. (LP: #872894)
- Validate SSL certificates better. (LP: #882062, LP: #1014654)
- Ignore .goutputstream temporary flies. (LP: #1012620)
- Handle failures better in share creation. (LP: #1013180)
- Re-upload files when server reports empty hash. (LP: #1013401)
* debian/control:
- Update some build dependencies in preparation for testing during builds,
and to allow building on older supported versions of Ubuntu.
* debian/watch:
- Update to use stable-4-0 series for Quantal releases.
-- Rodney Dawes <email address hidden> Tue, 19 Jun 2012 16:58:05 -0400
Hello Id2ndR, or anyone else affected,
Accepted ubuntuoneone-client - 3.0.2-0ubuntu1
---------------
ubuntuone-client (3.0.2-0ubuntu1) precise-proposed; urgency=low
* New upstream release. (LP: #1018991)
- Wrap empty dicts with dbus.Dictionary. (LP: #711162)
- Ignore IN_CLOSE_WRITE for directories. (LP: #872894)
- Ignore .goutputstream temporary flies. (LP: #1012620)
- Handle failures better in share creation. (LP: #1013180)
* debian/copyright:
- Remove comma in list of files for dep5 copyright format.
* 00_bzr1259_
- Re-upload files when server reports empty hash. (LP: #1013401)
* debian/patches:
- Remove upstreamed patches.
-- Rodney Dawes <email address hidden> Mon, 09 Jul 2012 15:46:44 -0400
I removed the dbus-python bug task because this is a deliberate decision in dbus-python, not a bug. Specifically, this comment in message-append.c sheds some light:
/* No items, so fail. Or should we guess "a{vv}"? */
I actually think dbus-python is probably doing the right thing by refusing to guess the signature of an empty dictionary. It's probably a bug that the documentation doesn't describe this, and I will file such a documentation bug upstream.
How do I fix this?
CredentialsError
DBusException(
setting to high since it seems quite some users run into it | https://bugs.launchpad.net/ubuntu/+source/ubuntuone-client/+bug/711162 | CC-MAIN-2018-05 | refinedweb | 1,160 | 58.89 |
Lazy loading is useful while dealing with large hierarchical data sources where you would like to avoid the delays involved in loading the entire data set at once.
The TreeView control makes lazy-loading super easy as only two steps are required:
The tree in example below starts with three lazy-loaded nodes. When you expand them, the lazyLoadFunction is invoked. The function uses a timeout to simulate an http delay and returns data for three child nodes, one of which is also a lazy-loaded node.
<div id="theTree"></div>
import * as wjNav from '@grapecity/wijmo.nav'; // create the tree var tree = new wjNav.TreeView('#theTree', { itemsSource: getData(), displayMemberPath: 'header', childItemsPath: 'items', lazyLoadFunction: lazyLoadFunction }); // start with three lazy-loaded nodes function getData() { return [ { header: 'Lazy Node 1', items: [] }, { header: 'Lazy Node 2', items: [] }, { header: 'Lazy Node 3', items: [] } ]; } // function used to lazy-load node content function lazyLoadFunction(node, callback) { setTimeout(function () { // simulate http delay var result = [ // simulate result { header: 'Another lazy node...', items: [] }, { header: 'A non-lazy node without children' }, { header: 'A non-lazy node with child nodes', items: [ { header: 'hello' }, { header: 'world' }] }]; callback(result); // return result to control }, 2500); // 2.5sec http delay }
By default, lazy nodes load their data only once, when the node is expanded for the first time. You can change that behavior for selected nodes causing them to re-load their data whenever they are expanded. This can be used to improve performance in cases where data is loaded asynchronously.
To do this:
Submit and view feedback for | https://www.grapecity.com/wijmo/docs/Topics/Nav/TreeView/DataBinding/Lazy-Loading | CC-MAIN-2022-40 | refinedweb | 254 | 51.48 |
Posted on March 1st, 2001
Java goes out of its way to guarantee that any variable is properly initialized before it is used. In the case of variables that are defined locally to a method, this guarantee comes in the form of a compile-time error. So if you say::
//: InitialValues.java // Shows default initial values class Measurement { boolean t; char c; byte b; short s; int i; long l; float f; double d; void print() { System.out.println( "Data type Inital value\n" + "boolean " + t + "\n" + "char " + Inital value boolean false char byte 0 short 0 int 0 long 0 float 0.0 double 0.0
The char value is a null, which doesn’t print.
You’ll see later that when you define an object handle inside a class without initializing it to a new object, that handle is given a value of null.
You can see that even though the values are not specified, they automatically get initialized. So at least there’s no threat of working with uninitialized variables.
Specifying initialization Measurement are changed to provide initial values:
class Measurement { boolean b = true; char c = 'x'; byte B = 47; short s = 0xff; int i = 999; long l = 1; float f = 3.14f; double d = 3.14159; <p><tt> //. . . </tt></p>
You can also initialize non-primitive objects in this same way. If Depth is a class, you can insert a variable and initialize it like so:
class Measurement { Depth o = new Depth(); boolean b = true; <p><tt> // . . . </tt></p>
If you haven’t given o an initial value and you go ahead and try to use it anyway, you’ll get a run-time error called an exception (covered in Chapter 9).
You can even call a method to provide an initialization value:
class CInit { int i = f(); //... }
This method can have arguments, of course, but those arguments cannot be other class members that haven’t Measurement will get these same initialization values. Sometimes this is exactly what you need, but at other times you need more flexibility.
Constructor initialization
The constructor can be used to perform initialization, and this gives you greater flexibility in your programming since you can call methods and perform actions at run time to determine the initial values. There’s one thing to keep in mind, however: you aren’t precluding the automatic initialization, which happens before the constructor is entered. So, for example, if you say:
class Counter {// . . .
int i;
Counter() { i = 7; }
then i will first be initialized to zero, then to 7. This is true with all the primitive types and with object handles, including those that are given explicit initialization at the point of definition. For this reason, the compiler doesn’t try to force you to initialize elements in the constructor at any particular place, or before they are used – initialization is already guaranteed. [21]Order of initialization
Within a class, the order of initialization is determined by the order that the variables are defined within the class. Even if the variable definitions are scattered throughout in between method definitions, the variables are initialized before any methods can be called – even the constructor. For example:
//: OrderOfInitialization.java // Demonstrates initialization order. //); // Re-initialize t3 } Tag t2 = new Tag(2); // After constructor void f() { System.out.println("f()"); } Tag t3 = new Tag(3); // At end } public class OrderOfInitialization { re-initialized inside the constructor. The output is:
Tag(1) Tag(2) Tag(3) Card() Tag(33) f()
Thus, the t3 handle?Static data initialization
When the data is static the same thing happens; if it’s a primitive and you don’t initialize it, it gets the standard primitive initial values. If it’s a handle to an object, it’s null unless you create a new object and attach your handle to it.
If you want to place initialization at the point of definition, it looks the same as for non-statics. But since there’s only a single piece of storage for a static, regardless of how many objects are created the question of when that storage gets initialized arises. An example makes this question clear:
//: StaticInitialization.java // Specifying initial values in a // class definition. { public static void main(String[] args) { System.out.println( "Creating new Cupboard() in main"); new Cupboard(); System.out.println( "Creating new Cupboard() in main"); new Cupboard(); t2.f2(1); t3. The output shows what happens: created only when the first Table object is created (or the first static access occurs). After that, the static object is not re-initialized.
The order of initialization is statics first, if they haven’t already been initialized by a previous object creation, and then the non- static objects. You can see the evidence of this in the output.
- The first time an object of type Dog is created, or the first time a static method or static field of class Dog is accessed, the Java interpreter must locate Dog.class, which it does by searching through the classpath.
- As Dog.class is loaded (which creates a Class object, which you’ll learn about later), all of its static initializers are run. Thus, static initialization takes place only once, as the Class object is loaded for the first time.
- When you create a new Dog( ) , the construction process for a Dog object first allocates enough storage for a Dog object on the heap.
- This storage is wiped to zero, automatically setting all the primitives in Dog to their default values (zero for numbers and the equivalent for boolean and char).
- Any initializations that occur at the point of field definition are executed.
- Constructors are executed. As you shall see in Chapter 6, this might actually involve a fair amount of activity, especially when inheritance is involved.
Java allows you to group other static initializations inside a special “static construction clause” (sometimes called a static block ) in a class. It looks like this:
class Spoon {
static int i;
static {
i = 47;
}
// . . .
So it looks like a method, but it’s just the static keyword followed by a method body. This code, like the other static initialization, is executed only once, the first time you make an object of that class or you access a static member of that class (even if you never make an object of that class). For example:
//:) } static Cups x = new Cups(); // (2) static Cups y = new Cups(); // (2) <p><tt>} ///:~ </tt></p>
The static initializers for Cups will be run when either the access of the static object c1 occurs on the line marked (1), or if line (1) is commented out and the lines marked (2) are uncommented. If both (1) and (2) are commented out, the static initialization for Cups never occurs.Non-static instance initialization
Java 1.1 provides a similar syntax for initializing non-static variables for each object. Here’s an example:
//: Mugs.java // Java 1.1 :
{ c1 = new Mug(1); c2 = new Mug(2); System.out.println("c1 & c2 initialized"); }
looks exactly like the static initialization clause except for the missing static keyword. This syntax is necessary to support the initialization of anonymous inner classes (see Chapter 7).
[21] In contrast, C++ has the constructor initializer list that causes initialization to occur before entering the constructor body, and is enforced for objects. See Thinking in C++ .
There are no comments yet. Be the first to comment! | https://www.codeguru.com/java/tij/tij0052.shtml | CC-MAIN-2018-51 | refinedweb | 1,229 | 63.19 |
Brandon S. Allbery KF8NH wrote: > On Aug 13, 2007, at 16:29 , Benjamin Franksen wrote: >>. > > Clearly it does, but not as a side effect of the *monad*. It's > ordinary Haskell data dependencies at work here, not some mystical > behavior of a monad. I can't remember claiming that Monads have any mystic abilities. In fact, these Monads are implemented in such a way that their definition /employs/ data dependencies to enforce a certain sequencing of effects. I think that is exactly the point, isn't dependencies*. Of course, you can unfold (itso inline) bind and return (or never use them in the first place). Again, nobody claimed Monads do the sequencing by employing Mystic Force (tm); almost all Monads can be implemented in plain Haskell, nevertheless they sequence certain effects -- You Could Have Invented Them Monads Yourself ;-) The Monad merely captures the idiom, abstracts it and ideally implements it in a library for your convenience and for the benefit of those trying to understand what your code is supposed to achieve. She reads 'StateM ...' and immediately sees 'ah, there he wants to use some data threaded along for reading and writing'. Cheers Ben | http://www.haskell.org/pipermail/haskell-cafe/2007-August/030474.html | CC-MAIN-2013-48 | refinedweb | 194 | 60.65 |
Background
Note: A newer version of this post exists with an assertion helper for Python 3 and pytest. Read on for Python 2 and unittest and general background on Q objects…
When programmatically building complex queries in Django ORM, it’s helpful to be able to test the resulting Q object instances against each other.
However, Django’s Q object does not implement __cmp__ and neither does Node which it extends (Node is in the django.utils.tree module).
Unfortunately, that means that comparison of Q objects that are equal fails.
>>> from django.db.models import Q >>> a = Q(thing='value') >>> b = Q(thing='value') >>> assert a == b Traceback (most recent call last) ... Assertion Error:
This means that writing unit tests that assert that correct Q objects have been created is hard.
A simple solution
Q objects generate great Unicode representations of themselves:
>>> a = Q(place='Residential') & Q(people__gt=5) >>> unicode(a) u"(AND: ('place', 'Residential'), ('people__gt', 5))"
In addition, it is “good” testing practice to write assertion helpers whenever a test suite has complicated assertions to make frequently. This provides an opportunity to DRY out test code and expand on any error messages that are raised on failure.
Therefore a really simple solution is an assertion helper that would compare Q objects by:
- Asserting that left and right sides are both instances of Q.
- Asserting that the Unicode for the left and right sides are identical.
So here’s a mixin containing the assertion helper. It can be added to any class that extends unittest.TestCase (such as Django’s default TestCase):
from django.db.models import Q class QTestMixin(object): def assertQEqual(self, left, right): """ Assert `Q` objects are equal by ensuring that their unicode outputs are equal (crappy but good enough) """ self.assertIsInstance(left, Q) self.assertIsInstance(right, Q) left_u = unicode(left) right_u = unicode(right) self.assertEqual(left_u, right_u)
Disadvantage of this method is that it is simplistic and doesn’t find all the Q objects that are identical (see below). However, the advantage is that it provides rich diffs on failure:
class TestFail(TestCase, QTestMixin): def test_unhappy(self): """ Two Q objects are not the same """ a = Q(place='Residential') b = Q(place='Palace') self.assertQEqual(a, b)
Gives output:
AssertionError: u"(AND: ('place', 'Residential'))" != u"(AND: ('place', 'Palace'))" - (AND: ('place', 'Residential')) ? ^^^^^^^^^ + (AND: ('place', 'Palace')) ? ^ +++
Which can be very helpful when trying to track down errors.
See this updated post for a version of this assertion helper for Python 3 with pytest.
The perfect world: Predicate Logic
Since Q objects represent the logic of SQL WHERE clauses they are therefore Python representations of predicates. In an ideal world the predicate logic rules of equality could be used to compare Q objects and this would be built directly into Q.__cmp__.
This would mean that:
# WARNING MAGIC IMAGINARY CODE! # Commutative would work >>> a = Q(x=1) | Q(x=2) >>> b = Q(x=2) | Q(x=1) >>> a == b True # Double negation would work >>> a = Q(x=1) >>> b = ~~(Q=1) >>> a == b True # Negation on expression would work >>> a = ~(Q(x=1) & Q(x=2)) >>> b = ~Q(x=1) | ~Q(x=2) >>> a == b True # END IMAGINATION SECTION. | http://jamescooke.info/comparing-django-q-objects.html | CC-MAIN-2017-39 | refinedweb | 530 | 53.92 |
2 red led's went into the pumpkin's eyes, 2 large super bright white led's went into the middle and the PIR sensor was bodged into his nose, all connected together with some dodgy wiring. I pushed the raspberry pi and a x-mini speaker into the middle and all that was needed was some software.
Me and Mrs O'Hanlon recorded some scary sounds using the voice recorded on my phone which I then copied onto the Pi.
I started with the program from raspberry pi spy's article on cheap PIR sensors and the raspberry pi as the basis for my software to which I added in some code to flash the red led's when the PIR wasn't triggered and turn them on and play a random sound when it was.
Anyway, my boy absolutely hated it, it made him cry :( but I stuck it outside the front of the house and the local children loved it.
Code
You can also find the code on github,
import gpioRap as gpioRap import RPi.GPIO as GPIO import subprocess import time import random #Create GpioRap class using BCM pin numbers gpioRapper = gpioRap.GpioRap(GPIO.BCM) #Create an LED, which should be attached to pin 17 white1 = gpioRapper.createLED(4) white2 = gpioRapper.createLED(17) red1 = gpioRapper.createLED(21) red2 = gpioRapper.createLED(22) # Define GPIO to use on Pi GPIO_PIR = 24 # Set pir pin as input GPIO.setup(GPIO_PIR,GPIO.IN) try: Current_State = 0 Previous_State = 0 # Loop until PIR output is 0 while GPIO.input(GPIO_PIR)==1: Current_State = 0 redeyecounter = 0 #Loop until exception (ctrl c) while True: # Read PIR state Current_State = GPIO.input(GPIO_PIR) if Current_State==1 and Previous_State==0: # PIR is triggered print " Motion detected!" # turn on red and white lights red1.on() red2.on() white1.on() white2.on() # play random sound soundno = random.randint(1,6) subprocess.call(["mplayer","/home/pi/dev/pumpkin/"+str(soundno)+".m4a"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Record previous state Previous_State=1 elif Current_State==0 and Previous_State==1: # PIR has returned to ready state print " Ready" # turn off red and white lights red1.off() red2.off() white1.off() white2.off() Previous_State=0 elif Current_State==0 and Previous_State==0: #in steady state, incremenet flash red eye state redeyecounter+=1 #every 5 seconds (ish) of steady state, flash red eyes if redeyecounter == 500: redeyecounter = 0 for count in range(0,3): red1.on() red2.on() time.sleep(0.1) red1.off() red2.off() time.sleep(0.1) # Wait for 10 milliseconds time.sleep(0.01) except KeyboardInterrupt: print "Stopped" finally: #Cleanup gpioRapper.cleanup()
This comment has been removed by the author. | https://www.stuffaboutcode.com/2013/10/halloween-pumpkin-and-raspberry-pi.html | CC-MAIN-2019-35 | refinedweb | 441 | 55.54 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Sat, Mar 31, 2001 at 08:26:58AM -0800, Zack Weinberg wrote: > This patch adds MI guards to genrtl.h, insn-attr.h, insn-codes.h, > insn-config.h, and insn-flags.h. I also took the opportunity to make > the error check at exit of the affected generators a bit more robust. For what it's worth, I thought that the preferred format for the MI guards was #ifndef GCC_FILENAME_H #define GCC_FILENAME_H ... #endif /* GCC_FILENAME_H */ so as not to pollute the reserved namespace. See . Matt
PGP signature | http://gcc.gnu.org/ml/gcc-patches/2001-03/msg02027.html | crawl-001 | refinedweb | 106 | 68.47 |
Difference between revisions of "Jython Scripting"
Revision as of 16:49, 25 September 2009
- 3 Tips and Tricks
- 4 Jython for plugins "Find..." command window to launch it easily (keybinding 'l').
The next time Fiji is run, automatic commands in macros/StartupMacros.txt mesuring - Split channels.
Tips and Tricks
Getting a list of all members in one package
You can use the Python function dir(<package>) to see the contents of a package:
import ij print dir(ij)
Jython for plugins
Using a jython script as a plugin
The simplest way is to place the jython script file into fiji/plugins/ folder or a subfolder, and it will appear in the menus on restarting fiji.
Distributing jython scripts in a .jar file
PLEASE NOTE: there is no need to do the following. See entry above.
We create two jython scripts that we want to distribute in a .jar file as plugins:
The printer.py script:
IJ.log("Print this to the log window") two. | https://imagej.net/index.php?title=Jython_Scripting&oldid=2295&diff=prev | CC-MAIN-2019-47 | refinedweb | 164 | 75.3 |
Topaz: Perl for the 22nd Century
Introduction
One of the more interesting talks at the O’Reilly 1999 Open Source Convention was by Chip Salzenberg, one of the core developers of Perl. He described his work on Topaz, a new effort to completely re-write the internals of Perl in C++. The following article is an abridged version of the transcript of this talk that provide the basic context for Topaz and the objectives for this new project. You can also listen to the complete 85-minute talk using the RealPlayer.
Topaz is a project to re-implement all of Perl in C++. If it comes to fruition, if it actually works, it’s going to be Perl 6. There is, of course, the possibility that for various reasons, things may change and it may not really work out, so that’s why I’m not really calling it Perl 6 at this point. Occasionally I have been known to say, “It will be fixed in Perl 6,” but I’m just speaking through my hat when I say that.
Who’s doing it? Well, it’s me mostly for now because when you’re starting on something like this, there’s really not a lot of room to fit more than one or two people. The core design decisions can’t be done in a bazaar fashion (with an “a”), although obviously they can be bizarre (with an “i”).
When? The official start was last year’s Perl conference. I expected to have something, more or less equivalent to Perl 4 by, well, now. That was a little optimistic.
So how will it be done? Well, it’s being done in C++, and there are some reasons for that, one of which is, of course, I happen to like C++. Actually the very first discussion/argument on the Perl 6 porter’s mailing list was what language to use. We had some runners-up that actually were under serious consideration.
Choosing A Systems Programming Language
Objective C has some nice characteristics. It’s simple and, with a GNU implementation, it is pretty much available everywhere. The downside is that Objective C has no equivalent of inline functions, so you’d have to resort to heavy use of macros again, which is something I’d like to get away from. Also, it doesn’t have any support for namespaces, which means that the entire mess we currently have would have to be carried forward: maintaining a separate list of external functions that need to be renamed by the preprocessor during compilation so that you don’t conflict with somebody else when you embed it in another program. I really hate that part. Even though it’s well done, it’s just one of those things you wish you didn’t have to do.
In C++ you solve that problem by saying “namespace Perl open curly brace,” and the rest is automatic. So that is the reason why Objective C fell out of the running.
Eiffel actually was a serious contender for a long time. That is, until I realized that to get decent performance, Eiffel compilers—or I should say the free Eiffel compiler, because there are multiple implementations—needed to do analysis at link-time as to all the classes that were actually in the program. Eiffel has no equivalent of declaring member functions—I’m using the C++ terminology—declaring them to be virtual or nonvirtual. It intuits this by figuring out the equivalent of the Java characteristic final, i.e., I have no derived classes, at link-time. And so it says, well, if there are no derived classes, then therefore I can just inline this function call. Which is clever and all, but the problem is that Topaz must be able to load classes dynamically at run time and incorporate them into the class structure, and so obviously anything that depends on link-time analysis is right out. So that was the end of Eiffel.
Ada, actually as a language, has much to recommend it. Conciseness is not one of them, but it does have some good characteristics. I do secretly tend toward the bondage and discipline style of programming, i.e., the more you tell the compiler, the more it can help you to enforce the things you told it. However, the only free implementation of Ada, at least the only one I’m aware of, GNAT, is written in Ada. This is an interesting design decision and it obviously helped them. They obviously like Ada so they use it, right? The problem is that if Perl 6 were written in Ada—it would require people to bootstrap GNAT before they could even get to Perl. That’s too much of a burden to put on anybody.
So, we’re left with C++. It’s rather like the comment that I believe Winston Churchill is reported to have said about democracy: It’s absolutely the worst system except for all the others. So, C++ is the worst language we could have chosen, except for all the others.
So, where will it run? The plan is for it to run anywhere that there’s an ANSI-C++ compiler. Those of you who have seen the movie the mini-series Shogun might remember when the pilot is suppose to learn Japanese, and if he doesn’t learn it the entire village will be killed. He can’t stand the possibility of all these deaths being on his head so he’s about to commit suicide and finally the shogun says, “Well, whatever you learn, it will be considered enough,” and so then he’s okay with it. Well, that’s kind of how I feel about Visual C++. Whatever Visual C++ implements, we shall call that “enough,” because I really don’t think that we can ignore Windows as a target market. If nothing else, we need the checklist item—works on Windows. Otherwise the people who don’t understand what’s going on will refuse to Perl in situations where they really need to.
So, you know, unless there’s an overriding reason why it’s absolutely impossible, although we will use ANSI features as much as possible because ANSI C++ really is a well-done description and a well-done specification for C++ with a few minor things I don’t like. Visual C++ is so common we really just can’t afford to ignore it.
As for non-Windows platforms, and even for Windows platforms for some people, EGCS (which actually has now been renamed to GCC 2.95) is a really amazingly good implementation of the C++ language. The kind of bugs, the kind of features that they’re working on the mailing list, are so esoteric that actually it takes me two or three times to read through just the description of the bug before I understand it. The basic stuff is no problem at all.
The ANSI C++ library for EGCS/GCC is really not all that good at this point, but work is under way on that. I expected them to be more or less done by now, but obviously they’re not. I still expect them to be done by the next conference. It’s just that the next conference is now the conference 4.0. By then I hope that we’ll be able to use that library in the Topaz implementation.
Now, the big question:
Why in the world would I do such a thing? Or rather start the ball rolling? Well the primary reason was difficulty in maintenance. Perl’s guts are, well, complicated. Nat Torkington described them well. I believe he said that they are “an interconnected mass of livers and pancreas and lungs and little sharp pointy things and the occasional exploding kidney.” It really is hard to maintain Perl 5. Considering how many people have had their hands in it; it’s not surprising that this is the situation. And you really need indoctrination in all the mysteries and magic structures and so on—before you can really hope to make significant changes to the Perl core without breaking more things than you’re adding.
Some design decisions have made certain bugs really hard to get rid of. For example, the fact that things on the stack do not have the reference counts incremented has made it necessary to fake the reference counting in some circumstances, � la the mortality concept, for those of you who have been in there.
Really, when you think about it, the number of people who can do that sort of deep work because they’re willing to or have been forced to put enough time into understanding it, is very limited, and that’s bad for Perl, I think. It would be better if the barrier to entry to working on the core were lower. Right now the only thing that’s really accessible to everyone is the surface language, so anytime anybody has the feeling that they want to contribute to Perl, the only thing they know how to do is suggest a new feature. I hope in the future they’ll be able to do things like suggest an improvement in the efficiency layer or something like that.
The secondary reason actually is new features. There are some features there where people say, “Yeah, I want that just cuz it’s cool.” First of all, dynamic loading of basic types—and I’ll give an example of that later—the basic concept is if you want to invent a new thing like a B-tree hash, you shouldn’t have to modify the Perl core for that. You should just be able to create an add-on that’s dynamically loaded and inserts itself and then you’d be able to use it.
Robust byte-code compilation is another such feature. Now, in complete honesty, I don’t know. I haven’t looked at the existing byte-code compilation output, but I do know from examining how the internals work that retrofitting something like that is quite difficult. If you incorporate it into the structure of the OP-tree (for those of you who know what that is, the basic operations), there’s the concept of a separation between designing the semantic tree (as in “this is what I want”) versus designing the runtime representation for efficient execution. Once you’ve made that separation, now you can also have a separate implementation of the semantic tree, which is, say, just a list of byte codes that would be easy to write to a file and then read back later. So, separation of representing the OP-tree statically versus what you use dynamically is an important part of that part the internals.
Also, something that could be done currently but nobody’s gotten around to it—Micro Perl. Now if you built Perl, you’ve noticed that there’s a main Perl, and then there’s Mini Perl, which you always to expect to have a little price tag hanging off of, and then there’s the concept of Micro Perl, which is even smaller than Mini Perl. The idea here is: What parts of Perl can you build without any knowledge that Configure would give you. Or perhaps, only very, very, very little configure tests. For example, we could assume ANSI or we could assume pseudo-POSIX. In any case, even if you limit yourself to ANSI, you’ve got quite a bit of stuff. You, of course, have all the basic internal data structures in the language. You can make a call to system, to spawn children, and a few other things, and that basically gives you Perl as your scripting language. Then you can write the configure in Micro Perl. I don’t know about you, but I’d much rather use Micro Perl as a language for configuration than sh, because who knows what particular weird variant of sh you’re going to have, and really isn’t it kind of a pain to have to spawn an external text program just to see if two strings are equal? Come on. Okay, so that’s also part of the plan. We could do this with Perl 5, who knows maybe now that I’ve mentioned it somebody will, but that’s also something I have in mind.
Why not use C? Certainly C does have a lot to recommend it. The necessity of using all those weird macros for namespace manipulation, which I’d rather just use the namespace operator for, and the proliferation of macros are all disadvantages. Stroustrup makes the persuasive argument that every time you can eliminate a macro and replace it with an inline function or a const declaration or something or that sort, you are benefiting yourself because the preprocessor is so uncontrolled and all of the information from it is lost when you get to the debugger. So I’d prefer to use C++ for that reason.
Would it be plausible to use Perl, presumably Perl 5 to automatically generate parts of Perl 6? And the answer is yes, that absolutely will be done. The equivalent of what is now opcode.pl will still exist, and it will be generating a whole bunch of C++ classes to implement the various types of OPs..
How or Why Perl Changes
The language changes only when Larry says so. What he has said on this subject is that anything that is officially deprecated is fair game for removal. Beyond that I really need to leave things as is. He’s the language designer. I’m the language implementer, at least for this particular project. It seems like a good separation of responsibilities. You know, I’ve got enough on my plate without trying to redesign the language.
Larry is open to suggestions, and in fact that was an interesting discussion we had recently on the Perl 5 Porters mailing list.. That’s the instinct of the language designer coming to the fore saying that something that is a string or a number should not be so hard to type. It should read better.
Meanwhile, if you want to declare something as being a reference to a class - MY DOG SPOT—that’s going to work. You can say that $SPOT when it has a defined value will be a reference to an object of type DOG or at least of a type that is compatible with DOG, and the syntax is already permitted in the Perl parser; it doesn’t do very much yet but that will be more fully implemented in the future as well. Many of the detailed aspects of this came about not just springing fully formed from Larry’s forehead but as a result of discussion. So yes, he absolutely is taking suggestions.
Getting into the Internals
Now I’d like to ask how many of you do not know anything about C++? Okay, a fair number, so I’m going to have to explain—everyone else is lying. Two kinds of people: people who say that they know C++ and the truthful. Okay. C++ is complicated, definitely. Actually that reminds me, I’m doing this in C++ and I use EMACS. Tom Christiansen asked me, “Chip, is there anything that you like that isn’t big and complicated?” C++, EMACS, Perl, Unix, English—no, I guess not.
At this point, Chip begins to dive rather deep into a discussion of the internals. You can listen to the rest of his talk if you are interested in these details. | https://www.perl.com/pub/1999/09/topaz.html/ | CC-MAIN-2022-40 | refinedweb | 2,613 | 69.52 |
Answered by:
How to capture screen in Metro app??
Hi,
How can I capture screen from a metro app?
- In WPF:
using (Graphics g = Graphics.FromImage(bitmap))
{
g.CopyFromScreen(SourcePoint, DestinationPoint,
SelectionRectangle.Size);
}
bitmap.Save(FilePath, ImageFormat.Jpeg);
There is an equivalente class to Graphics?
Best regardsFriday, March 02, 2012 11:11 AM
Question
Answers
All replies
Hi Rob,
Thanks for your answer,
After your answer ("... you can't readily implement this for your own app") I was wondering about other ways to try do this (maybe using DirectX).
In a previous version of DirectX it was possible to capture screen by:
1-) Create a device and a surface
2-) Get front buffer data from device and store it in surface [ myDevice->GetFrontBufferData(0, pSurface); ]
3-) Use D3DXSaveSurfaceToFile to save the surface to file.
But I couldn't find neither a function to get front bufffer (in ID3D11Device1) nor a function to save the surface to a file in this verson of DirectX.
Do you have any clue??
BRMonday, March 05, 2012 5:07 PM
Now, hitting the Windows Key and Print Screen at the same time dumps the entirety of your screen to a .PNG file in your Pictures folder.
Does anyone know how to do this through Metro app?
BR,
CarlaFriday, March 09, 2012 11:29 AM
Now, hitting the Windows Key and Print Screen at the same time dumps the entirety of your screen to a .PNG file in your Pictures folder.
Does anyone know how to do this through Metro app?
BR,
Carla
Rebecca M. RiordanFriday, March 09, 2012 1:55 PM
Hi Rob,
Even with new SwapChainBackgroundPanel is not possible to get a screenshot from a XAML/DirectX application right?
I created application with SwapChainBackgroundPanel on main page and mixed with the directx sample "save to image file"
but no xaml ui was saved on image , only the swapchain buffer.
you have already mentioned that it is not possible, but with this new control I thought the things were changed a little.
Flavio
Tuesday, March 13, 2012 12:33 PM
Hi Flavio,
The SwapChainBackgroundPanel control doesn't affect this situation. You can store off the DX buffer, but that occurs before the Xaml controls are composited in.
--RobTuesday, March 13, 2012 3:54 PMOwner
Correct. There is no way for a Metro style app to capture a screenshot.
--RobWednesday, March 21, 2012 1:30 AMOwner
Hi Rob,
If I want to capture a region in app itself, is there any solution? Thanks!Wednesday, March 21, 2012 7:04 AM
No. There is no way to render the Xaml elements used by your app to a bitmap.
--RobThursday, March 22, 2012 2:54 AMOwner
- That's really sad, because I've used that in my Windows Phone app to create dynamically secondary tiles from my app. That way I can easily save a custom Xaml graphic to a PNG and use it as a background for my start screen tile.Thursday, March 22, 2012 4:46 AM
While I agree this features is one of the mostly missed ones and I still hope it will show up in a later version - there are a lot of tile templates from which you could choose something that would perhaps fit your needs:
If these are not enough - perhaps you could render the contents you need using something like WriteableBitmapEx (I have a version that works with WinRT). Otherwise - you could render anything with Direct2D and WIC that you can render using XAML UI. I agree that it is a lot of work to achieve the same result - but these are your options for now.
Filip SkakunThursday, March 22, 2012 6:33 AM
If you still care, you are able to make a desktop app that works in conjuction with the metro app, passing the information along, that is if the normal desktop app is able to screen capture metro apps correctly...
It can just pass all the info to the other program directly either by opening a connection through both apps such as any internet app would but between the two apps on your computer, by writing the pictures to the filesystem & reading the files for display in your metro app, or other methods to get data between the two apps.
---This would be at least until it's supported natively or someone finds another way...however by design rob is right, everything is meant to be sandboxed so that they can't do much harm either to other apps directly or interact with them in a way that's not expected...Thursday, March 22, 2012 2:10 PM
Hi Filip Skakun,
I have already tried to use Direct2D and WIC to render XAML controls, but I was not successful
Br,
C.J. SantosThursday, March 22, 2012 7:30 PM
- What I meant was - you can draw shapes, text and images using Direct2D. The same shapes that you would otherwise draw with XAML UI. I did not mean to imply that you can render XAML controls - just that you can use Direct2D as an alternative to achieve same results albeit with somewhat more effort.
Filip SkakunThursday, March 22, 2012 8:17 PM
Hi Michael,
Metro style apps are required to be fully functional stand-alone. They cannot rely on or communicate with desktop apps.
Filip's suggestion to use low level primitives to draw a facsimile of your controls is the closest to a solution available. I'm quite excited to hear that WriteableBitmapEx has been ported to make this more accessible from C#.
--RobFriday, March 23, 2012 12:20 AMOwner
- Win Key + Print Screen works just fine.
ASUS CM-6850 Intel I7-2600 CPU, 16GB RAM, 1 TB HDD, 180 GD Corsair Force 3 SSD, Windows 7 Professional 64 bit. Gateway Laptop 2 GB RAM Windows 8 Release Preview, Office 15 Customer PreviewSaturday, August 18, 2012 9:49 PM
They can't - you still need to render it to the image yourself. As for another app - that is unlikely to be allowed even in a more distant future - you could get apps capturing screenshots of your banking apps, etc. I suspect this might actually be the reason why WriteableBitmap is not available yet - I am speculating, but it could be because it was hard to make it secure.
Then of course it is more likely that some analytics said it was used by less than 1% of developers or other such b*... nonsense.
Filip Skakun
Wednesday, August 29, 2012 3:57 AM
- Edited by Filip Skakun Wednesday, August 29, 2012 3:59 AM
- can i get my app' s screen image?
让信任简单起来Thursday, August 30, 2012 5:44 AM
- Only if you render it yourself using primitive drawing operations of DirectX or WriteableBitmap.
Filip SkakunThursday, August 30, 2012 5:58 AM
can i use WriteableBitmap in Metro app?
i can't found it's namespace
让信任简单起来Thursday, August 30, 2012 6:26 AM
- Hovering over the entered type name usually reveals an icon that you can use to add the missing references. Alt+Shift+F10 does the same thing. The type is in Windows.UI.Xaml.Media.Imaging namespace. You can also find it in the Object Browser (Ctrl+W,J).
Filip SkakunThursday, August 30, 2012 6:29 AM
Rob - all due respect - but we gotta have this... when I first found out about this limitation I was at a loss but we found a way around it for release. Now we have a 2nd client coming to us for an app and we need the same functionality for different reasons so its critical now. There's a ton of places this kind of thing is used... could it be written possibly in c++ then imported as a winrt component to our c# apps?
best, RagnasaurThursday, September 06, 2012 1:55 AM
Too bad this doesn't work. Most natural thing to put the current state of the app into a bitmap and then implement Undo by showing a list of the screenshots.
Overal Metro is TOO limited to do real stuff...
HJSunday, September 23, 2012 11:17 AM
Hi,
I can't understand how you disable something very important to developers and needed in many apps.
We can't always wait for new versions to get basic capability. Time is not on our side.
Please think of developers as they are the people who make the apps for users that encourage them to buy your operating system.
I hope that someone who has a word in Microsoft think of these words. There are no more second chances.
elhajjhSaturday, October 27, 2012 9:17 PM
The same to me! It's important!!!
I just cant understand why MS remove this even it works in WPF!
If YOU got reason just tell us Okay?
I dont know why Windows store just have only 5000 apps and YOU just still REFUSE to help developers?
It's Lin.Monday, October 29, 2012 7:34 AM
This is really frustrating,
We need many functionalities that were available before. Now we are working on very important project that needs the functionality of capturing the screen of our application or capture our own xaml control and converts it to bitmap .
What shall we do, abort our project after months of working, what to tell the customers ??????
Microsoft should add this feature in an update as soon as possible, I think they can do it in few days if they want.
Please HELP.
elhajjhTuesday, November 06, 2012 2:40 PM
- You can't capture a control, but through the magic of Windows Runtime you should be able to render anything you want to a bitmap using a Direct2D, DirectWrite and Windows Imaging Component. You can even use the SharpDX library to do it using C# as in this sample:
Filip SkakunTuesday, November 06, 2012 3:41 PM
- Does anyone have a usable example of how to do this to a canvas in metro. This seems a little silly that this is not available when you could do this in Silverlight 3...
Nevin MorrisonTuesday, November 13, 2012 12:00 AM
Metro has great potential; but it seems that Microsoft's decisions are limiting everybody's attempt at using it. Things need to change. I am sick of hearing from people "Hey, why can't your app do this?" and having to respond with "It's not allowed to." And then having to put up with crap like "But desktop apps can" - but I can't release Desktop apps to the Store because I can only get an "Individual" account - when in fact I am a legally-registered Business (but not a Company).
Back on the subject. You can't take a snapshot of a single element inside your own application? Now that's just ridiculous.
Tuesday, January 29, 2013 11:06 PM
- Edited by doubleyoueight Tuesday, January 29, 2013 11:08 PM
- I agree here, this is critical. Customers want it and developers need it. Please reconsider adding this to a future update.Tuesday, January 29, 2013 11:12 PM
- Seriously? I just invested hundred hours++ implementing a new app and was now starting at the "Save as image" feature, and I find out this is not possible! No wonder WinRT-developers are unhappy and no really good apps are showing up... Sorry.Wednesday, March 06, 2013 8:37 PM
- Yup! Really disappointed! I think it should be easily for us to capture every part of the screen.Monday, March 18, 2013 3:10 AM
- When I use IE 10 (metro version) I can see a thumbnail of the current page displayed in top App Bar.
What is the mechanism/API IE10 is using to do that?
Is it possible to create a thumbnail of my own application screen using the same mechanism?
Monday, April 01, 2013 3:49 PM
- IE10 cheats.
Rebecca M. RiordanTuesday, April 02, 2013 11:55 AMMonday, April 08, 2013 5:11 PMOwner
LOL. Like I said, IE10 cheats ;)
Rebecca M. Riordan
Monday, April 08, 2013 5:16 PM
- Proposed as answer by doubleyoueight Monday, April 08, 2013 10:54 PM
- If there is no way to save xaml elements to bitmaps, then what's the purpose of rendering xaml elements? I really think MSFT needs to listen to developers more seriously, Writeablebitmap is very bad since it doesn't support DrawText (WriteableBitmapEx doesn't support it either).Tuesday, April 09, 2013 3:43 AM
Hi, Rob
Can I capture SurfaceRT Desktop app (like office word) screen using programing?
Friday, May 10, 2013 3:02 AM
- Maybe there is a way with the Printing API? Something like "print screen to PDF"? I didn't try it, just an idea...Thursday, September 12, 2013 10:14 AM | https://social.msdn.microsoft.com/Forums/en-US/63dd9596-bf94-440b-847a-961cbf036e7b/how-to-capture-screen-in-metro-app?forum=winappswithcsharp | CC-MAIN-2017-09 | refinedweb | 2,123 | 70.02 |
Data fetching
How to fetch data from your GraphQL resolvers.
By this point in the documentation, you know how to generate a GraphQL.js schema from the GraphQL schema language, and how to add resolvers to that schema to call functions. How do you access your backend from those resolvers? Well, it's quite easy, but as your app gets more complex it might make sense to add some structure. We'll start with the basics and then move on to more advanced conventions.
Basic fetching
As you have read on the resolvers page, resolvers in GraphQL.js can return Promises. This means it's easy to fetch data using any library that returns a promise for the result:
import rp from 'request-promise'; const resolverMap = { Query: { gitHubRepository(root, args, context) { return rp({ uri: `{args.name}` }); } } }
Factoring out fetching details
As you start to have more different resolvers that need to access the GitHub API, the above approach becomes unsustainable. It's good to abstract that away into a "repository" pattern. We call these data fetching functions "connectors":
// github-connector.js import rp from 'request-promise'; // This gives you a place to put GitHub API keys, for example const { GITHUB_API_KEY, GITHUB_API_SECRET } = process.env; const qs = { GITHUB_API_KEY, GITHUB_API_SECRET }; export function getRepositoryByName(name) { return rp({ uri: `{name}`, qs, }); }
Now, we can use this function in several resolvers:
import { getRepositoryByName } from './github-connector.js'; const resolverMap = { Query: { gitHubRepository(root, args, context) { return getRepositoryByName(args.name); } }, Submission: { repository(root, args, context) { return getRepositoryByName(root.repositoryFullName); } } }
This means we no longer have to worry about the details of fetching from GitHub inside our resolvers, and we just need to put in the right repository name to fetch. We can improve our GitHub fetching logic over time.
DataLoader and caching
At some point, you might get to a situation where you are fetching the same objects over and over during the course of a single query. For example, you could have a list of repositories which each want to know about their owner:
query { repositories(limit: 10) { owner { login avatar_url } } }
Let's say this is our resolver for
owner:
import { getAuthorByName } from './github-connector.js'; const resolverMap = { Repository: { owner(root, args, context) { return getAuthorByName(root.owner); }, }, };
If the list of repositories has several that were owned by the same user, the
getAuthorByName function will be called once for each, doing unnecessary requests to the GitHub API, and running down our API limit.
You can improve the situation by adding a per-request cache with
dataloader, Facebook's helpful JavaScript library for in-memory data caching.
One dataloader per request
One important thing to understand about
dataloader is that it caches the results forever, unless told otherwise. So we really want to make sure we create a new instance for every request sent to our server, so that we de-duplicate fetches in one query but not across multiple requests or, even worse, multiple users.
At this point, the code becomes a bit more complex, so we won't reproduce it here. Check out the GitHunt-API example for the details:
- The GitHub connector, which uses DataLoader, passes along API keys, and does extra caching with GitHub's eTag feature.
- The GitHub model, which defines some helpful functions to fetch users and repositories.
- The GraphQL context, which includes the models, initialized with the connector for every request.
- The resolvers, which use the model from the context to actually fetch the object.
The code is more decoupled than necessary for a small example, but it's done that way intentionally to demonstrate how a larger API could be laid out. | https://www.apollographql.com/docs/graphql-tools/connectors/ | CC-MAIN-2019-18 | refinedweb | 602 | 53 |
Mission2_RGB_LED
After successfully get an LED to work, why not try more LEDs? In this project, let's build a circuit for the first time and make the LEDs blink one after another repeatedly.
What you need
The parts you will need are all included in the Maker kit.
- SwiftIO board
- Breadboard
- Red, green, and blue LEDs
- Resistor
- Jumper wires
Circuit
Let's know something about the breadboard first. The one in the kit is a tiny and simplified version. You can find many holes in it. Each upper or lower five sockets vertically beside the gap in the middle are connected as shown above. It is very convenient for your project prototype.
- Place three LEDs on different columns.
- The long leg of each LED connects to a digital pin: red LED connects to D16, green LED connects to D17, blue LED connects to D18.
- The short leg is connected to a 1k ohm resistor and goes to the pin GND.
BTW, you can usually find that the red jumper wires are for power, and the black ones are for ground.
note
The resistance of the resistor is not absolute, as long as it is bigger than the minimum requirement to resist the voltage. And the brightness of the LED will be influenced by the resistor: its resistance is higher, the LED will be dimmer.
Example code
// Import the SwiftIO library to use everything in it.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard
// Initialize three digital pins used for the LEDs.
let red = DigitalOut(Id.D16)
let green = DigitalOut(Id.D17)
let blue = DigitalOut(Id.D18)
while true {
// Turn on red LED for 1 second, then off.
red.write(true)
sleep(ms: 1000)
red.write(false)
// Turn on green LED for 1 second, then off.
green.write(true)
sleep(ms: 1000)
green.write(false)
// Turn on blue LED for 1 second, then off.
blue.high()
sleep(ms: 1000)
blue.low()
} red = DigitalOut(Id.D16)
let green = DigitalOut(Id.D17)
let blue = DigitalOut(Id.D18)
The class DigitalOut allows you to set the pin to output high or low voltage. You need to initialize three output pins: D16, D17, and D18 that the LEDs connect. Only after initialization, the pin can output the designated levels.
while true {
}
To make the LED blink repeatedly, you need to write the code in the dead loop
while true. The code inside it could run all the time unless you power off the board.
red.write(true)
sleep(ms: 1000)
red.write(false)
In the loop, you will set three LEDs separately. The operations are similar. Let's look at the red LED.
At first, the pin outputs a high voltage to light the LED. Since each of the three LEDs connects to the digital pin and ground, they will be on as you apply a high voltage. After 1s, turn off the LED with a low voltage. So the LED will be on for 1s and then be turned off.
The following LED turns on immediately and repeats the process above. Thus three LEDs blink in turns.
Reference
DigitalOut - set whether the pin output a high or low voltage.
sleep(ms:) - suspend the microcontroller's work and thus make the current state last for a specified time, measured in milliseconds.
SwiftIOBoard - find the corresponding pin id of SwiftIO board. | https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission2 | CC-MAIN-2022-21 | refinedweb | 564 | 76.93 |
I'm having a strange problem while trying to install a python library using its setup.py file. when I run the setup.py file, I get an import error, saying
ImportError: No module named Cython.Distutils
enwe101@enwe101-PCL:~/zenlib/src$ sudo python setup.py install
Traceback (most recent call last):
File "setup.py", line 4, in <module>
from Cython.Distutils import build_ext
ImportError: No module named Cython.Distutils
>>> from Cython.Distutils import build_ext
>>>
>>> from fake.package import noexist
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named fake.package
#from distutils.core import setup
from setuptools import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import os.path
~/.bashrc
export PATH=/usr/local/epd/bin:$PATH
which python
/usr/local/epd/bin/python
/usr/local/epd/lib/python2.7/site-packages
Cython
Distutils
build_ext.py
__init__.py
Your sudo is not getting the right python. This is a known behaviour of sudo in Ubuntu. See this question for more info. You need to make sure that sudo calls the right python, either by using the full path:
sudo /usr/local/epd/bin/python setup.py install
or by doing the following (in bash):
alias sudo='sudo env PATH=$PATH' sudo python setup.py install | https://codedump.io/share/Z7jceULaY3Ju/1/python-importerror-cythondistutils | CC-MAIN-2017-34 | refinedweb | 215 | 54.79 |
Copyright ©1999 W3C (MIT, INRIA, Keio) , All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
This is a W3C Note produced as a deliverable of the XML Linking WG according to its charter. A list of current W3C working drafts and notes can be found at .
This document is a work in progress representing the current consensus of the W3C XML Linking Working Group. This version of the XML Link Requirements document has been approved by the XML Linking working group and the XML Plenary to be posted for review by W3C members and other interested parties. Publication as a Note does not imply endorsement by the W3C membership. Comments should be sent to www-xml-linking-comments@w3.org, which is an automatically and publicly archived email list.
This document is being processed according to the following review schedule:
Comments about this document should be submitted to the "contact" listed above for each process.
This document specifies requirements for the XLink specification. Xlink defines XML-conforming syntax for expressing links among XML documents and other Internet resources, and defines some of the behavior of applications that support it.
XML Linking Working Group Page [member only], for general information about the activities of the WG..
XPointer-Information Set Liaison Statement, produced by the XML Linking Working Group. This document enumerates perceived constraints that work on the XPointer specification has indicated may affect the XML Information Set Working Group, since it is those information structures that XPointer provides access to.
XML Linking Language (XLink) Working Draft, the companion specification to this document. specifies the requirements for linking among XML documents and other Internet resources. following general design principles, adapted from those of XML, underly the XLink design. The XML design principles are described in the W3C Note XML Linking Language (XLink) Design Principles.
XLink must be straightforwardly usable over the Internet.
XLink must be usable by a wide variety of link usage domains and classes of linking application software.
XLink must support HTML 4.0 linking constructs.
The XLink expression language must be XML.
The XLink design must be prepared quickly.
The XLink design must be formal, concise, and illustrative.
XLinks must be human-readable and human-writable.
XLinks may reside within or outside the documents in which the participating resources reside.
XLink must represent the abstract structure and significance of links.
XLink must be feasible to implement.
XLink must be informed by knowledge of established hypermedia systems and standards.
These include, but are not limited to, Augment, Dexter, FRESS, HTML, Hyper-G, HyTime, InterMedia, MicroCosm, OHS, and the Text Encoding Initiative (see Bibliography).
While it is not the purpose of this document to establish or constrain the terminology XLink must use, some terminology is defined here for the purpose of clarity in the remainder of this requirements specification.
The concrete description of one or more data resources or sub-resource portions, and the nature of their relationship for some given purpose.
One of the resources or data resources described and connected by a link.
A specification that identifies a particular link end's location, such as a URI.
The semantic function that a given end plays in a link, such as being the thing commented upon, the comment, or a referenced authority. A role is part of the descriptive aspect of linking, and is not itself considered a link end.
The act of navigating from one end of a link to some other end of the same link. This is commonly accomplished in browsers by clicking on the data at one link end, but other kinds of traversal are also possible and useful. XLink may provide means for controlling which ends of a link can be reached from which other ends; such constraints are commonly called "traversal rules", and serve to make some ends "available" and others "unavailable" from a given starting point.
The resource or sub-resource to be reached via a traversal.
An ordered list of sub-resources, each linked to the next. This construct is typically used to create a path or guided tour through some data collection.
An end that can be directly reached from the "current" location (in applications where that is a meaningful notion), is said to be available (or sometimes, "active"). Rules of some kind may dictate that not all ends are available at a given time or from a given starting end.
A single end may, in some systems, include multiple discontiguous data objects, that are to be treated together as a single end of the link. Such an end is called an "aggregate", and the resources that are part of it are called its "members". Typically an aggregate has a unified role and other descriptive information in the link, while the members have their own relationships involving how they are assembled or treated as a unit (say, ordering, transformation, selection, and so on).
In order to make it possible to express links all of whose ends are read-only, many hypermedia systems provide a way to encode links in some place external to the document(s) containing any of their ends. A link that makes use of this capability is said to be stored "out of line", while one whose own location is one of its ends is "inline".
Note: The HTML <A> element is strictly inline; ISMAP files for graphics contain entries that are somewhat analogous to out-of-line links, since they link graphic regions to other resources without being embedded in either the graphic or the other resource.).
The following diagram shows a 3-ended link such as might be used in an editing or review application. The three ends are
The link accomplishes several things:
A link may also express much other data about itself or its ends; XLink may define some such data, and may permit link creators to add their own as well. An XLink may also place constraints or tests on its ends, such as requiring that certain ends be in certain data formats, or providing ways to detect when a locator has failed. But the functions of identifying ends, describing them and their relationship, and grouping them into an explicit link, are the basic desiderata of XLink.
1: An XML link must be able to describe and relate one or more Internet resources and/or data portions within resources. This implies the following:
addressing complete resources
addressing specific portions of resources (this is largely accomplished via related specifications such as for URLs and XPointers).
expressing links having multiple ends
Note: Links themselves are also resources, and resources to be used with XLink must be expressible via URIs (including fragment identifiers).
2: It must be possible for an XML link itself to serve as one of the resources pointed to by the link. That is, the link construct itself may serve to mark up one of the endpoints of the link.
3: It must be possible for an XML link to address into a resource without requiring modification of that resource. Thus, a link need not physically occur within any of the resources it points to.
4: An XML link, as an abstract datatype, must make at least the following information available to an application:
A specification of each of its ends (as described below).
An indication of whether or not the link's location is itself an end of the link.
Note: Making the link's own location an end is a distinct and common special case, probably worthy of special syntax (even though, of course, one could always add an explicit end that pointed to the link's location using generic mechanisms). Supporting links whose own location is not one of their link ends, is critical so that links can be created to connect and describe resources without modifying those resources themselves. In such cases the actual location of the link is generally insignificant to the application.
An optional order in which the link's ends are accessed or made available. The link processor must be able to access each resource that serves as a link endpoint in an order specified by the link author.
Note: This order might, for example, be used in a menu of available destination ends when the user clicks on the data at one end of a link. [there is not consensus on the need for ordering ends, or how it relates to ordering of members within ends, if those are to be supported].
A required link type that may have meaning to specific applications (if not specified, the type is specifically "undefined"). This can indicate the link's purpose so that application-specific processing can be applied.
For compatibility with HTML, it appears necessary to leave the type implicit in some cases; such a link is considered to have a type, specifically "undefined".
Some human-readable identifying text, or title. This can be displayed or otherwise used as a description of the link as supplied by the link author.
Note: Link titles raise issues of internationalization. They must be able to include text not just in English. They are also very important as a means of addressing Accessibility concerns for the print-disabled user.
5: Each end of a link, as an abstract datatype, must make at least the following information available to an application:
A role, to specify the end's particular function in relation to the link and/or the other ends. A rolemay have meaning to specific applications.
A title, some human-readable identifying text for the particular end. This can be displayed or otherwise used as a description of the link as supplied by the link author.
Note: Link titles raise issues of internationalization; it is required that they be able to include text not just in English. They are also very important as a means of addressing Accessibility concerns for the print-disabled user.
Some behavior hints that may suggest certain treatment on the part of link processors.
For example, simple indications of when to access a resource, where to display it, or what to do with the originating end. The link processor can pass on hints as how to display and process the link to the application; a simple example is that a stylesheet in a browsing application could access them to condition its display or interaction behavior. In some applications such hints may have no meaning, and are therefore not required.
A locator that identifies the specific destination constituting the particular link end. (see also the Note on the next item)
A context locator that identifies the corresponding containing scope that should be displayed, indexed, or otherwise used to provide contextual meaning for the resource identified by the locator.
Note: The distinction between this and the last item is similar to what underlies the typical treatment of fragment identifiers in HTML <A> links: The client retrieves a context which is typically a whole document, but then somehow identifies the particular target portion within it. Another example could be a link in a review or annotation application: it may point to a particular mis-spelled word, even though a later user is unlikely to desire retrieval of merely the one word without its document context.
6: It must be possible to specify the types of destinations a link's ends can point to. In particular, a resource may be restricted to a specified set of content-types, XML element types, namespaces, and so on.
Note: For example, an application might wish to ensure that for links of type "review-comment", the "topic" end must point to part of an XML document from the DocBook schema, while the "comment" end must point to an entire HTML document.
7: XLink should be able to express limited claims about the legal status of the linked data, particularly in the case of transclusion. For example, a way for a link creator to assert that they have the right to copy the data at some link end(s).
Note: Some users have noted that support of transparent inclusion (transclusion) could conceivably be misconstrued as facilitating plagiarism. One possible option is to provide a way on transclusion links, to express a claim about rights. Browsers, for example, could then mark or prevent transclusions that do not explicitly claim rights. Making it possible for link creators to be clear seems to be about the best one can hope for in addressing this question.
8: It must be possible to control the directions of traversal available among a link's ends.
Note: In HTML the <A> link always has exactly two ends, and traversal is normally available only from one of them (the one where the <A HREF...> is). Out of line and multi-ended links enable a wider variety of potential traversals. The WG is considering what degree of control is desirable, and whether it shall be specified per link type, whether it can depend on environmental factors, and so on.
9 [non-mandatory goal]: It must be possible to detect when a resource a link points to is invalidated or modified.
10: XLink should be expressable in terms of RDF.
1: A link must be specified using XML..
3: Link markup must be unambiguously recognizable within a standalone XML instance in which it occurs.
4: Specification of a link must be independent of the specification of the address(es) of the resource(s) the link connects and describes.
5: It must be possible to assert the existence of a link from a DTD.
[There is not consensus on whether it is enough to be able to locate link ends that reside in a DTD, or whether there must be a way to put the physical representation of a link actually within a DTD (which imposes greater syntactic challenges).
1: An XML link must use a URI to address a resource as defined in IETF RFC 1738: Uniform Resource Locators.
2: An XML link must require using the XPointer specification to identify specific link end locations in an XML resource.
3: An XML link must provide a straightforward way of representing an HTML <A> or <IMG> link. Automated translation of HTML links to XML links must be possible.
4: XLink must liaise with other WGs as appropriate, including RDF and SYMM.
Note: This bibliography only lists works that are readily accessible, either online or in widely-available print publications. A wealth of information on major systems and projects is available on the Memex and Beyond Web site.
Akscyn, Robert, Donald McCracken, and Elise Yoder. 1987. "KMS: A Distributed Hypermedia System for Managing Knowledge in Organizations." In Proceedings of Hypertext '87, Chapel Hill, NC. November 13-15, 1987. NY: Association for Computing Machinery Press, pp. 1-20.
DeRose, Steven J. and Andries van Dam. 1999. "Document Structure and Markup in the FRESS hypertext system." In Markup Languages 1(1), Winter 1999, pp. 7-46.
Furuta, Richard, Frank M. Shipmann III, Catherine C. Marshall, Donald Brenner, and Hao-wei Hsieh. 1997. "Hypertext paths and the World-Wide Web: Experiences with Walden's Paths." In Proceedings of Hypertext '97. NY: Association for Computing Machinery Press.
Garret, L. Nancy, Karen E. Smith, and Norman Meyrowitz. 1986. "Intermedia: Issues, Strategies, and Tactics in the Design of a Hypermedia System." In Proceedings of the Conference on Computer-Supported Cooperative Work.
Hall, Wendy, Hugh Davis, and Gerard Hutchings. 1996. Rethinking Hypermedia: The Microcosm Approach. Boston: Kluwer Academic Publishers. ISBN 0-7923-9679-0.
Marshall, Catherine C., Frank M. Shipman, III, and James H. Coombs. 1994. "VIKI: Spatial Hypertext Supporting Emergent Structure". In Proceedings of the 1994 European Conference on Hypertext. NY: Association for Computing Machinery Press.
Yankelovich, Nicole, Bernard J. Haan, Norman K. Meyrowitz, and Steven M. Drucker. 1988. "Intermedia: The Concept and the Construction of a Seamless Information Environment." IEEE Computer (January, 1988): 81-96.
Berners-Lee, T. and L. Masinter, editors. December 1994. "Uniform Resource Locators (URL)". IETF document RFC 17338.
DeRose Steven J. and David G. Durand. 1994. Making HyperMedia Work: A User's Guide to HyTime. Boston: Kluwer Academic Publishers. ISBN 0-7923-9432-1.
DeRose, Steven J. and David G. Durand. 1 995. .
Grønbæk, Kaj and Randall H. Trigg. 1996. "Toward a Dexter-based model for open hypermedia: Unifying embedded references and link objects." In Proceedings of Hypertext '96. NY: Association for Computing Machinery Press. Also available online. p>
Halasz, Frank. 1994. "The Dexter Hypertext Reference Model." In Communications of the Association for Computing Machinery 37 (2), February 1994: 30-39.
Hardman, Lynda, Dick C. A. Bulterman, and Guido van Rossum. 1994. "The Amsterdam Hypermedia Model: Adding Time and Context to the Dexter Model." In Communications of the Association for Computing Machinery 37.2 (February 1994): 50-63.
International Organization for Standardization. 1992. ISO/IEC 10744. "Information technology - Hypermedia/Time-based Structuring Language (HyTime)." Also available online.
Moline, Judi, Dan Denigni, and Jean Baronas (eds.). 1990. Proceedings of the Hypertext Standardization Workshop, January 16-18, 1990, National Institute of Standards and Technology. Washington: U.S. Government Printing Office. Available from the National Technical Information Service as Publication PB90215864. (ordering information)
Nürnberg, Peter J. Home page of the Open Hypermedia Systems Working Group.
Sperberg-McQueen, C. Michael and Lou Burnard (eds). 1994. Guidelines for Electronic Text Encoding and Interchange. Chicago, Oxford: Text Encoding Initiative. Also available online. See especially the section on extended pointer syntax. Also available for ftp.
Agosti, Maristelle and Alan Smeaton. 1996. Information Retrieval and Hypertext. Boston: Kluwer Academic Publishers. ISBN 0-7923-9710-X.
Bush, Vannevar. 1945. "As We May Think." Atlantic Monthly 176 (July): 101-108. Links to many of Bush's works are collected here.
Catano, James V. 1979. "Poetry and Computers: Experimenting with the Communal Text." In Computers and the Humanities 13 (9): 269-275.
Conklin, Jeff. 1987. "Hypertext: An Introduction and Survey." IEEE Computer 20 (9): 17-41.
DeRose, Steven J. 1989. "Expanding the Notion of Links." In Proceedings of Hypertext '89, Pittsburgh, PA. NY: Association for Computing Machinery Press.
Engelbart, Douglas C. 1963. "A Conceptual Framework for the Augmentation of Man's Intellect". In Vistas in Information Handling, Vol. 1 (P. Howerton, ed.). Washington, DC: Spartan Books: 1-29. Reprinted in Greif, Irene (ed.), 1988. Computer-Supported Cooperative Work: A Book of Readings. San Mateo, California: Morgan Kaufmann Publishers, pp. 35-65. ISBN 0934613575.
Gibson, David, Jon Kleinberg, and Prabhakar Raghavan. 1998. "Inferring Web Communities from Link Topology." In Proceedings of Hypertext '98, Pittsburgh, PA. NY: Association for Computing Machinery Press.
Halasz, Frank. 1987. "Reflections on NoteCards: Seven Issues for the Next Generation of Hypermedia Systems." Address presented at Hypertext '87, November 13-15, 1987. Reprinted in Communications of the Association for Computing Machinery 31 (7), July 1988: 836-855.
Kahn, Paul. 1991. "Linking Together Books: Experiments in Adapting Published Material into Intermedia Documents." In Paul Delany and George P. Landow (eds), Hypermedia and Literary Studies. Cambridge: MIT Press.
Landow, George P. 1987. "Relationally Encoded Links and the Rhetoric of Hypertext." In Proceedings of Hypertext '87, Chapel Hill, NC, November 13-15, 1987. NY: Association for Computing Machinery Press: 331-344.
Meyrowitz, Norman. 1986. "Intermedia: the Architecture and Construction of an Object-Oriented Hypermedia system and Applications Framework." In Proceedings of OOPSLA. Portland, OR.
Nelson, Theodore H. 1987. Literary Machines. (available in multiple editions).
Trigg, Randall H. 1988. "Guided Tours and Tabletops: Tools for Communicating in a Hypertext Environment." In ACM Transactions on Office Information Systems, 6.4 (October 1988): 398-414.
Trigg, Randall H. 1991. "From Trailblazing to Guided Tours: The Legacy of Vannevar Bush's Vision of Hypertext Use." In Nyce, James M. and Paul Kahn, eds, 1991, From Memex to Hypertext: Vannevar Bush and the Mind's Machine. San Diego: Academic Press, pp. 353-367. A thorough review.
Yankelovich, Nicole, Norman Meyrowitz, and Andries van Dam. 1985. "Reading and Writing the Electronic Book." IEEE Computer 18 (October, 1985): 16-30.
Zellweger, Polle T. 1989. "Scripted Documents: A Hypermedia Path Mechanism." In Proceedings of Hypertext '89. NY: Association for Computing Machinery Press. | http://www.w3.org/TR/1999/NOTE-xlink-req-19990224 | crawl-001 | refinedweb | 3,327 | 58.18 |
.
or you could plug your samples into :D
The actual direct link would be:
Hurray! Been missing Glowing Python posts. Happy to see a new one, learn something new.
I think, this does nothing else than calculating the mean and standard deviation of samp:
>>> samp = norm.rvs(loc=0,scale=1,size=150)
>>> param = norm.fit(samp)
>>> mu = np.mean(samp)
>>> sigma = np.std(samp)
>>> mu==param[0]
True
>>> sigma==param[1]
True
>>>
According to the scipy documentation it should perform a Maximum Likelihood Estimate.
For the normal distribution, the sample mean ( which is what np.mean() calculates ) is the maximum likelihood estimator of the population ( parametric ) mean. This is not true of all distributions, though.
If it helps, some code for doing this w/o normalizing, which plots the gaussian fit over the real histogram:
from scipy.stats import norm
from numpy import linspace
from pylab import plot,show,hist
def PlotHistNorm(data, log=False):
# distribution fitting
param = norm.fit(data)
mean = param[0]
sd = param[1]
#Set large limits
xlims = [-6*sd+mean, 6*sd+mean]
#Plot histogram
histdata = hist(data,bins=12,alpha=.3,log=log)
#Generate X points
x = linspace(xlims[0],xlims[1],500)
#Get Y points via Normal PDF with fitted parameters
pdf_fitted = norm.pdf(x,loc=mean,scale=sd)
#Get histogram data, in this case bin edges
xh = [0.5 * (histdata[1][r] + histdata[1][r+1]) for r in xrange(len(histdata[1])-1)]
#Get bin width from this
binwidth = (max(xh) - min(xh)) / len(histdata[1])
#Scale the fitted PDF by area of the histogram
pdf_fitted = pdf_fitted * (len(data) * binwidth)
#Plot PDF
plot(x,pdf_fitted,'r-')
this code doesn't work….
This comment has been removed by the author.
Is there a way to fit data to an exponential distribution such that it maximizes the entropy H(p_i) = - sum p_i*log(p_i) where p_i is the probability of a given event?
Hi, scipy implements the exponential distribution and way to fit the parameters:
I'm not sure about the fitting technique implemented. | http://glowingpython.blogspot.it/2012/07/distribution-fitting-with-scipy.html | CC-MAIN-2017-30 | refinedweb | 341 | 57.87 |
----- Original Message ----- From: "Jean-Paul Calderone" <exarkun at divmod.com> To: "Twisted general discussion" <twisted-python at twistedmatrix.com> Sent: Saturday, December 31, 2005 4:52 PM Subject: Re: how winnt fileops work and what to do about it (was Re:[Twisted-Python] Twisted windows hackers - help the tests to pass!) > On Sat, 31 Dec 2005 05:48:43 -0500, Paul G <paul-lists at perforge.com> > wrote: >> >>ok, i can actually chime in here because i've done filesystems work on >>windows (don't ask ;). now, it's been a while, but i should remember >>things reasonably accurately (i hope). see below for comments: > > Thanks, Paul, for these comments. This cleared up a lot about how the > filesystem works on Win32 for me. just a quick revision from me, for the benefit of posterity only, since investigating the feasibility of fixing the underlying cause (files being held open) seems to be the best course of action. >>4. test whether you can either use ZwSetFileInformation() to rename >>directories by changing the FILE_NAME attr in the appropriate info >>structure or use it to move by renaming files which are open, again using >>the appropriate (but different) structure. this will definitely not work for files based on the ddk docs i managed to dig out, and will almost certainly not work for directories (though this isn't documented either way). quite simply, expecting this to work is an expectation borne out of familiarity with things like ext2/linux-vfs, where all filesystem objects are inodes mapped into a namespace with dentries. it appears that no matter how you slice it, the underlying implementation of ntfs is drastically different, so a rename is a move and a move is a copy+delete, which brings us back to our problem. -p | http://twistedmatrix.com/pipermail/twisted-python/2005-December/012278.html | CC-MAIN-2014-10 | refinedweb | 297 | 58.11 |
On Fri, Sep 23, 2011 at 11:39 AM, Devin Jeanpierre <jeanpierreda at gmail.com> wrote: >> Decorator syntax cannot work without deep magic, because the >> compiler *doesn't know* that injected names need to be given special >> treatment. > > This is the last time you mention the decorator solution (aside from > further explanation of the problem). Is it being discarded for that > reason? Yes, it's far too hard to explain what it does in terms of existing Python semantics (the Zen has something to say on that point). Bytecode hackery and other tricks would *work*, but deep magic should only be employed when there aren't any alternatives and the problem is sufficiently common. (PEP 3135, the new super semantics for 3.x, pushes the boundaries of what's reasonable, but it's so easy to *use* that it's worth the additional under the hood complexity). Magical behaviour is also far more likely to cause problems for other implementations - it tends to rely on assumptions that aren't explicitly guaranteed by the language specification. Use of default arguments for pre-initialised function locals is nowhere near common enough to justify deep magic as a solution, so the idea just isn't worth pursuing. That's the real benefit of the "nonlocal i=i" idea: it takes an *existing* concept (i.e. closures), and just tweaks it a bit by avoiding the need for a separate outer scope when all you really want to do is share some state between invocations. Anyone that already understands closures shouldn't have much trouble grasping the idea that the 'innermost containing scope' can now be the function itself rather than the next level up. Any implementation that already correctly handles closures should also be able to cope with the self reference without too much trouble. No new keywords, no new namespace semantics, just a slight tweak to the way the compiler handles def and nonlocal statements. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia | https://mail.python.org/pipermail/python-ideas/2011-September/011787.html | CC-MAIN-2016-30 | refinedweb | 333 | 52.6 |
bcc-tcptop - Man Page
Summarize TCP send/recv throughput by host. Top for TCP.
Synopsis
tcptop [-h] [-C] [-S] [-p PID] [--cgroupmap MAPPATH]
[--mntnsmap MAPPATH] [interval] [count]
Description
This is top for TCP sessions.
This summarizes TCP send/receive Kbytes by host, and prints a summary that refreshes, along other system-wide metrics.
This uses dynamic tracing of kernel TCP send/receive functions, and will need to be updated to match kernel changes.
The traced TCP functions are usually called at a lower rate than per-packet functions, and therefore have lower overhead. The traced data is summarized in-kernel using a BPF map to further reduce overhead. At very high TCP event rates, the overhead may still be measurable. See the Overhead section for more details.
Since this uses BPF, only the root user can use this tool.
Requirements
CONFIG_BPF and bcc.
Options
- -h
Print USAGE message.
- -C
Don't clear the screen.
- -S
Don't print the system summary line (load averages).
- -p PID
Trace this PID only.
- --cgroupmap MAPPATH
Trace cgroups in this BPF map only (filtered in-kernel).
- --mntnsmap MAPPATH
Trace mount namespaces in this BPF map only (filtered in-kernel).
- interval
Interval between updates, seconds (default 1).
- count
Number of interval summaries (default is many).
Examples
- Summarize TCP throughput by active sessions, 1 second refresh:
# tcptop
- Don't clear the screen (rolling output), and 5 second summaries:
# tcptop -C 5
- Trace PID 181 only, and don't clear the screen:
# tcptop -Cp 181
- Trace a set of cgroups only (see special_filtering.md from bcc sources for more details):
# tcptop --cgroupmap /sys/fs/bpf/test01
Fields
- loadavg:
The contents of /proc/loadavg
- PID
Process ID.
- COMM
Process name.
- LADDR
Local address (IPv4), and TCP port
- RADDR
Remote address (IPv4), and TCP port
- LADDR6
Source address (IPv6), and TCP port
- RADDR6
Destination address (IPv6), and TCP port
- RX_KB
Received Kbytes
- TX_KB
Transmitted Kbytes
Overhead
This traces all send/receives in TCP, high in the TCP/IP stack (close to the application) which are usually called at a lower rate than per-packet functions, lowering overhead. It also summarizes data in-kernel to further reduce overhead. These techniques help, but there may still be measurable overhead at high send/receive rates, eg, ~13% of one CPU at 100k events/sec. use funccount to count the kprobes in the tool to find out this rate, as the overhead is relative to the rate. Some sample production servers tested found total TCP event rates of 4k to 15k per second, and the CPU overhead at these rates ranged from 0.5% to 2.0% of one CPU. If your send/receive rate is low (eg, <1000/sec) then the overhead is expected to be negligible; Test in a lab environment first.
Source
This is from bcc.
Also look in the bcc distribution for a companion _examples.txt file containing example usage, output, and commentary for this tool.
OS
Linux
Stability
Unstable - in development.
Author
Brendan Gregg
Inspiration
top(1) by William LeFebvre
See Also
tcpconnect(8), tcpaccept(8) | https://www.mankier.com/8/bcc-tcptop | CC-MAIN-2020-34 | refinedweb | 508 | 66.13 |
Chapter 11: Creating Dynamic Ribbon Customizations Expert Access 2007 Programming by Rob Cooper and Michael Tucker from Wrox (ISBN 978-0-470-17402-9, copyright Wrox 11: Creating Dynamic Ribbon Customizations (2 of 2)
The Microsoft Office Fluent user interface Ribbon, first introduced in Office 2007, provides many new and interesting opportunities for user-interface development in Access applications. Unlike menus, the Ribbon gives you a chance to expose functionality in an application that might otherwise be overlooked. For new users, the Ribbon is designed to reduce the barrier to entry, making it easier to find the item the user is looking for.
In this chapter, you learn how to:
Create ribbon customizations for use in Access applications
Program the Ribbon to provide dynamic user experiences
Use images effectively in ribbon customizations to provide experiences that are fun and easy-to-use
Disable or repurpose built-in controls to provide your own functionality
Contents
Overview of Ribbon Customizations in Access
Programming the Ribbon
Organizing Ribbon Items
Ribbon Controls
Overview of Ribbon Customizations in Access
Ribbon customizations that you write for Access applications are generally designed to replace the menu bars and toolbars of applications created in previous versions. Although this may be how you typically create ribbon customizations, it is not the only way that you can use them.
You can create two types of customizations for the Ribbon. You can use the first type of customization with a particular database, which appears when a given database is open in Access. The second type of ribbon customization you can create is in a COM add-in. Because COM add-ins are available for any database open in Access, it stands to reason that ribbon customizations created as part of a COM add-in are also available for any database. In this chapter, we focus on the first type of customization. Refer to the MSDN Web site for more information about creating ribbon customizations as part of COM add-ins.
Ribbon customizations are written in XML that conforms to a defined XML schema that is available for download. The XML schema for developing ribbon customizations includes controls that are defined as elements in XML and attributes of those controls that define their behaviors and appearance. We take a look at some of the control elements and common attributes used on all types of controls later in this chapter.
Development Tips
Before we get started with developing ribbon customizations, let's go through some tips that will help you during development.
Discovering Errors
By default, you won't see any errors if there are problems with the XML that you've defined for a ribbon customization. To display errors during development, be sure to set the Show add-in user interface errors option in the Advanced page of the Access Options dialog box. Without this option set, ribbon customizations may not load and it may not necessarily be clear why.
When this error is set, Access displays any errors caused by a ribbon customization, as shown in Figure 11-1.
Figure 11-1. Access displays ribbon customization errors
Using IntelliSense
As VBA developers, we are very glad to have IntelliSense. We see it as a great time saver during development because we can use it to help complete text while writing code. IntelliSense is also available when developing the XML for a ribbon customization using Visual Studio 2005.
In order to use IntelliSense in Visual Studio 2005, you need to download the XML schema for ribbon development. The schema is included as a part of the 2007 Microsoft Office System XML Schema reference, which is available for download from the Microsoft Web site.
Once you have the schema, select the customUI.xsd schema for a given XML document in Visual Studio 2005 as follows:
Launch Visual Studio 2005. You can use any version of Visual Studio, including the Express Editions, which are freely available for download from the Microsoft Web site.
In Visual Studio 2005, click File, select New, and then click File.
Select XML File and click Open.
Click the builder button in the Schemas property for the file. This property is available in the Properties window for the file.
Click the Add button and browse to the customUI.xsd file that was installed as a part of the XML Schema reference.
Make sure the customUI.xsd schema is checked and click OK in the XSD Schemas dialog box in Visual Studio.
When the schema has been added to the document, you should receive IntelliSense for the customUI node, as shown in Figure 11-2.
Figure 11-2. Receiving IntelliSense for the customUI node
Prevent Loading at Startup
If you are developing a ribbon customization to replace the built-in Access Ribbon, you may want to prevent your customization from loading in order to use the development tools and ribbons included with Access. To prevent your ribbon customization from loading during development, hold down the Shift key.
Finding Controls and Images
Office 2007 includes a great number of built-in controls and images that you can use in your applications. As you will see later in this chapter, you are not limited to using images from Access. You can use images from other Office applications such as Microsoft Word, Microsoft Excel, and Microsoft Outlook as well! With all of these options available, finding controls and images can be a bit daunting. Luckily, Microsoft has provided some resources to help you find controls and images for use in your applications.
Each application that supports the Ribbon in Office 2007 includes a built-in mechanism for finding controls in the Customize page of the Options dialog box for the application. For example, let's say that you wanted to find the built-in control ID for the Access Close Database button in the Office menu. Here's an easy way to find the control.
Open the Access Options dialog box and select the Customize page. The Access Options dialog box is available under the Office menu in Office 2007.
Select Office Menu from the Choose commands from drop-down.
Hover the mouse over Close Database. You should see a tooltip that provides the name of the button, FileCloseDatabase, as shown in Figure 11-3.
Figure 11-3. Hover over a control
This is most helpful when you know exactly what you are looking for. To prevent wading through dialog boxes looking for a given control, Microsoft provides a download called "List of Control IDs" that contains lists of all control ID values in Office 2007. This is also available for download on the Microsoft Web site.
More Resources
Microsoft began to release documentation about the Ribbon for developers prior to the release of Office 2007. With the vast amount of changes made to the user interface, this was intended to get the word out early so that developers were prepared. One resource that has been very useful for us is the Office Fluent User Interface Developer Center on MSDN.
How to Write Ribbon Customizations
As mentioned earlier, using IntelliSense to write the XML for a ribbon customization makes authoring the Ribbon much easier. This gives you more time to focus on how you'd like the Ribbon to look and function. With this in mind, we'll use Visual Studio to write ribbon customizations. Later in this chapter, we show you where to save the customization for use in your application.
Getting Started
The root node of a ribbon customization is the customUI node, which defines the XML namespace that is tied to the schema for ribbon customizations in Office 2007. Let's use this node to start writing our first ribbon customization. This first customization is pretty straightforward, but it will introduce you to the process.
<customUI xmlns=""> </customUI>
Start by creating a new XML file in Visual Studio and add the customUI.xsd schema to the document, as described in the section, "Using IntelliSense."
Next, add the XML for the customUInode, as shown in the code that follows. IntelliSense makes adding this node easier, complete with the namespace.
<customUI xmlns=""> </customUI>
The first thing you do is tell the Ribbon that we are adding to the built-in Access Ribbon. To do this, use the Ribbon node as shown in the following XML.
<customUI xmlns=""> <ribbon> </ribbon> </customUI>
Next, add a tab with one group to the built-in Access ribbon. To do this, add the XML as shown in the following. We discuss tabs and groups in more detail in the section "Organizing Ribbon Items."
<customUI xmlns=""> <ribbon> <tabs> <tab id="tab1" label="My Tab"> <group id="grp1" label="My Group"> </group> </tab> </tabs> </ribbon> </customUI>
Now that you have a group, you can start to add controls. Many people start programming in Access and VBA with command buttons, so you'll do the same here. Add a button to the group using the button node as shown in the following.
<customUI xmlns=""> <ribbon> <tabs> <tab id="tab1" label="My Tab"> <group id="grp1" label="My Group"> <button id="btn1" label="Hello World" onAction="=MsgBox('Hello World')"/> </group> </tab> </tabs> </ribbon> </customUI>
Note the use of the onAction attribute. This attribute is defined for several nodes in the Ribbon schema and is called when the user invokes some functionality on a control. Attributes on controls that call into custom code are called callbacks. In this case, we're using the MsgBox function in an expression to display the obligatory Hello World message. Using expressions for ribbon customizations in Access is convenient, but this is a programming book. We're here to write code. The remaining callbacks in this chapter use VBA in a database to provide more functionality.
Congratulations! You've written a ribbon customization! You'll see how to add this to a database in a few moments.
Common Attributes
The onAction attribute is one you are likely to see frequently. Other attributes that you'll commonly use are listed in the following table.
Once you've written a ribbon customization, you need a way to load it into Access. There are two ways to do this. The easiest way is to use a user-defined system table called USysRibbons.
Using the USysRibbons Table
The easiest way to load a custom ribbon is to use a special table called USysRibbons. This table has two fields, as shown in the table that follows.
Create this table now to add the Ribbon customization that you created earlier. Add a record to the table and set the RibbonName field to HelloWorld, and copy and paste the XML created earlier in the RibbonXml field.
When Access opens a database, it looks for a USysRibbon stable and loads any ribbon customizations that are defined by the records in the table. Ribbon customizations must be named uniquely throughout the application so you cannot duplicate a RibbonName. Make this field a primary key if you define a lot of ribbons in an application to prevent yourself from creating duplicate names.
Once you've added the HelloWorld ribbon in the table, let's tell Access to load this ribbon when the database is opened. To do this, Access includes a new property in the Current Database page of the Access Options dialog box called Ribbon Name. Set this property to the name of a ribbon customization to load, in our case, HelloWorld.
After setting the property, Access tells you that you need to close and re-open the database for the property to take effect. When you re-open the database, you should have a new tab on the Access Ribbon called My Tab, as shown in Figure 11-4.
Figure 11-4. You should have a new tab called My Tab
The Ribbon Name property is stored in DAO as a database property on the DAO.Database object called CustomRibbonId. Use the following code to retrieve the property.
(Visual Basic for Applications)
MsgBox CurrentDb().Properties("CustomRibbonId")
Using the LoadCustomUI Method
Obviously any data you store in the database adds to its size. If you are using several ribbons in an application, you might want to store ribbons external to your database. Access 2007 includes a new method on the Application object called LoadCustomUI that allows you to do this.
This method only loads ribbon customizations into Access. It does not update the current ribbon. As a result, you need to tell Access to display a ribbon either by:
Setting the Ribbon Name property for the database, or
Setting the Ribbon Name property for a form or report
We also tend to call this method in a conditional statement as shown in the following example.
(Visual Basic for Applications)
Function LoadRibbonFromCommandLine() As Long Dim stXmlPath As String Dim stXmlData As String ' get the path to the ribbon and make sure it exists stXmlPath = CurrentProject.Path & "\LoadCustomUITest.xml" Debug.Assert (Len(Dir(stXmlPath)) > 0) ' load the ribbon from disk Open stXmlPath For Input Access Read As #1 stXmlData = Input(LOF(1), 1) ' remove the byte order mark stXmlData = Mid(stXmlData, InStr(stXmlData, "<customUI")) Close #1 ' Check the command line. If you pass "DEBUG" to the /cmd switch, ' load the ribbon with startFromScratch="false" If (Command$() = "DEBUG") Then stXmlData = Replace(stXmlData, _ "startFromScratch=""true""", _ "startFromScratch=""false""") End If ' load the ribbon LoadCustomUI "rbnMain", stXmlData End Function
The Ribbon customization being loaded from disk defines the startFromScratch attribute as true. In this function, we are checking the command line using the VBA Command function. If we pass in the string DEBUG, then we change the XML after it has been loaded to set it to false. This is a simple indicator that we are working in a debug version of the database and we want to see the Access Ribbon to do development work. You could use a similar technique to load a different ribbon customization altogether either for yourself or for users in a particular security group.
Programming the Ribbon
To write ribbon customizations that are flexible and dynamic, you need to write some code. If you were familiar with programming CommandBar objects in previous Office versions, you'll find that things have changed quite a bit.
The biggest difference is that there isn't a direct mechanism for setting properties on the Ribbon. In other words, with a CommandBarControl object, you could set its caption by using something like the following:
(Visual Basic for Applications)
Dim objControl As Office.CommandBarControl Set objControl = CommandBars("Menu Bar").Controls(0) objControl.Caption = "Test Caption"
You cannot change properties of controls in the Ribbon in this manner. Instead, you must use callback routines that are named in attributes in the Ribbon-customization XML. A callback is a procedure that the Ribbon calls when it needs to check the state of an object, such as whether a control is enabled or visible; or when it needs data, such as in a combo box control; or when the user has taken some action, such as clicking a button.
Without an object model that includes properties, such as the CommandBarControl object, most values for controls in a customization are set using the arguments that are defined in the callback code. We take a closer look at this model throughout this chapter.
Ribbon Objects
The callback routines defined by the Ribbon typically include an instance of an IRibbonControl object that defines the control whose callback is being fired. To use this object, set a reference to the Office 12.0 Object Library. Once you've set this reference, you can write callback routines. For example, the signature for the onAction callback for a button is defined as follows.
(Visual Basic for Applications)
Public Sub OnAction(ctl As IRibbonControl) End Sub
The routine can be named any valid procedure name in VBA. In the XML that defines the callback, use the name of the routine as shown.
<button id="btn1" label="button" onAction="OnAction"/>
Using Callback Routines
The Ribbon defines several callback attributes that are used for a number of scenarios as mentioned earlier. Callback routines are similar to events. When the Ribbon determines that it needs some information or when something occurs, it notifies you using callback routines. The good news is that you are given the control for each callback, so you can reuse the code for a callback routine for multiple controls.
For example, if you use buttons in your customizations, you're likely to use the onAction attribute. Let's say that you use several buttons to open forms in your application. Rather than writing a callback for each form, you might store the name of the form in the tag attribute of a control. As with the Tag property of Access controls, the tag attribute of Ribbon controls lets you store extra data with a control. This means you can write one callback routine to handle opening forms such as the following:
<button id="btnMoreOptions" onAction="OnOpenForm" tag="frmOptions" label="More Options"/> <button id="btnHelp" onAction="OnOpenForm" tag="frmHelp" label="Help"/> <button id="btnHome" onAction="OnOpenForm" tag="frmHome" label="Home"/>
Then the code for the callback is straightforward, as follows.
(Visual Basic for Applications)
Public Sub OnOpenForm(Control as IRibbonControl) DoCmd.OpenForm Control.Tag End Sub
Common Callback Routines
Several controls define callbacks that are common to many controls. The following table lists callbacks that are common across controls that you may use frequently. We use some of these callbacks later in this chapter in specific scenarios.
Similar Properties, Methods, or Events in Access
The following table lists several of the callbacks for specific controls along with the equivalent event for the corresponding control in Access.
For a complete list of callbacks with the expected signatures for the routines refer to MSDN.
Refreshing Controls
You may want to refresh controls in your customizations from time to time. For instance, let's say that you were writing an application for a doctor's office and wanted to use a drop-down control to allow users to set their status to either available or away. In the away state, you might want to lock controls in the Ribbon to prevent users from accessing items while the person was away. In order to do this, you need to handle the onLoad callback in the Ribbon.
The onLoad callback is defined on the customUI node as follows.
<customUI xmlns=
The signature for this callback in code receives an IRibbonUI object, which defines two methods: Invalidate and InvalidateControl. Cache a copy of this object in a global variable to call these methods. The callback is called when the customization is loaded and only then. You cannot call it again. Because globals are reset when an unhandled error occurs, test the object for Nothing before calling the method. The following code shows you how to cache the object.
(Visual Basic for Applications)
Public gobjRibbon As IRibbonUI Public Sub OnRibbonLoad(objRibbon As IRibbonUI) Set gobjRibbon = objRibbon End Sub
Then, to refresh, or invalidate all controls in the Ribbon, call:
If (Not gobjRibbon Is Nothing) Then gobjRibbon.Invalidate End If
To refresh an individual control, call:
If (Not gobjRibbon Is Nothing) Then gobjRibbon.InvalidateControl "ControlName" End If
Organizing Ribbon Items
The top level of the command bar structure in previous versions of Office and Access was the menu. Menus contained controls or other menus that sometimes resulted in deeply nested hierarchies that made items difficult to find. One goal of the Ribbon is to expose commonly used commands so that they are visible to users. Let's look at how you can organize controls in a ribbon customization.
Tabs
The top-level means of organization for items in a ribbon customization is a tab. Tabs can contain group nodes. To define a custom tab, use the tab node in the customization, such as the following:
<tabs> <tab id="tabMyTab" label="My Tab"> </tab> </tabs>
The tab node is contained within the tabs node unless you are using contextual tabs as described in the next section.
Contextual Tabs
Tabs that appear for a particular view of an object are called contextual tabs. Access provides several built-in contextual tabs. For example, there are contextual tabs in design view of Forms, Reports, Macros, and Queries, and when a table is open in datasheet view. You'll also find contextual tabs in other applications in Office. For example, Word provides contextual tabs when designing tables.
To create your own contextual tabs in a customization, use the contextualTabs element in XML. This node must contain another element named tabSet. In the tabSet element, specify an idMso attribute of TabSetFormReportExtensibility. This tells Access to provide contextual tabs for a form or report. Follow these steps to create contextual tabs.
Create a new XML file and add the root node and the Ribbon node, as follows.
<customUI xmlns=""> <ribbon>
Add the contextualTabsandtabSetnodes, as follows.
<contextualTabs> <tabSet idMso="TabSetFormReportExtensibility">
Add the Ribbon customization as normal. In this example, we create contextual tabs that contain two tabs and two groups.
<tab id="tab1" label="Tab 1"> <group id="t1grp1" label="Tab 1 - Group 1"> </group> <group id="t1grp2" label="Tab 1 - Group 2"> </group> </tab> <tab id="tab2" label="Tab 2"> <group id="t2grp1" label="Tab 2 - Group 1"> </group> <group id="t2grp2" label="Tab 2 - Group 2"> </group> </tab>
Close the nodes that were created above, as follows.
</tabSet> </contextualTabs> </ribbon> </customUI>
Add this customization to a USysRibbons table and set the RibbonName field to ctabReport1.
Create a new report named Report1.
Open the report in design view and set the Ribbon Name property of the report to ctabReport1. This list is refreshed when you close and re-open the database. Set the Caption property of the report to My Report: Caption.
Close and re-open the database and open Report1 in Report View. You should see the two tabs that you defined in the customization, as shown in Figure 11-5.
Figure 11-5. You should see the two tabs that you defined
Notice that the caption of the report was used in the title bar over the two tabs that were defined. If the Caption property is empty, the name of the object is used in this title.
Groups
We have already worked with groups in our customizations. The group node can contain other control types.
Ribbon Controls
The Ribbon provides many different types of controls that you can add to a customization. We've already looked at some of the different types of controls, but let's take a closer look.
Buttons
Buttons are created using the button node and are likely to be the control you use the most when designing a ribbon customization. Buttons can be created in two different sizes — large or normal — as indicated by the size attribute, as shown in the following.
<button id="btnMyButton" label="My Button" size="large"/>
Figure 11-6 shows five buttons, two of which are large and three that are normal size.
Figure 11-6. Two of the buttons are large and three are normal
Toggle Buttons
As with toggle button controls in Access forms, toggle buttons in a ribbon customization enable you to reflect a true/false state. For example, imagine that you are creating a context-sensitive help system in your application that displays help information in a form. Using a toggle button, you can show or hide the help form.
Toggle buttons are created using the toggleButton node, as shown in the following, and can also appear in normal or large size:
<toggleButton id="tglHelpForm" size="large" imageMso="Help" label="Help Form" onAction="OnPressedAction"/>
The onAction callback for the toggle button is used to hide or show a form called frmHelp, as follows:
(Visual Basic for Applications)
Public Sub OnPressedAction(ctl As IRibbonControl, Pressed) If (ctl.ID = "tglHelpForm") Then If (CurrentProject.AllForms("frmHelp").IsLoaded) Then ' close the form DoCmd.Close acForm, "frmHelp" Else ' show the form DoCmd.OpenForm "frmHelp" End If ' refresh the control If (Not gobjRibbon Is Nothing) Then gobjRibbon.InvalidateControl "tglHelpForm" End If End If End Sub
This customization creates a toggle button, as shown in Figure 11-7.
Figure 11-7. The code creates a toggle button
When you click the toggle button, the form is shown as depicted in Figure 11-7. When you click the toggle button again, the form should be closed.
Check Boxes
Check boxes have similar behavior to toggle buttons in that they can show a true/false state. They are created using the checkbox node, as follows.
<checkBox id="chk2" label="Checkbox 2"/>
Figure 11-8 shows three check boxes, one of which is disabled using the enabled attribute with a value of false.
Figure 11-8. Three check boxes with one disabled
Check boxes can also appear in menus but they appear differently as shown in the previous example. The following XML creates two menus that contain check boxes.
<menu id="mnuCheck1" label="Normal checkboxes"> <checkBox id="mnuchk1" label="Menu checkbox 1"/> <checkBox id="mnuchk2" label="Menu checkbox 2"/> </menu> <menu id="mnuCheck2" label="Large checkboxes" itemSize="large"> <checkBox id="mnuchk3" label="Menu checkbox 1"/> <checkBox id="mnuchk4" label="Menu checkbox 2"/> </menu>
When you check a normal check box in a menu, you should see something that resembles the check box in Figure 11-9.
Figure 11-9. A normal check box in a menu
We'll talk about menus a little later, but you'll also notice that the menu node contains an attribute called itemSize. When you set this attribute to large, the items inside of a menu appear larger. When you select a large check box in a menu you should see something that resembles Figure 11-10.
Figure 11-10. A large check box in a menu
Combo Boxes and Drop-Downs
As with combo boxes in Access, combo boxes can be used to provide the user with a list of options to choose from. Combo boxes are created using the comboBox node and contain item nodes. Using item nodes in a combo box creates a static list of items as shown in the following XML and in Figure 11-11:
<comboBox id="cboStatic" label="Static combo box"> <item id="cboItem1" label="Item 1"/> <item id="cboItem2" label="Item 2"/> <item id="cboItem3" label="Item 3"/> </comboBox>
Figure 11-11. A combo box with a static list of items
To create a combo box with a dynamic list of items, you need to use callbacks. Creating a dynamic combo box is described in the section "The NotInList Event — Ribbon Style."
A drop-down control is very similar to a combo box, but the user cannot type text in it. Drop-down controls are created using the dropdown node. These controls are very useful for giving the user a list of items that cannot change, such as a status. The following XML defines a drop-down control with two items. (When you click the drop-down, you should see the items listed, as in Figure 11-12.)
<dropDown id="ddStatic" label="Static dropdown"> <item id="ddItem1" label="Item 1" imageMso="HappyFace"/> <item id="ddItem2" label="Item 2" imageMso="Info"/> </dropDown>
Figure 11-12. A drop-down control with two items
You'll notice that these controls can also contain images. We'll go into more detail about images later in this chapter.
Labels and Edit Boxes
The Ribbon also defines labels and edit boxes that are similar to labels and text boxes in Access. Label controls can also be disabled, but we tend to leave them enabled to show information. Label controls are useful for displaying status information about data in the application. You might also use them to show the current date and time.
Edit boxes are useful for letting the user enter any information. Later on we'll take a look at using a label control and edit box for form navigation.
Menus
Menus are still available in the Ribbon. Their use, however, is limited to cases when there are multiple command choices that are grouped together, crowding the Ribbon. Menus take on the new appearance of the Ribbon, but still let you construct hierarchies as with previous versions of Office. Let's take a look at the different features of menus. In the following sections, we look at two buttons as they appear in menus that have been decorated with the specified features.
itemSize Attribute
As we mentioned earlier, menus contain an attribute called itemSize that lets you control the size of items under the menu. This attribute can be set to either normal or large. Figure 11-13 shows the two buttons when the itemSize is set to normal.
Figure 11-13. The itemSize attribute set to normal
When the itemSize attribute is set to large, you should see something similar to Figure 11-14. We've also set the size attribute of the menu to large so that it fills all three rows of the Ribbon.
Figure 11-14. The itemSize attribute set to large
description Attribute
The description attribute is available for controls when you set the itemSize attribute of the menu to large. This attribute is used to provide more text inside the menu for a given control. The XML that defines this attribute looks like this.
<menu id="mnuDescriptions" label="Menu: Descriptions" itemSize="large" size="large"> <button id="mnuBtn1D" label="Button1" description="Click here to run something cool"/> <button id="mnuBtn2D" label="Button2" description="Click here to run something even cooler"/> </menu>
The two buttons that show descriptions are shown in Figure 11-15.
Figure 11-15. Two buttons with descriptions
Menu Separators
Menu separators provide a nice separation of controls in a menu and are created using the menuSeparator node. This control defines an attribute called title that is used to include text in the separator, as shown in the following XML:
<menuSeparator id="msCheckboxes" title="Check Boxes"/>
This creates a menu separator that contains text, as shown in Figure 11-16.
Figure 11-16. Menu separator that contains text
The title attribute is optional, however, so if you don't define it you get menu separators that look like the flat menu separators from previous versions of Office. These separators appear in Figure 11-17.
Figure 11-17. Menu separators that do not contain text
Next Part: Chapter 11: Creating Dynamic Ribbon Customizations (2 of 2) | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/dd548010(v=office.12) | CC-MAIN-2018-34 | refinedweb | 5,050 | 61.16 |
I need to parse a text file with the following format and convert it to a Hash which will be converted to JSON.
The text file has this format:
HD040008000415350110XXXXXXXXXX0208XXXXXXXX0302EN0403USA0502EN0604000107014
EM04000800030010112TME001205IQ50232Blue Point Coastal Cuisine. INC.06145655th Avenue0805921010909SAN DIEGO1008Downtown1102CA1203USA
EM
04
0008
EM 04 0008
00
EM0400080003001
{"EM" => 0008, "00" => "001"}
This is a very common type of encoding called Type-Length-Value (or Tag-Length-Value), for reasons I suppose are obvious. As with many such tasks in Ruby,
String#unpack is a good fit:
def decode(data) return {} if data.empty? key, len, rest = data.unpack("a2 a2 a*") val = rest.slice!(0, len.to_i) { key => val }.merge(decode(rest)) end p decode("HD040008000415350110XXXXXXXXXX0208XXXXXXXX0302EN0403USA0502EN0604000107014") # => {"HD"=>"0008", "00"=>"1535", "01"=>"XXXXXXXXXX", "02"=>"XXXXXXXX", "03"=>"EN", "04"=>"USA", "05"=>"EN", "06"=>"0001", "07"=>"4"} p decode("EM04000800030010112TME001205IQ50232Blue Point Coastal Cuisine. INC.0614565 5th Avenue0805921010909SAN DIEGO1008Downtown1102CA1203USA") # => {"EM"=>"0008", "00"=>"001", "01"=>"TME001205IQ5", "02"=>"Blue Point Coastal Cuisine. INC.", "06"=>"565 5th Avenue", "08"=>"92101", "09"=>"SAN DIEGO", "10"=>"Downtown", "11"=>"CA", "12"=>"USA"}
If you want to read an entire file and return a JSON array of objects, something like this would suffice:
#!/usr/bin/env ruby -n BEGIN { require "json" def decode(data) # ... end arr = [] } arr << decode($_.chomp) END { puts arr.to_json }
Then (supposing the script is called
script.rb and is executable:
$ cat data.txt | ./script.rb > out.json | https://codedump.io/share/BTvKJVigiK7Q/1/how-to-parse-a-text-file-containing-multiple-lines-of-data-and-organized-by-numerical-values-and-then-convert-to-json | CC-MAIN-2017-26 | refinedweb | 230 | 76.01 |
I read with interest an article by Robert Hayden called “Advice to mathematics teachers on evaluating introductory statistics textbooks.” It makes two claims:
- Statistics textbooks should emphasize real problems with real data. I couldn't agree more.
- You should choose a statistics textbook written by a statistician at a large university who has good research publications. I couldn't agree less.
As an example (in fact, the only example) he cites this problem from an unnamed popular textbook:
“Jimmy Nut Company advertises that their nut mix contains 40% cashews, 15% brazil nuts, 20% almonds, and only 25% peanuts. The truth in advertising investigators took a random sample (of size 20 lb) of the nut mix and found the distribution to be as follows:
Cashews Brazil Nuts Almonds Peanuts
6 lb 3 lb 5 lb 6 lb
At the 0.01 level of significance, is the claim made by Jimmy Nuts true?”
Prof. Hayden hates this problem. He thinks it is ill-posed, and evidence of everything wrong with statistics books. He explains: “The problem here is that the chi-squared goodness of fit test applies only to categorical (discrete) data. It compares the actually observed counts in each category to the counts we would expect if the hypothesis being tested were true.”
This is true, and if you blindly plug the numbers from the problem into the formula for a chi-squared statistic, the result is meaningless. In fact, if you express the given data in other units, you can make the resulting p-value as big or small as you like.
But that doesn’t mean the textbook author has made a “gross error,” because the question doesn’t say “plug these numbers in blindly and write down the result without thinking.” The question doesn’t even specify the chi-squared statistic.
The question, as written, is perfectly sensible, moderately challenging, and interesting. It requires thought and care to formulate the question in a way that leads to a meaningful answer, and it is exactly the kind of question someone doing statistics is likely to be asked.
I don’t know what the textbook author had in mind, but there is nothing wrong with the question as written. And to prove it, I’m going to answer it. I follow the procedure I recommend in my article "There is only one test."
The first step is to make a model of the null hypothesis, which is that the proportion of nuts is as advertised. But if the proportions are always as advertised, the data observed would be impossible. So we need a random model that characterizes the variation we expect to see in a sample.
What are the sources of variation in a 20 lb sample of nuts? One source is mechanical error; that is, the people and machines that weigh and mix nuts might be miscalibrated. If the accuracy of this process varies from day to day, we could treat it as a random variation. To quantify this effect, we would need some information about how the nuts are processed.
Another source of error, and probably the one the textbook author had in mind, is discrete sampling. If we take a small sample, we would not expect the proportions to be exactly as advertised. To model this kind of variation, we have to count individual nuts.
To convert the data from weights to counts, we have to know how much each kind of nut weighs. This step requires some research, which I think is an interesting element of the problem. Since we live in a post-Google information utopia, it is reasonable to expect students to answer this question, and useful for them to practice.
I found several sources with information about the weight of typical nuts, some more authoritative than others. The one I use is “Nuts for Nutrition,” by Alice Henneman, MS, RD, Extension Educator at the University of Nebraska Cooperative Extension - Lancaster County. In addition to the credentials and affiliation of the author, and the witty article title, I chose this source because it provides numbers for all four nuts in one table, so even if the methodology that produced them is not ideal, it is probably consistent, which means that the relative values are likely to be good even if the absolute values are not. From the table:
Nuts per ounce: Cashew 16-18, Brazil nut 6-8, Almonds 20-24, Peanuts 28.
To start I use the middle of each range. With this data we can convert observed values from pounds to counts and convert the expected proportions from “by weight” to “by count”. Here’s a function that does it:
def ConvertToCount(sample, count_per):
"""Convert from weight to count.
sample: Hist that maps from category to weight in pounds
count_per: dict that maps from category to count per ounce
"""
for value, count in sample.Items():
sample.Mult(value, 16 * count_per[value])
The code examples here use the Pmf and Cdf libraries from Think Stats. Here’s the code that processes the information we’re given:
sample = dict(cashew=6, brazil=3, almond=5, peanut=6)
count_per = dict(cashew=17, brazil=7,
almond=22, peanut=28)
observed = Pmf.MakeHistFromDict(sample)
ConvertToCount(observed, count_per)
advertised = dict(cashew=40, brazil=15,
almond=20, peanut=25)
expected = Pmf.MakePmfFromDict(advertised)
ConvertToCount(expected, count_per)
expected.Normalize(observed.Total())
Here are the results in tabular form:
Nut Expected Observed % error price per pound
cashew 2266 1632 -28 % $10
brazil 350 336 - 4 % $8
almond 1467 1760 +20 % $8
peanut 2333 2688 +15 % $3
So there are 28% fewer cashews than expected, and 20% more almonds. Is this suspicious? Intuitively, this looks like more deviation than I expect in a sample this large. And the relationship between “error” and price makes it look even worse.
Let’s make that more rigorous. The next step to choose a test statistic. Mean relative error would be a reasonable choice. Or I could devise a statistic that measures the excess of cheap nuts and lack of expensive nuts. But to keep it simple I'll use the chi-square statistic. The primary advantage of chi-square is that we can compute p-values analytically, which is efficient. But since I am planning to use simulation, the only advantage is that it is conventional.
For the given data the chi-square statistic is 291, which doesn’t mean anything. To make it mean something, we have to compute the p-value, which is the probability of seeing a chi-square statistic so large under the null hypothesis.
Which means we need to get back to our model of the null hypothesis. One option is to imagine choosing nuts one at a time from a vat that contains a large number of nuts mixed in the advertised proportions. Here’s what that looks like:
num_nuts = observed.Total()
cdf = Cdf.MakeCdfFromHist(expected)
t = cdf.Sample(num_nuts)
hist = Pmf.MakeHistFromList(t)
chi2 = ChiSquared(expected, simulated)
If we run that 1000 times, we can count how many samples yield chi2 > 290. And the answer is...0. In fact, after 1000 simulations, the largest test statistic is 20. So the p-value is much smaller than 1/1000 and it is unlikely that the observed sample came from the advertised mixture, at least under this model.
Since there was some uncertainty in the data we collected, we should do a perturbation analysis. For example, Jimmy might claim that he used big nuts, which would make the counts smaller and the results less significant. So let’s give Jimmy the benefit of the doubt and run the numbers with these counts per ounce:
Nuts per ounce: Cashew 12, Brazil nut 5, Almonds 18, Peanuts 24.
With these data, the observed test statistic is only 215 (down from 290). But the chance of seeing a value that large under the null hypothesis is still negligible.
More credibly, Jimmy might claim that the model of the null hypothesis is unrealistic because the nuts are not perfectly mixed, so the sampling process is not independent. In fact, correlation between successive draws would increase variation in the sample. To see why, imagine the extreme scenario where the nuts are not mixed at all; in that case, most samples contain only one kind of nut.
To model this scenario, we can make a list that represents the vat of nuts the sample is drawn from. Initially the vat is completely unmixed. Then we "stir" by choosing random pairs and swapping them.
def MakeVat(expected, num_nuts, factor=10, stir=0.0):
"""Makes a list of nuts with partial stirring."""
t = []
for value, freq in expected.Items():
t.extend([value] * int(freq * factor))
[RandomSwap(t) for i in xrange(int(num_nuts*factor*stir))]
return t
expected is a Pmf that represents the expected distribution. num_nuts is the total number of nuts in the sample. factor is the size of the vat, in multiples of num_nuts. And stir is the number of swaps to perform per nut in the vat.
I set factor to 10, which means nuts are mixed in 200 lb batches. With stir = 0, the nuts are unmixed and the p-value is 1; that is, it is certain that we will see a large deviation from the advertised proportions. If stir > 2, the nuts are well mixed and the p-value is 0, as we saw before.
Between 1.1 and 1.3 swaps per nut, the p-value drops quickly from 1 to 0. So Jimmy's argument is plausible: if the nuts are not well mixed, that could explain the deviation we saw in the sample.
In summary, what can we conclude about the Jimmy Nut Company?
1) If the sample is drawn from a well mixed batch of nuts, it provides strong evidence that the mix is not as advertised.
2) But if the nuts are not well mixed, that might explain the deviation we saw. We could test that hypothesis by mixing the vat more thoroughly or taking small samples from several parts of the vat.
And what can we conclude about the Jimmy Nut Company problem?
1) I think it's an excellent problem, as written. It requires careful thinking and a little research, and the results could lead to interesting class discussion.
2) I see no evidence that this problem indicates a conceptual error on the part of the textbook writer.
In summary, I agree with Prof. Hayden that statistics students should work with real data, but respectfully disagree with his assertion that statistics books should be written by statisticians. I think they should be written by computer scientists.
UPDATE August 30, 2011: With the help of Google Books, I found the source of this problem: Understandable statistics: concepts and methods by Charles Henry Brase and Corrinne Pellillo Brase. The problem appears in the 5th edition, but not the current 9th edition. | http://allendowney.blogspot.com/2011_08_01_archive.html | CC-MAIN-2015-06 | refinedweb | 1,815 | 63.39 |
I am trying to make a program that copies the contents of any number of files into one big file. No breaks or anything needs to exist in between the contents of the files. I have a good working start, but it will only copy the contents of the first file I designate and not the rest. Also, at the end of the file, it puts this weird y looking character on the end. I am guessing my problem lies in the fgetc and fputc functions and probably the way I have the loop set up to copy and paste the characters as a whole. Anyways, I point in the right direction would be nice.
Code:
//the program basically copies the contents of files designated in the command line argument into one big file
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
FILE *concat, *fp;
char c;
int n;
concat = fopen("concat.txt","w");
for(n=1; argv[n] != NULL; n++){ //loops through all files specified in command line argument
if(fopen(argv[n], "r") == NULL){ //if the file cant be opened (aka NULL) then satte file cant be opened
printf("Error. File ""%s"" cant be opened.\n", argv[n]);
exit(0);
}
else{ //if the file can be opened, then open it in read mode
fp=fopen(argv[n], "r");
while(c != EOF){ //while the file isn't at the end, EOF, copy each character into c then put c into concat
c = fgetc (fp);
fputc(c, concat);
}
fclose(fp);
}
}
fclose(concat);
system("PAUSE");
return 0;
} | http://cboard.cprogramming.com/c-programming/51713-combining-file-contents-command-line-printable-thread.html | CC-MAIN-2014-41 | refinedweb | 260 | 79.8 |
.
TIP To make sure you're configuring and using the right network interface, rename the NICs to Internal and External.
If you don’t have internet access on the IIS ARR server, you can use the steps highlighted in How to install Application Request Routing (ARR) 2.5 without Web Platform Installer (WebPI)..
On the Server Farm settings node make the configuration changes as detailed below:
Time-Out: 200 seconds
Response Buffer threshold: 0
Note: Make sure the option “Stop processing of subsequent rules” is selected. This is to make sure that the validation process stops once the requested URL finds a match..
B. Roop Sankar
Premier Field Engineer, UK
Excellent, ARR can work with Exchange is a good news. I always liked this option over other load balancing method but it was never possible to use with Exchange Servers. Good Move Gentlemen!
ARR can also provide a reverse proxy option for Lync:
Anyone tried this with Exchange 2010 ?
It does work with Exchange 2010 - we have it set up for reverse proxying autodiscover and EWS.
sounds cool BUT
the only thing it really "buys" you is it will block any other url but these that are published
compared to tmg.isa content inspection inside those packets also no?
so its a nice little cheap solution I admit but it really does need an additional layer if you ever want to secure and monitor what's going on inside those rpc traffic for example...
I guess its cool:)
Can you please describe below point:
1.) Load balancing in ARR for CAS will be intelligent or is it just like Round-Robin?
2.) Can we deploy two ARR server with windows NLB to achieve HA?
Excellent Article.
We are planning to implement a CAS load balancer, Does this solution could work instead? We are running Exchange 2010.
This is great. I really think this is a great direction. TMG just wasn't scaling anymore and its content inspection is for the days of IIS5. For me, just having a solution pre-auth external connections before they can hit the internal prod servers will help me sleep.
QUESTION: Where did you get the stencils from your first illustration? Is this new visio stencil ?
@Chris Dearie, if you Bing terms like Exchange 2013 Visio stencil you should end up at this in the end;
Great article. I can get OWA and ECP to work fine but when using EWS I get a 502 error - Web server received an invalid response while acting as a gateway or proxy server. Any ideas where I am going wrong?
@Ajay -
1. When you set the option to "Least Current Request", then the traffic is distributed based on the current number of HTTP requests between ARR and each of the CAS servers. Requests are routed to the server with the least number of current HTTP requests.
This article would give you additional information on the available Load Balancing algorithms in IIS ARR.
technet.microsoft.com/.../dd443524.aspx
@Keith - Make sure that EWS External & Internal URL's have been published correctly. Run, Get-WebServicesVirtualDirectory to see the namespace EWS is published on.
Example: mail.tailspintoys.com/.../Exchange.asmx
Hi Roop
Both URL's are set correctly....any other ideas?
Thanks
Keith
ARR is a great reverse proxy, but it doesn't provide pre-authentication like TMG does it? This is the big selling point with other application proxies like TMG..
Hi Roop
The 502 - Bad Gateway error I was getting is a bug in ARR when used with Windows 2012. Here is the fix:
Just in case anyone else gets this frustrating problem!
Working a treat now
How should you configure redundancy between 2 ARR boxes?
@ Keith -- That's a bit strange that this patch fixed the EWS issue while OWA and ECP was already working without this patch. So I don't think this is a bug as such but I am investigating this behaviour. Also, this patch is for IIS 7.0 and IIS 7.5 so applies
to both Windows Server 2008 R2 and Windows Server 2012. Anyway, happy to hear that you have this all working now.
@Cameron -- That is correct, IIS ARR doesn't provide any pre-authentication. If pre-auth is a requirement then you can look at Web Application Proxy (WAP) which is available in Windows Server 2012 R2.
Enable Work from Anywhere without Losing Sleep: Remote Access with the Web Application Proxy and VPN Solutions
channel9.msdn.com/.../WCA-B333
@ High Availability -- You can configure two IIS ARR boxes either in an Active/Passive or Active/Active configuration.
Active/Passive - This configuration achieves high availability.
Active/Active - This configuration achieves both high availability and scalability.
Achieving High Availability and Scalability - ARR and NLB.
Any ideas?
one question: what about SMTP?
How to filter out non- Exchange URLs?
Hi:
Does Outlook Anywhere Works?
@bwitch -- IIS ARR cannot be used for SMTP
@Petri X -- Every URL that IIS ARR recives is evaluated against the URL Rewrite Rules you (Admin) have defined. So if you take the examples from this article then IIS ARR will BLOCK all requests except the ones for mail.tailspintoys.com and autodiscover.tailspintoys.com. Hence any URL's that the Admin has not defined in IIS ARR will be blocked by default.
@ Diego Arias -- Yes, it does work for Outlook Anywhere.
What about TMG authentication such as Radius with constrained delegation, Certificate Based with Constrained Delegation, NTLM with constrained delegation?
We are using these authentication methods to enable finger print authentication for OWA, certificate authentication for ActiveSync, and NTLM with constrained delegation for Outlook Anywhere, respectively.
What about them Jim? ARR is one option, TMG is another (using radius is very slow performance wise by the way, you should switch to LDAP or direct AD), this article covers ARR only.
Roop,
Aah, you speak about host name filtering, but I meant the URL path filtering (after the hostname).
On the "URL Rewrite" picture you have now "URL Path = *", which basically allows everything to come in. But is it so, that if you add URL path=/owa/* the ARR will block everything else, except OWA traffic (assuming the host part is valid)? Or let the ARR be the request as untouched if it cannot find the rewrite rule (assuming still that host part is valid)?
@Petri X -- Yes, you can have IIS ARR further "filter" for the URL path after the hostname i.e: /OWA/*, /ECP/* etc. IIS ARR can indeed be configured to verify both the Hostname (mail.tailspintoys.com) and the URL path (/OWA) before blocking or allowing the traffic/request through.
And just as wishful thinking, I have talked about this in "Part3" of the series, which is going to be published soon :)
Excellent!
What about sizing the AAR Reverse Proxy with RAM and CPU, especially in a VM enviroment?
If my IIS ARR server is sitting in a DMZ and the "internal" interface is separated from the CAS servers by a firewall, what ports/protocols need to be allowed between the ARR and CAS servers?
I also have some sizing questions like Bernd asked in a previous post.
Thanks
Richie
I tried this and the outlook activity test could not reach the RPC server. I tried installing RPC over HTTP on the ARR server and I get a 401 access denied. If I route the traffic directly to the Exchange server I have no problem. Is there something special that you have to do on the ARR server to get RPC to work?
I also only receive HTTP 401 whilst trying to connect to an Exchange 2010 NLB CAS Array. Any suggestions?
Is there a way to use client certificates with this method? We can no longer use KCD with TMG and I was hoping this was the solution to replace that but it doesn't work either. I get 502 errors when I try to use client certificates.
I have the same situation as @Keith Gibson.
All servers are Windows Server 2012, ARR 3.0
Have you got any ideas?
Can ARR be used with single NIC? Our single NIC is able to communicate both externally and internally
Hi Everyone,
First gratz again on this great article.
As for @Keith Gibson and @itpadla I was having the same issue where both servers are enabled and when I enter my login details, click sign-in, it would just return me to the login page. This was for logging into the ECP. To resolve this I actually enabled Client Affinity on the Server Farm settings.
This leads me into my questions.
1. Why do we have the HTTPS and not HTTP in the inbound rules in URL Rewrite. I am finding that if I wanted to go to OWA I need to enter HTTPS://... but if I type in HTTP it response with a simple IIS page. Is there a way to redirect to HTTPS from HTTP and still keep the rule setup in this article?
2. Next, is it OK to enable Client Affinity on my ECP server farm? Does this open any security issues from the client to the ARR IIS server?
Thanks...
How does this relate to the Web Application Proxy feature just released in Server 2012 R2?
Good blog, I managed to get Outlook Anywhere (rpc) working after some troubleshooting.
I allready had ARR 2.5 installed on a Windows 2012 domain controller. Wich worked for some other websites than Exchange 2013. I managed to get owa working(ish) but with this blog I got it right.
Only thing that didn't work was Outlook Anywhere, it kept coming with a user and password window. In the log files I found error 400 and 401.
Eventually found that in IIS on the CAS server, in the rpc virtual directory the Basic authentication was disabled. When manually enabling Basic authentication, Outlook Anywhere works like a charm.
After that I did a change in the OutlookAnywhere config in Exchange. For both internal en external I set Basic authentication.
Afterwards I upgraded Windows 2012 to R2 and ARR 2.5 to 3.0.
It still works without modifying the configuration.
I too am questioning the need for two network interfaces on the ARR Server. I've seen this suggestion in a number of ARR How-To articles, but haven't seen an explanation as to why. When you suggest an "internal" network interface, are you proposing that it be connected directly to a subnet/VLAN on the internal network? If so, that would be circumventing a network firewall checkpoint, which I don't think we should be comfortable with.
Sankar, May I use ARR for another web app than OWA or Lync?
Sankar, May I use ARR for another web app than OWA or Lync?
Sankar, May I use ARR for another web app than OWA or Lync? | https://techcommunity.microsoft.com/t5/Exchange-Team-Blog/Part-1-Reverse-Proxy-for-Exchange-Server-2013-using-IIS-ARR/bc-p/592540/highlight/true | CC-MAIN-2019-47 | refinedweb | 1,812 | 73.68 |
is the namespace omnidir not available on opencv4 in python3.6
i instaled opencv 4 and checked the Version.
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18)
">>> import cv2
">>>cv2.____version____"
'4.0.0'
i cant find the functions cv.omnidir.calibrate / cv2.omnidir.calibrate in opencv
do i have to compile it by myselfe or is there a way to use the omnicam calibration with python as described and Dokumented?......
what exactly did you install, and how so ?
I downloaded opencv winpack 4.0.0, opend it, used python setup.py install and set the paths on a new installed win 10 pc with python 3.6.5.
Other functions/namespaces do work, but I can't find the new omnicam calibration. | http://answers.opencv.org/question/207193/is-the-namespace-omnidir-not-available-on-opencv4-in-python36/ | CC-MAIN-2019-18 | refinedweb | 129 | 77.53 |
22 December 2011 05:11 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
“We are in the phase of equipment installation now and will complete project construction around April next year,” the source said.
The start-up could be delayed as the company has yet to obtain the qualification certificate for port construction that could take several months, a source close to the company said.
The new plant’s PTA production is only for captive use, the company source said.
Financial details of the project were not immediately available.
Tongkun Group, the fifth-largest polyester manufacturer in
Tongkun Group plans to add two polyester yarn lines at Tongxiang and Huzhou, respectively, in
The top four Chinese polyester makers by capacity in 2011 are Jiangsu Sanfangxiang Group, Zhejiang Hengyi Petrochemical, Sinopec Yizheng Chemical Fibre and Hengli | http://www.icis.com/Articles/2011/12/22/9518665/Chinas-Jiaxing-Petrochemical-to-start-up-PTA-unit-in-Q2.html | CC-MAIN-2014-41 | refinedweb | 134 | 58.32 |
kpenoWRobot user
Content Count48
Joined
Last visited
About kpeno
- RankAdvanced Member
Recent Profile Visitors
The recent visitors block is disabled and is not being shown to other users.
- kpeno started following Support and problem with new version
problem with new version
kpeno posted a topic in WRobot for Wow The Burning Crusade - Help and supportHello, after update from oldest version to new - wrobot close after i click :"start". i try clear install also - nothing help 😞 log below. Vanilla also have this problem Old version with same profile working fine. 14 янв 2019 21H23.log.html
- kpeno started following All running bots closed at one time
All running bots closed at one time
kpeno commented on zzzar's bug report in Bug TrackerSame problems, and after this relogger try restart bot and have message :"Game version incorrect" and all stops until restart handly 🙂
- kpeno started following Need help with close wrobot via c\# code and Pathfinder problem again?
Pathfinder problem again?
kpeno replied to Findeh's topic in WRobot for Wow Vanilla - Help and supportNeed code for pause bot 🙂 Or he is going in the wall
Need help with close wrobot via c# code
kpeno posted a topic in WRobot for Wow Vanilla - Help and supportSometime have bug, where bot not closed after character change :"Character changed, closing bot." And i cant get access to MemoryRobot.Memory.Close(); Any way to avoid it? if (Logging.ReadLastString(Logging.LogType.Error) == "Character changed, closing bot.") { MemoryRobot.Memory. }
meshes in silithus near/in Cenarion hold
kpeno commented on kpeno's bug report in Bug TrackerIt's vanilla.
- kpeno started following Why is admin using my account?! and meshes in silithus near/in Cenarion hold
meshes in silithus near/in Cenarion hold
kpeno posted a bug report in Bug TrackerCenarion hold have very bad meshes and cant run to repair/fly master. offmesh not work, because bot think :"i know patch!" 😞
Avvi reacted to a post in a topic: Why is admin using my account?!
Why is admin using my account?!
kpeno replied to yori69's topic in General discussionMy english sux and i have no idea what here talking about, but it's fine
- kpeno started following Development tools
Development tools
kpeno posted a bug report in Bug TrackerIn 2+ version for TBC(others not tested) problem with this. If open and click any option e.q: (my position) - this utility show result e.q: (x = 1,y = 2, z = 0),if you running out from this position(not restart development tools) and click again get my position - nothing change, same with target info. And small not bug, but not true - after get any results, scroll always down, but(for me) by default best info in Upper scroll.
Error wow message text
kpeno replied to kpeno's topic in WRobot for Wow Vanilla - Help and supportOh, this really work only for vanilla, maybe you can tell where i can get info about others codes and others addons ? :)
- kpeno started following Remove wow event
Remove wow event
kpeno posted a topic in WRobot for Wow Vanilla - Help and supportHi, anyone know how disable luaevent wow after i added it ?:) added ez :"EventsLua.AttachEventLua(LuaEventsId.CURRENT_SPELL_CAST_CHANGED, m => azaza());", but not found Detach method :(
- If anybody have same problem, Custom Profile.dll in Products folder :facepalm and :cry :D
- did not help
Custom profile to dll
kpeno posted a topic in WRobot for Wow Vanilla - Help and supportHi, any way to make invisible error in visual studio with code :"public class CustomProfile : Custom_Profile.ICustomProfile" . I am not found any library from wrobot directory who see this namespace and if try compilite always have this error :(
settings custom profile
kpeno posted a topic in WRobot for Wow Vanilla - Help and supportAny ideas how add button or keyboard combination for show settings in custom profile in any moment as i want ?
How to open "Big-mouth Clam" while fishing?
kpeno replied to eniac86's topic in WRobot for Wow Vanilla - Help and supportdelete | https://wrobot.eu/profile/43398-kpeno/ | CC-MAIN-2019-09 | refinedweb | 664 | 58.62 |
# Help for Google Apps Sync Tool
The LDAP Synchronization Tool is designed to facilitate the initial provisioning of an enterprise on Google Apps, and ongoing maintainance thereafter. Its goal is to be a scriptable tool which fits into an IT department's overall chain of tools for employee management, rather than an all-encompassing product. For this reason, it does not have a GUI. The intent is that the source code for the tool be made freely available.
Many organizations use an LDAP server for maintaining and serving data on employees, computers, printers, mail accounts, and much more. Particularly, almost any Windows shop is very likely using Exchange with ActiveDirectory. Google Apps currently has an administrative GUI, suitable for management of individual users with some bulk-upload capabilities, and a programmatic API. This tool is intended to fill the gaps left by those methods, by permitting the enterprise admin to regularly synchronize the LDAP server with Google: add new employees, remove terminated employees, and update employees whose data has changed. The tool uses only the Google Apps Provisioning API to talk to Google; no other information is ever sent to Google.
The tool is written in Python and requires 2.4 or above. It mines an LDAP server (e.g. MS ActiveDirectory, OpenLdap, etc.) for users, according to a search filter entered by the user, and copies them into a Python dictionary. It reads and writes its state to a CSV or XML file. The overall workflow is:
To motivate the rest of this doc, here are two examples of how an admin might use the Tool:
In a steady state, the admin has previously mined the LDAP directory, added all the employees to Google Apps, and saved the users to a file 'yourdomain.users.csv'. He/she has also saved a configuration file 'yourdomain.com.cfg'. This workflow represents what would be done every day/week/whatever to sync up the LDAP directory with Google. The commands are shown in bold.
$ ./sync_ldap.py -c yourdomain.com.cfg -f yourdomain.users.csv
Command: connect
Connected to LDAP://yourldapserver.yourdomain.com
Command: updateUsers
Add users that match your user filter of
(objectclass=organizationalPerson)
with Google attributes mapped as follows:
GoogleUsername mail[:mail.find('@')]
GoogleFirstName givenName
GoogleOldUsername None
GoogleLastName sn
GooglePassword "password"
GoogleApplyIPWhitelist False
Add users that match your user filter of
(&(objectclass=organizationalPerson)(modifyTimestamp>=20061101185255.0Z))
with Google attributes mapped as follows:
Added 11 new users and marked them for
addition to Google Apps. Total is now 707.
Found 45 users in database which
have changed in LDAP, and marked them for updating in Google
Apps for Your Domain.
Here you need to set several variables to indicate how to login administratively to Google Apps
set admin domain-admin@yourdomain.com
set domain yourdomain.com
set password secret
To configure the system to use your python code from user_transofrmation_rule.py instead of the default rule use something like
mapGoogleAttribute GoogleLastName GoogleLastNameCallback
mapGoogleAttribute GoogleFirstName GoogleFirstNameCallback
mapGoogleAttribute GoogleUsername GoogleUsernameCallback
To finally go ahead and sync all users with Google Apps run
Command: syncAllUsers
# results not shown for privacy reasons
The syncAllUsers command would have authenticated with Google Apps, added the 11 new users, and updated the 45 whose information has changed. The data file 'yourdomain.users.csv' is automatically saved upon exit. Note that a perfectly reasonable variant on this workflow is to not do the syncAllUsers, and instead open up 'yourdomain.users.csv' in a CSV reader tool, and examine the 'meta-Google-action' column to make sure that the correct things will in fact be done by syncAllUsers. The user might also want to run some other program on the CSV file, since the Tool really is just one part of a chain.
You may get an error similar to the following
05-16 14:37 root ERROR failure to handle 'added' on ...: ProvisioningApiError: Invalid character: password.newPasswords.beta: Must be at least 6 characters
password.newPasswords.alpha: Must be at least 6 characters
in which case you must edit the CSV file to include a password.
You may get this error
05-17 13:38 root ERROR failure to handle 'added' on ...: ProvisioningApiError: InvalidQuota(1041): Invalid quota '1024'
which indicates you are attempting to use a quota different than the acceptable quotas for you domain. This is set by google. You can set it to one of the quota options available to your domain by changing the GoogleQuota attribute via the mapGoogleAttribute command. This attribute is also supported via GoogleQuotaCallback to user_transformation_rule.py.
There were two files used as input in the last example, yourdomain.com.cfg (the configuration) and yourdomain.users.csv (the actual employee data). This is how we could have gotten them:
$ sync_ldap.py
Command: set ldap_url LDAP://yourldapserver.yourdomain.com
Command: set ldap_base_dn dc=yourdomain,dc=com
Command: set ldap_user_filter (objectclass=organizationalPerson)
Command: connect
05-16 12:58 root INFO Connected to LDAP://yourldapserver.yourdomain.com
Connected to LDAP://yourldapserver.yourdomain.com
Command: set ldap_timeout 180 <== This is needed for large queries like the one above
Command: testFilter
Search found 706 users
Retrieving all attributes on a small sample of these...
The set of attributes defined on these users is:
[ 'accountInstance',
'accountType',
'adminAssistant',
'cn',
'ctCalMail',
'ctCalOrgUnit2',
'ctCalOrgUnit3',
'ctCalOrgUnit4',
'ctCalXItemId',
'displayName',
'gecos',
'gidNumber',
'givenName',
'jpegPhoto',
'l',
'labeledURI',
'loginShell',
'mail',
'o',
'objectClass',
'ou',
'pager',
'physicalDeliveryOfficeName',
'roomNumber',
'sn',
'telephoneNumber',
'title',
'uid',
'uidNumber',
'createTimestamp',
'creatorsName'
]
Google suggests retaining the following attributes:
cn
displayName
gidNumber
givenName
googlePassExpire
googlePassLastChg
googlePassLastWarn
mail
mailHost
mailRoutingAddress
miMailExpirePolicy
networkPortMD5Password
networkPortPassword
physicalDeliveryOfficeName
sn
uid
uidNumber
Google suggests mapping Google attributes as follows:
GoogleLastName sn
GoogleUsername mail[:mail.find('@')]
GooglePassword "password"
GoogleApplyIPWhitelist False
GoogleFirstName givenName
Google suggests as the "last updated" timestamp attribute:
modifyTimestamp
Accept these suggestions?y
Command: attrList remove networkPortMD5Password
'networkPortMD5Password' removed. There were 0 users with non-null values
for that attribute.
Command: attrList remove networkPortPassword
'networkPortPassword' removed. There were 0 users with non-null values
for that attribute.
Command: updateUsers
Add users that match your user filter of
(objectclass=organizationalPerson)
with Google attributes mapped as follows:
GoogleLastName sn
GoogleUsername mail[:mail.find('@')]
GooglePassword "password"
GoogleApplyIPWhitelist False
GoogleFirstName givenName
Add users that match your user filter of
(objectclass=organizationalPerson)
with Google attributes mapped as follows:
Added 706 new users and marked them for
addition to Google Apps. Total is now 706.
Command: writeUsers yourdomain.users.csv
Writing user file to yourdomain.users.csv
Done
Command: stop
You did not supply a config file on the command line
(via the '-c' argument).
A config file will capture all your LDAP and user settings so that you don't
need to run commands next time to set them. Do you want to save your
configuration in a file (y/n) y
Enter a file name: yourdomain.cfg
Notes on the above commands:
The tool uses the optparse package from Python 2.4, which is why it requires that version. Here is the output when "--help" is passed in:
usage: sync_ldap.py [-v][-q] [-f <dataFile>]
options:
--version show program's version number and exit
-h, --help show this help message and exit
-f DATA_FILE, --dataFile=DATA_FILE
User data file (XML or CSV), both read from and
written to.
-c CONFIG_FILE, --configFile=CONFIG_FILE
Configuration file (standard Python format)
-l LOG_FILE, --logFile=LOG_FILE
Log file (defaults to stdout/stderr)
All the options really are "optional." The tool can prompt for whatever it needs and doesn't already have. The -c option provides a configuration file (syntax given below), which contains almost every possible setting. If a configuration file is not supplied, you can set all the variables via the 'set' command, and on exit, a config file can be written out.
The DATA_FILE need not be given, but if it is, the user database is populated from the file before the command interpreter starts.
The tool does not assume that the LDAP server is ActiveDirectory, OpenLdap, or any other product, and thus it doesn't know a priori what attributes should be used to derive required Google attributes. Furthermore, LDAP directories often have many attributes which are of no conceivable use for Google Apps provisioning, and which can be large and not even ASCII text, e.g. JPEG images, Microsoft security descriptors, etc. Since the Tool just uses a Python dictionary for its storage rather than something more scaleable, it's desirable to not maintain more data than absolutely necessary for Google provisioning. For this reason, a fair number of commands are aimed at discovering and managing the set of attributes to be maintained by the tool, and "mapping" (explained below) LDAP attributes to Google attributes. This was illustrated in the Standing Start example above. Although the admin can always use some other tool to read in the XML or CSV file and assign Google attributes in a way that makes sense to him/her, the tool can also take mappings of one or more LDAP attributes to Google attributes. This mapping is actually a general Python expression, as the GoogleUsername illustrates. An example of this mapping is shown in the Standing Start section:
NOTE: do not use None for the GooglePassword. Google Apps requires a password. You will need to set this to a string literal as shown in the examples in this document (not recommended), or write a Python expression based on your ldap attributes to come up with a password. Alternatively, you can use a user_transformation_rule (as described elsewhere in this document) to contruct a password based upon some authoritative source of information that only the user would know about him / herself.
The admin can always modify the list of attributes and the mappings, whether or not he/she accepts the Google suggestions. Commands
The tool uses the cmd package from the Python standard library to implement a fairly nice command interpreter, with command-line editing and recall ala GNU readline. Here is the output from "?" (or equivalently, "help"):
Command: ?
Documented commands (type help <topic>):
========================================
attrList mapGoogleAttribute showLastUpdate syncAllUsers
batch markUsers showUsers testFilter
connect readUsers stop updateUsers
disconnect set summarizeUsers writeUsers
Undocumented commands:
======================
EOF help
Command: help attrList
Configures the LDAP attributes that are retrieved
from your server and stored in the database and in your XML file. If an
attribute is not needed for configuring Google Apps,
consider removing it.
Usage:
attrList add <attrName>
attrList remove <attrName>
attrList show (displays the current list)
Command: help batch
Executes commands from a text file
As the example shows, the help for any command can be seen by typing help <command>
The command names should be considered provisional and subject to change! I'm not going to claim that a whole lot of thought's gone into them, so far.
In general, you set a configuration variable via the 'set' command, and configure it permanently via a standard Python Library "ConfigParser" file. Here are all possible variables, only some of which are required for any given run of the Tool:
admin
attrs
domain
ldap_admin_name
ldap_disabled_filter
ldap_host
ldap_password
ldap_port
ldap_root_dn
ldap_timeout
ldap_user_filter
ldap_page_size
logfile
loglevel
mapping
max_threads
primary_key
timestamp
tls_option
tls_cacertdir
tls_cacertfile
To get a comprehensive description of these parameters type
set
at the "Command:" prompt within the tool.
Here is an example of the config file:
[ldap-sync]
max_threads = 10
primary_key = 'uidNumber'
ldap_port = 389
loglevel = 11
logfile = '/bob/log.txt'
timestamp = 'modifyTimestamp'
mapping = {'GoogleUsername': "mail[:mail.find('@')]", 'GoogleFirstName': 'givenName', 'GoogleLastName': 'sn', 'GooglePassword': '"password"', 'GoogleApplyIPWhitelist': False}
ldap_host = 'yourldapserver.yourdomain.com'
attrs = set(['cn', 'meta-last-updated', 'uidNumber', 'GoogleFirstName', 'gidNumber', 'GoogleApplyIPWhitelist', 'meta-Google-action', 'uid', 'GoogleUsername', 'displayName', 'modifyTimestamp', 'sn', 'mail', 'GoogleLastName', 'givenName', 'GooglePassword'])
ldap_disabled_filter = '(objectclass=organizationalPerson)'
ldap_user_filter = '(objectclass=organizationalPerson)'
ldap_timeout = 180.0
ldap_root_dn = 'dc=yourdomain,dc=com'
The astute observer will recognize that, especially for 'attrs' and 'mapping', this is just the output of a Python 'repr' command. In fact all these parameter values are gotten that way, and the strings are 'eval' -ed on the way in to recover the values. This is tolerable for simple numbers and strings, but probably not for the more complex syntaxes, unless the admin is very fluent with Python. For this reason, it's disallowed to set 'attrs' and 'mapping' via the 'set' command, and you have to use commands that are tailored specifically for those variables ('attrList' and 'mapGoogleAttribute' respectively).
These are, hopefully, pretty well explained just by their names:
The ldap_disabled_filter in particular is optional. It's intended for environments where former employees are kept in the directory but one of their variables is changed to reflect their employment status. If there is no ldap_disabled_filter, exited employees are assumed to be deleted, and discovered by their absence in a scan. Google config variables These are only required when the syncAllUsers command is issued:
Some of these could equally well be considered "LDAP variables", I suppose, but they're mainly looked at by the "userdb" module:
NOTE: there are differences in the way the filter string above is constructed based on the type of ldap directory being accessed. This is due to slight differences in handling the format of the modifyTimestamp. At the moment the tool tries to guess whether Active Directory is being accessed and formats the query string appropriately. It uses the presence of the sAMAccountName field in your "attrs" config variable as an indicator that Active Directory is being accessed. For more details about these slight differences, see the AndUpdateTime() method in commands.py.
Also you may want to restrict the types of operations allowed by the sync. Specifically the google_operations variable setting below will restrict changes to updates only.
[ldap-sync]
google_operations = ['updated']
Miscellaneous config variables
Reference for the 'logging' module:
50: critial errors only
40: errors
30: warnings
20: info messages
10: debug messages
0: all messages
testFilter, attrList, updateUsers commands
If you think of the ultimate result of a tool session, it's an NxM table. There are N rows, each row being a user, and M columns, each column an LDAP attribute, meta-attribute (more on this below), or Google attribute (some piece of data that's going to be uploaded to Google). As we said earlier, it's important as a practical matter to keep M under control, since LDAP directories can have a lot of attributes of no possible use in Google provisioning, and those attributes can be large. Thinking of it that way, set ldap_user_filter is for controlling N, attrList is for viewing and controlling M, and testFilter does a little of both, and is optional. updateUsers is the payoff from almost all the commands mentioned so far.
set ldap_user_filter just sets the LDAP filter that is used in updateUsers, and has the same syntax as a filter for the ldapsearch utility. An example for Google might be "(objectClass=googleOrgPerson)"
attrList show displays the attributes that will be saved on the next updateUsers call.
attrList add adds an additional LDAP attribute.
attrList remove removes an attribute. This actually goes through the Python dictionary and removes existing attributes of that name.
testFilter is an attempt at "LDAP discovery" for first-time users. Once you've used the tool and gotten your user filter and attribute set the way you want it, you wouldn't use this. testFilter is intended to help you find out reasonably quickly which attributes are in your schema. testFilter
<filter>
showUsers, summarizeUsers commands
This is not a data exploration tool. Save the file and open it in some CSV reader if you really want to look at your data.
showUsers <start number> <end number> displays the users from
<start>
<end>
Command: showUsers 1
Display new users 1 to 1
1: cn=Fred299 Flinstone,o=Acme Inc.,c=US
{ 'GoogleFirstName': '',
'GoogleLastName': 'Flintstone',
'GooglePassword': '',
'GoogleRole': '',
'GoogleUsername': 'fred299',
'cn': 'Fred299 Flinstone',
'gidNumber': '800',
'mail': 'fred299@yourdomain.com',
'meta-Google-action': 'add',
'meta-Google-status': '',
'meta-last-updated': '1159480585.93',
'modifyTimestamp': '20060622180843Z',
'sn': 'Flintstone',
'uid': 'fred299',
'uidNumber': '800',
'userPassword': 'abc123'}
If you don't supply
summarizeUsers provides an overview of the data (more explanation below on what the Google data means):
Command: summarizeUsers
Total users: 1003
Marked for addition to Google: 1003
Marked for deletion from Google: 0
Marked for update: 0
Deriving Google attributes
testFilter, mapGoogleAttribute commands
Although probably for most installations, the 'sn' attribute can be mapped directly to GoogleLastName and 'givenName' can be mapped to GoogleFirstName, we can't assume anything, and getting the customer's desired GoogleUsername is even dicier. But we want this tool to be useful, so there's a way that, when a user is added, his or her Google attributes can be derived from the LDAP attributes, and this derivation can be any legal Python expression, e.g. the "mail[:mail.find('@')]" example above, which takes "fred@acme.com" and produces "fred."
Technical details: this is actually done with a Python 'eval' statement, making the global namespace the set of attributes for that user. So your expression can pull in any other data about the user, but nothing else. testFilter suggests a set of mappings, using some very simple heuristics.
mapGoogleAttribute <attr> <expression> derives
<attr>
<expression>
showLastUpdate, updateUsers commands
When we want to pick up "changes" in LDAP since we last did an updateUsers, we have to answer the question "changes since when?" There are a lot of possible ways to answer the "when" question, and we can and should debate that, but the provisional answer is "since you last did updateUsers." So there is a meta-attribute "meta-last-updated" added to each user record giving the time that the updateUsers that brought in this user was issued. showLastUpdate displays the highest value of meta-last-updated.
Furthermore, as far as I can tell there is no uniformity among LDAP schemas on the attribute representing "last changed" time In openldap it's "modifyTimestamp" but with ActiveDirectory it seems to be "whenChanged" (among other attributes that also have that information!), and LDAPv3 has a "modifiedTime" standard attribute (but of course v3 is hardly universal yet). So one of the "attribute discovery" functions the tool does is suggest which attribute you want to use to denote "last changed". updateUsers, then, finds T, the latest meta-last-updated attribute in the user database, and adds a term to the user filter "(modifyTimestamp>=T)" or "(whenChanged>=T)" or whatever attribute is being used for "last changed." (It really ought to be ">" and not ">=", but for some reason LDAP doesn't support ">" and "<" in filters.) Each user found by that query is added to the database with the value of another meta-attribute, "meta-Google-action", set to "add". After that, we want to find the users that are no longer active, so another query is done, in one of two ways, depending on whether the ldap_disabled_filter variable is defined: If there is both a ldap_disabled_filter and a timestamp defined: a query is done on users passing the disabled filter, and having a timestamp greater than the last time we did updateUsers. Those users are considered exited. Otherwise, a query is done on ldap_user_filter, and all old users no longer passing the query are considered exits.
summarizeUsers, markUsers, syncAllUsers, syncOneUser commands
Each user has a meta-Google-action attribute, which is used by syncAllUsers, and as mentioned above, updateUsers tries to be smart about setting it appropriately. Furthermore, the admin can always save the file and open it in a CSV reader that makes it easy to search and edit en masse. But for single user changes, markUsers can be used to change the meta-Google-action attribute.
summarizeUsers displays what your workload with respect to Google is:
Command: summarizeUsers
Total users: 1003
Marked for addition to Google: 1003
Marked for deletion from Google: 0
Marked for update: 0
Marked for rename: 0
syncAllUsers is the ultimate goal of the tool. It uses the Provisioning API to actually go to Google and add, delete, update and rename the users according to their meta-Google-action attribute. Once the meta-Google-action has been successfully carried out on the user, the meta-Google-action attribute is set to null.
syncAllUsers works as follows (the same logic is followed in turn for adds, exits, updates, and renames):
The default handling of each possible action is: added: CreateAccount(), EnableEmail() exited: LockAccount() updated: UpdateAccount() renamed: RenameAccount()
This is for the case of "a new user doesn't have his/her mail account yet, they're standing right there in my office, and we need to enable them now." Or, "someone was just terminated, and we need them removed now." The working assumption here is that the user's record in LDAP should be assumed correct, so the task is to sync it with Google. (If the LDAP record is not correct, that should be taken care of before executing this command.) syncOneUser is somewhat involved, since there are a lot of cases to consider. In general, the command updates the user's record from LDAP, queries Google and displays the results it gets back, and then displays its conclusion as to what happened with this user (was he/she added, exited, etc.?) The admin gets a chance to OK it before the result is executed.
Here are some examples: User already present in UserDB and in Google:
Command: syncOneUser name=Anew User
Now looking up 'anewuser' in Google Apps...
Google Apps returned the following data:
userName : anewuser
firstName : Anew
lastName : google
accountStatus : unlocked
aliases :
Google Apps is up to date. No action needed.
User's LDAP record has been changed (new first name):
Command: syncOneUser name=Anew User': 'updated',
'meta-last-updated': 1164846094.3949389,
'name': 'Anew User',
'sn': 'Formeruser',
'userPrincipalName': 'anewuser@yourdomain.com',
'whenChanged': '20061130002558.0Z'}
Now looking up 'anewuser' in Google Apps...
Google Apps returned the following data:
userName : anewuser
firstName : Anew
lastName : google
accountStatus : unlocked
aliases :
The recommended action for this user is to
treat user as having been: updated
Proceed with that action (y/n) y
User's LDAP record gets deleted
Command: syncOneUser name=Anew User
Search in LDAP found 0 users
There are 1 users in the user database that match
your query.': 'exited',
'meta-last-updated': 1164846561.784775,
'name': 'Anew User',
'sn': 'Formeruser',
'userPrincipalName': 'anewuser@yourdomain.com',
'whenChanged': '20061130002558.0Z'}
Now looking up 'anewuser' in Google Apps ...
Google Apps returned the following data:
userName : anewuser
firstName : Anewer
lastName : google
accountStatus : locked
aliases :
The recommended action for this user is to
treat user as having been: exited
Proceed with that action (y/n) n
readUsers, writeUsers commands
User data can be saved in either of two popular forms: XML and CSV. readUsers
<filename>
batch, stop commands
You can batch up a set of frequently-used commands into a file and run that: batch
setHost yourldapserver.yourdomain.com
setRootDN dc=yourdomain,dc=com
connect
setFilter (objectclass=inetOrgPerson)
stop exits the command interpreter. (as does control-D on Linux and control-Z on Windows)! | http://code.google.com/p/google-apps-for-your-domain-ldap-sync/wiki/HowToUseIt | crawl-002 | refinedweb | 3,784 | 51.68 |
Python also provides a handy string method for including variables in strings. This method is
.format().
.format() takes variables as an argument and includes them in the string that it is run on. You include
{} marks as placeholders for where those variables will be imported.
Consider the following function:
def favorite_song_statement(song, artist): return "My favorite song is {} by {}.".format(song, artist)
The function
favorite_song_statement takes two arguments, song and artist, then returns a string that includes both of the arguments and prints a sentence. Note:
.format() can take as many arguments as there are
{} in the string it is run on, which in this case is two.
Here’s an example of the function being run:
print(favorite_song_statement("Smooth", "Santana")) # => "My favorite song is Smooth by Santana"
Now you may be asking yourself, I could have written this function using string concatenation instead of
.format(), why is this method better? The answer is legibility and reusability. It is much easier to picture the end result
.format() than it is to picture the end result of string concatenation and legibility is everything. You can also reuse the same base string with different variables, allowing you to cut down on unnecessary, hard to interpret code.
Instructions
Write a function called
poem_title_card that takes two inputs: the first input should be
title and the second
poet. The function should use
.format() to return the following string:
The poem "[TITLE]" is written by [POET].
For example, if the function is given the inputs
poem_title_card("I Hear America Singing", "Walt Whitman")
It should return the string
The poem "I Hear America Singing" is written by Walt Whitman. | https://production.codecademy.com/courses/learn-python-3/lessons/string-methods/exercises/format-i | CC-MAIN-2021-04 | refinedweb | 273 | 64 |
Bug in rich:tabPanel tab switching?Steve Stair Jun 25, 2012 3:23 PM
I've got the simplest possible tabPanel and a submit button, like this
<body> <h:form> <rich:tabPanel <rich:tab <rich:tab </rich:tabPanel> <a4j:commandButton </h:form> </body>
and a simple backing bean
package test; public class TestBeanModel { private String selectedTab; public String getSelectedTab() { return selectedTab; } public void setSelectedTab(String selectedTab) { System.out.println("setSelectedTab(" + selectedTab + ")"); this.selectedTab = selectedTab; } public boolean isSubmitDisabled() { System.out.println("isSubmitDisabled()"); return true; } }
Every time I switch tabs, I see output like this:
isSubmitDisabled() setSelectedTab(Two)
I'm assuming that it is rerendering the submit button, and therefore calling the isSubmitDisabled() method. Why?
I tried adding ajaxSingle="true" to the tabPanel, but it doesn't make any difference. I can of course change the switchType to client, but then the server can't tell which tab is selected.
I saw a post here that, instead of just using switchType="ajax", used "client", and called an <a4j:jsFunction> ontabchange ().
I didn't see any reason why this is different than using switchType="ajax", until I added ajaxSingle="true" to the jsFunction.
<body> <h:form> <a4j:jsFunction <a4j:actionparam </a4j:jsFunction> <rich:tabPanel <rich:tab <rich:tab </rich:tabPanel> <a4j:commandButton </h:form> </body>
With that change, my tab changes only call setSelectedTab(), as I needed. Am I missing something?
I'm using RichFaces 3.3.2.SR1
1. Re: Bug in rich:tabPanel tab switching?Steve Stair Jun 25, 2012 4:02 PM (in response to Steve Stair)
I guess tabPanel is working as designed, in that it is processing the whole form. I think I'm just upset that there's no ajaxSingle support on tabPanel.
2. Re: Bug in rich:tabPanel tab switching?Christian Peter Jun 26, 2012 1:48 AM (in response to Steve Stair)
you could surround the tabpanel with an a4j:region if you like. Maybe this changes the behavoir in this case.
3. Re: Bug in rich:tabPanel tab switching?Steve Stair Jun 26, 2012 10:24 AM (in response to Christian Peter)
It works! As long as I don't care about the contents of the panel getting processed when the tab changes. Thanks. | https://developer.jboss.org/thread/201660 | CC-MAIN-2018-26 | refinedweb | 370 | 56.25 |
While advising on how to put together a C data collection program, I was part of a conversation that suggested that to host a web page of results we need to install Apache. No way!
Sockets are a general purpose way of communicating over networks and similar infrastructure. Essentially they are a generalization of streams to things other than storage devices. The problem with sockets is that they are called "sockets", which sounds strange. In addition they are so general purpose that it can be difficult to see what you can do with them.
The case mentioned in the introduction was a C program that needed to send some data using the web - an HTML page or JSON data. At first sight this seemed to need a web server and so the programmers concerned were considering installing Apache. The system in question was a Raspberry Pi and could manage to run Apache, but in this case the whole solution was overkill.
It is very easy to implement a simple web server or a web client using sockets. This is what this article explains and along the way you will discover how versatile sockets are. You can use them to communicate using almost any standard protocol, like HTTP, or a custom protocol of your own devising. All sockets do is transport data from one point to another.
The basic steps in using a socket are fairly simple:
Sockets connect to other sockets by their addresses.
The simplest case is where there are just two sockets or two endpoints communicating. Once the connection is made the two sockets operate in more or less the same way. However, in general one of the sockets will have initiated the connection - the client - and the other will have accepted the connection - the server.
There is a conceptual difference between a client and a server socket. A server socket is setup and then it waits for clients to connect to it. A client socket actively seeks a connection with a server. Once connected, data can flow in both directions and the difference between the two ends of the connection becomes less.
The key idea is that a socket is implemented to make it look as much like a standard Linux file as possible. This conforms with a general principle of Linux that any I/O facility should follow the conventions of a file.
The basic socket functions that you need to know are:
sockfd = socket(int socket_family, int socket_type, int protocol);
socket_family
This returns a socket descriptor an int which you use in other socket functions.
The socket_family is where you specify the type of communications link to be use and this is where sockets are most general. There are lots of communications methods that sockets can use including AF_UNIX or AF_LOCAL which don't use a network but allow intercommunication between processes on the same machine. In most cases you are going to be using AF_INET for IPv4 or AF_INET6 for IPv6 networking.
The socket_type specifies the general protocol to be used. In most cases you will use SOCK_STREAM which specifies a reliable two-way connection - for IP communications this means TCP/IP is used. For some applications you might want to use SOCK_DGRAM which specifies that the data should be sent without confirming that it has been received. This is a broadcast mechanism that corresponds to UDP for IP communications.
The protocol parameter selects a sub-protocol of the socket type. in most cases you can simply set it to zero.
As we are going to be working with sockets that basically work with the web we will use AF_INET and SOCK_STREAM.
To connect a socket as a client of another socket you need to use
int connect(int sockfd, const struct sockaddr *addr, socklen_t
addrlen);
The sockfd parameter is just the socket file descriptor returned from the socket function. The addr parameter points at a sockaddr struct which contains the address of the socket you want to connect to. Of course addrlen just specifies the size of the struct.
Socket address type depend on the underlying communications medium that the socket used but in most cases, and certainly in this article, it is just an IP address.
As addresses are used in many different socket function it is worth dealing with how to construct an address as a separate topic.
To assign a server socket an address it will respond to use:
int bind(int sockfd, const struct sockaddr *addr,
socklen_t addrlen);
The sockfd parameter is just the socket file descriptor returned from the socket function and addr is a pointer to an address struct.
Beginners often ask what the difference is between connect and bind. The answer should be obvious. Connect makes a connection to the socket with the specified address whereas bind makes the socket respond to that address. Put another way - use connect with a client socket and bind with a server socket.
There isn't anything much to say about sending and receiving data from an open socket because it is just a file and you can use the standard read and write functions that you would use to work with a file. Of course there are some differences and some additional features that you need to work with a network, but this is the general principle.
There is one small matter that we have to deal with that takes us beyond simple file use semantics. If you have opened a socket and bound it to an IP address then it is acting as a server socket and is ready to wait for a connection.
How do you know when there is a connection and how do you know when to read or write data?
Notice this problem doesn't arise with a client socket because it initiates the complete connection and sends and receives data when its ready.
The function:
int listen(int sockfd, int backlog);
sets the socket as an active server. From this point on it listens for the IP address it is bound to and accepts incoming connections. The backlog parameter sets how many pending connections will be queued for processing.
The actual processing of a connection is specified by the:
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
The accept command provides the address of the client trying to make the connection in the sockaddr structure. It also returns a new socket file descriptor to use to talk to the client. The original socket carries on operating as before. Notice that this is slightly more complicated than you might expect in that it is not the socket that you created that is used to communicate with the client. The socket you created just listens out for clients and creates a queue of pending requests. The accept function processes these requests and creates a new socket that is used to communicate with the client.
This still doesn't solve the problem of how the server detects that there are clients pending?
This is a complicated question with many different solutions.
You can set up the listening socket to be either blocking or non-blocking. If it is blocking then a call to accept will not return until there is a client ready to be processed. If it is non-blocking then a call to accept returns at once with an error code equal to EAGAIN or EWOULDBLOCK. So you can either use a blocking call or you can poll for clients to be ready. A more complex approach would be to use another thread to call the poll() function which performs wait with no CPU overhead while the file descriptor isn't ready.
More of this later.
We now have enough information to implement our first socket program - a web client. It has to be admitted that a web client isn't as common a requirement as a web server, but it is simpler and illustrates most of the points of using sockets to implement an HTTP transaction.
The first thing we have to do is create a socket and for the TCP needed for an HTTP transaction this is just:
int sockfd = socket(AF_INET, SOCK_STREAM, 0);
To allow this to work you have to add:
#include <sys/socket.h>
Next we need to get the address of the server we want to connect to. For the web this would usually be done using a DNS lookup on a domain name. To make things simple we will skip the lookup and use a known IP address. Example.com is a domain name provided for use by examples and you can find its address by pinging it. At the time of writing it was hosted at:
93.184.216.34
This could change so check before concluding that "nothing works".
The address structure:
struct sockaddr_in addr;
has three fields sin_family is just set to:
addr.sin_family = AF_INET;
to indicate an internet IPv4 address. The next field is the port number of the IP adddress, but you can't simply use:
addr.sin_port = 80;
because the bit order used on the Internet isn't the same as used on most processors. So you have to use a utility function that will ensure the correct bit order:
addr.sin_port = htons(80);
The function name stands for host to network short and there are other similarly named functions. | http://www.i-programmer.info/programming/cc/9993-c-sockets-no-need-for-a-web-server.html | CC-MAIN-2017-51 | refinedweb | 1,560 | 60.75 |
Server as a function with Kotlin – http4k
Server as a function with Kotlin – http4k
Have you ever heard about the concept of “Server as a Function”? The idea is that we write our server application based on just ordinary functions, which is based on a concept outlined in the paper Your Server as a Function written and published by Twitter/Marius Eriksen. In the Kotlin world, the most prominent implementation of this concept is http4k, which the maintainers describe as an “HTTP toolset written in Kotlin with a focus on creating simple, testable APIs”. The best part about it is that http4k applications are just Kotlin functions that we can test straightforwardly. Take a look at this first example:
First http4k server example
val app: HttpHandler = { request: Request -> Response(OK).body(request.body) }
val server = app.asServer(SunHttp(8000)).start()
This code shows a fully functional http4k application consisting of a single Kotlin function
which we embedded into a
server, one example of available server implementations we may choose from. Note the type
here, which represents one of the two essential concepts in the idea of “Server as a Function”:
- HttpHandler ((Request) -> Response
): abstraction to process HTTP requests into responses by mapping the first into the latter
- Filter (HttpHandler -> HttpHandler
): abstraction to add pre and post-processing like caching, debugging, authentication handling, and more to anHttpHandler
. Filters are composable/stackable
Every http4k application can be composed of
s in combination with
s, both of which are simple type aliases for ordinary Kotlin function types. Http4k comes with zero dependencies if we don’t count the Kotlin standard library as one. Since http4k applications, in their pure form, only entail some nested Kotlin functions; there is no reflection or annotation processing involved. As a result, http4k applications can start and stop super quickly which also makes them a reasonable candidate to be deployed on Function-as-a-Service environments (as opposed to e.g., Spring Boot applications).
More advanced http4k application
Let’s take a look at a more advanced example of an http4k server.
val pingPongHandler: HttpHandler = { _ -> Response(OK).body("pong!") }
val greetHandler: HttpHandler = { req: Request ->
val name: String? = req.query("name")
Response(OK).body("hello ${name ?: "unknown!"}")
}
val routing: RoutingHttpHandler = routes(
"/ping" bind GET to pingPongHandler,
"/greet" bind GET to greetHandler
)
val requestTimeLogger: Filter = Filter { next: HttpHandler ->
{ request: Request ->
val start = clock.millis()
val response = next(request)
val latency = clock.millis() - start
logger { "Request to ${request.uri} took ${latency}ms" }
response
}
}
val app: HttpHandler =
ResponseFilters.GZip()
.then(requestTimeLogger)
.then(routing)
In this snippet, we can see a few exciting things http4k applications may entail. The first two expressions are definitions of
s, in which the first one takes any response and maps it to an
response containing “pong” in its body. The second handler takes a request, extracts a name, and greets the caller. In the next step, we apply routing to the handlers by assigning one particular route to each handler. As we can see, the
is used to serve a client who invokes
, while the
is used to cover
.
Routing
Routing in http4k works with arbitrary levels of nesting, which works flawlessly since
itself results in a new
(strictly speaking, a special kind of type
), just like the original ones.
Filters
As mentioned before, the other important concept we want to look at is
s. For starters, we create a
that intercepts each incoming request by measuring its processing time and logging the elapsed time. Filters can be combined using the
method, which allows us to define chains of filters. The corresponding API looks like this:
fun Filter.then(next: Filter): Filter
In the example application above, we add our custom filter to one of the default filters called
. Once we have combined all our filters, we want to add an
to our filter chain. Again, there’s a
function we can use to do so:
fun Filter.then(next: HttpHandler): HttpHandler
As we can see, this again results in an
. You all probably got the idea by now – It only needs two simple types to express how an HTTP server should operate.
The shown
filter is just one of many default filters we may choose from. Others cover concerns like caching, CORS, basic authentication or cookie handling and can be found in the
package.
Calling HttpHandlers
So what did we get out of that nesting which resulted in yet another
called
? Well, this itself does not entail running an actual server yet. However, it describes how requests are handled. We can use this object, as well as other separate
s and
s, and invoke it directly (e.g. in our tests). No HTTP required. Let’s see this in action:
//call handler directly
val handlerResponse: Response = pingPongHandler(Request(GET, "/any"))
//call handler through routing
val routingCallResponse: Response = app(Request(GET, "/ping").header("accept-encoding", "gzip"))
app.asServer(Jetty(9000)).start()
Http4k comes with its own implementations of a
and a
, first of which can be used to invoke an
. Calling the unattractive
yields something similar to
while calling the final
handler gives us a gzipped response due to the applied GZip filter. This call also implies a log informing about the duration of the request:
. Please note that, while it was fine to call
with a random URI (
), we had to use the designated
URI when invoking the routing-backed
.
Last, but not least, we start our very own http4k
as a server on a
with port
. Find a list of available server implementations here.
Lenses
One of the things a sophisticated HTTP app has to deal with is taking stuff out and also putting stuff into HTTP messages. When we take parameters out of requests, we also care about validating these values. Http4k comes with a fascinating concept that helps us deal with the stated concerns:
.
Basic Definition
Lenses, according to multiple resources, were first used in the Haskell world and are a functional concept that may appear slightly hard to understand. Let me try to describe it in a shallow, understandable manner. Let’s say we have a class
which comes with different fields
,
, and so on. A lens basically composes a getter and a setter focusing on precisely one part of
. A
lens getter would take an instance of
to return the part it is focused on, i.e.,
. The lens setter, on the other hand, takes a
along with a value to set the focused part to and then returns a new
with the updated part. Remember that a lens can be used to both get and set a part of a whole object. Now, let’s learn how this concept helps us with handling HTTP messages.
Lenses in http4k
Following the basic idea of a lens, http4k lenses are bi-directional entities which can be used to either get or set a particular value from/onto an HTTP message. The corresponding API to describe lenses comes in the form of a DSL which also lets us define the requirement (optional vs. mandatory) of the HTTP part we are mounting a lens on. Since HTTP messages are a rather complex container, we can have lenses focusing on different areas of the messages: Query, Header, Path, FormField, Body. Let’s see some examples of how lenses can be created:
// lens focusing on the path variable name
val nameLens = Path.string().of("name")
// lens focusing on a required query parameter city
val requiredQuery = Query.required("city")
// lens focusing on a required and non empty string city
val nonEmptyQuery = Query.nonEmptyString().required("city")
// lens focusing on an optional header Content-Length with type int
val optionalHeader = Header.int().optional("Content-Length")
// lens focusing on text body
val responseBody = Body.string(ContentType.TEXT_PLAIN).toLens()
So far, the API for creating lenses looks more or less straightforward but what about using them on a target? Here’s the pseudo code syntax for
a) Retrieving a value:
, or
b) Setting a value:
, or
Use Lens to Retrieve value from HTTP Request
Reusing the
sample from earlier, let’s modify our code to make use of lenses when retrieving a value:
val nameLens: BiDiLens<Request, String> =
Query.nonEmptyString().required("name")
val greetHandler: HttpHandler = { req: Request ->
val name: String = nameLens.extract(req) //or nameLens(req)
Response(OK).body("hello $name")
}
We create a bidirectional lens focusing on the query part of our message to extract a required and non-empty name from it. Now, if a client happens to call the endpoint without providing a
query parameter, the lens automatically returns an error since it was defined as “required” and “nonEmpty”. Please note that, by default, the application exposes much detail to the client announcing the error as
including a detailed stack trace. Rather than that, we want to map all lens errors to HTTP 400 responses which implies that the client provided invalid data. Therefore, http4k offers a
filter which we can easily activate in our filter chain:
// gzip omitted
val app: HttpHandler = ServerFilters.CatchLensFailure
.then(requestTimeLogger)
.then(routing)
Use Lens to Set value in HTTP Request
After looking into extracting values from HTTP messages, how can we use the
to set a value in an HTTP request?
val req = Request(GET, "/greet/{name}")
val reqWithName = nameLens.inject("kotlin", req)
// alternatively, http4k offers a with function that can apply multiple lenses at once
val reqWithName = Request(GET, "/greet/{name}").with(
nameLens of "simon" //, more lenses
)
The example shows how we create an instance of
and inject a value via one or many lenses. We can use the
function to specify the value we want to set into an arbitrary instance of
. Now that we saw a basic example of a string lens, we want to dig into handling some more advanced JSON content.
JSON handling
We can choose from several JSON implementations, including e.g., the common Gson and Jackson library. I personally prefer Jackson as it comes with a great Kotlin module (Kudos to my friend Jayson Minard 😉). After adding a JSON format module to our application, we can start marshaling objects to and from HTTP messages using lenses. Let’s consider a partially complete REST API that manages persons:
[...]
import org.http4k.format.Jackson.auto
class PersonHandlerProvider(private val service: PersonService) {
private val personLens: BiDiBodyLens<Person> = Body.auto<Person>().toLens()
private val personListLens: BiDiBodyLens<List<Person>> = Body.auto<List<Person>>().toLens()
fun getAllHandler(): HttpHandler = {
Response(OK).with(
personListLens of service.findAll()
)
}
fun postHandler(): HttpHandler = { req ->
val personToAdd = personLens.extract(req)
service.add(personToAdd)
Response(OK)
}
//...more
}
In this example, we see a class that provides two handlers representing common actions you would expect from a REST API. The
fetches all currently stored entities and returns them to the client. We make use of a
(BiDirectional) that we created via the
extension for Jackson. As noted in the http4k documentation, “the auto() method needs to be manually imported as IntelliJ won’t pick it up automatically”. We can use the resulting lens like already shown earlier by providing a value of type
and inject it into an HTTP
as shown in the
implementation.
The
, on the other hand, provides an implementation of an
, that extracts a
entity from the request and adds it to the storage. Again, we use a lens to extract that JSON entity from the request easily.
This already concludes our sneak peek on lenses. As we saw, lenses are a fantastic tool that lets us extract and inject parts of an HTTP message and also provides simple means of validating those parts. Now, that we have seen the most fundamental concepts of the http4k toolset, let’s consider how we can test such applications.
Testing
Most of the time, when we consider testing applications that sit on top of a web framework, we have to worry about details of that framework which can make testing harder than it should be. Spoiler Alert: This is not quite the case with http4k 🎉
We have already learned that
, one of the two core concepts in the http4k toolset, are just regular Kotlin functions mapping requests to responses and even a complete http4k application again is just an
and thus a callable function. As a result, entire and partial http4k apps can be tested easily and without additional work. Nevertheless, the makers of http4k thought that it would still be helpful to provide some additional modules which support us with testing our applications. One of these modules is
, which adds a set of Hamkrest matchers, we can use to verify details of message objects more easily.
Http4k Handler Test Example
import com.natpryce.hamkrest.assertion.assertThat
import org.http4k.core.Method
import org.http4k.core.Request
import org.http4k.core.Status
import org.http4k.hamkrest.hasStatus
import org.junit.jupiter.api.Test
class PersonHandlerProviderTest {
val systemUnderTest = PersonHandlerProvider(PersonService())
@Test
fun getAll_handler_can_be_invoked_as_expected(){
val getAll: HttpHandler = systemUnderTest.getAllHandler()
val result: Response = getAll(Request(Method.GET, "/some-uri"))
assertThat(result, hasStatus(Status.OK))
}
}
This snippet demonstrates a test for the
we have worked with earlier already. As shown, it’s pretty straightforward to call an
with a
object and then use Hamkrest or whatever assertion library you prefer to check the resulting
. Testing
s, on the other hand, is “harder”. To be honest though, it’s just one tiny thing we need to do on top of what we did with handlers. Filters map one
into another one by applying some intermediate pre or post-processing. Instead of investigating the mapping between handlers itself, it would be more convenient to again send a
through that filter and look into the resulting
. The good news is: It’s super easy to do just that:
Http4k Filter Test Example
val addExtraHeaderFilter = Filter { next ->
{
next(it).header("x-extra-header", "some value")
}
}
@Test
fun adds_a_special_header() {
val handler: HttpHandler = addExtraHeaderFilter.then { Response(OK) }
val response: Response = handler(Request(GET, "/echo"))
assertThat(response, hasStatus(OK).and(hasHeader("x-extra-header", "some value")))
}
We have a
called
that adds a custom header to a processed request and then forwards it to the next filter. The goal is to send a simple request through that filter in our test. What we can do, is making the filter a simple
by adding a dumb
handler to it via
. As a result, we can invoke the newly created handler, now containing our very own filter, and investigate whether the resulting
object contains the new expected header. There we go – both handlers and filters got tested 🙃
To wrap up, I want to say that this was just a quick look at the happy paths of testing http4k apps with mostly familiar tools. It might become necessary to test against the actual running server and verify Responses on a lower level, e.g., comparing the resulting JSON directly. Doing that is also possible and supported via the Approval Testing module. Later in this article, we want to look at the client module of http4k, which again opens up some new possibilities.
Serverless
One of the hottest topics of our time is Serverless computing. You know, that thing where we can run our code on other people’s… servers. One part of it is known as Function as a Service, or FaaS and the most common ones include AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions. The general idea is that these vendors provide a platform where we can deploy our code, and they take care of managing resources and scaling our application on demand. One of the downsides of Serverless is that our functions may be spun down by the platform if it’s not being used until someone wants to use it again, which would require a fresh startup. What does that mean for us? We need to choose target platforms and tools which allow a fast start-up of our application. Spring on the JVM in its classical form, for instance, would probably not be the best tool for that use case. However, as you can image, http4k with its small footprint and super quick start-up times is a great choice. It even comes with native support for AWS Lambda.
I won’t dive much deeper into this topic as part of this article, but I’m planning to write a much more detailed post on what we can do with http4k on a FaaS platform. Stay tuned.
Client as a Function
By now, we have learned how cool http4k is and why it’s a great tool to develop server applications. HTTP Servers don’t make much sense without clients using them, so we want to conclude this article by looking at the other side – Clients as a Function.
The http4k
library comes with everything we need to get started with clients. Clients in http4k again are just a special form of an
, as we can see in this little snippet:
val request = Request(Method.GET, "")
val client: HttpHandler = JavaHttpClient()
val response = client(request)
The shown
is the default implementation that comes with the
library. If we were to prefer OkHttp, Apache, or Jetty instead, we would find a related module to replace the default. Since we program against interfaces (clients are
s), it’s not a big deal to swap out implementations at any time. The
library obviously comes with several default
s we can apply to our client which can be found in the
file that contains stuff like
,
and more stuff you’d expect for a client. The fact that all concepts of http4k servers, including handlers, filters and also lenses, can be reused in http4k clients opens up quite a few possibilities so that it can make much sense to e.g., use the client module to test your servers an vice versa.
Summary and Lookout
I personally learned to appreciate http4k a lot in the last couple of weeks. Once you’ve made yourself comfortable with the basic concepts, it becomes straightforward to develop (which includes testing) server applications quickly. Http4k comes with an incredible list of supported concepts and technologies including OAuth, Swagger, Websockets, XML and many, many more. Its modular nature allows us to add functionality by applying dependencies as needed, and due to its simple base types, it is highly extensible. Http4k is a toolset that allows us to write applications with quick startup times which also makes it a valid alternative when it comes to FaaS and Serverless computing. As if this wasn’t enough, the toolset also includes sophisticated means for writing HTTP clients, which we learned about in the last section. Overall, http4k is a promising technology that you should definitely consider when choosing your next HTTP toolset.
If you want to learn more about Kotlin and its fantastic language features, please have a look at my talk “Diving into advanced language features”.
The post Server as a function with Kotlin – http4k appeared first on Kotlin Expertise Blog. | https://kotlined.com/blog/category/kotlin-news/lambda/ | CC-MAIN-2020-50 | refinedweb | 3,143 | 62.07 |
I am new to wxPython and am trying to add a DataViewTreeControl. The control shows up just fine. However, when I associate data I get
Segmentation fault (core dumped). I have done enough work to know that it happens every time when it is trying to return from the
GetValue() method.
I have patterned my implementation after the Data View Model Demo and I believe that it is done properly.
I have read that incorrectly trying to update objects can cause this kind of error (here) but I do not believe I am doing this. I have attempted using
wx.CallAfter() and
wx.CallLater() without luck.
The application does have a toolbar and grid currently working so I know the whole thing isn't broken.
If I leave the implementation as in the demo above the segmentation fault occurs when attempting the
return mapper[col] (equivalent to line 180 in the demo) in this case the type of
mapper[col] is
'unicode'.
If I convert the value to string (this is what my column data type is set to be) then I get further before the seg fault occurs on line 16 below (I am not sure if this is important or not, but here it is):
> /usr/lib/python2.7/encodings/utf_8.py(16)decode() 15 def decode(input, errors='strict'): ---> 16 return codecs.utf_8_decode(input, errors, True)
I am looking for some direction regarding whether this is likely a threading issue or if it might be an error with my implementation of the data model.
As I said this works until I try to associate a model with the control. Here is a minimum non-working example (self, in this case is the wx.Frame) :
def get_metadata(self): mdDict = dict() a1 = coremetadata.mdCoreAttribute(0, 'att1', 'cat1', 1, 'core1') a2 = coremetadata.mdCoreAttribute(1, 'att2', 'cat1', 2, 'core1') a3 = coremetadata.mdCoreAttribute(2, 'att3', 'cat3', 3, 'core1') c1 = coremetadata.mdCore('core1') c1.atts.append(a1) c1.atts.append(a2) c1.atts.append(a3) mdDict['core1'] = c1 return coremetadata.CoreMetaData(mdDict.values()) def createDVTC(self): self.dvtc = dv.DataViewTreeCtrl(self.grid, wx.ID_ANY, size=(300,300)) mdata = self.get_metadata() # tell the object to use our data self.dvtc.AssociateModel(mdata) return self.dvtc def create_mdPane(self): self.dvtc = self.createDVTC() self._mgr.AddPane(self.dvtc, aui.AuiPaneInfo(). Name("MDNotebook").Caption("Metadata Display"). Right().Layer(1).Position(1).MinimizeButton(True))
Thanks for any help! | http://www.howtobuildsoftware.com/index.php/how-do/uhB/python-wxpython-wxpython-phoenix-dataviewtreecontrol-has-segmentation-fault-core-dumped | CC-MAIN-2019-22 | refinedweb | 401 | 50.63 |
DEBSOURCES
Skip Quicknav
sources / r-cran-dendextend / 1.9.0+dfs332
dendextend 1.9.0 (2018-10-19)
----------------------------------------
###OTHER NOTES
* Removed some old deprecated code relating to dendextendRcpp.
* Minor edits to the doc.
###UPDATED FUNCTIONS:
* prune_leaf - now works properly with non-binary trees. The modified code in prune_leaf() now trim leaves from splits with more than 2 branches. This simply involves removing the list elements from the dendrogram that match the leaf_name to be removed. Also added tests to verify it works. (props @hypercompetent)
###BUG FIXES
* dend_diff - now works properly (props @jdetribol)
* Fix "S3 method lookup found 'as.phylo.dendrogram' on search path" by using @rawNamespace in roxygen.
dendextend 1.8.0 (2018-04-28)
----------------------------------------
###NEW FUNCTIONS
* branches_attr_by_lists - Change col/lwd/lty of branches from the root down to clusters defined by list of labels of respective members
* colored_dots - Add colored dots beside a dendrogram
dendextend 1.7.0 (2018-02-10)
----------------------------------------
###NEW FUNCTIONS
* get_subdendrograms (and find_dendrogram) - getting subtrees from dendrogram based on cutree labels (fixes #61 and)
###BUG FIXES
* seriate_dendrogram = stop seriate_dendrogram from always using OLO #57
###OTHER NOTES
* Remove NMF from suggested (as it is about to be removed from CRAN). And also labeltodendro.
dendextend 1.6.0 (2017-11-13)
----------------------------------------
###NEW FUNCTIONS
* pvrect2 - Draw Rectangles Around a Dendrogram's Clusters with High/Low P-values
* min_depth, max_depth - measures the min/max depth of a tree (from the root to the closest/furtherest leaf).
###BUG FIXES
* rect.dendrogram - it now deals much better with setting the lower part of the rectangle to be below the labels.
* circlize_dendrogram - can now handle dendrograms with non unique labels. A warning is issued, and a running number is padded to the labels. Problem first reported here:
dendextend 1.5.2 (2017-05-19)
----------------------------------------
###NEW FUNCTIONS
* print.ggdend - a wrapper for ggplot2::ggplot of a ggdend object.
###BUG FIXES
* Adding function reindex_dend to resolve problems with as.hclust (#39)
###OTHER NOTES
* Remove dendextendRcpp :( (from tests and suggests)
dendextend 1.5.0 (2017-03-24)
----------------------------------------
###NEW FUNCTIONS
* dend_expend and find_dend - functions for finding a "good" dendrogram for a dist
###UPDATED FUNCTIONS:
* cor_cophenetic - dend2 can now also be a dist object (allowing to check how close is some clustering to the original distance matrix).
###BUG FIXES
* dend_diff - now restore to the par(mfrow) value before running the function.
###OTHER NOTES
* Simplified the roxygen2 code by using @rdname (and removing many instances of @allias and @usage).
* remove a test in untangle due to different outputs in R 3.3.3 and R 3.4.0. More tests are probably needed for the bytecompiler for extreme cases (i.e.: dendrograms with odd branch heights)
dendextend 1.4.0 (2017-01-21)
----------------------------------------
###NEW FUNCTIONS
* as.dendrogram.varclus (enhances in Hmisc)
* highlight_branches, highlight_branches_col, highlight_branches_lwd - Highlight a dendrogram's branches heights via color and line-width.
* has_edgePar, has_nodePar - Does a dendrogram has an edgePar/nodePar component?
###UPDATED FUNCTIONS:
* find_k - now includes `k` in the output, which is a copy of `nc`. This is to make it easier to extract the value (i.e.: the suggested number of clusters).
* set - added the parameter `order_value` to easily use values which are in the order of the original data.
* tanglegram
* Add possibility to draw several tanglegrams on same page via the `just_one` paramter.
* added highlight_branches_col (FALSE) and highlight_branches_lwd (TRUE) parameters. These will only be turned on if the relevant attribute is not already present in the tree. If the tree already has lty/lwd/col - these will not be updated by this parameter. These parameters can be removed from the tree by using `dend %>% set("clear_branches")`.
dendextend 1.3.0 (2016-07-15)
----------------------------------------
###NEW FUNCTIONS
* Added set_labels and place_labels. These are convenience functions for updating the labels of a dendrogram. They differ in their assumption about the order of the labels. Props to Garrett Grolemund for the idea.
* set_labels assumes the labels are in the same order as that of the labels in the dendrogram.
* place_labels assumes the labels has the same order as that of the items in the original data matrix. This is useful for renaming labels based on some other columns in the data matrix.
###BUG FIXES
* "labels_colors<-" - make it more robust for combinations of using it with assign_values_to_leaves_nodePar (as used in set("labels_colors", ...) for example)
dendextend 1.2.0 (2016-06-21)
----------------------------------------
###NEW FUNCTIONS
* find_k - Find the (estimated) number of clusters for a dendrogram using average silhouette width
* is.dist - checks class to be `dist`
* seriate_dendrogram - rotates a dendrogram to fit the optimal ordering (via OLO or GW) of some distance matrix (very useful for heatmaps)
###BUG FIXES
* ggplot.ggdend - Fix the tiny notch in angle of the branches
###OTHER NOTES
* ggplot2 is now imported (instead of just suggested). This is because the use of dendextend for transforming dendrograms into ggplot2 has become more important (thanks to the new plotly package).
* Fix various small issues (importing functions from other packages, the documents, etc.)
* Improve vignette size for CRAN (moving to html_vignette)
dendextend 1.1.9 (2016-03-17)
----------------------------------------
###OTHER NOTES
* Move is.X style functions to a new file: is.functions.R
dendextend 1.1.8 (2016-02-10)
----------------------------------------
###BUG FIXES
* added tryCatch to tests so it would pass github.
dendextend 1.1.7 (2016-02-10)
----------------------------------------
###BUG FIXES
* Added rmarkdown in Suggests of DESCRIPTION. This fixes the travis-ci error: " The vignette engine knitr::rmarkdown is not available, because the rmarkdown package is not installed. Please install it." as was suggested in:
* Another minor error in tests due to cutree in R-devel.
dendextend 1.1.6 (2016-02-10)
----------------------------------------
###BUG FIXES
* Moved `inst/tests/` to `tests/testthat/`
* Tests fixed to deal with the new R's ability to run as.hclust on dendrogram with ties.
dendextend 1.1.5 (2016-01-06)
----------------------------------------
###BUG FIXES
* cutree - Fix #18 by updating the sort_cluster_numbers parameter.
###OTHER NOTES
* Adding Gregory Jefferis as an author (thanks to his work on dendroextras, which was later reused in the dendextend package)
dendextend 1.1.4 (2016-01-01)
----------------------------------------
###BUG FIXES
* colored_bars
* Fix the default position of the bars (i.e.: y_shift) when dend is provided. (this is a major corrections to the way the bars are located automatically via y_shift)
* Fix the example in the documentation of `colored_bars`
* NEW parameter horiz - now allows for adding bars to horizontal dendrograms
* sort_by_labels_order is now set to TRUE by default
* Made more checks on the object type of dend
* Added many description elements in .Rd files.
dendextend 1.1.3 (2015-11-07)
----------------------------------------
###NEW FUNCTIONS
* color_unique_labels
dendextend 1.1.2 (2015-10-30)
----------------------------------------
###UPDATED FUNCTIONS:
* intersect_trees - return dendlist() and a warning if there are no shared labels between the two trees.
* Removed functions labels.matrix and labels<-.matrix, in order to avoid conflicts with arules (and since these functions are not really used in other parts of the package) - reported by Maja Alicja.
* rect.dendrogram and identify.dendrogram - gain the stop_if_out parameter. The defaults for this parameter makes the error of "Error in rect.dendrogram(x, k = k, x = X$x, cluster = cluster[, k - 1], :
k must be between 2 and 10" - to become less common (since it replaces it with a warning). Feature request by jedgroev from
###BUG FIXES
* as.ggdend.dendrogram - fix "Error in FUN(X[[i]], ...) : subscript out of bounds" problem for some (more) trees. Fixes bug #12
* as.ggdend.dendrogram - can now handle different label heights. (for example, the ones produces when using hang.dendrogram)
###OTHER NOTES
* Updated CITATION file.
dendextend 1.1.0 (2015-07-30)
----------------------------------------
###NEW FUNCTIONS
* circlize_dendrogram - a function for creating radial dendrogram plots
* labels_col - Added as an alias of labels_colors
* labels_cex
###UPDATED FUNCTIONS:
* set - added "branches_k_lty" parameter
###BUG FIXES
* as.ggdend.dendrogram - fix "Error in FUN(X[[i]], ...) : subscript out of bounds" problem for some trees.
* Various typo fixes to the vignettes.
###OTHER NOTES
* Added a CITATION file!
dendextend 1.0.3 (2015-07-05)
----------------------------------------
###UPDATED FUNCTIONS:
* assign_values_to_nodes_nodePar - can now handle NA with a character string vector in value. Fixes #10
dendextend 1.0.2 (2015-06-28)
----------------------------------------
###UPDATED FUNCTIONS:
* ggplot.ggdend - added offset_labels (TODO: still needs to fix the figure margins)
###OTHER NOTES
* minor typo fixes
dendextend 1.0.1 (2015-06-28)
----------------------------------------
###OTHER NOTES
* Added import to NAMESPACE: graphics, grDevices
* dendextend 1.0.1 is intended to be shipped to CRAN.
dendextend 1.0.0 (2015-06-27)
----------------------------------------
###NEW FUNCTIONS
* common_subtrees_clusters - a (currently hidden) function to get clusters for labels of common subtrees between two trees.
* labels.dendrogram is now working through dendextend_options("labels.dendrogram"). The defualt is the new dendextend_labels.dendrogram (which is just stats:::labels.dendrogram), but this allows the package dendextendRcpp to change the function used by the package (without masking the function in base R) - thus making both me, and CRAN, happy :)
* rank_values_with_clusters - Rank a vector based on clusters (important for various functions.) Added tests.
* cor_common_nodes - a new correlation measure between dendrograms.
* cor_FM_index - Correlation of FM_index for some k (similar to Bk actually)
* prune_common_subtrees.dendlist - Prune trees to their common subtrees
* nleaves.dendlist
###UPDATED FUNCTIONS:
* color_branches - gained a new "cluster" parameter to allow for easy uneven coloring of branches (using branches_attr_by_clusters). This is not a new feature, as it is an attempt to have the user access more options from one place (i.e.: color_branches).
* tanglegram - turned "highlight_distinct_edges = FALSE". This is due to a bug in R that causes plot.dendrogram to crash when trying to plot a tree with both lty and some character color for the branch.
* tanglegram -
* Added new parameters: common_subtrees_color_lines and common_subtrees_color_branches to color connecting-lines and/or the dendrograms themsleves, to help detect commonly shared subtrees.
* new parameter: faster.
* plot.ggdend - added support to plot nodes
* branches_attr_by_clusters -
* can now accept a vector of values with the length of the labels.
* new parameter "branches_changed_have_which_labels" - allows the user the decide if the change in parameters will be for branches with all/any of the labels (useful for tanglegram - since we would like to color common subtrees with all of the labels, not just any.)
* dendlist - removed warning when creating an empty dendlist (since we also don't get a warning when creating a list() )
* ladderize - added the `which` parameter (for indicating which elements in the dendlist to ladderize)
* untangle.dendlist -
* can now return a dendlist with more than two elements.
* now preserve the names in the dendlist
* now has a new "labels" method, and it is now the default.
* dendlist - added the `which` parameter (for indicating which elements in the dendlist to pick out)
* set
* added "rank_branches"
* untangle_random_search/untangle_step_rotate_2side/untangle_best_k_to_rotate_by_2side_backNforth - no longer returns a dendlist with names (it shouldn't, they didn't have a name to begin with.)
* cor.dendlist - added methods "common_nodes", "FM_index".
* cor_cophenetic - changed method to method_coef (so it could be updated when using cor.dendlist)
* use "dend" as a standard name for all the parameters that accept dendrograms (whenever possible, in some cases I preferred to keep another name due to other conventions.)
* object -> dend for:
set, set.dendrogram, set.dendlist,
get_leaves_attr, get_leaves_nodePar, get_leaves_edgePar,
get_leaves_branches_attr, get_leaves_branches_col, get_nodes_attr,
assign_values_to_leaves_nodePar, assign_values_to_leaves_edgePar,
assign_values_to_nodes_nodePar, assign_values_to_branches_edgePar,
remove_branches_edgePar, remove_nodes_nodePar, remove_leaves_nodePar,
labels_colors, shuffle
* tree -> dend for:
color_branches, color_labels_by_labels, color_labels, get_branches_heights, dendextend_get_branches_heights, dendextend_cut_lower_fun, cutree_1h.dendrogram, cutree_1k.dendrogram, heights_per_k.dendrogram
* x -> dend for: cor.dendlist, dend_diff, dist.dendlist, distinct_edges, highlight_distinct_edges, partition_leaves, get_nodes_xy, prune, unbranch
* tree1/tree2 or x1/x2 -> dend1/dend2 for: cor_FM_index, cor_common_nodes, cor_bakers_gamma, cor_cophenetic, entanglement, untangle_random_search, intersect_trees, tanglegram
###BUG FIXES
* Fix "'::' or ':::' import not declared from: ‘rpart’" by adding rpart to DESCRIPTION.
* as.hclust.pvclust - use "x" instead of "object", to avoid "checking S3 generic/method consistency ... WARNING"
* Fix "Objects in \usage without \alias in documentation object". For ‘prune.rpart’ and ‘sort.dendlist’.
* rect.dendrogram - deals with the case we want to use k for which k+1 is not defined.
* Parameters are now stored as a list for (allows for branches to have colors/linetype/linewidth, some are numeric and some character): assign_values_to_branches_edgePar / assign_values_to_leaves_nodePar / assign_values_to_leaves_edgePar / assign_values_to_nodes_nodePar - the parameters are stored as a list in edgePar/nodePar (instead of a vector). Discovered thanks to Martin Maechler. Fixes: "Error in segments(x0, y0, x1, y1, col = col, lty = lty, lwd = lwd) : invalid line type: must be length 2, 4, 6 or 8"
* made sure that the end of the vector in "dendextend_heights_per_k.dendrogram" will give the tree size (and not 0) as its name.
* as.ggdend - now supports edgePar and nodePar which are lists.
###OTHER NOTES
* Added Code of conduct
* Added CRAN status
* Added codecov
* Moved FAQ from introduction.Rmd to FAQ.Rmd
* New vignette "Cluster_Analysis.Rmd" - demonstrating the use of the package on three famous datasets (Iris, khan, votes.repub, animals)
* Added the khan dataset to the package.
* moved assign_dendextend_options to ".onLoad" - this now allows the use of dendextend::some_func
dendextend 0.18.8 (2015-05-17)
----------------------------------------
###UPDATED FUNCTIONS:
* all.equal.dendlist - can now compare two dendlist objects to one another, and not only the dendrograms inside one dendlist. (also added new tests that all.equal.dendlist works)
* all.equal.dendrogram - suppress warning for a dend with only 1 item, when using dendextendRcpp.
* rect.dendrogram - make the heights of the rects be in the middle of the two clusters so that they would look nicer. Added prop_k_height to state which proportion of the height the rect should be (between k and k+1 heights). Also added upper_rect to allow the user to customize the exact height.
###BUG FIXES
* identify.dendrogram - parameter horiz now works (fixes #4)
dendextend 0.18.7 (2015-05-05)
----------------------------------------
###VIGNETTE
* remove dependancy on DendSer (using require instead of library)
###UPDATED FUNCTIONS:
* untangle - added method="ladderize"
* tanglegram.dendlist - fix it when names is null to have the default be ""
dendextend 0.18.6 (2015-04-25)
----------------------------------------
###NEW FUNCTIONS
* get_leaves_edgePar - Get edgePar of dendrogram's leaves
* get_leaves_branches_attr - Get an attribute of the branches of a dendrogram's leaves
* get_leaves_branches_col - Get the colors of the branches of a dendrogram's leaves. This function is actually the point of the two other functions. It is meant to help match the color of the labels with that of the branches - after running color_branches.
###UPDATED FUNCTIONS:
* tanglegram.dendlist - now gets main_left and main_right (if they are null) from the names in dendlist (if they exist)
###VIGNETTE
* added example for using "get_leaves_branches_col" in "How to color the branches in heatmap.2?"
###OTHER NOTES
* library corrplot added to plot cor.dendlist results. Examples added to vignette and function's example.
dendextend 0.18.5 (2015-04-22)
----------------------------------------
###VIGNETTE
* added: "How to color the branches in heatmap.2?"
dendextend 0.18.4 (2015-02-07)
----------------------------------------
###NEW FUNCTIONS
* prune.rpart added
* sort.dendlist
* as.hclust.pvclust
dendextend 0.18.3 (2015-01-31)
----------------------------------------
###OTHER NOTES (Thanks to Prof Brian Ripley for help.)
* Fix "title case" for the package's Title in the DESCRIPTION
* Fix "No package encoding and non-ASCII characters in the following R files"
* Fix "Please use :: or requireNamespace() instead." by commenting out all "library", since using "::" is enough!
* dendextend 0.18.3 is intended to be shipped to CRAN.
dendextend 0.18.2 (2015-01-31)
----------------------------------------
###OTHER NOTES
* Minor doc fix for `collapse_branch`
* dendextend 0.18.2 is intended to be shipped to CRAN.
dendextend 0.18.1 (2015-01-31)
----------------------------------------
###VIGNETTE - new sections!
* Quick functions for FAQ
* How to colour the labels of a dendrogram by an additional factor variable
* How to color a dendrogram's branches/labels based on cluster (i.e.: cutree result)
* Change dendrogram's labels
* Larger font for leaves in a dendrogram
* How to view attributes of a dendrogram
* ggplot2 integration (!)
* Comparing trees:
* dend_diff
* all.equal
* dist.dendlist
* cor.dendlist
* Others
* rotate - explain about sort
* collapse_branch
###UPDATED FUNCTIONS:
* `sort.dendrogram` - added a new parameter: type = c("labels", "nodes"), to use `ladderize` for sorting
* `ggplot.ggdend` - support theme = NULL
###OTHER NOTES
* dendextend 0.18.1 is intended to be shipped to CRAN.
dendextend 0.18.0 (2015-01-29)
----------------------------------------
###NEW FILES:
* ape.R - moved `as.dendrogram.phylo` and `as.phylo.dendrogram` functions to it.
* cor.dendlist.R - For the cor.dendlist function
* renamed imports_stats.R to stats_imports.R
* ggdendro.R
* ggdend.R
* dist_long.R
###NEW FUNCTIONS
* More connections:
* A new phylo method for `labels` and `labels<-`
* More ways to compare trees:
* cor.dendlist - Correlation matrix between a list of trees.
* partition_leaves - A list with labels for each subtree (edge)
* distinct_edges - Finds the edges present in the first tree but not in the second
* highlight_distinct_edges - Highlight distint edges in a tree (compared to another one). Works for both dendrogram and dendlist.
* dend_diff - Plots two trees side by side, highlighting edges unique to each tree in red. Works for both dendrogram and dendlist.
* dist.dendlist - Topological Distances Between Two dendrograms (currently only the Robinson-Foulds distance)
* all.equal.dendrogram/all.equal.dendlist - Global Comparison of two (or more) dendrograms
* `which_node` - finds Which node is common to a group of labels
* Dendrograms in ggplot2! (enhancing the ggdendro package)
* `dendrogram_data` (internal) function - a copy of the function from the ggdendro package (the basis for the new ggdend class).
* `get_leaves_nodePar` - Get nodePar of dendrogram's leaves (designed to help with as.ggdend)
* `as.ggdend.dendrogram` - turns a dendrogram to the ggdend class, ready to be plotted with ggplot2.
* `prepare.ggdend` - fills a ggdend object with various default values (to be later used when plotted)
* `ggplot.ggdend` - plots a ggdend with the ggplot2 engine (also the function `theme_dendro` was imported from the ggdendro package).
* Others:
* `remove_nodes_nodePar` - as the name implies...
* `collapse_branch` - simplifies a tree with branches lower than some tollerance level
* `ladderize` - Ladderize a Tree (reorganizes the internal structure of the tree to get the ladderized effect when plotted)
###NEW TESTS:
* partition_leaves
* distinct_edges
* dend_diff
* dist.dendlist
###UPDATED FUNCTIONS:
* `nleaves.phylo` - no longer require conversion to a dendrogram in order to compute.
* `labels<-.dendrogram` - no longer forces as.character conversion
* `tanglegram` - a new highlight_distinct_edges parameter (default is TRUE)
* `cutree` - now produces warnings if it returns 0's. i.e.: when it can't cut the tree based on the required parameter. (following isue #5 reported by grafab)
* `cutree_1h.dendrogram` and `cutree_1k.dendrogram` - will now create clusters as the number of items if k==nleaves(tree) or if h<0. This is both consistent with stats::hclust, but it also "makes sense" (since this is well defined for ANY tree). Also updated the tests.
* Rename `get_branches_attr` to be `get_root_branches_attr`
* `get_nodes_attr` - added the "id" parameter (to get attributes of only a subset of id's)
* `as.dendrogram.phylo` is properly exported now.
dendextend 0.17.6 (2014-12-08)
----------------------------------------
###NEW FUNCTIONS
* dist_long - Turns a dist object to a "long" table.
dendextend 0.17.5 (2014-09-22)
----------------------------------------
###UPDATED FUNCTIONS:
* `order.dendrogram<-` - commenting off an examples and tests which (as of R 3.1.1-patched) produces an error (as it should). Thanks to Prof Brian Ripley for the e-mail about it.
###BUG FIXES
* checking S3 generic/method consistency ... WARNING
cor_bakers_gamma:
function(tree1, tree2, use_labels_not_values, to_plot, warn, ...)
cor_bakers_gamma.dendlist:
function(tree1, which, ...)
* Undocumented code objects:
'plot.dendlist'
* `cor_bakers_gamma.Rd`:
\usage lines wider than 90 characters
###VIGNETTE:
* Fixed several typos and grammatical mistakes.
###OTHER NOTES
* dendextend 0.17.5 is intended to be shipped to CRAN (to stay compatible with R 3.1.1-patched).
dendextend 0.17.4 (2014-09-20)
----------------------------------------
###NEW FUNCTIONS
* set.data.table - informs the user of the conflict in "set" between dendextend and data.table.
###VIGNETTE:
* added sessionInfo
dendextend 0.17.3 (2014-08-26)
----------------------------------------
###UPDATED FUNCTIONS:
* color_labels - now can handle the coloring of labels when the function is without k and h, but that it is not possible to cut the tree to nleaves items (due to several leaves with 0 height). This is done by not doing any cutting in such cases, and just directly using labels_colors. Tests are added. Bug report by Marina Varfolomeeva, a.k.a varmara - thanks! ( )
dendextend 0.17.2 (2014-08-25)
----------------------------------------
###NEW FUNCTIONS
* cor_bakers_gamma.dendlist
* assign_values_to_leaves_edgePar (noticed the need from this question:)
###UPDATED FUNCTIONS:
* assign_values_to_leaves_nodePar - added if(warn), if value is missing.
###VIGNETTE:
* Fix some typos and mistakes.
* Add to introduction.Rmd how to install the package from github.
dendextend 0.17.1 (2014-08-19)
----------------------------------------
###OTHER NOTES
* compacted 'dendextend-tutorial.pdf' from 725Kb to 551Kb (doc fixes to pass CRAN checks)
(Thanks to using the following:
tools::compactPDF("inst\\doc\\dendextend-tutorial.pdf",
qpdf = "C:\\Program Files (x86)\\qpdf-5.1.2\\bin\\qpdf.exe",
gs_cmd = "C:\\Program Files\\gs\\gs9.14\\bin\\gswin64c.exe",
gs_quality="ebook")
And to the help of Prof Brian Ripley and Kurt Hornik
)
dendextend 0.17.0 (2014-08-19)
----------------------------------------
###VIGNETTE:
* Wrote a new vignette "introduction.Rmd", to showcase the new functions since the last vignette, and give a quick-as-possible introduction to the package functions.
###NEW FUNCTIONS
* `get_nodes_xy` - Get the x-y coordiantes of a dendrogram's nodes
* `all_unique` - check if all elements in a vector are unique
* `head.dendlist`
* `rainbow_fun` - uses rainbow_hcl, or rainbow (if colorspace is not available)
###UPDATED FUNCTIONS:
* ALL `warn` paramteres are now set to dendextend_options("warn") (which is FALSE)!
* `get_branches_attr` - change "warning" to "warn", and it now works with is.dendrogram, and no longer changes the class of something which is not a dendrogram.
* `untangle_step_rotate_2side` - print_times is now dendextend_options("warn"),
* `color_branches` - now handles flat trees more gracefully. (returns them as they are)
* `cutree.dendrogram` - now replaces NA values with 0L (fix tests for it), added a parameter (NA_to_0L) to control it.
* `Bk` - Have it work with cutree(NA_to_0L = FALSE)
* `set.dendrogram` - added explenation in the .Rd docs of the different possible options for "what"
* `set.dendrogram` - added nodes_pch, nodes_cex and nodes_col - using `assign_values_to_nodes_nodePar`
* `set.dendrogram` - changed from using `labels_colors<-` to `color_labels` for "labels_colors" (this will now work with using k...)
* `set.dendrogram` - if "what" is missing, return the object as is.
* `set.dendrogram` - added a "labels_to_char" option.
* `labels_colors<-` - added if(dendextend_options("warn"))
* `labels<-.dendrogram` - if value is missing, returning the dendrogram as is (this also affects `set`)
* `get_nodes_attr` - can now return an array or a list for attributes which include a more complex structure (such as nodePar), by working with lists and adding a "simplify" parameter.
* `rect.dendrogram` - a new xpd and lower_rect parameters - to control how low the rect will be (for example, below or above the labels). The default is below the labels.
* `colored_bars` - added defaults to make the bars be plotted bellow the labels. +allow the order of the bars to be based on the labels' order, made that to be the default +have scale default be better for multiple bars.
* `branches_attr_by_labels` now uses `dendextend_options("warn")` to decide if to print that labels were coerced into character.
* `intersect_trees` - now returns a dendlist.
* `untangle` - has a default to method (DendSet)
* `untangle_step_rotate_1side` - added "leaves_matching_method" parameter.
* `entanglement.dendrogram` - changed the default of "leaves_matching_method" to be "labels" (slower, but safer for the user...)
###BUG FIXES
* `branches_attr_by_clusters` and `branches_attr_by_labels` - moved from using NA to Inf.
* `color_branches` - can now work when the labels of the tree are not unique ("feature"" request by Heather Turner - thanks Heather :) )
* `rect.dendrogram` - fix a bug with the location of the rect's (using "tree" and not "dend")
* `rect.dendrogram` - Made sure the heights are working properly!
* `colored_bars` - fix for multiple bars to work.
* `assign_values_to_branches_edgePar`, `assign_values_to_nodes_nodePar`, `assign_values_to_leaves_nodePar` - now ignores "Inf" also when it is a character by adding as.numeric (and not only if it is numeric!) (this might be a problem if someone would try to update a label with the name "Inf").
###NEW FILES:
* dendextend_options.R - moved `dendextend_options` functions to it.
* get_nodes_xy.R
* Rename files: trim.R -> prune.R
* DendSer.R
* Move the function `branches_attr_by_labels` between two files.
###NEW TESTS:
* `assign_values_to_branches_edgePar` - make sure it deals with Inf and "Inf".
###OTHER NOTES
* Moved ggdendro,labeltodendro,dendroextras,ape to "Enhances:" in DESCRIPTION.
* Moved dendextend-tutorial.rnw to vignettes\disabled - so it is still there, but not compiled.
* Moved dendextend-tutorial.pdf to inst\doc - so there is a copy of this older vignette, but without needed to run it with all the benchmarks... (it is also compressed)
* Created a copy of "introduction.html" in inst/ignored (so people could see it on github)
* Have the package build the vignette.
dendextend 0.16.4 (2014-08-06)
----------------------------------------
###NEW FUNCTIONS
* `as.dendrogram.pvclust` - extract the hclust from a pvclust object, and turns it into a dendrogram.
* `hc2axes` - imported from pvclust, needed for text.pvclust
* `text.pvclust` - imported from pvclust, adds text to a dend plot of a pvclust result
* `pvclust_show_signif` - Shows the significant branches in a dendrogram, based on a pvclust object
* `pvclust_show_signif_gradient` - Shows the gradient of significance of branches in a dendrogram, based on a pvclust object
###UPDATED FUNCTIONS:
* `assign_values_to_leaves_nodePar`, `assign_values_to_nodes_nodePar`, `assign_values_to_branches_edgePar` - If the value has `Inf` (instead of NA!) then the value will not be changed.
dendextend 0.16.3 (2014-08-06)
----------------------------------------
###NEW FUNCTIONS
* `assign_values_to_nodes_nodePar` - Assign values to nodePar of dendrogram's nodes
###UPDATED FUNCTIONS:
* `assign_values_to_leaves_nodePar` - If the value has NA then the value in edgePar will not be changed.
###OTHER NOTES
* NEWS - updated to use header 2 and 3 instead of 1 and 2 for the markdown version.
dendextend 0.16.2 (2014-07-29)
----------------------------------------
###OTHER NOTES
* require -> library (Thanks Yihui:)
dendextend 0.16.1 (2014-07-26)
----------------------------------------
###OTHER NOTES
* Minor doc fixes to pass CRAN checks.
dendextend 0.16.0 (2014-07-26)
----------------------------------------
###NEW FUNCTIONS
* `branches_attr_by_clusters` - This function was designed to enable the manipulation (mainly coloring) of branches, based on the results from the cutreeDynamic function (from the {dynamicTreeCut} package).
* `which_leaf` - Which node is a leaf?
* `na_locf` - Fill Last Observation Carried Forward
###UPDATED FUNCTIONS:
* `assign_values_to_branches_edgePar - now can keep existing value, if gets NA.
* `colored_bars` - change the order of colors and dend, and allowing for dend to be missing. (also some other doc modifications)
* `branches_attr_by_labels` - change the order of some parameters (based on how much I expect users to use each of them.)
* `assign_values_to_branches_edgePar` - allow the option to skip leaves
###NEW FILES:
* branches_attr_by.R - for branches_attr_by_clusters
###OTHER NOTES
* added a pvclust example (using a condition on p-value, and heighlighting branches based on that with lwd/col.)
dendextend 0.15.2 (2014-07-24)
----------------------------------------
###NEW FUNCTIONS
* `noded_with_condition` - Find which nodes satisfies a condition
* `branches_attr_by_labels` - Change col/lwd/lty of branches matching labels condition
###UPDATED FUNCTIONS:
* `rect.dendrogram` - adding paramters for creating text under the clusters,
as well as make it easier to plot lines on the rect (density = 7). props to skullkey for his help.
* `set.dendrogram` - added new options: by_labels_branches_col, by_labels_branches_lwd, by_labels_branches_lty
###NEW FILES:
* noded_with_condition.R
dendextend 0.15.1 (2014-07-16)
----------------------------------------
###NEW FUNCTIONS
* `order.hclust` - Ordering of the Leaves in a hclust Dendrogram
* `rect.dendrogram` - just like `rect.hclust`, plus: works for dendrograms, passes `...` to rect for lwd lty etc, now has an horiz parameter!
* `identify.dendrogram` - like `identify.hclust`: reads the position of the graphics pointer when the (first) mouse button is pressed. It then cuts the tree at the vertical position of the pointer and highlights the cluster containing the horizontal position of the pointer. Optionally a function is applied to the index of data points contained in the cluster.
###NEW FILES:
rect.dendrogram.R
###OTHER NOTES
* Rename the `add` functions to be called `set`. Reason: both are short names (important for chaining), both are not used in base R. "add" is used in magrittr (not good long term), and "set" sounds better English wise (we are setting labels color, more than adding it...).
* Rename 2 file names from add->set (set.dendrogram.R and tests-set.dendrogram.R)
dendextend 0.15.0 (2014-07-14)
----------------------------------------
###NEW FUNCTIONS
* `dendlist` - a function which creates a list of dendrogram of the new "dendlist" class.
* `tanglegram.dendlist`
* `entanglement.dendlist`
* `is.dendlist` - to check that an object is a dendlist
* `as.dendlist` - to turn a list to a dendlist
* `plot.dendlist` - it is basically a wrapper to tanglegram.
* `click_rotate` - interactively rotate a tree (thanks to Andrej-Nikolai Spiess)
* `untangle` - a master function to control all untangle functions (making it much easier to navigate this feature, as well as use it through %>% piping)
* `untangle_DendSer` - a new untangle function (this time, only for dendlist), for leverging the serialization package for some more heuristics (based on the functions rotate_DendSer and DendSer.dendrogram).
* `add.dendrogram` - a new master function to allow various updating of dendrogram objects. It includes options for: labels, labels_colors, labels_cex, branches_color, hang, leaves_pch, leaves_cex, leaves_col, branches_k_color, branches_col, branches_lwd, branches_lty, clear_branches, clear_leaves
* `add.dendlist` - a wrapper to add.dendrogram.
* `colored_bars` - adding colored bars underneath a
dendrogram plot.
###UPDATED FUNCTIONS:
* Made sure that the main `untangle` functions will return a `dendlist` (and also that untangle_step_rotate_2side will be able to work with the new untangle_step_rotate_1side output)
* switched to using match.arg wherever possible (Bk_plot, cor_cophenetic, entanglement, untangle_random_search, untangle_step_rotate_1side, and untangle_step_rotate_2side).
* `labels_colors<-` - now has a default behavior if value is missing. Also made sure it is more robust (for cases with partiel attr in nodePar)
* `color_branches` - now has a default behavior if k is missing.
* `assign_values_to_branches_edgePar` - value can now be different than 1 (it now also has a recycle option for the value)
* Generally - moved to using `is.dendrogram` more.
* `tanglegram` - now preserve and restore previous par options (will no longer have a tiny plot in the left corner, when using a simple plot after tanglegram)
###NEW S3 METHODS:
* `tanglegram.dendlist`
###NEW FILES:
* dendlist.R
* test-dendlist.R
* test-add.dendrogram.R
* add.dendrogram.R
* colored_bars.R
* magrittr.R
###UPDATED TESTS:
* Check dendlist works
###OTHER NOTES
* DESCRIPTION -
* Added the magrittr package as a Depends.
* changed stats from depends to imports. Here is a good reference for why to choose the one over the other -
And:
* Fix errors and typos in vignettes - thank you Bob Muenchen!
* Fix the docs of the functions in dendextend which relates to the newer dendextendRcpp (version 0.5.1): cut_lower_fun, get_branches_heights, heights_per_k.dendrogram
* tests - Moved from using test_that with equal() to test_equal (due to some conflict with, possibly, devtools)
* roxygen2 - Moved from using @S3method to @export (removed 45 warnings from check() )
* Moved all "@import" to the dendextend-package.R file (just to make it easier to follow up on them). This code makes sure that thees packages will be mentioned in the NAMESPACE file.
* Imported the %>% function from magrittr (using a trick from the dplyr package)
dendextend 0.14.4 (2014-07-04)
----------------------------------------
###OTHER NOTES
* Changed all R script files from .r to .R!
dendextend 0.14.3 (2014-04-26)
----------------------------------------
###UPDATED DESCRIPTION:
* Fix an author name.
* Added dendextendRcpp to suggest
###OTHER NOTES
* Minor changes to docs.
dendextend 0.14.2 (2014-03-15)
----------------------------------------
###UPDATED DESCRIPTION:
* Added dependency for R (>= 3.0.0)
###OTHER NOTES
* dendextend 0.14.2 is intended to be shipped to CRAN.
dendextend 0.14.1 (2014-03-15)
----------------------------------------
###UPDATED DESCRIPTION:
* Added Uwe and Kurt as contributors.
* Removed Suggests: dendextendRcpp, (until it would be on CRAN)
* Removed link to google group
###NEW FUNCTIONS
* dendextend_options (actually an enviornment + a function). Here I've moved the dendextend_options from the global enviornment to the dendextend namespace.
###UPDATED TESTS:
* update test_rotate.r so it would make sure ape is loaded BEFORE dendextend.
###OTHER NOTES
* dendextend 0.14.1 goes with Version 0.5.0 of dendextendRcpp. Previous versions of dendextendRcpp will not be effective for versions of dendextend which are before 0.14.0.
* dendextend 0.14.1 is intended to be shipped to CRAN.
dendextend 0.14.0 (2014-03-15)
----------------------------------------
###UPDATED FUNCTIONS:
* assign_dendextend_options - Moved to passing the functions through "dendextend_options" instead of through "options" (Thanks to suggestions by Kurt Hornik and Uwe Ligges).
* assign_dendextend_options - is now exported.
* remove_dendextend_options - now removes the object dendextend_options
* get_branches_heights, heights_per_k.dendrogram, cut_lower_fun - now all rely on dendextend_options.
###UPDATED TESTS:
* update tests to the new names in dendextendRcpp (dendextendRcpp_cut_lower_fun, dendextend_options)
dendextend 0.13.0 (2014-03-01)
----------------------------------------
###UPDATED FUNCTIONS:
* assign_dendextend_options - Moved to passing the functions through "options" instead of through assignInNamespace (which was not intended for production use).
* get_branches_heights, heights_per_k.dendrogram, cut_lower_fun - now all rely on the function located in the global options. This way, they can be replaced by the dendextebdRcpp version, if available.
###UPDATED TESTS:
* When comparing to dendextendRcpp - added condition to not make the check if the package is not loaded and in the search path (this way I could compare the tests with and without the dendextendRcpp package).
* added a minor test for dendextend_get_branches_heights - checking the function directly through the options.
###UPDATED DOCS:
* dendextend_get_branches_heights, dendextend_heights_per_k.dendrogram, dendextend_cut_lower_fun - gave speed tests
###NEW FUNCTIONS
* assign_dendextend_options - we now pass all functions that have a Rcpp equivalent through "options". While this adds a bit of an overhead (sadly), it still gets a much faster speed gain, and without verious warnings that CRAN checks would not like...
* dendextend_get_branches_heights, dendextend_heights_per_k.dendrogram, dendextend_cut_lower_fun
###OTHER NOTES
* dendextend 0.13.0 goes with Version 0.4.0 of dendextendRcpp. Previous versions of dendextendRcpp will not be effective for versions of dendextend which are before 0.13.0 (however, it would also not conflict with them...)
* dendextend 0.13.0 is intended to be shipped to CRAN.
dendextend 0.12.2 (2014-02-03)
----------------------------------------
###UPDATED DESCRIPTION:
* Removed VignetteBuilder: knitr (until later)
* Removed Suggests: dendextendRcpp, (until later)
* fixed mis-spelled words: extanding (14:40)
###NEW FUNCTIONS
* Hidden "stats" functions have been added to a new file "imports_stats.r"
with a new local copy for
'stats:::.memberDend' 'stats:::.midDend'
'stats:::midcache.dendrogram' 'stats:::plotNode'
'stats:::plotNodeLimit'
with stats::: -> stats_
###UPDATED FUNCTIONS:
* stats:::cutree -> stats::cutree
* dendextend:::cutree -> dendextend::cutree
###OTHER NOTES
* compacted ‘dendextend-tutorial.pdf’ from 961Kb to 737Kb (thanks to tools::compactPDF)
* dendextend 0.12.2 is intended to be shipped to CRAN.
dendextend 0.12.1 (2014-02-02)
----------------------------------------
###UPDATED TESTS:
* Made sure to check dendextendRcpp is available before calling it.
###UPDATED DOCS:
* data(iris) -> data(iris, envir = environment())
* Fix "\examples lines wider than 100 characters:" in several places.
###OTHER NOTES
* Commented out manipulations on the search path and of assignInNamespace (to avoid NOTES/warnings). This was done after moving all of these operations into Rcpp.
* dendextend 0.12.1 is intended to be shipped to CRAN. (but failed)
dendextend 0.12.0 (2014-02-01)
----------------------------------------
###UPDATED FUNCTIONS:
* exported prune_leaf
* as.dendrogram.phylo as.phylo.dendrogram - turned into S3 (no longer exported)
* changed functions names:
* trim -> prune
* unroot -> unbranch
* Moved from ::: to :: (where possible).
* tanglegram.dendrogram - fix warning in layout(matrix(1:3, nrow = 1), width =
columns_width): partial argument match of 'width' to 'widths'
* Return "as.phylo.dendrogram" by adding "ape" to "Imports" in DESCRIPTION and "import" to NAMESPACE. Also fixing consistancy (using x instead of object).
* unbranch.phylo - fix extra parameters.
###UPDATED DOCS:
* leaf_Colors - fix example (added "dend").
* Fix various "Missing link or links in documentation object" for example:
* remove \link{untangle} from various .Rd (I never created this function...)
* tangelgram -> tanglegram
* fix "Unknown package 'dendroextra' in Rd xrefs" in color_branches docs. (into dendroextras)
* Fix Undocumented code objects: 'old_cut_lower_fun' 'old_get_branches_heights'
'old_heights_per_k.dendrogram'. By adding them as "#' @aliases"
* Fix "Codoc mismatches from documentation object"
* 'rotate': - by removing k and h (since I never got to implement them...)
*
* Fix "Mismatches in argument default values"
* tanglegram
* Name: 'margin_inner' Code: 3 Docs: 1.8
* Name: 'lab.cex' Code: NULL Docs: 1
* Name: 'remove_nodePar' Code: FALSE Docs: F
* Fix "Argument names in code not in docs", for: edge.lwd dLeaf_left dLeaf_right main sub rank_branches hang match_order_by_labels cex_main cex_main_left cex_main_right cex_sub
* Fix "'library' or 'require' call not declared from: 'ape'" by commenting-off every "require(ape)" command in the code, since it is already mentioned in imports!
(see:)
The problem still persists because of .onLoad in zzz.r, but we'll look into this later...
* Fix "Undocumented arguments in documentation object" for:
* 'bakers_gamma_for_2_k_matrix'
* 'cor_bakers_gamma'
* 'cut_lower_fun'
* Fix "Objects in \usage without \alias in documentation object 'shuffle'":
'shuffle.dendrogram' 'shuffle.hclust' 'shuffle.phylo'
* Fix "Argument items with no description in Rd object": 'plot_horiz.dendrogram', 'untangle_step_rotate_1side'.
* rotate - Remove the "flip" command in the example (after I noticed that "rev" does this just fine...)
###UPDATED TESTS:
* S3methods no longer seem to be exported (due to something in roxygen2), I chose to update the tests accordingly.
* cutree.hclust -> dendextend:::cutree.hclust
* cutree.dendrogram -> dendextend:::cutree.dendrogram
* cut_lower_fun acts diffirently on dendextendRcpp vs old dendextend, so I updated the tests to reflect that.
* Fixed the usage of person() in DESCRIPTION. (props goes to Uwe Ligges for his input)
###OTHER NOTES
* Fixing .Rd indentation.
* Fix S3method in NAMESPACE.
* Added "ape::" to as.phylo.
* Added to .Rbuildignore: (large files which are not essential)
* inst/doc/2013-09-05_Boston-useR
* vignettes/figure
* vignettes/ (we'll deal with this later...)
* Removed "Enhances: ape" from DESCRIPTION
* README.md - Using 'talgalili/dendextend', in install_github
dendextend 0.11.2 (2013-08-31)
----------------------------------------
###UPDATED FUNCTIONS:
* tanglegram now has "sub" and "cex_sub" parameters.
* untangle_step_rotate_2side added k_seq parameter.
* "trim" is now called "prune"!
###VIGNETTES:
* Finished tanglegram and untangle.
* Finished statistical measures of similarity between trees.
dendextend 0.11.1 (2013-08-29)
----------------------------------------
###UPDATED FUNCTIONS:
* color_labels - added a "warn" parameter. And also set the default (in case no k/h is supplied) - to just color all of the labels.
* Added "warn" parameter to: assign_values_to_leaves_nodePar, And set it to FALSE when used inside "tanglegram".
* tanglegram now returns an invisible list with the two dendrograms (after they have been modified within the function).
###BUG FIXES
* untangle_random_search - made sure the function will return the original trees if no better tree was found.
###OTHER NOTES
* Seperated 2013-09-05_Boston-useR.Rpres into two files (since RStudio is not able to handle them)
dendextend 0.11.0 (2013-08-24)
----------------------------------------
###VIGNETTES:
* Added a knitr presentation for "Boston-useR" 2013-09-05. Includes an introduction to hclust and dendrogram objects, tree manipulation, and dendextend modules (still needs the dendextend section on tanglegram...)
###UPDATED FUNCTIONS:
* tanglegram - added cex_main parameter.
###OTHER NOTES
* Gave proper credit to contributers in the DESCRIPTION file (and not just the .Rd files)
dendextend 0.10.0 (2013-08-20)
----------------------------------------
###NEW FUNCTIONS ADDED:
* cut_lower_fun - it wraps the "cut" function, and is built to be masked by the function in dendextendRcpp in order to gain 4-14 speed gain.
###NEW TESTS ADDED:
* For Bk methods.
###OTHER NOTES
* The dendextendRcpp package (version 0.3.0) is now on github, and offers functions for making cutree.dendrogram(h) faster (between 4 to 14 times faster).
###VIGNETTES:
* Added cut_lower_fun to the Rcpp section.
* Added FM-index and Bk plot sections.
dendextend 0.9.2 (2013-08-20)
----------------------------------------
###NEW FUNCTIONS ADDED:
* cor_bakers_gamma.hclust
###UPDATED FUNCTIONS:
* cutree.hclust - added the "use_labels_not_values" paremter (ignored)
dendextend 0.9.1 (2013-08-19)
----------------------------------------
###UPDATED FUNCTIONS:
* color_labels - added "labels" parameter for selective coloring of labels by name.
* Bk_plot - now adds dots for the asymptotic lines in case of NA's
* Bk - now calculates cutree once for all relevant k's - and only then goes forth with FM_index.
###BUG FIXES
* FM_index_R - now returns NA when comparing NA vectors (when, for example, there is no possible split for some k), instead of crashing (as it did before).
* Bk_plot - now won't turn one dendrogram into hclust, while leaving the other a dendrogram.
###OTHER NOTES
* The dendextendRcpp package (version 0.2.0) is now on github, and offers functions for making cutree.dendrogram(k) MUCH faster (between 20 to 100 times faster). (this is besided having labels.dendrogram now also accept a leaf as a tree.)
###VIGNETTES:
* Added Rcpp section.
* Started the Bk section (some theory, but no code yet - although it is all written by now...).
dendextend 0.9.0 (2013-08-18)
----------------------------------------
###NEW FUNCTIONS ADDED:
* sort_2_clusters_vectors
* FM_index_profdpm
* FM_index_R
* FM_index
* FM_index_permutation - for checking permutation distribution of the FM Index
* Bk
* Bk_permutations
* Bk_plot (it can be MUCH slower for dendrograms with large trees, but works great for hclust)
###UPDATED FUNCTIONS:
* color_labels - removed unused 'groupLabels' parameter.
###VIGNETTES:
* Added the FM Index section.
FILE CHANGES:
* Bk-method.r file added.
###OTHER NOTES
* The dendextendRcpp package (version 0.1.1) is now on github, and offers a faster labels.dendrogram function (It is 20 to 40 times faster than the 'stats' function!)
* Added a commented-out section which could (in the future) be the basis of an Rcpp cutree (actually cutree_1h.dendrogram) function!
dendextend 0.8.0 (2013-08-14)
----------------------------------------
###NEW FUNCTIONS ADDED:
* cor_bakers_gamma
* sample.dendrogram
* rank_order.dendrogram - for fixing leaves value order.
* duplicate_leaf - for sample.dendrogram
* sample.dendrogram - for bootstraping trees when the original data table is missing.
* sort_dist_mat
* cor_cophenetic
###UPDATED FUNCTIONS:
* tanglegram - added the match_order_by_labels parameter.
###VIGNETTES:
* Added the Baker's Gamma Index section.
* Added a bootstrap and permutation examples for inference on Baker's Gamma.
* Also for Cophenetic correlation.
FILE CHANGES:
* sample.dendrogram.r file added.
###BUG FIXES
* fix_members_attr.dendrogram - fixed a bug introduced by the new "members" method in nleaves. (test added)
dendextend 0.7.3 (2013-08-14)
----------------------------------------
###NEW FUNCTIONS ADDED:
* get_childrens_heights - Get height attributes from a dendrogram's children
* rank_branches - ranks the heights of branches - making comparison of the topologies of two trees easier.
###UPDATED FUNCTIONS:
* sort_levels_values - now returns a vector with NA's as is without changing it. Also, a warning is issued (with a parameter to supress the warning called 'warn')
* cutree - now supresses warnings produced by sort_levels_values, in the case of NA values.
* plotNode_horiz now uses "Recall" (I might implement this in more function).
* tanglegram - added parameters hang and rank_branches.
###BUG FIXES
* tanglegram - fixed the right tree's labels position relative to the leaves tips. (they were too far away because of a combination of text_adj with dLeaf)
###VIGNETTES:
* Fixed the dLeaf in tanglegram plots, and gave an example of using rank_branches.
dendextend 0.7.2 (2013-08-13)
----------------------------------------
###NEW FUNCTIONS ADDED:
* plotNode_horiz - allows the labels, in plot_horiz.dendrogram, to be aligned to the leaves tips when the tree is plotted horizontally, its leaves facing left.
###UPDATED FUNCTIONS:
* plot_horiz.dendrogram - allows the labels to be aligned to the leaves tips when the tree is plotted horizontally, its leaves facing left. (took a lot of digging into internal functions used by plot.dendrogram)
* tanglegram - added the parameters: dLeaf_left dLeaf_right. Also, labels are now alligned to the leaves tips in the right dendrogram.
###BUG FIXES
* Fix untangle_step_rotate_1side to work with non-missing dend_heights_per_k
* Set sort_cluster_numbers = TRUE for cutree, in order to make it compatible with stats::cutree. Added a test for this.
* Fix cutree.hclust to work with a vector of k when !order_clusters_as_data
* Fix cutree.dendrogram to give default results as stats::hclust does, by setting the default to sort_cluster_numbers = TRUE.
###OTHER NOTES
* Variations of the changes to plot_horiz.dendrogram and plotNode_horiz should be added to R core in order to allow forward compatability.
dendextend 0.7.1 (2013-08-12)
----------------------------------------
###NEW FUNCTIONS ADDED:
* untangle_step_rotate_2side
###VIGNETTES NEW SECTIONS ADDED:
* untangle_forward_rotate_2side
dendextend 0.7 (2013-08-11)
---------------------------
###NEW FUNCTIONS ADDED:
* shuffle - Random rotation of trees
* untangle_random_search - random search for two trees with low entanglement.
* flip_leaves
* all_couple_rotations_at_k
* untangle_forward_rotate_1side
###OTHER NOTES
* rotate - minor code improvements.
###VIGNETTES NEW SECTIONS ADDED:
* untangle_random_search
* untangle_forward_rotate_1side
dendextend 0.6 (2013-08-10)
---------------------------
###NEW FUNCTIONS ADDED:
* tanglegram - major addition!
* plot_horiz.dendrogram - Plotting a left-tip-adjusted horizontal dendrogram
* remove_leaves_nodePar
* assign_values_to_branches_edgePar
* remove_branches_edgePar
* match_order_by_labels
* match_order_dendrogram_by_old_order - like match_order_by_labels, but faster
* entanglement
###UPDATED FUNCTIONS:
* assign_values_to_leaves_nodePar - now makes sure pch==NA if we are modifying a nodePar value which is other than pch (and pch did not exist before).
* nleaves - now allow the use of the "members" attr of a dendrogram for telling us the size of the tree.
###OTHER NOTES
* entanglement.r file added
* untangle.r file added
###VIGNETTES NEW SECTIONS ADDED:
* Tanglegram
* Entanglement
dendextend 0.5 (2013-08-05)
---------------------------
###NEW FUNCTIONS ADDED:
* tanglegram
###UPDATED FUNCTIONS:
* rotate - fixes calling the same functions more than once (minor improvements)
* fac2num - keep_names parameter added
* intersect_trees - added the "warn" parameter.
###NEW TESTS:
* order.dendrogram gives warning and can be changed
* fac2num works
dendextend 0.4 (2013-08-02)
---------------------------
###NEW FUNCTIONS ADDED:
(including tests and documentation)
* is.natural.number
* cutree_1h.dendrogram - like cutree, but only for 1 height value.
* fix_members_attr.dendrogram - just to validate that prune works o.k.
* hang.dendrogram - hangs a dendrogram leaves (also allows for a rotated hanged dendrogram), works also for non-binary trees.
* nnodes - count the number of nodes in a tree
* as.dendrogram.phylo - based on as.hclust.
* get_nodes_attr - allows easy access to attributes of branches and leaves
* get_branches_heights
* fix_members_attr.dendrogram
* heights_per_k.dendrogram - get the heights for a tree that will yield each k cluster.
* is.hclust
* is.dendrogram
* is.phylo
* fac2num
* as.phylo.dendrogram - based on as.hclust.
* cutree_1k.dendrogram - like cutree, but only for 1 k (number of clusters) value.
* cutree.dendrogram - like cutree but for dendrograms (and it is also vectorized)
* cutree.hclust - like cutree but for hclust
* cutree.phylo - like cutree but for phylo
* sort_levels_values - make the resulting clusters from cutree to be ordered from left to right
* cutree - with S3 methods for dendrogram/hclust/phylo
* color_branches - color a tree branches based on its clusters. This is a modified version of the color_clusters function from jefferis's dendroextra package. It extends it by using my own version of cutree.dendrogram - allowing the function to work for trees that hclust can not handle (unrooted and non-ultrametric trees). Also, it allows REPEATED cluster color assignments to branches on to the same tree. Something which the original function was not able to handle. It also handles extreme cases, such as when the labels of a tree are integers.
* color_labels - just like color_branches, but for labels.
* assign_values_to_leaves_nodePar - allows for complex manipulation of dendrogram's leaves parameters.
###UPDATED FUNCTIONS:
* nleaves - added nleaves.phylo methods, based on as.hclust so it could be improved in the future.
* "labels_colors<-" - fixed it so that by default it would not add phc=1 to the leaves.
* "order.dendrogram<-" - now returns an integer (instead of numeric)
* cutree (cutree.dendogram / cutree.hclust) - Prevent R from crashing when using
cutree on a subset tree (e.g: dend[[1]])
* Renaming the unroot function -> to -> unbranch
* get_leaves_attr - added a simplify parameter.
###OTHER NOTES
* Updated the exact way the GPL was stated in DESCRIPTION and gave a better reference within each file.
###VIGNETTES NEW SECTIONS ADDED:
* Hanging trees
* Coloring branches.
dendextend 0.3 (2013-07-27)
---------------------------
###NEW FUNCTIONS ADDED:
* removed "flip", added rev.hclust instead (since rev.dendrogram already exists)
###VIGNETTES NEW SECTIONS ADDED:
* Vignettes created (using LaTeX)
* Basic introduction to dendrogram objects
* Labels extraction and assignment, and measuring tree size.
* Tree manipulation: unrooting, pruning, label coloring, rotation
###NEW TESTS ADDED:
* labels extraction, assignment and tree size (especially important for comparing hclust and dendrogram!)
* Tree manipulation: unrooting, pruning, label coloring, rotation
###UPDATED FUNCTIONS:
* "labels.hclust" - added the "order" parameter. (based on some ideas from Gregory Jefferis's dendroextras package)
* "labels.hclust" and "labels.hclust<-" - now both use order=TRUE as default. this makes them consistent with labels.dendrogram. Proper tests have been implemented.
* "labels<-.dendrogram" - make sure the new dendrogram does not have each of its node of class "dendrogram" (which happens when using dendrapply)
* unclass_dend - now uses dendrapply
* get_branches_attr - added "warning" parameter
* unroot.dendrogram - Can now deal with unrooting more than 3 branches. supresses various warnings.
* as_hclust_fixed - now works just as as.hclust when hc is missing.
* rotate - allowed "order" to accept character vector.
###OTHER NOTES
* Extending the documentation for: rotate, labels.hclust,
* Added a welcome massage to when loading the package (zzz.r file added)
* Added a first template for browseVignettes(package ='dendextend')
* Added a tests folder - making the foundation for using testthat.
* Added tests for labels assignment
* Added a clear GPL-2+ copyright notice on each r file.
* Forcing {ape} to load before {dendextend}, thus allowing for both rotate and unroot to work for BOTH packages. It does add extra noise when loading the package, but it is the best solution I can think of at this point.
dendextend 0.2 (2013-04-10)
---------------------------
###NEW FUNCTIONS ADDED:
* count_terminal_nodes
* labels_colors (retrieving and assignment)
* unclass_dend
* head.dendrogram (S3 method for dendrogram)
* nleaves (with S3 methods for dendrogram and hclust)
* rotate (with S3 methods for dendrogram, hclust and phylo)
* sort (with S3 methods for dendrogram and hclust)
* flip (works for both dendrogram and hclust)
* prune - prunes leaves off a dendrogram/hclust/phylo trees. (based on the prune_leaf function)
* as_hclust_fixed
* get_branches_attr
* unroot (dendrogram/hclust/phylo)
* raise.dendrogram
* flatten.dendrogram
* order.dendrogram<-
* intersect_trees
###UPDATED FUNCTIONS:
* "labels<-.dendrogram" - made sure to allow shorter length of labels than the size of the tree (now uses recycling). This version is now sure to deal correctly with labeling trees with duplicate labels.
###OTHER NOTES
* From here on I will be using "." only for S3 method functions. Other functions will use "_"
* Added more .r files, and changed the locations of some functions.
dendextend 0.1 (2013-04-05) - FIRST version!
----------------------------------------
###NEW FUNCTIONS
* S3 methods for label assignment operator for vector, dendrogram, hclust, matrix.
###OTHER NOTES
* Includes skeletons for some functions that will be added in the future. | https://sources.debian.org/src/r-cran-dendextend/1.9.0+dfsg-1/NEWS/ | CC-MAIN-2019-26 | refinedweb | 8,013 | 50.63 |
01 May 2009 23:59 [Source: ICIS news]
LONDON (ICIS news)--European oxo-alcohol and plasticiser producers are targeting firmer spot prices in May on the back of higher raw material costs, market sources said on Friday.
It was too early to assess the success of these producer initiatives although a lukewarm response from buyers and traders was noted.
Following the €23/tonne ($31/tonne) increase in the May propylene contract price upstream, several producers were looking to implement higher spot prices for their consumers.
One producer, currently selling n-butanol (NBA) at around €750/tonne FD (free delivered) NWE (northwest Europe), said the rise upstream combined with increased demand in Europe and ?xml:namespace>
Another butanol producer said it would also target slightly higher spot numbers in May for similar reasons.
Plasticiser producers were equally determined.
“Because of rising feedstock prices and our margins getting smaller and smaller we are making a price increase for May,” said one di-isononyl phthalate (DINP) supplier, looking to pass on a hike of €50/tonne to customers.
Propylene contract prices had increased €65/tonne since February, but DINP spot prices had continued to fall steadily.
The source added that other raw material costs, such as orthoxylene (OX), had also firmed and these costs needed to be passed on.
Despite these justifications, buyers did not expect any notable spot price changes in May.
One NBA buyer said some slight firming was a possibility but added that some of its suppliers had already indicated that they would roll over prices in May.
A European DINP buyer disregarded talk of price increases in May, stating that demand remained weak and product was still readily available, which made price stability more likely.
Similarly, trader reports did not support talk of possible May spot prices rises. Certain traders said NBA and isobutanol (IBA) spot prices had actually weakened going into May.
European oxo-alcohol producers include Perstorp and BASF. Plasticiser suppliers include Oxea and Polynt.
( | http://www.icis.com/Articles/2009/05/01/9213035/europe-oxo-alcohol-producers-target-may-price-rises.html | CC-MAIN-2013-48 | refinedweb | 328 | 51.38 |
Where are my python modules?08 Oct 2015
When we are developing software, it doesn’t matter which language, it is a best practice
to split the code in small pieces, it helps the legibility and
code organization. When working with C, for example, we create header (*.h)
files and implementation files (*.c). When working with python there are module files which
have extension
.py. To load a module we use the
import keyword.
A question often asked is how to find the location of my python modules. For example this error message: ImportError: No module named XXX. Has it ever happened to you? :D Here we will try to understand a little bit more about this problem.
To start I will create a
pub directory and add to it a module named
drink.py.
As I’m living in England, to drink a pint is part of my culture now. \o/
$ mkdir pub $ touch pub/drink.py $ echo "print('give me a pint')" > pub/drink.py
Lets try to import the module
drink.py.
$ python Python 2.7.10 (default, Jul 14 2015, 19:46:27) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import drink Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named drink
ImportError: No module named drink. The message is clear and tell us that
python doesn’t know where our module is. Maybe you are thinking, go to
pub
directory and run the interpreter from there.
$ cd pub/ $ python >>> import drink give me a pint
It works! Why? And if I have modules in different directories, how to import all
at the same time? In this case we can use
sys.path that is a list of
strings that specifies the search path for modules.
Ok, lets go back to the previous directory and add
pub in the
sys.path.
$ cd .. $ python >>> import sys >>> sys.path ['', '/Library/Python/2.7/site-packages/npm-0.1.1-py2.7.egg', '/Library/Python/2.7/site-packages/optional_django-0.1.0-py2.7.egg', '/tmp', '']
Have a look in the first value of the list ‘’. It means that the module will be firstly searched in the current directory, as a result when we ran the interpreter inside the pub directory the module was found.
>>> import sys >>> sys.path.append('pub') >>> import drink give me a pint
Another way to add a directory in
sys.path is through the
PYTHONPATH
environment variable. Come on! Test it!
$ export PYTHONPATH=pub $ python >>> import drink give me a pint
This was a basic hint, we can find much more information at docs.python.org. I hope I’ve helped and if you have comments leave me a comment.
Happy Coding!
Reference
- Friends
- Christian and Grazziella reviewing my bad English.
- Python docs | http://benatto.xyz/2015/10/08/where-are-my-python-modules.html | CC-MAIN-2018-05 | refinedweb | 483 | 76.32 |
As we all know, Angular is most discussed word in frontend development world. It has gained a lot of popularity in these years. Angular becomes a choice of a lot of frontend developers and I am also one of them.
In this article, I will highlight about when angular came and how quickly it grew into a market. In every six months, Angular team is coming with the new version of angular, making it more powerful with new features. I must appreciate the angular team for their wonderful efforts.
Well, now you might be excited how it evolved, what new features added in different versions. Now, the wait is over. Let’s get started!
Before moving ahead, I would like to let my readers know that ‘Angular 8’ is the latest version which our talented angular team released.
Birth of Hero (AngularJS) — 2010
In 2010, birth of today’s market hero took place. It was known as ‘AngularJS’.
AngularJS is a Javascript framework that is developed by Google. It is used to make single page application (SPA). Wait! I hope you know what is SPA. If not, ask Google about it and then come back as this is one of the beautiful approach that angular uses, but here is a brief intro of SPA (those who knows, you can skip):
SPA or Single page application is an application in which everything downloads in one go. All the necessary code is downloaded in one go. Unlike multiple page app, you don’t have to request a web page from the server for every hit and loads the page. In SPA, index.html file downloads once with all the content and then for every url change, existing webpage dynamically loads the demanding content. This approach gives better UX in switching between different pages and gives you feel of the app.
That’s it about SPA. I hope it gives you a brief picture of it. Let’s move to the original topic.
AngularJS allow developers to develop web application faster. It uses a client side rendering — a technique in which rendering of the content is taken care by client (browser). Well, I will not go into detail about what’s the pros and cons of this technique, but the main concern in this technique is SEO. This rendering which angular uses is poor in SEO. Wait! Dont’t think angular is bad. There is very popular Bollywood dialogue — “Picture abhi baki hai mere dost (Movie is still left, my friend)”. I think this dialogue suits the current situation. I will discuss why I said this.
Now, we should focus on the features of AngularJS. Here are some :
1)Data binding — automatic synchronization between model and view.
2) Dependency Injection System — a design pattern in which system supplies the dependent objects when it creates the object.
3) scope — that takes care of controller and view.
4) Services — for sharing info among different parts of application.
5)Directives — It gives super power to HTML. For instance, ng-model, ng-app.
6)Controllers — heart of the application where logic resides.
7)Template — view that gives information using our controller and model.
This list is long. I will not go into detail about angularJS but in angularJS, controllers are the heart of the application. Well, angularJS came with lots of features for developing a powerful web application but failed at some point like huge bundle size, performance issues, SEO problem, code maintainability issues but that does not mean that it is total failure. Data binding, Dependency Injection concepts are success of angular. Hence, we can say it’s half failure and half success.
The imperfection in AngularJS made angular team to rewrite the whole framework from scratch. Such big change in new version of any framework/library was never encountered in the market. New version of angular is totally different from AngularJS. How it is different? Let’s understand by diving into our next sub -topic.
Angular 2–2016
Later version of AngularJS came into the market in 2016, Then, it was no more known as AngularJS. It got a name ‘Angular’. Angular 1.x version is known as AngularJS. Later version after 2.x is known as Angular. When I encountered these two words in my web development journey. I literally thought that these are two different frameworks and after a research I got to know that angular is updated version of angularJS. I hope my readers would not have misunderstood it.
Angular 2 came into market with drastic changes. The biggest change was introduction of Typescript. Typescript is superset of javascript with additional features like it follows Oops concepts, strongly typed. Programmers that is coming from object oriented world finds it more familiar unlike javascript.
Components are the heart of angular 2+ world. Angular introduced various packages for achieving basic and important functionalities like routing package for easily defining routes, http package for fetching data from the server, animation package for animations and so on.
Angular 2 also provides directive concept like AngularJS. Directives gives superpowers to our HTML like *ngFor, *ngIf (structural directive) makes our html dynamic whereas attribute directives like ngModel (for two way data binding), ngStyle takes care of appearance and behaviour of our DOM.
Another feature which does not change in angular is, it also uses DI system. Like angularJS, DI system supplies the dependent objects (dependencies) to the component.
Wait! I forgot to tell you about wonderful helping hand which angular provides which is angular-cli. It’s a great helping hand that helps us in developing our application faster like for generating component just use ‘ng g c component-name’ (‘g’ is generate and ‘c’ is component). This is another cool feature added in angular.
This is not the end. I told you that the biggest disadvantage of angularjs is that it is poor in SEO and remember that famous dialogue which I said. You will understand in few minutes why I used that.
It is biggest cons that angularJS is poor in SEO. When you view the page source, you can see there is nothing(no Html) which makes crawlers to think website useless i.e. without any info. It is very painful when your website wants google crawlers to index them and make them to reach on top in google search but could not achieve it. Angular team took care of it and introduced angular universal which takes care of SEO of your angular website. It uses server side rendering which in turn solves SEO problem.
These are the features with which angular 2 came in a market.
Angular 4 — March, 2017
After reading the subtopic, you might wonder why not Angular 3? This is a common question that can come in anyone mind. Don’t worry! I will let you know. The reason why not angular 3 was because of the router package. The angular router package was already distributed as v3. To avoid confusion, angular team released angular 4 version. I think now my readers can jump to understand the features of angular 4.
Angular 4 came with bug fixes and other new features and improvements. The biggest improvement that was done was in bundle size. They reduced bundle size by 60% which in turn made application lighter and hence application loading time decreased.
The other thing that was done was in animation package. They pulled out animation feature in a separate package- @angular/animations.
Another improvement was in structural directive. *ngIf came with else part in this version.
This is the brief introduction of features of angular 4.
Angular 5 — Nov 2017
After six months, angular team came with another new version, i.e. angular 5. This angular 5 version again came with a lot of new features, improvements and bug fixes.
As the main concern for every website is faster loading time. Angular took care about it in this version as well.To enhance application performance more, they introduced build optimizer- It is a tool that was introduced to make small bundle size. It uses tree shaking technique to remove the dead code from application.
Compiler improvements were also made that makes faster rebuild of the application.
Another feature that was introduced was state Transfer key(TransferStateKey which is part of the platform/browser package). Well! You might think what it is? When to use? You can avail the beauty of this feature in your application if you are using SSR. Yes, If you have implemented SSR then you must use state transfer key feature. The reason why I said this because when you are using SSR and your application is making any HTTP request (which is quite common) then this request is going be to invoke two times i.e one on the server and another on the browser. This causes flickering issue (I have gone through this flickering issue in my application because the HTTP request was invoking two times). Thanks to state transfer key feature. This feature makes browser to use the response of HTTP request that is hit on server. It uses the response that server gets from HTTP request. As its name suggest, server transfers the state of response with html to browser. Hence, two times hit of HTTP request can be avoided.
Another improvement that was done was in http client package. In this version, HTTPClientModule came with improvements like using this module, developers do not have to parse the response using a map. Parsing step is not needed any more. Suppose if there is non-JSON type response, then you can specify that response using responseType in your HTTP request .
This is all about feature of angular 5. Now the time has come to move to angular 6 version. After another six month, angular 6 came into market with more power.
Angular 6 — May 2018
In May, 2018- just after six months, angular team released another version of angular — angular 6. This version also came with lots of new features. I will list down some of them.
In this version, angular CLI got updated. New commands were introduced like ng update. To update your angular project dependencies to latest, you can use it. For instance:
ng update @angular/core
The other improvement that was done was in RxJS library called as RxJS6. The two important changes were :
- RxJS6 introduced new internal package structure.
- Usage of operators.
New internal package structure involves changes in the way of importing packages. Instead of related import, we can use single import in this. For instance :
import {Observable} from ‘rxjs/Observable’;
import {Subject} from ‘rxjs/Subject’;
import ‘rxjs/add/operator/map’;
Now, with rxjs6 :
import {Observable, Subject} from ‘rxjs’;
import {map} from ’rxjs/operators’;
Operators usage are also changed in angular 6. For instance :
Old version:
import ‘rxjs/add/operator/map’;
this.http.get(url).map((response)=>response.json)
New version:
import {map} from ’rxjs/operators’;
this.http.get(url).pipe(map((data)=>data*2)
I hope you got the changes which is done in RxJS library.
Another change is — angular-cli.json is replaced with angular.json. This file defines the configuration of the project like styles, scripts, testing, build process and so on. In angular.json, more options for configuration are added like multiple projects configuration can be done.
The other improvements are — <ng-template> is now available instead of <template>. There is change in the way of making services available for use like in previous version, if we want to make service available in entire application or in specific component- we have to provide it in provider array but in this version, in service file itself there is ‘providedIn’ metadata that is available for it. You can specify there the availability of services. By default, it makes service available at root level.
Another beauty that is added is an ‘angular element’ package. This package allows developers to use your angular component in another environment (non-angular environment) like Vue.js. It’s another interesting feature that makes you to use your angular component in other environment
This is all about the features of angular 6 which I learnt. Now the last version’s feature I will discuss which is — Angular 7
Angular 7 — October 2018
In October 2018, another version with more beauty came into the market.
The features that were added were CLI prompts, virtual scroll, drag and drop and again bundling size reduction. In CLI prompts, angular-cli asks you about options like when you make new application using ng new application-name. CLI asks you whether you want to add routing file or not and so on. There is also budget property added in angular.json in which you can specify your maximum and minimum budget size value.
This is all about feature of angular 7. I know I have not stated all other features of angular 7 because I have not gone into those features.Hence, not comfortable in talking about those. We also know that Angular 8 version is released, but due to the same reason of not going into the features of angular 8, I have not written about it.
Note to my readers: I may have left many features, but tried my best to write about the features of different versions in short (It’s not possible to list down all the features but ya I tried whatever features I read in my journey) but I would love if my readers can comment about those features which they found in their learning journey.
Thanks for reading. | https://www.freecodecamp.org/news/angular-a-journey-into-one-of-the-most-popular-front-end-tools-in-todays-job-market/ | CC-MAIN-2021-25 | refinedweb | 2,256 | 66.23 |
[SOLVED] property var array: [ ] doesn't trigger onChanged signal
With a "property var array: [ 1, 2 ], changes to the array don't trigger an onArrayChanged signal. I couldn't find anything in the documentation that explained this; does anyone know if this is supported?
import QtQuick 2.4
import QtQuick.Controls 1.3
ApplicationWindow {
width: 640
height: 480
visible: true
Rectangle { id: root anchors.fill: parent property var array: [ 1, 2 ] property int integer: 1 MouseArea { anchors.fill: parent onClicked: { root.array[0]++; root.integer++; console.log("array incremented to", root.array[0]); console.log("integer incremented to", root.integer); } } // this happens only on initialization, not on a mouse click onArrayChanged: console.log("array changed to", array[0]) // this happens on a mouse click onIntegerChanged: console.log("integer changed to", integer) }
}
The documentation says:
A list property cannot be modified in any other way. Items cannot be dynamically added to or removed from the list through JavaScript operations; any push() operations on the list only modify a copy of the list and not the actual list. (These current limitations are due to restrictions on Property Binding where lists are involved.)
You can, however, modify a copy of the list and then reassign the property to the modified value. Other options are to create an array object from within a .js JavaScript file, or implement a custom list element in C++. Here is a QML element that modifies the list in a JavaScript file:
(...)
However, note that a JavaScript list should not be used as a QML property value, as the property is not updated when the list changes.
I'm not actually using the "list" type which is a distinct type from the "var" type. I believe the documentation you reference is from Qt 4. In Qt 5, you can add/remove/modify elements in a var array; running the code above shows the array is being modified, it's just not generating change notification.
The documentation on the "var" type does point out some limitations on change notification, but trying the workaround there didn't help:
Hi!
You are right. Stumbled over this list problem some time ago and now thought things were still the same. I'm sorry!
Ok, played around a bit with this. Looks like things are basically the same as before.
Like the current documentation () states, one has to reassign the property object to trigger the changed signal:] } Text { text: "" + car[0] + " " + car[1] + " " + car[2] } } }
Thanks, I tried that and it worked. It seems a bit ugly though; in real use, I'd have to make a copy of the array, modify it, and assign it back to the property.
I tried putting "arrayChanged();" after modifying array, and this actually worked (bindings to "array" were handed as expected). I've not seen that documented anywhere; is it legal?
Well, this myPropertyChanged - thing is documented to be universally valid and I don't think var properties are any exception to this. I use it all the time for all kinds of objects, so... :-)
Just for the record: How this array stuff works:] } Button { text: "Works, too" onClicked: { car[0] = 6 car[1] = 7 car[2] = 8 carChanged() } } Text { text: "" + car[0] + " " + car[1] + " " + car[2] } } } | https://forum.qt.io/topic/52239/solved-property-var-array-doesn-t-trigger-onchanged-signal | CC-MAIN-2018-05 | refinedweb | 540 | 56.55 |
Ext JS 4.2 Beta is Now Available
Ext JS 4.2 Beta is Now Available
Today we are excited to make available the beta release of Ext JS 4.2.
For those eager to get the bits, you can download the beta here:
The most significant change in Ext JS 4.2 is in the Grid component. There is an excellent newsletter article on this, so I highly recommend taking a look at it for details. In addition to the exciting grid improvements, Ext JS 4.2 contains all of the bug fixes found in Ext JS 4.1.2 and Ext JS 4.1.3 that have previously been shipped only to support subscribers.
Let’s dive in to the other highlights of this release.
IE 10
With Ext JS 4.2 we now have much improved IE 10 support. The introduction of IE10 brings an entirely new challenge to consider in your applications, however: 2 quirks modes! That’s right. All other browsers to date have a strict mode and a quirks mode (the mode you get with no DOCTYPE specified). With IE10, strict mode is enabled as you would expect: when you specify the DOCTYPE. What is a surprise is that when you do not specify a DOCTYPE, the new “interoperable quirks” mode is enabled. This quirks mode is not at all like previous “IE Quirks” modes. The old quirks mode is enabled only with the following meta tag:
Code:
<meta http-
For the official story from Microsoft on IE 10 and its document modes, see
RTL
In addition to Grid, the long anticipated support of Right-to-Left (or RTL) languages has arrived in Ext JS 4.2! The core functionality of RTL is provided as a set of overrides in the Ext.rtl namespace. Enabling RTL on your viewport is a two-line addition:
Code:
Ext.define('MyApp.view.Viewport', { extend: 'Ext.container.Viewport', requires: [ 'Ext.rtl.*' ], rtl: true });
In all browsers except for legacy IE (that is, IE6 in all modes and IE7 to IE9 in quirks mode), it is possible to change the “rtl” mode of child containers, say for a portal where not all portlets have the same RTL mode. In legacy IE, however, RTL is a global option. Simply including the RTL css file will cause many things to flip in to RTL.
The additional RTL support code is not included in ext-all.js but instead in ext-all-rtl.js. Of course, using Sencha Cmd, your application’s build will only include the pieces you need and the ext-all-*.js files are only important at development time.
The extra CSS rules are likewise not included in ext-all.css since most users do not need them. Enabling them is as simple as switching to ext-all-rtl.css. To support legacy IE, you will need to dynamically determine whether to include ext-all.css or ext-all-rtl.css since including ext-all-rtl.css will cause the much of the UI to flip to RTL mode. You will still need the support code and “rtl” config to complete the functionality, but the RTL rule CSS selectors in ext-all-rtl.css will match all components and containers in IE6.
XTemplate
Ext JS 4.1 released a ton of new bells and whistles in XTemplate, but like any language, there are always more useful features out there. In Ext JS 4.2, XTemplate has learned a couple new tricks. These can be seen in this example:
Code:
<tpl foreach="someObject" between=","> {$}={.} </tpl>
MVC
In Ext JS 4.1.3 we started a series of internal fixes to the MVC core classes to help with larger applications that needed to share models, views and controllers between pages. In previous versions, you could do this:
Code:
Ext.define('MyApp.controller.Main', { views: [ 'Foo' 'Common.views.Bar' ] });
Code:
Ext.define('MyApp.controller.Main', { views: [ 'Foo', 'Bar@Common.views' // maps to Common.views.Bar ] });
Beyond convenience methods like these, the MVC changes have been targeted to also allow you to more easily unit test your controller classes and to share your application classes across pages.
To enable controllers to be unit tested, we have modified the Ext.app.Controller class to no longer require an Ext.app.Application instance in order to be instantiated. This also affects how the events wire up worked since that required the application and a helper object called the “event bus”.
You can now write a standard-looking Ext.app.Application derived class and pass its name to Ext.application:
Code:
Ext.define('MyApp.Application', { extend: 'Ext.app.Application', name: 'MyApp' ... }); Ext.application('MyApp.Application');
Should you not use them, these internal changes should be transparent to your applications. But if you have previously tried to unit test your controllers and found that you could not … you should try again with Ext JS 4.2.
Trees
In Ext JS 4.2, the many grid improvements also serve for trees. Most importantly, the buffered rendering plugin.
One challenge that people encounter when using trees is the NodeInterface class. This class gets injected “on top” of your model class and that can create problems since it may override methods you want to implement. In Ext JS 4.1.3, we added Ext.data.TreeModel as a class from which you can derive your own models for use in a tree. Because Ext.data.TreeModel gets the NodeInterface applied to it, your models are free to override any method you need. The change is simple:
Code:
Ext.define('MyApp.models.Foo', { extend: 'Ext.data.TreeModel', ... });
To some degree this is largely an internal change, but it will surely have impact to some applications. In Ext JS 4.1, the auto container layout class had to manage all of its child components individually. This posed performance challenges for containers that had lots of child components, so in Ext JS 4.2, we have added wrapping elements to create a bounding box around auto container contents.
We have heavily tested the interaction of these elements with things like child margins, body padding, scrolling overflow, text flow and the like. The side-effects should, therefore, be minimal to your styling.
This change also affects anchor layout and column layout.
Life-cycle Optimizations
We have also performed a good deal of tuning on the component life-cycle methods (like setUI). We moved more of the DOM element preparation work into the beforeRender phase to ensure that the markup we produce is right and does not need tweaking in afterRender.
One change in this area that could affect your applications is that the “add” and “remove” container events no longer bubble by default. In Ext JS 4.0/4.1, these events would fire for each component you created and they would bubble up to the top of the component hierarchy. In almost every case, there were no listeners to these events and all this work was for naught. In Ext JS 4.2, these events no longer bubble, which allows us to take advantage of our optimized event listener detection and maybe avoid even calling fireEvent (if there are no listeners).Don Griffin
Ext JS Development Team Lead
Check the docs. Learn how to (properly) report a framework issue and a Sencha Cmd issue
"Use the source, Luke!"-->
- Join Date
- Mar 2007
- Location
- Notts/Redwood City
- 30,464
- Vote Rating
- 29
To clarify, Trees will be able to use
Code:
plugins: { ptype: 'bufferedrenderer' }Search the forum:
Read the docs too:
Scope:--> | http://www.sencha.com/forum/showthread.php?251214 | CC-MAIN-2014-10 | refinedweb | 1,258 | 67.35 |
Before you post any code I expect you to have compiled it and corrected any compilation errors.
If there are no errors, don't bother to post it.
Move to the next.
Before you post any code I expect you to have compiled it and corrected any compilation errors.
If there are no errors, don't bother to post it.
Move to the next class.
Okay well so far there are no errors. I have no errors in my AddressBook class but i don't think it is complete.
public class AddressBook { //index of the last entry private int top = 0; //constant number that indicates the maximum //number of entries in the address book private static final int MAXENTRIES = 10; //array of Address Book Entries private Contact[] list; /** * Constructor for objects of class AddressBook */ public AddressBook() { list = new Contact[MAXENTRIES]; } }
For any class, you need to define what the data is it will hold and how you will manipulate that data.
What does the AddressBook class hold (its variables) and how will you manipulate that data (its methods)?
Well the data the AddressBook holds i believe is all the data in the contact class but held as the object. I presume the methods are to set and return but i'm unsure as it just needs to hold a collection of contacts. Are the methods here meant to be the addentry and so on or are they left for the textui
On the same point in my contact class i don't have any methods just the constructor, to string and the equals so my contact class surely isn't finished
What methods do you need to add to do the set and return?What methods do you need to add to do the set and return?AddressBook holds i believe is all the data in the contact class but held as the object. I presume the methods are to set and return but i'm unsure as it just needs to hold a collection of contacts.
Leave out the TextUI stuff.
Classes often are not finished. You will probably have to go back and add more code as the need arises.Classes often are not finished. You will probably have to go back and add more code as the need arises.class surely isn't finished
That's normal.
Programming is an iterative process. Do a little here, do a little there, go back, do some more here and then some there and again and again and again.
I need to add these :
public AddressBook(Contact contact) { this.contact = contact; } public Contact getContact() { return contact; } public void setContact(Contact contact) { this.contact = contact; }
How are the Contact objects stored? How will the caller of the get method describe which Contact object he wants?
Same for the set method. What is to be set?
I'm not sure, when you create a contact object it just ask's for you to enter name and address not any of the individual data.
With this in addressbook it will just ask you to enter something in with contact which doesn't make sense
What do your program assignment specifications say?
Must provide the following functionality
1. Enable user to add contacts
2. User to delete existing contacts
3. User search address details based on family name
4. List all contacts in address book.
Does reading what you posted answer your question?
Well it just wants you hold the data so it should be fine i think
Keep going. Come back when you have an error that you don't understand.
Okay well still being unsure of my AddressBook class.
Looking at the textui and the method addEntry it doesnt actually add to the array
public void addEntry() { BufferedReader keyIn = new BufferedReader(new InputStreamReader(System.in)); String forename = ""; String surname = ""; String street = ""; String city = ""; String county = ""; String country = ""; String post = ""; if(top == MAXENTRIES){ System.out.println("Address Book is full"); return; } //asks the user for the data of the address book try{ System.out.print("Forename: "); forename = keyIn.readLine(); System.out.print("Surname: "); surname =); } Name name = new Name(forename, surname); Postcode postcode = new Postcode(post); Address address = new Address(street, city, county, country); }
So i need to add this i think to the method.
AddressBook entry = new AddressBook (); list[top] = entry; top++;
It compiles fine and when i bring up the menu it works, it asks for all the inputs but i get an error when you finish the last input
"java.lang.String.NullPointerException: null" and the line of code is -list[top] = entry;
The TextUI class needs a reference to the AddressBook object that the addEntry method can use to call a method in the AddressBook class to add the entry the addEntry method has created.
Wow statements like that make me cry haha
Sois a reference to the class but not the object.is a reference to the class but not the object.private AddressBook addressbook;
I'm got to need it breaking down a bit more its alot to take it
private AddressBook addressbook;
That defines a reference variable but does NOT give it a value.
So i need to do use a constructor and have
public AddressTextUi(AddressBook addressbook) { this.addressbook = addressbook; }
right?
I don't know what you need. What are the program's requirements?
What class creates the AddressTextUi class?
Think about how a user will use this program.
What class will first start and then what will happen?
The first class that kicks it off is
AddressBookDriver,
public class AddressbookDriver { public static void main(String [] args) { AddressTextUi ui = new AddressTextUi(); ui.mainMenu(); } }
This connects to the AddressTextUi class and launches the main menu that is found within the class.
The main menu displays the options to the user:
[A] ADD ENTRY
[D] DELETE ENTRY
[U] UPDATE ENTRY
[V] VIEW ALL ENTRIES
[S] SEARCH ENTRIES BY SURNAME
[Q] Quit
Enter desired action:
Like so, when the user types in Q and enters the program closes and displays a message that is fine.
When the user types A and hits enter you get this:
Enter Contact Details:
Forename:
Once the forename has been entered it asks for surname and so on.
It Also should check whether the addressbook is full if so it should display a message and go back to the main menu
Ok. Have you answered the question you asked in post#145
What happens when the user enters the other options:
D
U
V
S
With the delete entry - it checks to see whether the addressbook is empty if so displays a message and returns to main menu.
If not then it displays all the entries of the address book with the index number, and prompts the user to enter the index number of the contact they wish to delete it then deletes and goes back to the main menu.
Update entry i dont require and am taking it out.
View entries - checks to see if its empty first, if not displays all the contacts in the address book
So far i have the view and delete entries in and i think they will work but i have this error at the end of typing in the data for the addentry method so i'm not sure and need to fix that first
Ok keep up the good work.Ok keep up the good work. | http://www.javaprogrammingforums.com/whats-wrong-my-code/10523-address-book-program-issues-6.html | CC-MAIN-2016-07 | refinedweb | 1,231 | 70.84 |
Usually threads are designed to serve one purpose. A thread could contain functions that are called from other functions, which would not make it self contained. Due to the single purpose all the functions would be related. By identifying the code path of a specific thread we might be able to help with enumerating the threads functionality. First we will need to get the address for each call to CreateThread. This can be done using LocByName("CreateThread",0).
def retlistofCreateThreadAddr(): addr = [] for x in CodeRefsTo(LocByName("CreateThread"),0): addr.append(x) return addrWe will then need to get the offset that is pushed on to the stack for the thread's function start address (lpStartAddr). This is the third argument.
push eax ; lpThreadId push ebx ; dwCreationFlags push esi ; lpParameter push offset StartAddress_SearchFiles ; lpStartAddress push ebx ; dwStackSize push ebx ; lpThreadAttributes call ds:CreateThread cmp eax, ebx jz short loc_1000341E
HANDLE CreateThread( LPSECURITY_ATTRIBUTES lpsa, DWORD cbStack, LPTHREAD_START_ROUTINE lpStartAddr, LPVOID lpvThreadParam, DWORD fdwCreate, LPDWORD lpIDThread ); // lpStartAddr: [in] Long pointer to the application-defined function of type // LPTHREAD_START_ROUTINE to be executed by the thread; represents the starting // address of the thread. For more information on the thread function, see ThreadProc.
IDA is usually good at identifying lpStartAddr. If we rely on IDA, we can back trace a number of instructions from the address found in retlistofCreateThreadAddr() until we find the string "lpStartAddr" in the comments. Once we have the address we just need to read a Dword for the threads function start address. There are a couple of flaws to this approach. One is that we are relying on IDA for comments and another is we are relying on lpStartAddr to be a Dword address. The function address could be chosen at runtime. If this is the case we won't be abel to find lpStartAddr. The code will then need to be manually analyzed. An easy way to determine if we were able to receive the lpStartAddr is to check if it's a valid address using GetFunctionName(lpStartAddr).
def getStartAddr(ct_addr): # backtrace to find string "lpStartAddress" count = 0 addr = PrevHead(ct_addr,minea=0) while count < 10: if 'lpStartAddress' in str(Comment(addr)): return Dword(addr+1) count = count + 1 addr = PrevHead(addr,minea=0) continue.
This script will be included in the upcoming release of IDAScope. In the previous post we used a script to rename all functions in a subroutine block. The same script can be used for renaming all child functions in a thread. For the IDAScope release of this script it will be in it's own window. Plus, the script will have an option to add a repeating comment to all child functions in a thread. Constant appending to the function name starts to clutter it up. Dan has been doing some awesome work on IDAScope (makes my updates look like sophomore programing examples).
For anyone who wants the code now the script and code can be found below.
Note: Calculating the depth is probably the slowest part of the code. I tried to figure out away to get the depth from inside the graph_down function but I had no luck. I spent a good amount of time reviewing others code in graphing down and graphing up. Cody Pierce has some great code on Tipping Point's blog but it's for graphing up. If anyone has any thoughts please shoot me an email. My address is in the comments of the script.
Source code of an_threads.py, Download
## an_threads.py is a script that can be used to help with analyzing threads ## and their child functions. Usage IDA > File > Script file.. > Select an_threads.py ## The output will be displayed to the Output Window. IDA > View > Output Window ## Created by alexander.hanel@gmail.com, version 0.01 from idaapi import * import idautils import idc import sys def format_depth(x): # Get's the depth of each function from the parent/root function for index in range(0, len(x)): if x[index][1] == None: x[index].append(0) continue if index == 1: x[index].append(1) continue # Indent Child Function if x[index][0] == x[index-1][1]: x[index].append(x[index-1][2]+1) continue # No Indent same function if x[index][0] == x[index-1][0]: x[index].append(x[index-1][2]) continue if x[index][0] != x[index-1][1] or x[index][0] != x[index-1][0]: for v in range(1, index): if len(x[index]) == 3: continue if x[index][0] == x[v][0]: x[index].append(x[v][2]) continue if len(x[index]) == 3: continue # returns list # format parent, child, depth return x def print_dep(dep): # prints the output for line in dep: if line[1] == None: print GetFunctionName(int(line[0],16)), "(lpStartAddr)" else: space = ' ' * 3 * line[2] func_string = GetFunctionName(int(line[1],16)) if func_string == '': func_string = '* Call ' + GetDisasm(int(line[1],16))[6:-6] print space , func_string return def graph_down(ea, depth, graph = {}, path = set([]) ): # This function was borrowed from Carlos G. Prado. Check out his Milf-Plugin for IDA on Google Code. graph[ea] = list() # Create a new entry on the graph dictionary {node: [child1, child2, ...], ...} path.add(ea) # This is a set, therefore the add() method # Iterate through all function instructions and take only call instructions for x in [x for x in FuncItems(ea) if is_call_insn(x)]: # Take the call elements for xref in XrefsFrom(x, XREF_FAR): if not xref.iscode: continue if xref.to not in path or 'extrn' in GetDisasm(xref.to): depth.append([hex(LocByName(GetFunctionName(x))), hex(xref.to)]) if xref.to not in path: # Eliminates recursions graph[ea].append(xref.to) graph_down(xref.to, depth, graph, path) return depth def retlistofCreateThreadAddr(): # returns a list of all addresses that call CreateThread addr = [] for x in CodeRefsTo(LocByName("CreateThread"),0): addr.append(x) return addr def getStartAddr(ct_addr): # backtrace to find string "lpStartAddress" # then read and return Dword count = 0 addr = PrevHead(ct_addr,minea=0) while count < 10: if 'lpStartAddress' in str(Comment(addr)): return Dword(addr+1) count = count + 1 addr = PrevHead(addr,minea=0) continue ## Main() threads = [] for x in retlistofCreateThreadAddr(): # return (CreateFunction Address, StartAddress) threads.append((x,(getStartAddr(x)))) print "Number of Threads %s" % (len(threads)) for addr in threads: print "CreateThread Call %s" % hex(addr[0]) if GetFunctionName(addr[1]) == '': print "[Warning] Could Not Get lpStartAddr [Warning]" print continue x = graph_down(addr[1], depth=[[hex(LocByName(GetFunctionName(addr[1]))),None]]) print_dep(format_depth(x)) print | http://hooked-on-mnemonics.blogspot.com/2012/08/ida-thread-analysis-sript.html | CC-MAIN-2017-22 | refinedweb | 1,077 | 56.15 |
Can ST2 auto close tab after the file was deleted? I delete file because I don't need it anymore, but then I must close the tab manually. It's so annoying
Totally agree!
I am constantly hitting save on deleted files by accident and thereby undeleting them. often don't realize I've done it for a while
You can do this with a plugin. I didn't really test this much, so you may want to test on non critical stuff first. It does just close the view, so worst case is that you lose some existing work. That being said, I'm pretty sure it works fine.
import sublime_plugin
import os
class MyEvents(sublime_plugin.EventListener):
def on_activated(self, view):
if view.file_name():
if not os.path.exists(view.file_name()):
view.set_scratch(True)
view.window().run_command("close")
I agree, it would be nice to have ST close the tab of a deleted file.
@skuroda cool, thx, works nice
How do I add this? Just make a new plugin folder?
Go to "Tools -> New Plugin". Paste the content I posted above into the file. Save it into "Packages/User". You can choose whatever file name you want, just be sure the extension is ".py"
Theres a Problem with the plugin, i just found out that the Default-Settings-Files, also Default-Keymap, are not displayed anymore, they get instantly closed after opening.
So you can not view the default settings of Sublimetext or other Plugins.
I just upgraded the script a little bit. Now it checks first if the file is not a kind of "Default"-file.
import sublime_plugin
import os
class MyEvents( sublime_plugin.EventListener ):
def on_activated( self, view ):
s = view.file_name()
if s:
if not os.path.exists( s ):
if not "Default" in s:
view.set_scratch( True )
view.window().run_command( "close_file" )
Any chance this plugin can be updated for ST3? Doesn't seem to work there for me.
Or maybe I'm doing something wrong? I've added it to packages/user as CloseDeletedTabs.py, and even went so far as to restart Sublime, but it doesn't seem to do anything. The tab stays open after deleting the file.
So I hadn't ever looked at plugins in ST3 before, but I've done some research and found that in ST3 plugins don't run on the main thread, so you need to use sublime.set_timeout to run the close_file command on the main thread and avoid a crash.
This seems to work for me in ST3 on OSX:
[code]import sublime_pluginimport sublimeimport os
class MyEvents( sublime_plugin.EventListener ): def on_activated( self, view ): s = view.file_name()
if s:
if not os.path.exists( s ):
if not "Default" in s:
view.set_scratch( True )
sublime.set_timeout(lambda: view.window().run_command("close_file"), 0)[/code]
The on_activated event wasn't working properly when deleting the file. I had to change tabs and then click the deleted file's tab for it to disappear. The method I needed was on_modified_sync.
import sublime_plugin
import sublime
import os
class MyEvents(sublime_plugin.EventListener):
def on_modified_async(self, view):
s = view.file_name()
if s:
if not os.path.exists(s):
# Without checking for this string in the path, config files seem to be automatically closed.
if "Sublime Text 3" not in s:
view.set_scratch(True)
sublime.set_timeout(lambda: view.window().run_command("close_file"), 0)
never use the on_modified_async listener for something like this, then typing lags, etc, etc
Updated the plugin again to be faster and fix a bug when creating a new file from the subl command line.
import sublime_plugin
import sublime
import time
import os
class MyEvents(sublime_plugin.EventListener):
def on_deactivated_async(self, view):
s = view.file_name()
if s:
time.sleep(0.1) # Give the file time to be removed from the filesystem
if not os.path.exists(s):
print("Closing view", s)
view.set_scratch(True)
view.window().run_command("close_file")
Gist:
Hope this helps. | https://forum.sublimetext.com/t/close-tab-after-delete-file/9439/1 | CC-MAIN-2017-43 | refinedweb | 646 | 69.58 |
malloc_stats man page
malloc_stats — print memory allocation statistics
Synopsis
#include <malloc.h>
void malloc_stats(void);
Description).
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
This function is a GNU extension.
Notes
More detailed information about memory allocations in the main arena can be obtained using mallinfo(3).
See Also
mmap(2), mallinfo(3), malloc(3), malloc_info(3), mallopt(3)
Colophon
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
mallinfo(3), malloc_info(3), mallopt(3). | https://www.mankier.com/3/malloc_stats | CC-MAIN-2018-13 | refinedweb | 110 | 52.15 |
Please note: Portions of this transcript have been edited for clarity
Introduction
James Z (Moderator): We are pleased to welcome Randy Franklin Smith contributing Editor and Author of Windows 2003 Security Log ()
Randy Franklin Smith (Expert): Hi! I’m Randy Franklin Smith. I’m an information security consultant, an SSCP, a CISA, and a Security MVP. I write extensively about Windows security for Windows IT Pro magazine including many articles on the Windows security log. I’ve also recently compiled my research on the security log into a free, online resource called the Windows Security Log encyclopedia at where you can also learn about my 2-day Security Log Secrets course.
James Z (Moderator): Let’s begin the chat. We welcome you to begin submitting your questions for Randy.
Start of Chat
Randy Franklin Smith (Expert):Q: I run an I.T. consulting firm. The security event log is very confusing. Often I'm asked to monitor if someone is accessing files or applications they shouldn't be. What is the best way to monitor this and extract that info from the security event log?A: You can monitor either successful or failed accesses of specified files using the "Audit object access" audit policy. Typically you enable auditing on just important, critical folders. Then to find instances of someone trying to access the file who isn't authorized you configure the folder's audit policy for failed read attempts. To get an audit trail of who is modifying data, change the folders audit policy to trap successful Write and Append events.
Randy Franklin Smith (Expert):Q: Suggested software for getting a handle on the event logs from a dozen or more servers and finding the important events.A: I get asked this question a lot. There are, as you may know, many products out there for merging, archiving, reporting and alerting on the security log. I've only worked with a few. You read about the solutions I have experience with at . You can also look at a comparative I did for the magazine at which gives you some good evaluation criteria. That being said, I can't make a recommendation without knowing more about your specific needs. However make sure you get a tool that allows you to filter based on the fields within the event's description since this is where most of the important information is found and all indications are that this trend will continue with Longhorn. Also, check out the free LogParser tool from MS and Audit Collection Services which is in beta at MS.
Randy Franklin Smith (Expert):Q: Is there a Microsoft version of the Unix syslog server functionality?A: Yes, search Google on EventReporter if I am not mistaken. Also check out Audit Collection Services which is in beta currently at Microsoft.
Randy Franklin Smith (Expert):Q: why am i getting these, there from a DC, Category: Account Logon; type: Failure Audit; Event ID: 673; it's a Service ticket request from a server on network with the word host in front of it, e.g. host/fs1.domain.com, ticket options: 0x40830000 Failure 0xDA: That is very common and you can ignore them. Every computer on the network work frequently chats with the DC for a variety of reasons - including the refresh of group policy. To facilitate this computer to DC communication the computer maintains Kerberos tickets which eventually expire causing the failures you see. Nothing to worry about.
Randy Franklin Smith (Expert):Q: Is there any way to get a stand-along server, (such as a co-located web server) to email a daily report of all event logs, or better yet, just any warnings or non-standard informational events?A: There are plenty of programs you can buy but you might also look at writing a couple logparser queries that produce the information you want and then use blat to email the results to you. Combine that in a batch file and create a scheduled task. logparser is part of the IIS resource kit from MS and you can find blat on the Internet.
Randy Franklin Smith (Expert):Q: On a machine that generates logs on a minute by minute basis, there is a 6 hour gap. What are some of the programs/hacker tools that could edit or suspend event logs?A: Interesting. First I would check the events immediately preceding and following the gap. Look for events that indicate a reboot or audit policy change. See events 512, 513 and 612 at my encyclopedia at. Both phenomena could be the source of the gab. Otherwise the only other tool I'm aware of that will allow an admin to delete events is winzapper. For more info on winzapper see my article at.
Randy Franklin Smith (Expert):Q: Suggestions for using logon/off events to create a billing process for a shared machine ?A: Unfortunately Windows does a horrible job of logging logoff events. So there is no good way at all to get a good record of when users logoff.
Randy Franklin Smith (Expert):Q: would that be the same advice for a Failure event on an Exchange 2003 server Event ID 680 & 529? These are valid users, they get into their email fine but I see failures all the time of this type....A: what are the failure/error codes? I have them documented at and
Randy Franklin Smith (Expert):Q: topic of collecting logs, could the logs from all servers be sent to a SQL server for collection? assuming the right tables, etc were already built. How would I send them automatically to SQL 2000 without going to every server and doing a manual exportsA: This is exactly what ACS does. I don't know how much ACS will cost if anything. But ACS puts an agent on each server which sends security events (not other event logs) in near real time to the ACS server which has the tables already built as you described.
Randy Franklin Smith (Expert):Q: I get a single Event ID 565 each time a workstation is turned on. The source is Security and the category is Directory Service Access. Is there something I should adjust to get rid of these or should I just ignore them?A: There is no end to the events you just need to ignore which really illustrates why I say you absolutely need some kind of tool whether a free one like logparser or something you pay for like Dorian or Engagents tools which give you the ability to filter out what you don't care about
Randy Franklin Smith (Expert):Q: Do you work with or recommend the use of templates that will set the appropriate audit log values for DC's?A: Well there are only 9 audit policies that you have to configure so a template helps but not a big deal to just manually configure them in the correct group policy object. The most important categories to enable for auditing on DCs are Account Management, Account Logons, Directory Service Access, Policy Change and System events.
Randy Franklin Smith (Expert):Q: Suggestions for imediate notification of important events/failures? Email/pager/phone?A: You are in luck! Here is an article I wrote showing how to use WMI filters for specific event IDs and then email them to you. The code is at
Randy Franklin Smith (Expert):Q: This is exactly what ACS does. ACS ??):Q: Hey Randy, I've done what you suggested, but the event log seems to fill up with a LOT of stuff. Is there an easy way just to extract the info related to something like "who has been accessing c:\secret_docs??? Thanks!!!A: Logparser, logparser, logparser
Randy Franklin Smith (Expert):Q: MACS Beta looks like it's been production ready for ages - any idea when they'll finally release this and under what conditions (purchase/free/free SMB version/cost Enterprise version)??A: Your questions are so relevant. I wish I had some answers for you. I can't get anything from MS on this so I figure they haven't decided yet.
Randy Franklin Smith (Expert):Q: One of our servers intermittently logs on to the other as "anonymous" with the event IDs 538 and 540. Is this something I should change?A: This is normal. Recommend ignoring.
Randy Franklin Smith (Expert):Q: We have some clustered win/exchange 2003 that are getting access denied via OWA . The only event id is 537 (Logon Failure: Reason: An error occurred during logon )A: Interesting. That is a pretty rare event. Perhaps you can provide more information on this offline via email. rsmith@montereytechgroup.com
Randy Franklin Smith (Expert):Q: Is there a list of events that you can safely ignore?A: Good question. I don't have a list per se but the best generalization I can make is that it is usually safe to ignore any generated by computers which you can distinguish because computer accounts always end with a $ sign.
Randy Franklin Smith (Expert): I've noticed that, and understandably, there's a lot of interest on tools to merge, report and alert on the security log. It is a HUGE problem. You have 3 options: 1) roll your own solution with utilities and scripts your write or collect from the Internet 2) Buy a solution 3) Wait on ACS from MS. Unfortunately, all 3 options still require you to understand the security log and write your own reports and alerts. 3rd party ISV solutions are good at the merge, alert and report functions but all of them that I've seen are pretty lean as far as the "canned" reports and alerts that come with the tool. As far as I've seen with ACS from MS, it just gets everything into a well structured DB but it seems it will be up to you to write your own reports.
Randy Franklin Smith (Expert):Q: On a number of plain 2003sp1 servers - "A provider, Rsop Planning Mode Provider, has been registered in the WMI namespace, root\RSOP, but did not specify the HostingModel property. ... will be run using the LocalSystem account. ..." Can you give more info?A: what's the event ID? Is this in the security log or a different log?
Randy Franklin Smith (Expert):Q: Throughout this chat I've heard a LOT of answers that go something like, "you can just ignore them" then WHY are they there? and there must be a way to PROPERLY fix something ..... ignore them just does NOT seem like an acceptable answer from Microsoft!!A: First of all, let me make clear I am NOT part of MS. I am an independent consultant. I don't waste time therefore trying to figure out why or explain why MS code does what it does. :-): I have a security log quick reference available as a free download in which I list what I consider the most important events to monitor. You can get it at: In addition to the events in the quick reference chart I just mentioned I would monitor for changes to GPOs and OUs for change control purposes. You could also set up file auditing to alert you whenever file permissions are changed without also bugging you every time a file is opened.
Randy Franklin Smith (Expert):Q: Event Id 560 and 576 do you know what benefit monitoring this have. I tried to get an understanding from the web but could not really find out exactly what use in protection or accountability they provideA: 576 isn't that useful. It just tells you what user rights a user had at the time he/she logged on. Windows uses this event for user rights which get logged so frequently it would be bad to log each use. See. As far as 560, 560 tells you when a file, registry key or other object is accessed. Unless you turn on auditing for specific folders or keys you shouldn't be getting many 560s. You will get some useless 560s for SAM related events . The point with 560 and many other events is that the event ID alone is not enough to base monitoring on. You have to look at the fields within the event's description.
Randy Franklin Smith (Expert):Q: Blackberry Enterprise Server is generating 565 errors on our Exchange server. I've discussed with RIM with no luck. The BES works fine but the errors fill up the log. They reference "Unknown Specific Access (bit 8)" - How do I interpret that?A: I need to see the entire event with any sensitive information obfuscated
Randy Franklin Smith (Expert):Q: How can I determine what happened when I see an event like this ? Event Type: Success Audit Event Source: Security Event Category: Privilege Use Event ID: 576 Date: 1/14/2004 Time: 1:24:15 AM User: S-1-5-21-420350432-1808818903-1233803906-1004 Computer: {editted} Description: Special privileges assigned to new logon: User Name: Domain: Logon ID: (0x0,0xF6CD317) Privileges: SeChangeNotifyPrivilege SeBackupPrivilege SeRestorePrivilege SeDebugPrivilegeFor more information, see Help and Support Center at: all this means is that someone (SID S-1-5-21-420350432-1808818903-1233803906-1004) logged on with the 4 rights listed under Privileges. See for an explanation of those rights
Randy Franklin Smith (Expert):Q: We get the following in the event log of ourWindows 2003 SQL 2000 server: Event ID 40961; Category SPNEGO - The security System could not establish a secured connection with the server DNS/prisoner.iana.org. No authentication protocol was available.A: That is not a security log event but I recognize it. Your computer is trying to update its DNS record against the indicated DNS server if I'm right
Randy Franklin Smith (Expert):Q: Third party services that do log aggregation and analysis for a (large?) fee ?A: My firm, Monterey Technology Group, Inc. :-) Feel free to email or call me.
Randy Franklin Smith (Expert):Q: Suggestions for getting a "baseline" of normal events to ignore ?A: These chats are so valuable because I learn what your folks in the trenches really need. This is the 2nd or 3rd time such a request has come up in this log. I don't have anything now but I'll try to put a list together in the near future at. (I know - vaporware :-) Anyway, everyone is welcome to submit what they have found to be very common in the log and safe to ignore. I'll compile the feedback and my own thoughts on the subject.
Randy Franklin Smith (Expert):Q: Randy, concerning the event for the BES, can I send that to you via email? Chat truncates the text.A: sure
Randy Franklin Smith (Expert):Q: How about the opposite of what to ignore - what events usually indicate a problem and need attention?A: 675, 676 or (failed 672 on Win2003), 642, 632, 636, 660, 624, 644, 617. See for why I say these events. BTW, the security log won't tell you when there is an attack or intrusion. It just tells you what is happening on the system that has security relevance. With any event, you have to evaluate whether it is innocent or not. For instance, 675, failed authentication. Usually indicates bad password. But is it a legit user who fat fingered it or a bad guy? Your or our reporting tool has to consider things like the quantity of 675s for the same user as well as the IP address of the client in the event's description. Again I stress the need to understand and use the info in each event's description
Randy Franklin Smith (Expert):Q: Randy - I tried clicking on one of the links to your site and got a DNS error. So I added an 's' to the link - " - and the link worked. Thought I would mention it incase anyone else was flummoxed by the error.A: thanks! I'm a bad typer
Randy Franklin Smith (Expert):Q: Randy, I know you can't make excuses for MS code, but does MS have any plans to put correct links in the event logs in the future. (i.e. in the log entry above is broken.) Maybe they could link to your site. :)A: That would be fine with me :-). In fact at least one log monitoring company (Dorian) links to my site from their reports.
Randy Franklin Smith (Expert):Q: From source "RegSrvc" "The description for Event ID ( 0 ) in Source ( RegSrvc )."A: You are looking at an event log created on a different computer with a different version of windows which means your local computer doesn't have the static text description for the event
Randy Franklin Smith (Expert):Q: Any way to identify a workstation trying to logon, that is not in your domain, from the info provided in the event log?A: yes, you need "account logon" auditing enabled (not to be confused with the logon/logoff category) on your DCs. Then look for failed events from the Account Logon category. The events should list the client IP address and/or client workstation depending on authentication protocol and version of windows.
Randy Franklin Smith (Expert):Q: Suggestions for learning the basics of Event Log analysis ?A: Basics: check out my articles at. For advanced: (shameless plug) come to my Security Log Secrets course in DC next month.
Randy Franklin Smith (Expert):Q: Suggestions for including logs from IIS, Firewalls, IDS, etc. in a log aggregation service ? (OK, spelling is not my best event.)A: I know I've talked about logparser a lot but it is truly an amazing tool. It is free and allows you to query all of those log formats using SQL-SELECT commands - just like what Access queries use. Other than that, the only tool I'm aware of that monitors many different log formats (even allows you to train) is Intruder Alert. When I last looked at IA it was owned by Axent Technologies which was subsequently bought out by Symantec and I haven't looked at it since...
Randy Franklin Smith (Expert):Q: Randy, which security logs on my network need to be monitored? Domain controllers, servers, workstations?A: domain controllers definitely. But there is still a lot of security activity that only gets logged on the local server itself such as attacks on local accounts (as opposed to domain accounts which gets logged on the DCs) or access events for the files on that server. as far was workstations, it's a good idea to turn on auditing for logon/logoff and process tracking even if (like most companies) you don't/can't monitor the logs. It helps to have the information if you suddenly have to investigate a user
Randy Franklin Smith (Expert):Q: How would you track whether a user attempted to access the registry on their workstation?A: Well, you have to understand the from the operating system's point of view, the user is accessing the registry all the time, whenever they run an application that accesses their preferences or the app's own configuration settings. However, if you mean a user trying to access the registry using the Registry Editor for instance, you could turn on the Process Tracking category and look for event 592 where the executable name is regedit or regedt32. Bear in mind that this would not catch other programs trying access the registry or scripts the user my write
Randy Franklin Smith (Expert): I've mentioned logparser a lot. You can learn more about it at and you can download it from. If you haven't already checked out logparser you really must. It is one of the coolest tools to come out in a long time.
Randy Franklin Smith (Expert):Q: How easy is it for an attacker to tamper with the security log?A: Not very. You either need physical access to the server or admin authority. If you are an admin you can find winzapper on the Internet which allows you to delete events from the log. There is no way to protect the log from admins except for very frequently shipping the events out of the security log and to a secure, isolated server which tools like the ones I mention at my web site accomplish as well as upcoming ACS from Microsoft.
Randy Franklin Smith (Expert):Q: Is there any chace that MS will release a Security MP for MOM 2005 that scours the Security logs for basic audit events. So, for example, real-time notification of someone adding themselves to the Domain Admins Security Group.A: I doubt it. MOM is operations focused and not designed with security requirements of audit log integrity built into it. My understanding is that Microsoft's feeling is that ACS is for the security log and MOM for everything else.
Randy Franklin Smith (Expert):Q: Are workstation events logged on the DC security events normally when a user logs on? Is this info replicated to all DC's or just logged on the DC the user happened to authenticate to?A: No and No. Each system has its own security log and there is NO replication of security events -even between DCS.
Randy Franklin Smith (Expert):Q: Are workstation events logged on the DC security events normally when a user logs on? Is this info replicated to all DC's or just logged on the DC the user happened to authenticate to?A: so authentication events are logged as you described - just on the DC that happens to services the request
Adam Carheden (Expert):Q: what is the best way to script the log to alert me via mail or whatever, when there is an alert of qualifying criteria?A: You are in luck! Here is an article Randy wrote showing how to use WMI filters for specific event IDs and then email them to you. The code is at
Randy Franklin Smith (Expert):Q: if we want to be careful about security and want to track security on several files and folders, dc, whatever, do you recommend another server or the same box will normally due?A: The only reason you need to push security events off one computer to another is if you want to protect those events from tampering by either the admins or a hacker that gets admin authority- admin authority and/or the "manage auditing and security log" user right
Randy Franklin Smith (Expert):Q: Is there anyway to get dcom related security events in the event log. (IE.: A web app tries to launch excel trough DCOM to output a report trough asp and the IUSR_machinename gets denied, nothing is logged)A: Unfortunately no. Just a few components of the Windows OS report the majority of the events to the security log so unless the problem you are describing is a security event from the standpoint of the Security Reference Monitor, a logon process, Active Directory, etc, you won't get an event. This means that there is some security activity best monitored outside the security log. :-(
Randy Franklin Smith (Expert):Q: OK I'm going to admit not knowing what ACS is. Can someone give me the 20 sec pitch?A: Please see my earlier posts regarding ACS where I describe it's agent architecture and central DB
Randy Franklin Smith (Expert):Q: OK I'm going to admit not knowing what ACS is. Can someone give me the 20 sec pitch): In case anyone just came into the chat, I've got lots of free information on security log at where you can also learn about my course. I appreciate all your questions and I'll be working on that "Events Safe to Ignore" list later this week so stay tuned...
Randy Franklin Smith (Expert):Q: Outside of the event log what are the best place to monitor for security related information? (C:\windows\system32\logfiles , ...)A: The IAS log is useful but other than that you really have to think about the services and applications installed on a given server. Then find out where they log their information. I wish it were simpler but it really depends on each developer.
Randy Franklin Smith (Expert):Q: With auditing turned on in Exchange and AD, getting info meaningful real-time data out of the logs can be like drinking from a fire hose. What tool (right now) would you recommend that users use to get data out of the audit logs.A: Logparser which is free or else check my posts earlier on this subject
Randy Franklin Smith (Expert): OK folks. Thanks again for attending. Good bye.
James Z (Moderator): Thanks Randy and everyone else for coming. This concludes today's chat on The Security Event Log: The Unofficial Guide. | http://technet.microsoft.com/en-us/cc678960.aspx | crawl-002 | refinedweb | 4,147 | 69.72 |
I spend most of my time with .NET technologies(SharePoint, ASP, WF...) but I am aware of that J2EE is very powerful and number one technology in the enterprise application world, so I decided that I should learn some basics of J2EE. You can download the source code here. The source code contains already implemented the changes described in the second part of this post.
This is a simple tutorial on "How to build simple enterprise web application on top of DB" using Java Persistence API, JSF, NetBeans and Derby Database. We will create application that allows the management of "Companies and their Products".
You can just download the NetBeans Bundle which already includes everything. Just keep in mind that it was tested on the 6.7.1/GlassFish 2.1 version and some setting might not work on the newer versions.
In NetBeans select New -> New Project -> Java EE -> Enterprise Application. Later you will be asked to specify Application Server, here you can select the installed GlassFish instance.
In the last tab you can specify the names for the EJB and Web Application Module. That is because NetBeans will create two modules. The EJB module will take care of your "Model and Control", the Web Application Module will take care of the "View".
If you wish to model you Entity Beans with NetBeans UML tool than select New Project -> UML -> Java-Platform Model -> Finish, and later you can create your Class Diagram.
First we want to create our Entity Beans - objects which represent some real world entities, which will be stored in the database. In our application there are 2: Company and Product. Here is the class diagram which we want to create.
From this diagram we can generate the Entity classes. Right click the UML project -> Generate Code. You should obtain following classes:
public class Company {
private int id;
private String description;
private String name;
private List<product> products;
public Company () {
}
public int getId () {
return id;
}
public void setId (int val) {
id = val;
}
public String getName () {
return name;
}
public void setName (String val) {
name = val;
}
public List<product> getProducts() {
return products;
}
public void setProducts(List<product> products) {
this.products = products;
}
public String getDescription () {
return description;
}
public void setDescription (String val) {
this.description = val;
}
}
public class Product {
private int id;
private String description;
private String name;
public Product () {
}
public String getDescription () {
return description;
}
public void setDescription (String val) {
description = val;
}
public int getId () {
return id;
}
public void setId (int val) {
id = val;
}
public String getName () {
return name;
}
public void setName (String val) {
name = val;
}
}
Also you can write the classes and use the Reverse Engineer to obtain the class diagram.
To convert the the class to Entity Beans you have to do two simple steps - add annotations and implement the Serializable interface.
public class Company implements Serializable {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
@Column(name="companyID",nullable=false)
private int id;
@Column(name="companyDescription")
private String description;
@Column(name="companyName")
private String name;
@ManyToMany
private List<product> products;
...and all the setters and getter...
}
It is quite self-explanatory. The class has to be annotated as @Entity, and has to have at least one @Id field. Then we can specify the name of the column which will be created in the database, and also strategy for generating the IDs value.
You will notice that there will the NetBeans light bulb telling you that there is no Persistence Unit - now we will create one.
Persistence Unit will perform the object - relational mapping for us. To create one we will first create a database.
On the "Services" pane localize Databases -> Java DB -> Create Database and specify the demanded details.
Now when we have the Database, we can create Database Connection which will be used by the Persistence Unit to connect to the DB.
Databases -> New Connection.
Now go back and right click EJB Module of you application and select New -> Persistence Unit.
Before we continue with Session Beans we will prepare a Named Query. Named queries are static queries which are later compiled to SQL and used by Persistence Unit. We will use a simple queries getting all the companies in the table. We place the query above the class definition.
@Entity
@NamedQuery(
name="Company.getAllCompanies",
query="SELECT c FROM Company c"
)
public class Company implements Serializable {
}
Now that you have finished the Persistance Unit you can try to deploy the project. Of course there is no functionality so far created, but during the deployment the database should be created for you. You can check the resulted database in the Services tab.
Now we will create the Session Bean, which will provide method and actions which we can perform with our Entity Beans.
Go to Enterprise Beans -> New -> Session Bean, than specify the package and leave the standard settings.
You can notice that the newly created Bean implements interface ending with "Local".
Now we will add the first method which will return all the companies in the database. NetBeans tells you how to do this - Context Menu -> Insert Code -> Add Bussiness Method.
The method will be defined in the interface and method stub created in the implementation. Now you can edit the code like this:
@Stateless
public class SalesSessionBean implements SalesSessionLocal {
@PersistenceContext
private EntityManager em;
public List<company> getAllCompanies() {
List<company> companies = em.createNamedQuery(
"Company.getAllCompanies").getResultList();
return companies;
}
}
Notice that we defined EntityManager which is a class which manages Persistance Context. Persistance Context is basket managing your Entities(objects representing Company, Product...). Classes which are managed by the Entity Manager are our Entity Beans. In the method you can see that we all calling Named Query which we have created before.
Now we will create a middle layer between the Session Bean and JSP site representing the GUI - this layer is a Backing Bean. Backing bean is a Java class which manages and is accessible from the actual JSP page. Create new class in the Web Module (New -> Java Class) and name it SalesBack. Now here is the implementation:
public class SalesBack {
@EJB
SalesSessionLocal ssl;
public List<company> getAllCompanies(){
return ssl.getAllCompanies();
}
}
You can see that the class basically references the Session Bean (Be careful you have to reference the interface, not the actual implementation). Than in the method we simply calling the method of the Session Bean. From this it seems that this layer is not needed, but actually it is quiet helpful as you will see later.
<managed-bean>
<managed-bean-name>sales</managed-bean-name>
<managed-bean-class>sales.back.SalesBack</managed-bean-class>
<managed-bean-scope>session</managed-bean-scope>
</managed-bean>
Be sure to check the Backing Bean class (including the package name). Later you can reference this Backing Bean as "sales" in your JSP page.
Now we will show what advantages/components brings Java Server Faces and how to use them. First we will create simple page just showing a table of all companies in stored in the DB. On the Web Module create a new JSP page with JSF. (New -> JSF JSP Page). After you create the page you can see that it contains two @taglib directives referencing the JSF framework TAGs.
JSP technology can be extended by "custom tags". When we register the prefixes using the taglib directory, we introduce all the custom tags which come with the JSF framework. First group with prefix "f" named "Core" references all the components which are independent on the renderer (e.g. converters, validators). The second group with prefix "h" named "HTML" introduces all the HTML tags and components that brings JSF to create nice GUI and functional GUI (buttons, tables, grids...).
OK, so now lets put in the code which will show the table of companies.
<h1><h:outputtext</h:outputtext></h1>
<h:datatable
<h:column>
<f:facet<h:outputtext</h:outputtext>
<h:outputtext
</h:outputtext>
<h:column>
<f:facet<h:outputtext
</h:outputtext>
<h:outputtext
</h:outputtext>
</f:facet>
</h:column></f:facet></h:column></h:datatable>
The main advantage of JSF is that it lets us bind the content of some HTML components to the properties/fields in the Backing Bean. Because in our Backing Bean we had a method called "getAllCompanies" than here we can reference the result of this method as "#{sales.allCompanies}". This binding is done on the "<datatable>" component by setting the value attribute. Notice that the second attribute var lets you set the "name" for one row of the binded collection. Later in the columns definitions you can address one company in the collection by this name (here "item").
value
var
Ok now that you have created the JSP file is time to try it. Before you will have to tell the application server, that if the user navigates to your page, the page contains Faces elements and has to be processed using the Faces Servlet. Open the web.xml and alter the Faces Servlet settings this way:
>
Important part is the Mapping configuration. Basically you are saying that each file ending jsf will be processed by the Faces Servlet. Now if the name of the file you created was "companies.jsp", than in the browser you will reference it by "companies.jsf". Now run the project, and in the browser type the path to "companies.jsf" and you should get following result.
Obviously the companies table is empty. So go ahead and using NetBeans (Services -> Databases) run some SQL INSERT statements and you should be able to see the inserted data in your table.
INSERT INTO SALES.COMPANY (companyname, companydescription) values('First company',
'Sales bananas');
INSERT INTO SALES.COMPANY (companyname, companydescription) values('Second company',
'Sales oranges');
OK in the next post I will finish this application and provide some additional functionality to edit the company details and add and remove products of a company.
CONTINUE TO THE SECOND. | http://www.codeproject.com/Articles/94855/J2EE-NetBeans-JSF-Persistence-API?msg=3542045 | CC-MAIN-2015-11 | refinedweb | 1,631 | 55.44 |
!
The Problem
Based on this Reddit post, the OP asked about the problem of uniform dice rolling first, followed by a question about blood types, which don’t have uniform probabilities. What we want to find out is the expected number of dice rolls (or blood tests) that would need to be run. We also want to get an idea of the distribution, and perhaps get a confidence interval of some kind.
Uniform Coupons
The dice problem gives us an easy, uniform setup to the problem before moving on to something maybe a bit more complex. Since we are using an AMC, the first step is to build the transition matrix. That means defining all the states and linking them with probabilities. I do that with the following python code:
import numpy as np from itertools import combinations n = 6 state_probs = {k:1/6 for k in range(1,n+1)} n_states = 2**n states = ['start'] T = np.zeros((n_states,n_states)) for k in range(1,n): for comb in combinations(range(1,n+1),k): states.append(comb) curr_ind = len(states) - 1 if len(comb) == 1: prev_ind = 0 T[prev_ind,curr_ind] += state_probs[comb[0]] else: for rem in comb: comb_prev = list(comb) comb_prev.remove(rem) idx = states.index(tuple(comb_prev)) T[idx,curr_ind] += state_probs[rem] T[curr_ind,curr_ind] = sum(state_probs[x] for x in comb) states.append((1,2,3,4,5,6)) T[-1,-1] = 1 comb = (1,2,3,4,5,6) for a in states[-7:-1]: idx = states.index(a) comb_prev = list(comb) v = list(set(comb) - set(a))[0] T[idx,-1] = state_probs[v]
All the code does is loop over the various combinations of different lengths of dice roll sets and set up transition probabilities based on all the possible previous states. The transition matrix looks pretty, too:
This next block does the AMC math to get the expected number of steps to the final state, and the variance in the number of steps.
Q = T[:-1,:-1] nt = Q.shape[0] R = T[:-1,-1] Ir = T[-1,-1] It = np.eye(nt) N = np.linalg.inv(It - Q) t = np.dot(N,np.ones(nt)) tsq = t**2 t_var = np.dot(2*N - It,t) - tsq # expected number of rolls is t[0]
The expected number of dice rolls is 14.7, with a variance of about 39. This doesn’t tell us anything about the PMF of the dice rolls. To get that, we first simulate the AMC to get the probability of being in the final state for a range of dice roll counts. Going from 1 roll to 40 rolls gives the following CDF:
The CDF will go on to infinity, but I stopped calculating it because it was close enough to 1 for our purposes. CDFs are easily turned into PMFs, too:
I’ve filled in the 95% interval of the PMF that starts at the smallest roll. This tells us that 95% (roughly, since the PMF is obviously discrete) of the time we will roll 28 or fewer times. We will roll 29 or more times only 5% of the time.
Non-Uniform Coupons (Blood Types)
Using some Red Cross data on blood type ratios in the population, we can repeat the above analysis in the exact same way. Using the same code as above, but with a new set of items:
items = ['A+','A-','B+','B-','AB+','AB-','O+','O-'] state_probs = {'A+':.33,'A-':.07,'B+':.09,'B-':.02,'AB+':.03,'AB-':.01,'O+':.37,'O-':.08}
we get a much larger transition matrix, but one with the same kind of pattern:
The expected number of people to test before finding all blood types is 122.45. The variance is 8453, so the std. dev is about 92. That’s a lot of people to test just to get at least 1 of each (so go donate blood)!
The following two plots show the CDF and PMF (as a continuous line). I stopped running the AMC simulation forward when the absorbing probability reached 0.95. That means that the PMF shows the left-edge 95% confidence interval.
Notice how the maximum likelihood point is much smaller than the mean value. We expect that because this distribution can go on for infinity, which greatly skews the mean to the right tail.
Final Thoughts
Besides showing how to get the distribution of steps in a uniform coupon problem, we also used the same method to solve a non-uniform coupon problem. As one would expected, the distribution of steps has a very long right-tail. You might be searching a very long time for that last coupon!
The really easy thing about these particular problems is that you only need a set of population proportions to make them work. In the future I’ll probably re-do this for the McDonald’s Monopoly game, since it is just a coupon collecting game. Hopefully I’ll be able to pull out some data on how much you might be expected to win on the way to trying for big prizes!
One Comment
I am typically to blogging and i really appreciate your content. The article has really peaks my interest. I am going to bookmark your site and maintain checking for new information. | https://phaethonprime.wordpress.com/2015/09/08/non-uniform-coupon-collectorsthe-problem/ | CC-MAIN-2019-30 | refinedweb | 885 | 71.95 |
[algorithm] What are the mathematical/computational principles behind this game?
So there are k=55 cards containing m=8 pictures each from a pool of n pictures total. We can restate the question 'How many pictures n do we need, so that we can construct a set of k cards with only one shared picture between any pair of cards?' equivalently by asking:
Given an n-dimensional vector space and the set of all vectors, which contain exactly m elements equal to one and all other zero, how big has n to be, so that we can find a set of k vectors, whose pairwise dot products are all equal to 1?
There are exactly (n choose m) possible vectors to build pairs from. So we at least need a big enough n so that (n choose m) >= k. This is just a lower bound, so for fulfilling the pairwise compatibility constraint we possibly need a much higher n.
Just for experimenting a bit i wrote a small Haskell program to calculate valid card sets:
Edit: I just realized after seeing Neil's and Gajet's solution, that the algorithm i use doesn't always find the best possible solution, so everything below isn't necessarily valid. I'll try to update my code soon.
module Main where cardCandidates n m = cardCandidates' [] (n-m) m cardCandidates' buildup 0 0 = [buildup] cardCandidates' buildup zc oc | zc>0 && oc>0 = zerorec ++ onerec | zc>0 = zerorec | otherwise = onerec where zerorec = cardCandidates' (0:buildup) (zc-1) oc onerec = cardCandidates' (1:buildup) zc (oc-1) dot x y = sum $ zipWith (*) x y compatible x y = dot x y == 1 compatibleCards = compatibleCards' [] compatibleCards' valid [] = valid compatibleCards' valid (c:cs) | all (compatible c) valid = compatibleCards' (c:valid) cs | otherwise = compatibleCards' valid cs legalCardSet n m = compatibleCards $ cardCandidates n m main = mapM_ print [(n, length $ legalCardSet n m) | n<-[m..]] where m = 8
The resulting maximum number of compatible cards for m=8 pictures per card for different number of pictures to choose from n for the first few n looks like this:
This brute force method doesn't get very far though because of combinatorial explosion. But i thought it might still be interesting.
Interestingly, it seems that for given m, k increases with n only up to a certain n, after which it stays constant.
This means, that for every number of pictures per card there is a certain number of pictures to choose from, that results in maximum possible number of legal cards. Adding more pictures to choose from past that optimal number doesn't increase the number of legal cards any further.
The first few optimal k's are:
My kids have this fun game called Spot It! The game constraints (as best I can describe) are:
- It is a deck of 55 cards
- On each card are 8 unique pictures (i.e. a card can't have 2 of the same picture)
- Given any 2 cards chosen from the deck, there is 1 and only 1 matching picture.
- Matching pictures may be scaled differently on different cards but that is only to make the game harder (i.e. a small tree still matches a larger tree)
The principle of the game is: flip over 2 cards and whoever first picks the matching picture gets a point.
Here's a picture for clarification:
(Example: you can see from the bottom 2 cards above that the matching picture is the green dinosaur. Between the bottom-right and middle-right picture, it's a clown's head.)
I'm trying to understand the following:
What are the minimum number of different pictures required to meet these criteria and how would you determine this?
Using pseudocode (or Ruby), how would you generate 55 game cards from an array of N pictures (where N is the minimum number from question 1)?
Update:
Pictures do occur more than twice per deck (contrary to what some have surmised). See this picture of 3 cards, each with a lightning bolt:
Here's Gajet's solution in Python, since I find Python more readable. I have modified it so that it works with non-prime numbers as well. I have used Thies insight to generate some more easily understood display code.
from __future__ import print_function from itertools import * def create_cards(p): for min_factor in range(2, 1 + int(p ** 0.5)): if p % min_factor == 0: break else: min_factor = p cards = [] for i in range(p): cards.append(set([i * p + j for j in range(p)] + [p * p])) for i in range(min_factor): for j in range(p): cards.append(set([k * p + (j + i * k) % p for k in range(p)] + [p * p + 1 + i])) cards.append(set([p * p + i for i in range(min_factor + 1)])) return cards, p * p + p + 1 def display_using_stars(cards, num_pictures): for pictures_for_card in cards: print("".join('*' if picture in pictures_for_card else ' ' for picture in range(num_pictures))) def check_cards(cards): for card, other_card in combinations(cards, 2): if len(card & other_card) != 1: print("Cards", sorted(card), "and", sorted(other_card), "have intersection", sorted(card & other_card)) cards, num_pictures = create_cards(7) display_using_stars(cards, num_pictures) check_cards(cards)
With output:
*** * *** * **** * * * * * * * * * * * * * * * * * ** * ** * * * * * * * * * * * * * * ****
I very much like this thread. I build this github python project with parts of this code here to draw custom cards as png (so one can order custom card games in the internet).
Others have described the general framework for the design (finite projective plane) and shown how to generate finite projective planes of prime order. I would just like to fill in some gaps.
Finite projective planes can be generated for many different orders, but they are most straightforward in the case of prime order
p. Then the integers modulo
p form a finite field which can be used to describe coordinates for the points and lines in the plane. There are 3 different kinds of coordinates for points:
(1,x,y),
(0,1,x), and
(0,0,1), where
x and
y can take on values from
0 to
p-1. The 3 different kinds of points explains the formula
p^2+p+1 for the number of points in the system. We can also describe lines with the same 3 different kinds of coordinates:
[1,x,y],
[0,1,x], and
[0,0,1].
We compute whether a point and line are incident by whether the dot product of their coordinates is equal to 0 mod
p. So for example the point
(1,2,5) and the line
[0,1,1] are incident when
p=7 since
1*0+2*1+5*1 = 7 == 0 mod 7, but the point
(1,3,3) and the line
[1,2,6] are not incident since
1*1+3*2+3*6 = 25 != 0 mod 7.
Translating into the language of cards and pictures, that means the card with coordinates
(1,2,5) contains the picture with coordinates
[0,1,1], but the card with coordinates
(1,3,3) does not contain the picture with coordinates
[1,2,6]. We can use this procedure to develop a complete list of cards and the pictures that they contain.
By the way, I think it's easier to think of pictures as points and cards as lines, but there's a duality in projective geometry between points and lines so it really doesn't matter. However, in what follows I will be using points for pictures and lines for cards.
The same construction works for any finite field. We know that there is a finite field of order
q if and only if
q=p^k, a prime power. The field is called
GF(p^k) which stands for "Galois field". The fields are not as easy to construct in the prime power case as they are in the prime case.
Fortunately, the hard work has already been done and implemented in free software, namely Sage. To get a projective plane design of order 4, for example, just type
print designs.ProjectiveGeometryDesign(2,1,GF(4,'z'))
and you'll obtain output that looks like
ProjectiveGeometryDesign<points=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], blocks=[[0, 1, 2, 3, 20], [0, 4, 8, 12, 16], [0, 5, 10, 15, 19], [0, 6, 11, 13, 17], [0, 7, 9, 14, 18], [1, 4, 11, 14, 19], [1, 5, 9, 13, 16], [1, 6, 8, 15, 18], [1, 7, 10, 12, 17], [2, 4, 9, 15, 17], [2, 5, 11, 12, 18], [2, 6, 10, 14, 16], [2, 7, 8, 13, 19], [3, 4, 10, 13, 18], [3, 5, 8, 14, 17], [3, 6, 9, 12, 19], [3, 7, 11, 15, 16], [4, 5, 6, 7, 20], [8, 9, 10, 11, 20], [12, 13, 14, 15, 20], [16, 17, 18, 19, 20]]>
I interpret the above as follows: there are 21 pictures labeled from 0 to 20. Each of the blocks (line in projective geometry) tells me which pictures appears on a card. For example, the first card will have pictures 0, 1, 2, 3, and 20; the second card will have pictures 0, 4, 8, 12, and 16; and so on.
The system of order 7 can be generated by
print designs.ProjectiveGeometryDesign(2,1,GF(7))
which generates the output
ProjectiveGeometryDesign], blocks=[[0, 1, 2, 3, 4, 5, 6, 56], [0, 7, 14, 21, 28, 35, 42, 49], [0, 8, 16, 24, 32, 40, 48, 50], [0, 9, 18, 27, 29, 38, 47, 51], [0, 10, 20, 23, 33, 36, 46, 52], [0, 11, 15, 26, 30, 41, 45, 53], [0, 12, 17, 22, 34, 39, 44, 54], [0, 13, 19, 25, 31, 37, 43, 55], [1, 7, 20, 26, 32, 38, 44, 55], [1, 8, 15, 22, 29, 36, 43, 49], [1, 9, 17, 25, 33, 41, 42, 50], [1, 10, 19, 21, 30, 39, 48, 51], [1, 11, 14, 24, 34, 37, 47, 52], [1, 12, 16, 27, 31, 35, 46, 53], [1, 13, 18, 23, 28, 40, 45, 54], [2, 7, 19, 24, 29, 41, 46, 54], [2, 8, 14, 27, 33, 39, 45, 55], [2, 9, 16, 23, 30, 37, 44, 49], [2, 10, 18, 26, 34, 35, 43, 50], [2, 11, 20, 22, 31, 40, 42, 51], [2, 12, 15, 25, 28, 38, 48, 52], [2, 13, 17, 21, 32, 36, 47, 53], [3, 7, 18, 22, 33, 37, 48, 53], [3, 8, 20, 25, 30, 35, 47, 54], [3, 9, 15, 21, 34, 40, 46, 55], [3, 10, 17, 24, 31, 38, 45, 49], [3, 11, 19, 27, 28, 36, 44, 50], [3, 12, 14, 23, 32, 41, 43, 51], [3, 13, 16, 26, 29, 39, 42, 52], [4, 7, 17, 27, 30, 40, 43, 52], [4, 8, 19, 23, 34, 38, 42, 53], [4, 9, 14, 26, 31, 36, 48, 54], [4, 10, 16, 22, 28, 41, 47, 55], [4, 11, 18, 25, 32, 39, 46, 49], [4, 12, 20, 21, 29, 37, 45, 50], [4, 13, 15, 24, 33, 35, 44, 51], [5, 7, 16, 25, 34, 36, 45, 51], [5, 8, 18, 21, 31, 41, 44, 52], [5, 9, 20, 24, 28, 39, 43, 53], [5, 10, 15, 27, 32, 37, 42, 54], [5, 11, 17, 23, 29, 35, 48, 55], [5, 12, 19, 26, 33, 40, 47, 49], [5, 13, 14, 22, 30, 38, 46, 50], [6, 7, 15, 23, 31, 39, 47, 50], [6, 8, 17, 26, 28, 37, 46, 51], [6, 9, 19, 22, 32, 35, 45, 52], [6, 10, 14, 25, 29, 40, 44, 53], [6, 11, 16, 21, 33, 38, 43, 54], [6, 12, 18, 24, 30, 36, 42, 55], [6, 13, 20, 27, 34, 41, 48, 49], [7, 8, 9, 10, 11, 12, 13, 56], [14, 15, 16, 17, 18, 19, 20, 56], [21, 22, 23, 24, 25, 26, 27, 56], [28, 29, 30, 31, 32, 33, 34, 56], [35, 36, 37, 38, 39, 40, 41, 56], [42, 43, 44, 45, 46, 47, 48, 56], [49, 50, 51, 52, 53, 54, 55, 56]]> | http://code.i-harness.com/en/q/5f3771 | CC-MAIN-2018-47 | refinedweb | 2,011 | 65.59 |
Spring - How do you set Enum keys in a Map with annotations
groovy enum map
spring util:map
java enum map to function
java treemap enum key
java map enum to string
mapping between two enums java
java enum to map
I've an Enum class
public enum MyEnum{ ABC; }
than my 'Mick' class has this property
private Map<MyEnum, OtherObj> myMap;
I've this spring xml configuration.
<util:map <entry key="ABC" value- </util:map> <bean id="mick" class="com.x.Mick"> <property name="myMap" ref="myMap" /> </bean>
and this is fine. I'd like to replace this xml configuration with Spring annotations. Do you have any idea on how to autowire the map?
The problem here is that if I switch from xml config to the @Autowired annotation (on the myMap attribute of the Mick class) Spring is throwing this exception
nested exception is org.springframework.beans.FatalBeanException: Key type [class com.MyEnum] of map [java.util.Map] must be assignable to [java.lang.String]
Spring is no more able to recognize the string ABC as a MyEnum.ABC object. Any idea?
Thanks
This worked for me...
My Spring application context:
<util:map <entry key="#{T(com.acme.MyEnum).ELEM1}" value="value1" /> <entry key="#{T(com.acme.MyEnum).ELEM2}" value="value2" /> </util:map>
My class where the
Map gets injected:
public class MyClass { private @Resource Map<MyEnum, String> myMap; }
The important things to note are that in the Spring context I used SpEL (Spring Expression Language) which is only available since version 3.0. And in my class I used
@Resource, neither
@Inject (it didn't work for me) nor
@Autowired (I didn't try this). The only difference I'm aware of between
@Resource and
@Autowired, is that the former auto-inject by bean name while the later does it by bean type.
Enjoy!
Spring - How do you set Enum keys in a Map with annotations, I've an Enum class public enum MyEnum{ ABC; }. than my 'Mick' class has this property private Map<MyEnum, OtherObj> myMap;. I've this spring xml I've an Enum class public enum MyEnum{ ABC; } than my 'Mick' class has this property private Map<MyEnum, OtherObj> myMap; I've this spring xml configuration.
Mapping Enum Keys With EnumMaps [Snippets], I've an Enum class public enum MyEnum{ ABC; } than my 'Mick' class has this property private Map myMap; I've this spring xml configuration. EnumMap.
Application context
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <bean id="myProvider" class="com.project.MapProvider"> <property name="myMap" ref="myMap"/> </bean> <util:map <entry> <key><value type="com.project.MyEnum">FOO</value></key> <ref bean="objectValue1"/> </entry> </util:map> </beans>
Java class
package com.project; public class MapProvider { private Map<MyEnum, ValueObject> myMap; public void setMyMap(Map<MyEnum, ValueObject> myMap) { this.myMap = myMap; } }
Java: Create a Map With Predefined Keys, An often forgotten part of the JDK, learn how and why you should consider in handy when we want to define maps with enum types as keys: An EnumMap is a specialized Map . We'll create a map for a given enum: spring.profiles.active=prod Frequently Used Annotations in Spring Boot Applications. As explained earlier, Hibernate maps the enum values to an int or a String. But PostgreSQL expects you to set the value as an Object. If you want to map your enum to PostgreSQL’s enum type, you need to implement a custom mapping. But don’t worry, if you extend Hibernate’s EnumType, you just need to override 1 method to set the value as an
Should be:
public class Mick { private Map<MyEnum, OtherObj> myMap; @Autowired public void setMyMap(Map<MyEnum, OtherObj> myMap) { this.myMap = myMap; } }
Have a look at
Updated
The problem is that according to the util schema, you cannot specify the key or value types. You can however to implement a MapFactoryBean of your own (just inherit from org.springframework.beans.factory.config.MapFactoryBean). One ceveat - notice that the generic definition (even thought erased in runtime) doesn't get in the way.
Spring Util Map Annotation with Enum key, We can use the Builder annotation of Lombok to get a cleaner syntax. In this option, we would use an enum to predefine the keys. Then, you'll configure a Spring Boot API app that allows you to register your favorite beers Do you mean setting up the enum itself? I don't think that's possible. You cannot instantiate enums because they have a static nature. So I think that Spring IoC can't create enums as well. On the other hand, if you need to set initialize something with a enum please check out the Spring IoC chapter. (search for enum) There's a simple example
The
<util:map> element has key-type, resp. value-type attributes, that represents the class of the keys, resp. the values. If you specify the fully qualified class of your enum in the key-type attribute, the keys are then parsed into that enum when creating the map.
Spring verifies during injection that the map's key and value types -as declared in the class containing the map- are assignment-compatible with the key and value types of the map bean. This is actually where you get the exception from.
Configuration Metadata - Project Metadata API Guide, I wanted create java.util.Map in spring context xml file which injects the Map to one of my service class, but i was struggling with this because my Map's key is an enum value, Finally I was able fix that issue as following, l'élément <util:map> a un type de clé, resp. attribut value-type, qui représente la classe des clés, resp. valeur. Si vous spécifiez la classe entièrement qualifiée de votre enum dans l'attribut key-type, les clés sont alors analysées dans cet enum lors de la création de la carte.
Beginning Spring, Generating Your Own Metadata by Using the Annotation Processor "value": "create", "description": "Create the schema and destroy previous data. If your property is of type Map , you can provide hints for both the keys and the values We recommend that you use an Enum for those two values instead. Instead, it makes more sense to compare with the actual values of an enum. However, because of the limitations of annotations, such an annotation cannot be made generic. This is because arguments for annotations can only be concrete values of a specific enum, not instances of the enum parent class.
How and when to use Enums and Annotations, you. learned. in. thiS. chaPter representing the primary key attribute of the persistent class Annotations used to map 1:1,1:M, Annotations used to map Date, Time, Timestamp, Enum, and byte[] Java types to corresponding is used to perform persistence operations Spring's FactoryBean implementations to create an
JPA and Enums via @Enumerated, This course is designed to help you make the most effective use of Enums could be treated as a special type of classes and annotations as The EnumSet< T > is the regular set optimized to store enums is very close to the regular map with the difference that its keys could Spring Interview Questions. This article walks you through the lean ways of creating a Map with a fixed (predefined) set of keys.. Option One: Using Lombok and Jackson. In this approach, we would create a simple class.
- It's not clear what you're trying to do. What sort of annotations are you thinking of?
- I'd like to use the @Autowired annotation but it's not working. Do I have to specify something else to tell Spring to treat that Key value as an Enum instead of a String?
- Remember to use '$' instead of '.' for separating an inner enum from an outer class in Spring EL.
- Just a follow up that
@Autowiredwill also yield
Key type [class com.foo.Bar$BAZ] of map [java.util.Map] must be assignable to [java.lang.String],
@Resourceis the winner. +1 @Amir,
$is a gotcha.
- Question said "I'd like to replace this xml configuration with Spring annotations." yet there's still config XML here. Does that mean you can't do this in Spring purely with annotations?
- @Jonik I think the replacement intended was the bean and resource injection part, not the map part. Otherwise, I'm not sure why this answer was selected.
- Thank you very much for this. Only @ Resource works. I tried @ Autowired and it did not work. Strange, but I'll just go with @ Resource
- The answer needs a bit more explanation of what you listed here.
- I've created map with enum as key and used it for setter argument. I thought it is obvious :)
- Hi David, I know about the @Autowired annotation. Here the problem is that if I autowire the map Spring is no more able to recognize the string ABC as a MyEnum.ABC object. With XML configuration it works fine, with annotations configuration it's throwing this Exception nested exception is org.springframework.beans.FatalBeanException: Key type [class com.MyEnum] of map [java.util.Map] must be assignable to [java.lang.String] | http://thetopsites.net/article/51858521.shtml | CC-MAIN-2020-50 | refinedweb | 1,528 | 63.8 |
Registry.PerformanceData Field
Contains performance information for software components. This field reads the Windows registry base key HKEY_PERFORMANCE_DATA.
Assembly: mscorlib (in mscorlib.dll)
Each software component creates keys for its objects, counters when it is installed, and writes counter data while it is executing. You can access this data as you would access any other registry data, using the RegistryKey functions.
Although you use the registry to collect performance data, the data is not stored in the registry database. Instead, accessing the registry with this key causes the system to collect the data from the appropriate system object managers.
To obtain performance data from the local system, use the GetValue method, with the Registry.PerformanceData key. The first call opens the key (you do not need to explicitly open the key first). However, be sure to use the Close method to close the handle to the key when you are finished obtaining performance data. The user cannot install or remove a software component while its performance data is in use.
To obtain performance data from a remote system, you must use the OpenRemoteBaseKey method, with the computer name of the remote system and the Registry.PerformanceData key. This call retrieves a key representing the performance data for the remote system. To retrieve the data, call GetValue using this key, rather than the Registry.PerformanceData key.. Note that this example can often return no results, since there might be no performance data.
using System; using Microsoft.Win32; class Reg { public static void Main() { // Create a RegistryKey, which will access the HKEY_PERFORMANCE_DATA // key in the registry of this machine. RegistryKey rk = Registry.PerformanceData; //. | https://msdn.microsoft.com/en-us/library/microsoft.win32.registry.performancedata(v=vs.90).aspx | CC-MAIN-2018-22 | refinedweb | 271 | 56.55 |
1) It's like polluting a tranditional program's variable space
with stuff the application did not explicitly cause -- it makes
debugging more difficult (and confusing if the results of the
Ant execution is published in a readonly format like a website).
2) The previous statement might seem trivial if you only use Ant
to run build scripts. However, I personally dig Ant because I
can use it to do other kinds of things. In particular, Ant is
the foundation script and launch harness for our test management
system. Being able to remove the "Ant fixture bits" from the
test configuration and other system under test bits (and this
includes Ant components) is really important to us. The more
kruft Ant spews into the "Ant fixture bits" the more difficult
it becomes for a QA person to pick out what's important when
something fails.
Unless we limit what Ant components the QA/Dev team can use
(we *really* don't want to do this), scrubbing what gets captured
as the "Ant fixture bits" becomes difficult.
Is there no way to remove the scoped properties once the target
and/or task container is finished?
OK, enuf whining.
----------------
The Wabbit
At 12:40 PM 10/8/2004, you wrote:
>Ok, here are my responses:
>
> > From: Dominique Devienne [mailto:DDevienne@lgc.com]
> >
>[SNIP]
> > 2) All these uniquely named properties go on living after
> > the macro has executed. That pollutes the namespace.
> >
>
>Yes it does. But I still have to see a good argument on why shall
>that bother anyone. Unless you are talking about millions of executions
>within one project context. You can always mitigate this in
>some very complex build by using <antcall/> as a way fence out
>chuncks of temporary properties. But I would like to see a good
>example in whch this pollution is a real problem.
>
>[SNIP]
>Jose Alberto
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org | http://mail-archives.apache.org/mod_mbox/ant-dev/200410.mbox/%3C6.1.1.1.2.20041008125937.02e92828@pop1.mail.com%3E | CC-MAIN-2014-42 | refinedweb | 330 | 63.49 |
even on older devices without having to code an action bar from scratch. This library deals with all the details you do not want to care about. Even Googlers like Roman Nurik or Nick Butcher recommend to use this library and the Google IO app also makes use of this lib. So should you!
With this post I start my series about the action bar by simply adding an action bar to the project of my last post, adding some action items to it and explaining some basics of action bars. The focus of this post is mainly on getting ActionBarSherlock to run on your project. As such it is an introductory post about the action bar.
Since the action bar offers many functions and is the most important navigational component Android has, more posts about it will follow in the next weeks.
Getting ActionBarSherlock and adding it to Eclipse
The first thing you have to do, is to download the library from actionbarsherlock.com. After this unzip/untar it. The download contains three folders:
library,
samples and
website. The
samples folder contains four sample projects to showcase what you can do and to show you how to do it. You should have a look at them. The
website folder contains the code of the project website. The
library folder finally contains ActionBarSherlock's source code.
Now go to Eclipse and add the ABS-library as an Android project. Do not use Eclipse's import tool to import the ActionBarSherlock library - it would not work immediately and you would have to fix some settings. Instead use the project creation wizard of the Android Developer Tools.
Open File -> New -> Project -> Android Project From Existing Code.
In the next screen select the folder, which Eclipse then uses to search for projects. If you select the ActionBarSherlock root folder, Eclipse suggests a list of projects to create. Leave the "library" project checked and uncheck all others:
Click "Finish" to create the project.
Eclipse will now create a new project named "library". I prefer a more useful name, so select the project called "library" and hit F2 to rename the project. I have renamed the project in Eclipse to "ActionBarSherlock", so all following screens will refer to this name.
Adding the library to your project
Now that ABS is a library project you have to tell your own project to use it. I will use the project of my last post for this. Go to FragmentStarter's project settings and switch to the Android tab.
If necessary scroll down until you see the Library panel. Click "Add":
In the next window all available library projects will be listed. Select ActionBarSherlock and click "Ok".
When the window disappears the library should be listed in the library panel of the Android properties tab:
What is slightly annoying is that the Android Developer Tools do not use the name of a project to reference it, but instead point to the directory itself. And what's even more annoying is that a symlink gets translated to its real path, which is bound to change more often than a symlink.
Should you ever want to change the directory, you have to delete the reference to the library with the old path and link to the newly imported library. But unless you do a fresh install for blog posts, this probably won't happen too often 🙂
That's it. Your project bundles the lib from now on.
But wait a moment! If you have a look at your project, you will notice that it now sports a red warning icon. Go to the error tab and you will see lots of warnings. Eclipse states for
Fragment,
FragmentActivity and all the other classes of the support library, that they "cannot be resolved to a type". Whoops!
The reason is, that ActionBarSherlock comes bundled with the library as well. And most of the time the support library added when following my post about fragments is different from the one bundled with ActionBarSherlock. In the console tab of Eclipse you see the message "Jar mismatch! Fix your dependencies" with the hashcodes of the mismatching files and where these files are stored:
[FragmentStarterProject] Found 2 versions of android-support-v4.jar in the dependency list, [FragmentStarterProject] but not all the versions are identical (check is based on SHA-1 only at this time). [FragmentStarterProject] All versions of the libraries must be the same at this time. [FragmentStarterProject] Versions found are: [FragmentStarterProject] Path: /opt/workspace/FragmentStarterProject/libs/android-support-v4.jar [FragmentStarterProject] Length: 385685 [FragmentStarterProject] SHA-1: 48c94ae70fa65718b382098237806a5909bb096e [FragmentStarterProject] Path: /opt/libs/ActionBarSherlock/library/libs/android-support-v4.jar [FragmentStarterProject] Length: 349252 [FragmentStarterProject] SHA-1: 612846c9857077a039b533718f72db3bc041d389 [FragmentStarterProject] Jar mismatch! Fix your dependencies
To fix this, simply delete the support library from you project. Go to the libs folder, select the
android-support-v4.jar file and delete it. You can still use the support library because it's also part of the ABS project.
If you need a newer version of the support library than the one bundled with ActionBarSherlock, remove the support library from both projects and add it anew to the ActionBarSherlock project.
With this done your project is good again. The next step, of course, is to change some code.
Changing the types to their ActionBarSherlock counterparts
Just adding the library won't magically add an action bar to your project. Instead you have to change some of your code.
The first thing you have to do, is to change the classes from which you inherit. Instead of
Activity use
SherlockActivity, instead of
FragmentActivity use
SherlockFragmentActivity, instead of
Fragment use
SherlockFragment and so on. Now do this for the two activities and the two fragments you created while reading the last post.
You will notice that the
ItemDetailActivity won't compile any more. Whenever you add ActionBarSherlock to an existing project, this is bound to happen. Why is that?
Have a look at the error message Eclipse is displaying: "
Cannot override the final method from SherlockFragmentActivity". The method is the following:
@Override public boolean onOptionsItemSelected(MenuItem item) { // ... }
ActionBarSherlock overrides every method of it's superclasses that takes either a
MenuItem or
MenuInflater object of the
android.view package as parameter and declares those methods as final. You need to know that all the items in an action bar are actually menu items. Thus only by doing it this way ActionBarSherlock can take control of all the menu-handling and change it for older Android versions.
While the error message might sound bad, it actually isn't. ActionBarSherlock provides
for every final method another method with the same name and the same number of arguments. Even the class names of the arguments are the same. Only the packages differ.
This way ActionBarSherlock makes migration of existing projects very easy. You can keep your methods and do not have to change and adjust any logic. Simply delete all
import statements to the menu-related classes of the package
android.view. After this hit Ctrl-O to optimize and correct the import statements and when Eclipse displays the list of imports to choose from, choose those of the ActionBarSherlock library:
If you do this for
ItemDetailActivity the warnings disappear. Whenever you use Eclipse (or any other IDE for that matter) to generate the import statements for you, take care to select the types of the ActionBarSherlock library. Those all start with
com.actionbarsherlock.
Now that your code looks fine, you should try to run the project. Whether this run is successful or not depends on the Android version of your device. If you run this code on a device with at least Android 4.0 everything runs fine. But not so on older devices. Here you will get a Force Close dialog due to an
IllegalStateException:
java.lang.IllegalStateException: You must use Theme.Sherlock, Theme.Sherlock.Light, Theme.Sherlock.Light.DarkActionBar, or a derivative.
Gladly the exception is pretty specific: You have to use one of the Sherlock themes to get your project to run. ActionBarSherlock needs many definitions to get the action bar styled correctly. Thus it needs these styles and you have to use the ABS themes. You can use your own theme, of course, but you must use one of Sherlock's themes as a base for yours. See the Sherlock theming guide for more information on this.
Since adding the action bar to older devices is the reason to use ActionBarSherlock in the first place, you have to change the theme. Change your
AndroidManifest.xml file to use a Sherlock theme:
<application android: <!-- ... --> </application>
Now finally the app runs fine. And yes, there is an ActionBar. As you can see it works fine on current as well as on older devices:
Adding items to the ActionBar
The action bar is basically what the old menu was before Honeycomb. This means that you have to create a menu to see an action bar.
Create a file with the name
activity_itemlist.xml in the folder
/res/menu and add the following code to it. This file describes the structure of your action bar:
<menu xmlns: <item android: </item> <item android: </item> </menu>
Note: I use icons from the icon pack "Android Developer Icons 2" of opoloo. The values I use for the
android:icon attributes are specific to this icon pack. As soon as you want to try this code, you have to prepare icons that match this code or change the attribute value to fit your icons. See the appendix for more on this.
Only the xml file won't suffice. You also need to override the
onCreateOptionsMenu() method to inflate this file:
@Override public boolean onCreateOptionsMenu(Menu menu) { MenuInflater inflater = getSupportMenuInflater(); inflater.inflate(R.menu.activity_itemlist, menu); return true; }
This is the result when you run the project again. Note the differences between the 2.1 device and the 4.2 device:
The overflow menu
When you compare the screenshots of the previous section, you quickly understand what the so-called overflow menu is. This is the menu hidden behind the three dots are the far end of the action bar on modern devices.
Android shows as many items as it can directly on the action bar. But especially on phones the place for action items is quite limited. Thus Android shows all those items, that do not fit on the action bar directly, in the overflow menu. You can also tell Android to put items in the overflow menu regardless of the space. You can do so by using the value
never for the attribute
android:showAsAction of these items. More on this attribute later on.
The items of the overflow menu are only visible if you click the three dots on the right of the action bar or press the menu button on those devices that have such a button.
The three dots appear only on devices without a menu button. That's the way to indicate to the user that more items are available and it's also the only way how users can access these items. On devices with a menu button on the other hand you do not have any indicator, but users can always check if an overflow menu is present by clicking the menu key. I actually think the solution for devices without menu button is way better than the solution on devices with a menu button. On these latter devices users always have to guess if an overflow menu is present or not. Alas Samsung, the vendor that sells by far the most Android devices, still ships devices with menu buttons 🙁
Sort your items by importance so that the most important actions are visible all the time. Those actions are the ones your users are most likely to perform on this screen. Think thoroughly about the importance of each item.
Other items should always be in the overflow menu, no matter how much space is left. For example if your app has an about screen, or some license info for the open source libs you use (well, ActionBarSherlock for example) or some information about what has changed with the recent version, I would put all those items into the overflow menu - no matter how much screen estate you have left.
The Android design page has some more guidance on this on their action bar pattern page and also shows you roughly how many items fit on the screen for some selected devices.
Use the android:icon as well as the android:title attribute
As you can see the action bar shows only an icon for the "Add Items" action. But that doesn't mean that you should neglect the strings. First of all these strings are used for screenreaders so that visually impaired people still can use your app. And secondly, if users do not understand an icon immediately, they can longpress those icons until a hint appears that displays the text. Of course this hint is no excuse for confusing icons!
Another thing to note. Overflow menu items show only text. But not so on older devices. There you see the icon and the text. So always provide values for
android:icon and for
android:title!
Use android:showAsAction to define the appearance of the action items
The
item element of the xml file for the action bar can contain four attributes that are only relevant for the action bar:
android:showAsAction,
android:actionLayout,
android:actionViewClass and
android:actionProviderClass. Of these I describe only the frst here, the other three are a topic for an extra post.
You can use the attribute
android:showAsAction to decide which icons to display and in which way. It can take the following five values:
As a rule of thumb you should always use
ifRoom, if you want the icon to be part of your action bar and you should use
never if you want the item to always be part of the overflow menu.
The Android documentation for menu resources explicitly urges you to use
always with great care. You should use this attribute only if the element is absolutely necessary. If you use this on multiple items and the place in the action bar is too small to do this properly Android displays overlapping icons. In this case the items are even less accessible than those of the overflow menu.
While you have to pick one of the first three, you can combine these with the value
withText. This value decides whether Android shows only the icon or the icon plus the text of the
android:title attribute. But Android does show the text only, if enough room is left.
As an example change the menu file to the following code:
<menu xmlns: <item android: </item> <item android: </item> <item android: </item> </menu>
Now these are the resulting screens depending on the Android version:
As you can see on phones Android still only displays the icons. Note what happens on small devices that have no menu button. Android had to put the second item into the overflow menu because it had not enough place for it. On devices with a menu button the second item would be directly visible in the action bar.
This is what the same app looks like on a tablet:
Now the
withText value makes a difference. While I like items to display text on tablets, it is very unusual. Have a look at other apps to see how they do it, before you decide whether to display text or not. But do not use text to gloss over weaknesses in your icon design.
Dealing with user input
So far you have created the action bar and made it look right. But when you click on an item, nothing happens.
What you have to do is to implement the
onOptionsItemSelected() method. You do this as you always did for menu item selections:
@Override public boolean onOptionsItemSelected(MenuItem item) { // Handle item selection switch (item.getItemId()) { case R.id.add_item: // do s.th. return true; case R.id.about: // do s.th. return true; default: return super.onOptionsItemSelected(item); } }
Action bar interaction is like any other user interaction and all the usual rules about the UI thread apply. So don't put any stuff in here that's long-lasting. Use
Loaders,
AsyncTasks,
IntentServices or whatever you prefer for the specific job to do the heavy lifting.
What's next?
In this first part of the ActionBar tutorial you have changed your project so that your activities and fragments now inherit from ActionBarSherlock's classes and then made it compile and run.
After this you added items to the action bar, learned about the overflow menu and about the ActionBar-related attributes of the menu xml files. But with this you barely scratched the surface of what the action bar covers.
According to the design guidelines the "action bar is one of the most important design elements you can implement". No wonder it features all sorts of possibilities. And, yes, it brings with it some complexity as well. So there is more to the action bar than just this post.
In the next post I am goig to cover the different navigation types the action bar supports. After this I will write yet another post about the so-called contextual action bar.
Appendix
As mentioned all icons that I use are icons of opoloo's Android Developers Icons 2. You have to either get this pack, create icons for yourself or download the "Action Bar Icon Pack" from the design site. The latter icon pack contains no "Add" icon, so you have to adapt the XML.
You can also use totally inappropriate icons and hassle with the design later on. For the sake of following this tutorial you could use something like
@android:drawable/ic_menu_add. But you should never use these for anything else than for just getting on with this tutorial.
You have to add the icons you want to use to the respective folders within your
/res folder. For details on how to name these folders see the Android documentation about providing resources. | https://www.grokkingandroid.com/adding-actionbarsherlock-to-your-project/ | CC-MAIN-2021-04 | refinedweb | 3,023 | 72.76 |
This C# section covers the list of available Date and time formatting options in C# programming language and a practical example.
C# DateTime
The C# DateTime is a struct available from System Namespace in the DotNet framework. Since DateTime is a struct, not a class, we know that structs are value types, i.e., DateTime can create as an object but not a reference. Since it is a value type, the variables or fields of DateTime cannot hold null values.
To store null values, they should convert to nullable types by using a question mark(?)
Ex: DateTime? dt = null
DateTime represents dates and times with a range of values from 00:00:00, January 1, 0001 through 11:59:59 P.M., December 31, 9999
Initializing the C# DateTime object
We can initialize the DateTime object in the following ways:
Call a constructor, either the default constructor or the one which will take arguments.
For example, a default constructor of DateTime looks like
DateTime dt = new DateTime();
Outputs 1/1/0001 12:00:00 AM
A DateTime constructor overloaded with year, month, day, hour, minute, second looks like:
DateTime dt = new DateTime(2018, 6, 12, 8, 20, 58);
By assigning a value
Assigning a date and time value returned by a property or method to the DateTime object looks like.
DateTime dt = DateTime.Now;
DateTime dt = DateTime.AddDays(20);
Parsing a string to a C# DateTime value
Parse, TryParse are used in general to parse a string to its equivalent date and Time value.
DateTime dt = DateTime.Parse(“1985, 01, 14”);
Formatting Date and Time in C#
In general, the C# Date and Time format string use a format specifier to define the text representation of its value.
To define a text representation of the date and time value, if a single format specifier is used, then it is said to be a standard date and time format string.
If a date and time format string has more than one character, including the white space, it calls as a custom format string.
The following table shows the different C# date and time format specifiers and their results.
The following table shows different custom C# Data and Time format strings and their results.
C# Date and Time Formatting Example
Let us see a C# code and demonstrate different properties and methods of DateTime struct by using its object.
Just take a windows form and add a button to it and write the following code in its button click event.
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; namespace WindowsFormsApp2 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { DateTime current_date1; DateTime current_date2; DateTime current_date3; current_date1 = DateTime.Now; MessageBox.Show(current_date1.ToString()); int day = current_date1.Day; MessageBox.Show(day.ToString()); current_date2 = DateTime.Today; MessageBox.Show(current_date2.ToString()); current_date3 = DateTime.Now.AddDays(30); MessageBox.Show(current_date3.ToString()); string custom = current_date1.ToString("dddd, dd MMMM yyyy HH:mm:ss"); MessageBox.Show(custom.ToString()); } } }
OUTPUT
Once you run this C# date and time format project, the following window will open
When you click that button1, message boxes will display with the dates.
current_date1, current_date2, current_date3 are the different objects created for DateTime.
Object current_date1 is used to display the current system date through the DateTime property.
DateTime.Now;
The current system date displays through the message box.
Same way DateTime.AddDays(30) will display date and time by adding 30 days to the current date.
Finally, we have tried to display the result of a custom format string ddd, dd MMMM yyyy HH:mm:ss. | https://www.tutorialgateway.org/csharp-date-and-time-formatting/ | CC-MAIN-2021-43 | refinedweb | 617 | 57.67 |
exitclean 1.0.0-beta1
A more respectful core.stdc.stdlib.exit
To use this package, put the following dependency into your project's dependencies section:
exitclean
For simple programs (and even some less simple), C's
exit function from
stdlib.h is very convenient. You can use it in D (
import core.stdc.stdlib;), but it's not going to run any destructors or terminate the runtime cleanly.
exitclean deals with this by throwing a custom exception, so all stacks are appropriately unwound. Use it like this:
import exitclean; void foo() { if (somethingsWrong) exit(1); }
You can then catch the
ExitException yourself in your
main, extract the exit code (available as a member
code of the exception) and return it.
In order to save effort,
exitclean also defines a mixin template to generate a main for you that deals with the
try/catch and
return automagically. Use like this:
void myMain() { // Your program here } mixin Main!myMain;
or like this:
mixin Main!((string[] args) { // Your program here });
All that's required is that the function you provide to
Main takes either nothing or
string[] and returns either
void or
int.
Options
There are 2
versions that affect the behaviour of
Main.
ShowExitLoc causes the function name, file and line where
exit was called from to be printed to
stderr before exiting.
ShowExitTrace causes a full stack trace to be printed to
stderr.
ShowExitLoc is enabled by default in
debug builds.
Caveats
Because we are using an exception,
exit will trigger any
scope(failure) clauses on the way up the stack.
exit will be blocked by any
catch(Exception) statement on the way up the stack.If this is a problem for you, I suggest forking this project and changing
ExitException to inherit from
Throwable instead of
Exception.
- Registered by John Colvin
- John-Colvin/exitclean
- BSL-1.0
Dependencies: none
0 downloads today
0 downloads this week
0 downloads this month
48 downloads total | http://code.dlang.org/packages/exitclean | CC-MAIN-2017-30 | refinedweb | 324 | 64.71 |
If the term you're looking for isn't on this page, then consult a dictionary or check the Common Errors in English Usage site.
Word list
- +1's, +1'ing, +1'ed
- 2-Step Verification
- When referring to Google's 2-Step Verification, use initial caps. If you're referring to generic 2-step verification, use lowercase.
- 3D (not 3 "useful" or "that you can act on." Don't use it in the legal sense without consulting a lawyer.
- action bar
- Don't use. Instead, use app bar.
- docs
- among
- See between versus among.
- AM, PM
- To be consistent with Material Design, use all-caps, no periods, and a space before.
- Recommended: 9:00 AM
- Recommended: 10:30 PM
- & (ampersand)
- In headings or text, don't use instead of "and"; however, it's OK to use in tables and UIs. And of course it's fine to use
&for technical purposes in code.
- and/or
- Sometimes "and/or" is the clearest and most efficient way to express something. It's worth considering whether there's a good way to write around it, but it's not worth rewriting so that the text is harder to understand.
- and so on
- Avoid using "and so on" whenever possible. For more information, see etc.
- Android (never "android")
- Android-powered device (not "Android device")
- key (not "developer key" or "dev key")
- APIs (not "API's")
-.
- APIs Explorer (not "API explorer" or other variants)
- APK (not ".apk")
- app (not "application")
- application
- Don't use. Instead, use "app."
- app bar (formerly "action bar")
- In general, use the word "authenticated" only to refer to users, and "authorized" only to refer to requests that are sent by a client application on behalf of an authenticated user. A user authenticates that they are who they say they are by entering their password (or giving some other proof of identity). The authenticated user then authorizes the client application to send an authorized request to the server on the user's behalf.
- authN, authZ
- Don't use. Instead, use "authentication" or "authorization."
- autopopulate (not "auto populate" or "auto-populate")
- autoupdate
- Don't use. Instead, use "automatically update."
- backend
- backoff (noun), back off (verb), back-off (adjective)
- backward compatible (not "backwards compatible")
- (all lowercase)
- applications.
-screen buttons.
-.
- click (not "click on")
- Use "click in" when referring to a region ("click in the window"), but not when referring to a control or a link.
- For Android apps, don't use "click". Instead, use "tap."
- clickthrough (noun), click through (verb)
- Don't use. For details and alternatives, see Link text.
- client
- In REST and RPC API documentation, "client" is short for "client application"—that is, the application that the developer is writing. Don't use "client" as an abbreviation for "client library"; instead, use "library."
- client ID
- client secret
- codebase (not "code base")
- codelab (not "code lab")
- combo box (noun), combo-box (adjective)
- command line (noun), command-line (adjective)
- compile time (noun), compile-time (adjective)
- contents (noun)
- In its singular form, "content" can be a noun, adjective, or verb. In its plural form, it's nearly always a noun. In our docs,)".
- cross-site request forgery
- data
- In our usage, "data" is singular, not plural. Say "the data is," not "the data are." Also, in our usage, data is a mass noun, not a count noun; for example, say "less data" rather than "fewer data."
- data center (not "datacenter")
-.
- double-tap
-" may be a better choice.
- emoji
- Use "emoji" for both singular and plural forms. See Don't know the difference between emoji and emoticons? Let me explain and What's the Plural of Emoji?
- enable, enabled
- Don't use. Instead, use "turn on" or "on."
- endpoint (not "end point")
- end user (noun), end-user (adjective)
- Also consider just "user".
- error prone (noun), error-prone (adjective)
- etc.
- Avoid both "etc." and "and so on" wherever possible, but if you really need to use one, use "etc." Always include the period, even if a comma follows immediately after.
- Not recommended: Your app may experience instability, high latency, and so on.
- Not recommended (but acceptable): Your app may experience instability, high latency, etc.
- Not recommended (but acceptable): If your app experiences instability, high latency, etc., follow these steps:
- Recommended: Your app may experience problems such as instability or high latency.
- expander arrow
- The UI element used to expand or collapse a section of navigation or content. We don't often refer to these explicitly in docs, but when we do, use the terms "expander arrow" and "expandable section" rather than terms like "expando" or "zippy."
- exploit
- Don't use to mean "use." Only use in the negative sense.
- filename (not "file name")
- following
- Recommended: ... in the following code sample ...
- frontend
- functionality
- On the one hand, everyone knows what this means. On the other hand, it's kinda jargony. So where possible, use terms like "capabilities" and "features" instead.
-."
- hackathon (not "hack-a-thon")
- hardcode (verb), hardcoded (adjective)
- hit
- Don't use as a synonym for "click."
- home screen
- hostname (not "host name")
- HTTPS (not "HTTPs")
-.
- internet
- Changed to lowercase in August 2017, in part because several other style guides have recently made this change.
- I/O (see also Google I/O)
- IoT
- OK to use as an abbreviation for "Internet of Things."
- jank
- Use with caution. Think about whether your audience will understand it.
- JPEG
- Don't use a filename extension to refer to a type of file. For example, use "JPEG file" rather than ".jpg file."
- key-value (not "key/value"), especially as in "key-value pair"
- kill
- Don't use. Instead, use words like "stop," "exit," "cancel," or "end."
- lead-in (noun)
- learnings
- Don't use.
- let's (as a contraction of "let us")
- Don't use if at all possible.
- Not recommended: Let's click the OK button now.
- lifecycle (not "life cycle" or "life-cycle")
- limits
- In an API context, usually refers to usage limits (number of queries allowed per second or per day). Best to use the term "usage limit" where possible, because "limit" can refer to many different kinds of limits, including rules about acceptable use. See also quota.
- lint
- Write both command-line tool name and command in lowercase. Use code font except where inappropriate.
- livestream (not "live stream")
- lock screen
- login (noun or adjective); log in (verb)
- But, for the verb form, "sign in" is better.
-."
- Material Design
- markup (noun); mark up (verb)
- No hyphen. As a verb, it's two words.
- media type
- In most contexts, use "media type" instead of "content type" or "MIME type."
- metadata (no hyphen)
- metafeed (no hyphen)
- meta
- Most words that start with "meta" don't have hyphens in them. For example, "metaprogramming" and "metalanguage" have no hyphens.
-."
- namespace (not "name space")
- neither
- Say "neither A nor B," not "neither A or B."
- notification drawer
- OAuth 2.0 (not "OAuth 2" or "OAuth2")
- OK or okay (not "ok" or "Okay")
- omnibox
- Don't use. Instead, use "address bar."
- once
- If you mean "after," then use "after" instead of "once."
- open source (no hyphen, not even as an adjective or verb)
- overview screen
- Don't use. Instead, use "recents screen."
- parameter
- In our API documentation, "parameter" is usually short for "query parameter"; it's a
name=valuepair that's appended to a URL in an HTTP
GETrequest. In some contexts, however, the term may have other meanings.
- parent-child or parent/child (not parent – child or parent—child)
- page
- Preferred term when referring to a web page in general, and to a sub-page of the API Console in particular.
-."
- preceding
- Recommended: ... in the preceding example ...
- precondition (not "pre-condition")
- predefined (not "pre-defined")
- prerecorded (not "pre-recorded")
- Use for keyboard actions such as pressing a key. Also use for mechanical buttons. Use "tap" for onscreen and soft (capacitive) buttons.
- property
- In our API documentation, a "property" is an element in a resource. For example, a Task resource has properties like
kind,
id, and
title.
- quota
- In API contexts, usually refers to API usage limits. Best to use the phrase "usage limit" instead (except in cases where "quota" appears in a UI), because the word "quota" means many different things to many different people.
- realtime (not "real time" or "real-time", whether or not used as a modifier)
- recents screen (not "overview screen")
-")
- screenshot (not "screen shot")
- Search (noun); search (verb)
-.
- Don't use either form on its own. Use the hyphenated version as part of "single sign-on."
- since
- If you mean "because," then use "because" instead of "since." "Since" deals with the passage of time and "because" deals with causation or the reason for something.
- simple
- single most (not "singlemost")
- single sign-on (noun or adjective)
-.
- status bar
- style sheet (not "stylesheet")
- This is the official spelling, per the World Wide Web Consortium (W3C).
- subclass (not sub-class; noun or verb)
- sub-element
- subtree (not "sub-tree")
- tab
- When referring to the sub-pages of the API Console, use "page" instead of "tab".
- tablet
- "Tablet" is OK. If you don't know whether it's a tablet or a phone, use "device."
- tap
- Use for onscreen and soft (capacitive) buttons. For mechanical buttons, use "press."
- Use instead of "touch." However, "touch & hold" (not "touch and hold") is OK to use. (Note the "&". It's OK to use in this case.)
- "tap & hold" or "tap and hold"
- Use "touch & hold" (not "touch and hold") instead. (Note the "&". It's OK to use in this case.)
- target
- Avoid using as a verb when possible, especially in reference to people. For some readers, may have aggressive connotations. Instead of "targeting" audiences, we should.
- they (singular)
- This is our preferred gender-neutral pronoun. Whether used as singular or plural, it always takes the plural verb. For example, "A user authenticates that they are who they say they are by entering their password." See also gender-neutral he.
- their (singular)
-.
- timestamp (not "time stamp")
- timeout (noun), time out (verb)
- time zone (noun), time-zone (adjective)
- touch
- Don't use. Instead, use "tap." However, "touch & hold" is OK to use.
- touchscreen (not "touch screen")
- typically
- Use to describe what is usual or expected under normal circumstances. Don't use as the first word in a sentence, as doing so can leave the meaning open to misinterpretation.
- Unix-like
- uncheck
- Don't use to refer to clearing a check mark from a checkbox. Instead, use "clear."
- Not recommended: Uncheck Automatically check for updates.
- Not recommended: Deselect Automatically check for updates.
- Recommended: Clear Automatically check for updates.
- unselect
- Don't use.
- URL
- All caps. Plural is "URLs."
- Write "a URL" rather than "an URL", because the most common pronunciation starts with a consonant sound. For more information, see a and an.
- user base (not "userbase")
- username (not "user name")
- vs.
- Don't use "vs." as an abbreviation for "versus"; instead, use the unabbreviated "versus."
- voila
- Don't use.
-."
- zip
- Don't use a filename extension to refer to a type of file. For example, use "zip file" rather than ".zip file."
- zippy
- Don't use to refer to expander arrows, unless you're specifically referring to the Zippy widget in Closure. | https://developers.google.com/style/word-list | CC-MAIN-2017-39 | refinedweb | 1,853 | 68.36 |
Grant Edwards wrote: > On 2009-08-14, Erik Max Francis <max at alcyone.com> wrote: >> Grant Edwards wrote: >>> On 2009-08-14, Steven D'Aprano <steve at REMOVE-THIS-cybersource.com.au> wrote: >>>> What the hell >>>> would it actually do??? >>> IIRC in C++, >>> >>> cout << "Hello world"; >>> >>> is equivalent to this in C: >>> >>> printf("Hellow world"); >>> >>> or this in Python: >>> >>> print "hellow world" >> Well, plus or minus newlines. > > And a few miscellaneous typos... ... and includes and namespaces :-). -- Erik Max Francis && max at alcyone.com && San Jose, CA, USA && 37 18 N 121 57 W && AIM/Y!M/Skype erikmaxfrancis It's hard to say what I want my legacy to be when I'm long gone. -- Aaliyah | https://mail.python.org/pipermail/python-list/2009-August/547822.html | CC-MAIN-2017-30 | refinedweb | 116 | 77.23 |
Changing Drive Letters and Labels via PowerShell
Thomas
Q: I want to change the drive letter and the drive label for a new USB drive. Is there a way with PowerShell?
A: Of course. One way is to use WMI and the CIM cmdlets.
PowerShell does not have a cmdlet to change the drive letter or the caption directly. But the good news is that you can use WMI and the CIM cmdlets to change both the drive letter and drive label. The Windows management tools have cmdlets (
Set-Partition and
Set-Volume) to change the drive letter and the caption directly. But it is also good to know how to do it via WMI and the CIM cmdlets to change both the drive letter and drive label. And under the covers, when you use
Set-Partition, you are actually using WMI. Both the Windows Storage and Windows Networking teams make heavy use of WMI and expose cmdlets via CDXML modules. The
*-Partition cmdlets are implemented by the CDXML Storage module.
WMI Classes, Class properties and Class Methods
WMI holds a huge amount of information about a Windows host in the form of WMI classes. Every IT professional should know about WMI.
WMI holds a hierarchical database of classes and class occurrences. These classes describe the hardware and software in your computer. This database is organized in to namespaces each which contains classes and, optionally, additional namespaces. You can use the CIM cmdlets to both retrieve and update this information.
For example, you can discover the drive letter and drive label for a drive from the Win32_Volume class. This class in in the rootCimV2 namespace.
Many WMI classes also contain methods that you can use to act on the WMI object. You can use the
Format() method of the Win32_Volume class to format a Windows volume.
To obtain the values of the properties of a WMI class, or to invoke a class method, you can use the WMI cmdlets, which shipped with Windows PowerShell V1. However, these cmdlets no longer ship with PowerShell 7. Of course, a determined IT Pro could find a way around that – but you don’t have to!
With PowerShell 7, you use the CIM cmdlets to access this information. The CIM cmdlets first shipped with Windows PowerShell V3 and represented a major overhaul in how IT Pros access WMI. The newer cmdlets do the same job as the WMI cmdlets but have different cmdlets, and different ways of working as you can see in this article.
Discovering WMI Class Properties
You use the cmdlet
Get-CimClass to discover the names (and type) of the properties of any given class. You can discover the properties of the Win32_Volume class like this:
Get-CimClass -ClassName Win32_Volume | Select-Object -ExpandProperty CimClassProperties | Sort-Object -Property Name | Format-Table Name, CimType, Qualifiers
The output from this commands looks like this:
Name CimType Qualifiers ---- ------- ---------- Access UInt16 {read} Automount Boolean {read} Availability UInt16 {MappingStrings, read, ValueMap} BlockSize UInt64 {MappingStrings, read} BootVolume Boolean {read} Capacity UInt64 {read} Caption String {MaxLen, read} Compressed Boolean {read} ConfigManagerErrorCode UInt32 {read, Schema, ValueMap} ConfigManagerUserConfig Boolean {read, Schema} CreationClassName String {CIM_Key, read} Description String {read} DeviceID String {CIM_Key, read, key, MappingStrings, Override} DirtyBitSet Boolean {read} DriveLetter String {read, write} DriveType UInt32 {MappingStrings, read} ErrorCleared Boolean {read} ErrorDescription String {read} ErrorMethodology String {read} FileSystem String {read} FreeSpace UInt64 {read} IndexingEnabled Boolean {read, write} InstallDate DateTime {MappingStrings, read} Label String {read, write} LastErrorCode UInt32 {read} MaximumFileNameLength UInt32 {read} Name String {read} NumberOfBlocks UInt64 {MappingStrings} PageFilePresent Boolean {read} PNPDeviceID String {read, Schema} PowerManagementCapabilities UInt16Array {read} PowerManagementSupported Boolean {read} Purpose String {read} QuotasEnabled Boolean {read} QuotasIncomplete Boolean {read} QuotasRebuilding Boolean {read} SerialNumber UInt32 {read} Status String {MaxLen, read, ValueMap} StatusInfo UInt16 {MappingStrings, read, ValueMap} SupportsDiskQuotas Boolean {read} SupportsFileBasedCompression Boolean {read} SystemCreationClassName String {CIM_Key, Propagated, read} SystemName String {CIM_Key, Propagated, read} SystemVolume Boolean {read}
In this list, you see each property of the Win32_Volume WMI class, the data type of the property and qualifiers. Qualifiers tell you more about the property – in particular whether a given property is read-only or read-write. The PageFilePresent property tells whether a given volume contains a Windows paging file. This property can not be changed using the CIM cmdlets. The DriveLetter and Label properties, on the other hand, are ones you can update. Let’s look at how you can change those properties.
Getting WMI properties
Suppose you want to change the volume label of a disk drive. In my host, the
M: drive contains a collection of digitised music and my collection of thousands of Grateful Dead live concerts. I have been collecting for a long time and have a disk deadicated [SIC] to the task. But sometimes, when I plug in my USB backup drives to perform a backup, Windows changes the drive letter for me. To ensure my backup scripts work, I need to change it back so my backup scripts work properly.
To obtain the value of the drive label and drive letter, you can do this:
$Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" $Drive | Select-Object -Property SystemName, Label, DriveLetter
On my Windows 10 host (Cookham24), the output looks like this:
PS C:> $Drive | Select-Object -Property SystemName, DriveLetter, Label, DriveLetter SystemName DriveLetter Label ---------- ----------- ----- COOKHAM24 M: Master GD
Changing Drive Label
You saw above that both the drive label and the drive letter are writable properties. To change the label for this disk volume, you assign a new value to the label property of
$Drive. Changing the property value updates the in-memory class instance which is not a permanent change. In order to persist the change, you need to use the
Set-CimInstance CMDLET. Here is how you can change the drive label, and then confirm the change:
$Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" $Drive | Set-CimInstance -Property @{Label='Grateful Dead'} Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" | Select-Object -Property SystemName, Label, DriveLetter
The output form this command, which shows the updated system label, looks like this
SystemName Label DriveLetter ---------- ----- ----------- COOKHAM24 Grateful Dead M:
Changing Drive Letter
To change the drive letter for a volume, you use
Set-CimInstance to change the drive letter, like this:
$Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" $Drive | Set-CimInstance -Property @{DriveLetter ='X:'}
If you are running PowerShell 7 in a non-elevated session, this operation fails like this:
PS C:Foo> $Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" PS C:Foo> $Drive | Set-CimInstance -Property @{DriveLetter ='X:'} Set-CimInstance: Access is denied.
This error is expected since you are not running PowerShell as an administrator. To overcome this error, re-run the command in an elevated session (run as administrator). Then your output looks like this:
PS C:Foo> $Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'M:'" PS C:Foo> $Drive | Set-CimInstance -Property @{DriveLetter ='X:'} PS C:Foo> Get-Volume | Where-Object FileSystemLabel -eq 'Grateful Dead' DriveLetter FriendlyName FileSystemType DriveType HealthStatus OperationalStatus SizeRemaining Size ----------- ------------ -------------- --------- ------------ ----------------- ------------- ---- X Grateful Dead NTFS Fixed Healthy OK 591.78 GB 3.64 TB
Changing the drive letter can take a while – so be patient.
And as a final point – you can combine the two property updates in a single call to
Set-CimInstance. To revert this drive to the old drive letter (
M:) and it’s Label (GD Master) and confirm the change, you can do it like this:
$Drive = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'X:'" $Drive | Set-CimInstance -Property @{DriveLetter = 'M:'; Label = 'GD Master'}
You can view the resulting change to drive letter and label using
Get-Volume. The output should look this:
PS C:Foo> Get-Volume | Where-Object FileSystemLabel -match 'GD Master' DriveLetter FriendlyName FileSystemType DriveType HealthStatus OperationalStatus SizeRemaining Size ----------- ------------ -------------- --------- ------------ ----------------- ------------- ---- M GD Master NTFS Fixed Healthy OK 591.78 GB 3.64 TB
Note
One issue you may encounter when you change a drive letter then revert it as shown here. It appears that Windows holds on to the old drive letter and does not allow you revert it back immediately. Thus you may get a “Set-CimInstance: not available” error message when trying to revert the drive letter. To get around this, you have to reboot Windows – it appears just logging off and back on is not adequate.
Summary
Changing drive letters using PowerShell 7 is simple and straightforward. As you can see, using the
Set-CimInstance PowerShell cmdlet to modify writable WMI properties is easy. I feel it’s more intuitive than making multiple property value assignments (once you you master the hash table). The cool thing is that multiple properties can be modified at one time instead of making multiple value assignments.
And as ever, this post shows there is often more than one way to achieve any aim.
Tip of the Hat
This article was inspired by an earlier Scripting Guys Blog post: Change drive letters and labels via a simple PowerShell command. That article was written by the most excellent Ed Wilson – thanks Ed!
Hmm, I was quite sure I could do that with PowerShell anyway, without digging into WMI. Power to those who dare do it, but I’ve always been intimidated a bit by WMI. This is great help, thanks. I’ll probably be back here often.
“PowerShell does not have a cmdlet to change the drive letter or the caption directly”
WRONG!
“Changing drive letters using PowerShell 7 is simple and straightforward.”
Sure it is when you do it properly:
To change drive letter:
To change drive name:
And that’s it!
With all due respect, this article is wrong on so many levels. It is a great example of overengineering. It requires programming knowledge of objects, properties, hashtables, and internal features of Windows OS. It uses workarounds instead of standard and common measures to achieve the same goal. It shows a lack of PowerShell usage fundamentals as even Get-Command volume / drive / partition would give you a clue to handle this properly.
And lastly, it will be an example for newcomers when they will compare PowerShell to other shells:
“If simple things like changing drive letter or label require FOUR pages of explanation and hackiery, what about complicated things? PowerShell can’t be good!”
Please always refresh your knowledge and get a second opinion before you publish a blog on official channels. Right now it is a guide for developers on how to use WMI. IMHO, putting this article into the PowerShell section is doing more harm than good. I would suggest moving it to ‘WMI’ section of some other ‘developer blogs’ section.
Thanks for the comment. I have updated the article although it’ll take a day or two to get the change online. You can read the PR here in the meantime.
The reason for wanting to introduce WMI here is that, under the covers, the Storage Module is based on WMI anyway. Since I think Windows PowerShell 4, you have been able to use CDXML files to define cmdlets based on WMI classes. The Networking and Storage teams inside Windows make heavy use of this technology to release their WMI classes as cmdlets. So when you use Set-Volume, you are actually updating a WMI class via Volume.cdxml. The Storage module, for example, has 28 CDXML files for storage-related classes/cmdlets. I had intended to make this clear and I thank you for catching this.
I note your other comments. If you have specific ideas for potential blog articles, please come over to GItHub and file an issue suggesting a topic. You can file an issue at. There is an issue template for a suggested article and I’m really happy to see any specific suggestions. Even better if you want to write it or help write it.
@Thomas I commend you on the generous response to @Aliens’ comments regarding this first blog post. I am also thankful for the efforts made towards kicking this initiative off regarding a community improved PowerShell blogging experience.
The interaction here goes to highlight beautiful differences that one can answer and solve a question one way while not wrong and another correct response to the same question can be provided in another way. Thus, teaching both old schoolers and new schoolers in the process. Though, I am still just getting started and will continue too until well, I get started. Is there a command for that? #Getting-Started 🙂
Thanks for updating the article. Now I understand the reason why you were so focused on WMI. You wanted to cover the lowest common denominator which works even for Windows 2008 R2, to cover the widest audience. My concerns were mainly about “How PS is viewed by newcomers”. Now I’m certain that it can be both ways: first, provide the most simple and straightforward way (notice that I didn’t even use | for the pipeline) so PS would appear neet. Then provide an alternative example to cover the widest audience of systems/solutions without necessarily focusing on simplicity. I will post this suggestion on GitHub for wider discussion.
Many organizations/enterprise, industry-wide is barely off PowerShell v2 and v3. I support customers globally, and I can count the number of customers using PowerShellv6/7 on one hand.
is only available after Windows PowerShell v5x as noted below
So, you are valid relative to what WPSv5x and PSv7 provide, it does not apply to 80-90% of the rest of the world, or even the USA.
So, though publishing this to a WMI specific forum is prudent, yet, Set-Partition should have been covered relative to v5/v7. It is just as important this info to be here for much of the world that is not on WPSv5x/PSv7 and who will not get there anytime soon.
We can/should never assume that anyone is running the latest and greatest of anything (industry is always 5-10 years or more behind the release cycles), and should always write, show, explain code the lowest common (organization/enterprise/industry) denominator to the lastest, to cover the widest audience.
Agree. Now when I have seen the whole context, I believe that we can have the best of both worlds: the most simple and straightforward way for newcomers and alternative examples to cover the widest audience of systems/solutions.
Thanks for the heads up!! | https://devblogs.microsoft.com/powershell-community/changing-drive-letters-and-labels-via-powershell/?WT.mc_id=DOP-MVP-4025064 | CC-MAIN-2021-39 | refinedweb | 2,381 | 59.94 |
It’s time for another Gleam release! This time as well as taking a look at what’s new in the compiler, we’re going to take a look at what’s new in the wider Gleam ecosystem too.
Record update syntax
Previously when we wanted to update some of the fields on the record we would need to create a new instance and manually copy across the fields that have not changed.
pub fn level_up(pokemon: Pokemon) { let new_level = pokemon.level + 1 let new_moves = moves_for(pokemon.species, new_level) Pokemon( level: new_level, moves: new_moves, name: pokemon.name, species: pokemon.species, item: pokemon.item, ) }
This is quite verbose! Worse, it’s error prone and annoying to type every time.
To remedy this problem Quinn Wilton has added the record update syntax to Gleam, so now we only need to specify the fields that change.
pub fn level_up(pokemon: Pokemon) { let new_level = pokemon.level + 1 let new_moves = moves_for(pokemon.species, new_level) Pokemon(..pokemon, level: new_level, moves: new_moves) }
All data in Gleam is immutable, so this syntax does not alter the values of the existing record, instead it creates a new record with the original values and the updated values.
Numbers
Tom Whatmore has added some new features to Gleam’s numbers.
There’s now syntaxes for binary, octal, and hexadecimal int literals.
// 4 ways of writing the int 15 let hexadecimal = 0xF let decimal = 15 let octal = 0o17 let binary = 0b00001111
These might be handy in many situations such as when implementing a virtual machine or a game of tetris in Gleam.
Underscores can now be added to numbers, making larger numbers easier to read.
let billion = 1_000_000_000 let trillion_and_change = 1_000_000_000_000.57
Type holes
Type holes can be used to give partial annotations to values.
Here we’re saying that
x is a List, but we’re not saying what type the list
contains, leaving the compiler to infer it.
let x: List(_) = run()
This may be useful when adding annotations for documentation or to use a more restrictive type than would be inferred by the compiler, as you can leave out any parts of the annotation that are not important to you there.
Compiler improvements
The majority of the compiler work in this release has been improvements to existing features.
The formatter style has been improved in several ways, and the performance of the formatter has been improved. One popular change is to how it formats assignments that don’t fit on a single line.
// Before assert Ok( tuple(user, session), ) = application_registry.get_session(request.user_id) // Now assert Ok(tuple(user, session)) = application_registry.get_session(request.user_id)
Much better!
Additionally several error messages have been improved to include more information on how to fix the problem, and a whole bunch of bug have been squashed. Please see the changelog for more detail.
HTTP
Outside of the compiler the big new thing in Gleam is the first release of Gleam HTTP. Inspired by existing libraries such as Ruby’s Rack, Elixir’s Raxx, and Haskell’s WAI, Gleam HTTP provides a single well-typed interface for creating web applications and clients while keeping the underlying HTTP client and server code modular.
In Gleam a HTTP service is a regular function that takes a HTTP request and returns a HTTP response, where request and response are types defined by the HTTP library.
Services
Here’s a simplistic service that echoes back any body sent to it.
import gleam/http.{Request} pub fn echo_service(request: Request(_)) { http.response(200) |> http.set_resp_body(request.body) }
As a normal Gleam function it is easy to test this service, no special test helpers or test server are required.
import gleam/should import my_app pub fn echo_test() { let response = http.default_req() |> http.set_req_body("Hello, world!") |> my_app.service response.status |> should.equal(200) response.body |> should.equal("Hello, world!") }
Gleam’s HTTP library also provides middleware, which can be used to add additional functionality to a HTTP service.
import gleam/bit_builder import gleam/http/middleware pub fn service() { my_app.echo_service // Add a header |> middleware.prepend_resp_header("made-with", "Gleam") // Convert the response body type |> middleware.map_resp_body(bit_builder.from_bit_string) // Support PATCH, PUT, and DELETE from browsers |> middleware.method_override }
Running a service
Once we have defined a service we want to run the service with a web server so that it can handle HTTP requests from the outside world. To do this we use a HTTP server adapter, such as one for the Elli Erlang web server.
import my_app import gleam/http/elli pub fn start() { elli.start(my_app.service, on_port: 4000) }
Or one for the Cowboy Erlang web server.
import my_app import gleam/http/cowboy pub fn start() { cowboy.start(my_app.service, on_port: 4000) }
Or even using the adapter for the Elixir’s Plug interface, so that a Gleam HTTP service can be mounted within an Elixir Phoenix web application.
defmodule MyAppWeb.UserController do use MyAppWeb, :controller def show(conn, params) do conn |> GleamPlug.call_service(params, &:my_app.service/1) end end
A list of all available server adapters can be found in the Gleam HTTP project’s README.
If you would like to see more examples of Gleam HTTP being used to create a web application see the echo server on GitHub. It includes all the above as well as other concepts such as routing and logging.
HTTP clients
HTTP services are only half the story! We also can use the HTTP library to make out own requests.
import gleam/http import gleam/httpc pub fn get_my_ip() { try response = http.default_req() |> http.set_method(http.Get) |> http.set_host("api.ipify.org") |> httpc.send Ok(response.body) }
Here we’ve build a HTTP request and sent it using the
httpc client adapter,
returning the response body if the request is successful.
A list of all available client adapters can be found in the Gleam HTTP project’s README.
Try it out.
Lastly, a huge thank you to the contributors to and sponsors of Gleam since last release!
- Adam Bowen
- Adam Mokan
- Ahmad Sattar
- Al Dee
- Arian Daneshvar
- Arno Dirlam
- Ben Myles
- Bruno Dusausoy
- Bruno Michel
- Bryan Paxton
- Chris Young
- Christian Meunier
- Christian Wesselhoeft
- Connor Schembor
- Dave Lucia
- David McKay
- delucks
- Eric Meadows-Jönsson
- Erik Terpstra
- Gary Rennie
- Graeme Coupar
- Guilherme Pasqualino
- Hasan YILDIZ
- Hendrik Richter
- Herdy Handoko
- Ingmar Gagen
- Ivar Vong
- James MacAulay
- Jechol Lee
- John Palgut
- José Valim
- Clever Bunny LTD
- Lars Lillo Ulvestad
- Lars Wikman
- Leandro Cesquini Pereira
- Mario Vellandi
- mario
- Markus
- Matthew Cheely
- Michael Jones
- Mike Roach
- Milad
- ontofractal
- Parker Selbert
- Peter Saxton
- Quinn Wilton
- Raphael Megzari
- Robin Mattheussen
- Sam Aaron
- Santi Lertsumran
- Sasan Hezarkhani
- Sascha Wolf
- Saša Jurić
- Sean Jensen-Grey
- Sebastian Porto
- Shane Sveller
- Shritesh Bhattarai
- Simone Vittori
- Tim Buchwaldt
- Tom Whatmore
- Tristan Sloughter
- Tyler Wilcock
- Wojtek Mach
Thanks for reading! Have fun! 💜 | https://gleam.run/news/gleam-v0.11-released/ | CC-MAIN-2022-33 | refinedweb | 1,124 | 62.98 |
Is there documentation for the FM tuner and Radio Data Service (RDS) API on Nokia's Windows phones?
Thanks!
Jeremy
Is there documentation for the FM tuner and Radio Data Service (RDS) API on Nokia's Windows phones?
Thanks!
Jeremy
Hi Holy Samosa,
Please have a look @
Note: This app requires access to the phone’s media library (ID_CAP_MEDIALIB capability).
sreerajvr
I tried to use this namespace Microsoft.Devices.Radio
FMRadio.Instance - to create FMRadio singleton instance
RadioPowerMode.On - to power on
FMRadio.Frequency - to set the frequency
FMRadio.CurrentRegion = RadioRegion.UnitedStates (Currently other supported regions are Europe and Japan)
Tested in Windows Phone emulator. Not very impressive. Do not know whether i have missed any thing.
Always get some music (constantly played)
US - 89.4,91.4,93.4,96.4,101.4,106.8 (MHz)
Europe - 108.1 (MHz)
Japan - 90.1 (MHz)
I am trying to make it more accurate.
How's the radio reception in the emulator? I'd imagine it might work better in an actual device with an actual FM radio receiver.
Hey Guys!
Thanks to the pointer to Microsoft's new FMRadio class. My search missed that one.
Unfortunately and surprisingly, it doesn't support Radio Data Service (RDS)-- which is a must for my application. I would expect that the Nokia phones would support RDS, so will Nokia be providing an API?
Thanks!
Jeremy
Although the built-in radio has RDS, it is n't exposed via public API
sreerajvr
Will there be a Nokia API? I could really need RDS info for one of my apps | http://developer.nokia.com/community/discussion/showthread.php/230583-API-for-FM-Radio?p=875064 | CC-MAIN-2014-35 | refinedweb | 265 | 60.61 |
I have been trying to write up a simple plugin that will expand the selection till a found text, something like:
- Code: Select all
def func1(arg1, arg2):
pass
def func_name2(arg1, arg2):
pass
and say you have multiple cursors right after "def" and you want to expand selection till the arg2, so you could run this search and it will expand the selection.
The problem is that I can't seem to get it to work with the standard incremental search dialog, only with the show_input_panel, which means no extra options (toggle regular expressions, backwards search).
Am I missing something or is this not currently supported? | http://www.sublimetext.com/forum/viewtopic.php?f=2&t=10940&start=0 | CC-MAIN-2014-35 | refinedweb | 107 | 53.17 |
Creates game servers, offline games or demo playbacks. More...
#include <gamehost.h>
Creates game servers, offline games or demo playbacks.
GameHost launches games from the perspective of the "server admin" or "game master" or "local player" (whatever is appropriate in given circumstances). It works in close union with GameCreateParams which basically tells GameHost how to configure the game. This structure can be accessed at any time with params() method.
Game launch command line is built by sequentially executing add*() methods. Each of these methods is allowed to set error and bail early by setting appropriate Message with setMessage() method. Reference to command line arguments list can be accessed with args(). This points to a QStringList that can be manipulated freely.
Some configuration settings are universal between Doom game engines. Default GameHost implementation will try to handle them, so plugins don't need to build everything from scratch. Sometimes different engines handle the same type of configuration using different argument. This can include operations like loading PWADs or dehacked files, or playing back a demo. You can customize this argument by setting args returned by arg*() methods. Each arg*() method has an equivalent setArg*() method. For example, argForPort() has setArgForPort(). The best place to call setArg*() methods and configure these properties is the constructor of the subclass defined in the plugin.
Configuration settings that aren't universal need to be added by overriding addExtra() method.
Definition at line 69 of file gamehost.h.
"Custom parameters" are specified directly by user in "Create Game" dialog box.
Definition at line 95 of file gamehost.cpp.
[Virtual] Creates engine specific command line parameters out of passed DM flags list.
Default behavior does nothing.
Creates engine specific command line parameters out of Server class fields.
Following settings are already set by default implementation of createHostCommandLine() and don't need any additional handling:
Definition at line 123 of file gamehost.cpp.
[Virtual] Loads IWAD.
[Virtual] Loads PWADs and other mod files (dehacked patches, pk3s, etc.)
Command line parameter that is used to load a BEX file.
Default: "-deh".
Definition at line 180 of file gamehost.cpp.
Command line parameter that is used to load a DEHACKED file.
Default: "-deh".
Definition at line 185 of file gamehost.cpp.
Command line parameter for playing back a demo.
Default: "-playdemo".
Definition at line 210 of file gamehost.cpp.
Command line parameter for recording a demo.
Default: "-record";
Definition at line 215 of file gamehost.cpp.
Command line parameter that is used to set IWAD.
Default: "-iwad".
Definition at line 190 of file gamehost.cpp.
Command line parameter that is used to load optional WADs.
Default: "-file".
Definition at line 195 of file gamehost.cpp.
Command line parameter that is used to set network port for the game.
Default: "-port".
Definition at line 200 of file gamehost.cpp.
Command line parameter that is used to load a PWAD.
Default: "-file".
Definition at line 205 of file gamehost.cpp.
Command line parameter used to launch a server.
No default.
Definition at line 220 of file gamehost.cpp.
Reference to command line arguments.
This is where plugins should write all CMD line arguments they create for the executable run.
Definition at line 225 of file gamehost.cpp.
Builds command line arguments sequentially by calling other methods.
This can be overridden if the default behavior does the completely wrong thing. In most cases however this method should be left untouched and appropriate add*() methods should be overridden instead, or appropriate arg*() properties should be configured.
Definition at line 230 of file gamehost.cpp.
Definition at line 266 of file gamehost.cpp.
Definition at line 284 of file gamehost.cpp.
GameCreateParams with which this game should be configured.
Definition at line 316 of file gamehost.cpp.
EnginePlugin that this GameHost is associated with.
Definition at line 321 of file gamehost.cpp.
Call this method to convey errors.
GameHost checks for errors before making certain steps. If plugins want to prevent execution of the game, they should set a Message instance that will return 'true' on Message::isError().
Definition at line 380 of file gamehost.cpp. | https://doomseeker.drdteam.org/docs/doomseeker_1.0/classGameHost.php | CC-MAIN-2021-25 | refinedweb | 681 | 60.72 |
node.js REST framework specifically meant for web service APIs
Hi! Noob here playing around with restify. I have a simple restify app, and I wonder why server.on('after' always fires an err, even though nothing seems amiss?
Err obj:
{"methods":["GET"],"name":"get","params":{},"spec":{"path":{},"method":"GET","versions":[],"name":"get"}}
Server.on after snip:
server.on('after', function(req, res, err){ var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress || req.socket.remoteAddress || req.connection.socket.remoteAddress; let reqStr = req.toString(), shortReqArr = reqStr.split('HTTP'), shortReq = shortReqArr[0]; logga(timeDate() + ' 200: ' + shortReq + ' ip: ' + ip); if (err) { console.log(timeDate() + ' server after error ' + JSON.stringify(err) + ' \n' ); } next(); });, I get this error:
{"code":"ResourceNotFound","message":"/docs/"}
filein
opts, it defaults to
directory+
req.path, which becomes
doc/docs
this is not the desired behavior, and doesn't seem to be the behavior described in the documentation
The above
routeand
directorycombination will serve a file located in
./documentation/v1/docs/current/index.htmlwhen you attempt to hit.
@ashishpai2 I've faced the same question three times already. There are many options to consider (which, in my book, is a bit unfortunate), and there are tradeoff's to be considered.
Between Restify and Express the choice is somewhat simple: if you only need API, Restify is the best choice. Restify endpoints can be mostly seamlessly ported to Express if you need to change later.
However, both Restify and Express are "unopinionated", meaning, they don't give you any suggestion of structure. You can have all endpoints in a single file for all they care. And so they won't help you add more structure. I'd go with them only on the simplest of projects, and knowing that I may either want to discard them later on or face a real challenge once things start to get more complex.
Alternatives? You should look at Loopback, which seems very mature. There are others which I haven't considered yet, as you can see here:.
server.get('/endpoint', callback);but is there a way to have all routes in a file, and in one go "attach" them to the
serverobject? like
const routes = require('/routes'); server.get('/endpoint2', callback); // a normal route server.routes.attach(routes); // attach all from that routes/index.js file
Dinoloop has been designed from the start for gradual adoption, and you can use as little or as much dinoloop as you need. Perhaps you only want to develop some "REST APIs" using dinoloop and other REST APIs can be developed using expressjs. In this section, we will show how to create dinoloop REST API to an existing express app.
Step 1: Add HomeController (file: home.controller.ts)
import { ApiController, Controller, HttpGet } from 'dinoloop';
@Controller('/home')
export class HomeController extends ApiController {
@HttpGet('/get') get(): string { return 'Hello World!'; }
}
Step 2: Mount dinoloop and bind to express instance (file: app.ts)
const app = express();
app.use(bodyParser.json());
// Dino requires express instance and base-uri to which dino will be mounted
const dino = new Dino(app, '/api');
// Dino requires express router too
dino.useRouter(() => express.Router());
// Register controller
dino.registerController(HomeController);
// Bind dino to express
dino.bind();
// These are your normal express endpoints
app.get('/home', (req, res, next) => {
res.status(200).json('Hello World!');
});
app.get('/about', (req, res, next) => {
res.status(200).json('Hello World!');
});
// Start your express app
app.listen(8088, () => console.log('Server started on port 8088'));
Dinoloop is mounted on /api and all of its controller routes/endpoints which are registered with dinoloop are also mounted on /api. Dinoloop will handle those requests which are mounted on /api i.e. /api/home/get, the other end points /home and /about which are created by expressjs are not handled by dinoloop, this way you can slowly migrate your existing express app to dinoloop or you can start writing your new REST APIs using dinoloop.
please find the reference: | https://gitter.im/restify/node-restify?at=591b031433e9ee771cb19843 | CC-MAIN-2021-31 | refinedweb | 654 | 59.3 |
in favor of putting everything in a vcl namespace, even if we
still need to use macros to define vcl_foo as vcl::foo. This would be
step in to direction of eventually using "vcl::" directly. In the
mean time any troublesome compiler can still define vcl_foo as
std::foo.
If Amitha finds some other unseen problem with this approach, I think
a good second choice is the use of the vcl namespace only for
conflicting symbols like "swap" (as suggested by Brad).
As for the other topics:
A. I have no problem with introducing new symbols from C++0x into vcl
without hesitation. However, I am concerned about where these new
symbols can be used in VXL. Maybe we need another
"not-in-core-for-now" restriction. We can't just start replacing
vbl_smart_ptr with vcl_shared_ptr if the shared_ptr is only supported
on a few platforms. Or are you suggesting that we can use them
anywhere as long as we provided an ifdef'ed non-C++0x alternative?
B. I agree that we shouldn't have an ENABLE_CXX0X option once we have
the vcl infrastructure for handling C++0x in place. When we have a
working solution for bringing together std and std::tr1, it can go.
C. I think vcl features will have to be implemented on a best effort
basis. After looking into shared_ptr I'm convinced that some of the
C++0x new features will be incredibly difficulty to write generic
versions of. Much of the new stuff is geared toward multi-threading
and requires a lot of low level system specific interaction. Even
classes like the shared_ptr have these dependencies so that they can
make guarantees about their operation in a multi-threaded environment.
We could write striped down versions that are not thread safe, but
that seems to defeat the purpose of moving to the C++0x shared_ptr.
Matt
On Thu, Apr 3, 2008 at 3:13 PM, Amitha Perera
<amithaperera@...> wrote:
> Brad King wrote:
>
> > Huh? I was just pointing out that we don't have to do the "using" trick
> > for every name right now. Just the ones that are in both "std::" and
> > "std::tr1::" need to be done that way.
> >
>
> Sorry, my brain was somehow thinking of your solution of explicitly
> defining vcl_swap functions.
>
> Yes, you are right: the using trick could be used just for the conflicting
> variables.
>
> However, given that vcl/generic and vcl/iso (IIRC) are generated from a
> perl script, it may be easier to do it for everything. Anyhow, I'll try it
> at some point "soon".
>
> Amitha.
>
View entire thread | https://sourceforge.net/p/vxl/mailman/message/19035268/ | CC-MAIN-2018-17 | refinedweb | 434 | 72.46 |
The whole resolution process may seem awfully convoluted and cumbersome to someone accustomed to simple searches through the host table. Actually, though, it's usually quite fast. One of the features that speeds it up considerably is caching.
A name server processing a recursive query may have to send out quite a few queries to find an answer. However, it discovers a lot of information about the domain namespace as it does so. Each time it's referred to another list of name servers, it learns that those name servers are authoritative for some zone, and it learns the addresses of those servers. At the end of the resolution process, when it finally finds the data the original querier sought, it can store that data for future reference, too. The Microsoft DNS Server even implements negative caching: if a name server responds to a query with an answer that says the domain name or data type in the query doesn't exist, the local name server will also temporarily cache that information.
Name servers cache all this data to help speed up successive queries. The next time a resolver queries the name server for data about a domain name the name server knows something about, the process is shortened quite a bit. The name server may have cached the answer, positive or negative, in which case it simply returns the answer to the resolver. Even if it doesn't have the answer cached, it may have learned the identities of the name servers that are authoritative for the zone the domain name is in and be able to query them directly.
For example, say our name server has already looked up the address of eecs.berkeley.edu. In the process, it cached the names and addresses of the eecs.berkeley.edu and berkeley.edu name servers (plus eecs.berkeley.edu's IP address). Now if a resolver were to query our name server for the address of baobab.cs.berkeley.edu, our name server could skip querying the root name servers. Recognizing that berkeley.edu is the closest ancestor of baobab.cs.berkeley.edu that it knows about, our name server would start by querying a berkeley.edu name server, as shown in Figure 2-16. On the other hand, if our name server discovered that there was no address for eecs.berkeley.edu, the next time it received a query for the address, it could simply respond appropriately from its cache.
In addition to speeding up resolution, caching obviates a name server's need to query the root name servers to answer any queries it can't answer locally. This means it's not as dependent on the roots, and the roots won't suffer as much from all its queries.
Name servers can't cache data forever, of course. If they did, changes to that data on the authoritative name servers would never reach the rest of the network; remote name servers would just continue to use cached data. Consequently, the administrator of the zone that contains the data decides on a time to live (TTL) for the data. The time to live is the amount of time that any name server is allowed to cache the data. After the time to live expires, the name server must discard the cached data and get new data from the authoritative name servers. This also applies to negatively cached data: a name server must time out a negative answer after a period in case new data has been added on the authoritative name servers.
Deciding on a time to live for your data is essentially deciding on a trade-off between performance and consistency. A small TTL will help ensure that data in your zones is consistent across the network, because remote name servers will time it out more quickly and be forced to query your authoritative name servers more often for new data. On the other hand, this will increase the load on your name servers and lengthen the average resolution time for information in your zones.
A large TTL reduces the average time it takes to resolve information in your zones because the data can be cached longer. The drawback is that your information will be inconsistent longer if you make changes to the data on your name servers.
But enough of this theory?I'll bet you're antsy to get on with things. You have some homework to do before you can set up your zones and your name servers, though, and we'll assign it in the next chapter. | http://etutorials.org/Server+Administration/dns+windows+server/Chapter+2.+How+Does+DNS+Work/2.7+Caching/ | CC-MAIN-2017-22 | refinedweb | 764 | 70.33 |
Give access to the real-time state of the sensors. More...
#include <Sensor.hpp>
Give access to the real-time state of the sensors.
sf::Sensor provides an interface to the state of the various sensors that a device provides.
It only contains static functions, so it's not meant to be instantiated.
This class allows users to query the sensors values at any time and directly, without having to deal with a window and its events. Compared to the SensorChanged event, sf::Sensor can retrieve the state of a sensor at any time (you don't need to store and update its current value on your side).
Depending on the OS and hardware of the device (phone, tablet, ...), some sensor types may not be available. You should always check the availability of a sensor before trying to read it, with the sf::Sensor::isAvailable function.
You may wonder why some sensor types look so similar, for example Accelerometer and Gravity / UserAcceleration. The first one is the raw measurement of the acceleration, and takes into account both the earth gravity and the user movement. The others are more precise: they provide these components separately, which is usually more useful. In fact they are not direct sensors, they are computed internally based on the raw acceleration and other sensors. This is exactly the same for Gyroscope vs Orientation.
Because sensors consume a non-negligible amount of current, they are all disabled by default. You must call sf::Sensor::setEnabled for each sensor in which you are interested.
Usage example:
Definition at line 42 of file Sensor.hpp.
Definition at line 50 of file Sensor.hpp.
Get the current sensor value.
Check if a sensor is available on the underlying platform.
Enable or disable a sensor.
All sensors are disabled by default, to avoid consuming too much battery power. Once a sensor is enabled, it starts sending events of the corresponding type.
This function does nothing if the sensor is unavailable. | https://en.sfml-dev.org/documentation/2.4.2/classsf_1_1Sensor.php | CC-MAIN-2019-04 | refinedweb | 329 | 58.18 |
An in-app debugging and exploration tool for iOS
FLEX
FLEX (Flipboard Explorer) is a set of in-app debugging and exploration tools for iOS development. When presented, FLEX shows a toolbar that lives in a window above your application. From this toolbar, you can view and modify nearly every piece of state in your running application.
Give Yourself Debugging Superpowers
-values.
Unlike many other debugging tools, FLEX runs entirely inside your app, so you don't need to be connected to LLDB/Xcode or a different remote debugging server. It works well in the simulator and on physical devices.
Usage
In the iOS simulator, you can use keyboard shortcuts to activate FLEX.
f will toggle the FLEX toolbar. Hit the
? key for a full list of shortcuts. You can also show FLEX programatically:
Short version:
// Objective-C [[FLEXManager sharedManager] showExplorer];
// Swift FLEXManager.shared().showExplorer()
More complete version:
#if DEBUG #import "FLEXManager.h" #endif ... - (void)handleSixFingerQuadrupleTap:(UITapGestureRecognizer *)tapRecognizer { #if DEBUG if (tapRecognizer.state == UIGestureRecognizerStateRecognized) { // This could also live in a handler for a keyboard shortcut, debug menu item, etc. [[FLEXManager sharedManager] showExplorer]; } #endif }
Feature Examples
Modify Views
Once a view is selected, you can tap on the info bar below the toolbar to present more details about the view. From there, you can modify properties and call methods.
Network History
When enabled, network debugging allows you to view all requests made using NSURLConnection or NSURLSession. Settings allow you to adjust what kind of response bodies get cached and the maximum size limit of the response cache. You can choose to have network debugging enabled automatically on app launch. This setting is persisted across launches.
All Objects on the Heap
FLEX queries malloc for all the live allocated memory blocks and searches for ones that look like objects. You can see everything from here.
Simulator Keyboard Shortcuts
Default keyboard shortcuts allow you to activate the FLEX tools, scroll with the arrow keys, and close modals using the escape key. You can also add custom keyboard shortcuts via
-[FLEXMananger registerSimulatorShortcutWithKey:modifiers:action:description]
File Browser
View the file system within your app's sandbox. FLEX shows file sizes, image previews, and pretty prints
.json and
.plist files. You can copy text and image files to the pasteboard if you want to inspect them outside of your app.
SQLite Browser
SQLite database files (with either
.db or
.sqlite extensions), or Realm database files can be explored using FLEX. The database browser lets you view all tables, and individual tables can be sorted by tapping column headers.
3D Touch in the Simulator
Using a combination of the command, control, and shift keys, you can simulate different levels of 3D touch pressure in the simulator. Each key contributes 1/3 of maximum possible force. Note that you need to move the touch slightly to get pressure updates.
System Library Exploration
Go digging for all things public and private. To learn more about a class, you can create an instance of it and explore its default state.
NSUserDefaults Editing
FLEX allows you to edit defaults that are any combination of strings, numbers, arrays, and dictionaries. The input is parsed as
JSON. If other kinds of objects are set for a defaults key (i.e.
NSDate), you can view them but not edit them.
Learning from Other Apps
The code injection is left as an exercise for the reader. :innocent:
Installation
FLEX requires an app that targets iOS 7 or higher.
CocoaPods
FLEX is available on CocoaPods. Simply add the following line to your podfile:
pod 'FLEX', '~> 2.0', :configurations => ['Debug']
Carthage
Add the following to your Cartfile:
github "flipboard/FLEX" ~> 2.0
Manual
Manually add the files in
Classes/ to your Xcode project.
Excluding FLEX from Release (App Store) Builds
FLEX makes it easy to explore the internals of your app, so it is not something you should expose to your users. Fortunately, it is easy to exclude FLEX files from Release builds. The strategies differ depending on how you integrated FLEX in your project, and are described below.
At the places in your code where you integrate FLEX, do a
#if DEBUG check to ensure the tool is only accessible in your
Debug builds and to avoid errors in your
Release builds. For more help with integrating FLEX, see the example project.
FLEX added with CocoaPods
CocoaPods automatically excludes FLEX from release builds if you only specify the Debug configuration for FLEX in your Podfile.
FLEX added with Carthage
If you are using Carthage, only including the
FLEX.framework in debug builds is easy:
Do NOT add
FLEX.frameworkto the embedded binaries of your target, as it would otherwise be included in all builds (therefore also in release ones).
Instead, add
$(PROJECT_DIR)/Carthage/Build/iOSto your target Framework Search Paths (this setting might already be present if you already included other frameworks with Carthage). This makes it possible to import the FLEX framework from your source files. It does not harm if this setting is added for all configurations, but it should at least be added for the debug one.
Add a Run Script Phase to your target (inserting it after the existing
Link Binary with Librariesphase, for example), and which will embed
FLEX.frameworkin debug builds only:
if [ "$CONFIGURATION" == "Debug" ]; then /usr/local/bin/carthage copy-frameworks fi
Finally, add
$(SRCROOT)/Carthage/Build/iOS/FLEX.frameworkas input file of this script phase.
FLEX files added manually to a project
In Xcode, navigate to the "Build Settings" tab of your project. Click the plus and select
Add User-Defined Setting.
Name the setting
EXCLUDED_SOURCE_FILE_NAMES. For your
Release configuration, set the value to
FLEX*. This will exclude all files with the prefix FLEX from compilation. Leave the value blank for your
Debug configuration.
Additional Notes
- When setting fields of type
idor values in
NSUserDefaults, FLEX attempts to parse the input string as
JSON. This allows you to use a combination of strings, numbers, arrays, and dictionaries. If you want to set a string value, it must be wrapped in quotes. For ivars or properties that are explicitly typed as
NSStrings, quotes are not required.
- You may want to disable the exception breakpoint while using FLEX. Certain functions that FLEX uses throw exceptions when they get input they can't handle (i.e.
NSGetSizeAndAlignment()). FLEX catches these to avoid crashing, but your breakpoint will get hit if it is active. | https://iosexample.com/an-in-app-debugging-and-exploration-tool-for-ios/ | CC-MAIN-2019-09 | refinedweb | 1,063 | 56.86 |
Fear Your Coding Interview? This article shows you how to make your coding interview a success.
General Tips to Prepare Your Interview
- Watch Google Interview tips.
- Read Prof. Philip Guo’s tips.
- Practice coding in Google Docs. Don’t use a code highlighting editor for your training time.
- Solve at least 50+ code puzzles.
- And most importantly: Don’t panic.
Watch the following Instagram post and learn about popular Python interview questions (swipe left, swipe right):
Sieh dir diesen Beitrag auf Instagram an
[CHALLENGE] How many of the three questions can you answer? . . . #coffeebreakpython #python #python3 #pythonprogramming #pythoncoding #pythoncode #learning #computerscience #coding #datascience #puzzles #improve #learncomputerscience #programmers #pythonlearning #learncomputerscience #learningpython #codinglife #datascientist #it #developer #development #onlinelearning #programinglanguage #programing #education #pythonista #code #brainfood
Which Programming Questions Should You Prepare?
By reading this article, you will learn about these 15 popular interview questions. Feel free to jump ahead to any question that interests you most.
- Question 1: Get the missing number from an integer list 1-100.
- Question 2: Find duplicate number in integer list.
- Question 3: Check if a list contains an integer x.
- Question 4: Find the largest and the smallest number in an unsorted list.
- Question 5: Find pairs of integers in a list so that their sum is equal to the integer x.
- Question 6: Remove all duplicates from an integer list.
- Question 7: Sort a list with the Quicksort algorithm.
- Question 8: Sort a list with the Mergesort algorithm.
- Question 9: Check if two strings are anagrams.
- Question 10: Compute the intersection of two lists.
- Question 11: Reverse string using recursion.
- Question 12: Find all permutations of a string.
- Question 13: Check if a string is a palindrome.
- Question 14: Compute the first n Fibonacci numbers.
- Question 15: Use list as stack, array, and queue.
- Question 16: Search a sorted list in O(log n).. You will simply become a better Python coder on autopilot.
Question 1: Get the missing number from an integer list 1-100.
[python]
def get_missing_number(l):
nxt = 1
while nxt < len(l): if nxt != l[nxt-1]: return nxt nxt = nxt + 1 [/python]
There are many other ways to solve this problem (and more concise ones). For example, you can create a set of numbers from 1 to 100 and remove all elements in the list l. This is an elegant solution as it returns not one but all numbers that are missing in the sequence. Here is this solution:
[python]
set(range(l[len(l)-1])[1:]) – set(l)
[/python]
An alternative solution is the following:
lst = list(range(1, 101)) lst.remove(55) total = sum(range(max(lst) + 1)) print(total - sum(lst))
Question 2: Find duplicate number in integer list.
Say we have a list of integer numbers called elements. The goal is to create a function that finds ALL integer elements in that list that are duplicated, i.e., that exist at least two times in the list. For example, when applying our function to the list elements=[2, 2, 3, 4, 3], it returns a new list [2, 3] as integer elements 2 and 3 are duplicated in the list elements. In an interview, before even starting out with “programming on paper”, you should always ask the interviewer back with concrete examples to show that you have understood the question.
So let’s start coding. Here is my first attempt:
[python]]
[/python]
Note that the runtime complexity is pretty good. We iterate over all elements once in the main loop. The body of the main loop has constant runtime because I have selected a set for both variables “duplicates” and “seen”. Checking whether an element is in a set, as well as adding an element to the set has constant runtime (O(1)). Hence, the total runtime complexity is linear in the input size.
Question 3: Check if a list contains an integer x.
This is a very easy problem. I don’t know why an interviewer would ask such simple questions – maybe it’s the first “warm-up” question to make the interviewed person feel more comfortable. Still, many people reported that this was one of their interview questions.
To check whether a Python list contains an element x in Python, could be done by iterating over the whole list and checking whether the element is equal to the current iteration element. In fact, this would be my choice as well, if the list elements were complex objects that are not hashable.
However, the easy path is often the best one. The interview question explicitly asks for containment of an integer value x. As integer values are hashable, you can simply use the Python “in” keyword as follows.
[python]
l = [3, 3, 4, 5, 2, 111, 5]
print(111 in l)
# True
[/python]
Question 4: Find the largest and the smallest number in an unsorted list.
Again, this question is a simple question that shows your proficient use with the basic Python keywords. Remember: you don’t have a fancy editor with source code highlighting! Thus, if you do not train coding in Google Docs, this may be a serious hurdle. Even worse: the problem is in fact easy but if you fail to solve it, you will instantly fail the interview! NEVER UNDERESTIMATE ANY PROBLEM IN CODING!
Here is a simple solution for Python:
[python]
l = [4, 3, 6, 3, 4, 888, 1, -11, 22, 3]
print(max(l))
# 888
print(min(l))
# -11
[/python]
It feels like cheating, doesn’t it? But note that we didn’t even use a library to solve this interview question. Of course, you could also do something like this:
[python]
def find_max(l):
maxi = l[0]
for element in l:
if element > maxi:
maxi = element
return maxi
l = [4, 3, 6, 3, 4, 888, 1, -11, 22, 3]
print(max(l))
# 888
[/python]
Which version do you prefer?
Question 5: Find pairs of integers in a list so that their sum is equal to the integer x.
This problem is interesting. The straightforward solution is to use two nested for loops and check for each combination of elements whether their sum is equal to integer x. Here is what I mean:
[python]))
[/python]
Fail! It throws an exception: “AttributeError: ‘list’ object has no attribute ‘add’”
This is what I meant: it’s easy to underestimate the difficulty level of the puzzles, only to learn that you did a careless mistake again. So the corrected solution is this:
[python]))
[/python]
Now it depends whether your interviewer will accept this answer. The reason is that you have a lot of duplicated pairs. If he asked you to remove them, you could simply do a post-processing by removing all the duplicates from the list.
Actually, this is a common interview question as well (see next question).
Here is another beautiful one-liner solution submitted by one of our readers:
[python]
# Solution from user Martin
l = [4, 3, 6, 4, 888, 1, -11, 22, 3]
match = 9
res = set([(x, match – x) for e, x in enumerate(l) if x <= match / 2 and match - x in l[:e] + l[e+1:]]) print(res) [/python]
Question 6: Remove all duplicates from an integer list.
Given a list, the goal is to remove all elements, which exist more than once in the list. Note that you should be careful not to remove elements while iterating over a list.
Wrong example of modifying a list while iterating over it (don’t try this at home):
[python]
lst = list(range(10))
for element in lst:
if element>=5:
lst.remove(element)
print(lst)
# [0, 1, 2, 3, 4, 6, 8]
[/python]
As you can see, modifying the sequence over which you iterate causes unspecified behavior. After it removes the element 5 from the list, the iterator increases the index to 6. The iterator assumes this is the next element in the list. However, that’s not the case. As we have removed the element 5, element 6 is now at position 5. The iterator simply ignores the element. Hence, you get this unexpected semantics.
Yet, there is a much better way how to remove duplicates in Python. You have to know that sets in Python allow only a single instance of an element. So after converting the list to a set, all duplicates will be removed by Python. In contrast of the naive approach (checking all pairs of elements whether they are duplicates), this method has linear runtime complexity. The reason is that creation of a set is linear in the number of set elements. Now, we simply have to convert the set back to a list and voilà, the duplicates are removed.
[python])]
[/python]
Question 7: Sort a list with the Quicksort algorithm.
This is a difficult problem to solve during a coding interview. In my opinion, most software developers are not able to write the Quicksort algorithm correctly in a Google document. Still, we will do it, won’t we?
The main idea of Quicksort is to select a pivot element and then placing all elements that are larger or equal than the pivot element to the right and all elements that are smaller than the pivot element to the left. Now, you have divided the big problem of sorting the list into two smaller subproblems: sorting the right and the left partition of the list. What you do now is to repeat this procedure recursively until you obtain a list with zero elements. This list is already sorted, so the recursion terminates. Here is the quicksort algorithm as a Python one-liner:
[python]
def qsort(L):
if L == []: return []
return qsort([x for x in L[1:] if x< L[0]]) + L[0:1] + qsort([x for x in L[1:] if x>=L[0]])
lst = [44, 33, 22, 5, 77, 55, 999]
print(qsort(lst))
# [5, 22, 33, 44, 55, 77, 999]
[/python]
Question 8: Sort a list with the Mergesort algorithm.
It can be quite difficult to code the Mergesort algorithm under emotional and time pressure. So take your time now understanding it properly.
The idea is to break up the list into two sublists. For each of the sublist, you now call merge sort in a recursive manner. Assuming that both lists are sorted, you now merge the two sorted lists. Note that it is very efficient to merge two sorted lists: it takes only linear time in the size of the list.
Here is the algorithm solving this problem.
[python]] [/python]
Question 9: Check if two strings are anagrams.
You can find this interview question at so many different places online. It is one of the most popular interview questions.
The reason is that most students who have pursued an academic education in computer science, know exactly what to do here. It serves as a filter, a secret language, that immediately reveals whether you are in or out of this community.
In fact, it is nothing more. Checking for anagrams has little to no practical applicability. But it’s fun, I have to admit!
So what are anagrams? Two words are anagrams if they consist of exactly the same characters. Wikipedia defines it a bit more precisely: “An anagram is a word or phrase formed by rearranging the letters of a different word or phrase, typically using all the original letters exactly once”.
Here are a few examples:
- “listen” → “silence”
- “funeral” → “real fun”
- “elvis” → “lives”
Ok, now you know exactly what to do, right? So let’s start coding.
[python]
[/python]
As you can see, the program solves the problem efficiently and correctly. But this was not my first attempt. I suffered the old weakness of programmers: starting to code to early. I used a hands-on approach and created a recursive function is_anagram(s1, s2). I used the observation that s1 and s2 are anagrams iff (1) they have two equal characters and (2) they are still anagrams if we remove these two characters (the smaller problem). While this solution worked out, it also sucked out 10 minutes of my time.
While thinking about the problem, it struck me: why not simply sort the two strings? Two strings are anagrams if they have the same sorted character sequence. It’s that easy.
I am sure, without looking it up, that sorting the strings and comparing the sorted representations (as done in the code) is the cleanest solution to this problem.
Question 10: Compute the intersection of two lists.
This problem seems to be easy (be careful!). Of course, if you have some library knowledge (like numpy), you could solve this problem with a single function call. For example, Python’s library for linear algebra (numpy) has an implementation of the intersection function. Yet, we assume that we do NOT have any library knowledge in the coding interview (it’s a much safer bet).
The intersection function takes two lists as input and returns a new list that contains all elements that exist in both lists.
Here is an example of what we want to do:
- intersect([1, 2, 3], [2, 3]) → [2, 3]
- intersect([“hi”, “my”, “name”, “is”, “slim”, “shady”], [“i”, “like”, “slim”]) → [“slim”]
- intersect([3, 3, 3], [3, 3]) → [3, 3]
You can use the following code to do this.
[python]]
[/python]
So, we got the semantics right which should be enough to pass the interview. The code is correct and it ensures that the original list is not touched.
But is it really the most concise version? I don’t think so! My first idea was to use sets again on which we can perform operations such as set intersection. But when using sets, we lose the information about duplicated entries in the list. So a simple solution in this direction is not in sight.
Then, I was thinking about list comprehension. Can we do something on these lines? The first idea is to use list comprehension like this:
[python]
def intersect(lst1, lst2):
lst2_copy = lst2[:]
return [x for x in lst1 if lst2.remove(x)]
[/python]
However, do you see the problem with this approach?
The problem is that intersect([4, 4, 3], [4, 2]) returns [4, 4]. This is a clear mistake! It’s not easy to see – I have found many online resources which simply ignore this problem…
The number 4 exists twice in the first list but if you check “4 in [4, 2]”, it returns True – no matter how often you check. That’s why we need to remove the integer number 4 from the second list after finding it the first time.
This is exactly what I did in the above code. If you have any idea how to solve this with list comprehension, please let me know (admin@finxter.com)! 🙂
Question 11: Reverse string using recursion
Now let’s move on to the next problem: reversing a string using recursion.
Here is what we want to achieve:
- “hello” → “olleh”
- “no” → “on”
- “yes we can” → “nac ew sey”
There is a restriction on your solution: you have to use recursion. Roughly speaking, the function should call itself on a smaller problem instance.
Wikipedia explains recursion in an understandable way:
“Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem.”
Clearly, the following strategy would solve the problem in a recursive way. First, you take the first element of a string and move it to the end. Second, you take the rest of the string and recursively repeat this procedure until only a single character is left.
Here is the code:
[python]
def reverse(string):
if len(string)<=1: return string else: return reverse(string[1:]) + string[0] phrase1 = "hello" phrase2 = "no" phrase3 = "yes we can" print(reverse(phrase1)) # olleh print(reverse(phrase2)) # on print(reverse(phrase3)) # nac ew sey [/python]
The program does exactly what I described earlier: moving the first element to the end and calling the function recursively on the remaining string.
Question 12: Find all permutations of a string
This is a common problem of many coding interviews. Similar to the anagrams problem presented in the question above, the purpose of this question is twofold. First, the interviewers check your creativity and ability to solve algorithmic problems. Second, they check your pre-knowledge of computer science terminology.
What is a permutation? You get a permutation from a string by reordering its characters. Let’s go back to the anagram problem. Two anagrams are permutations from each other as you can construct the one from the other by reordering characters.
Here are all permutations from a few example strings:
- ‘hello’ → {‘olhel’, ‘olhle’, ‘hoell’, ‘ellho’, ‘lhoel’, ‘ollhe’, ‘hlleo’, ‘lhloe’, ‘hello’, ‘lhelo’, ‘hlelo’, ‘eohll’, ‘oellh’, ‘hlole’, ‘lhole’, ‘lehlo’, ‘ohlel’, ‘oehll’, ‘lleoh’, ‘olleh’, ‘lloeh’, ‘elhol’, ‘leolh’, ‘ehllo’, ‘lohle’, ‘eolhl’, ‘llheo’, ‘elhlo’, ‘ohlle’, ‘lohel’, ‘elohl’, ‘helol’, ‘loehl’, ‘lheol’, ‘holle’, ‘elloh’, ‘llhoe’, ‘eollh’, ‘olehl’, ‘lhleo’, ‘loleh’, ‘ohell’, ‘leohl’, ‘lelho’, ‘olelh’, ‘heoll’, ‘ehlol’, ‘loelh’, ‘llohe’, ‘lehol’, ‘holel’, ‘hleol’, ‘leloh’, ‘elolh’, ‘oelhl’, ‘hloel’, ‘lleho’, ‘eholl’, ‘hlloe’, ‘lolhe’}
- ‘hi’ → {‘hi’, ‘ih’}
- ‘bye’ → {‘bye’, ‘ybe’, ‘bey’, ‘yeb’, ‘eby’, ‘eyb’}
Conceptually, you can think about a string as a bucket of characters. Let’s say the string has length n. In this case, you have n positions to fill from the bucket of n characters. Having filled all n positions, you obtain a permutation from the string. You want to find ALL such permutations.
My first idea is to solve this problem recursively. Suppose, we already know all permutations from a string with n characters. Now, we want to find all permutations with n+1 characters by adding a character x. We obtain all such permutations by inserting x into each position of an existing permutation. We repeat this for all existing permutations.
However, as a rule-of-thumb: avoid overcomplicating the problem in a coding interview at all costs! Don’t try to be fancy! (And don’t use recursion – that is a logical conclusion from the previous statements…)
So is there an easier iterative solution? Unfortunately, I could not find a simple iterative solution (there is the Johnson-Trotter algorithm but this is hardly a solution to present at a coding interview).
Thus, I went back to implement the recursive solution described above. (*Teeth-gnashingly*)
[python]")) # {'nna', 'ann', 'nan'} # {'olhel', 'olhle', 'hoell', 'ellho', 'lhoel', 'ollhe', 'hlleo', 'lhloe', 'hello', 'lhelo', 'hlelo', 'eohll', 'oellh', 'hlole', 'lhole', 'lehlo', 'ohlel', 'oehll', 'lleoh', 'olleh', 'lloeh', 'elhol', 'leolh', 'ehllo', 'lohle', 'eolhl', 'llheo', 'elhlo', 'ohlle', 'lohel', 'elohl', 'helol', 'loehl', 'lheol', 'holle', 'elloh', 'llhoe', 'eollh', 'olehl', 'lhleo', 'loleh', 'ohell', 'leohl', 'lelho', 'olelh', 'heoll', 'ehlol', 'loelh', 'llohe', 'lehol', 'holel', 'hleol', 'leloh', 'elolh', 'oelhl', 'hloel', 'lleho', 'eholl', 'hlloe', 'lolhe'} # {'coeeff', 'ceoeff', 'ceofef', 'foecef', 'feecof', 'effeoc', 'eofefc', 'efcfoe', 'fecofe', 'eceoff', 'ceeffo', 'ecfeof', 'coefef', 'effoce', 'fceefo', 'feofce', 'fecefo', 'ocefef', 'ffecoe', 'ofcefe', 'fefceo', 'ffeoce', 'ffoeec', 'oefcfe', 'ofceef', 'efeofc', 'eefcof', 'ceffoe', 'eocfef', 'ceffeo', 'eoffec', 'ceoffe', 'fcoefe', 'cefofe', 'oeeffc', 'oeffec', 'fceeof', 'ecfofe', 'feefoc', 'ffcoee', 'feocef', 'ffceeo', 'fofcee', 'fecfoe', 'fefoec', 'eefcfo', 'eofcfe', 'ffceoe', 'ofcfee', 'ceefof', 'effoec', 'offcee', 'fofeec', 'eecffo', 'cofefe', 'feeofc', 'ecofef', 'effceo', 'cfeefo', 'ffeoec', 'eofcef', 'cffeeo', 'cffoee', 'efcefo', 'efoefc', 'eofecf', 'ffeceo', 'ofefec', 'foeecf', 'oefefc', 'oecffe', 'foecfe', 'eeffoc', 'ofecfe', 'oceeff', 'offece', 'efofce', 'fcoeef', 'fcofee', 'oefecf', 'fcefeo', 'cfefoe', 'cefoef', 'eoceff', 'ffoece', 'feofec', 'offeec', 'oceffe', 'eeoffc', 'cfoeef', 'fefcoe', 'ecoeff', 'oeecff', 'efofec', 'eeffco', 'eeofcf', 'ecfefo', 'feoefc', 'ecefof', 'feceof', 'oeefcf', 'ecffoe', 'efecfo', 'cefeof', 'fceofe', 'effeco', 'ecfoef', 'efeocf', 'ceeoff', 'foceef', 'focfee', 'eoeffc', 'efoecf', 'oefcef', 'oeffce', 'ffocee', 'efceof', 'fcfeeo', 'eoefcf', 'ocffee', 'oeceff', 'fcfeoe', 'fefeoc', 'efefco', 'cefefo', 'fecfeo', 'ffeeco', 'ofefce', 'cfofee', 'cfefeo', 'efcoef', 'ofeecf', 'eecoff', 'ffeeoc', 'eefofc', 'ecoffe', 'coeffe', 'eoecff', 'fceoef', 'foefec', 'cfeeof', 'cfoefe', 'efefoc', 'eeocff', 'eecfof', 'ofeefc', 'effcoe', 'efocef', 'eceffo', 'fefeco', 'cffeoe', 'feecfo', 'ecffeo', 'coffee', 'feefco', 'eefocf', 'fefoce', 'fofece', 'fcefoe', 'ocfeef', 'eoffce', 'efcofe', 'foefce', 'fecoef', 'cfeoef', 'focefe', 'ocfefe', 'eocffe', 'efocfe', 'feoecf', 'efecof', 'cofeef', 'fcfoee', 'oecfef', 'feeocf', 'ofecef', 'cfeofe', 'feocfe', 'efcfeo', 'foeefc'} [/python]
If you have any questions, please let me know! I was really surprised to find that there is not a Python one-liner solution to this problem. If you know one, please share it with me (admin@finxter.com)!
Question 13: Check if a string is a palindrome.
First things first. What’s a palindrome?
“A palindrome is a word, number, phrase, or other sequence of characters which reads the same backward as forward, such as madam or racecar or the number 10201. “Wikipedia
Here are a few fun examples:
- “Mr. Owl ate my metal worm”
- “Was it a car or a cat I saw?”
- “Go hang a salami, I’m a lasagna hog”
- “Rats live on no evil star”
- “Hannah”
- “Anna”
- “Bob”
Now, this sounds like there is a short and concise one-liner solution in Python!
[python]
def is_palindrome(phrase):
return phrase == phrase[::-1]
print(is_palindrome(“anna”))
print(is_palindrome(“kdljfasjf”))
print(is_palindrome(“rats live on no evil star”))
# True
# False
# True
[/python]
Here is an important tip: learn slicing in Python by heart for your coding interview. You can download my free slicing book to really prepare you thoroughly for the slicing part of the interview. Just register for my free newsletter and I will send you the version as soon as it is ready and proofreaded!
Question 14: Compute the first n Fibonacci numbers.
And here is … yet another toy problem that will instantly destroy your chances of success if not solved correctly.
The Fibonacci series was discovered by the Italian mathematician Leonardo Fibonacci in 1202 and even earlier by Indian mathematicians. The series appears in unexpected areas such as economics, mathematics, art, and nature.
The series starts with the Fibonacci numbers zero and one. Then, you can calculate the next element of the series as the sum of both last elements.
For this, the algorithm has to keep track only of the last two elements in the series. Thus, we maintain two variables a and b, being the second last and last element in the series, respectively.
[python]
# Fibonacci series:
a, b = 0, 1
n = 10 # how many numbers we calculate
for i in range(n):
print(b)
a, b = b, a+b
##1
##1
##2
##3
##5
##8
##13
##21
##34
##55
[/python]
For clarity of the code, I used the language feature of multiple assignments in the first and the last line.
This feature works as follows. On the left-hand side of the assignment, there is any sequence of variables such as a list or a tuple. On the right-hand side of the assignment, you specify the values to be assigned to these variables. Both sequences on the left and on the right must have the same length. Otherwise, the Python interpreter will throw an error.
Note that all expressions on the right-hand side are first evaluated before they are assigned. This is an important property for our algorithm. Without this property, the last line would be wrong as expression ‘a+b’ would consider the wrong value for ‘a’.
Question 15: Use a list as stack, array, and queue.
This problem sounds easy. But I am sure that it does what it is meant to do: separate the experienced programmers from the beginners.
To solve it, you have to know the syntax of lists by heart. And how many beginners have studied in detail how to access a list in Python? I guess not too many…
So take your time to study this problem carefully. Your knowledge about the list data structure is of great importance for your successful programming career!
Let’s start using a list in three different ways: as a stack, as an array, and as a queue.
[python]
# as a list …
l = []
l.append(3) # l = [3]
l.append(4) # l = [3, 4]
l += [5, 6] # l = [3, 4, 5, 6]
l.pop(0) # l = [4, 5, 6]
# … as a stack …
l.append(10) # l = [4, 5, 6, 10]
l.append(11) # l = [4, 5, 6, 10, 11]
l.pop() # l = [4, 5, 6, 10]
l.pop() # l = [4, 5, 6]
# … and as a queue
l.insert(0, 5) # l = [5, 4, 5, 6]
l.insert(0, 3) # l = [3, 5, 4, 5, 6]
l.pop() # l = [3, 5, 4, 5]
l.pop() # l = [3, 5, 4]
print(l)
# [3, 5, 4]
[/python]
If you need some background knowledge, check out the Python tutorial and these articles about the stack data structure and the queue data structure.
Question 16: Search a sorted list in O(log n)
How to search a list in logarithmic runtime? This problem has so many practical applications that I can understand that the coding interviewers love it.
The most popular algorithm that solves this problem is the binary search algorithm. Here are some of the applications:
“Applications of the binary search algorithm include sets, trees, dictionaries, bags, bag trees, bag dictionaries, hash sets, hash tables, maps and arrays.”Quora
Think about the impact of efficient searching! You use these data structures in every single non-trivial program (and in many trivial ones as well).
The graphic shows you the binary search algorithm at work. The sorted list consists of eight values. Suppose, you want to find the value 56 in the list.
The trivial algorithm goes over the whole list from the first to the last element comparing each against the searched value. If your list contains n elements, the trivial algorithm results in n comparisons. Hence, the runtime complexity of the trivial algorithm is O(n).
(If you don’t feel comfortable using the Big-O notation, refresh your knowledge of the Landau symbols here.)
But our goal is to traverse the sorted list in logarithmic time O(log n). So we can not afford to touch each element in the list.
The binary search algorithm in the graphic repeatedly probes the element in the middle of the list (rounding down). There are three cases:
- This element x is larger than the searched value 55. In this case, the algorithm ignores the right part of the list as all elements are larger than 55 as well. This is because the list is already sorted.
- The element x is smaller than the searched value 55. This is the case, we observe in the figure. Here, the algorithm ignores the left part of the list as they are smaller as well (again, using the property that the list is already sorted).
- The element x is equal to the searched value 55. You can see this case in the last line in the figure. Congrats, you have found the element in the list!
In each phase of the algorithm, the search space is reduced by half! This means that after a logarithmic number of steps, we have found the element!
After having understood the algorithm, it is easy to come up with the code. Here is my version of the binary search algorithm.
[python]
def binary_search(lst, value):
lo, hi = 0, len(lst)-1
while lo <= hi: mid = (lo + hi) // 2 if lst[mid] < value: lo = mid + 1 elif value < lst[mid]: hi = mid - 1 else: return mid return -1 l = [3, 6, 14, 16, 33, 55, 56, 89] x = 56 print(binary_search(l,x)) # 6 (the index of the found element) [/python]
Congratulations, you made it through these 15+ wildly popular interview questions. Don’t forget to solve at least 50 Python code puzzles here.
Thanks for reading this article. If you have any more interview questions (or you struggle with one of the above), please write me an email to admin@finxter.com.
I recommend that you subscribe to my free Python email course. You will get 5 super-simple Python cheat sheets. As a bonus, I will send you 10+ educative Python mails. No Spam. 100% | https://blog.finxter.com/python-interview-questions/ | CC-MAIN-2020-34 | refinedweb | 4,610 | 63.39 |
Controlling an RC Servo Motor With an Arduino and Two Momentary Switches
Introduction: Controlling an RC Servo Motor With an Arduino and Two Momentary Switches
The name says it all. Controlling an RC car servo motor with an Arduino and some resistors, jumper wires, and two tactile switches. I made this the second day I got my Arduino, so I'm pretty proud of myself.
Step 1: Parts List
Okay, your going to need the following:
Arduino-$30-35 USD
Find out where to buy those here.
Jumper Wires-$8.50 USD
I got mine from Amazon
Resistors- Pennies a piece
Get em from Radio Shack, Digi-Key, Mouser, Jameco, etc.
Your goin to need two around 100 ohms (brown black brown) and two around 10k ohms(brown black orange). These don't have to be exact.
Servo Motor- $10 USD
Yes, I know this isn't the cheapest one on the internet. Tower Hobbies
Breadboard- $9-$30 USD, Depending on the size.
Amazon
Tactile Switch- $0.20 USD
Only 6,427 left on Digi-Key I just salvaged mine...
Step 2: The Circuit
The circuit is fairly simple. You should be able to throw it on a breadboard in five minutes like I did. Make sure it makes no sense to your less geeky family, and looks like a wad of something you pulled off a drain snake. Yum.
Step 3: The Program/Sketch
Here's my code that I used. I might explain it later, I'm kind of lazy. Thats what this and this are for.
#include <Servo.h>
Servo myservo;
int button7=0;
int button6=0;
int pos=90;
void setup()
{
pinMode(7, INPUT);
pinMode(6, INPUT);
myservo.attach(9);
}
void loop()
{
button7=digitalRead(7);
button6=digitalRead(6);
myservo.write(pos);
delay(5);
pos=constrain(pos,0,180);
if(button7==1 && button6==0)
{
pos++;
}
if(button7==0 && button6==1)
{
pos--;
}
}
Any bugs, glitches? I don't notice any...
Step 4: It Works(or Doesn't)! And, Coming Soon.....
It hopefully works for you, if it doesn't post a comment. We of the instructable community are usualy good at helping people. Hoping to add a video sometime soon. Might just post a video of an Arduino controlling a servo in another project, since I've moved on to bigger and better things. So have fun with this, modify it, heck go out and make money off of it and then tell me! That would just make my day.
Hi can anyone help me use this with a NRF24L01? any help would be greatful
Hello Geeklord - was wondering just off hand if you could possibly help me to utilize 2 joysticks with x, y, and z axis to control 2 servos. Each servo will work off one ps2 style joystick. Servos will be set to pan and tilt. Any thoughts or assistance would be much much appreciated.
Tony
how do you keep it going in one direction ?
The Constrain should either be directly before the write, or after the if.
A jumper kit is nice, but you can also salvage them from different kabels.
Looks cool!
I got an Duemilanove in my XMas stocking with a Servo Shield and have tried my darndest to load a rudimentary motor sketch, to no avail.
I've gotten error after error with the #include <Servo.h> and am wondering if you could shed some light?
I have several instances of the arduino folder (thinking that I might have a path issue). One on the root (c:/), one in my documents, and one wherever the unzipper defaults it to. I have tried different paths, and only get different errors, and none of them are very helpful. I see that you just have a plain "#include <Servo.h>", just like the instructions on the Arduino site said, but that not working for me. Any help would be appreciated.
I'm going to give myself an epileptic seziure if all I can do on my new toy is change the blink rate on an LED (which is all I've done). : (
Thanks!
Hang in there... I'm not sure what you mean by Servo shield, do you mean the adafruit motor shield? Anyway, I don't know if you can type in the #include for the servo library, it seems like you should be able to, but what I did is Sketch>Import Library>Servo in the IDE. Also, have you looked the Servo library documentation? It explains a couple of the quirks with Arduino 0017 (but they make it better over all). when I made this I had the servo signal attached to pin 9, but they've changed the library since then, so now you can attach a servo on any pin. Here's the link -> .
sorry for the late reply...
I actually figured this out a couple of days ago. You are right, I can't just type in the #include. It has to be imported from the sketch library as you mention. Crazy huh? Maybe that can be fixed on the next software upgrade. I think I'll put in a suggestion on their site.
I program for a living, and this just wasn't intuitive to me. All other programming languages I've used would have picked up on the special characters when doing an include, and would have "included" it on the fly.
Either way, thanks for your insight and response. It was spot on.
Huh, I never knew you couldn't just type in the #include. Well, problem solved.
Man, i was about to read through this instructable...but allsteps is gone, its too frustrating...
So i suppose you got it to upload the code now ;)
Also, I found the best deal, From the arduino website so it is legit, its 28 Dollars, you save like 3-5 Dollars!
That is where i am buying mine, Good i'ble Rated ;)
i just bought an arduino Duemilanove and two servos, sooo pumped to try this!
Cool, its a lot of fun. You know you only needed one right?
oh i know, i'm working on a project and i needed two
Cool
does each servo go the opposite direction as the other one? like when one is goin left the other is goin right?
aahhhh, there is only one servo. The two different pictures on the front are just different angles.
ohhh... then How to i make what i explaned? :P
aahhh, attach another servo motor to + and-, and attach its signal pin to digital 10. add something like Servo myservo2; to the beginning of the code, put myservo2.attach(10); and another variable for its position in the beginning too. Then do myservo2.write(whatever you named the other variable); in the loop. Then in the two if comands put a variablename++ or --; the opposite of whatever is being done to the variable pos. I might write the code and try that out later.
the myservo2.attach(10) goes in void Setup() actually.
and put the constrain in there too for the other variable. I'm gonna have to write the code for you unless you understand all my run on sentances...
wrote one, it's getting closer to what i want
// Sweep
// by BARRAGAN <>
#include <Servo.h>
Servo myservo; // create servo object to control a servo
Servo myservo2; // a maximum of eight servo objects can be created
int pos = 0; // variable to store the servo position
void setup()
{
myservo.attach(9); // attaches the servo on pin 9 to the servo object
myservo2.attach(10);
}
void loop()
{
for(pos = 0; pos < 180; pos += 10) // goes from 0 degrees to 180 degrees
{ // in steps of 1 degree
myservo.write(pos); // tell servo to go to position in variable 'pos'
delay(20); // waits 15ms for the servo to reach the position
}
for(pos = 180; pos>=1; pos-=10) // goes from 180 degrees to 0 degrees
{
myservo2.write(pos); // tell servo to go to position in variable 'pos'
delay(20); // waits 15ms for the servo to reach the position
}
for(pos = 180; pos < 0; pos += 10) // goes from 0 degrees to 180 degrees
{ // in steps of 1 degree
myservo.write(pos); // tell servo to go to position in variable 'pos'
delay(20); // waits 15ms for the servo to reach the position
}
for(pos = 1; pos>=180; pos-=10) // goes from 180 degrees to 0 degrees
{
myservo2.write(pos); // tell servo to go to position in variable 'pos'
delay(20); // waits 15ms for the servo to reach the position
}
}
Lol try it, and then tell me(i understand your english :P) How to make one start going right while the other one is in neutral... =/
thanks
Cool, one of these days i will.
lol i'm gonna try to write the code, but it's my second day writing codes... xD so if you dont mind, write the code too plz :)
Awesome! Looking forward to the video. Arduino is definately fun. | http://www.instructables.com/id/Controlling-an-RC-Servo-motor-with-an-Arduino-and-/ | CC-MAIN-2018-05 | refinedweb | 1,491 | 82.75 |
fs
If you're just looking to install WordPress for Android, you can find it on Google Play. If you're a developer wanting to contribute, read on..wordpress android app mobile website write read
jBinary makes it easy to create, load, parse, modify and save complex binary files and data structures in both browser and Node.js. It works on top of jDataView (DataView polyfill with convenient extensions).parse edit buffer binary file read write manipulate
Proverb Teleprompter is a simple teleprompter software useful for video shoots that require the talent to read a lot of text (i.e. news shows, talk shows etc.).broadcast news prompt read teleprompter text video
Cross-platform. Supports: macOS, Windows, Linux, OpenBSD, FreeBSD, Android with Termux.Write (copy) to the clipboard asynchronously. Returns a Promise.clipboard copy paste copy-paste pasteboard read write pbcopy clip xclip xsel.read scrape grab article spider crawl readable readability
Read all stream content and pass it to callbackstream read buffer callback
asammdf is a fast parser/editor for ASAM (Associtation for Standardisation of Automation and Measuring Systems) MDF (Measurement Data Format) files.asammdf supports both MDF version 3 and 4 formats.read edit reader editor parser parse asam mdf
A generator based line reader. This node package will return the lines of a file as a generator when given file descriptor and the size of the file.I created this project primarily for better flow control of reading lines in a file. Instead of using callbacks for reading lines within a file, this will use a generator which has some unique benefits.generator file line reader read by es6 ecma2015
Returns a promise for a Vinyl file.Create a Vinyl file synchronously and return it.vinyl fs file read virtual format gulp gulpfriendly
Having tool specific config in package.json reduces the amount of metafiles in your repo (there are usually a lot!) and makes the config obvious compared to hidden dotfiles like .eslintrc, which can end up causing confusion. XO, for example, uses the xo namespace in package.json, and ESLint uses eslintConfig. Many more tools supports this, like AVA, Babel, nyc, etc.It walks up parent directories until a package.json can be found, reads it, and returns the user specified namespace or an empty object if not found.json read parse file fs graceful load pkg package config conf configuration object namespace namespaced
We have large collection of open source products. Follow the tags from
Tag Cloud >>
Open source products are scattered around the web. Please provide information
about the open source projects you own / you use.
Add Projects. | https://www.findbestopensource.com/tagged/read | CC-MAIN-2019-35 | refinedweb | 435 | 57.98 |
.
So I found this chunk of free code at who claims this EXIF parsing code will work with Qt. Indeed it does to a point. But I had to go bug hunting because the code kept return an orientation value of 0, which isn’t valid anyway, the official values are 1 to 8. And this is the bug I found:
int Exif::processEXIF(QByteArray *data, int itemlen, int *Orientation) { ... if (data->mid(6,2) == "II") { // Exif section in Intel order //qDebug() << data->mid(6,2); MotorolaOrder = 0; } else { if (data->mid(6,2) == "II"){ // Exif section in Motorola order //qDebug() << data->mid(6,2); MotorolaOrder = 1; } else { return -1; // Invalid Exif alignment marker. } } ...
Can you spot the bug? If not, pay attention to that if-then code block. Notice the redundant condition
if (data->mid(6,2) == "II")... in the
else code block. If the opening if condition was already proven false, why would it be true as an opposite condition. In this case, it would always return -1, so the rest of the source code won’t even get past this code block. So to fix this, you have to remove that redundant chunk of code, like so:
int Exif::processEXIF(QByteArray *data, int itemlen, int *Orientation) { ... if (data->mid(6,2) == "II") { // Exif section in Intel order //qDebug() << data->mid(6,2); MotorolaOrder = 0; } else { MotorolaOrder = 1; } ...
And wouldn’t you know it, then it runs like a charm!!! You’re very welcome!
You can download the optimized source code, which I personally cleaned up, here:
And this is how you would use it in Qt 4.8.5 at least:
#include <QFile> #include "exif.h" QString filePath = "c:/image.jpg"; QFile file(filePath); if (file.open(QIODevice::ReadOnly) == true) { Exif *exif = new Exif(); int orientation = 0; exif->readJpegSections(file, &orientation); file.close(); //do something with variable orientation if (orientation == 8) { //rotate image -90 degrees } }
Happy coding! I’m sure you can probably even go deeper and extract more EXIF tags, but that’s beyond my understanding. You could extract embedded thumbnails too, but there have been cases the thumbnail doesn’t match the actual data anymore once it’s modified, so even embedded EXIF thumbnails become unreliable. Otherwise, I think orientation and maybe even the comments are quite useful. Who knows, someone has to find it all useful. LOL. | https://www.eastfist.com/qt_tutorials/index.php/2017/02/07/how-to-read-exif-orientation-from-jpeg-in-qt-4-8-5/ | CC-MAIN-2019-09 | refinedweb | 393 | 65.83 |
Dual-purpose-code-embedded-external-python
Last Update: 12/22/2020
The originpro Python package works almost exactly the same for the Python interpreter embedded into Origin and a Python interpreter accessing Origin externally. This means that you could write a portable script that works for both types of interpreter.
However, there are two small additions that need to be made to such a script. Those additions are illustrated in the code below. They both target originpro only when run from an external Python interpreter.
The first addition sets the visibility of the instance of Origin that launches, while the second shuts down the running instance of Origin. Simply wrap your originpro code in these two code blocks and your script can become portable between interpreters.
import originpro as op
# Set Origin instance visibility.
# Important for only external Python.
# Should not be used with embedded Python.
if op.oext:
op.set_show(True)
# Your originpro-based Python code goes here.
# Exit running instance of Origin.
# Required for external Python but don't use with embedded Python.
if op.oext:
op.exit()
Keywords:Python, originpro, embedded, external, interpreter, portable, dual | http://cloud.originlab.com/doc/en/Quick-Help/Dual-purpose-code-embedded-external-python | CC-MAIN-2022-21 | refinedweb | 188 | 50.43 |
Perl Interview Questions And Answers for experienced professionals from Codingcompiler. These Perl interview questions were asked in various interviews conducted by top multinational companies across the globe. We hope that these interview questions on Perl will help you in cracking your job interview. All the best and happy learning.
Perl Interview Questions
- What is Perl language?
- Why Perl? Explain in brief?
- What are the various advantages and disadvantages of Perl?
- What are the various uses of Perl?
- Can you name the variables in which the chomp works? Also, how they are different from one another?
- Explain the execution of a program in Perl.
- While writing a program, why the code should be as short as possible?
- What are the features of Perl programming?
- Is perl a case sensitive language?
- What is a perl identifier?
Perl Interview Questions And Answers
Q1. What is Perl language?
Answer: Perl stands for “Practical Extraction and Reporting Language”. It’s a powerful scripting language and is rich in features. Using Perl, we can write powerful and efficient code that can be used in mission-critical projects.
Q2. Why Perl? Explain in brief?
Answer: UNIX system administrators and application developers often have to rely on several different languages to accomplish their tasks. This means learning a number of different syntaxes and having to write in multiple languages to accomplish a task. For example, to process a file, a system administrator might have to write a shell script using sh, process a file using awk or grep, and edit the file using sed. For other uses, the administrator may have to create a C program with its longer create/compile/debug development cycle.
It would be better if the administrator could combine many of these tasks into a simple language that is easy to write and develop, and reasonably efficient and complete. Along comes Perl.
In a single language, Perl combines some of the best features of C, sed, awk, and sh. People familiar with these languages have little difficulty being productive in Perl. Perl’s expression syntax is very C-like. Perl uses sophisticated pattern-matching techniques to scan large amounts of data very quickly. Although optimized for scanning text, Perl can also deal with binary data. If you have a problem on which you would ordinarily use sed, awk, or sh, but it exceeds these tools’ capabilities or must run a little faster and you don’t want to write the program in a compiled language such as C, Perl may be the language for you.
Q3. What are the various advantages and disadvantages of Perl?
Answer: Advantages.
The main disadvantage of Perl is that as it is an interpreted language, the execution speed is quite slow. Although it allows us to write high-level code, we cannot write complex code using Perl. Perl has too many features that can be exhaustive for a programmer to comprehend.
Q4. What are the various uses of Perl?
Answer: Perl is used in a mission-critical project – like the defense industry. It is also used in “Rapid Prototyping”.
Q5. Can you name the variables in which the chomp works? Also, how they are different from one another?
Answer:
These are: Scalar and Array
Scalar is generally denoted by symbol $ and it can have a variable which can either be a strong or a number. An array on the other side is denoted by @ symbol. An array is always a number. Both these variables have different namespace. The scalar variables are capable to hold a value of 1 digit while array can have more values. Both of them can be executed in the function whenever there is a need of the same.
Q6. Explain the execution of a program in Perl.
Answer: Perl is portable and Perl programs can be executed on any platform. Though having a Perl IDE is useful, we can even write the Perl code in a notepad and then execute the program using the command prompt.
For Example, consider the following simple program to print “Hello, World!!”
#!/usr/bin/perl
Print(“Hello, World!!”);
In this code, the first line “#!/usr/bin/perl”, is the path to the Perl interpreter.
Let’s name this file as “hello.pl”. We can execute this program by just giving the following command in the command window:
pl hello.pl
Output: Hello, World!!
Q7. While writing a program, why the code should be as short as possible?
Answer:
Complex codes are not always easy to handle. They are not even easy to be reused. Moreover, finding a bug in them is not at all a difficult job. Any software or application if have complex or lengthy code couldn’t work smoothly with the hardware and often have compatibility issues. Generally, they take more time to operate and thus becomes useless or of no preference for most of the users. The short code always makes sure that the project can be made user-friendly and it enables programmers to save a lot of time.
Q8. What are the features of Perl programming? 20,000 third party modules available from the Comprehensive Perl Archive Network.
- The Perl interpreter can be embedded into other systems.
Q9. Is perl a case sensitive language?
Answer: Yes! Perl is a case sensitive programming language
Q10. What is a perl identifier?
Answer: A Perl identifier is a name used to identify a variable, function, class, module, or other object. A Perl variable name starts with either $, @ or % followed by zero or more letters, underscores, and digits (0 to 9).
Frequently Asked Perl Interview Questions And Answers
Q11. What are data types that perl supports?
Answer: Perl has three basic data types − scalars, arrays of scalars, and hashes of scalars, also known as associative arrays.
Q12. What are different data types that Perl supports. Elaborate on them.
Answer:
PERL has three basic data types. They are:
Scalar data type: These are simple variables that are either a number, string or a reference. Scalar data types start with a dollar sign($). Scalar values are by default undefined. A scalar value is interpreted as TRUE in the Boolean sense if it not a null string.
Arrays of scalar: These are the ordered list of scalars that can be accessed by a numeric index and starts with a 0. They start with “at” sign (@). In simpler words, array stores the list of scalar values.
Hashes of scalars: Hashes are also known as associative arrays and are preceded by a percentage sign (%). Hashes are unordered sets of value pairs that can be accessed using the keys as subscripts. It stores associative arrays that use a key-value as an index instead of numerical indexes. It is a third major data type after scalars and arrays.
Q13. What does CPAN means?
Answer:
It stands for Comprehensive Perl Archive Network and is a large collection of all the documents and software related to Perl. The programmers can access the same and can avoid the difficulties they face. CPAN is of significance use for the programmers and they are free to derive a lot of useful information from the same.
Q14. What is a list context?
Answer:
Assignment to an array or a hash evaluates the right-hand side in a list context.
Q15. What is boolean context?
Answer:
Boolean context is simply any place where an expression is being evaluated to see whether it’s true or false.
Q16. Explain the meaning of Perl one-liner.
Answer: Perl one-liners are one line command programs that are used for success of any operation. They may include more than one Perl statements, and one advantage to using it is that the program can be typed and executed from the command line instantly. Example:
The #run program, but with warnings
Perl –w my_file
The #run program under debugger
Perl –d my_file
Q17. What happens when you return a reference to a private variable?
Answer:
Perl keeps track of your variables when we return a reference to a private variable whether it’s dynamic or otherwise. Perl is not going to free things you are done using them.
Q18. Differentiate USE and REQUIRE in Perl?
Answer:
- USE method is used for modules while REQUIRE method is used for both modules and libraries.
- The objects which are included are varied at compilation time while in REQUIRE the objects are included are verified at runtime.
- You are not supposed to give a file extension in USE and REQUIRE.
Q19. Write down the numeric operators in the Perl programming?
Answer: There are various operators in Perl including:
- Comparison operators
- Arithmetic operators
- Bitwise Operators
- String concatenation:
- comparison operators
From the above, the athematic operators work from left to right while on the other side the Bitwise operators work from right to left.
Q20. Write down flags or arguments that are used while executing a program in Perl?
Answer: There are so many different flags or arguments that are used in Pearl and some of them are given below;
e- Denotes execute
d-Denotes debugging
w- Denotes warning
c- Denotes compile only
Apart from these, the user can also leverage the combination of different arguments together.
Advanced Perl Interview Questions And Answers
Q21. How many control keys are there in Perl language?
Answer: In the Perl language, there are three main control keys known as:
- Redo statement
- Next statement
- Last statement
Q22. How to copy a file in Perl?
Answer: To copy the content of one file into another file, read all lines of the first file in a while loop and copy it in another file.
Q23. How to close a file in Perl?
Answer: Closing a file in Perl is not mandatory. However, using close() function will disassociate the file handle from the corresponding file.
Q24. What is the importance of Perl warnings? How do you turn them on?
Answers:’;
Q25. Explain ‘->’ in Perl?
Answer: It is a symbolic link which links one file name to a new file name.
For example, in file1 -> file2, if we read file1, we will end up reading file2.
Q26. What is a chop() function in Perl?
Answer: Perl chop() function removes the last character from a string regardless of what that character is. It returns the chopped character.
Q27. What is circular reference?
Answer
A circular reference takes place when two references have a reference to one another. While creating circular references, it is important to proceed carefully since a circular reference can also lead to memory leaks.
Q28. How can you empty an array?
Answer
There are three methods to empty an array that is as follow:
By setting length placing its length to any negative number.
By assigning a null list().
Set an array to undef to clear it.
Q29. Is the length of Perl code is same as in C++ and Java code?
Answer
Perl code is less as a comparison to Java and C++ language since Perl code is the one-fifth size of the C++ code, we need less to maintain, write and debug.
Q30. Name the options that can be used to wrap scripts inside loops?
Answer:
-n or -p options can be used.
Top Perl Interview Questions for Experienced
Q31. What are the different string manipulation operators in Perl?
Answer
Perl provides two different operators to manipulate strings.
Concatenation operator (.): Combines two strings to form a result string.
Repetition operator (x): Repeats string for a specified number of times.
Example
$str1 = “abc”;
$str2 = “def”;
$str3 = $str1.$str2;#concatenates the string and str3 has value ‘abcdef’
Q32. What is “grep” function in Perl?
Answer:
The grep function in Perl used for Pattern matching as in other scripting languages.
The “grep” function works on a list.
It evaluates an expression or a block for each element of the List.
For each statement that returns true as a result of evaluating an expression, it adds that element to the list of returning values.
Look at the following code snippet:
#!/usr/bin/[email protected] =(“foo”,10,0,”bar”,20);
@has_string = grep( /\s/,@list );
Print “@has_string\n”;
Output: foo bar
This code executes “grep” command on a list and matches the pattern string (/s) to the list. The output is only the elements.
Q33. How many types of operators are used in Perl?
Answer
Arithmetic operators: +,–,*
Assignment operators: += , -+, *=
Increment/ decrement operators: ++, – –
String concatenation: ‘.’ operator
comparison operators: ==, !=, >, < , >=
Logical operators: &&, ||, !
Q34. What does the q{ } operator do?
Answer :
The operator encloses a string in the single quotes.
Q35. Why Perl aliases are considered to be faster than references?
Answer:
Perl aliases are faster than references because they do not require any dereferencing.
Q36. When would ‘local $_’ in a function ruin your day?
Answer:
When your caller was in the middle for a while(m//g) loop
The /g state on a global variable is not protected by running local on it. That’ll teach you to stop using locals. Too bad $_ can’t be the target of a my() — yet.
Q37. How do I do < fill-in-the-blank > for each element in an array?
Answer:
#!/usr/bin/perl -w
@homeRunHitters = (‘McGwire’, ‘Sosa’, ‘Maris’, ‘Ruth’);
foreach (@homeRunHitters) {
print “$_ hit a lot of home runs in one yearn”;
}
Q38. Why is it hard to call this function: sub y { “because” }
Answer:].
Q39. How do you print out the next line from a filehandle with all its bytes reversed?
Answer:
print scalar reverse scalar <FH>
Surprisingly enough, you have to put both the reverse and the <FH> into scalar context separately for this to work.
Q40. How do I do < fill-in-the-blank > for each element in a hash?
Answer:”;
} | https://codingcompiler.com/perl-interview-questions-answers/ | CC-MAIN-2020-16 | refinedweb | 2,269 | 66.84 |
This section provides an overview of what x86 is, and why a developer might want to use it.
It should also mention any large subjects within x86, and link out to the related topics. Since the Documentation for x86 is new, you may need to create initial versions of those related topics.
The family of x86 assembly languages represents decades of advances on the original Intel 8086 architecture. In addition to there being several different dialects based on the assembler used, additional processor instructions, registers and other features have been added over the years while still remaining backwards compatible to the 16-bit assembly used in the 1980s.
The first step to working with x86 assembly is to determine what the goal is. If you are seeking to write code within an operating system, for example, you will want to additionally determine whether you will choose to use a stand-alone assembler or built-in inline assembly features of a higher level language such as C. If you wish to code down on the "bare metal" without an operating system, you simply need to install the assembler of your choice and understand how to create binary code that can be turned into flash memory, bootable image or otherwise be loaded into memory at the appropriate location to begin execution.
A very popular assembler that is well supported on a number of platforms is NASM (Netwide Assembler), which can be obtained from. On the NASM site you can proceed to download the latest release build for your platform.
Windows
Both 32-bit and 64-bit versions of NASM are available for Windows. NASM comes with a convenient installer that can be used on your Windows host to install the assembler automatically.
Linux
It may well be that NASM is already installed on your version of Linux. To check, execute:
nasm -v
If the command is not found, you will need to perform an install. Unless you are doing something that requires bleeding edge NASM features, the best path is to use your built-in package management tool for your Linux distribution to install NASM. For example, under Debian-derived systems such as Ubuntu and others, execute the following from a command prompt:
sudo apt-get install nasm
For RPM based systems, you might try:
sudo yum install nasm
Mac OS X
Recent versions of OS X (including Yosemite and El Capitan) come with an older version of NASM pre-installed. For example, El Capitan has version 0.98.40 installed. While this will likely work for almost all normal purposes, it is actually quite old. At this writing, NASM version 2.11 is released and 2.12 has a number of release candidates available.
You can obtain the NASM source code from the above link, but unless you have a specific need to install from source, it is far simpler to download the binary package from the OS X release directory and unzip it.
Once unzipped, it is strongly recommended that you not overwrite the system-installed version of NASM. Instead, you might install it into /usr/local:
$ sudo su <user's password entered to become root> # cd /usr/local/bin # cp <path/to/unzipped/nasm/files/nasm> ./ # exit
At this point, NASM is in
/usr/local/bin , but it is not in your path. You should now add the following line to the end of your profile:
$ echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bash_profile
This will prepend
/usr/local/bin to your path. Executing
nasm -v at the command prompt should now display the proper, newer, version.
This is a basic Hello World program in NASM assembly for 32-bit x86 Linux, using system calls directly (without any libc function calls). It's a lot to take in, but over time it will become understandable. Lines starting with a semicolon(
; ) are comments.
If you don't already know low-level Unix systems programming, you might want to just write functions in asm and call them from C or C++ programs. Then you can just worry about learning how to handle registers and memory, without also learning the POSIX system-call API and the ABI for using it.
This makes two system calls:
write(2) and
_exit(2) (not the
exit(3) libc wrapper that flushes stdio buffers and so on). (Technically,
_exit() calls sys_exit_group, not sys_exit, but that only matters in a multi-threaded process.) See also
syscalls(2) for documentation about system calls in general, and the difference between making them directly vs. using the libc wrapper functions.
In summary, system calls are made by placing the args in the appropriate registers, and the system call number in
eax , then running an
int 0x80 instruction. See also What are the return values of system calls in Assembly? for more explanation of how the asm syscall interface is documented with mostly C syntax.
The syscall call numbers for the 32-bit ABI are in
/usr/include/i386-linux-gnu/asm/unistd_32.h (same contents in
/usr/include/x86_64-linux-gnu/asm/unistd_32.h ).
#include <sys/syscall.h> will ultimately include the right file, so you could run
echo '#include <sys/syscall.h>' | gcc -E - -dM | less to see the macro defs (see this answer for more about finding constants for asm in C headers)
section .text ; Executable code goes in the .text section global _start ; The linker looks for this symbol to set the process entry point, so execution start here ;;;a name followed by a colon defines a symbol. The global _start directive modifies it so it's a global symbol, not just one that we can CALL or JMP to from inside the asm. ;;; note that _start isn't really a "function". You can't return from it, and the kernel passes argc, argv, and env differently than main() would expect. _start: ;;; write(1, msg, len); ; Start by moving the arguments into registers, where the kernel will look for them mov edx,len ; 3rd arg goes in edx: buffer length mov ecx,msg ; 2nd arg goes in ecx: pointer to the buffer ;Set output to stdout (goes to your terminal, or wherever you redirect or pipe) mov ebx,1 ; 1st arg goes in ebx: Unix file descriptor. 1 = stdout, which is normally connected to the terminal. mov eax,4 ; system call number (from SYS_write / __NR_write from unistd_32.h). int 0x80 ; generate an interrupt, activating the kernel's system-call handling code. 64-bit code uses a different instruction, different registers, and different call numbers. ;; eax = return value, all other registers unchanged. ;;;Second, exit the process. There's nothing to return to, so we can't use a ret instruction (like we could if this was main() or any function with a caller) ;;; If we don't exit, execution continues into whatever bytes are next in the memory page, ;;; typically leading to a segmentation fault because the padding 00 00 decodes to add [eax],al. ;;; _exit(0); xor ebx,ebx ; first arg = exit status = 0. (will be truncated to 8 bits). Zeroing registers is a special case on x86, and mov ebx,0 would be less efficient. ;; leaving out the zeroing of ebx would mean we exit(1), i.e. with an error status, since ebx still holds 1 from earlier. mov eax,1 ; put __NR_exit into eax int 0x80 ;Execute the Linux function section .rodata ; Section for read-only constants ;; msg is a label, and in this context doesn't need to be msg:. It could be on a separate line. ;; db = Data Bytes: assemble some literal bytes into the output file. msg db 'Hello, world!',0xa ; ASCII string constant plus a newline (0x10) ;; No terminating zero byte is needed, because we're using write(), which takes a buffer + length instead of an implicit-length string. ;; To make this a C string that we could pass to puts or strlen, we'd need a terminating 0 byte. (e.g. "...", 0x10, 0) len equ $ - msg ; Define an assemble-time constant (not stored by itself in the output file, but will appear as an immediate operand in insns that use it) ; Calculate len = string length. subtract the address of the start ; of the string from the current position ($) ;; equivalently, we could have put a str_end: label after the string and done len equ str_end - str
On Linux, you can save this file as
Hello.asm and build a 32-bit executable from it with these commands:
nasm -felf32 Hello.asm # assemble as 32-bit code. Add -Worphan-labels -g -Fdwarf for debug symbols and warnings gcc -nostdlib -m32 Hello.o -o Hello # link without CRT startup code or libc, making a static binary
See this answer for more details on building assembly into 32 or 64-bit static or dynamically linked Linux executables, for NASM/YASM syntax or GNU AT&T syntax with GNU
as directives. (Key point: make sure to use
-m32 or equivalent when building 32-bit code on a 64-bit host, or you will have confusing problems at run-time.)
You can trace it's execution with
strace to see the system calls it makes:
$ strace ./Hello execve("./Hello", ["./Hello"], [/* 72 vars */]) = 0 [ Process PID=4019 runs in 32 bit mode. ] write(1, "Hello, world!\n", 14Hello, world! ) = 14 _exit(0) = ? +++ exited with 0 +++
The trace on stderr and the regular output on stdout are both going to the terminal here, so they interfere in the line with the
write system call. Redirect or trace to a file if you care. Notice how this lets us easily see the syscall return values without having to add code to print them, and is actually even easier than using a regular debugger (like gdb) for this.
The x86-64 version of this program would be extremely similar, passing the same args to the same system calls, just in different registers. And using the
syscall instruction instead of
int 0x80 . | https://riptutorial.com/x86/topic/1164/getting-started-with-intel-x86-assembly-language---microarchitecture | CC-MAIN-2019-13 | refinedweb | 1,652 | 61.46 |
UPDATE: This post underwent significant updates on 17 November 2016 to correct erroneous statements and examples, to fix the underlying HTML layout (not obvious to readers unless you view HTML source in a web browser), and to fix some spelling issues. If for some reason you want to see the old, incorrect post, check out the version archived by the Wayback Machine at.
I have blogged before regarding Groovy's support for switching on String. Groovy can switch on much more than just literal
Strings (and literal integral types that Java allows switching on) and I demonstrate this briefly here.
Groovy's
switch statement will use a method implemented with the name "
isCase" to determine if a particular
switch option is matched. This means that custom objects are "switchable" in Groovy. For the simple example in this blog post, I'll use the Java classes
SimpleState and
State.java.
SimpleState.java
package dustin.examples; import static java.lang.System.out; /** * Java class to be used in demonstrating the "switch on steroids" in Groovy. * The Groovy script will be able to {@code switch} on instances of this class * via the implicit invocation of {@code toString()} if the {@code case} * statements use {@code String}s as the items to match. */ public class SimpleState { private String stateName; public SimpleState(final String newStateName) { this.stateName = newStateName; } @Override public String toString() { return this.stateName; } }
The above Java class's
String representation can be switched on in a Groovy script as shown in the next code listing for
switchOnSimpleState.groovy:
switchOnSimpleState.groovy
#!/usr/bin/env groovy import dustin.examples.SimpleState SimpleState state = new SimpleState("Colorado") "'"
When the above Groovy script is run against the above simple Java class, the code prints out the correct information because Groovy implicitly invokes the
toString() method on the "state" instance of
State being switched on. Similar functionality can now be achieved in Java, but one needs to explicitly call
toString() on the object being switched on. It's also worth keeping in mind that when I wrote the original version of this post in early 2010, Java did not support switching on Strings. The output of running the above is shown in the screen snapshot below (the name of the script doesn't match above because this is an old screen snapshot from this original post before it was corrected and updated).
With Groovy and the
isCase method, I can switch on just about any data type I like. To demonstrate this, the Java class
State will be used and its code listing is shown below. It includes a
isCase(State) method that Groovy will implicitly call when instances of
State are being switched against as the
case choices. In this case, the
isCase(State) method simply calls the
State.equals(Object) method to determine if that
case is true. Although this is the typical behavior for implementations of
isCase(Object), we could have had it determine if it was the case or not in any way we wanted.
State.java
package dustin.examples; import static java.lang.System.out; public class State { private String stateName; public State(final String newStateName) { this.stateName = newStateName; } /** * Method to be used by Groovy's switch implicitly when an instance of this * class is switched on. * * @param compareState State passed via case to me to be compared to me. */ public boolean isCase(final State compareState) { return compareState != null ? compareState.equals(this) : false; } public boolean equals(final Object other) { if (!(other instanceof State)) { return false; } final State otherState = (State) other; if (this.stateName == null ? otherState.stateName != null : !this.stateName.equals(otherState.stateName)) { return false; } return true; } @Override public String toString() { return this.stateName; } }
The simple standard Java class shown above implements an
isCase method that will allow Groovy to switch on it. The following Groovy script uses this class and is able to successfully switch on the instance of
State.
#!/usr/bin/env groovy import dustin.examples.State State state = new State("Arkansas") State alabama = new State("Alabama") State arkansas = new State("Arkansas") State alaska = new State("Alaska") State arizona = new State("Arizona") State california = new State("California") State colorado = new State("Colorado") State connecticut = new State("Connecticut") "'"
The output in the next screen snapshot indicates that the Groovy script is able to successfully switch on an instance of a
State object. The first command is using the "simple" example discussed earlier and the second command is using the example that needs to invoke
State's
isCase(State) method.
The beauty of this ability to have classes be "switchable" based on the implementation of an
isCase() method is that it allows more concise syntax in situations that otherwise might have required lengthy
if/
else if/
else constructs. It's preferable to avoid such constructs completely, but sometimes we run into them and the Groovy
switch statement makes them less tedious.
It is entirely possible with the Groovy
switch to have multiple switch options match the specified conditions. Therefore, it is important to list the
case statements in order of which match is preferred because the first match will be the one executed. The
break keyword is used in Groovy's
switch as it is in Java.
There is much more power in what the Groovy
switch supports. Some posts that cover this power include Groovy Goodness: The Switch Statement, Groovy, let me count the ways in which I love thee, and the Groovy documentation.
1 comment:
Yes i am really agree with all of your knowledge....and its really gr8 you have added coding here..... | http://marxsoftware.blogspot.com/2010/01/groovy-switch-on-steroids.html | CC-MAIN-2017-13 | refinedweb | 920 | 61.87 |
I'm building a Metro App using VS 2012 and the Windows 8 SDK. In the app, I have this class (with a corresponding struct)
// Parameter data structure for tools
public struct ToolParameter
{
public string Title { get; set; }
public Object Value { get; set; }
public string Description { get; set; }
}
// Tool that will be used to execute something on phone
public class Tool
{
public string Title{ get; set; }
public ObservableCollection<ToolParameter> Parameters { get; set; }
public string Description { get; set; }
}
this.DataContext = currentTool;
<TextBox x:
From what I can see, youre TextBox is binding to
Value which is a property of the
ToolParameter class. The DataContext for the page is of type
Tool. Tool contains
Parameters which is a collection of ToolParameter objects. So, the TextBox needs to be within an ItemsCollection that has the ItemsSource set to bind to the Parameters property.
Example:
<StackPanel> <TextBlock Text="{Binding Title}"/> <TextBlock Text="{Binding Description}"/> <!-- showing a ListBox, but can be any ItemsControl --> <ListBox ItemsSource="{Binding Parameters}"> <ListBox.ItemTemplate> <DataTemplate> <TextBox Text="{Binding Value}"/> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </StackPanel>
Also make sure that your classes
Tool and
ToolParameter implement INotifyPropertyChanged and that the setter for your properties fire the PropertyChanged event
UPDATE: Adding info that was too large for a comment
This should help understand Source/Target in bindings. For your TextBox, the source of the binding is the Value property and the Target is the TextProperty of the TextBox. When the source updates, the Text will update within the TextBox. If you the TextProperty of the TextBox changes, then it will update the Value property of your object (provided mode is set to TwoWay). You're tool however will NOT update and neither will the Parameters property of the Tool class. If you wish to update the tool object when a property of a ToolParameter updates, then you will need to subscribe to the PropertyChanged event of each ToolParameter object that gets added to the Parameters collection. | https://codedump.io/share/4mhioNObESmB/1/twoway-binding-not-updating-target---metro-app | CC-MAIN-2018-09 | refinedweb | 323 | 50.36 |
My good friend and coding partner in crime, Jordan, is building a Pokedex app using React. She's a CSS and design whiz! Check her out! Anyway, when we pair she does the fancy artsy magic and I tackle the APIs. So when she asked me to make a rails API for her Pokedex blog project I was more than happy to oblige!
I talked it over with her to find out exactly what she wanted. She wants users to be able to signup, login, and add pokemon to their decks with persistence. So my thought is to have a User model, a Pokemon Model, a joins table called Pokedex that will have a has-many-through relationship between a User and a Pokemon. Using draw.io I made up the relationships I'm envisioning to get her approval:
For my user authentication, I will use sessions and cookies. There are lots of opinions on whether this is the best choice versus using JSON web tokens, but I find sessions and cookies easier to navigate and since it's an app that I don't anticipate having huge amounts of users and it's strictly for fun, I'm not too concerned about CSRF attacks. That being said, this is why you don't reuse passwords people! Even though I will be using Bcrypt to hash the passwords and I won't know what you entered, it's always a good idea to use different passwords because you never know when an app isn't using good authentication practices. Ok, I'm going to get off my soapbox, and time to start building!
Firstly I wanted to make sure I was on the same page as Jordan, so I navigated to our collab channel on Github and cloned a copy of her personal-pokedex project.
In my own terminal I made a folder to hold both the back and front end projects called pokedex and ran:
$ git clone $ cd personal-pokedex $ yarn install $ yarn start
This allowed me to see what her front end is looking like right now and I can see how she has structured her components.
All right! Time to start on the API!
~/.../pokedex/personal-pokedex // ♥ > cd .. ~/.../post-grad/pokedex // ♥ > rails new pokedex-api --api --database=postgresql
Once Rails had completed its process of generating the files, I made a new git repository in Jordeks:
Then made my initial commit and it's time to rumble!
Let's add some gems that we'll need; go ahead and un-comment out
gem 'bcrypt', '~> 3.1.7' gem 'rack-cors'
in the Gemfile. We'll need bycrypt to set up password protection for users and rack-cors so the front end can make requests to the backend. Run
$ bundle install
so they are added to the Gemfile.lock. While we are thinking about cors, let's go to config/initializers/cors.rb and comment in:
Rails.application.config.middleware.insert_before 0, Rack::Cors do allow do origins '', '', '' resource '*', headers: :any, methods: [:get, :post, :put, :patch, :delete, :options, :head] end end
For your origins, you can use a '*' which is the wildcard and will allow for any URL to send requests, or you can specify which local ports you might use while in development and also later add the deployed URL.
I also know I will want to use a serializer to send the data as JSON so let's add the rails active model serializer:
$ bundle add active_model_serializers
After talking over what Jordan wants, I came up with this plan for making the models and tables:
Using $ rails g resource, I can generate migrations, models, controllers, serializers, and routes using the following commands (for Pokedex, I don't anticipate needing a list of all the joins, so I'm just going to generate a model for it):
$ rails g resource User username password_digest $ rails g resource Pokemon name p_id:integer image_url $ rails g model Pokedex user:belongs_to pokemon:belongs_to
If the data type is a string, you don’t need to specify their type following the column name. Adding user:belongs_to specifies the relationship between your two tables and sets up a column for user_id in your pokedexes table. Additionally, we use column name password_digest to avoid directly storing passwords as strings.
Once rails is done generating, I like to go through each file that is created to make sure everything worked as expected, which means for the resources checking the routes.rb, controller, model, serializer, and the migration. I'm looking for spelling mistakes and typos in particular in the migration table before I migrate it.
Top Trick: By installing the active model serializer before I generated the resources, rails knows to automatically build the serializer for anything that has a controller. That way I don't have to go back later and manually generate the serializers. However, if you don't do this, this blog includes some great step by steps for generating serializers.
Also, when looking at the serializers that were generated, I notice that the user_serializer includes the password_digest. Under no circumstance should that be sent to the frontend where malicious users might try to access it. So let's take that out now.
pokedex-api/app/serializers/user_serializer.rb
class UserSerializer < ActiveModel::Serializer attributes :id, :username, :password_digest end
to:
class UserSerializer < ActiveModel::Serializer attributes :id, :username end
Next, run $ rails db:create to create the back end and $ rails db:migrate to migrate your tables.
At this point I like to check that I can run
$ rails s
and see that it runs:
Woot! Welcome to rails! Now if I navigate to, I get the very helpful rails error: The action 'index' could not be found for PokemonsController. Which is exactly what I'm expecting at this point.
Ok, let's go back to the models and finish adding the relationships. Pokedex already has both belongs_to relationships because we generated it with that reference.
class Pokedex < ApplicationRecord belongs_to :user belongs_to :pokemon end
Let's add in the missing has_many relationships:
class Pokemon < ApplicationRecord has_many :pokedexes has_many :users, through: :pokedexes end
And for the User model, I'm going to add the has_secure_password macro so that we can use Bcrypt to protect their password.
class User < ApplicationRecord has_secure_password has_many :pokedexes has_many :pokemons, through: :pokedexes end
My next step is always to test out my relationships in the rails console to make sure it behaves the way I want it to. And guess what! By doing this I found a typo in where I placed the colon on one of my has_manys, so it is worth the time to do this! So after testing that I can make a user, a pokemon, and associate the two to create a pokedex, we will want to create a seeds file. I like taking time to do this as it helps set up a game plan for how the controllers will work later.
jordan = User.create(username: "Jordles", password: "password") meks = User.create(username: "Meks", password: "password") bulbasaur = Pokemon.create(name: "bulbasaur", p_id: 1) ivysaur = Pokemon.create(name: "ivysaur", p_id: 2) venusaur = Pokemon.create(name: "venusaur", p_id: 3) jordan.pokemons << bulbasaur jordan.pokemons << ivysaur meks.pokemons << ivysaur meks.pokemons << venusaur
Top Trick! You can use the commands:
$ rails db:drop db:create db:migrate db:seed
All in one line to drop the database so any users you made in the console that you don't want to conflict with your seeds are removed. Then it will recreate the database, migrate and seed it all in one go.
Next, let's test that we can build an endpoint, I doubt that Jordan will ever want to return all the pokemon in the database, but it's a good place to check that we are getting the data we expect as json through our serializer.
class PokemonsController < ApplicationController def index render json: Pokemon.all, status: 200 end end
Yes! We got back an array of objects with the name, id, and an empty URL for the image. After talking with Jordan, turns out she won't ever need to get the URL from the backend, so for now, we can take that out of the serializer.
class PokemonSerializer < ActiveModel::Serializer attributes :id, :name, :p_id end
Sweet. I am happy with this.
Ok, before I get too far ahead of myself, I want to move the pokemon and user controllers into a V1 folder inside an API folder inside the controllers' folder. This is a good habit to get into so that one can easily create new versions on the API that the front end can use and the only thing the front end has to change is which version it sends requests to. So we need to create the two folders, namespace the two controllers, and update the routes.
pokedex-api/app/controllers/api/v1/pokemons_controller.rb
class Api::V1::PokemonsController < ApplicationController def index render json: Pokemon.all, status: 200 end end
pokedex-api/app/controllers/api/v1/users_controller.rb
class Api::V1::UsersController < ApplicationController end
pokedex-api/config/routes.rb
Rails.application.routes.draw do namespace :api do namespace :v1 do resources :pokemons resources :users end end # For details on the DSL available within this file, see end
Now I can navigate to and see:
This is the route that the frontend will send requests to for all the pokemon (if it ever wants it).
Next let's take care of some validations. We don't want any entries that might mess up with our database. The user absolutely must have a username, and it must be unique since that is how the will be identified upon login.
class User < ApplicationRecord has_secure_password has_many :pokedexes has_many :pokemons, through: :pokedexes validates :username, presence: true validates :username, uniqueness: true end
And Pokemon are must also have names and p_ids that are present and unique:
class Pokemon < ApplicationRecord has_many :pokedexes has_many :users, through: :pokedexes validates :name, :p_id, presence: true validates :name, :p_id, uniqueness: true end
Now if I try to create a new Pokemon without the p_id, I get an error in console:
and a successful insertion if it is saved to the database! I can also see the validations at work if I try to make a Pokemon that already exists:
By using $ p.errors.any? I can see if there was an error and the command $ p.errors.messages returns to me a list of validation errors.
If you are with me this far, give yourselves a pat on the back! We have successfully made our database, models, controllers, serializers, routes, validations, and done some manual testing in console to make sure our relationships are working. In the next rendition, we will set up our user authentication system with sessions and cookies.
You can go through the code at the Jordeks github repository.
Happy coding!
Discussion (2)
Just what I was looking for, fantastic! Very well explained!! 👏☺️👌
Very clearly for new beginners! Keep going! | https://dev.to/mmcclure11/building-a-pokedex-with-rails-part-1-324h | CC-MAIN-2021-39 | refinedweb | 1,825 | 60.35 |
Hi again everyone. I am trying to change around this program I wrote to meet some new requirements, but getting stuck on how to do it.
*Write a program that reads a file consisting of students' test scores in the range of 0-200. The students scores should be stored into an array. It should then determine the number of students having sores in each of the following ranges: 0-24, 25-49, 50-74, 75-99, 100-124, 125-149, 150-174, and 175-200.
The first line in the input file will contain the number of students in the class.*
This is what I have to do... the program I had original would read in all the scores in the text file and output them to the console just fine, but my original array was dealing with the ranges of scores. I now need to change that so the student scores are used for the array, and the first number in the file is to show how many students in the class. I was trying to just edit my old program thinking it would be simple but I cant seem to get it to compile or run after all sorts of different tweaks. Any help appreciated.
-thanks-
import java.util.Scanner; import java.io.*; public class Grades { public static void main(String[] args) throws FileNotFoundException { Scanner inFile = new Scanner(new FileReader("scores.txt")); int range1, range2, range3, range4, range5, range6, range7, range8; int numstudents, sgrades; int begin = 0; int end; int[] gradeArray = new int[sgrades]; for (int i = 0; i < gradeArray.length; i++) gradeArray[i] = 0; while (inFile.hasNext()) { int score = inFile.nextInt(); int index = score / 25; gradeArray[index]++; } for (int i = 0; i < gradeArray.length; i++) { if (i == gradeArray.length - 1) { end = begin + 25; } else { end = begin + 24; } System.out.println(begin + "-" + end + ": " + gradeArray[i]); begin = end + 1; } inFile.close(); } } | https://www.daniweb.com/programming/software-development/threads/326402/student-grade-array-problem | CC-MAIN-2019-04 | refinedweb | 313 | 83.15 |
getdns response data¶
Response data from queries¶
- class
getdns.
Result¶
- A getdns query (
Context.address(),
Context.hostname(),
Context.service(), and
Context.general()) returns a Result object. A Result object is only returned from a query and may not be instantiated by the programmer. It is a read-only object. Contents may not be overwritten or deleted.
It has no methods but includes the following attributes:
status¶
The
statusattribute contains the status code returned by the query. Note that it may be the case that the query can be successful but there are no data matching the query parameters. Programmers using this API will need to first check to see if the query itself was successful, then check for the records returned.
The
statusattribute may have the following values:
getdns.
RESPSTATUS_NO_SECURE_ANSWERS¶
The context setting for getting only secure responses was specified, and at least one DNS response was received, but no DNS response was determined to be secure through DNSSEC.
The context setting for getting only secure responses was specified, and at least one DNS response was received, but all received responses for the requested name were bogus.
answer_type¶
The
answer_typeattribute contains the type of data that are returned (i.e., the namespace). The
answer_typeattribute may have the following values:
canonical_name¶
The value of
canonical_nameis the name that the API used for its lookup. It is in FQDN presentation format.
just_address_answers¶
If the call was
address(), the attribute
just_address_answers(a list) is non-null..
replies_full¶
The
replies_fullattribute is a Python dictionary containing the entire set of records returned by the query.
The following lists the status codes for response objects. Note that, if the status is that there are no responses for the query, the lists in
replies_fulland
replies_treewill have zero length.
The top level of
replies_treecan optionally have the following names:
canonical_name,
intermediate_aliases(a list),
answer_ipv4_address
answer_ipv6_address, and
answer_type(an integer constant.).
- The value of
canonical_nameis the name that the API used for its lookup. It is in FQDN presentation format.
- The values in the
intermediate_aliaseslistand
answer_ipv6_addressare the addresses of the server from which the answer was received.
- The value of
answer_typeis the type of name service that generated the response. The values are:
If the call was
address(), the top level of
replies_treehas an additional name,
just_address_answers(a list)..
The API can make service discovery through SRV records easier. If the call was
service(), the top level of
replies_tree hasan additional name,
srv_addresses(a list). The list is ordered by priority and weight based on the weighting algorithm in RFC 2782, lowest priority value first. Each element of the list is a dictionary that has at least two names:
portand
domain_name. If the API was able to determine the address of the target domain name (such as from its cache or from the Additional section of responses), the dict for an element will also contain
address_type(whose value is currently either “IPv4” or “IPv6”) and
address_data(whose value is a string representation of an IP address). Note that the
dnssec_return_only_secureextension affects what will appear in the
srv_addresseslist.
validation_chain¶
The
validation_chainattribute is a Python list containing the set of DNSSEC-related records needed for validation of a particular response..
call_reporting¶
A list of dictionaries containing call_debugging information, if requested in the query.
replies_tree¶
The names in each entry in the the
replies_treelist for DNS responses include
header(a dict),
question(a dict),
answer(a list),
authority(a list), and
additional(a list), corresponding to the sections in the DNS message format. The
answer,
authority, and
additionallists each contain zero or more dicts, with each dict in each list representing a resource record.
The names in the
headerdict are all the fields from RFC 1035#section-4.1.1. They are:
id,
qr,
opcode,
aa,
tc,
rd,
ra,
z,
rcode,
qdcount,
ancount,
nscount, and
arcount. All are integers.
The names in the
questiondict are the three fields from RFC 1035#section-4.1.2:
qname,
qtype, and
qclass.
Resource records are a bit different than headers and question sections in that the RDATA portion often has its own structure. The other names in the resource record dictionaries are
name,
type,
class,
ttl, and
rdata(which is a dict); there is no name equivalent to the RDLENGTH field. The OPT resource record does not have the
classand the
ttlname, but instead provides
udp_payload_size,
extended_rcode,
version,
do, and
z.
The
rdatadictionary has different names for each response type. There is a complete list of the types defined in the API. For names that end in “-obsolete” or “-unknown”, the data are the entire RDATA field. For example, the
rdatafor an A record has a name
ipv4_address; the rdata for an SRV record has the names
priority,
weight,
port, and
target.
Each rdata dict also has a
rdata_rawelement. This is useful for types not defined in this version of the API. It also might be of value if a later version of the API allows for additional parsers. Thus, doing a query for types not known by the API still will return a result: an
rdatawith just a
rdata_raw.
It is expected that later extensions to the API will give some DNS types different names. It is also possible that later extensions will change the names for some of the DNS types listed above.
For example, a response to a Context.address() call for would look something like this:
{ # This is the response object "replies_full": [ <bindata of the first response>, <bindata of the second response> ], "just_address_answers": [ { "address_type": <bindata of "IPv4">, "address_data": <bindata of 0x0a0b0c01>, }, { "address_type": <bindata of "IPv6">, "address_data": <bindata of 0x33445566334455663344556633445566> } ], "canonical_name": <bindata of "">, "answer_type": NAMETYPE_DNS, "intermediate_aliases": [], "replies_tree": [ { # This is the first reply "header": { "id": 23456, "qr": 1, "opcode": 0, ... }, "question": { "qname": <bindata of "">, "qtype": 1, "qclass": 1 }, "answer": [ { "name": <bindata of "">, "type": 1, "class": 1, "ttl": 33000, "rdata": { "ipv4_address": <bindata of 0x0a0b0c01> "rdata_raw": <bindata of 0x0a0b0c01> } } ], "authority": [ { "name": <bindata of "ns1.example.com">, "type": 1, "class": 1, "ttl": 600, "rdata": { "ipv4_address": <bindata of 0x65439876> "rdata_raw": <bindata of 0x65439876> } } ] "additional": [], "canonical_name": <bindata of "">, "answer_type": NAMETYPE_DNS }, { # This is the second reply "header": { "id": 47809, "qr": 1, "opcode": 0, ... }, "question": { "qname": <bindata of "">, "qtype": 28, "qclass": 1 }, "answer": [ { "name": <bindata of "">, "type": 28, "class": 1, "ttl": 1000, "rdata": { "ipv6_address": <bindata of 0x33445566334455663344556633445566> "rdata_raw": <bindata of 0x33445566334455663344556633445566> } } ], "authority": [ # Same as for other record... ] "additional": [], }, ] }
Return Codes¶
The return codes for all the functions are:
getdns.
RETURN_UNKNOWN_TRANSACTION¶
An attempt was made to cancel a callback with a transaction_id that is not recognized
getdns.
RETURN_NO_SUCH_LIST_ITEM¶
A helper function for lists had an index argument that was too high.
getdns.
RETURN_NO_SUCH_DICT_NAME¶
A helper function for dicts had a name argument that for a name that is not in the dict.
getdns.
RETURN_WRONG_TYPE_REQUESTED¶
A helper function was supposed to return a certain type for an item, but the wrong type was given.
getdns.
RETURN_DNSSEC_WITH_STUB_DISALLOWED¶
A query was made with a context that is using stub resolution and a DNSSEC extension specified. | https://getdns.readthedocs.io/en/latest/response.html | CC-MAIN-2022-05 | refinedweb | 1,151 | 54.02 |
Precursors to CAM
As stated in the summary, CAM represents the latest technology in validating XML documents. This, of course, implies that previous technologies validated XML documents.
The oldest is known by the acronym DTD, which stands for Document Type Definition. As with most entry points in emerging technologies, it was limited. It facilitated validation of XML document structure, but not much in the way of semantics. It also used somewhat awkward syntax to define the valid XML structure.
DTD was later replaced by XSD, which stands for XML Schema Definition. This was a much more powerful means of validating XML documents. First, the syntax was similar to an XML document itself. Next, it offered improved support for semantics. For the last several years, bleeding-edge technologists have opted to validate their XML documents with XSD as opposed to DTD.
Enter CAM
The history of technology has shown repeatedly that there is always a better way to build the proverbial mousetrap. XML validation is no exception to that principle. CAM represents the latest and most sophisticated entry in the family of technologies used to validate XML documents.
CAM is offered by the standards body known as OASIS. This organization has provided a number of specifications, most notably regarding Web services and electronic business Extensible Markup Language (ebXML).
CAM is more powerful and flexible than its predecessors. Unlike XSD, it doesn't tightly couple the data structure to the business rules. It also provides for context-driven validation, something which is lacking in both XSD and DTD.
For most people who are familiar with XML, CAM is also much easier to learn than XSD or DTD. This is because, in defining structure, the format of a CAM document is strikingly similar to an XML instance. And, in defining business rules, CAM uses the well-known (XPath.
The structure of a CAM template
In Listing 1, you can see that the structure of a CAM template is not complicated.
Listing 1. The structure of a CAM template
<as:CAM xmlns: <as:Header /> <as:AssemblyStructure /> <as:BusinessUseContext /> </as:CAM>
The root element,
CAM, defines the namespace used throughout the template itself as well as the level and version of CAM.
The
Header element provides specific information about the validation document. Many of the child elements (not shown) are self-explanatory:
Description,
Owner,
Version, and
DateTime.
The
AssemblyStructure element defines the actual structure of the XML document instance. This is where CAM and XSD part company. The
AssemblyStructure element provides validation against the structure of the XML document but does not contain any information about semantics.
And, finally, the
BusinessUseContext element provides the business rules that were lacking in the previous element. How are these business rules enforced? That is an excellent question, but first you should be familiar with how CAM defines structure.
How CAM defines structure
Listing 2 shows how CAM defines the structure for a simple purchase order.
Listing 2. A CAM structure for a simple purchase order
>
In looking at Listing 2, note that the structure of the XML document is defined almost exactly as though it were an XML instance. In this respect, most IT professionals probably agree that CAM is far more readable than XSD for people who already understand XML syntax. The reality of the situation is that it really is depicted as an XML instance, but with irrelevant content, which I will explain anon.
The
Structure element is the parent of the actual structure definition. It has an
ID attribute that identifies this particular structure. The only currently recognized value for the
taxonomy attribute is
XML.
Notice that most elements include values demarcated by percent signs (%). These are simply place holders for actual content that will be included in the XML instance. They serve to make the document easier to understand to the naked eye as opposed to providing any validation logic. Some people, when constructing CAM templates, actually place example values inside the elements as opposed to the more generic values included in Listing 2. How to best include place holders is up to the individual developers.
Now that you understand how structure is defined in CAM, it's time to learn a little more about how business rules are enforced.
How CAM enforces business rules
It's really this simple: XPath.
Yes, that's right. XPath.
And now you have yet another advantage of CAM versus older validation technologies. It uses syntax that most XML technologists already understand to enforce business rules. For these people, there is no need to learn another language to implement CAM validation within their applications.
Listing 3 has an example of the
BusinessUseContext element.
Listing 3. Enforcing business rules with CAM
>
To the experienced XML developer, this structure should be fairly easy to interpret. This is not only because the constraints use XPath, but also because the validation rules are named in standard English. Again, this is what makes CAM so attractive.
The rules themselves are defined within the
context element. Each rule is an
action parameter of one of the
constraint child elements.
Note the first rule:
makeRepeatable(//PurchaseOrder/LineItems/LineItem). As the name implies, this is telling the validator that the
LineItem child element of the
LineItems element is repeatable. This means that there can be many of them, which makes perfect sense because a
typical purchase order may contain many different items.
The next rule is about the
Comment element. This rule states that comments are optional. In other words, the XML document can be valid with an empty
Comment element.
The next rule enforces the maximum length, in characters, of the
State element. In this case, that maximum length is
2, which is the understood postal abbreviation for a state in the United States.
The next rule enforces the format of the date. Here, the format
DD-MM-YYYY is used, although you can certainly use other formats as well. In this case, a valid date would be something like 03-03-2009, meaning March 3, 2009.
The next rule enforces the format of the
Quantity element. In this case, the contents of that element must be a number conforming to the
### mask. In other words, a purchase order containing a line item with a four-digit number in the
Quantity element would be considered invalid. With this rule,
a purchase order cannot contain a line item that orders a quantity of more than 999 of any one product.
The next two rules,
Price and
TotalPrice, are similar to the previous rule. Like the
Quantity rule, they enforce a number mask. The difference is that the number mask allows for decimal points. This is because these two elements are dollar values that can contain fractional
values representing cents.
And, finally, there is a particularly interesting rule. It is interesting because it introduces a context-driven constraint. What exactly is that? It's a constraint that can validate an XML document based on the content of certain elements. In this case, if the total price of the purchase order exceeds $100,
then the
ShippingMethod element of the XML document can be empty. Otherwise, it cannot be empty. The business rule being applied here is that orders totaling $100 or more automatically get free standard shipping. For orders less than $100, the document must specify a shipping method.
Putting it all together
Listing 4 shows an entire CAM document assembled from the fragments provided earlier.
Listing 4. All together now
<?xml version='1.0'?> <as:CAM CAM <as:Header> <as:Description>Simple Purchase Order</as:Description> <as:Owner>developerWorks</as:Owner> <as:Version>0.1</as:Version> <as:DateTime>2009-07-07T12:00:00</as:DateTime> </as:Header> > </as:CAM>
As you can see, Listing 4 is little more than a concatenation of Listings 2 and 3. A
Header element is added, which simply identifies information about this particular validation file. In this case, a
simple description, an owner, a version, and a document date are added.
Although it is not shown in Listing 4, the
Header element can also contain parameters. The validation of the XML
document can vary based on the value of the parameters. For example, if a parameter
named
noMoreThan10LineItems is set to
true, the CAM document enforces a business rule that there can be no more than 10
LineItem elements in the entire order. This is an example of how powerful and
flexible CAM can be when it comes to validation. The benefit here is that you can simply change that parameter to
false to invalidate that rule.
The benefits of CAM versus its competition
Obviously, just because a certain technology is new does not mean that it is useful or provides a higher return on investment than its predecessors. CAM, however, has several distinct advantages compared to its competition.
First, CAM separates structure from business rules. This is a recurring pattern throughout software development and is not at all limited to CAM. For example, the Model-View-Controller (MVC) pattern in distributed object development environments separates the model from the view from the controller. Contrary to CAM, XSD tightly couples the structure and the business rules, resulting in higher maintenance overhead.
CAM also enables context-driven validation. In other words, CAM recognizes a dynamic structure based on the content of certain elements or attributes. So, if element X contains a certain value, a business rule is applied to element Y. If it contains another value, that business rule can instead be applied to element Z. This was demonstrated in Listing 3 with the final rule. In that case, purchase order documents with a total price of $100 or more do not need to specify a shipping method because the standard shipping is free for those orders. CAM's predecessors do not facilitate such complex validation.
Analyzing rule sets and structure is much easier with CAM. The structure is represented
as an XML instance in the usual tree format, thereby humans as well as computers can read it more easily. The grammar used to enforce business
rules is likewise intuitive:
makeRepeatable,
makeOptional,
setLength, and so on are not terribly difficult to decipher. And, although the rules and the structure are separated, they are in the same document, making it easy
to get a bird's-eye view of the overall validation requirements. XSD, on the other hand, requires an understanding of a whole new set of non-intuitive definitions—such as
complexType (What does that mean?)—and is not so
easily analyzed.
Sticking with the "you don't have to learn anything new" theme, CAM uses XPath. As shown previously, this is the language that enforces business rules on certain elements. Not only is XPath intuitive and easy to learn, it is already understood by most XML technologists. This makes the transition to CAM much smoother because the business logic validation does not require XML developers to learn something totally new. The XSD grammar is not anything like XPath.
Another advantage of CAM over XSD is that localization needs are more easily enforced with CAM. With XSD, enumerations are static and, therefore, cannot be made context-aware. However, with CAM, you can apply particular enumerations based on context values. In the emerging global marketplace, the need for such streamlined validation should be self-explanatory.
CAM templates also provide next-generation Service-Oriented Architecture (SOA) support. CAM supports business processing technologies such as Business Process Execution Language (BPEL), Business Process Specification Schema (BPSS), and Business Process Modeling Notation (BPMN) modeling tools. To quote from the Wiki: "Completing the SOA picture CAM has extension mechanisms that can be used to support semantic registry referencing (such as ebXML-regrep) and metadata definitions (such as CCTS and OWL) external to the templates that are key to next generation SOA exchanges." Also, CAM was developed by OASIS, so you can be sure that the organization will ensure that CAM is compliant, if not compatible, with its other standards.
Conclusion
CAM represents the latest generation of XML validation technologies. It provides numerous benefits over its predecessors. Those benefits include a separation of concerns regarding structure and business logic, dynamic validation based on context, interoperability with cutting-edge technologies, lower maintenance overhead, and it is easier to learn. CAM is also endorsed by a well-respected standards organization, OASIS.
CAM is an emerging technology. As such, it is not as well documented and does not enjoy the benefit of mass experience. However, it certainly is robust in its initial release and promises to be a much more efficient means of XML validation.
CAM is almost certainly here to stay and supplant its predecessors.
Resources
Learn
- The OASIS CAM Wiki: Learn more about CAM.
- On XML Schema Tutorial: Explore how to create XML Schemas, why XML Schemas are more powerful than DTDs, and how to use XML Schema in your application.
- DTD Tutorial: Learn how to use DTDs.
- Introduction to XML (Doug Tidwell, developerWorks, August 2002): XML, the Extensible Markup Language, has gone from the latest buzzword to an entrenched eBusiness technology in record time. Learn what XML is, why it was developed, and how it's shaping the future of electronic commerce.
- Validating XML (Nicholas Chase, developerWorks, August 2003): Validate files and documents to make sure that data fits integrity constraints. Learn what validation is and how to check a document against a Document Type Definition (DTD) or XML Schema document.
- Design XML schemas for enterprise data (Bilal Siddiqui, developerWorks, October 2006): Learn to use W3C XML Schema features to design data formats for production. | http://www.ibm.com/developerworks/library/x-cam/ | CC-MAIN-2015-06 | refinedweb | 2,256 | 55.84 |
hi jukka
IMO the approach in the namespaceregistry is something different:
in that case you have a different write-root that needs to make
sure that the root (and the editing session) gets updated once
the changes are persisted.
but in this case as michael explained the setup is that one session
should be informed about modifications made by another session.
this works for all operations that are 'attached' to the SessionDelegate
but the refresh doesn't work for those classes that are just associated
with the Root (such as e.g. the UserManager which btw. makes transient
modifications on the root associated with editing session. the read-write
trick doesn't help here). btw: it's one of the fundamental design decisions
for the oak security code to build the implentations on top of the OAK API.
this contrasts to the Jackrabbit setup which lead to a lot of hacks and
at the end didn't work very well.
so, what we would need is a refresh on the Root instead of just refreshing
upon session access. i just discussed this with Michael and we both fear
that
this will open a huge can of worms...
what we are basically trying to do is to changing one of the fundamental
concepts of OAK (refresh only occurs if manually triggered), which IMHO
needs a careful analysis of the consequences.
let's discuss that in the call this afternoon.
angela
On 8/6/13 10:41 AM, "Jukka Zitting" <jukka.zitting@gmail.com> wrote:
>Hi,
>
>On Tue, Aug 6, 2013 at 11:24 AM, Michael Dürig <mduerig@apache.org> wrote:
>> We might have similar issues with other entities tied to a session like
>> PrincipalManager, VersionManager, ... Basically all (indirect) callers
>>of
>> SessionDelegate#getRoot() are suspect... and that's quite a list.
>
>We sorted out a good pattern for doing stuff like this already with
>the namespace registry. Basically:
>
>a) When making transient changes or reading information that can come
>from an earlier repository snapshot, use sessionDelegate.getRoot() so
>that you see the exact same state as the rest of the session.
>
>b) When persisting changes directly to the repository or reading from
>the latest repository state without interference from transient
>changes, use sessionDelegate.getContentSession().getLatestRoot() and
>follow up with a session.refresh(true) to force the rest of the
>session to keep up.
>
>The abstract method pattern in ReadWriteNamespaceRegistry (or
>something similar) can be used to avoid a direct oak-jcr /
>SessionDelegate dependency.
>
>That pattern should cover the needs of the UserManager and other
>places without the need to manage the states of multiple independent
>Root instances.
>
>BR,
>
>Jukka Zitting | http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201308.mbox/%3CCE268987.5AE8%25anchela@adobe.com%3E | CC-MAIN-2016-44 | refinedweb | 438 | 55.34 |
Dear community,
I'm successfully using a Teensy 3.6 as a Unicode-compatible native MP3 player, and would now like to add MTP.
However, I am running into several dependency issues. There seem to be two libraries: MTP and MTP-t4.
Despite its name, MTP-t4 says it supports Teensy 3.x - and I would prefer to use it, as I read it supports MTP+Serial.
With MTP I am running into the problem, that it requires SdFatSdioEX, which does not seem to exist in SDFat 2.0.4,
which I am using in my main program. So I changed SdFatSdioEX into SdExFat, as in my main program, and File into
ExFile. This seems to do it, however, the implementation of MTP does not seem to support SdFAT's UNICODE feature,
which I am using in my main program (I do have a lot of foreign script files, which otherwise would not load and play
correctly). I could go further this road, by implementing UTC2-UTF8 conversion methods for the filenames, but I am
not sure if this has perspective.
With MTP-t4 I am running into the problem that the "FS" type is not found (Storage.h). It seems to be implemented
in the LittleFS library, which apparently is obligatory (IMHO not completely clear from the README, which I originally read
as suggesting that LittleFS support is optional). I installed LittleFS, but the FS type is still not found, until I add an
#include <LittleFS.h> into Storage.h. When I do this, I get the following:
And other issues related to "File".And other issues related to "File".Code:C:\Users\tobia\Documents\Arduino\libraries\LittleFS-main\src/LittleFS.h: In member function 'virtual File LittleFSFile::openNextFile(uint8_t)': C:\Users\tobia\Documents\Arduino\libraries\LittleFS-main\src/LittleFS.h:164:51: error: no matching function for call to 'File::File(LittleFSFile*)' return File(new LittleFSFile(lfs, f, pathname));
I assume this is a version incompatibility with my SDFat 2.0.4. Do I need the SDFat-beta to run this?
I installed SDFat-beta, but then my main program would not compile as the ExFile type is not found.
Can I safely change ExFile into File without losing my long filename & Unicode functionality?
If so, won't "LittleFS" etc. break sooner or later due to char16_t* vs char8* incompatibility, just like the original MTP?
Not sure which path I should take. Thank you for any input.
Thank you | https://forum.pjrc.com/threads/66350-Teensy-Arduino-cross-compatibility?s=a753b4237f9cc7b04cc0b5ad263c76c0&goto=nextnewest | CC-MAIN-2021-43 | refinedweb | 411 | 57.47 |
working with html
templates. code-wise, it’s difficult to keep the right set of templates with each html file.
is it possible to have a file of template(s), much like a file of
css, that one includes in the
head section of the
html file?
for example, to supplement the
style section in
head, one can link to a stylesheet, eg,
<link rel="stylesheet" type="text/css" href="mystyle.css" >
my app uses several collections of
templates. can they be handled similar to stylesheets, ie, linked to in a separate file, or does each
template definition need to be directly part of the original
html file?
Imagine we want to import a bunch of
<template>s in a separate file called templates.html.
In the (main) homepage index.html, we can import the file via HTML Imports:
<link rel="import" href="templates.html" id="templates">
In the imported file templates.html, add one or as many templates as you need:
<template id="t1"> <div>content for template 1</div> </template> <template id="t2"> content for template 2 </template>
The imported document is available from the
import property of the
<link> element. You can use
querySelector on it.
<script> //get the imported document in doc: var link = document.querySelector( 'link#templates' ) var doc = link.import //fetch template2 1 and 2: var template1 = doc.querySelector( '#t1' ) var template2 = doc.querySelector( '#t2' ) </script>
Note: you can place the above script in the main document, or in the imported one because the
<script>s inside an imported file are executed as soon as the are parsed (at download time).
2020 Update
Now that HTML Imports have been deprectated, you could use
fetch() to download HTML code:
void async function () { //get the imported document in templates: var templates = document.createElement( 'template' ) templates.innerHTML = await ( await fetch( 'templates.html' ) ).text() //fetch template2 1 and 2: var template1 = templates.content.querySelector( '#t1' ) var template2 = templates.content.querySelector( '#t2' ) console.log( template2 ) } ()
###
You need HTML 5 though
<head> <link rel="import" href="/path/to/imports/stuff.html"> </head>
###
Unfortunately,
<head> <link rel="import" href="/path/to/imports/stuff.html"> </head>
does not work.
The entire
stuff.html is stuck in there as
html as part of
head, and for all practicable purposes inaccessible.
In other words, a
template defined instuff.html
cannot be found usingdocument.querySelector()`, and is therefore unavailable to the script.
fwiw, I don’t really understand any advantages of
import the way it works now – for it to be any good it needs to strip off (rather than adding) all the outer html before it appends the contents to
head – not its current action. | https://throwexceptions.com/javascript-can-one-include-templates-in-a-html-file-similar-to-css-throwexceptions.html | CC-MAIN-2020-50 | refinedweb | 441 | 67.55 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
Google has released a new web toolkit. The possible significant item for LtU is the Java-to-Javascript compiler that is one of the central components of the toolkit. I am unconvinced of the value of coding in java rather than in javascript. However, the marketing folks have been thinking along the same lines as some of the threads here on LtU with respect to static type checking, code completion, etc. Maybe some of the static fans could comment on this?
cheers,
Jess
:-)
The subject line says what I think Google has done here. They observed that cross-browser Javascript is a pain and that with AJAX you're basically calling server-side logic anyway, so why not encapsulate the cross-browser cruft in a meta-language, and since you're calling the server anyway, why not make the meta-language the language the server uses? This seems deeply pragmatic to me: your HTML/CSS wonks get to continue doing what they do best; your Java wonks get to continue doing what they do best, and neither of them has to worry about IE6 vs. Netscape 7 vs. Firefox 1.0 vs. Firefox 1.5 vs. Safari 1.2...
your HTML/CSS wonks get to continue doing what they do best; your Java wonks get to continue doing what they do best
True, my DSL approach doesn't buy you this compatibility. It can buy lock-in, though :-)
HaXe tries to do something similar: it is a Java-like statically typed language, that can be compiled either
The nice point is that you can pass objects that are compiled from the same source form the client to the server and back (if they don't contain things like DOM nodes or database handles that make no sense on the other side).
There seems to be a trend here, with a whooping number of two projects doing similar things...
It seems like the tide is turning towards single language solutions, as opposed to the old wisdom of using different languages for the client, server, scripts etc.
To make this work smoothly, we'll need language innovation. This has to be a good rhing, right?
From the current trend, i'd say this "single language solution" looks more like "different specific languages all embedded and integrated into one", like javascript+html+more with Links and C#+SQL with LINQ.
If this turns out messy and hard, i don't believe it'll be much better than lots of little language sections separately developed, hopefully by specialists in their respective areas... it's really annoying when programmers need to be (crappy) webdesigners because the web UI is scattered among lots of "reusable components" and files...
Yes I agree with this. Taking a language such as Java that hasn't been designed with "targeting different runtimes" in mind might not work very well. For instance if you want to bind a simple JS "onclick" event, how can you do since there is neither local functions nor method rebinding in Java ? You need to modelize all you API with the infamous listeners and wrap the native API accordingly.
As for the "single language", I think it makes sense for the Web. I would say that currently people are experimenting a lot with "one language for several runtimes" with the opposite of "one runtime for several languages". The main reason is that language designers see that in its current state Javascript have a lot of drawbacks that need to be fixed by using a more highlevel strongly-typed language.
haXe is bringing its own solutions, especially for communications between JS/Flash/Server (see for a Tutorial on haXe Remoting).
Its's not typical do assign event listeners directly in JS nowadays anyway. It's more typical to use addEventListener(), attachEvent(), or some bit of code that wraps around them and detaches them on page unload to avoid memory leaks.
So while I'm quite weary of GWT, that's not an aspect it can be criticised upon.
Meaning I don't think anyone was advocating the benefits of developing in different langauges on client and server. All these technologies appeared at the same time and the lowly app developer was stuck with the job of making things work.
As web-based apps get more complex, and context-switching between server langague,client language, html, xml, xslt, etc gets more expensive for the developer to process, simplifling things by eliminating the context switches seems like natural evolution.
I am not sure how much wisdom was involved, but there were many discussions about these issues (some more well informed than others), which influenced the ways people chose to do things.
I don't mean to agree or disagree -- just explode this into more detail for thought.
Ehud Lamm: It seems like the tide is turning towards single language solutions, as opposed to the old wisdom of using different languages for the client, server, scripts etc.
It's like a single language solution for an initial simulation of a later deployment context that uses multiple languages. It seems very pragmatic as a way to avoid/remove the fragility of development directly in JavaScript. And simulating a system you target is a clever way to do things, though burdened with some up front costs (which folks usually hate with a passion).
But in a one vs many solution space, it seems more a move to many than a move to one, when you see the use of Java as a (temporary) substitute for JavaScript before deployment in JavaScript. If in the long run this was just a move to displace JavaScript with Java, yeah, that would be more a one language solution.
The flavor of Google's tactic (using Java to for development time debugging) resembles my notion to develop in an easier, very dynamic language (Lisp, Smalltalk, and/or Python) before deploying in whatever the runtime context demands when I apply code to some application delivery context.
I'm just not real excited about the use of Java for this, since Java reminds me of the verbosity of C++, which is a drag. But Java is an extremely practical choice given current tools, and given desire to see some adoption by folks out there trained now, as they are.
I'm glad Google is doing this, since it makes using a sequence of languages in development sound less crazy to folks who otherwise assume this can't work (and then offer little but flak). I wish more folks would get with the program, and learn everything you target is a machine you can design another layer on top of, to wrangle some control over your attempts to develop for the machine.
Ehud Lamm: To make this work smoothly, we'll need language innovation. This has to be a good rhing, right?
Yes, I think so. If nothing else, something in the web toolkit would reify some of the model of the whole machine involving JavaScript, web pages, and servers, etc. Not only would these stakes in the ground be useful, but the higher order manipulations of this reified system would be a welcome change to random chaos in development. (Does "random chaos" sound negative? Probably. :-)
I don't think anyone but Google would try this; I expect it has something to do with their reputation for hiring folks with higher educations, and thus with a tolerance for abstractions and plans, which are very much frowned upon in the industry today. Modeling anything at all is a big extravagance most places, where they don't even want you to write plain english notes on how your code works, because anything but code appears a frivolous waste of time.
Language innovation requires something for the innovation to target; so now we might get some if some higher order way of expressing how a system works might get some adoption. There might be some innovation in the way the web toolkit generates JavaScript. (There's needn't be, though. The translation could be written in a non-retargetable way with very opaque manner that brooks no interference, forking, or understanding. Depends on the goals of the toolkit maker.) [cf]
the use of Java as a (temporary) substitute for JavaScript before deployment in JavaScript
It seems to me JS is simply used in toolkits such as this as the "native" language of the target platform. The assembly language of the browser world, if you will. Amusingly enough the JVM was supposed to provide this portability layer, but beacuse of the way java applets work, the AJAX application model doesn't use java for client side programming.
The clue to look for in these situations is whether the generated code (in this case the JS code) is supposed to be directly maintainable (and readable) by people.
I've been working with it for the past few days, and I must say it's quite excellent. I don't really think any of the code it generates would be all that maintainable by humans, but the code I am writing in Java is a lot better organized than what I have written in JavaScript and the tools support is definitely making web development feel more maintainable overall. I'm a bit queasy about abdicating direct control of the source in the browser to a Google compiler. I've already thrown something at it that caused Firefox's JavaScript engine to freak out, but actually what I wrote was unecessarily complex automatic resizing on window resize because I wasn't understanding how to get what I wanted out of their dock panel. The benifits are so far outweighing my doubts about abandoning direct control over every line of scripting code in the browser. The real achievement for me is that they have managed to bring a decent component framework to the web. Existing JavaScript libraries are just that... libraries. It doesn't really feel like you're getting the kind of rich ready-made widgets that GWT gives you. The single-language approach also makes it extremely natural to implement RPC. You just define an interface for an object you wish to call remotely and write an asynchronous sibling interface, and GWT automatically creates an implementation of the asynchronous interface based on the original on the client side. You point it at a URL of the implementor of this interface and (as far as the abstraction they lay out is concerned) you're sending Java objects across the wire. It's very smooth. Anyway, thank god I'm over Ruby. Despite the Rails hype, I think Java is going to be an effective language for web development for some time to come. You just can't judge the entire language and community by the uncritically-adopted J2EE frameworks put out there by major vendors. Stuff coming out of Apache and Codehaus really impresses me. I think a lot of people who read LtU are pretty down on Java and rightfully so, considering it isn't exactly breaking massive theoretical ground. When working on research software in it I hated it a lot, but now that I'm writing web / business code, I'm starting to come around. Definitely less fun than Scheme and OCaml, but it's such a luxury to have selection when it comes to off-the-shelf tools. Java has a good community going for it. I recommend checking out Google's addition to it.
Thanks for the experience report!
Did anyone else try it out?
function a(){return window;}
function b(){return this.c + '@' + this.d();}
function e(f){return this === f;}
function g(){return h(this);}
function i(){}
_ = i.prototype = {};_.j = b;_.k = e;_.d = g;_.toString = function(){return this.j();};_.c = 'java.lang.Object';_.l = 0;function m(n){return n?n.c:null;}
function o(){return $moduleName;}
p = null;function q(){return ++r;}
function s(t){return t != null?(t.$H?t.$H:(t.$H = q())):0;}
function u(v){return v != null?(v.$H?v.$H:(v.$H = q())):0;}
r = 0;function w(){w = a;x = y('[N',[0],[0],[0],null);return window;}
function z(){return this.A;}
function B(){var C,D;C = m(this);D = this.E();if(D !== null) {return C + ': ' + D;}else {return C;}}
function F(ab){w();return ab;}
function bb(cb,db){w();cb.A = db;return cb;}
function eb(fb,gb,hb){w();fb.ib = hb;fb.A = gb;return fb;}
function jb(){}
_ = jb.prototype = new i();_.E = z;_.j = B;_.c = 'java.lang.Throwable';_.l = 1;_.ib = null;_.A = null;function kb(lb){F(lb);return lb;}
function mb(nb,ob){bb(nb,ob);return nb;}
function pb(qb,rb,sb){eb(qb,rb,sb);return qb;}
function tb(){}
_ = tb.prototype = new jb();_.c = 'java.lang.Exception';_.l = 2;function ub(vb,wb){mb(vb,wb);return vb;}
function xb(yb,zb,Ab){pb(yb,zb,Ab);return yb;}
function Bb(Cb){kb(Cb);return Cb;}
That source code looks awfully familiar. Look through the Google Calendar source code and you'll see what I mean. For example:
(This link may become inactive very quickly.)
I hadn't heard that Google used this framework to build it; I wonder if that's true?
(Edit:) On second thought, it's probably just the short variable names due to it being compiled. I don't see any instances of "_.prototype".
Looks like the .NET camp is trying out something similar with Script#. I find the wide range of responses to Nikhil's blog entry interesting.
In the C#-to-JavaScript case, I think the win comes from two things: the use of the Visual Studio IDE for development, and the use of the ASP.NET object/event model for the page. I can't say I'm convinced that C# is the best choice for coding this kind of thing (I feel the same way about Java), but I think leveraging an existing server-side code model, as Script# does, is potentially very powerful.
GWT (the google web toolkit) is probably part of the strategy for the google pc (if there is one).
Boutin notes that Vic Gundotra would be anyone's first pick to persuade software makers to build Web-based programs to Google's specs. While this may be true, it is not the whole story. The real clue is that you need developer tools, and if google is spending energy and money on developer tools it suggests that the google pc is proablby more than just a rumor. Microsoft is, of course, basing a large part of its stategy around C# and the .Net framework; Sun has Java. Google's task seems harder since the programming model required for modern web applications is more complicated.
We will keep monitoring how this story develops and see if my predications turn out to be true.
If we switch from JavaScript to Java, we gain tool support: static type checking, auto-completion, refactoring, debugging etc. But what do we lose?
We lose free-standing functions. For example, I have a function at(array,indices) that returns another array. In Java, I have to put it in some class - and no, I can't put it in Array, because Array in Java isn't even a class.
We lose closures. I have a function map(func,array) that returns another array. Now I need to put in some class (syntax inconvenience for all users), and func needs to become a functor object. What should the functor object return? Object? Hint hint: in my code, it sometimes doesn't return anything. Just use "return null"? Semantic inconvenience for all users.
We lose some useful argument-passing conventions. For example, I have a
function bind(f,x) { return function(y,z) { return f(x,y,z); }; }
This kind of code is tricky to translate to Java, because f can take 1, 2 or 3 arguments. In JavaScript, it just works.
We lose useful collections. The JavaScript array has [] syntax and is auto-resizing; Java doesn't have this. The JavaScript associative array has [] syntax and dot syntax; it can be used to implement explicit namespaces, objects and what you will. Java doesn't have this. Instead we get tons of collection classes; but I have never needed a custom collection class in JavaScript, the two built-in ones were always enough.
We lose prototypes.
We lose eval.
In short, we lose all higher-order programming and all metaprogramming. Is the switch worth it?
That's why you can use haXe instead.
You get :
I'll try it when I get tired of JavaScript :-)
Once i wrote a eclipse plugin to create ActionScript from Java, see osflash.org/j2as, runs on 3.1 only. My reasoning was to be able to use all the great java tooling in eclipse for my actionscript work. But in the end i stopped working on it because of two reasons:
1. It's very hard for a Javaprogrammer who doesn't know ActionScript, to create the appropriate code.
2. When you need to debug the code, you are debugging actionscript, and not java. This makes it very hard to see from which line in your java code a problem comes from. | http://lambda-the-ultimate.org/node/1485 | crawl-002 | refinedweb | 2,922 | 62.17 |
Edit 1/6/2013: As of the Scala 2.10 release, the Scala actors have been deprecated in favor of akka. For a guide to shutting down akka actors, checkout:.
I have been working more and more in scala as of late and have encountered the problem of shutting down actors. The only proper way I have found is to use the Actor.exit() method. While this seemed simple to me, I was unable to find any documentation about it that provided an explanation of how it works and what to expect from its behavior. So based on my experience, here is what I have learned.
Actor.exit()
Actor.exit() is the "proper" way to exit a Actor.loop{}. When called,
an exception is thrown internally to the Actor. This sets the
state of the Actor as being
Terminated and stops processing of messages
immediently, leaving any queued messages unprocessed. From
this point on, any message sent to the Actor is simply queued up. You can
restart the Actor later on to continue processing the queued messages if you
would like. Next
an Exit message is sent to any linked Actors. If
no
reason is supplied to Actor.exit(), than the message
Exit(self, 'normal) is sent. Otherwise the message
Exit(self, myMessage) is
sent.
Here is an example to demonstrate this behaviour:
import actors._ case class WorkUnit(i: Int) object StopWork object Producer extends Actor { def act() { loop { receive { case WorkUnit(i) => println("Doing work with: " + i) case StopWork => { println("Stopping producer.") exit() } } } } } Producer.start() Producer ! WorkUnit(1) Producer ! StopWork // Hanging work is made here. Producer ! WorkUnit(2) Thread.sleep(1000) Producer.restart() Producer ! WorkUnit(3) Producer ! StopWork
And the output of this would be:
$ scala test.scala Doing work with: 1 Stopping producer. Doing work with: 2 Doing work with: 3 Stopping producer.
It is worth noting that the Actor.exit() method of shutting down an actor
has the potential of leaving unprocessed messages in your actors mailbox. In an
ideal world I believe that an
Actor.shutdown() method would be added to the actor
that could stop messages from being accepted, process the remaining messages and
then close gracefully. | http://www.bigjason.com/blog/actor-exit-missing-manual | CC-MAIN-2016-30 | refinedweb | 363 | 58.28 |
bed 0.0.5
Small BDD testing framework for D.
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependences section:
bed
bed is a BDD testing framework for D heavily inspired by TJ Holowaychuk's Mocha (Node.js). It's still a WIP.
Current API
(heavily subject to changes - I'm looking at the dangling `t` param)
import bed; int add(int x, int y) { return x + y; } unittest { describe("add(x, y)", { it("1 + 3 = 3", { assert(add(1, 3) == 4); }) it("1 + 10 = 4", { assert(add(1, 10) == 11); }) it("2 + 2 = 5 (meant to fail)", { assert(add(2, 2) == 5, "what the hell happened?"); }) describe("when x is a negative number", { it("-10 + 2 = -8", { assert(add(-10, 2) == -8); }) it("-2 - 2 = -5", { assert(add(-2, -2) == -5, "oh my!"); }); }); }); }
Where I am at (approximately) with the output (reporter system):
LICENSE
This code is licensed under the MIT License. See LICENSE for more information.
- Registered by Pedro Tacla Yamada
- 0.0.5 released 8 years ago
- yamadapc/bed
- MIT
- Authors:
-
- Dependencies:
- colorize
- Versions:
- Show all 8 versions
- Download Stats:
0 downloads today
0 downloads this week
0 downloads this month
3183 downloads total
- Score:
- 0.0
- Short URL:
- bed.dub.pm | https://code.dlang.org/packages/bed | CC-MAIN-2022-05 | refinedweb | 221 | 60.85 |
In the last post, we saw how restructuring the code so that a view model is used to prepare the data for the view helps decouple the view from the calculations required to process the data before displaying it. In this post, we’ll have a look at how to apply a similar approach to user input.
We’ll modify the ComicShop site by adding in a boolean parameter specifying whether we’ve read a particular comic. To do this, we’ll need to modify the underlying database by adding a Read field. You can do this by opening the database in Visual Studio’s Server Explorer (double-click on the database file in Solution Explorer to do this), then navigating to the Books table, then right-clicking and selecting Edit Table Schema. Add in a column called Read, and set its type to ‘bit’. Set its default value to 0.
We also need to modify the Book class by adding a bool property called Read.
To make things easier in what follows, we will also modify the home page so that it displays the Read status of each comic book. We can do this by modifying the IndexModel.cshtml code to this:
@using ComicShop.Models; @model List<Book> <h2>Comics</h2> <p><a href="/Home/Add">Add new comic</a></p> <p><a href="/Home/Summary">Summary</a></p> <p><a href="/Home/Read">Comics read</a></p> <ul> @foreach (var comic in @Model) { <li> @comic.Title: <b>@comic.Volume</b> (@comic.Issue) [@(comic.Read ? "Read" : "Unread")] </li> } </ul>
Now we’re ready to add a page on which the user can change the Read status of one or more comics. We’d like to display a table of the comics in the database, with each row in the table showing the title, volume and issue of the comic, and a checkbox allowing the Read status to be edited. The controller method for this is very simple:
public ViewResult Read() { var model = comicRepository.GetComicList(); return View(model); }
Before we construct the view, we need to stop and think about what the Read view will return. All we really need to identify the read status of a comic is the comic’s primary key (the Id column in the database) and the read status itself. The input model for this page is then the simple class:
namespace ComicShop.Models { public class ComicReadStatus { public int Id { get; set; } public bool Read { get; set; } } }
This just uses the same code we had before, since the view requires the same data as the home page. The Read.cshtml file looks like this:
@using ComicShop.Models; @model IEnumerable<Book> @{ ViewBag. <table> <tr> <th>Title</th> <th>Volume</th> <th>Issue</th> <th>Read?</th> </tr> @{ int index = 0; } @foreach (var comic in Model) { <tr> <td>@comic.Title</td> <td>@comic.Volume</td> <td>@comic.Issue</td> <td> @Html.CheckBox("comicRead[" + index + "].Read", comic.Read) <input name="@("comicRead[" + index + "].Id")" value="@comic.Id" type="hidden" /> </td> </tr> index++; } </table> <button name="submit">Update</button> </form> </div>
The page constructs a table in the same way as we did for the ComicSummary page in the last post. The one notable feature here is the addition of the checkbox. Rather than use bare HTML for this, we’ve used one of MVC’s Html helper functions. This function takes 2 parameters; the first is the name of the checkbox and the second is its initial value.
Since we’re displaying a table of comics, there will be a checkbox for each comic, so we name the checkbox in such a way that the table builds an array. The name of the checkbox is translated dynamically into a data type, so we build up the array by defining an ‘index’ parameter to number the rows in the table and insert this into the ‘comicRead’ array for each row in the table. It’s important here to make sure that the properties of the array are the same as those in the data type we are to export from the form. In this case, we’ll be exporting a list of ComicReadStatus objects, so each array element must have a bool Read and an int Id property. The Read property will be set to the value in the checkbox when the form’s submit button is pressed.
If you’re familiar with basic HTML, you might know that a form returns a value for the checkbox only if it is checked, so that if the checkbox is cleared, there would be no corresponding data sent in the post from the form. The Html.Checkbox() function gets around this problem by defining an additional hidden HTML control with the same name as the checkbox, and a permanent value of ‘false’. If the checkbox is true, it overrides this hidden field, but if the checkbox is false it is ignored and the hidden field then gets sent back from the form. Thus we will have a definite value for each checkbox whether it is true or false.
The explicit hidden field in the Read.cshtml code passes the comic’s Id value back from the form. Thus the output of the form is a list of ComicReadStatus objects, each of which contains a primary key of a comic and its Read value as specified by the user on the form.
In the form definition on line 10, we see that the action called by submitting the form is UpdateReadStatus, so we’ll need to write that before we try to use the form (although you can run the site now and go to the /Home/Read page to see the form if you like).
The action method in HomeController is again very simple:
public ActionResult UpdateReadStatus(List<ComicReadStatus> comicRead) { comicRepository.UpdateReadStatus(comicRead); return RedirectToAction("Index"); }
The input parameter is a List<ComicReadStatus>. You might wonder how the program knows to convert the data sent back from the form into this structure. This is a feature of MVC’s model binding technology, and for now it’s probably safer just to accept that it works. One thing is important though: you must ensure that the name of the parameter (‘comicRead’ here) is the same as that of the array you defined in Read.cshtml’s checkbox. Unlike the usual C# parameter names, where the parameter in the function definition doesn’t have to match that of the variable that is sent into the function, here it does matter, and if you make the names different, the object received by the action method is null (with all the pain that that generates when you try to run the method).
We’ve delegated the work done by UpdateReadStatus() to a method called UpdateReadStatus() in the ComicRepository. This allows us to decouple the controller from the access to the data source, as we’ve done so far in order to enable unit testing. The last line of the action method redirects the browser back to the home page where you can see the results of your edits.
In the ComicRepository class that we’ve been using for interaction with the database, we therefore need to write some code which saves any changes made by the user back to the database. Herein lies a bit of a problem, since the Read view has no connection with, or knowledge of, where its data comes from (which is correct). However, as such, there’s no way to tell from the data sent back by the form which comics have had their Read status changed. There’s really only one way we can be sure of saving all the changes to the database: we’ll have to iterate over all the comics that were listed on the Read view and check to see which ones have had their Read status changed. This obviously isn’t very efficient, but the only other way of doing this involves responding directly to the user’s clicks on the Read page, and that will take us too far afield. So we’ll content ourselves with a fairly brute force method for updating the database. (Actually, a proper view wouldn’t have that many lines in a table anyway, so this probably isn’t all that inefficient, but still…)
Here’s the code in the ComicRepository class:
public void UpdateReadStatus(List<ComicReadStatus> comicReadStatus) { foreach (var comic in comicReadStatus) { Book book = database.ComicBooks.Find(comic.Id); if (book.Read != comic.Read) { book.Read = comic.Read; database.Entry(book).State = System.Data.EntityState.Modified; } database.SaveChanges(); } }
For each comic in the list, we use the DbSet’s Find() method to look up the full comic object in the database. Remember that the comicReadStatus list contains only ComicReadStatus objects, which contain only the Id and Read fields for a given comic book. The Find() method uses an object’s primary key to look it up in the database and then returns the full object.
We then check to see if the Read field has changed and if so, we change the Read field in the full Book object, and then mark this object’s State as Modified. Finally SaveChanges() will save all the rows that have been marked as modified. | https://programming-pages.com/2012/09/20/mvc-user-input-model/ | CC-MAIN-2018-26 | refinedweb | 1,543 | 65.96 |
Prev
C++ VC ATL STL Concurrency Experts Index
Headers
Your browser does not support iframes.
Re: Proper subclassing of streambuf
From:
James Kanze <james.kanze@gmail.com>
Newsgroups:
comp.lang.c++
Date:
Sat, 26 Feb 2011 04:15:52 -0800 (PST)
Message-ID:
<60675eb4-2d73-4b8f-988f-4912c5a56639@u12g2000vbf.googlegroups.com>
On Feb 25, 5:07 pm, mathieu <mathieu.malate...@gmail.com> wrote:
I am trying -again- to understand how to properly implement a
subclass of streambuf. In the following example I am trying to
reproduce a case where sync() should refuse to write any more
characters.
It's not too clear what you mean by that, but the default
behavior of sync() (if you don't override it) is to succeed
withouth doing anything. This is appropriate in cases where the
managed subsequence is identical with the actual underlying
controlled sequence. Otherwise, sync() must be overridden to
"synchronize" the controlled sequence with the managed
subsequence in the character array, by writing the characters in
the array to the controlled sequence. I'm not sure what you
mean by "refuse to write any more characters": if the controlled
sequence has a maximum fixed length, then overflow() (not
sync()) should ensure that the available buffer never allows
more characters to be written. (In most cases, sync() will not
be called until the file is closed.)
For the purpose of the exercise I used a fixed size buffer to
quickly get a segfault.
How does using a fixed size buffer quickly cause a segfault?
Could anyone of you comment on the
code and let me know how they would handle case where sync()
should not write anymore (obviously you do not have access to
the size of the buffer as it just an example).
If you're writing to a memory buffer, you don't have to do
anything in sync(); the default does exactly what you want.
#include <iostream>
#include <string>
#include <cassert>
#include <cstring>
class windowbuf : public std::streambuf {
char *buffer;
public:
windowbuf(char *b, size_t len):buffer(b) { setp(b, b+len); }
Here, you've defined the character array passed as an argument
to be the managed character array (sub)sequence. If this is the
also the underlying controlled sequence (and I don't see
anything else that could be the underlying controlled sequence),
then that's all you have to do. Period: all of the other
functions are invoked to synchronize the character array with
the controlled sequence, or move the subset of the controlled
sequence represented by the character array. If the character
array is the controlled sequence, then they don't have to do
anything, and the implementations in the base class are fine.
int_type sync ();
int_type write( const char *buffer, size_t len );
int_type overflow (int ch);
The functions Sync and overflow should not be public, but
protected. And I'm very suspicious of a public write function:
if you're trying to provide two different ways for client code
to write to the buffer, it's going to cause problems.
And by the way, sync returns an int, and not an int_type. (Of
course, for the specialization you're dealing with, int_type is
an int. For a non-template implementation, like this, I'd
normally declare both sync and overflow to return int. But
int_type is only correct for overflow.)
};
windowbuf::int_type windowbuf::write( const char *buf, size_t len )
{
memcpy(buffer, buf, len );
buffer += len;
return len;
}
When would you call this function, and what is it supposed to
do. It looks sort of like xsputn, except that it doesn't take
into account anything which has already been written to the
buffer, nor the remaining length in the buffer. A correct
implementation of xsputn would be more along the lines of:
windowbuf::streamsize windowbuf::xsputn( char const* source,
streamsize len )
{
streamsize toCopy = std::min( len, epptr() - pptr() );
memcpy( pptr(), source, toCopy );
return toCopy;
}
(This is not actually sufficient. If toCopy is less than len,
it should call overflow with the next character, or call sync,
if that's what overflow does, and try to write the remaining
characters.)
It both takes and returns a streamsize. An assert that the
argument isn't negative would probably be in order.
The implementation of this function is purely an optimization,
however. The implementation in the base class simply calls
sputc len times, which will ultimately have the same effect.
windowbuf::int_type windowbuf::sync ()
{
if (pptr () && pbase () < pptr () && pptr () <= epptr ())
{
int n = write( pbase(), pptr() - pbase() );
setp (pbase (), pbase() + n);
}
return 0;
}
And what on earth is this supposed to do? Your sequences are
already sync'ed. If they weren't (and the stream is pure
output, without support for seek), then sync() should copy the
range [pbase(),pptr()) into the underlying sequence, and then
call setp to reinitialize the buffer pointer. If the original
buffer size is to represent the maximum number of characters
which can be written, this would be:
setp( pptr(), epptr() );
..
int windowbuf::overflow (int ch)
{
if( pbase() == 0 ) return traits_type::eof();
How could this ever happen, given your code. You set pbase() to
non-null in the constructor, and you never set it to null later.
if( ch == traits_type::eof() )
return sync();
Overflow should not return what sync returns, since the
semantic is potentially different. (sync returns -1 in case of
error, overflow returns EOF. On all systems I've worked on, EOF
has been -1, but all that is formally required is that it be
negative.)
if( pptr() == epptr() )
sync();
*pptr() = (char_type)ch;
If pptr() == epptr(), this would cause undefined behavior. Or
if (as would happen in your sync), you've set epptr() beyond the
actual end of your buffer, and pptr() is beyond the end of your
buffer, you'll also get undefined behavior.
pbump(1);
return ch;
}
Note that one frequent trick is to pass an end pointer to setp
that is one less than the actual buffer size. That way,
overflow can still put its argument into the buffer and output
it with the rest.
--
James Kanze | http://preciseinfo.org/Convert/Articles_CPP/Concurrency_Experts/C++-VC-ATL-STL-Concurrency-Experts-110226141552.html | CC-MAIN-2021-49 | refinedweb | 1,008 | 59.53 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
At long last, I finally have a prototype version of polar plots / axes
/ transforms in CVS (mirrors may lag).
I don't use polar plots a lot in my own work, so those of you who do
should provide some feedback on the implementation, appearance and
functionality.
See examples/polar_demo.py (included below)
Here is a snapshot of the src distribution if you have CVS problems
In particular, I need some feedback on how setting axis limits should
work and what, if anything, the navigation controls are expected to
do. These issues are discussed a bit more in the polar_demo.py
example file.
Let me know....
JDH
#!/usr/bin/env python
#
# matplotlib now has a PolarAxes class and a polar function in the
# matplotlib interface. This is considered alpha and the interface
# may change as we work out how polar axes should best be integrated
#
# The only function that has been tested on polar axes is "plot" (the
# matlab interface function "polar" calls ax.plot where ax is a
# PolarAxes) -- other axes plotting functions may work on PolarAxes
# but haven't been tested and may need tweaking.
#
# you can get get a PolarSubplot instance by doing, for example
#
# subplot(211, polar=True)
#
# or a PolarAxes instance by doing
# axes([left, bottom, width, height], polar=True)
#
# The view limits (eg xlim and ylim) apply to the lower left and upper
# right of the rectangular box that surrounds to polar axes. Eg if
# you have
#
# r = arange(0,1,0.01)
# theta = 2*pi*r
#
# the lower left corner is 5/4pi, sqrt(2) and the
# upper right corner is 1/4pi, sqrt(2)
#
# you could change the radial bounding box (zoom out) by setting the
# ylim (radial coordinate is the second argument to the plot command,
# as in matlab, though this is not advised currently because it is not
# clear to me how the axes should behave in the change of view limits.
# Please advise me if you have opinions. Likewise, the pan/zoom
# controls probably do not do what you think they do and are better
# left alone on polar axes. Perhaps I will disable them for polar
# axes unless we come up with a meaningful, useful and functional
# implementation for them.
#
# Note that polar axes are sufficiently different that regular axes
# that I haven't stived for a consistent interface to the gridlines,
# labels, etc. To set the properties of the gridlines and labels,
# access the attributes directly from the polar axes, as in
#
# ax = gca()
# set(ax.rgridlines, color='r')
#
# The following attributes are defined
#
# thetagridlines : a list of Line2D for the theta grids
# rgridlines : a list of Line2D for the radial grids
# thetagridlabels : a list of Text for the theta grid labels
# rgridlabels : a list of Text for the theta grid labels
from matplotlib.matlab import *
r = arange(0,4,0.001)
theta = 6*pi*r
polar(theta, r)
title("It's about time!")
savefig('polar_demo')
ax = gca()
show() | http://sourceforge.net/p/matplotlib/mailman/message/7828415/ | CC-MAIN-2015-18 | refinedweb | 512 | 58.11 |
JPEG 2000 writer; may not write multiple times within a single execution
We have a system that generates and converts imagery. During testing, we came across a problem when requesting multiple JPEG 2000 files; where the image for the first request was written correctly, but subsequent writes failed. This problem was found in part of a larger system, but a simplified test case can be seen below. In the failiure case, a file is created, but contains 0 bytes of data. The JPEG2000 writer does not report any errors or throw any exceptions when it fails.
Using this test, with a 4000x4000 pixel source image, imageIO 1.0_01, windows native writerand the JVM with a max heap of 1024Mb.demonstrates this problem. However, if the maximum heap is reduced to 768Mb, then the problem is no longer apparent, and all 4 images are written. If the source image is 16x16 pixels then all 4 are also written.
I can't release the original image, but I can hopefully generate one that causes the problem to appear. In the code below there is a hard coded reference which will obviously need changing should the code be used for more then illustrative purposes.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class TestJ2K
{
private final static Logger LOGGER = Logger.getLogger(TestJ2K.class);
public static void main(String[] args)
{
File source_file = new File("c:/imageData1184165300597.jpg");
BufferedImage image;
try
{
if (source_file.exists())
{
image = ImageIO.read(source_file);
File dest_1 = new File("c:/image_1" + ".jp2");
File dest_2 = new File("c:/image_2" + ".jp2");
File dest_3 = new File("c:/image_3" + ".jp2");
File dest_4 = new File("c:/image_4" + ".jp2");
ImageIO.write(image, "jpeg 2000", dest_1);
image = ImageIO.read(source_file);
ImageIO.write(image, "jpeg 2000", dest_2);
image = ImageIO.read(source_file);
ImageIO.write(image, "jpeg 2000", dest_3);
image = ImageIO.read(source_file);
ImageIO.write(image, "jpeg 2000", dest_4);
System.out.println("Complete");
}
else
{
System.out.println("Error, NULL");
}
}
catch (IOException e)
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
Sorry, I meant to say that I'd tried it with the 1.1 build and the same problem was experienced. I'll double check that it was running aginst the new version and not looking at the older version installed elsewhere, but I'm pretty sure that it's been tried with 1.1.
I've reproduced this problem (based upon the same source image) on mulitple machines, with different processor/memory configurations. They all run java 1.5 and either 1.0 or 1.1 jai image i/o tools.
Using the pure java writer instead of the native version seems to provide us with a work around, although it's not an ideal soultion.
> Using this test, with a 4000x4000 pixel source image, imageIO 1.0_01, windows native writer
Can you try the current release, JAI Image I/O Tools 1.1?
Thanks,
-James
---------------------------------------------------------------------
To unsubscribe, e-mail: interest-unsubscribe@jai-imageio.dev.java.net
For additional commands, e-mail: interest-help@jai-imageio.dev.java.net | https://www.java.net/node/667427 | CC-MAIN-2015-22 | refinedweb | 507 | 61.22 |
Just a few weeks ago we announced that we open sourced Lightning Web Components – which is a core UI technology for building applications on the Lightning Platform. Now developers can leverage the same framework plus their gained skills for building performant and reusable web applications on Salesforce and other platforms. This blog post will give you an overview of the major differences that you will discover between building Lightning Web Components (LWC) on Lightning Platform compared to their open source version.
The first notable difference is tooling. For developing on Lightning Platform you’d use something like the Salesforce Extensions for Visual Studio Code for building Lightning web components and Salesforce CLI for deploying them to an org. In the future, there will also be cool enhancements like LWC Local Development (register here for a recording of our preview webinar). Building with LWC Open Source is different.
First, there is no official IDE support. So while you can pick and choose your IDE, you won’t get things like code completion, automated imports, and so forth for LWC (besides the standard JavaScript and HTML features that an IDE offers). This can be a bit more time consuming when you start, so keep the LWC Open Source documentation site bookmarked. At the same time you can use the same general purpose tools like Prettier, ESLint, or Jest.
Second, you can choose your tooling. You can decide to build your own toolchain, for example using custom webpack or rollup.js based projects to build and compile your LWC projects. Or you can use lwc-create-app, which is an open source tool that bundles common development activities like project local development, creating production builds, unit testing and more, into the single npm dependency lwc-services. It follows (mostly) the pattern of other popular UI framework tools like Vue CLI or create-react-app, so if you’ve developed with those frameworks, you’ll be familiar with the experience.
To get started is simple: you must have node 10.x (the current LTS version) installed on your computer. Then run this command:
npx lwc-create-app your-app-name
After the guided wizard experience you’ll have a complete project scaffolding, where you can directly start the exploration by running
npm run watch. When you look at our LWC Recipes OSS sample application you’ll see the different pre-defined scripts and more. It’s a fast start, so give it a try!
Another notable difference between building Lightning web components on the Lightning Platform or on Open Source is the availability of Base Lightning Components. These pre-build components are not available on Open Source.
The simple reason behind this is that LWC represents the core technology to build Lightning Web Components. And Base Lightning Components are Lightning web components that are built using LWC, with some special flavor related to leverage certain specific functionality of running on Lightning Platform. They are not part of the core framework.
However, using LWC Open Source in combination with the CSS (framework) of your choice makes it really easy to build your own UI components. We did that here here with the ui modules in LWC Recipes OSS. Depending on your CSS needs you’ll have to make decisions about Shadow DOM, which you can read more about below.
On the Lightning Platform it is relatively simple to access data — you either use Apex or pre-defined Lightning Data Service methods to access data within your Salesforce org. It is different for LWC Open Source. LWC is a UI framework and doesn’t come with any data connectors. You have to define yourself how you want to access data from whatever API that you want.
For connecting to APIs you can pick and choose what you need — from standard XHR requests to using the Fetch API (like we did here in LWC Recipes OSS) or any of your preferred npm packages that does that for you. This also means that you will have to handle all the things like authentication, authorization etc. on your own, as you would have to do with other UI frameworks.
What is different compared to other UI frameworks (and also to the Lightning Platform) is that you can leverage the
@wire decorator to build your own declarative data access. This is super useful when you want to hide the complexity of data access by using a simple decorator, and at the same time, make use of the caching capabilities of the wire-service. The package on GitHub also contains a playground with several examples on how to build your own implementation (and a rollup.js config if you don’t use
lwc-services, but want to run your own project setup).
When you look at the different specifications that make up a “web component”, two may stand out: Custom Elements and DOM. Custom Elements, which is the ability to have your own HTML tags like
<my-super-component></my-super-component> rendered, is essentially the same on Lightning Platform and Open Source. The difference is that you can run your own namespace on LWC Open Source (we used three namespaces in the Recipes sample app). A more significant difference is there when it comes to the DOM — more specifically, the Shadow DOM definition.
On the Lightning Platform, we use a synthetic version of Shadow DOM. This creates a virtual DOM just like React does. A virtual DOM represents an in-memory representation of the DOM, which allows us to patch the DOM behavior. This is because we have to support many old browser versions that don’t fully support native Shadow DOM.
While the synthetic version behaves the similarly as the native version, you’ll notice two differences: in your browser markup you don’t see the
shadow-root tag on Lightning Platform, while on the Open Source version you’ll see it.
hello recipe on LWC OSS
hello recipe on Lightning Platform
The other difference is that because native Shadow DOM is enabled out-of-the-box for Open Source, you can’t just use a global stylesheet that then allows to cascade styles across inheriting Lightning web components. Everything is truly encapsulated, which is one of the huge benefits. You will have to rethink your CSS strategy when it comes to building Lightning web components, or if you want to reuse components that you built and styled on Lightning Platform. On the other side, you can choose with LWC Open Source to use synthetic shadow as an easier way to interoperate with existing UI if needed.
This is closely related to tooling. If you follow the Salesforce Developer blog, you likely read my post about how to debug your Lightning web components. The same techniques apply to LWC Open Source, with some minor differences.
Within a Salesforce org, you can switch between the minified version of your LWC code and a non-minified version by enabling debug mode. For LWC Open Source, it all depends with which parameters you build your code. Using
lwc-services this is determined by the mode flag (or a custom webpack config). In watch mode, for local development, you’ll see the code as is. If you create a webpack production build, everything is minified.
LWC code in webpack development build
What’s also different (but at the same time similar) is the location of your LWC code in the Resources view. Typical for webpack, your local code is accessible (only in watch mode) based on your project structure. In a production build, everything is bundled up into dedicated app.js files, based on webpacks heuristics.
Source code in webpack production build
You now learned about the main notable differences that you should be aware of if you develop LWC Open Source, and/or if you also develop on Lightning Platform. There are many things to explore for the Open Source version, like how to build your own custom wire adapters, how to share code between LWC projects, how to securely access APIs and so on. We’ll cover some of these topics — and more — in upcoming blog posts.
For now, head to lwc.dev and create your first app using lwc-create-app (soon to be renamed to create-lwc-app, with some cool enhancements). Then clone the LWC Recipes OSS repo and play with the different recipes. And if you’re deep into JavaScript (or want to be), check out the source code of the LWC framework itself!
René Winkelmeyer works as Principal Developer Evangelist at Salesforce. He focuses on enterprise integrations, Lightning, and all the other cool stuff that you can do with the Salesforce Platform. You can follow him on Twitter @muenzpraeger. | https://developer.salesforce.com/blogs/2019/06/differences-between-building-lightning-web-components-on-lightning-platform-and-open-source.html | CC-MAIN-2021-04 | refinedweb | 1,453 | 61.06 |
Enhancements to GTX Sachin Bansal, Andrew B. Kahng, Igor Markov, Mike Oliver, Dirk Stroobandt Calibrating Achievable Design are now by default localized to a namespace which groups them together according to the model used to compute their outputs. Parameters may also have a namespace: this is useful in identifying parameters which have no interesting “intrinsic” meaning, but are simply intermediate values in a calculation specific to a given model.
Example: There are two available rules to compute average interconnection length on chip, one (being used) from the DAVIS model, and the other from SUSPENS.
“Batch recorder” mode saves most operations that are performed to a file.
Example (default) view: sources are children of their sinks, and
both rules (blue) and parameters (black) are shown the same time.
The user can choose to make sinks children of their sources, or to show only rules or only parameters.
Chip side correction rule
Chip side
Chip area | https://www.slideserve.com/niveditha/enhancements-to-gtx | CC-MAIN-2018-43 | refinedweb | 154 | 51.89 |
[Shaun Jackman] > A grave bug has been file against a package I maintain pointing out > that the package does not work on AMD64 and in fact never has, even > though it builds on AMD64. Since it turns out this package has never > worked on AMD64, this bug is not a regression, but the status-quo. > Should such a bug be grave, or merely important? Leaving aside the point that amd64 isn't in Debian yet (it will be quite soon, so let's just pretend it is now), I think the bug is correctly RC. If you don't think the package will ever work on amd64 (in the near term, anyway), you can fix the bug by disabling the build on amd64. If your package only builds a single binary package, it's best to just exclude amd64 from the architecture line in debian/control: Architecture: alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc armeb hurd-i386 kfreebsd-i386 m32r ppc64 sh If it builds multiple binaries and you just want to exclude amd64 from some of them, a cleaner approach is what we use in subversion, in debian/rules. We need to build the 'libsvn-javahl' package only on architectures which have a working version of kaffe: ENABLE_JAVAHL := yes DISABLE_JAVAHL_ARCHS := arm armeb m68k mips mipsel kfreebsd-i386 export DEB_HOST_ARCH ?= $(shell dpkg-architecture -qDEB_HOST_ARCH) ifneq (,$(filter $(DEB_HOST_ARCH), $(DISABLE_JAVAHL_ARCHS))) ENABLE_JAVAHL := endif ifeq ($(ENABLE_JAVAHL), yes) confflags += --enable-javahl ... else export DH_OPTIONS := -Nlibsvn-javahl endif Here we've set up ENABLE_JAVAHL as a way to conditionalise the rest of debian/rules so as to build or not build the java package. The interesting part, though, is the last line, setting DH_OPTIONS. That variable tells debhelper not to build libsvn-javahl. Thus we don't need to list a dozen architectures in debian/control. We do still need to duplicate the arch exclusion list in debian/control: Build-Depends: ... kaffe-dev [!arm !armeb !m68k !mips !mipsel !kfreebsd-i386] but an exclusion list is much less cumbersome than an *inclusion* list, and you can't use an exclusion list in the Architecture line, unlike the Build-Depends line. HTH.
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-devel/2006/02/msg01014.html | CC-MAIN-2014-15 | refinedweb | 364 | 57.1 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
On Thu, Nov 01, 2001 at 11:15:36PM +0100, Stefan Olsson wrote: > Yes, but we would like to change the default allocator _without_ specifying > this every time we create a new string... [...] > typedef std::basic_string <char, std::char_traits<char>, > std_char_pthread_alloc > pthread_string; Really, those kinds of typedefs are the way to go. At the start of the development work, they may just be "pass-through" names, like namespace our_project { typedef std::string normal_string; typedef std::string threading_string; typedef std::string string_for_some_other_situation; typedef std::vector<int> container_of_ints; } using our_project::string; using ... but they provide hooks for easy project-wide changes; literally changing one line such as threading_string and recompiling. (A coder's dream! :-) If the underlying names have been used throughout, then the initial shift to this style of naming will seem like a massive change, but it can be largely mechanical. It's definitely worth | https://gcc.gnu.org/legacy-ml/libstdc++/2001-11/msg00026.html | CC-MAIN-2022-40 | refinedweb | 162 | 55.24 |
28 August 2013 23:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Wednesday’s end of day ?xml:namespace>
CRUDE: Oct WTI: $110.10/bbl, up $1.09; Oct Brent: $116.61/bbl, up $2.25
NYMEX WTI crude futures extended recent gains on concerns that military action against
RBOB: Sep $3.0943/gal, up 6.02 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled higher on stronger crude futures and after the EIA report showed a drop of 600,000 bbl in gasoline inventories to 217.8m bbl.
NATURAL GAS: Sep: $3.567/MMBtu, up 3.3 cents
The September contract settled its final day as the front month on the NYMEX natural gas futures market up just under 1%, rising on strong near-term demand due to the ongoing heat wave affecting much of the central US. Traders were less bullish over longer term demand, leaving forward month contracts wavering around the previous day’s close.
ETHANE: steady at 25.00 cents/gal
Ethane spot prices were steady.
AROMATICS: Mixed xylenes tighter at $4.35-4.45/gal, toluene tighter at $4.00-4.10/gal
Prompt mixed xylenes (MX) bid/offer levels were tighter on Wednesday as bids firmed. Offer levels were flat, taking prices to a more narrow range compared with $4.25-4.45/gal FOB (free on board) the previous session. Meanwhile, prompt toluene bid/offer also narrowed during the day with bids moving up. Toluene spot prices were tighter from $3.95-4.10/gal the previous session.
OLEFINS: ethylene traded flat at 55 cents/lb, PGP tighter at 66.50-67.75 cents/lb
US August ethylene prices were steady as material traded at 55 cents/lb, flat with two trades the previous day. US August polymer-grade propylene (PGP) bid/offer levels narrowed to 66.50-67.75 cents/lb from 66.00-69.00 | http://www.icis.com/Articles/2013/08/28/9701215/evening-snapshot---americas-markets-summary.html | CC-MAIN-2014-41 | refinedweb | 317 | 78.25 |
curl_multi_perform - reads/writes available data from each easy handle
#include <curl/curl.h> CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles);
When the app thinks there’s data available for the multi_handle, it should call this function to read/write whatever there is to read or write right now. curl_multi_perform()_multi_perform() and the amount of running_handles is changed from the previous call (or is less than the amount of easy handles you’ve added to the multi handle), you know that there is one or more transfers less "running". You can then call curl_multi_info_read(3) to get information about each individual completed transfer, and that returned info includes CURLcode and more. If an added handle fails very quickly, it may never be counted as a running_handle. When running_handles is set to zero (0) on the return of this function, there is no longer any transfers in progress.
CURLMcode type, general libcurl multi interface error code. Before version 7.20.0: If you receive CURLM_CALL_MULTI_PERFORM, this basically means that you should call curl_multi_perform again, before you select() on more actions. You don’t have to do it immediately, but the return code means that libcurl may have more data available to return or that there may be more data to send off before it is "satisfied". Do note that curl_multi_perform(3) will return CURLM_CALL_MULTI_PERFORM only when it wants to be called again immediately. When things are fine and there is nothing immediate it wants done, it’ll return CURLM_OK and you need to wait for "action" and then call this function again. This function only returns errors etc regarding the whole multi stack. Problems still might have occurred on individual transfers even when this function returns CURLM_OK.
Most applications will use curl_multi_fdset(3) to get the multi_handle’s file descriptors, then it’ll wait for action on them using select(3) and as soon as one or more of them are ready, curl_multi_perform(3) gets called.
curl_multi_cleanup(3), curl_multi_init(3), curl_multi_fdset(3), curl_multi_info_read(3), libcurl-errors(3) | http://huge-man-linux.net/man3/curl_multi_perform.html | CC-MAIN-2018-05 | refinedweb | 334 | 51.89 |
This is my small code. It works fine until dim=3. If I enter dim=4 I get this strange error:
makefile:10: recipe for target 'exec' failed
make: *** [exec] Aborted (core dumped)
I think "core dumped" appears when I try to access memory which I do not own but I cant see where the mistake is in this code.
1 #include <iostream>
2
3 using namespace std;
4
5 int main(){
6 int dim = 0;
7 double* y = new double[dim+1];
8 double* x = new double[dim+1];
9
10 do{
11 cout << "Enter a dimension:\n";
12 cin >> dim;
13 if(dim<=0) cerr << "Error: Dimension is lower equal 0!\n";
14 }while(dim<=0);
15
16 for(int i = 0; i<dim; i++){
17 *(y+i) = dim*i + 7;
18 *(x+i) = dim*i + 1;
19 cout << "*(y+" << i << "): " << *(y+i) << "\t" << "*(x+" << i << "): " << *(x+i) << endl;
20 }
21
22
23
24 delete[] y;
25 delete[] x;
26 return 0;
27 }
Let me explain what is wrong with your code:
int dim = 0; double* y = new double[dim+1]; double* x = new double[dim+1];
x and
y point to a single element because
new double[0 + 1]
Then you ask the user to enter the dimensions. Once the user enters the dimensions your code breaks because you are trying to dereference a pointer that points to an invalid location. This is called
undefined behavior.
To fix this issue you have to first ask the user to enter the
dim value and then allocate the appropriate space. | https://codedump.io/share/RHojA0w2S8YB/1/pointer-arithmetics-core-dumped | CC-MAIN-2017-04 | refinedweb | 261 | 67.01 |
56318/python-script-downloading-video-youtube-saving-directory
First download pytube using the following code
pip install pytube
Then download the video using the following
from pytube import YouTube
yt = YouTube("")
yt = yt.get('mp4','720p')
yt.download('/path/to/download/directory')
Hi @Mike. First, read both the csv ...READ MORE
Hi, there is a very simple solution ...READ MORE
Tuples are a Unchanging sequence of values, ...READ MORE
Try like this, it will give you ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
Since I am using Python 3.6, I ...READ MORE
This can be done using simple string ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/56318/python-script-downloading-video-youtube-saving-directory | CC-MAIN-2021-43 | refinedweb | 160 | 69.89 |
Hey everybody, I've got a couple of tables during my database plus they all have a different inherited kind of a DataRow.
Additionally I've got a class that's designed to handle several things during my DataGrid (My database Tables are attached to DataGrids).
To be able to do this, among the techniques within this DataGrid handler needs to cast the rows towards the exact inherited kind of a DataRow.
something similar to this: (TempDataRow as DataRowTypeThatInheritsFromRegularDataRow).SpecialParameter = something
To be able to do this, I must pass the technique the inherited DataRow type, therefore it will understand how to perform the casting.
The technique will normally seem like this:
public void DoSomthing(DataRowType Type)
the truth is I'm not sure how you can pass the kind. Regular 'Type' type doesn't compile. and when I pass just 'DataRow' it will not understand how to perform the casting.
any suggestions? Thanks.
If you work with CFour., then have you thought about using the 'dynamic' type?
dynamic row = getDataRow(); doSomething( row ); public void doSomething( DataRowTypeThatInheritsFromRegularDataRow row ) { // <insert code here> } public void doSomething( SomeOtherDataRowType row ) { // <insert code here> }
This situation should select at run-time which function to call, based on what getDataRow() really returns.
For more reading through of dynaminc see msdn
You will find various ways you could do this this.
First, you could discover a typical base class or interface that types share, after which have DoSomething() take that base class or interface, or if you wish to be totally dynamic, you could utilize reflection. It's difficult to inform you the way to get it done, since you haven't given any concrete example:
using System.Reflection; ... public void DoSomething(object foo) { var dataType = foo.GetType(); type.GetProperty("SomeDynamicName").SetValue(foo, someOtherValue); }
(though should you be using CFour., as TK highlights, you can only use the dynamic type and that would be that!)
You could utilize:
TempDataRow as Type.GetType("DataRowTypeName") | http://codeblow.com/questions/type-ing-quesion/ | CC-MAIN-2019-04 | refinedweb | 326 | 54.12 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.