Hi everybody,
I'm trying to compile a simple rtnetlink example written in C using gcc (version 4.7.2), but I get this error:
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/../../../../lib/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
and here is the include parts of example :
#include <sys/socket.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <arpa/inet.h>
#include <unistd.h>
Should I add a specific library or something ?! (I tried netlink and pthreads libraries and it didn't work)
thanks in advance
Last edited by bahramwhh (2012-10-09 09:34:56)
Offline
@bahramwhh We'll need the complete source code and the exact compiler and link invocation to help you.
Offline
@lunar oh, I'm sorry, my problem is solved: my file didn't have a main function!
Last edited by bahramwhh (2012-10-09 09:35:29)
Offline
sem_open()
Create or access a named semaphore
Synopsis:
#include <semaphore.h>
#include <fcntl.h>

sem_t * sem_open( const char * sem_name,
                  int oflags,
                  ... );
Since:
BlackBerry 10.0.0
Arguments:
- sem_name
- The name of the semaphore that you want to create or access; see below.
- oflags
- Flags that affect how the function creates a new semaphore. This argument is a combination of:
- O_CREAT
- O_EXCL
Don't set oflags to O_RDONLY, O_RDWR, or O_WRONLY. A semaphore's behavior is undefined with these flags. The BlackBerry 10 OS libraries silently ignore these options, but they may reduce your code's portability.

The sem_name argument is interpreted as follows:
- If the name argument starts with a slash, the semaphore is given that name.
- If the name argument doesn't begin with a slash character, the semaphore is given that name, prepended with the current working directory.
If you want to create or access a semaphore on another node, specify the name as /net/node/sem_location.
The oflags argument is used only for semaphore creation. When creating a new semaphore, you can set oflags to O_CREAT or (O_CREAT|O_EXCL):
- O_CREAT
- Create the semaphore if it doesn't already exist. When you specify this flag, sem_open() takes two additional arguments: a mode_t mode that sets the semaphore's permission bits, and an unsigned int value that sets the semaphore's initial count.
- O_CREAT | O_EXCL
- Create the semaphore, but fail if a semaphore with the given name already exists.
Don't mix named semaphore operations (sem_open() and sem_close()) with unnamed semaphore operations (sem_init() and sem_destroy()) on the same semaphore.
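For orientation, here's a sketch of a typical round trip on a POSIX-style system. The semaphore name, permissions, and initial value are illustrative, and error handling is kept minimal:

```c
#include <stdio.h>
#include <fcntl.h>      /* O_CREAT, O_EXCL */
#include <semaphore.h>

/* Open (creating if necessary) a named semaphore, post and wait once,
 * then close it and remove its name.  Returns 0 on success, -1 on error. */
int sem_roundtrip(const char *name)
{
    /* With O_CREAT, sem_open() takes two extra arguments: mode and value. */
    sem_t *sem = sem_open(name, O_CREAT, 0644, 0);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return -1;
    }
    /* Post raises the count to 1, so the following wait won't block. */
    if (sem_post(sem) == -1 || sem_wait(sem) == -1) {
        sem_close(sem);
        sem_unlink(name);
        return -1;
    }
    if (sem_close(sem) == -1)
        return -1;
    return sem_unlink(name);    /* remove the name once we're done */
}
```

On many systems you'd link with -lpthread (newer glibc versions fold the sem_* calls into libc itself), and the name should begin with a slash, as described above.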
Classification:
Last modified: 2014-11-17
Well, perhaps this isn't the sole reason, but nonetheless it's a more than feasible idea these days to have headings, titles and short lines of text using fonts a user doesn't have installed on their system. And, until the spec for embedding fonts is finalised in around 2065, there are two main options:
- Render the text using *cough* Flash. While Flash is perhaps not as bad as some would make out, it's still horribly proprietary as well as having a noticeable loading time and a few other invented disadvantages that strengthen my case for using...
- Images. They're lightweight, and have been used for showing custom graphics since the dawn of [UNIX] time.
So, we need header images. One horribly labour-intensive way of doing this is making them manually in Generic Graphics Editor 8.6. However, since we're sensible people, we'll generate them on the fly. And, since we're sensible people, we'll be using Django*, so we need to write some nice Python code.
I'll be using Cairo to generate graphics, in part because it's a nice library, and is pretty common these days. You'll need the Python Cairo bindings; on debian-like systems, this is the package python-cairo; in other places, your mileage may vary.
The key to making Cairo work with Django is wrapping a Cairo canvas in a django view. For this reason, I have this function lying around:
def render_image(drawer, width, height):
    import os, tempfile, cairo
    from django.http import HttpResponse
    # We render to a temporary file, since Cairo can't stream nicely
    fd, filename = tempfile.mkstemp()
    os.close(fd)  # we only need the path; Cairo opens the file itself
    # We render to a generic Image, being careful not to use colour hinting
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, int(width), int(height))
    font_options = surface.get_font_options()
    font_options.set_antialias(cairo.ANTIALIAS_GRAY)
    context = cairo.Context(surface)
    # Call our drawing function on that context, now.
    drawer(context)
    # Write the PNG data to our tempfile
    surface.write_to_png(filename)
    surface.finish()
    # Now stream that file's content back to the client
    fo = open(filename, "rb")
    data = fo.read()
    fo.close()
    os.unlink(filename)
    return HttpResponse(data, mimetype="image/png")
The idea is, you pass it a function which will draw the image onto a context, and the image's width and height, and it takes care of all the boring tedium of wrapping cairo and django together.
Now, that's not very useful by itself, is it? Time to draw some text!
Firstly, as an aside, we need a way of seeing how big a certain text string will be for a given font and size, so we can render an image just big enough for it. This function achieves that:
import cairo

def text_bounds(text, size, font="Sans", weight=cairo.FONT_WEIGHT_NORMAL, style=cairo.FONT_SLANT_NORMAL):
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1, 1)
    context = cairo.Context(surface)
    context.select_font_face(font, style, weight)
    context.set_font_size(size)
    # text_extents() returns (x_bearing, y_bearing, width, height, x_advance, y_advance)
    width, height = context.text_extents(text)[2:4]
    return width, height
Yes, yes, it's somewhat cryptic, but it does the job. Now, we can write a text-rendering view!
def render_title(request, text, size=60):
    # Get some variables pinned down
    size = int(size) * 3
    font = "Meta"
    width, height = text_bounds(text, size, font)
    def draw(cr):
        import cairo
        # Paint the background white. Replace with 1,1,1,0 for transparent PNGs.
        cr.set_source_rgba(1, 1, 1, 1)
        cr.paint()
        # Some black text
        cr.set_source_rgba(0, 0, 0, 1)
        cr.select_font_face(font, cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_NORMAL)
        cr.set_font_size(size)
        # We need to adjust by the text's offsets to center it.
        x_bearing, y_bearing, width, height = cr.text_extents(text)[:4]
        cr.move_to(-x_bearing, -y_bearing)
        # We stroke and fill to make sure thinner parts are visible.
        cr.text_path(text)
        cr.set_line_width(0.5)
        cr.stroke_preserve()
        cr.fill()
    return render_image(draw, width, height)
Here, we construct the draw function with a simple text drawing command, and run the wrapper.
There's some interesting text positioning going on up there; for more on this, and cairo in general, read through the excellent Cairo Tutorial for Python Programmers.
The last thing is to add an appropriate URL into your URLconf, such as
(r'^title/([^\/]+)/(\d+)/$', "render_title")
And then, when you browse to /title/HelloWorld/20/, you'll hopefully get a nice PNG of your new title! Then, you can just use img tags instead of titles, in this sort of style:
<img src="/title/{{ item.title }}/20" alt="{{ item.title }}" />
This process is quite quick, but not without a small cost of processing power; if you're using it a lot, think about some sort of caching. Apart from that, be happy with your newfound title freedom...
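If you do go down that road, the simplest option is in-process memoisation. Here's a minimal sketch; the render_png function below is a hypothetical stand-in for the Cairo drawing above, and lru_cache requires Python 3:

```python
from functools import lru_cache

def render_png(text, size):
    # Hypothetical stand-in for the expensive Cairo rendering in render_title;
    # in practice this would draw the text and return the PNG bytes.
    return ("png-bytes:%s@%d" % (text, size)).encode("utf-8")

@lru_cache(maxsize=256)
def cached_title_png(text, size):
    """Memoise rendered titles so repeat requests skip the drawing work."""
    return render_png(text, size)
```

In a real Django deployment, the framework's own cache machinery (for example the cache_page decorator) is probably the more idiomatic choice, since it can cache the whole HTTP response rather than just the bytes.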
* Or possibly Pylons. As long as you don't go and cavort with those Gems On Guiderails people, or heaven forbid the [PH/AS]P guys...
Related resources for Thread
Asynchronous Nature of Delegates
5/9/2022 11:12:13 AM.
In this article you will see the other face of the delegate type in C#, it will show you how you can invoke a method asynchronously using delegates.
Thread Behavior In Synchronous And Asynchronous Method
12/28/2021 5:46:48 PM.
In this article, you will learn about thread behaviour in synchronous and asynchronous method.
Understanding Thread Starvation in .NET Core Applications
12/24/2021 5:26:02 AM.
Understanding Thread Starvation in .NET Core Applications
Working With Async/Await/Task Keywords In Depth
12/13/2021 9:29:26 PM.
In this article, you will learn how to work with Async/Await/Task keywords in depth.
Understanding Worker Thread And I/O Completion Port (IOCP)
12/13/2021 6:28:02 AM.
In this article, you will learn about worker thread and I/O Completion Port (IOCP).
Understanding Synchronization Context Task.ConfigureAwait In Action
8/30/2021 4:59:29 AM.
When dealing with asynchronous code, one of the most important concepts that you must have a solid understanding of is synchronization context.
Multithreading Process With Real-Time Updates In ASP.NET Core Web Application
8/11/2021 6:26:55 AM.
In today's article, we will see how to implement multithreading with real-time updates shown in ASP.NET Core 2.0 web application.
Multithreading In Java
8/9/2021 2:32:32 PM.
In this article, you will learn about Multithreading and its uses in Java.
Improve Performance of ASP.Net and Web Service
2/18/2021 6:17:47 AM.
This article provides a few important tips for improving performance of ASP.NET and Web Service applications.
Comparison Of Microsoft Windows Tools For Waiting Time Management
1/25/2021 8:34:13 AM.
This article aims to compare some solutions provided by Microsoft Windows to manage time, time precision, and the impact of CPU overload on the frequency accuracy.
Improve Performance of .NET Application
12/28/2020 12:50:24 PM.
This article provides some concepts for improving the performance of .NET applications.
Thread- Local Storage of Data in .NET
12/7/2020 1:28:19 AM.
Suppose you're writing a multi-threaded application and you want each thread to have its own copy of some data. You also want this data to persist throughout the lifetime of the thread.
Parallel Programming Using New TPL Library in .Net 4.0
11/26/2020 4:40:39 AM.
With the .Net 4.0, you are provided with the brand new parallel programming language library called “Task Parallel Library” (TPL). Using the classes in the System.Threading.Tasks namespace, you can bu
Threads In C#
10/24/2020 12:24:59 AM.
Learn how to use threads in C#.
Debug Async Code
6/15/2020 5:20:23 AM.
In this article, you will learn how to debug async code.
Dispatcher In A Single Threaded WPF App
6/2/2020 9:44:18 AM.
Dispatcher is used to manage multithreaded application. It manages Message queues.
🚀Async/Await Deep Dive - Asynchronous Programming - Part One
6/1/2020 10:03:25 PM.
Async/Await are two keywords used by new generation apps to take advantage of Asynchronous Programming.
Multithreading in C#
5/10/2020 6:05:43 PM.
Multithreading is a parallel way of execution where the application is broken into parts so that it can have more than one execution path at the same time.
Difference Between Thread and AsyncTask in Android
3/31/2020 7:34:57 AM.
What is difference between Thread and AsyncTask? When to use Thread and when to use AsyncTask?
How to Set A Progress Bar in Android
3/23/2020 1:33:55 AM.
In this article I will tell you how to add a progress bar to an Android application.
Android Threads and Handlers
3/21/2020 7:39:21 AM.
This tutorial describes the usage of Threads and Handlers in an Android application. It also covers how to handle the application lifecycle together with threads.
Thread Locking In C#
3/11/2020 11:10:24 PM.
Exclusive locking in threading ensures that one thread does not enter a critical section while another thread is in the critical section of code.
ProgressBar in Android
2/27/2020 2:45:03 AM.
This article explains how to use a Progress Bar. A Progress Bar is a graphical user interface that shows the progress of a task.
Understanding Parallel Programming Using Pthreads In PHP
2/13/2020 7:59:02 AM.
PHP is the appeal of a simple synchronous, single-threaded programming which attracts most developers. And for significant performance improvement,Pthreads can enhance the experience of your website i
Introduction to Python
1/28/2020 1:29:55 PM.
This article is a small introduction to the Python language, which is easy to learn and easy to understand. Python is an interactive, interpreted, and object oriented language.
A Complete MultiThreading Tutorial In Java
1/27/2020 5:23:17 PM.
Multithreading in Java is a process of executing multiple threads simultaneously. A thread is the smallest unit of the processing. Multithreading and Multiprocessing, both are used to achieve multitas
Thread Pool in Windows Store Application
12/30/2019 1:31:38 AM.
This article shows another way of doing asynchronous programming in a Windows Store application using a Thread Pool.
Task Parallel Library 101 Using C#
11/27/2019 12:20:22 AM.
Task Parallel Library (TPL) provides a level of abstraction to help us to be more effective as developers/programmers when it comes to parallelism. Knowing at least the basics are beneficial. In that
Random Class in Java
10/15/2019 10:50:04 PM.
Random class is used to generate pseudo-random numbers in Java. An instance of this class is thread-safe; however, it is cryptographically insecure.
How To Create Daemon Thread In Java
9/19/2019 5:47:56 AM.
In this article we discuss how to create a Daemon thread in Java.
Introduction To Deadlock In Java
9/18/2019 11:55:07 PM.
In this article, we will discuss Deadlock in Java. Deadlock is a condition where two or more threads are blocked forever, waiting for each other to release the resource (for execution) but never get t
Creating Analog Clock in Java
9/17/2019 6:35:19 AM.
In this article we are going to describe how to make an analog clock using the Graphics class in Java.
Thread Life Cycle In Java
9/17/2019 1:23:32 AM.
In this article, we discuss the life cycle of a thread in Java.
Working With Threads in Java
9/12/2019 5:43:32 AM.
In this article you will learn how to set the priority of a thread and use the join() and isAlive() methods in Java
Threading in Java
9/12/2019 4:08:43 AM.
In this article you can learn the basic steps of creating a thread; this article provides two ways for creating your own thread in Java.
First Step to Java's Multithreading
9/10/2019 6:22:47 AM.
This article helps you to understand the basics of Java's Multithreading, in a nutshell..
Difference Between StringBuffer and StringBuilder Class
7/30/2019 1:46:29 AM.
This article differentiates the two classes, StringBuffer & StringBuilder, using suitable parameters and examples.
BackgroundWorker In C#
7/29/2019 9:46:21 AM.
C# BackgroundWorker component executes code in a separate dedicated secondary thread. In this article, I will demonstrate how to use the BackgroundWorker component to execute a time consuming process
Perform Single And Multiple Task Using Multiple-Thread In Java
7/25/2019 1:21:22 AM.
This article explains how to perform a single and multiple tasks using multiple threads.
Join, Sleep And Abort In C# Threading
6/10/2019 9:54:04 PM.
C# Sleep() method of Thread class is useful when you need to pause a program in C#. Code examples how to use Thread.Join(), Thread.Sleep(), and Thread.Abort() methods in C#.
Monitor And Lock In C#
5/29/2019 8:08:53 AM.
C# Lock and C# Monitor are two keywords used in thread synchronization in C#. Here are C# Lock and C# Monitor code examples.
Task And Thread In C#
5/12/2019 10:01:26 PM.
The Thread class is used for creating and executing threads in C#. A Task represents some asynchronous operation and is part of the Task Parallel Library, a set of APIs for running tasks asynchronousl
C# Thread Basics
3/30/2019 9:56:34 AM.
Learn the basics of C# Thread. This code example explains how to create a Thread in C# and .NET Core.
Introduction To Multithreading In C#
3/25/2019 5:41:57 AM.
This article is a complete introduction to Multithreading in C#. This tutorial explains what a thread in C# is and how C# threading works.
Programming Concurrency In C++ - Part Two
3/7/2019 9:41:26 AM.
This article is in continuation of my previous article, "Programming Concurrency in C++: Part One". This article will sum up the introduction of concurrency in C++..
Thread Pool In .NET Core And C#
1/8/2019 10:49:24 PM.
A thread pool is a pool of worker threads that is available on demand as needed. The code examples in this article show how to use the thread pool in .NET Core using C#.
Programming Concurrency In C++ - Part One
12/17/2018 9:33:13 AM.
This article will help you get started with concurrency and will also introduce you to the features C++ offers in order to support concurrent programming. In this series of articles, I will not only
Threading with Mutex
11/26/2018 2:57:35 AM.
A mutual exclusion (“Mutex”) is a mechanism that acts as a flag to prevent two threads from performing one or more actions simultaneously.
Using the BackgroundWorker component
9/17/2018 5:54:03 AM.
This article discusses the BackgroundWorker component in .NET 2.0, it will show you how to use it to execute tasks in the background of your application. BackgroundWorker
A Potentially Helpful C# Threading Manual
9/17/2018 5:20:43 AM.
The article will focus on threading constructs and as such, is meant for both the beginner and those who practice multithreading regularly.
Multithreading in C#
9/17/2018 4:26:09 AM.
This article discusses how to write multithreading applications in C#. Part I of this series will discuss the basics of threads in .NET.
Background worker simplified
9/17/2018 4:13:47 AM.
This article looks at the Background Worker Technology and encapsulates it into a simple form that can be used over and over to run your background tasks.
Using the BackgroundWorker Component in .NET 2 Applications
9/17/2018 1:40:13 AM.
In this article I will show (step-by-step) how you can use the BackgroundWorker Component in .NET 2 applications to execute time-consuming operations.
Understanding Threading in .NET Framework
9/17/2018 1:16:54 AM.
This article describes how to use threading model in .NET Framework including creating, joining, suspending, killing, and interlocking threads. Create thread in C#, Join thread in C#, Suspend thread i
Write First Threading App In C#
7/16/2018 10:16:03 PM.
This is hello world of threading.
Creating Simple Thread In C#
3/26/2018 1:04:10 AM.
This video shows creating and running threads in C-Sharp. It also explains the use of Thread.Join().
Aborting Thread Vs Cancelling Task
12/12/2017 1:58:32 PM.
The below post is based on one of the question I answered on StackOverflow, in which the questioner wants to cancel a task when its taking too long to respond; i.e., taking too much time in execution
Movie Ticket Booking And Semaphore
12/8/2017 11:50:22 AM.
This article explains the role of Semaphore in the ticket booking of a movie by more than one seller.
Multithreading In C# .Net
7/16/2017 12:43:47 PM.
If you have a program that execute from top to bottom, it will not be responsive and feasible to build complex applications. So .Net Framework offers some classes to create complex applications.
Singleton Vs Static Classes
6/27/2017 3:04:10 AM.
Why do you use a Singleton class if a Static class serves the purpose? What is the difference between Singleton and Static classes, and when do you use each one in your program?
Thread Sick Software Engineer
6/18/2017 9:58:25 PM.
This article is about the kind of software engineer who uses threads everywhere without considering their side effects.
Thread Synchronization - Signaling Constructs With EventWaitHandle In C#
4/21/2017 11:03:49 AM.
This article emphasizes on Thread Synchronization - signaling Constructs with EventWaitHandle in C#.
Look At Threads Window In VS 2015
1/13/2017 2:22:03 AM.
In this article, we will look into one of the feature of VS 2015 known as Threads Window.
Multi Threading With Windows Forms
1/10/2017 9:59:42 AM.
Some quick code for updating a Windows form application user interface.
Introduction to JDBC
8/2/2016 3:01:27 AM.
In this video we will Understanding Introduction to JDBC.Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client m
Overview Of ThreadStatic Attribute In C#
7/23/2016 1:38:53 AM.
In this article, you will learn about the overview of ThreadStatic attribute in C#.
Understanding Multithreading And Multitasking In C#
6/29/2016 4:30:31 PM.
In this article, you will understand multithreading and multitasking In C#.
Threading Simplified: Synchronization Context - Part 14
6/7/2016 4:51:24 AM.
This article explains what Synchronization Context is and how to use it efficiently in a multi-threading environment.
Threading Simplified: Semaphore - Part Thirteen
5/10/2016 10:42:54 AM.
This article explains what Semaphore is and how to use it efficiently in multithreading environment.
Threading Simplified: Part Twelve (Mutex)
4/21/2016 11:43:34 AM.
This article explains what Mutex is and how to use it efficiently in multithreading environment.
Threading Simplified: Part Eleven (Thread Atomicity & Deadlock)
4/11/2016 10:50:08 AM.
This article explains what Thread Atomicity and Deadlock are and how to use and handle them efficiently in multithreading environment.
Thread Safe Concurrent Collection in C#
4/8/2016 11:32:47 AM.
In this article you will learn about thread safe concurrent collection in C#.
Threading Simplified: Part 10 (Monitor)
3/14/2016 9:16:02 AM.
This article explains what Monitor is and how to use it efficiently in a multithreading environment.
Threading Simplified: Part 9 (Thread Locking)
3/12/2016 11:20:14 PM.
This article explains what Thread Locking is and how to use it efficiently in a multithreading environment.
Update UI With WPF Dispatcher And TPL
2/24/2016 9:54:31 AM.
This article is intended to explain the concept of updating WPF UI with the help of Dispatcher and TPL.
Control Current Tasks In Multithreading
2/20/2016 1:04:56 AM.
This article is intended to explain the concept of controlling Task using different name.
Task Parallelism In Multithreading
2/17/2016 9:06:28 AM.
In this article you will lean about Task Parallelism in Multithreading.
Mutex in .NET
1/26/2016 11:38:12 PM.
In this article you will learn about Mutex in .Net.
Threading Simplified: Part 8 (Synchronization Basics and Thread Blocking)
1/26/2016 9:55:37 AM.
This article explains what Thread Synchronization Fundamentals are and how to use Thread Blocking efficiently in a multithreading environment.
Threading Simplified: Part 7 (Thread Priority)
1/15/2016 2:05:02 AM.
This article explains what Thread Priority is and how to use it efficiently in a multi-threading environment.
Threading Simplified - Part 2 (Multithreading Concepts)
12/28/2015 6:41:39 AM.
This article explains various concepts, such as Multiprogramming, Multitasking, Multiprocessing and Multithreading.
Threading Simplified: Part 5 (Thread Pools)
12/7/2015 2:33:20 AM.
This article explains what thread pools are and how to use them efficiently in multithreading using the QueueUserWorkItem method and asynchronous delegates.
Canceling A Running Task
11/15/2015 8:25:23 AM.
In this article you will learn how to cancel a running task.
Thread Synchronization
11/14/2015 1:42:09 PM.
This article is intended to explain the concept of thread synchronization.
Thread Safety In C#
11/14/2015 12:28:20 PM.
This article is intended to explain the concept of thread safety.
Asynchronous Programming Using Delegates
11/4/2015 12:24:40 AM.
This article is intended to explain the concept of asynchronous programming using DelegateS.
Learn Parallel Programming
10/24/2015 11:08:09 AM.
In this article you will learn about Parallel Programming. Parallel programming splits the work into independent chunks of work and then carries out these works simultaneously.
Different Ways To Create Task Parallel Library (TPL Threads)
10/12/2015 3:01:58 AM.
This article explains the concept to create thread using TPL (Task Parallel Library) with different approaches.
Since I can’t get the fancy upload progress thing to work, I thought
that I would revert back to a regular file upload scenario.
I have a very simple file upload form set up. It appears that my file
parameter is coming into the controller as a string, not a file object.
Here’s my view:
<%= form_tag :action => 'save_HTML', :multipart => 'true' %>
<%= file_field 'eSimplyJob', 'file' %>
<%= submit_tag "Upload" %>
<%= end_form_tag %>
Here’s my controller method:
public
def save_HTML
  @filename = @params[:eSimplyJob][:file].original_filename
  File.open("#{RAILS_ROOT}/htmlfiles/#{@filename}", "wb") do |f|
    f.write(@params[:eSimplyJob][:file].read)
  end
  puts "Finished with save_HTML"
end
When I execute the upload, I get the following error:
undefined method `original_filename' for "upload.rhtml":String
I understand that this means that @params[:eSimplyJob][:file] is a
String and not an IO object.
What am I doing wrong?
Thanks for any help.
Wes G. | https://www.ruby-forum.com/t/newbie-cant-get-file-to-upload/54722 | CC-MAIN-2018-47 | refinedweb | 146 | 57.16 |
Recently I have seen a couple of posts describing how to implement autocomplete in an ASP.Net MVC application so I thought I would try it out. I got it working after a little trial and error so I thought I would try to outline my development process in hopes that the next person to try it out can save a little time. Here goes:
1. Download the Autocomplete plugin:
2. Add the following files to your mvc app:
jquery.autocomplete.css
jquery.autocomplete.min.js
jquery-1.2.6.min.js
I added the files to a subdirectory of the Content folder.
3. Add the following scripts in your page:
<script type="text/javascript" language="javascript" src="<%= Url.Content("~/Content/js/jquery-1.2.6.min.js") %>"></script>
<script type="text/javascript" language="javascript" src="<%= Url.Content("~/Content/js/jquery.autocomplete.min.js") %>"></script>
I included these scripts in the head tag of my site's master page since I want the autocomplete functionality to be available throughout my site
4. Add an input element to your page:
<input id="txtStoryTags" type="text" class="largeTextbox" />
5. At this point it will probably be simplest to work backward. Here is the final code:
<script type="text/javascript">
    $(document).ready(function() {
        $('#txtStoryTags').autocomplete('<%=Url.Action("GetTags", "Listing") %>', {
            dataType: 'json',
            parse: function(data) {
                var rows = new Array();
                for (var i = 0; i < data.length; i++) {
                    rows[i] = {
                        data: data[i],
                        value: data[i].Name,
                        result: data[i].Name
                    };
                }
                return rows;
            },
            formatItem: function(row) {
                return row.Name;
            },
            delay: 40,
            autofill: true,
            selectFirst: false,
            highlight: false,
            multiple: true,
            multipleSeparator: ";"
        });
    });
</script>
6. Documentation:
autocomplete( url or data, options )
Returns: jQuery
We first need to provide the function with a url which is quite easy in an ASP.Net MVC app:
$(document).ready(function() {
    $('#txtStoryTags').autocomplete('<%=Url.Action("GetTags", "Listing") %>', ...
GetTags is an action in a controller named Listing that returns a JSON object constructed with an array of tag objects as its data. The names of these tags will be displayed in the autocomplete dropdown.
public ActionResult GetTags()
{
    return Json(DataContext.GetTags(_topTags));
}
public class Tag
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int Count { get; set; }
}
The data returned by the server will look something like this:
[{"ID":11,"Name":"test1","Count":1},{"ID":12,"Name":"test2","Count":1},{"ID":13,"Name":"test3","Count":1}]
7. The trick is to convert this data to a format that the autocomplete function is expecting. If you are using local data, autocomplete expects an array of strings. Since our data is in the form of a JSON object, we will use the parse option to format our JSON object into data that the autocomplete function can work with.
parse: function(data) {
    var rows = new Array();
    for (var i = 0; i < data.length; i++) {
        rows[i] = {
            data: data[i],
            value: data[i].Name,
            result: data[i].Name
        };
    }
    return rows;
},
The parse function is not documented very well but it basically will take our JSON object and return an array of objects that consist of three mandatory parts:
1. data: this is an entire item in my JSON object: {"ID":13,"Name":"test3","Count":1}
2. value: this is the value I want displayed: test3
3. result: this is the value I want added to my input (txtStoryTags) after I select the tag from the dropdown: test3
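To make the transformation concrete, here's the same mapping as a standalone function you can run outside the plugin; the sample data mirrors the JSON shown above:

```javascript
// Map the raw JSON array from the server into the row objects that the
// autocomplete plugin expects: { data, value, result } for each item.
function parseTags(data) {
    var rows = [];
    for (var i = 0; i < data.length; i++) {
        rows[i] = { data: data[i], value: data[i].Name, result: data[i].Name };
    }
    return rows;
}

var sample = [
    { ID: 11, Name: "test1", Count: 1 },
    { ID: 12, Name: "test2", Count: 1 }
];
console.log(parseTags(sample)[1].result); // prints: test2
```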
8. The bulk of the work is done. Here are some other options I included:
formatItem: function(row) {   // takes a row of data returned by the parse function and
    return row.Name;          // returns a string to display in the autocomplete dropdown
},
delay: 40,                    // 400ms is default, seems too slow
selectFirst: false,           // first value is always selected if true, makes it tricky to type in new tags imo
multiple: true,               // mandatory for tags
multipleSeparator: ";"        // also good for tags, I think you can only specify one separator
So why is there a formatItem function and a value member in the row data returned by parse? I think it is because value is the default and must be simple data, but formatItem can return something like row.Name + " (" + row.data.Count + ")" or something along those lines. If you want to see a full list of these mysterious options you can find them here:
9. Last step, time to try it out! If you are not seeing any data, make sure the css file is included properly. You can go into the file and easily change the look and feel of the dropdown if you wish. Also, you should be able to step through the parse function to make sure the rows are in the proper format if you are still having trouble. Good luck!
Boo-yah!
Hope this helps you get those MVC apps looking fresh.
Joe
Are you jealous of Go developers building an executable and easily shipping it to users? Wouldn’t it be great if your users could run your application without installing anything? That is the dream, and PyInstaller is one way to get there in the Python ecosystem.
There are countless tutorials on how to set up virtual environments, manage dependencies, and publish to PyPI, which is useful when you’re creating Python libraries. There is much less information for developers building Python applications. This tutorial is for developers who want to distribute applications to users who may or may not be Python developers.
In this tutorial, you’ll learn the following:
- How PyInstaller can simplify application distribution
- How to use PyInstaller on your own projects
- How to debug PyInstaller errors
- What PyInstaller can’t do
PyInstaller gives you the ability to create a folder or executable that users can immediately run without any extra installation. To fully appreciate PyInstaller’s power, it’s useful to revisit some of the distribution problems PyInstaller helps you avoid.
Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you'll need to take your Python skills to the next level.
Distribution Problems
Setting up a Python project can be frustrating, especially for non-developers. Often, the setup starts with opening a Terminal, which is a non-starter for a huge group of potential users. This roadblock stops users even before the installation guide delves into the complicated details of virtual environments, Python versions, and the myriad of potential dependencies.
Think about what you typically go through when setting up a new machine for Python development. It probably goes something like this:
- Download and install a specific version of Python
- Set up pip
- Set up a virtual environment
- Get a copy of your code
- Install dependencies
Stop for a moment and consider if any of the above steps make any sense if you’re not a developer, let alone a Python developer. Probably not.
These problems explode if your user is lucky enough to get to the dependencies portion of the installation. This has gotten much better in the last few years with the prevalence of wheels, but some dependencies still require C/C++ or even FORTRAN compilers!
This barrier to entry is way too high if your goal is to make an application available to as many users as possible. As Raymond Hettinger often says in his excellent talks, “There has to be a better way.”
PyInstaller
PyInstaller abstracts these details from the user by finding all your dependencies and bundling them together. Your users won’t even know they’re running a Python project because the Python Interpreter itself is bundled into your application. Goodbye complicated installation instructions!
PyInstaller performs this amazing feat by introspecting your Python code, detecting your dependencies, and then packaging them into a suitable format depending on your Operating System.
There are lots of interesting details about PyInstaller, but for now you’ll learn the basics of how it works and how to use it. You can always refer to the excellent PyInstaller docs if you want more details.
In addition, PyInstaller can create executables for Windows, Linux, or macOS. This means Windows users will get a
.exe, Linux users get a regular executable, and macOS users get a
.app bundle. There are some caveats to this. See the limitations section for more information.
Preparing Your Project
PyInstaller requires your application to conform to some minimal structure, namely that you have a CLI script to start your application. Often, this means creating a small script outside of your Python package that simply imports your package and runs
main().
The entry-point script is a Python script. You can technically do anything you want in the entry-point script, but you should avoid using explicit relative imports. You can still use relative imports throughout the rest your application if that’s your preferred style.
Note: An entry-point is the code that starts your project or application.
You can give this a try with your own project or follow along with the Real Python feed reader project. For more detailed information on the reader project, check out the the tutorial on Publishing a Package on PyPI.
The first step to building an executable version of this project is to add the entry-point script. Luckily, the feed reader project is well structured, so all you need is a short script outside the package to run it. For example, you can create a file called
cli.py alongside the reader package with the following code:
from reader.__main__ import main if __name__ == '__main__': main()
This
cli.py script calls
main() to start up the feed reader.
Creating this entry-point script is straightforward when you’re working on your own project because you’re familiar with the code. However, it’s not as easy to find the entry-point of another person’s code. In this case, you can start by looking at the
setup.py file in the third-party project.
Look for a reference to the
entry_points argument in the project’s
setup.py. For example, here’s the reader project’s
setup.py:
setup( name="realpython-reader", version="1.0.0", description="Read the latest Real Python tutorials", long_description=README, long_description_content_type="text/markdown", url="", author="Real Python", author_email="office@realpython.com", license="MIT", classifiers=[ "License :: OSI Approved :: MIT License", "Programming Language :: Python", "Programming Language :: Python :: 2", "Programming Language :: Python :: 3", ], packages=["reader"], include_package_data=True, install_requires=[ "feedparser", "html2text", "importlib_resources", "typing" ], entry_points={"console_scripts": ["realpython=reader.__main__:main"]}, )
As you can see, the entry-point
cli.py script calls the same function mentioned in the
entry_points argument.
After this change, the reader project directory should look like this, assuming you checked it out into a folder called
reader:
reader/ | ├── reader/ | ├── __init__.py | ├── __main__.py | ├── config.cfg | ├── feed.py | └── viewer.py | ├── cli.py ├── LICENSE ├── MANIFEST.in ├── README.md ├── setup.py └── tests
Notice there is no change to the reader code itself, just a new file called
cli.py. This entry-point script is usually all that’s necessary to use your project with PyInstaller.
However, you’ll also want to look out for uses of
__import__() or imports inside of functions. These are referred to as hidden imports in PyInstaller terminology.
You can manually specify the hidden imports to force PyInstaller to include those dependencies if changing the imports in your application is too difficult. You’ll see how to do this later in this tutorial.
Once you can launch your application with a Python script outside of your package, you’re ready to give PyInstaller a try at creating an executable.
Using PyInstaller
The first step is to install PyInstaller from PyPI. You can do this using
pip like other Python packages:
$ pip install pyinstaller
pip will install PyInstaller’s dependencies along with a new command:
pyinstaller. PyInstaller can be imported in your Python code and used as a library, but you’ll likely only use it as a CLI tool.
You’ll use the library interface if you create your own hook files.
You’ll increase the likelihood of PyInstaller’s defaults creating an executable if you only have pure Python dependencies. However, don’t stress too much if you have more complicated dependencies with C/C++ extensions.
PyInstaller supports lots of popular packages like NumPy, PyQt, and Matplotlib without any additional work from you. You can see more about the list of packages that PyInstaller officially supports by referring to the PyInstaller documentation.
Don’t worry if some of your dependencies aren’t listed in the official docs. Many Python packages work fine. In fact, PyInstaller is popular enough that many projects have explanations on how to get things working with PyInstaller.
In short, the chances of your project working out of the box are high.
To try creating an executable with all the defaults, simply give PyInstaller the name of your main entry-point script.
First,
cd in the folder with your entry-point and pass it as an argument to the
pyinstaller command that was added to your
PATH when PyInstaller was installed.
For example, type the following after you
cd into the top-level
reader directory if you’re following along with the feed reader project:
$ pyinstaller cli.py
Don’t be alarmed if you see a lot of output while building your executable. PyInstaller is verbose by default, and the verbosity can be cranked way up for debugging, which you’ll see later.
Digging Into PyInstaller Artifacts
PyInstaller is complicated under the hood and will create a lot of output. So, it’s important to know what to focus on first. Namely, the executable you can distribute to your users and potential debugging information. By default, the
pyinstaller command will create a few things of interest:
- A
*.specfile
- A
build/folder
- A
dist/folder
Spec File
The spec file will be named after your CLI script by default. Sticking with our previous example, you’ll see a file called
cli.spec. Here’s what the default spec file looks like after running PyInstaller on the
cli.py file:
# -*- mode: python -*- block_cipher = None a = Analysis(['cli.py'], pathex=['/Users/realpython/pyinstaller/reader'],='cli', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True ) coll = COLLECT(exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, name='cli')
This file will be automatically created by the
pyinstaller command. Your version will have different paths, but the majority should be the same.
Don’t worry, you don’t need to understand the above code to effectively use PyInstaller!
This file can be modified and re-used to create executables later. You can make future builds a bit faster by providing this spec file instead of the entry-point script to the
pyinstaller command.
There are a few specific use-cases for PyInstaller spec files. However, for simple projects, you won’t need to worry about those details unless you want to heavily customize how your project is built.
Build Folder
The
build/ folder is where PyInstaller puts most of the metadata and internal bookkeeping for building your executable. The default contents will look something like this:
build/ | └── cli/ ├── Analysis-00.toc ├── base_library.zip ├── COLLECT-00.toc ├── EXE-00.toc ├── PKG-00.pkg ├── PKG-00.toc ├── PYZ-00.pyz ├── PYZ-00.toc ├── warn-cli.txt └── xref-cli.html
The build folder can be useful for debugging, but unless you have problems, this folder can largely be ignored. You’ll learn more about debugging later in this tutorial.
Dist Folder
After building, you’ll end up with a
dist/ folder similar to the following:
dist/ | └── cli/ └── cli
The
dist/ folder contains the final artifact you’ll want to ship to your users. Inside the
dist/ folder, there is a folder named after your entry-point. So in this example, you’ll have a
dist/cli folder that contains all the dependencies and executable for our application. The executable to run is
dist/cli/cli or
dist/cli/cli.exe if you’re on Windows.
You’ll also find lots of files with the extension
.so,
.pyd, and
.dll depending on your Operating System. These are the shared libraries that represent the dependencies of your project that PyInstaller created and collected.
Note: You can add
*.spec,
build/, and
dist/ to your
.gitignore file to keep
git
status clean if you’re using
git for version control. The default GitHub gitignore file for Python projects already does this for you.
You’ll want to distribute the entire
dist/cli folder, but you can rename
cli to anything that suits you.
At this point you can try running the
dist/cli/cli executable if you’re following along with the feed reader example.
You’ll notice that running the executable results in errors mentioning the
version.txt file. This is because the feed reader and its dependencies require some extra data files that PyInstaller doesn’t know about. To fix that, you’ll have to tell PyInstaller that
version.txt is required, which you’ll learn about when testing your new executable.
Customizing Your Builds
PyInstaller comes with lots of options that can be provided as spec files or normal CLI options. Below, you’ll find some of the most common and useful options.
--name
Change the name of your executable.
This is a way to avoid your executable, spec file, and build artifact folders being named after your entry-point script.
--name is useful if you have a habit of naming your entry-point script something like
cli.py, as I do.
You can build an executable called
realpython from the
cli.py script with a command like this:
$ pyinstaller cli.py --name realpython
--onefile
Package your entire application into a single executable file.
The default options create a folder of dependencies and and executable, whereas
--onefile keeps distribution easier by creating only an executable.
This option takes no arguments. To bundle your project into a single file, you can build with a command like this:
$ pyinstaller cli.py --onefile
With the above command, your
dist/ folder will only contain a single executable instead of a folder with all the dependencies in separate files.
--hidden-import
List multiple top-level imports that PyInstaller was unable to detect automatically.
This is one way to work around your code using
import inside functions and
__import__(). You can also use
--hidden-import multiple times in the same command.
This option requires the name of the package that you want to include in your executable. For example, if your project imported the requests library inside of a function, then PyInstaller would not automatically include
requests in your executable. You could use the following command to force
requests to be included:
$ pyinstaller cli.py --hiddenimport=requests
You can specify this multiple times in your build command, once for each hidden import.
--add-data and
--add-binary
Instruct PyInstaller to insert additional data or binary files into your build.
This is useful when you want to bundle in configuration files, examples, or other non-code data. You’ll see an example of this later if you’re following along with the feed reader project.
--exclude-module
Exclude some modules from being included with your executable
This is useful to exclude developer-only requirements like testing frameworks. This is a great way to keep the artifact you give users as small as possible. For example, if you use pytest, you may want to exclude this from your executable:
$ pyinstaller cli.py --exclude-module=pytest
-w
Avoid automatically opening a console window for
stdoutlogging.
This is only useful if you’re building a GUI-enabled application. This helps your hide the details of your implementation by allowing users to never see a terminal.
Similar to the
--onefile option,
-w takes no arguments:
$ pyinstaller cli.py -w
.spec file
As mentioned earlier, you can reuse the automatically generated
.spec file to further customize your executable. The
.spec file is a regular Python script that implicitly uses the PyInstaller library API.
Since it’s a regular Python script, you can do almost anything inside of it. You can refer to the official PyInstaller Spec file documentation for more information on that API.
Testing Your New Executable
The best way to test your new executable is on a new machine. The new machine should have the same OS as your build machine. Ideally, this machine should be as similar as possible to what your users use. That may not always be possible, so the next best thing is testing on your own machine.
The key is to run the resulting executable without your development environment activated. This means run without
virtualenv,
conda, or any other environment that can access your Python installation. Remember, one of the main goals of a PyInstaller-created executable is for users to not need anything installed on their machine.
Picking up with the feed reader example, you’ll notice that running the default
cli executable in the
dist/cli folder fails. Luckily the error points you to the problem:
FileNotFoundError: 'version.txt' resource not found in 'importlib_resources' [15110] Failed to execute script cli
The
importlib_resources package requires a
version.txt file. You can add this file to the build using the
--add-data option. Here’s an example of how to include the required
version.txt file:
$ pyinstaller cli.py \ --add-data venv/reader/lib/python3.6/site-packages/importlib_resources/version.txt:importlib_resources
This command tells PyInstaller to include the
version.txt file in the
importlib_resources folder in a new folder in your build called
importlib_resources.
Note: The
pyinstaller commands use the
\ character to make the command easier to read. You can omit the
\ when running commands on your own or copy and paste the commands as-is below provided you’re using the same paths.
You’ll want to adjust the path in the above command to match where you installed the feed reader dependencies.
Now running the new executable will result in a new error about a
config.cfg file.
This file is required by the feed reader project, so you’ll need to make sure to include it in your build:
$ pyinstaller cli.py \ --add-data venv/reader/lib/python3.6/site-packages/importlib_resources/version.txt:importlib_resources \ --add-data reader/config.cfg:reader
Again, you’ll need to adjust the path to the file based on where you have the feed reader project.
At this point, you should have a working executable that can be given directly to users!
Debugging PyInstaller Executables
As you saw above, you might encounter problems when running your executable. Depending on the complexity of your project, the fixes could be as simple as including data files like the feed reader example. However, sometimes you need more debugging techniques.
Below are a few common strategies that are in no particular order. Often times one of these strategies or a combination will lead to a break-through in tough debugging sessions.
Use the Terminal
First, try running the executable from a terminal so you can see all the output.
Remember to remove the
-w build flag to see all the
stdout in a console window. Often, you’ll see
ImportError exceptions if a dependency is missing.
Debug Files
Inspect the
build/cli/warn-cli.txt file for any problems. PyInstaller creates lots of output to help you understand exactly what it’s creating. Digging around in the
build/ folder is a great place to start.
Single Directory Builds
Use the
--onedir distribution mode of creating distribution folder instead of a single executable. Again, this is the default mode. Building with
--onedir gives you the opportunity to inspect all the dependencies included instead of everything being hidden in a single executable.
--onedir is useful for debugging, but
--onefile is typically easier for
users to comprehend. After debugging you may want to switch to
--onefile mode to simplify distribution.
Additional CLI Options
PyInstaller also has options to control the amount of information printed during the build process. Rebuild the executable with the
--log-level=DEBUG option to PyInstaller and review the output.
PyInstaller will create a lot of output when increasing the verbosity with
--log-level=DEBUG. It’s useful to save this output to a file you can refer to later instead of scrolling in your Terminal. To do this, you can use your shell’s redirection functionality. Here’s an example:
$ pyinstaller --log-level=DEBUG cli.py 2> build.txt
By using the above command, you’ll have a file called
build.txt containing lots of additional
DEBUG messages.
Note: The standard redirection with
> is not sufficient. PyInstaller prints to the
stderr stream, not
stdout. This means you need to redirect the
stderr stream to a file, which can be done using a
2 as in the previous command.
Here’s a sample of what your
build.txt file might look like:
67 INFO: PyInstaller: 3.4 67 INFO: Python: 3.6.6 73 INFO: Platform: Darwin-18.2.0-x86_64-i386-64bit 74 INFO: wrote /Users/realpython/pyinstaller/reader/cli.spec 74 DEBUG: Testing for UPX ... 77 INFO: UPX is not available. 78 DEBUG: script: /Users/realptyhon/pyinstaller/reader/cli.py 78 INFO: Extending PYTHONPATH with paths ['/Users/realpython/pyinstaller/reader', '/Users/realpython/pyinstaller/reader']
This file will have a lot of detailed information about what was included in your build, why something was not included, and how the executable was packaged.
You can also rebuild your executable using the
--debug option in addition to using the
--log-level option for even more information.
Note: The
-y and
--clean options are useful when rebuilding, especially when initially configuring your builds or building with Continuous Integration. These options remove old builds and omit the need for user input during the build process.
Additional PyInstaller Docs
The PyInstaller GitHub Wiki has lots of useful links and debugging tips. Most notably are the sections on making sure everything is packaged correctly and what to do if things go wrong.
Assisting in Dependency Detection
The most common problem you’ll see is
ImportError exceptions if PyInstaller couldn’t properly detect all your dependencies. As mentioned before, this can happen if you’re using
__import__(), imports inside functions, or other types of hidden imports.
Many of these types of problems can be resolved by using the
--hidden-import PyInstaller CLI option. This tells PyInstaller to include a module or package even if it doesn’t automatically detect it. This is the easiest way to work around lots of dynamic import magic in your application.
Another way to work around problems is hook files. These files contain additional information to help PyInstaller package up a dependency. You can write your own hooks and tell PyInstaller to use them with the
--additional-hooks-dir CLI option.
Hook files are how PyInstaller itself works internally so you can find lots of example hook files in the PyInstaller source code.
Limitations
PyInstaller is incredibly powerful, but it does have some limitations. Some of the limitations were discussed previously: hidden imports and relative imports in entry-point scripts.
PyInstaller supports making executables for Windows, Linux, and macOS, but it cannot cross compile. Therefore, you cannot make an executable targeting one Operating System from another Operating System. So, to distribute executables for multiple types of OS, you’ll need a build machine for each supported OS.
Related to the cross compile limitation, it’s useful to know that PyInstaller does not technically bundle absolutely everything your application needs to run. Your executable is still dependent on the users’
glibc. Typically, you can work around the
glibc limitation by building on the oldest version of each OS you intend to target.
For example, if you want to target a wide array of Linux machines, then you can build on an older version of CentOS. This will give you compatibility with most versions newer than the one you build on. This is the same strategy described in PEP 0513 and is what the PyPA recommends for building compatible wheels.
In fact, you might want to investigate using the PyPA’s manylinux docker image for your Linux build environment. You could start with the base image then install PyInstaller along with all your dependencies and have a build image that supports most variants of Linux.
Conclusion
PyInstaller can help make complicated installation documents unnecessary. Instead, your users can simply run your executable to get started as quickly as possible. The PyInstaller workflow can be summed up by doing the following:
- Create an entry-point script that calls your main function.
- Install PyInstaller.
- Run PyInstaller on your entry-point.
- Test your new executable.
- Ship your resulting
dist/folder to users.
Your users don’t have to know what version of Python you used or that your application uses Python at all! | https://realpython.com/pyinstaller-python/ | CC-MAIN-2020-16 | refinedweb | 3,987 | 56.45 |
pyxml basics I am following pyxml doc in hope to quickly grasp functions and classes available in pyxml. _________ starting out: >>> from xml.sax import saxutils >>> class FindIssue(saxutils.DefaultHandler): ... def __init__(self,title,number): ... self.search_title, self.search_number = title, number ... Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'DefaultHandler' 1. What exactly above class will do? How do i fix it to not give me an error? Doc states. "The DefaultHandler class inherits from all four interfaces: ContentHandler, DTDHandler, EntityResolver, and ErrorHandler. This (this = ContentHandler or ?) is what you should use if you want to just write a single class that wraps up all the logic for your parsing." 2. Do you mean that Default Handler should be substituted with ContentHandler? What exactly should i do? Doc next states... "You could also subclass each interface individually and implement separate classes for each purpose." 3. Do you mean : you can create a class or a subclass for your specific element or node? I don't know what we are trying to do here. I though we want to extract one or all of the elements from <collection> <comic title="Sandman" number='62'> <writer>Neil Gaiman</writer> <penciller pages='1-9,18-24'>Glyn Dillon</penciller> <penciller pages="10-17">Charles Vess</penciller> </comic> </collection> 4. How do we use startElement? startElement('comic','62') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: startElement() takes exactly 3 arguments (2 given) Then running the code ...... from xml.sax import make_parser from xml.sax.handler import feature_namespaces if __name__ == '__main__': # Create a parser parser = make_parser().......... Doc states: "Once you've created a parser instance, calling the setContentHandler() method tells the parser what to use as the content handler." 5. 
How do you do create parser instance by calling setContentHandler()?? At the end doc says. If you run the above code with the sample XML document, it'll print Sandman #62 found. 6. Which above code do you run, and how? Thank you lucas | https://mail.python.org/pipermail/xml-sig/2006-June/011524.html | CC-MAIN-2014-10 | refinedweb | 340 | 69.07 |
2013 Winter Focus
Ridgeview Foundation's 2013 Winter Focus Newsletter
Grateful Patient Story: Mandy Gilley

Sometimes you might not know if your medical need requires emergency room care or urgent care. The decision can be difficult enough when you are making the choice for yourself, but it is even more stressful when you are caring for a loved one. A mother of two "little princesses," Mandy Gilley of Victoria found herself in just this type of situation. Gilley, who has a background in pediatrics, recently moved to Minnesota from North Carolina, where she had worked as a medical assistant. When her 3-year-old complained of stomach pain and cried when she used the bathroom, Gilley thought, "I better be safe than sorry," and brought her daughter to Ridgeview's 24/7 Emergency & Urgent Care at Two Twelve Medical Center in Chaska.

Fred DeMeuse, the certified physician assistant on site who examined Gilley's daughter Jo, was "very thoughtful, did not make me feel rushed and put my mind at ease," Gilley said. "He informed me and my daughter of what he was doing throughout the examination. She just smiled and giggled … and told me that she liked 'that doctor.' It was very reassuring to know that my child felt comfortable with a health care provider she had never met."

While her daughter's symptoms were due to being lactose intolerant, "it was amazing to have such compassion shown for something so small," said Gilley.

A health care destination, Two Twelve Medical Center offers comprehensive family and specialty care, advanced diagnostics, access to leading physicians and clinics, and more—easily accessible to consumers in the west-metro region.

For more information on Two Twelve Medical Center, please visit us at or call 952-361-2447.

Winter 2013

Dear Friends,

What an amazing year! Thanks to donors and friends like you, 2013 was an incredible year for Ridgeview Foundation.
Board of Directors
Nancy Bach, Wayzata
Brian Beniek, Mound
Jean Buller, Chair, Chaska
Dermot Cowley, Secretary, Watertown
Stacy Desai, Eden Prairie
Tim Foster, Wayzata
Katherine Forrester, Excelsior
Fred Green, Edina
Katherine Hackett, MD, Excelsior
Darla Holmgren, Finance Chair, Waconia
Greg Kummer, Norwood Young America
Laura Lanz, Waconia
Jeff Laurel, Cross Lake
Jim Leonard, Chaska
Brian Mark, Excelsior
Linda Roebke, Waconia
Kate Roehl, Maple Plain
Daniel Ross, DDS, Chaska
Randy Schneewind, Waconia
Charles Spevacek, Vice Chair, Shorewood
Mark Steingas, Excelsior
Larry Wilhelm, Excelsior
David Windschitl, Chanhassen

Ridgeview is fortunate to have so many wonderful partnerships and collaborations with benefactors, volunteers and community leaders. Together, we completed the Innovations for Generations campaign this year, raising more than $2 million in two years to help renovate the 2nd floor of the medical center in Waconia and to help Ridgeview expand services for patients.

In 2013, we also launched the Every Moment Counts campaign, Ridgeview Foundation's effort to raise endowment funds and capital gifts to help secure Ridgeview Medical Center's future. Through this campaign, we believe this is our opportunity to help sustain vital programs for patients and families right here in our region.

We are so thankful for the annual contributions, remembrance gifts and event support from such a generous community. You and others just like you have regarded Ridgeview as a valued asset this year and through the years. We will need even greater assistance to ensure that our actions today will protect the clinical excellence and extraordinary services Ridgeview provides far into the future.

Enjoy this issue of Foundation Focus. You can read more about the Every Moment Counts campaign, the Roaring '20s Under the Harvest Moon and the Achieving a Healthy Balance event wrap-ups.
In addition, we have a wonderful story about grateful patient Mandy Gilley, some useful information about IRA rollover legislation, and stories about a few different ways that can help you stay connected to Ridgeview and to the Foundation.

Thank you so much for a great 2013!

Sincerely,
Jean Buller
Ridgeview Foundation Board Chair

Ex-Officio
Robert Stevens, President and CEO, Ridgeview Medical Center
Doug Stasek, Vice President, Ridgeview Foundation

2nd floor addition commences

In June we completed the Innovation for Generations campaign, which helped fund the 2nd floor remodel of Ridgeview Medical Center. Thank you to our generous donors for their support. The Foundation raised more than $2 million over two years to fund this important project. We are happy to report on the progress.

The first of four phases in Ridgeview's 2nd floor renovation is now complete. The hospital's construction project involves changes to Ridgeview's nearly 50-year-old medical/surgical wing, in which patients young and old receive care. All patient rooms on the renovated wing have been designed to serve as "universal rooms"—allowing flexibility for the hospital, as all the rooms will have similar layouts and equipment, yet will offer unique features to meet individual patient needs. The project is slated to be complete in October. We look forward to giving updates as the project progresses.

Again, thank you to the donors who helped us complete the Innovation for Generations campaign. Your support truly made a difference!

Ridgeview Friends & Family has successful giving campaign

Ridgeview Friends & Family held its annual employee campaign this past October in support of the annual fund at Ridgeview Medical Center. The 2013 Ridgeview Friends & Family theme was "Hearts In, Hands In," signifying Ridgeview employees using their hearts and hands in everything they do.
We thank our staff for pledging their support and creating a sense of belonging and pride in the workplace! Our staff takes pride in supporting excellence in health care, which sends a positive message to the communities that Ridgeview serves. Ridgeview leadership participated in a photo shoot on making giving fun. Our campaign spokesperson, Paul Whittaker, director of Environmental Services, encourages fun in the workplace. Happy employees are productive employees!

In 2013, funds were directed in support of the 2 Medical Renovation, the Health Care Scholarship Program, the Prayer Shawl Program and, new this year, the Roof-Top Gardens. Employees are always welcome to direct their donation to another area of need if they so wish. Thirty-three percent of the nearly 1,600 Ridgeview employees choose to give back.

Sincere thanks to all of our employees who support Ridgeview Friends & Family! More than 1,000 employees and community members support this giving club and collectively raise almost $100,000 annually to support Ridgeview Medical Center. It is with gratitude that we say thank you for the commitments made. Thank you for your support of Ridgeview Medical Center and Foundation! Your commitment and passion for the well-being of our patients and families speak volumes. Your employee pride is admirable.

Ridgeview Foundation is seeking individuals who are interested in volunteer opportunities with Ridgeview Friends & Family. Join this energetic team that makes philanthropy fun. For more information, please contact Ridgeview Foundation at 952-442-6010 or email: foundation@ridgeviewmedical.org.

Robert Stevens, president and CEO, and Todd Wilkening, director of Facility Services, join in the fun by letting our staff know why they should participate in the Employee Campaign.
CONNECTING WITH COMMUNITY 3 Ridgeview Foundation launches endowment campaign idgeview Foundation has launched a fundraising campaign that will help Ridgeview Medical Center sustain its reputation as one of the finest medical centers in the state. The Every Moment Counts campaign will focus on raising endowment funds and capital gifts to help secure our future. R Why Is the Endowment Campaign Important?. A gift of any size can be earmarked for the endowment campaign. However, naming opportunities and named endowed funds will play a large role as well. At this critical time in the life of our organization, almost 100 percent of Ridgeview’s net revenue is required to maintain operations, leaving a very small percentage to invest in technology, facilities and new services. We ask you to please join us and help us continue to provide the highest level of care with kindness, compassion and intuition. Give for today and for the future. How Endowment Gifts Benefit Ridgeview Without question, endowments are one of the most advantageous sources of funding for Ridgeview. The principal is continually preserved and only a percentage of the investment income is spent annually. There are many ways Ridgeview will benefit from endowment gifts: • Enables Ridgeview to direct support to areas of greatest promise or need and seize timely opportunities • Ensures the most promising ideas and technical advances have the opportunity to flourish • Assures excellence by making certain that quality medical care, research and outreach can be conducted forever • Despite ebbs and flows in the economy, endowment dollars ensure that facility improvements, tomorrow’s technology, and the critical need to attract and retain the brightest medical minds in the field will be here for you, your children and your grandchildren—today and in years to come How Endowment Gifts Benefit the Donor This endowment campaign is an ambitious undertaking and donors are the key! 
You, your family and Ridgeview have the opportunity to enjoy a long relationship built around something you value. There are numerous advantages to supporting the endowment: • Freedom to choose how your investment will support Ridgeview • Option to assess the impact you would like your gift to have • Ability to honor someone special to you or permanently memorialize a loved one • Fulfillment that the endowment inherently delivers a profound impact because of its longevity • Flexibility for family to support your fund because they know how special it is to you • Gratification that your gift becomes part of Ridgeview’s heritage and tradition 4 F O U N D AT I O N F O C U S Endowment Q&A Q: What are Ridgeview’s endowment goals? A: To provide stable support from the endowment each year to the budget and to preserve the long-term value of the endowment to provide support future generations of patients and their families. Q: What is the investment strategy? A: Endowments are invested with the objective of earning a total net return (interest and dividends plus appreciation in market value, minus inflation) of 6.0 percent. Q: What is Ridgeview’s distribution calculation? A: Historical investment performance will “trigger” the percentage distributed on a year by year basis. Distribution calculation: Percentage of fund above historical principal 103% and below 104%–110% 111%–125% 126% and above Endowment distribution percentage 0% 4% 5% 6% Q: What is Ridgeview’s distribution timing? A: The historical principal and the investment return will be calculated on or about December 31 on a year by year basis. Endowment distributions will be based upon the annual calculation. Ridgeview Medical Center. Q: What do donors receive when they make an endowed gift at the $25,000+ level? A: Endowed funds can be personalized and named for the donor or in honor/memory of a donor family member, friend or loved one. 
Foundation staff will work closely with Ridgeview Medical Center Office of Finance and the Foundation Finance Committee to ensure proper stewardship takes place. This stewardship will include annual reporting and endowment specific reports related to the donors’ fund. Donors will have peace of mind knowing that their gift directly supports their individual area of interest and will be part of the Foundation’s long-term investment strategy. CONNECTING WITH COMMUNITY 5 Harvest Moon raises funds for endowment idgeview Foundation raised more than $250,000 at the Roaring ’20s Under the Harvest Moon event, held Nov. 23 at Hazeltine National Golf Club in Chaska. More than 320 people attended the 1920’s-themed event, which combined great food and drink, a liveauction and fun costumes from the Roaring ’20s. Auction items included trips to Las Vegas, Jazz Fest in New Orleans and California’s wine country, and a framed photo of the 1980 “Miracle on Ice” USA Olympic Hockey team, signed by the entire squad, including coach Herb Brooks. Co-emcees and auctioneers Frank Vascellaro and Amelia Santaniello from WCCO-TV were a highlight of the evening as they entertained with great stories, humorous audience interactions and witty commentary. More than 30 volunteers worked collaboratively to organize and raise funds for the event. All funds are supporting Ridgeview Foundation’s effort of raising endowment funds and capital gifts to help secure Ridgeview Medical Center’s future. The Every Moment Counts campaign has been launched to help Ridgeview continue to provide the highest R level of health care and to help achieve better outcomes for patients and families. Harvest Moon have a direct impact on the level and sophistication of health care that Ridgeview can provide. Thank you to everyone who made the evening such a wonderful success! 
Achieving a Healthy Balance Ridgeview Foundation’s 11th Annual Achieving a Healthy Balance women’s event took place in November with a sold-out audience. More than 200 women chose to spend the day with us at Oak Ridge Conference Center—finding a balance between learning, relaxing, eating and shopping. With more than 25 artisan exhibitors, our guests had many options from which to choose to support local businesses and get a bit holiday shopping done. Thank you to our sponsors for the day: Lake Region Medical; Emergency Physicians & Consultants, P.A.; Katherine Forrester of Northwestern Financial; and the Ridgeview Medical Staff. We could not celebrate our day without this important 6 F O U N D AT I O N F O C U S financial support. Thank you also to our many table sponsors and exhibitors. And thank you to Scott Elliingboe for our centerpieces and to Rising Star Dance Academy and Amy Jung of South Hill Designs for our attendee gift bags. The event raised more than $7,000 for Ridgeview Hospice. That’s an incredible response to our decision to make Achieving a Healthy Balance a benefit to you, our attendees and to Ridgeview Foundation. Ridgeview is thankful for our guests’ generous spirits and for seeing the value of quality hospice care in our region.. This information. It is wise to consult with your tax professionals if you are contemplating a charitable gift under the extended law. Frequently Asked Questions:starting. CONNECTING WITH COMMUNITY 7 490 S. Maple Street, Suite 110 • Waconia, MN 55387 • 952-442-6010 For news and event information from Ridgeview Medical Center and Clinics, join Ridgeview online: ©2013 Ridgeview Medical Center If you would like to stop receiving Ridgeview Foundation printed material, please send an email, including your name and address, to foundation@ridgeviewmedical.org or call 952-442-6010. Classes, events & ways to get involved Advance Care Planning . . . 
It Is About the Conversation It is important for people to have accurate information in order to make informed decisions about treatments they would or would not want for end-of-life care. It is important to talk about one’s wishes with the family members who may need to make health care decisions, and to record wishes in a document. Ridgeview Medical Center offers a free service to help people understand this process. Please join us for an information session that will explain why advance care planning is important and the steps involved. You will also have an opportunity to sign up for free assistance with this process. Sessions are held at Ridgeview Medical Center, 500 S. Maple Street, Waconia. Register online at events or call 952-442-2191, ext. 5735. Saturday, Jan. 18, at 9:30 a.m. Monday, Jan. 20, at 9:30 a.m. Ridgeview Community Auditorium Provided with support of the Lake Town Associates office of Thrivent Financial, Waconia. Prediabetes Education Class: • Learn what prediabetes is and how it is diagnosed • Identify lifestyle changes that can prevent or delay the development of diabetes • Identify who is at risk for developing prediabetes • Set goals for making lifestyle changes • Understand continuation of care Fee: $20 Monday, March 17, 6:30–8 p.m. Ridgeview Medical Center 500 S. Maple Street, Waconia NOTE: The fee of $20 covers the participant and one family member/support person. When registering online, only enter a quantity of “1,” even if a support person is attending with you. 10th Annual Ridgeview Friends & Family “Come Together” Event Join Ridgeview Foundation, Ridgeview Medical Center, Safari Island and more than 40 other community organizations for a FREE, interactive healthy living event that promotes the health, wellness and safety of the entire community. Enjoy a wide range of activities and presentations designed for children and adults of ALL ages. 
• Kids will move their bodies in Youth in Motion Center • Live performances • Silver Sneakers® fitness class for active agers • 6th Annual On-Site Memorial Blood Center Blood Drive • Healthy snacks Saturday, March 22, 9 a.m. – Noon Safari Island Community Center 1600 Community Drive, Waconia This is a free event and advance registration is not required. For more information, call Ridgeview Foundation at 952-442-6010 or visit. | http://issuu.com/ridgeviewmedicalcenter/docs/2013_winter_focus | CC-MAIN-2015-18 | refinedweb | 2,816 | 50.36 |
j w r degoede hhs nl (Hans de Goede). >> > > Thats one assumption which I would rather not make Please, look at the code before doing such unreasoned statements. The malloc() code is | static void* REGPARM(1) __small_malloc(size_t _size) { | __alloc_t *ptr; | size_t size=_size; | size_t idx; | | idx=get_index(size); | ptr=__small_mem[idx]; | | if (ptr==0) { /* no free blocks ? */ | register int i,nr; | ptr=do_mmap(MEM_BLOCK_SIZE); | if (ptr==MAP_FAILED) return MAP_FAILED; | | __small_mem[idx]=ptr; | | nr=__SMALL_NR(size)-1; | for (i=0;i<nr;i++) { | ptr->next=(((void*)ptr)+size); | ptr=ptr->next; | } | ptr->next=0; | | ptr=__small_mem[idx]; | } | | /* get a free block */ | __small_mem[idx]=ptr->next; | ptr->next=0; | | return ptr; | } | | static void* _alloc_libc_malloc(size_t size) { | __alloc_t* ptr; | size_t need; | #ifdef WANT_MALLOC_ZERO | if (!size) return BLOCK_RET(zeromem); | #else | if (!size) goto err_out; | #endif | size+=sizeof(__alloc_t); | if (size<sizeof(__alloc_t)) goto err_out; | if (size<=__MAX_SMALL_SIZE) { | need=GET_SIZE(size); | ptr=__small_malloc(need); | } | else { | need=PAGE_ALIGN(size); | if (!need) ptr=MAP_FAILED; else ptr=do_mmap(need); | } | if (ptr==MAP_FAILED) goto err_out; | ptr->size=need; | return BLOCK_RET(ptr); | err_out: | (*__errno_location())=ENOMEM; | return 0; | } | | static void REGPARM(2) __small_free(void*_ptr,size_t _size) { | __alloc_t* ptr=BLOCK_START(_ptr); | size_t size=_size; | size_t idx=get_index(size); | | memset(ptr,0,size); /* allways zero out small mem */ | | ptr->next=__small_mem[idx]; | __small_mem[idx]=ptr; | } | | static void _alloc_libc_free(void *ptr) { | register size_t size; | if (ptr) { | size=((__alloc_t*)BLOCK_START(ptr))->size; | if (size) { | if (size<=__MAX_SMALL_SIZE) | __small_free(ptr,size); | else | munmap(BLOCK_START(ptr),size); | } | } | } It is definitively not exploitable. 
>> So the when-static-library-contains-flaw-we-have-to-rebuild-everything >> argument does not hold because the "when-static-library-contains-flaw" >> condition can never occur in *this* case. > > never say never :) Correctness of a program can be proved mathematically. So I can say that exploits in ipsvd are NEVER caused by dietlibc. >> I admit that there are problems in dietlibc but none of them are >> interesting in *this* case. > > Again jumping to conclusions rather quickly, you've clearly given this > many thought and done your "homework" but there just is no such thing > as bug free code, thats where we disagree. bug-free code exists. The malloc() implementation and the named syscall wrappers in dietlibc are examples. >>>>> many the same reasons why the packaging guidelines state that >>>>> packages should not compile and (staticly) link against their own >>>>> version fo system libs, >>>> The "should" in the packaging guidelines was intentionally. It leaves >>>> room to link statically when this is the better choice and in this case, >>>> dietlibc is the better choice. >>>> >>> Not when this is a better choice, it doesn't say when this is a better >>> choice anywhere, it says "Static libraries should only be included in >>> exceptional circumstances." >> This sentence means packaging of %_libdir/*.a files but NOT linking >> against static libraries. >> > > If you want to take things literaty the dietlibc package does fall > under thus rule I do not see where. dietlibc makes sense with static libraries only. The dynamic linking support is experimental and does not exist for all platforms. So, there exist "exceptional circumstances" and there is still the "should" which allows the packager to do the best thing (and static dietlibc libraries are the best thing). > and thus should be changed to only provide .so or removed since I see > no reason for allowing an exeption for it. 
Don't you see that either > way it is the same we don't want static libs because we don't want > static linking learn to read between the lines. These static-linking rules were not written to ban static libraries completely but to avoid things like the static-zlib desaster. Therefore, it is a "should" but not "must" rule. >>> I guess I can come up with a zillion more small programs which will >>> be smaller and faster with dietlibc, thats not what this is about, >>> the should is there in case its impossible to avoid this without >>> tons of work. >> It really depends. It's hard to find a reason to link programs like >> | int main() { write(1, "Hello world\n", 12); } >> dynamically against glibc instead of using dietlibc. But I would not >> write a bugreport only because of this inefficiency. > > So you agree the gain isn't big enough No, Fedora Extras rules do neither forbid nor encourage usage of dietlibc so it is finally the decision of the packager to do the best thing. When he does not want to add dietlibc deps to his package, I have to accept it. Arguing with effiency would result into endless discussions without a result as this thread is showing. > to warrent doing this for other packages, then it also isn't big > enough for this package. And more in general the gain of dietlibc > (for soem programs) isn't big enough to warrant it an exception to > the no .a files rules, so dietlibc should be changed or removed. Now > if we're talking about moving the entire distro over to dietlibc, > that would be interesting! I never wanted to move the entire distro to dietlibc, only few programs benefit from this library. It is obviously that complex libraries whose bugfreeness can not be proven mathematically should be linked dynamically. > But for a few packages its ridiculous. I prefer to chose things in case-per-case decision instead of making generalisations which hold for most but not all packages. 
"Static linking is insecure" is such a generalisation which does not apply to the 'ipsvd' case. >>>>> that is exactly what you're doing now linking against an own >>>>> version of system libs. >>>> ??? I do not see where 'ipvsd' links against a "local copy of a >>>> library that exists on the system". > ... > See above, the intention of this rule is imho clear and it extends to > what you're doing. This rule means only that packages should not use local copies of libraries (e.g. 'zlib', 'db4') but use the existing ones from the system. There are no deaper intentions or extends of this rule. >>>>> is the exception that confirms the rule. Also notice: "Static >>>>> libraries should only be included in exceptional circumstances." >>>> 'ipvsd' does not provide static libraries. >>> Nor should it use them, >> That is not stated there and was never an intention of this paragraph >> which covers packages shipping libraries only. 'ipvsd' is not such a >> package. > > Dietlibc is, so remove/fix that please. NOTABUG. This rule is a "should" only and there exist "exceptional circumstances". >>>> when there are ways to make things work better, these ways should >>>> be gone. Again: linking against 'dietlibc' has only advantages for >>>> 'ipvsd'. >>> When the tradeof is a small gain in speed and footprint versus >>> maintainability and security then the disadvantages of your choice >>> outway the advantages. >> I do not see how this is related to *this* case and do not know >> what you mean with "maintainablility". The "security" argument was >> disproved above. > > maintainability means that if we allow this and a few other packages > will get linked against dietlibc too and a bug in dietlibc is found > then all those packages will need a rebuild, You mean that you want to add a rule to the guidelines like | A package MUST NOT be usable as an example for other unrelated packages | which might do bad things -1 from me... 
I speak only about 'ipvsd' currently but not about potential other packages maintained by other people. Enrico
Attachment:
pgpWKQQLLBFtO.pgp
Description: PGP signature | http://www.redhat.com/archives/fedora-extras-list/2006-February/msg01784.html | CC-MAIN-2014-49 | refinedweb | 1,232 | 62.88 |
Hi @ All 😉 ,
I’ve written a wrapper class for FMOD Ex for my player… The following snippet shows the method to play a specific cd track:
[code:3bi5s56e]
bool CPlayback::PlayCDTrack(int iIndex)
{
// Check if the class was correctly initialized (with a audio cd):
if(this->_fSourceInit && this->_stSrcType == SRC_DISC)
{
try
{
// Start the playback:
if(this->_pFModSystem->playSound(FMOD_CHANNEL_REUSE, this->_pFModCDTracks.at(iIndex),
false, &this->_pFModChannel) != FMOD_OK)
{
this->_pFModChannel = NULL;
return (false);
}
this->_iCurCDTrack = iIndex;
this->_fPaused = false;
this->_fStopped = false;
this->_Update(); // <- Updates internal class members
return (true);
}
catch(...) // catch(out_of_range& cError)
{
// the overgiven index was invalid (the .at()-method detected the error):
return (false);
}
}
return (false);
}
[/code:3bi5s56e]
My problem is that the playback always starts with the first track and after some milliseconds it continues with the specific track (iIndex)…Any Ideas to solve this bug ? (It should start directly with track iIndex :roll:)
If there is more code needed, just tell me -that’s no problem … 8)
I would be grateful for any help 😉
Greezt
PS: "_pFModCDTracks" is of type std::vector (the C++ STL-vector class) and the disc was loaded with the flags: FMOD_2D | FMOD_LOOP_OFF | FMOD_SOFTWARE | FMOD_OPENONLY | FMOD_CDDA_JITTERCORRECT | FMOD_IGNORETAGS
- CodeFinder asked 11 years ago
- You must login to post comments
Ok i think its a bug in the fmod-system because as i d made some changes in the example-code ‘cdplayer’ so that it always starts with the second track but its the same: It always begins with the first track and after some ms. it continues with track 2
. Are there any (internal 😀 ) solutions for this problem ?
I dont know, because when i tried it here it doesnt exhibit any problem like you’re saying, it always plays the correct start of the track without any other noise.
- Brett Paterson answered 11 years ago
lol, ok it works 😛 …i tried an other cd and now it works fine.
Thx for ur reply 😉 . | http://www.fmod.org/questions/question/forum-19666/ | CC-MAIN-2017-51 | refinedweb | 320 | 53.14 |
Details
- Type:
New Feature
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels:None
Description facebook:
1. This provides a lightweight way for applications like hbase to create a snapshot;
2. This also allows an application like Hive to move a table to a different directory without breaking current running hive queries.
Issue Links
- is required by
HBASE-5940 HBase in-cluster backup based on the HDFS hardlink
- Open
- relates to
-
Activity
@Jagane - in short, yes. With the PIT split, any writes up to that point will go into the snapshot. Obviously, we can't ensure that future writes beyond the taking of the snapshot end up in the snapshot. Some writes can get dropped between snapshots though if you don't have your TTLs set correctly, since a compaction can age-off the writes before the snapshot can be taken. This is part of an overall backup solution, and not really the concern of the mechanism for taking snapshots - that's up to you
Feel free to DM me if you want to chat more.
Thanks for the pointer to
HBASE-6055, Jesse. I just skimmed it, but it is an excellent write-up you have there. Your rationale for using HBASE Timestamps versus actual Point in Time is well taken. My own experience is with writing software to back up a single running VM, so my previous comment did talk about an actual PIT.
I did not catch this in my skimming of
HBASE-6055, so maybe you can clarify - when using HBASE Timestamps to create backups, can we guarantee that the next backup will include all PUTs that were made after the previous snapshot? No PUTs will fall through the cracks, right?
@Jagane - with
HBASE-6055 (currently in review) you get a flush (more or less coordinated between regionservers - see the jira for more info) of the memstore to HFiles, which we would then love to hardlink into the snapshot directory. HFiles live under the region directory - which lives under the column family and table directories - where the HFile is being served. When a compaction occurs, the file is moved to the .archive directory. Currently, we are getting around the hardlink issue by referencing the HFiles by name and then using a FileLink (also in review) to deal with the file getting archived out from under us when we restore the table.
The current implementation of snapshots in HBase is pretty close to what you are proposing (and almost identical for 'globally consistent' - cross-server consistent- snapshots, but those quiesce for far too long to ensure consistency), but spends minimal time blocking.
In short, hardlinks make snapshotting easier, but we still need both parts to get 'clean' restores. Otherwise, we need to do a WAL replay from the COW version of the WAL to get back in-memory state.
Does that make sense/answer your question?
Pardon my naive question - but are hard links adequate for the purposes of HBase backup? The first line in this JIRA says "This provides a lightweight way for applications like hbase to create a snapshot".
Perhaps HBase experts can answer this question: Are single file hard links adequate for HBase backup? Don't you want a Point In Time snapshot of the entire filesystem, or at least all the files under the HBase data directory?
Don't you really want a sequence of events such as:
1. Flush all HBase MemStores
2. Quiesce HBase, i.e. get it to stop writing to HDFS
3. Call underlying HDFS to create PIT RO Snapshot with COW Semantics
4. Tell HBase to end quiesce, i.e. it can start writing to HDFS again
5. Backup program now reads from RO snapshot and writes to backup device, while HBase continues to write to the real directory tree
6. When the backup program is done, it deletes the RO snapshot
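The six steps above can be sketched as a toy orchestration. Everything here (class names, methods, paths, the COW model) is invented for illustration and is not a real HBase or HDFS API:

```python
# Hypothetical sketch of the flush -> quiesce -> snapshot -> resume -> backup
# sequence above. None of these classes are real HBase/HDFS interfaces; they
# only illustrate the ordering of operations.

class MockCluster:
    def __init__(self):
        self.writes_allowed = True
        self.memstore = ["edit-1", "edit-2"]
        self.files = {"/hbase/table/region/hfile-0": b"data"}
        self.snapshots = {}

    def flush_memstores(self):               # step 1: MemStores -> HFiles
        for i, edit in enumerate(self.memstore):
            self.files[f"/hbase/table/region/hfile-{i+1}"] = edit.encode()
        self.memstore.clear()

    def quiesce(self):                       # step 2: stop writing to HDFS
        self.writes_allowed = False

    def create_ro_snapshot(self, name):      # step 3: PIT RO snapshot (COW)
        self.snapshots[name] = dict(self.files)   # COW modeled as a copy

    def resume(self):                        # step 4: writes may continue
        self.writes_allowed = True

    def backup(self, name):                  # step 5: read the RO snapshot
        return dict(self.snapshots[name])

    def delete_snapshot(self, name):         # step 6: drop the snapshot
        del self.snapshots[name]


cluster = MockCluster()
cluster.flush_memstores()
cluster.quiesce()
cluster.create_ro_snapshot("pit-1")
cluster.resume()
cluster.files["/hbase/table/region/hfile-9"] = b"post-snapshot write"
image = cluster.backup("pit-1")
cluster.delete_snapshot("pit-1")
# Writes made after the snapshot do not appear in the backup image:
assert "/hbase/table/region/hfile-9" not in image
```

The point of the sketch is that single-file hardlinks alone do not give you step 3; some point-in-time fencing across the whole directory tree is still needed.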
6. When the backup program is done, it deletes the RO snapshot
> Is there any reason to allow cross-namespace hardlinks? Why not just return EXDEV or equivalent?...
I agree. Unix does not allow hardlinks across volumes.
> to leverage ZooKeeper
Correct. With ZK you get all the necessary coordination of the distributed updates. Plus you can store ref counts in ZNodes - no need for special inodes.
In the end HardLinks is not the goal itself, but a tool to do e.g. HBase snapshots.
If you keep the data along with the file (including the current file owner), you could do it all from a library...
Another, simpler way to do hardlinks with cross-server coordination (which in reality needs something like Paxos, or to suffer some more unavailability to ensure consistency) would be to leverage ZooKeeper. Yes, -1 for another piece of infrastructure from this, but it does provide all the cross-namespace transactionality we need and makes reference counting and security management significantly easier. Not quite client-library easy, but pretty darn close
Sorry for the slow reply, been a bit busy of late...
@Daryn".
The inode id needs to be secured since it bypasses all parent dir permissions,
Yeah, that's a bit of a pain... Maybe a bit more metadata to store with the file...?
@Konstantin.
The Windows ".lnk" file scheme is a pretty awful disaster, I hope that we don't implement a similar scheme in HDFS. I don't know of an example of a client-side shortcut scheme that worked out well (though I'd be interested to hear of any examples).
Is there any reason to allow cross-namespace hardlinks? Why not just return EXDEV or equivalent? As an even more restrictive example, AFS only permits hardlinks within a single directory (not even between subdirectories).
So long as failures are clearly communicated, it seems to me that it's OK to have a pretty restrictive implementation.
Jesse describes NNs proxying requests to each other to create and manage the ref-counted links, creating hidden "only accessible to the namenode" inodes, leases on arbitrated NN ownership, retention of deleted files with non-zero ref count, etc. Those aren't client-side operations.
"Hardlinks" cannot be implemented with a client library. The best you can hope for on the client-side is managed symlinks that are advisory in nature. Clients not using the library will ruin the scheme.
Jesse, thanks for the detailed proposal. It totally addresses the complexity of issues related to hard links implementation in distributed environment.
Do I understand correctly that your hidden inodes can be regular HDFS files, and that the whole implementation can then be done on top of existing HDFS, as a stand-alone library supporting calls like createHardLink() and deleteHardLink()?
Just trying to answer Sanjay's questions using your design as an example.
Nice idea, but I think it gets much more complicated. Retaining ref-counted paths after deletion in the origin namespace requires an "inode id". A new api to reference paths based on the id is required. We aren't so soft anymore...
The inode id needs to be secured since it bypasses all parent dir permissions, yet the id should be identical for all links in order for copy utils to distinguish identical inodes.
Now comes the worst part: the client. Will the NNs proxy fs stream operations to each other with a secure api for referencing inode ids? Or will they redirect the client to the origin NN? If they redirect, how to protect against the client guessing ids, or saving them for later replay even when the dir privs prevent access?
I'd like to propose an alternative to 'real' hardlinks: "reference-counted soft-links", or all the hardness you really need in a distributed FS.
In this implementation of "hard" links, I would propose that wherever the file is created is considered the "owner" of that file. Initially, when created, the file has a reference count of (1) on the local namespace. If you want another hardlink to the file in the same namespace, you then talk to the NN and request another handle to that file, which implicitly updates the references to the file. The reference to that file could be stored in memory (and journaled) or written as part of the file metadata (more on that later, but let's ignore that for the moment).
Suppose instead that you are in a separate namespace and want a hardlink to the file in the original namespace. Then you would make a request to your NN (NNa) for a hardlink. Since NNa doesn't own the file you want to reference, it makes a hardlink request to the NN which originally created the file, the file 'owner' (or NNb). NNb then says 'Cool, I've got your request' and increments the ref-count for the file. NNa can then grant your request and give you a link to that file. The failure case here is either
1) NNb goes down, in which case you can just keep around the reference requests and batch them when NNb comes back up.
2) NNa goes down mid-request - if NNa doesn't receive an ACK back for the granted request, it can then disregard that request and re-decrement the count for that hardlink.
Deleting the hardlink then follows a similar process. You issue a request to the owner NN, either directly from the client if you are deleting a link in the current namespace or through a proxy NN to the original namenode. It then decrements the reference count on the file and allows the deletion of the link. If the reference count ever hits 0, then the NN also deletes the file since there are no valid references to that file.
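The create/link/unlink protocol just described can be sketched with a toy ref-count table. `HardlinkOwner` stands in for the owner NN (requests from another namespace would arrive by proxy); all names here are hypothetical, not real HDFS interfaces:

```python
# Toy model of the owner-NN ref-counting protocol described above.
# HardlinkOwner stands in for the NameNode that created the file; the
# method names are invented for illustration, not real HDFS APIs.

class HardlinkOwner:
    def __init__(self):
        self.refcount = {}   # inode id -> number of live links
        self.deleted = set()

    def create_file(self, inode):
        self.refcount[inode] = 1          # creation is the first reference

    def add_link(self, inode):
        # Called directly, or by proxy from another namespace's NN (NNa -> NNb).
        self.refcount[inode] += 1

    def remove_link(self, inode):
        self.refcount[inode] -= 1
        if self.refcount[inode] == 0:     # no valid references remain
            self.deleted.add(inode)       # owner NN deletes the file
            del self.refcount[inode]


owner = HardlinkOwner()
owner.create_file("inode-7")
owner.add_link("inode-7")      # hardlink from another namespace, via proxy
owner.remove_link("inode-7")   # original link removed; the file survives
assert "inode-7" not in owner.deleted
owner.remove_link("inode-7")   # last link removed; ref count hits 0
assert "inode-7" in owner.deleted
```

Note how the file outlives the link in its creating namespace as long as any remote link holds a reference, which is exactly the "hidden inode" behavior described next.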
This has the implicit implication though that the file will not be visible in the namespace that created it if all the hardlinks to it are removed. This means it essentially becomes a 'hidden' inode. We could, in the future, also work out a mechanism to transfer the hidden inode to a NN that has valid references to it (maybe via a gossip-style protocol), but that would be out of the current scope.
There are some implications for this model. If the owner NN manages the ref-count in memory and that NN goes down, its whole namespace then becomes inaccessible, including creating new hardlinks to any of the files (inodes) that it owns. However, the owner NN going down doesn't preclude the other NNs from serving the file from their own 'soft' inodes.
Alternatively, the NN could have a lock on a hardlinked file, with the ref-counts and ownership info in the file metadata. This might introduce some overhead when creating new hardlinks (you need to reopen and modify the block or write a new block with the new information periodically - this latter actually opens a route to do ref-count management via appends to a file-ref file), but has the added advantage that if the owner NN crashed, an alternative NN could come and claim ownership of that file. This is similar to doing Paxos-style leader-election for a given hardlinked file combined with leader-leases. However, this is very unlikely to see lots of fluctuation as the leader can just reclaim the leader token via appends to the file-owner file, with periodic rewrites to minimize file size.
The on-disk representation of the extreme version I'm proposing is then this: the full file is actually composed of three pieces: (1) the actual data and then two metadata files, "extents" (to add a new word/definition), (2) an external-reference extent: each time a reference is made to the file a new count is appended and it can periodically be recompacted to a single value, (3) an owner-extent with the current NN owner and the lease time on the file, dictating who controls overall deletion of the file (since ref counts are done via the external-ref file). This means (2) and (3) are hidden inodes, only accessible to the namenode. We can minimize overhead to these file extents by ensuring a single writer via messaging to the owner NN (as specified by the owner-file), though this is not strictly necessary.
Further, (1) could become a hidden inode if all the local namespace references are removed, but it could eventually be transferred over to another NN shard (namespace) to keep overhead at a minimum, though (again), this is not a strict necessity.
The design retains the NN view of files as directory entries, just entries with a little bit of metadata. The metadata could be in memory or part of the file and periodically modified, but that’s more implementation detail than anything (as mentioned above).
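If it helps to make the append-based ref-count idea above concrete, here is a toy sketch (pure illustration — the class and method names are invented, and a real implementation would append records to an HDFS extent file rather than an in-memory list):

```python
# Toy model of the "external-reference extent": ref-count changes are
# appended as deltas; a periodic compaction rewrites the log as a single
# record, keeping the extent small without coordinating with readers.
class RefCountExtent:
    def __init__(self):
        self.log = []              # stands in for the append-only extent file

    def link(self):
        self.log.append(+1)        # creating a hard link appends a delta

    def unlink(self):
        self.log.append(-1)        # removing a link appends a negative delta

    def count(self):
        return sum(self.log)       # current ref count is the running sum

    def compact(self):
        self.log = [self.count()]  # periodic rewrite to a single record

ext = RefCountExtent()
ext.link(); ext.link(); ext.unlink()
ext.compact()
print(ext.count())   # -> 1
```

The owner-extent idea in the comment above would work the same way: lease renewals are appends, with occasional rewrites to bound file size.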
Konstantine
- How can one implement hard links in a library? If you have an alternate library implementation in mind please explain.
- I am fine to have hard links and renames restricted to volumes; this should then give you freedom to implemented a distributed NN.
Sanjay, you are taking a quote out of context. It has been explained what "hard" means above. Please scan through. One more example:
Well understood why traditional hard links are not allowed across volumes. A distributed namespace is like dynamically changing volumes. You can restrict a link to a single volume, but the names can flow to different volumes later on.
I am not proposing to remove the existing complexity from the system, I propose not to introduce more of it. In the distributed case, consistent hard links need Paxos-like algorithms. They are not "elementary operations", and only elementary operations should compose the API.
Hard links can be implemented as a library using ZK, which will stand in the distributed case.
A couple of quotes from Sanjay's (mine too) favorite author:
- When in doubt, leave it out. If there is a fundamental theorem of API design, this is it. You can always add things later, but you can't take them away.
- APIs must coexist peacefully with the platform, so do what is customary. It is almost always wrong to transliterate an API from one platform to another.
- Consider the performance consequences of API design decisions ...
@Sanjay: Good point about simply recording length. It would eschew random-write (not proposing it, only mentioning since it was cited earlier), but a feature like that would require significant other changes so integration with hardlinks could be deferred until if/when that's implemented. If snapshots are implemented using COW-hardlinks, then we should consider duplicating the inode to preserve all metadata, ie. not just length at the time of snapshot.
We should consider two kinds of hard-links: normal and COW. COW-HardLinks are easy since HDFS only allows append and hence one needs to simply record the length.
> ... hard links .. are very hard to support when the namespace is distributed
There are many things that are hard in a distributed namenode, For example rename is also hard - i recall discussing the challenges of renames in distributed nn with Konstantine. Do we remove such things from hdfs?
Maybe I'm missing something here...
Backup itself only becomes safe if HDFS (not HBase) promises to never modify a file once it is closed. Otherwise, a process that accidentally writes into the hard-linked file will corrupt "both" copies
At least for the HBase case, if we set the file permissions to be 744, you will only have an hbase process that could mess up the file (which it won't do once we close the file) and then an errant process can only slow down other reader processes. That would make it sufficient at least for HBase backups, but clearly not for general HDFS backups.
I understand hardlinks likely aren't meant to be. However I'd like to point out:
- Hardlinks cannot be implemented at a library level. The n-many directory entries must be able to reference the same inode, which unlike symlinks, are not bound by the permissions used to access any other of the paths to the hardlink. Filesystem support is required.
- Hardlinks shouldn't rule out the possibility of random-write (not suggesting it, it was brought up earlier). There may need to be some changes to the lease manager to apply the lease to the underlying inode instead of path.
- Hardlinks for backup aren't sufficient except by convention. That's where snapshots using hardlinks+COW blocks is interesting. COW blocks also open the door to zero-write copies.
Hardlinks would be used for temporary snapshotting (not to hold the backup itself).
Anyway... Since there's strong opposition to this, at Salesforce we'll either come up with something else, maintain local HDFS patches, or use a different file system.
The fact that HBase wants to use hard-links for backup does not make the backup itself safe. Backup itself only becomes safe if HDFS (not HBase) promises to never modify a file once it is closed. Otherwise, a process that accidentally writes into the hard-linked file will corrupt "both" copies. Simple having HBase say "oh, but we never modify this file via HBase" is not strong enough. The backup has to be absolutely immutable.
So the use-case here requires a commitment from HDFS to never be able to either append or ever write into an existing file. So it means no chance of random-write or NFS support.
Hardlinks are of similar nature. They are hard to support if the namespace is distributed.
FWIW Ceph also punts on distributed hardlinks and just puts them into a single node "because they are not commonly used and not likely to be hot or large" (paraphrasing). Conceptually, you could do it with 2PC across nodes, which should be fine as long as the namespace isn't sharded too highly - +1000s of nodes hosting hardlink information (again, not too many hardlinks).
From an HBase perspective, hardlink count could become large (~equal number of hfiles), but that isn't going to be near the number of files overall currently in HDFS. Maybe punt on the issue until it becomes a problem, keeping it flexible behind an interface?
> the key question: What services should a file system provide?
Exactly so. I would clarify it as: What functions should be a part of the file system API and what should be a library function.
> The same argument could be made for symbolic links. The application could implement those (in fact it's quite simple).
"Simple" is the key point here. Simple functions should be fs APIs. Hard functions should go into libraries.
Darin, you are right there is a lot of overlap, and yes hardlinks simplify building snapshots, but you are just pushing the complexity on HDFS layer. This does not change the difficulty of the problem.
We relaxed posix semantics in many aspects in HDFS for simplicity and performance. Imagine how much easier life would be with random writes or multiple writers. You are not asking for it, right?
Hardlinks are of similar nature. They are hard to support if the namespace is distributed. They should not be HDFS API, but they could be a library function.
I see a lot of overlap between hard links and snapshots. Conceptually, a snapshot is composed of hardlinks with COW semantics for file's metadata and last partial block. Hardlinks would also be a very easy way to implement zero-write copies. Streaming the bytes down and back up via the client isn't very efficient.
This is a good discussion.
Couple of points:
Or provide use cases which cannot be solved without it.
This seems to be the key question: What services should a file system provide?
The same argument could be made for symbolic links. The application could implement those (in fact it's quite simple).
but they are very hard to support when the namespace is distributed
But isn't that an implementation detail, which should not inform the feature set?
Hardlinks could be only supported per distinct namespace (namespace in federated HDFS or a volume in MapR - I think). This is not unlike Unix where hardlinks are per distinct filesystem (i.e. not across mount points).
@M.C. Srivas:
If you create 15 backups without hardlinks you get 15 times the metadata and 15 times the data... Unless you assume some other feature such as snapshots with copy-on-write or backup-on-write semantics. (Maybe I did not get the argument)
Immutable files are very a common and useful design pattern (not just for HBase) and while not strictly needed, hardlinks are very useful together with immutable files.
Just my $0.02.
@Karthik: using hard-links for backup accomplishes exactly the opposite. The expectation with a correctly-implemented hardlink is that when the original is modified, the change is reflected in the file, no matter which path-name was used to access it. Isn't that exactly the opposite effect of what a backup/snapshot is supposed to do? Unless of course you are committing to never ever being able to modify a file once written (although that would be viewed by most as a major step backwards in the evolution of Hadoop).
Another major problem is the scalability of the NN gets reduced by a factor of 10. (ie, your cluster can now hold only 10 million files instead of the 100 million which it used to be able to hold). Imagine someone doing a backup every 6 hours. Let's say the backups are to be retained as follows: 4 for the past 24 hrs, 1 daily for a week, and 1 per week for 1 month. Total: 4 + 7 + 4 = 15 backups, ie, 15 hard-links to the files, one from each backup. So each file is pointed to by 15 names, or, in other words, the NN now holds 15 names instead of 1 for each file. I think that would reduce the number of files held by the cluster practically speaking by a factor of 10, no?
Thirdly, hard-links don't work with directories. What is the scheme to back up directories? (If this scheme only usable for HBase backups and nothing else, then I agree with Konstantin that it belongs in the HBase layer and not here)
> I would recommend finding a different approach to implementing snapshots than adding this feature.
I agree with Srivas, hard links seem easy in single-NameNode architecture, but they are very hard to support when the namespace is distributed, because if links to a file belong to different nodes you cannot just lock the entire namespace and do atomic cross-node linking / unlinking.
I also agree with Srivas that hard links in traditional file systems cause more problems than add value.
Looking at the design document I see that you create sort of internal symlinks called INodeHardLinkFile pointing to HardLinkFileInfo, representing the actual file. This can be modeled by symlinks on the application (HBase) level without making any changes in HDFS.
I strongly discourage bringing this feature inside HDFS.
Or provide use cases which cannot be solved without it.
Sorry some combination of buttons lead to reassigning. Assigning back.
When users run "cp" in the linux file system against hard linked files, it will copy the bytes, right?
cp -a preserves hard links; cp -r breaks them (duplicates the bytes).
I think we shall keep the same semantics here as well.
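For anyone who wants to verify the cp behavior described above, it is easy to reproduce locally (GNU coreutils assumed):

```shell
set -e
d=$(mktemp -d)
mkdir "$d/src"
echo data > "$d/src/a"
ln "$d/src/a" "$d/src/b"      # a and b now share one inode (link count 2)
cp -r "$d/src" "$d/copy_r"    # -r duplicates the bytes: links are broken
cp -a "$d/src" "$d/copy_a"    # -a preserves hard links within the copied set
stat -c %h "$d/copy_r/a"      # prints 1
stat -c %h "$d/copy_a/a"      # prints 2
rm -rf "$d"
```

The `%h` format of stat prints the hard-link count, so the two copies make the difference visible directly.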
I don't think it's a good idea to pretend that we can or should preserve every corner case of the semantics of POSIX hard links. The Unix hard link was originally a historical accident of the inode/dentry structure of the filesystem, preserved because it's useful and has been heavily relied upon by users of the Unix api. The implementation in something like ZFS or btrfs is pretty far away from the original simplicity.
Since we don't have API compatibility with Unix and our underlying structure is deeply different, it's a good idea to borrow the good ideas but take a practical eye to where it makes sense to diverge.
Good point, Lars.
When users run "cp" in the linux file system against hard linked files, it will copy the bytes, right?
I think we shall keep the same semantics here as well.
In terms of optimization, the upper level application shall have the knowledge when to use hardlink in the remote/destination DFS instead of copying the bytes between two clusters.
Thanks Liyin. Sounds good.
One thought that occurred to me since: We need to think about copy semantics. For example how will distcp handle this? It shouldn't create a new copy of a file for each hardlink that points to it, but rather just copy it at most once and create hardlinks for each following reference. But then what about multiple distcp commands that happen to cover hardlinks to the same file? I suppose in the case we cannot be expected to avoid multiple copies of the same file (but at most one copy for each invocation of distcp, and only if the distcp happens to cover a different hardlink).
I planned to break this feature into several parts:
1) Implement the new FileSystem API: hardLink based on the INodeHardLinkFile and HardLinkFileInfo class. Also handle the deletion properly.
2) Handle the DU operation and quota update properly.
3) Update the FSImage format and FSEditLog.
I have finished the part 1 but still work on part 2.
Do you have a preliminary patch to look at?
Hi Lars, we are still working on this feature. It may take a while to take care of all the cases, especially the quota updates and fsimage format change.
Is anybody working on a patch for this?
If not, I would not mind picking this up (although I can't promise getting to this before the end of the month)..
Sanjay, POSIX says that a user cannot open a file unless they have permissions to traverse the entire path from / to the file. The problem is that if a file has two paths (as in a hard-link), permissions become very hard to enforce since a file does not know which dir is its parent. Imagine a rename of a file with many hard-links across to a new dir. This problem is harder in a distributed file system if you wish to spread the meta-data. Note that the enforcement happens automatically with symbolic links. As you point out, with MapR we could implement hard-links within a volume, but chose not to and instead implemented only symlinks. (I personally find symlinks to be more flexible).
- I see the additional complexity in quotas because HDFS quotas are directory based (like several file systems). I think this is addressable if we double count the quotas along both paths.
- Permissions are not a problem since the file retains the file permissions and both paths to the file offer their own permissions.
- I don't understand Srivas's rename example.
Srivas, in MapR I suspect that renames are only allowed within a volume and such hard links would be supported only within a volume. Can you explain the problem with some more details.
Can the hard linked files be reopened for append?
It only allows hard links to the closed files.
Is the proposal to allow hard links to only files or to files and directories?
@M.C.Srivas, I am afraid that I didn't quite understand your concerns.
1) Hardlinked files are supposed to have the same permission as the source file.
2) Each INodeFile does have a parent pointer to its parent in HDFS. Also, which lock are you talking about exactly (from the implementation's perspective)?
@Daryn Sharp:
Totally agreed
From the security perspective, users should be responsible for setting the correct permission to protect themselves. In this case, users should ONLY grant the EXECUTE permission to the trusted users for hardlinking.
Would you mind giving me an exact example for your concerns about quotas? I would be very happy to explain it in details
I fully agree that POSIX and/or Linux conventions should ideally be followed.
I look forward to your thoughts.
Creating hard-links in a distributed file-system will cause all kinds of future problems with scalability. Hard-links are rarely used in the real world because of all the associated bizarre problems. E.g., consider a hardlink setup as follows:
link1: /path1/dirA/file
link2: /path2/dirB/file.
I would recommend finding a different approach to implementing snapshots than adding this feature.
Another consideration is ds quota is based on a multiple of replication factor, so who is allowed to change the replication factor since increasing it may impact a different user's quota?
Generally, when a user creates a hardlink in Linux, it requires the EXECUTE permission for the source directory and WRITE_EXECUTE permission for the destination directory. And it is a well-known issue that hard links on Linux can create local DoS vulnerabilities and security problems, especially when a malicious user keeps creating hard links to other users' files and lets others run out of quota. One of the solutions to prevent this problem is to set the permission of the dir correctly.
HDFS hardlink should follow the same permission requirements as a general Linux FS and only allow trusted users or groups with the right permission to create hardlinks. The same security principle shall apply for the setReplication operation, which can be treated as a normal write operation in a general Linux FS.
Thanks Daryn Sharp so much for the above discussion.
It really helps us to re-visit several design issues and improve the solutions. I will update the design doc later.
Based on the same example you commented on, when linking /dir/dir2/file and /dir/dir3/hardlink, it will increase the dsquota for dir3 but not /dir, because dir3 is NOT a common ancestor but dir is. And if dir3 doesn't have enough dsquota, then it shall throw quota exceptions. Also, if there is a /dir/dir4/hardlink2, it absorbs the dsquota for dir4 as well. So the point is that it only absorbs the dsquota at link creation time and decreases the dsquota at link deletion time.
From my understanding, the basic semantics for HardLink is to allow user create multiple logic files referencing to the same set of blocks/bytes on disks. So user could set different file level attributes for each linked file such as owner, permission, modification time.
Since these linked files share the same set of blocks, the block level setting shall be shared.
It may be a little confusing to distinguish the replication factor in HDFS between file-level attributes and block-level attributes.
If we agree that replication factor is a block-level attribute, then we shall pay the overhead (wait time) when increasing replication factor, just as increasing the replication factor against a regular file, and the setReplication operation is supposed to fail if it breaks the dsquota.
I'm glad you find my questions helpful!
Currently, at least for V1, we shall support the hardlinking only for the closed files and won't support to append operation against linked files, but it could be extended in the future.
A reasonable approach, but it may lead to user confusion. It almost begs for a immutable flag (ie. chattr +i/-i) to prevent inadvertent hard linking to files intended to be mutable.
Nonetheless, I'd suggest exploring the difficulties reconciling the current design of the namesystem/block management with your design. It may help avoid boxing ourselves into a corner with limited hard link support.
From my understanding, the setReplication is just a memory footprint update and the name node will increase actual replication in the background.
Yes, but the FsShell setrep command actively monitors the files and does not exit until the replication factor is what the user requested – as determined by the number of hosts per block. Another consideration is ds quota is based on a multiple of replication factor, so who is allowed to change the replication factor since increasing it may impact a different user's quota?
@Daryn Sharp: very good comments
1) Quota is the trickiest for the hard link.
For nsquota usage, it will be added up when creating hardlinks and be decreased when removing hardlinks.
For dsquota usage, it will only increase and decrease the quota usage for the directories which are not common ancestor directories of any linked files.
The bottom line is there is no such case that we need to increase any dsquota during the file removal operation. Because if the directory is a common ancestor directory, no dsquota needs to be updated, otherwise the dsquota has already been updated during the hard link created time.
2) You are right that each blockInfo of the linked files needs to be updated when the original file is deleted. I shall update the design doc to explicitly explain this part in details.
3) Currently, at least for V1, we shall support the hardlinking only for the closed files and won't support to append operation against linked files, but it could be extended in the future.
4) Very good point that hardlinked files shall respect the max replication factors. From my understanding, the setReplication is just a memory footprint update and the name node will increase actual replication in the background.
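To make point 1 above concrete, here is a small illustrative sketch of the "charge only non-common ancestors" rule (function names are invented for illustration; this is not HDFS code):

```python
# When a new hard link is created, only directories that are NOT already
# ancestors of an existing link absorb the file's disk-space usage; common
# ancestors were charged when the first link was created.
import os.path

def ancestors(path):
    """Yield every ancestor directory of a path, e.g. /a/b/f -> /a/b, /a, /."""
    d = os.path.dirname(path)
    while True:
        yield d
        if d == "/":
            return
        d = os.path.dirname(d)

def dirs_to_charge(existing_links, new_link):
    already_charged = set()
    for link in existing_links:
        already_charged.update(ancestors(link))
    return [d for d in ancestors(new_link) if d not in already_charged]

# Linking /dir/dir3/hardlink to a file that already has /dir/dir2/file:
# /dir/dir3 is charged; /dir and / are common ancestors, already charged.
print(dirs_to_charge(["/dir/dir2/file"], "/dir/dir3/hardlink"))
# -> ['/dir/dir3']
```

Deletion would run the same computation in reverse: only directories that stop being ancestors of any remaining link give the dsquota back.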
Thanks for uploading the design document.
Do you plan to support hardlink using FileContext? In the design document, I see FileSystem and FsShell being mentioned as client interface - hence the question.
While I really like the idea of hardlinks, I believe there are more non-trivial consideration with this proposed implementation. I'm by no means a SME, but I experimented with a very different approach awhile ago. Here are some of the issues I encountered:
I think the quota considerations may be a bit trickier. The original creator of the file takes the nsquota & dsquota hit. The links take just the dsquota hit. However, when the original creator of the file is removed, one of the other links must absorb the dsquota. If there are multiple remaining links, which one takes the hit?
What if none of the remaining links have available quota? If the dsquota can always be exceeded, I can bypass my quota by creating the file in one dir, hardlinking from my out-of-dsquota dir, then removing the original. If the dsquota cannot be exceeded, I can (maliciously?) hardlink from my out-of-dsquota dir to deny the original creator the ability to delete the file – perhaps causing them to be unable to reduce their quota usage.
Block management will also be impacted. The manager currently operates on an inode mapping (changing to an interface though), but which of the hardlink inodes will it be? The original? When that link is removed, how will the block manager be updated with another hardlink inode?
When a file is open for writing, the inode converts to under construction, so there would need to be a hardlink under construction. You will have to think about how other hardlinks are affected/handled. The case applies to hardlinks during file creation and appending.
There may also be an impact to file leases. I believe they are path based so leases will now need to be enforced across multiple paths.
What if one hardlink changes the replication factor? The maximum replication factor for all hardlinks should probably be obeyed, but now the setrep command will never succeed since it waits for the replication value to actually change.
Sanjay, you are right. HDFS hardlink is only a meta operation and no datanode is involved. In all our use cases, the source file may be deleted over time but its content can still be accessed through hardlinks.
@Sanjay, sorry that I misunderstood "the advantage over".
It is correct that keeping other linked files after deletion is the main advantage over symbolic links
<<The main advantage over symbolic links being that when the original link is deleted the 2nd one keeps the actual data from being deleted. Correct>>
Do you mean hard link instead of symbolic links? If the original link deleted, the symbolic link will be broken. But if one of the hard linked files is deleted, other linked files won't be affected.
You are right that the hard link stays on NN only
The main advantage over symbolic links being that when the original link is deleted the 2nd one keeps the actual data from being deleted. Correct?
Does the hard link stay on the NN or does it propagate to the actual blocks on the DN?
I believe it is not necessary to propagate the link to the DNs based on the use cases you have described.
Attached HDFS-HardLinks design doc and any comments are so welcome.
Another usecase in Hive is to copy one table/partition to another table/partition.
Ideally, we would like the following in Hive:
Copy Table T1 to T2.
The files under table location for T2 (say, /user/hive/warehouse/T2/0) can be a link to the corresponding file in table T1
(say, /user/hive/warehouse/T1/0).
Having said that, one of the requirements is that the data should be modified independently. So, if a new data is loaded into T1 (or T2),
those changes should not be visible to T2 (or T1 respectively).
Is this still active? The last entry seems to be on Sept 12th... has there been any progress?
While the underlying issue is HDFS, it seems that this is more about HBase than HDFS.
One comment... with respect to hard links over multiple name spaces... why?
I mean hardlinks should exist only within the same namespace, which would remove this roadblock.
If you were going between namespaces, then use a symbolic link.
Thx...
I'm trying to develop a program that stores a list of hw assignments in the form of "Date,Assignment name,grade". I have functions that add, remove, and delete from a list, but I would also like to search for and delete a particular date found in the list. From the Stack Overflow question "Splitting a string in C++", I've tested Evan Teran's method to split a string, but I'm unsure of how to implement it in my code.
List.h
Code:
#ifndef LIST_H
#define LIST_H
#include <string>
using namespace std;

class List{
private:
    typedef struct node{
        string hw;
        string grade;
        string date;
        node* next;
    }* Node;
    Node head;
    Node current;
    Node temp;
public:
    List();
    void AddAssign(string addData);
    void DeleteAssign(string delData);
    void PrintList();
};
#endif /* LIST_H */

List.cpp
Code:
#include <cstdlib>
#include <iostream>
#include <string>
#include "List.h"
using namespace std;

List::List(){
    head = NULL;
    current = NULL;
    temp = NULL;
}

/* @function - addAssign
 * @pre -
 * @post
 * @return - adds a list of elements to
 *           assignment list.
 */
void List::AddAssign(string addData){
    Node n = new node;
    n->next = NULL;
    n->hw = addData; //initializes hw
    if(head != NULL){
        current = head;
        while(current->next != NULL){ //next node is not at end of list
            current = current->next;  //go to next node
        }
        current->next = n; //next node is new
    }
    else{
        head = n; //newest node is head of list
    }
}

void List::DeleteAssign(string delData){
    Node delPtr = NULL;
    temp = head;
    current = head;
    while(current != NULL && current->hw != delData){
        temp = current;
        current = current->next;
    }
    if(current == NULL){
        cout << delData << " was not in the list\n";
        delete delPtr;
    }else{
        delPtr = current;
        current = current->next;
        temp->next = current;
        if(delPtr == head){
            head = head->next;
            temp = NULL;
        }
        delete delPtr;
        cout << "The Value " << delData << " was deleted\n";
    }
}

void List::PrintList(){
    current = head;
    while(current != NULL){ //is not last node
        cout << current->hw << endl;
        current = current->next;
    }
}

listmain.cpp
Code:
#include <cstdlib>
#include "List.h"
#include <string>
#include <iostream>
#include <list>
using namespace std;

int main(int argc, char** argv) {
    List assign;
    string date = "3/9/13";
    assign.AddAssign("3/4/13,Calculus,77");
    assign.AddAssign("3/7/13,Programming,88");
    assign.AddAssign("3/15/13,English,89");
    assign.AddAssign("3/23/13,Physics,78");
    cout << "Starting list..." << endl;
    assign.PrintList();
    return 0;
}
Today’s stupid mistake had me going in circles in my Web application project. The error message was:
The type or namespace name 'EntityDataSource' does not exist…
As the rest of the error message suggests, I must be missing an assembly reference. Nope, I’d already added in the references to Entity:
System.Data.Entity
System.Data.Entity.Design
After much frustration until after 1 a.m., I gave up and went to bed.
This morning I realized that what it wanted was the WEB Entity!
System.Web.Entity
System.Web.Entity.Design
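For Web Site projects, the equivalent fix goes in web.config; something along these lines (the version and public key token shown are for .NET 3.5 SP1 and are illustrative — verify them against your framework install):

```xml
<compilation debug="false">
  <assemblies>
    <add assembly="System.Data.Entity, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
    <add assembly="System.Web.Entity, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
  </assemblies>
</compilation>
```

EntityDataSource itself lives in System.Web.Entity, which is why referencing only the System.Data.Entity assemblies leaves the type unresolved.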
Sheesh! Am I the only one who makes these ridiculous errors or just the only one who posts them for all to read????
Ken
Great!
you saved my day...
I just made the same mistake. Thanks for posting the solution.
Thanks for this. I had assumed (in error) that it was under System.Web.UI.WebControls.
Thanks Ken...
It is really frustrating to get stuck in these kind of issues but at the same time very happy to get solution from your post...
Regards
Varun
Thanks for sharing brother.
2012 and still helped me, thanks
Spent the last 1.5 hours going wtf is wrong with this stupid thing?
You're not alone Ken!
Hey Ken..
Too Good Buddy..
Here it is 2:15 AM and your post worked as a life savor for me...
Agree with Jlu's comment. 2012 and still helpful, and will remain helpful for developers like us in the near future also
Hello, Linux Academy community. October has been an exciting month of learning about Cyber Security as part of Cyber Security Awareness month. However, as the month has come to an end, it’s time to check back in with our It’s Okay To Be New: Containers series.
We started our journey learning about the History of Container Technology and then followed it up with getting some hands-on command line time in our Docker Quickstart for Everyone. We were then lucky enough to be joined on our journey by an amazing Docker Captain by the name of Mike Irwin to discuss what Docker is, why we use containers, and understanding namespaces. However, now having a chance to take a step back and look at the journey, I feel that we may have missed a crucial step: discussing Linux Containers (LXC).
On the LXC website, LXC is defined “…as the well-known set of tools, templates, library, and language bindings. It’s pretty low level, very flexible, and covers just about every containment feature supported by the upstream kernel.” Being new to LXC, I had to re-read this definition a few times, and it left me wondering what exactly LXC is.
LXC is a userspace interface that runs where user processes run; it allows users to create and manage containers on their systems. For many in the industry, LXC is a middle ground between a chrooted environment and a full-fledged virtual machine. I've always found it easier to think of it as operating-system-level virtualization. The goal of OS virtualization is not the same as standard hardware virtualization; the goal is to allow us to create multiple isolated systems on a shared host. These isolated environments are referred to as containers. With LXC we can make use of Linux namespaces and cgroups to create containerized environments. LXC is paired with LXD, the Linux container daemon, which can be thought of as an extension to LXC. LXD exposes a REST API that connects to the LXC software library, allowing hosts to run multiple LXC containers while only using a single system daemon. This daemon can integrate with host-level security features as well as handle networking and data storage.
All this information is well and good, but as I always say, the best way to learn something is to get your hands dirty. So spin up an Ubuntu cloud server and take a chance at installing and playing with LXD.
Install LXC
This guide was written to run on Ubuntu. Since LXD is distributed as a snap, first remove any existing apt-installed LXC/LXD packages, then install LXD via snap:
sudo apt remove lxd lxd-client liblxc1 lxcfs --purge --yes
sudo snap install lxd
Configure LXC
lxd init
At this time, just use the default values for all questions.
Play with LXC
You can list available images on the images remote with:
(Note: You can think of the remote like an image repository.)
lxc image list images
You can create new containers with lxc launch:
lxc launch ubuntu:16.04 ubuntu-container
lxc launch images:centos/6 centos-container
Confirm they exist with:
lxc list
If you would like to learn more about LXC, or maybe more about the functionalities that Docker brings to our container environments, as well as where container orchestration such as Docker Swarm and Kubernetes fits into the picture, take some time and explore my new course, Essential Container Concepts.
This is the 4th in a 5-part series covering the basics of the SDK for Connector.
The ObjectProvider Configuration Files
This post is all about the configuration files that accompany each ObjectProvider file used by Connector. In this post we will look into the configuration files used by the SampleCustomer and SampleUofMSchedule ObjectProviders, and then we’ll look at an attribute of these files that needs to be set to properly deploy the Adapter.
What’s the deal with the ObjectProvider configuration files?
Each ObjectProvider used by your adapter must be accompanied by a configuration file that helps the adapter understand the types contained in the object. For example, a Customer object has an attribute of type Address. The configuration file for each ObjectProvider specifies these types using an XML-style definition set. These configuration files must be named appropriately as well. For example, a SampleUofMScheduleObjectProvider must be accompanied by a file named SampleUofMScheduleObjectProvider.config, placed in the ObjectConfig directory under the directory where the ObjectProvider file is located. This will allow the ObjectProvider configuration files to be automatically found and loaded by the Adapter when you initialize the Adapter after dropping the necessary files in the “Adapters” folder.
If you open up either of the ObjectProvider configuration files, you will notice that it is an XML file. The important thing to note is that all TypeNames must be defined or there will be errors. The TypeNames must also be spelled exactly as they are spelled where they are defined. In addition, the CLR type name (ClrTypeName) must be designated using your unique namespace where the reference to the Dynamics GP Web Service exists. Finally, note that you can modify these configuration files to reflect any customizations that have been made to the entities.
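To make the shape of these files concrete, here is a purely illustrative sketch. The element and attribute names below are assumptions for illustration only, not the real Connector schema (consult the SDK sample configuration files for the actual layout); the idea is simply that each TypeName used by the ObjectProvider is paired with a fully qualified ClrTypeName in the namespace that holds your web service reference:

```xml
<!-- Hypothetical sketch only: element and attribute names here are
     illustrative, not the actual Connector configuration schema. -->
<ObjectProviderConfiguration>
  <Types>
    <!-- Each TypeName maps to a CLR type in the namespace that holds
         the Dynamics GP Web Service reference. -->
    <Type TypeName="Customer"
          ClrTypeName="MyCompany.GPWebService.Customer" />
    <Type TypeName="Address"
          ClrTypeName="MyCompany.GPWebService.Address" />
  </Types>
</ObjectProviderConfiguration>
```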
Don’t the configuration files need to be copied to the output folders when you build the solution?
Yes, the configuration files need to be copied to the output folder when you build the solution. If these configuration files do not have the “Copy to Output Directory” file attribute set to “Copy Always”, they will not be copied to the output folders at each build. This could lead to outdated configuration files, or no configuration files at all, in the output folder at build time. Therefore, you must make sure that the attribute is set appropriately on all ObjectProvider configuration files. Below is a screenshot of the attribute and the proper setting:
Clear Type Name (ClrTypeName)?!?!?!?!?!
msdn.microsoft.com/…/ms229045.aspx
ClrTypeName = Common Language Runtime Type Name
This attribute allows Connector to determine the CLR type to be instantiated when synchronizing this record.
So, how can I map the UTCDateTime type of Dynamics AX in the ObjectProvider configuration files?

I want to write a create-or-update document (XML) containing something like this via Dynamics AX AIF:
….
<TestUTC localDateTime="2011-12-21T13:22:00" timezone="GMTPLUS0900SEOUL">2011-12-21T04:22:00Z</TestUTC>
….
In this case, I don't know how to write the ObjectProvider configuration file (.config).
Many Java language processors do not read Java. Instead they read the Java class file and build the symbol table and abstract syntax tree from the class file. The Java represented in the Java class file is already syntactically and semantically correct. As a result the authors of these tools avoid the considerable difficulty involved with implementing a Java front end.
The designers of the Java programming language did not have ease of implementation in mind when they designed the language. This is as it should be, since ease of use of the language is more important. One of the difficulties encountered in designing a Java front end which does semantic analysis is symbol table design. This web page provides a somewhat rambling discussion of the issues involved with the design of a Java symbol table.
The front end phase of a compiler is responsible for:
Parsing the source language to recognize correct programs and report syntax errors for incorrect language constructs. In the case of the BPI Java front end, this is done by a parser generated with the ANTLR parser generator. The output of the parser is an abstract syntax tree (AST) which includes all declarations that were in the source.
Reading declaration information in Java class files and, for a native Java compiler, building ASTs from the byte code stream. This also involves following the transitive closure of the classes required to define the root class. (Def: transitive closure - All the nodes in a graph that are reachable from the root. In this case the graph is the tree of classes that are needed to define all the classes read by the compiler).
Processing the declarations in the AST and class files to build the symbol table. Once they are processed the declarations are pruned from the AST.
The output of the front end is a syntactically and semantically correct AST where each node has a pointer to either an identifier (if it is a leaf) or a class type (if it is a non-terminal or a type reference like MyType.class).
The term "symbol table" is generic and usually refers to a data structure that is much more complex than a table (e.g., an array of structures). While symbols and types are being resolved, the symbol table must reflect the current scope of the AST being processed. For example, in the C code fragment below there are three variables named "x", all in different scopes.
static char x;

int foo()
{
    int x;

    {
        float x;
    }
}
Resolving symbols and types requires traversing the AST to process the various declarations. As the traversal moves through scope in the AST, the symbol table reflects current scope, so that when the symbol for "x" is looked up, the symbol in the current scope will be returned. The scoped structure of the symbol table is only important while symbols and types are being resolved. After names are resolved, the association between a name in the AST and its symbol can be found directly via a pointer.
Compilers for languages like Pascal and C, which have simple hierarchical scope, frequently use symbol tables that directly mirror the language scope. There is a symbol table for every scope. Each symbol table has a pointer to its parent scope. At the root of the symbol table hierarchy is the global symbol table, which contains global symbols and functions (or, in the case of Pascal, procedures). When a function scope is entered, a function symbol table is created. The function symbol table parent pointer points to the next scope "upward" in the hierarchy (either the global symbol table, or in the case of Pascal, an enclosing procedure or function). A block symbol table would point to its parent, which would be a function symbol table. Symbol search traverses upward, starting with the local scope and moving toward the global scope.
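The upward search can be sketched in a few lines. This is a language-agnostic illustration (written in Python for brevity, not taken from any particular compiler):

```python
class SymbolTable:
    """One table per scope; 'parent' points to the enclosing scope."""

    def __init__(self, parent=None):
        self.symbols = {}     # name -> symbol information
        self.parent = parent  # next scope "upward"; None for the global table

    def define(self, name, info):
        self.symbols[name] = info

    def lookup(self, name):
        # Search from the local scope toward the global scope.
        scope = self
        while scope is not None:
            if name in scope.symbols:
                return scope.symbols[name]
            scope = scope.parent
        return None  # global table searched: the symbol does not exist

# Mirrors the C fragment above: three variables named "x" in three scopes.
global_scope = SymbolTable()
global_scope.define("x", "static char x")

func_scope = SymbolTable(parent=global_scope)
func_scope.define("x", "int x")

block_scope = SymbolTable(parent=func_scope)
block_scope.define("x", "float x")

print(block_scope.lookup("x"))   # float x  (innermost scope wins)
print(func_scope.lookup("x"))    # int x
```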
The scope hierarchy is not needed once symbols and types have been resolved. However the local scope, for a method or a class remains important and the symbol tables for these local scopes must remain accessible to allow the compiler to iterate over all symbol in a given scope. For example, to generate code to allocate a stack frame when a method is called, the compiler must be able to find all the variables associated with the method. A Java compiler must be able to keep track of the members of a class, since these variables will be allocated in garbage collected memory.
Scope for most object oriented languages is more complicated than the scope for procedural languages like C and Pascal. C++ supports multiple inheritance and Java supports multiple interface definitions (multiple inheritance done right). The symbol table must also be efficient so compiler performance is not hurt by symbol table lookup in the front end. Symbol table design considerations for a Java compiler include:
Java has a large global scope, since all classes and packages are imported into the global name space. Global symbols must be stored in a high capacity data structure that supports fast (O(1)) lookup (a hash table, for example).
Java has lots of local scopes (classes, methods and blocks) that have relatively few symbols (compared to the global scope). Data structures that support fast high capacity lookup tend to introduce overhead (in either memory use or code complexity). This is overkill for the local scope. The symbol table for the local scopes should be implemented with a data structure that is simple and relatively fast (e.g., (O(log2 n))). Examples include balanced binary trees and skip lists.
The symbol table must be able to support multiple definitions for a name within a given scope. The symbol table must also help the compiler resolve the error cases where the same kind of symbol (e.g., a method) is declared more than once in a given scope.
In C names within a given scope must be unique. For example, in C a type named MyType and a function named MyType are not allowed. In Java names in a given scope are not required to be unique. Names are resolved by context. For example:
class Rose {
    Rose( int val ) {
        juliette = val;
    }

    public int juliette;
} // Rose

class Venice {
    void thorn() {
        garden = new Rose( 42 );
        Rose( 86 );
        garden.Rose( 94 );
    }

    Rose Rose( int val ) {
        garden.juliette = val;
        return garden;
    }

    Rose garden;
} // Venice
In this example there is a type named Rose, a Rose constructor, and a method named Rose that returns an object of type Rose. The compiler must know by context which is which. Also, note that the references to the Rose function and the garden type are references to objects declared later in the file.
Most of the symbol scope in Java can be described by a simple hierarchy where a lower scope points to the next scope up. The exception is the interface list that may be associated with a Java class. Note that interfaces may also inherit from super interfaces. The scopes in Java are outlined below:
    Global (objects imported via import statements)
    Parent Interface (this may be a list)
    Interface (there may be a list of interfaces)
    Parent class
    Class
    Method
    Block
The symbol table and the semantic analysis code that checks the Java AST returned by the parser must be able to resolve whether a symbol definition is semantically correct. The presence of multiple definitions for a given name (e.g., multiple definitions of a class member) are allowed. However, ambiguous symbol use is not allowed:
Java Language Specification (JLS) 8.3.3.3
A class may inherit two or more fields with the same name, either from two interfaces or from its superclass and an interface. A compile-time error occurs on any attempt to refer to any ambiguously inherited field by its simple name. A qualified name or field access expression that contains the keyword super (15.10.2) may be used to access such fields unambiguously.
Both a parent class and an interface place symbols defined in the class or interface in the local scope. In the example below the symbol x is defined in both bar and fu. This is allowed, since x is not referenced in the class DoD.
interface bar {
    int x = 42;
}

class fu {
    double x;
}

class DoD extends fu implements bar {
    int y;   // No error, since there is no local reference to x
}
If x is referenced in the class DoD, the compiler must report an error, since the reference to x is ambiguous.
class DoD extends fu implements bar {
    int y;

    DoD() {
        y = x + 1;   // Error, since the reference to x is ambiguous
    }
}
Similar name ambiguity can exist with inner classes defined in an interface and a parent class:
interface BuildEmpire {
    class KhubilaiKahn {
        public int a, b, c;
    }
}

class GengisKahn {
    class KhubilaiKahn {
        public double x, y, z;
    }
}

class mongol extends GengisKahn implements BuildEmpire {
    void mondo() {
        KhubilaiKahn TheKahn;   // Ambiguous reference to class KhubilaiKahn
    }
}
Java does not support multiple inheritance in the class hierarchy, but Java does allow a class to implement multiple interfaces or an interface to extend multiple interfaces.
Java Language Standard 9.3
It is possible for an interface to inherit more than one field with the same name (8.3.3.3). Such a situation does not in itself cause a compile-time error. However, any attempt within the body of the interface to refer to either field by its simple name will result in a compile-time error, because such a reference is ambiguous.
For example, in the code below key is ambiguous.
interface Maryland {
    String key = "General William Odom";
}

interface ProcurementOffice {
    String key = "Admiral Bobby Inman";
}

interface NoSuchAgency extends Maryland, ProcurementOffice {
    String RealKey = key + "42";   // ambiguous reference to key
}
When the semantic analysis phase looks up the symbol key, the symbol table must allow the semantic checking code to determine that there are two member definitions for key. The symbol table must only group like symbols in the same scope together (e.g., members with members and types with types). Symbols of unlike kinds (methods, classes and member variables) are not grouped together because they are distinguished by context.
Multiple definitions of a method do not cause a semantic error in Java, since there is no multiple inheritance. If a method of the same name is inherited from two interfaces, for example, the method must either be the same or must define an overloaded version of the method. If there is a local method with the same name and arguments (e.g., same type signature) as a method defined in a parent class, the local method will be in a "lower" scope and will override the definition of the parent.
Taking into account the issues discussed above, a Java symbol table must fulfill the following requirements:
Support for multiple definitions for a given identifier.
Fast lookup (O(1)) for a large global (e.g., package level) symbol base.
Relatively fast lookup (O(log2 n)) for local symbols (e.g., local to a class, method or block)
Support for Java hierarchical scope
Searchable by symbol type (e.g., member, method, class).
Quickly determine whether a symbol definition is ambiguous.
Languages like C can be compiled one function at a time. The global symbol table must retain the symbol information for the functions defined in the current file and their arguments. But other local symbol information can be discarded after the function is compiled. When the compiler has processed all the functions in a given .c file (and its referenced include files), all symbols can be discarded.
C++ can be compiled in a similar fashion. Class definitions are defined in header files (e.g., .h files) for each file (e.g., .C or .cpp file) that references an object. When the file has been processed all symbols can be discarded.
Java is more complicated. The Java compiler must read the Java symbol definitions for the class tree that is needed to define all classes referenced by the current class being compiled (the transitive closure of all the class hierarchy). In the case of the object containing the main method, this includes all classes referenced in the program.
In theory Java symbols could be discarded once all of the classes that references them are compiled. In practice this is probably more trouble than it is worth on a modern computer system with lots of memory. So Java symbols live throughout the compile.
Hierarchical scope in the symbol table only needs to be available during the semantic analysis phase. After this phase, all symbols (identifier nodes) will point to the correct symbol. However, once scope is built, it is left in place.
Each local scope (e.g., block, method or class) has a local symbol table which points to the symbol table in the enclosing scope. At the root of the hierarchy is the global symbols table containing all global classes and imported symbols. During semantic analysis symbol search starts with the local symbol table and searches upward in the hierarchy, searching each symbol table until the global symbol table is searched. If the global symbol table is searched and the symbol is not present, the symbol does not exist.
Java scope is not a simple hierarchy composed of unique symbols, as is the case with C. There may be multiple definitions for a symbol (e.g., a class member, a method and a class name). The symbols at a given scope level may come from more than one source. For example, in the Java code below the class gin and the interface tonic define symbols at the same level of hierarchy.
interface tonic {
    int water   = 1;
    int quinine = 2;
    int sugar   = 3;
    int TheSame = 4;
}

class gin {
    public int water, alcohol, juniper;
    public float TheSame;
}

class g_and_t extends gin implements tonic {
    class contextName {
        public int x, y, z;
    } // contextName

    public int contextName( int x ) {
        return x;
    }

    public contextName contextName;
}
Local variables in Java are variables in methods. These variables are allocated in a stack frame and have a "life time" that exists as long as the method is active. A method may also have local scope created by blocks or statements. For example:
class bogus {
    public void foobar() {
        int a, b, c;

        {   // this is a scope block
            int x, y, z;
        }
    }
}
Unlike C and C++, Java does not allow a local variable to be redeclared:
If a declaration of an identifier as a local variable appears within the scope of a parameter or local variable of the same name, a compile-time error occurs. Thus the following example does not compile:

JLS 14.3.2

class Test {
    public static void main( String[] args ) {
        int i;

        for (int i = 0; i < 10; i++)   // Error: local variable i is redeclared
            System.out.println(i);
    }
}
A local variable is allowed to redefine a class member. This makes variable redefinition a semantic check in the semantic analysis phase.
A forward reference is a reference to a symbol that is defined texturally later in the code.
When a class field is initialized, the initializer must have been previously declared and initialized. The following example (from JLS 6.3) results in a compile time error:
class Test {
    int i = j;   // compile-time error: incorrect forward reference
    int j = 1;
}
Nor is forward reference allowed for local variables. For example:
class geomancy {
    public float circleArea( float r ) {
        float area;

        area = pie * r * r;   // undefined variable 'pie'
        float pie = (float)Math.PI;
        return area;
    }
}
However, forward reference is allowed from a local scope (e.g., a method) to a class member defined in the enclosing class. For example, in the Java below the method getHexChar makes a forward reference to the class member hexTab:
class HexStuff {
    public char getHexChar( byte digit ) {
        digit = (byte)(digit & 0xf);
        char ch = hexTab[digit];   // legal forward reference to class member
        return ch;
    } // getHexChar

    private static char hexTab[] = new char[] {
        '0', '1', '2', '3', '4', '5', '6', '7',
        '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'
    };
} // HexStuff
The root compilation unit in Java is the package, either an explicitly named package or an unnamed package (e.g., the file containing the main method). All packages import the default packages which include java.lang.* and any other packages that may be required by the local system. The user may also explicitly import other packages.
When package A imports package B, package B provides:
If package B imports package X which contains the public class foo, the class foo is referred to via the qualified name X.foo.
Packages add yet another level of complexity to the symbol table. A package exists as an object that defines a set of classes, interfaces and sub-packages. Once a package has been read by the compiler, it does not need to be read again when subsequent import statements are encountered, since its definition is already known to the compiler.
The classes, interfaces and packages defined by a package are "imported" into the global scope of the current package. In the Java source, the type names defined in the imported package are referenced via simple names (JLS 6.5.4) and type names defined in the sub-packages of an imported package are referenced via qualified names. However, in the symbol table all type names have an associated fully qualified name.
Support for multiple definitions for a given identifier.
All symbols that share the same identifier at a particular scope level are contained in a container. As noted above, an identifier may name a class member, a method and a local class definition. There may also be multiple instances for a given kind of definition. For example, in the Java above there are two definitions for the class member TheSame. The container is searchable by identifier type (member, method or class) and it can quickly be determined whether there is more than one definition of a given type (leading to an ambiguous reference). If the object is named, the symbol will have a field that points to the symbol for its parent (e.g., a method or class). For a block this pointer will be null. Note that parent is not necessarily the parent scope. The symbols defined in the class gin and the interface tonic are in the same scope, but they may have different parents.
Fast global lookup
The global symbol table is implemented by a hash table with a large capacity (the hash table can support a large number of symbols without developing long hash chains).
Package information
Once a package is imported into the global scope, the package is not referenced again. The imported type names (classes and interfaces) are referenced as if they were defined in the current compilation unit (e.g., via simple type names). The sub-packages become objects in the global scope as well. Package type names and additional sub-packages are referenced via qualified names.
Package definitions are kept in a separate package table. Packages are imported into the global scope of the compilation unit from this table. Package information is live for as long as the main compilation unit is being compiled (e.g., throughout the compile process).
Local lookup
In general the number of symbols in a local Java scope is small. Local symbol lookup must be fast, but not as fast as the global lookup, since there will usually be fewer symbols.
I have considered three data structures for implementing the local symbol tables:
skip lists (see also Thomas Niemann's excellent web page on skip lists).
Red-Black Trees (a form of balanced binary tree)
Simple binary tree
For small symbol table sizes the search time does not differ much for these three data structures. The binary tree has the advantage of being the smallest and simplest algorithm, so it has been chosen for local symbol tables.
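As a sketch of that choice (illustrative Python, not the compiler's actual code), a local table can be nothing more than an unbalanced binary tree keyed on the identifier:

```python
class Node:
    def __init__(self, name, info):
        self.name, self.info = name, info
        self.left = self.right = None

class LocalSymbolTable:
    """Simple binary tree: O(log2 n) expected lookup, tiny code size."""

    def __init__(self):
        self.root = None

    def insert(self, name, info):
        if self.root is None:
            self.root = Node(name, info)
            return
        cur = self.root
        while True:
            if name < cur.name:
                if cur.left is None:
                    cur.left = Node(name, info)
                    return
                cur = cur.left
            else:
                if cur.right is None:
                    cur.right = Node(name, info)
                    return
                cur = cur.right

    def lookup(self, name):
        cur = self.root
        while cur is not None:
            if name == cur.name:
                return cur.info
            cur = cur.left if name < cur.name else cur.right
        return None
```

A balanced tree or skip list would guarantee the logarithmic bound, but for the handful of symbols in a typical method scope the plain tree is good enough.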
Support for Java hierarchical scope
Each symbol table contains a pointer to the symbol table in the next scope up.
Searchable by symbol type
The semantic analysis phase knows the context for the symbol it is searching for (e.g., whether the symbol should be a member, method or class). The symbol table hierarchy is searched by identifier and type.
Quickly determine whether a symbol definition is ambiguous
Multiple symbol definitions for a given type of symbol (e.g., two member definitions) are chained together. If the next pointer is not NULL, there are multiple definitions. The error reporting code can use these definitions to report to the user where the clashing symbols were defined.
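The chaining idea can be sketched like this (illustrative Python, not BPI's actual code): like symbols are grouped by (name, kind), and a definition list longer than one signals an ambiguous reference whose clashing origins the error reporter can walk:

```python
from collections import defaultdict

class ScopeSymbols:
    """Groups definitions by (name, kind); kind is 'member', 'method' or 'class'."""

    def __init__(self):
        self.defs = defaultdict(list)   # (name, kind) -> chained definitions

    def define(self, name, kind, origin):
        self.defs[(name, kind)].append(origin)

    def lookup(self, name, kind):
        chain = self.defs.get((name, kind), [])
        if len(chain) > 1:
            # Ambiguous: report every clashing definition to the user.
            raise LookupError("ambiguous reference to %s '%s': %s"
                              % (kind, name, ", ".join(chain)))
        return chain[0] if chain else None

# The 'key' example above: two interfaces both contribute a member named key
# to the same scope, while a method of the same name is a different kind.
scope = ScopeSymbols()
scope.define("key", "member", "interface Maryland")
scope.define("key", "member", "interface ProcurementOffice")
scope.define("key", "method", "class SomeClass")   # different kind: no clash
```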
All class member declarations are processed and entered into the symbol table before methods are processed. This allows references to class members within a method to be properly resolved.
Declarations in a method are processed sequentially. If a name referenced in a method has not been "seen", an error will be reported (e.g., Undefined name).
When a compilation unit (a package) is compiled, type and package information for all of the packages and classes that it references must be available. The Java Language Specification does not define exactly how this happens. The JLS states that compiled Java code may be stored in a database or in a directory hierarchy that mirrors the qualified names for imported packages and classes. Classes and packages must be accessible. The Java Virtual Machine Specification defines the information in a Java .class file, but it is silent on the issue of compile ordering. Although there is no specification for how Java should be compiled, there is "common practice". At least in the case of this design, "common practice" is based on Sun's javac compiler and Microsoft's Visual J++ compiler jvc.
When a compilation unit is compiled, all information about external classes referenced in the compilation unit is contained in .class files which are produced by compiling the associated Java code (usually stored in .java files). Class files may be packaged in .jar files, which are compressed, archived .class file hierarchies in zip file format. The .class or .jar files are located in reference to either the local directory or the CLASSPATH environment variable. For this scheme to work, file names must correspond to the associated type name (e.g., class FooBar is implemented by FooBar.java).
If, when searching for a type definition, the Java compiler finds only a .java file defining the type or the .java file has a newer time stamp (usually file date and time) than the associated Java .class file, the Java compiler will recompile the type definition.
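That decision reduces to a simple file-system check; here is the logic sketched in Python (an illustration of the rule, not code from either compiler):

```python
import os

def needs_recompile(java_path, class_path):
    """True if the type must be (re)compiled from its .java source."""
    if not os.path.exists(class_path):
        return True                      # only the .java file exists
    if not os.path.exists(java_path):
        return False                     # only the .class file exists
    # Both exist: recompile when the source is newer than the class file.
    return os.path.getmtime(java_path) > os.path.getmtime(class_path)
```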
While compiling the top level compilation unit, the Java compiler keeps track of package objects (where a package contains lists of types and sub-packages) imported by the compilation unit. Package type definitions that are not public are not kept by the compiler, since they cannot be seen outside the package.
Ian Kaplan, May 2, 2000
Revised most recently: May 31, 2000
back to Java Compiler Architecture page | http://www.bearcave.com/software/java/java_symtab.html | crawl-001 | refinedweb | 3,803 | 53.81 |
How to Make get() Method Request in Java Spring?
The Java language is one of the most popular programming languages. There are several advantages to using Java, whether for security purposes or for building large distributed projects. One advantage of using Java is that it tries to connect every concept in the language to the real world with the help of concepts such as classes, inheritance, polymorphism, etc.
There are several other concepts present in Java that make interaction between the code and the programmer friendlier, such as generics, access specifiers, and annotations; these features add extra properties to the classes and methods of a Java program. In this article, we will discuss the @GetMapping annotation.
The @GetMapping annotation is mainly used in Spring Boot applications to handle incoming HTTP GET requests from the client by matching the request URL against the annotation's path.
Syntax:
@GetMapping()
Parameters: the annotation takes a URL path expression
Now let us discuss how to initialize Spring in web projects. We will generate the project with Spring Initializr and then use an IDE to create a sample GET route.
Steps to initialize Spring in web projects
They are provided below in a sequential manner with visual aids as follows:
- Go to Spring Initializr
Fill in the details as per the requirements. For this application, use the values shown below:
Project: Maven Language: Java Spring Boot: 2.2.8 Packaging: JAR Java: 8 Dependencies: Spring Web
Step 1:.
Note: In the Import Project for Maven window, make sure you choose the same version of JDK which you selected while creating the project.
Step 3: Go to src->main->java->com.gfg.Spring.boot.app, create a Java class named Controller and add the annotation @RestController. Now create a GET API as shown below:
@RestController
public class Controller {

    @GetMapping("/get")
    public String home() {
        return "This is the get request";
    }

    @GetMapping("/get/check")
    public String home1() {
        return "This is the get check request";
    }
}
Step 4: This application is now ready to run. Run the SpringBootAppApplication class and wait for the Tomcat server to start.
Note: The default port of the Tomcat server is 8080 and can be changed in the application.properties file.
Step 5: Now go to the browser and enter the URL localhost:8080/get. Observe the output, and then do the same for localhost:8080/get/check
Geeks, we are done with the GET method request in Spring, as seen from the output above. These were all the steps.
I've been trying to learn C++ for over two weeks now, hoping to get into Win32 APIs soon, but simple stuff is stumping me. I know all about data types and basic stuff, but I just don't seem to get it. I think my code looks alright; I didn't organize it here, but that's not what I'm worried about. I was wondering: is this the best way to write this function?
Code:
// xwxw.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>

void getnumber();

using namespace std;

int number;

int main()
{
    cout << "Give me the number:";
    cin >> number;
    getnumber();
    cin.get();
    cin.ignore();
}

void getnumber()
{
    int x = 5;
    int b = number;
    while (x < b)
    {
        cout << x << endl;
        x++;
    }
    cin.get();
    cin.ignore();
}
Sockets allow a listener server to listen for clients that would like to connect to a specific socket (an IP address combined with a port). As the clients connect, the server can send data to them at any time, and the clients can send data to the server as well. Data flows in both directions, which allows the server to push data if desired. The .NET framework provides direct support for using sockets through the System.Net.Sockets namespace and provides classes such as Socket and TcpListener that can be used to create a server application. The following image shows a Silverlight 2 application built around the concept of sockets that allows a server application to push data down to a client that displays the score of a basketball game (click the image to view an animated gif that shows the application in progress):
The game is simulated on the server and as each team scores the update is pushed to the client so that the score and action that occurred can be displayed. shown next listens on port 4530 which is in the range Silverlight 2 allows.
Here's the code that starts up the TcpListener (which sends team data down to the client) and starts the timer that simulates the game:

[Server startup code listing omitted.]
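The shape of that server (a listener bound to port 4530 that accepts clients as they connect and can push score updates to each of them at any time) can be sketched as follows. This is an illustrative Python stand-in for the C# TcpListener code, not the original listing; the message format is an assumption:

```python
import socket
import threading

class ScorePushServer:
    """Accepts clients in the background and pushes updates to all of them."""

    def __init__(self, host="127.0.0.1", port=4530):
        self._clients = []
        self._lock = threading.Lock()
        self._listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._listener.bind((host, port))
        self._listener.listen()
        self.port = self._listener.getsockname()[1]
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        # Keep accepting clients as they connect, like BeginAcceptTcpClient.
        while True:
            try:
                conn, _ = self._listener.accept()
            except OSError:
                return                      # listener was closed
            with self._lock:
                self._clients.append(conn)

    def push(self, message):
        """Send a score update (e.g. 'HOME,2') to every connected client."""
        with self._lock:
            for conn in self._clients:
                conn.sendall((message + "\n").encode("ascii"))

    def close(self):
        self._listener.close()
```

A client simply connects and reads lines as they arrive; the server decides when data flows, which is the "push" part of the design.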
There's quite a bit more code for the server application which you can download here. The code shown above is really all that's involved for creating a server that listens for clients on a specific IP address and port. In the next post on this topic I'll cover the client-side and discuss how Silverlight can connect to a server using sockets.
Silverlight 2 has built-in support for sockets
Cool work and interesting. Question> Is there a way to push real objects via sockets
instead of just strings? like in WCF contracts, i have customer objects that can be pushed regularly ?
Thanks,
Gopi
Gopinath,
Great question. You can push bytes so anything should be fair game over the wire. On the client-side the Assembly.Load() method is also available it looks like. Having never tried it, I'm not sure how the security works there and if Silverlight would allow that or not, but it looks like the players to make it happen are there. If you end up trying it please post a comment as I'd be interested in knowing how it works out.
You've been kicked (a good thing) - Trackback from DotNetKicks.com
In Part 1 of this two part series on socket support in Silverlight 2 I discussed how a server could be
Good article. Is there any reason why you are using async BeginAcceptTcpClient combined with a ManualResetEvent? I mean: Wouldn't do
while (true) { Socket client = _listener.AcceptSocket(); .... }
the same?
Regards
Neil,
There are definitely other ways to do it. I tried to keep it somewhat similar to the Silverlight side of things since everything there follows the BeginXXX/EndXXX pattern. And...I kind of like ManualResetEvent for some reason since it makes it really clear when a thread is blocked and when it's available to handle new connections. One of those personal choice things. :-)
Post: Approved at: May-10-2008 Using Sockets with Silverlight Here are a couple good articles: Pushing
Pingback from Multi-Threading, Silverlight Sockets & Visualisation Part 2 « Fluent.Interface
Pingback from udp and tcp sockets
Silverlight 2 provides built-in support for sockets which allows servers to push data to Silverlight
Silverlight provides several different ways to access data stored in remote locations. Data can
Pingback from Pushing Data to a Silverlight Client with a WCF Duplex Service - Part I @ ZDima.net
Pingback from Michael Sync » Silverlight 2 beta1 - Best of SilverlightCream
,...
Pingback from Asp.Net Webpage to Silverlight Control Interaction « Rich Internet Applications
Stall Status is a Silverlight-based Vista Sidebar Gadget that uses the Z-Wave wireless protocol
Stall Status — это сделанное в Silverlight мини-приложение для боковой панели Vista, в котором применяется | http://weblogs.asp.net/dwahlin/archive/2008/04/10/pushing-data-to-a-silverlight-client-with-sockets-part-i.aspx | crawl-002 | refinedweb | 676 | 60.04 |
getsockname(2) BSD System Calls Manual getsockname(2)
NAME
getsockname -- get socket name
SYNOPSIS
#include <sys/socket.h> int getsockname(int socket, struct sockaddr *restrict address, socklen_t *restrict address_len);
DESCRIPTION
The getsockname() fynction returns the current address for the specified socket. The address_len parameter should be initialized to indicate the amount of space pointed to by address. On return it contains the actual size of the address returned (in bytes). The address is truncated if the buffer provided is too small.
RETURN VALUES
The getsockname() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
ERRORS
The getsockname() system call will succeed unless: [EBADF] The argument socket is not a valid file descriptor. [EFAULT] The address parameter points to memory not in a valid part of the process address space. [EINVAL] socket has been shut down. [ENOBUFS] Insufficient resources were available in the system to perform the operation. [ENOTSOCK] The argument socket is not a socket (e.g., a plain file). [EOPNOTSUPP] getsockname() is not supported for the protocol in use by socket.
SEE ALSO
bind(2), socket(2)
BUGS
Names bound to sockets in the UNIX domain are inaccessible; getsockname() returns a zero-length address.
HISTORY
The getsockname() call appeared in 4.2BSD. 4.2 Berkeley Distribution June 4, 1993 4.2 Berkeley Distribution
Mac OS X 10.9.1 - Generated Mon Jan 6 06:06:26 CST 2014 | http://manpagez.com/man/2/getsockname/ | CC-MAIN-2018-34 | refinedweb | 244 | 50.12 |
represent static atoms in style system code using an index into the static atom table, rather than a pointer
RESOLVED FIXED in Firefox 66
Status
()
P3
normal
People
(Reporter: heycam, Assigned: heycam)
Tracking
Firefox Tracking Flags
(firefox66 fixed)
Details
Attachments
(2 attachments, 3 obsolete attachments)
For bug 1474793, I'll need a static representation for static atoms, since the nsStaticAtom* pointer values are not known until run time. The patches here will add a handle value to nsStaticAtom (which is just its index into the nsGkAtoms static atom table), and change the Atom type in style system code to use that handle value rather than the nsAtom*. On 64 bit we opportunistically store the static atom hash in that handle value too, so that we don't have to read out the hash from the nsAtom.
This will be used in the next patch to switch the style system's canonical representation for atoms to be either a static atom handle value or a dynamic atom pointer. MozReview-Commit-ID: 2F5xhtwKTgh
This will allow static atom values to be used cross-process without needing to translate from one process's atom pointer value to another's. MozReview-Commit-ID: AEPoICQi2sM Depends on D9233
> For bug 1474793, I'll need a static representation for static atoms, since the nsStaticAtom* pointer values are not known until run time. Am I misunderstanding? AFAIK this sentence is incorrect. Static atoms have had a static representation for some time now.
This works, and I think it's a bit less intrusive, plus it doesn't require branching for Atom::deref. I moved the gGkAtoms thing outside of the mozilla::detail namespace because I figured it was slightly better, though I can see the argument against it as well. I can undo that change and go back at cfg_if plus figuring out the right symbols, or... Actually bindgen can generate the right thing already, it's just an oversight the fact that it outputs `static mut` instead of plain `static`. An alternative path forward (maybe preferable to avoid exposing the un-namespaced `gGkAtoms`) is to fix bindgen, update it in Gecko, and use it to get the gGkAtoms symbol with the right qualification. I've submitted a fix here: So should be possible as well. Just let me know if you want to wait for that to land this or not.
(In reply to Nicholas Nethercote [:njn] from comment #3) > > For bug 1474793, I'll need a static representation for static atoms, since the nsStaticAtom* pointer values are not known until run time. > > Am I misunderstanding? AFAIK this sentence is incorrect. Static atoms have > had a static representation for some time now. I mean, a static representation that is the identical in all processes. Per some IRL discussion previously, I think we need to go with something like my patches rather than the comment 4 patch, since otherwise we won't have comparable static atom values across processes. After a few days of struggling, today's rebase and try push of my patches seem much happier on Talos for whatever reason: The tsvr_opacity regression isn't really a regression. Checking the graph over the past 30 days, this test seems to be bi-modal, and the test runs with my patches applied happen to fall on the higher of the two times.
Depends on D15078
Pushed by cmccormack@mozilla.com: Use atom handles in favour of atom pointers in style system code r=emilio Make GkAtoms opaque to avoid lld-link.exe errors r=emilio
Status: ASSIGNED → RESOLVED
Last Resolved: 4 months ago
status-firefox66: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla66 | https://bugzilla.mozilla.org/show_bug.cgi?id=1500362 | CC-MAIN-2019-22 | refinedweb | 605 | 58.32 |
Buy this book at Amazon.com
As usual, you should at least attempt the following exercises
before you read my solutions.
Write a program that reads a file, breaks each line into
words, strips whitespace and punctuation from the words, and
converts them to lowercase.
Hint: The string module provides strings named whitespace,
which contains space, tab, newline, etc., and punctuation which contains the punctuation characters. Let’s see
if we can make Python swear:
>>> import string
>>> print string.punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
Also, you might consider using the string methods strip,
replace and translate.
Go to Project Gutenberg () and download
your favorite out-of-copyright book in plain text format.
Modify your program from the previous exercise to read the book
you downloaded, skip over the header information at the beginning
of the file, and process the rest of the words as before.
Then modify the program to count the total number of words in
the book, and the number of times each word is used.
Print the number of different words used in the book. Compare
different books by different authors, written in different eras.
Which author uses the most extensive vocabulary?
Modify the program from the previous exercise to print the
20 most frequently-used words in the book.
Modify the previous program to read a word list (see
Section 9.1) and then print all the words in the book that
are not in the word list. How many of them are typos? How many of
them are common words that should be in the word list, and how
many of them are really obscure?.
Making a program truly nondeterministic turns out to be not so easy,
but there are ways to make it at least seem nondeterministic. One of
them is to use algorithms that generate pseudorandom numbers.
Pseudorandom numbers are not truly random because they are generated
by a deterministic computation, but just by looking at the numbers it
is all but impossible to distinguish them from random.
The random module provides functions that generate
pseudorandom numbers (which I will simply call “random” from
here on).
The function random returns a random float
between 0.0 and 1.0 (including 0.0 but not 1.0). Each time you
call random, you get the next number in a long series. To see a
sample, run this loop:
import random
for i in range(10):
x = random.random()
print x
The function randint takes parameters low and
high and returns an integer between low and
high (including both).
>>> random.randint(5, 10)
5
>>> random.randint(5, 10)
9
To choose an element from a sequence at random, you can use
choice:
>>> t = [1, 2, 3]
>>> random.choice(t)
2
>>> random.choice(t)
3
The random module also provides functions to generate
random values from continuous distributions including
Gaussian, exponential, gamma, and a few more.
Write a function named choose_from_hist that takes
a histogram as defined in Section 11.1 and returns a
random value from the histogram, chosen with probability
in proportion to frequency. For example, for this histogram:
choose_from_hist
>>> t = ['a', 'a', 'b']
>>> hist = histogram(t)
>>> print hist
{'a': 2, 'b': 1}
your function should return ’a’ with probability 2/3 and ’b’
with probability 1/3.
You should attempt the previous exercises before you go on.
You can download my solution from. You will
also need.
Here is a program that reads a file and builds a histogram of the
words in the file:
import string
def process_file(filename):
hist = dict()
fp = open(filename)
for line in fp:
process_line(line, hist)
return hist
def process_line(line, hist):
line = line.replace('-', ' ')
for word in line.split():
word = word.strip(string.punctuation + string.whitespace)
word = word.lower()
hist[word] = hist.get(word, 0) + 1
hist = process_file('emma.txt')
This program reads emma.txt, which contains the text of Emma by Jane Austen.
process_file loops through the lines of the file,
passing them one at a time to process_line. The histogram
hist is being used as an accumulator.
process_file
process_line
process_line uses the string method replace to replace
hyphens with spaces before using split to break the line into a
list of strings. It traverses the list of words and uses strip
and lower to remove punctuation and convert to lower case. (It
is a shorthand to say that strings are “converted;” remember that
string are immutable, so methods like strip and lower
return new strings.)
Finally, process_line updates the histogram by creating a new
item or incrementing an existing one.
To count the total number of words in the file, we can add up
the frequencies in the histogram:
def total_words(hist):
return sum(hist.values())
The number of different words is just the number of items in
the dictionary:
def different_words(hist):
return len(hist)
Here is some code to print the results:
print 'Total number of words:', total_words(hist)
print 'Number of different words:', different_words(hist)
And the results:
Total number of words: 161080
Number of different words: 7214
To find the most common words, we can apply the DSU pattern;
most_common takes a histogram and returns a list of
word-frequency tuples, sorted in reverse order by frequency:
most_common
the 5205
and 4897
of 4295
i 3191
a 3130
it 2529
her 2483
was 2400
she 2364
We
The first parameter is required; the second is optional.
The default value of num is 10.
If you only provide one argument:
print_most_common(hist)
num gets the default value. If you provide two arguments:
print_most_common(hist, 20)
num gets the value of the argument instead. In other
words, the optional argument overrides the default value.
If a function has both required and optional parameters, all
the required parameters have to come first, followed by the
optional ones.
Finding the words from the book that are not in the word list
from words.txt is a problem you might recognize as set
subtraction; that is, we want to find all the words from one
set (the words in the book) that are not in another set (the
words in the list).
subtract takes dictionaries d1 and d2 and returns a
new dictionary that contains all the keys from d1 that are not
in d2. Since we don’t really care about the values, we
set them all to None.
def subtract(d1, d2):
res = dict()
for key in d1:
if key not in d2:
res[key] = None
return res
To find the words in the book that are not in words.txt,
we can use process_file to build a histogram for
words.txt, and then subtract:
words = process_file('words.txt')
diff = subtract(hist, words)
print "The words in the book that aren't in the word list are:"
for word in diff.keys():
print word,
Here are some of the results from Emma:
The words in the book that aren't in the word list are:
rencontre jane's blanche woodhouses disingenuousness
friend's venice apartment ...
Some of these words are names and possessives. Others, like
“rencontre,” are no longer in common use. But a few are common
words that should really be in the list!
Python provides a data structure called set that provides many
common set operations. Read the documentation at and
write a program that uses set subtraction to find words in the book
that are not in the word list. Solution:.:
Write a program that uses this algorithm to choose a random
word from the book. Solution:.
If you choose words from the book at random, you can get a
sense of the vocabulary, you probably won’t get a sentence:
this the small regard harriet which knightley's it most things
A series of random words seldom makes sense because there
is no relationship between successive words. For example, in
a real sentence you would expect an article like “the” to
be followed by an adjective or a noun, and probably not a verb
or adverb.
One way to measure these kinds of relationships is Markov
analysis, which
characterizes, for a given sequence of words, the probability of the
word that comes next. For example, the song Eric, the Half a
Bee begins:?
In this text,
the phrase “half the” is always followed by the word “bee,”
but the phrase “the bee” might be followed by either
“has” or “is”.
The result of Markov analysis is a mapping from each prefix
(like “half the” and “the bee”) to all possible suffixes
(like “has” and “is”).
Given this mapping, you can generate a random text by
starting with any prefix and choosing at random from the
possible suffixes. Next, you can combine the end of the
prefix and the new suffix to form the next prefix, and repeat.
For example, if you start with the prefix “Half a,” then the
next word has to be “bee,” because the prefix only appears
once in the text. The next prefix is “a bee,” so the
next suffix might be “philosophically,” “be” or “due.”
In this example the length of the prefix is always two, but
you can do Markov analysis with any prefix length. The length
of the prefix is called the “order” of the analysis.
Markov analysis:
He was very clever, be it sweetness or be angry, ashamed or only
amused, at such a stroke. She had never thought of Hannah till you
were never meant for me?" "I cannot make speeches, Emma:" he soon cut
it all himself.
For this example, I left the punctuation attached to the words.
The result is almost syntactically correct, but not quite.
Semantically, it almost makes sense, but not quite.
What happens if you increase the prefix length? Does the random
text make more sense?
Credit: This case study is based on an example from Kernighan and
Pike, The Practice of Programming, Addison-Wesley, 1999.
You should attempt this exercise before you go on; then you can can
download my solution from. You
will also need.
Using Markov analysis to generate random text is fun, but there is
also a point to this exercise: data structure selection. In your
solution to the previous exercises, you had to choose:
Ok, the last one is easy; the only mapping type we have
seen is a dictionary, so it is the natural choice.
For the prefixes, the most obvious options are string,
list of strings, or tuple of strings. For the suffixes,
one option is a list; another is a histogram (dictionary).
How should you choose? The first step is to think about
the operations you will need to implement for each data structure.
For the prefixes, we need to be able to remove words from
the beginning and add to the end. For example, if the current
prefix is “Half a,” and the next word is “bee,” you need
to be able to form the next prefix, “a bee.”
Your first choice might be a list, since it is easy to add
and remove elements, but we also need to be able to use the
prefixes as keys in a dictionary, so that rules out lists.
With tuples, you can’t append or remove, but you can use
the addition operator to form a new tuple:
def shift(prefix, word):
return prefix[1:] + (word,)
shift takes a tuple of words, prefix, and a string,
word, and forms a new tuple that has all the words
in prefix except the first, and word added to
the end.
For the collection of suffixes, the operations we need to
perform include adding a new suffix (or increasing the frequency
of an existing one), and choosing a random suffix.
Adding a new suffix is equally easy for the list implementation
or the histogram. Choosing a random element from a list
is easy; choosing from a histogram is harder to do
efficiently (see Exercise 7).
So far we have been talking mostly about ease of implementation,
but there are other factors to consider in choosing data structures.
One is run time. Sometimes there is a theoretical reason to expect
one data structure to be faster than other; for example, I mentioned
that the in operator is faster for dictionaries than for lists,
at least when the number of elements is large.
But often you don’t know ahead of time which implementation will
be faster. One option is to implement both of them and see which
is better. This approach is called benchmarking. A practical
alternative is to choose the data structure that is
easiest to implement, and then see if it is fast enough for the
intended application. If so, there is no need to go on. If not,
there are tools, like the profile module, that can identify
the places in a program that take the most time.
The other factor to consider is storage space. For example, using a
histogram for the collection of suffixes might take less space because
you only have to store each word once, no matter how many times it
appears in the text. In some cases, saving space can also make your
program run faster, and in the extreme, your program might not run at
all if you run out of memory. But for many applications, space is a
secondary consideration after run time.
One final thought: in this discussion, I have implied that
we should use one data structure for both analysis and generation. But
since these are separate phases, it would also be possible to use one
structure for analysis and then convert to another structure for
generation. This would be a net win if the time saved during
generation exceeded the time spent in conversion.:
where s and c are parameters that depend on the language and the
text. If you take the logarithm of both sides of this equation, you
get:?
Solution:. To make the plots, you
might have to install matplotlib (see).
Think Bayes
Think Python
Think Stats
Think Complexity | http://greenteapress.com/thinkpython/html/thinkpython014.html | CC-MAIN-2017-43 | refinedweb | 2,319 | 71.04 |
7.8. Adam¶
Created on the basis of RMSProp, Adam also uses EWMA on the mini-batch stochastic gradient[1]. Here, we are going to introduce this algorithm.
7.8.1. The Algorithm¶
Adam uses the momentum variable \(\boldsymbol{v}_t\) and variable \(\boldsymbol{s}_t\), which is an EWMA on the squares of elements in the mini-batch stochastic gradient from RMSProp, and initializes each element of the variables to 0 at time step 0. Given the hyperparameter \(0 \leq \beta_1 < 1\) (the author of the algorithm suggests a value of 0.9), the momentum variable \(\boldsymbol{v}_t\) at time step \(t\) is the EWMA of the mini-batch stochastic gradient \(\boldsymbol{g}_t\):
Just as in RMSProp, given the hyperparameter \(0 \leq \beta_2 < 1\) (the author of the algorithm suggests a value of 0.999), After taken the squares of elements in the mini-batch stochastic gradient, find \(\boldsymbol{g}_t \odot \boldsymbol{g}_t\) and perform EWMA on it to obtain \(\boldsymbol{s}_t\):
Since we initialized elements in \(\boldsymbol{v}_0\) and \(\boldsymbol{s}_0\) to 0, we get \(\boldsymbol{v}_t = (1-\beta_1) \sum_{i=1}^t \beta_1^{t-i} \boldsymbol{g}_i\) at time step \(t\). Sum the mini-batch stochastic gradient weights from each previous time step to get \((1-\beta_1) \sum_{i=1}^t \beta_1^{t-i} = 1 - \beta_1^t\). Notice that when \(t\) is small, the sum of the mini-batch stochastic gradient weights from each previous time step will be small. For example, when \(\beta_1 = 0.9\), \(\boldsymbol{v}_1 = 0.1\boldsymbol{g}_1\). To eliminate this effect, for any time step \(t\), we can divide \(\boldsymbol{v}_t\) by \(1 - \beta_1^t\), so that the sum of the mini-batch stochastic gradient weights from each previous time step is 1. This is also called bias correction. In the Adam algorithm, we perform bias corrections for variables \(\boldsymbol{v}_t\) and \(\boldsymbol{s}_t\):
Next, the Adam algorithm will use the bias-corrected variables \(\hat{\boldsymbol{v}}_t\) and \(\hat{\boldsymbol{s}}_t\) from above to re-adjust the learning rate of each element in the model parameters using element operations.
Here, \(eta\) is the learning rate while \(\epsilon\) is a constant added to maintain numerical stability, such as \(10^{-8}\). Just as for Adagrad, RMSProp, and Adadelta, each element in the independent variable of the objective function has its own learning rate. Finally, use \(\boldsymbol{g}_t'\) to iterate the independent variable:
7.8.2. Implementation from Scratch¶
We use the formula from the algorithm to implement Adam. Here, time step
\(t\) uses
hyperparams to input parameters to the
adam
function.
In [1]:
%matplotlib inline import gluonbook as gb from mxnet import nd features, labels = gb.get_data_ch7() def init_adam_states(): v_w, v_b = nd.zeros((features.shape[1], 1)), nd.zeros(1) s_w, s_b = nd.zeros((features.shape[1], 1)), nd.zeros(1) return ((v_w, s_w), (v_b, s_b)) def adam(params, states, hyperparams): beta1, beta2, eps = 0.9, 0.999, 1e-6 for p, (v, s) in zip(params, states): v[:] = beta1 * v + (1 - beta1) * p.grad s[:] = beta2 * s + (1 - beta2) * p.grad.square() v_bias_corr = v / (1 - beta1 ** hyperparams['t']) s_bias_corr = s / (1 - beta2 ** hyperparams['t']) p[:] -= hyperparams['lr'] * v_bias_corr / (s_bias_corr.sqrt() + eps) hyperparams['t'] += 1
Use Adam to train the model with a learning rate of \(0.01\).
In [2]:
gb.train_ch7(adam, init_adam_states(), {'lr': 0.01, 't': 1}, features, labels)
loss: 0.245647, 0.356273 sec per epoch
7.8.3. Implementation with Gluon¶
From the
Trainer instance of the algorithm named “adam”, we can
implement Adam with Gluon.
In [3]:
gb.train_gluon_ch7('adam', {'learning_rate': 0.01}, features, labels)
loss: 0.246704, 0.183636 sec per epoch
7.8.4. Summary¶
- Created on the basis of RMSProp, Adam also uses EWMA on the mini-batch stochastic gradient
- Adam uses bias correction.
7.8.5. Problems¶
- Adjust the learning rate and observe and analyze the experimental results.
- Some people say that Adam is a combination of RMSProp and momentum. Why do you think they say this?
7.8.6. Reference¶
[1] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. | http://gluon.ai/chapter_optimization/adam.html | CC-MAIN-2019-04 | refinedweb | 701 | 58.18 |
The objective of this post is to explain how to upload files from a computer to the MicroPython file system, using a tool called ampy. This tutorial was tested on both the ESP8266 and the ESP32.
Introduction
The objective of this post is to explain how to upload files from a computer to the MicroPython file system, using a tool called ampy. This tutorial was tested on both the ESP8266 and the ESP32. The prints shown here are for the tests on the ESP32.
Since we will need to use ampy, we are assuming a previous installation of the tool. You can check this previous tutorial, which shows how to install it using Python’s pip. If you prefer, you can check Adafruit’s installation instructions here. This was tested on Windows 8 operating system.
The procedure
Uploading a file with ampy is very straightforward as we will see. But the first thing we need to do is creating a simple file with some content on a directory of our choice. For this example, we will write two sentences:
This is the first line of content
This is the second line of content
Save the file with a .txt extension (it doesn’t need to have a .txt extension, but since it is a text file we will use it). Let’s call it test.txt.
Then, open the command line and navigate to the file directory. There, to upload the file, just hit the following command, changing COM5 with the com port where your ESP8266 / ESP32 device is.
ampy --port COM5 put test.txt
Important: At the time of writing and with my setup, the first time I execute a ampy command, such as uploading a file, it results in an error being printed to the command line. After that, all the executions work fine without any error. Upon turning off and on the ESP8266 / ESP32 again, the error repeats in the first execution of a command and then everything works fine again.
This should upload the file to the file system of MicroPython. Note that the execution of this command successfully will not give any textual result, as shown in figure 1.
Figure 1 – Successful upload of the test.txt file to MicroPython file system.
Now, we will connect to the MicroPython prompt to confirm the file is actually there in the ESP32 / ESP8266 MicroPython file system. I will be using Putty to interact with it.
So, after connecting, just import the os module and call the listdir function from that module, which will return the files on the current directory.
import os os.listdir()
As shown in figure 2, the file uploaded is listed as output of this call.
Figure 2 – Listing the files in the current directory.
Finally, we will read the contents of the file with the commands shown bellow. If you need a detailed tutorial on how to read files with MicroPython, please refer to this previous post.
file = open("test.txt", "r") file.read() file.close()
Figure 3 shows the result of executing the previous commands. Note that we read all the bytes at once, so all the content got printed in the same line, with both the carriage return and newline characters being printed. The previous post about reading a file shows other approaches on reading a file. As expected, the content we have written before is the one printed.
Figure 3 – Reading the previously uploaded file.
Related Posts
- ESP8266: MicroPython support
- ESP32: MicroPython support
- ESP32 / ESP8266 MicroPython: Reading a file
- ESP32 / ESP8266 MicroPython: Writing a file
Pingback: ESP32 / ESP8266 MicroPython: Running a script from the file system | techtutorialsx
Thank you for sharing this information. I’ll need to try it out.
LikeLiked by 1 person
You’re welcome 🙂
Pingback: ESP32 / ESP8266 MicroPython: Automatic connection to WiFi | techtutorialsx
Pingback: ESP32 MicroPython: Using SHA-256 | techtutorialsx
Hi,
Would you have enough time for a post like uploading file from the ESP32’s interfaced SD card to the server using Wifi network?
Thank you
Rum
LikeLiked by 1 person
Hi!
Thanks for the suggestion.
I’m currently taking a look at the BT stack, after that I will try to take a look about sending data through WiFi from the file system / SD card.
Best regards,
Nuno Santos | https://techtutorialsx.com/2017/06/04/esp32-esp8266-micropython-uploading-files-to-the-file-system/ | CC-MAIN-2017-34 | refinedweb | 714 | 64.51 |
I’ve been learning Scala as part of my continuing professional development. Scala is a functional language which runs primarily on the Java Runtime Environment. It is a first class citizen for working with Apache Spark – an important platform for data science. My intention in learning Scala is to get myself thinking in a more functional programming style and to gain easy access to Java-based libraries and ecosystems, typically I program in Python.
In this post I describe how to get Scala installed and functioning on a workplace laptop, along with its dependency manager, sbt. The core issue here is that my laptop at work puts me behind a web proxy so that sbt does not Just Work™. I figure this is a common problem so I thought I’d write my experience down for the benefit of others, including my future self.
The test system in this case was a relatively recent (circa 2015) Windows 7 laptop, I like using bash as my shell on Windows rather than the Windows Command Prompt – I install this using the Git for Windows SDK.
Scala can be installed from the Scala website. For our purposes we will use the Windows binaries since the sbt build tool requires additional configuration to work. Scala needs the Java JDK version 1.8 to install and the JAVA_HOME needs to point to the appropriate place. On my laptop this is:
JAVA_HOME=C:\Program Files (x86)\Java\jdk1.8.0_131
The Java version can be established using the command:
javac –version
My Scala version is 2.12.2, obtained using:
scala -version
Sbt is the dependency manager and build tool for Scala, it is a separate install from:
It is possible the PATH environment variable will need to be updated manually to include the sbt executables (:/c/Program Files (x86)/sbt/bin).
I am a big fan of Visual Studio Code, so I installed the Scala helper for Visual Studio Code:
This requires a modification to the sbt config file which is described here:
Then we can write a trivial Scala program like:
object HelloWorld {
def main(args: Array[String]): Unit = {
println(“Hello, world!”)
}
}
And run it at the commandline with:
scala first.scala
To use sbt in my workplace requires proxies to be configured. The symptom of a failure to do this is that the sbt compile command fails to download the appropriate dependencies on first run, as defined in a build.sbt file, producing a line in the log like this:
Server access Error: Connection reset url=
sourceforge/htmlcleaner/htmlcleaner/2.4/htmlcleaner-2.4.pom
In my case I established the appropriate proxy configuration from the Google Chrome browser:
chrome://net-internals/#proxy
This shows a link to the pacfile, something like:
The PAC file can be inspected to identify the required proxy, in my this case there is a statement towards the end of the pacfile which contains the URL and port required for the proxy:
if (url.substring(0, 5) == ‘http:’ || url.substring(0, 6) == ‘https:’ || url.substring(0, 3) == ‘ws:’ || url.substring(0, 4) == ‘wss:’)
{
return ‘PROXY longproxyhosturl.com :80’;
}
These are added to a SBT_OPTS environment variable which can either be set in a bash-like .profile file or using the Windows environment variable setup.
export SBT_OPTS=”-Dhttps.proxyHost=longproxyhosturl.com -Dhttps.proxyPort=80 -Dhttps.proxySet=true”
As a bonus, if you want to use Java’s Maven dependency management tool you can use the same proxy settings but put them in a MAVEN_OPTS environment variable.
Typically to start a new project in Scala one uses the sbt new command with a pointer to a g8 template, in my workplace this does not work as normally stated because it uses the github protocol which is blocked by default (it runs on port 9418). The normal new command in sbt looks like:
sbt new scala/scala-seed.g8
The workaround for this is to specify the g8 repo in full including the https prefix:
sbt new
This should initialise a new project, creating a whole bunch of standard directories.
So far I’ve completed one small project in Scala. Having worked mainly in dynamically typed languages it was nice that, once I had properly defined my types and got my program to compile, it ran without obvious error. I was a bit surprised to find no standard CSV reading / writing library as there is for Python. My Python has become a little more functional as a result of my Scala programming, I’m now a bit more likely to map a function over a list rather than loop over the list explicitly.
I’ve been developing intensively in Python over the last couple of years, and this seems to have helped me in configuring my Scala environment in terms of getting to grips with module/packaging, dependency managers, automated doocumentation building and also in finding my test library () at an early stage. | http://www.ianhopkinson.org.uk/2017/10/scala-installation-behind-a-workplace-web-proxy/ | CC-MAIN-2020-40 | refinedweb | 822 | 58.32 |
to be portable, they must be isolated from these proprietary aspects of a provider. This is done by defining JMS administered objects that are created and customized by a provider's administrator and later used by clients. The client uses them through JMS interfaces that are portable. The administrator creates them using provider-specific facilities.
There are two types of JMS administered objects:
Administered objects are placed in a Java Naming and Directory InterfaceTM (JNDI) namespace by an administrator. A JMS client typically notes in its documentation the JMS administered objects it requires and how the JNDI names of these objects should be provided to it. for more information.
The term consume is used in this document to mean the receipt of a message by a JMS client; that is, a JMS provider has received a message and has given it to its client. Since the JMS API supports both synchronous and asynchronous receipt of messages, the term consume is used when there is no need to make a distinction between them.
The term produce is used as the most general term for sending a message. It means giving a message to a JMS provider for delivery to a destination.
Broadly speaking, a JMS application is one or more JMS clients that exchange messages. The application may also involve non-JMS clients; however, these clients use the JMS provider's native API in place of the JMS API.
A JMS application can be architected and deployed as a unit. In many cases, JMS clients are added incrementally to an existing application.
The message definitions used by an application may originate with JMS, or they may have been defined by the non-JMS part of the application.
A typical JMS client executes the following setup procedure:
At this point a client has the basic setup needed to produce and consume messages.
Java Message Service Specification - Version 1.1
Java Message Service Tutorial | http://docs.oracle.com/javaee/1.4/api/javax/jms/package-summary.html | CC-MAIN-2014-42 | refinedweb | 321 | 54.63 |
This project contains code to train a model that automatically plays the first level of Super Mario World using only raw pixels as the input (no hand-engineered features). The used technique is deep Q-learning, as described in the Atari paper (Summary), combined with a Spatial Transformer.
The training method is deep Q-learning with a replay memory, i.e. the model observes sequences of screens, saves them into its memory and later trains on them, where "training" means that it learns to accurately predict the expected action reward values ("action" means "press button X") based on the collected memories. The replay memory has by default a size of 250k entries. When it starts to get full, new entries replace older ones. For the training batches, examples are chosen randomly (uniform distribution) and rewards of memories are reestimated based on what the network has learned so far.
Each example's input has the following structure:
T is currently set to 4 (note that this includes the last state of the sequence). Screens are captured at every 5th frame. Each example's output are the action reward values of the chosen action (received direct reward + discounted Q-value of the next state). The model can choose two actions per state: One arrow button (up, down, right, left) and one of the other control buttons (A, B, X, Y). This is different from the Atari-model, in which the agent could only pick one button at a time. (Without this change, the agent could theoretically not make many jumps, which force you to keep the A button pressed and move to the right.) As the reward function is constructed in such a way that it is almost never 0, exactly two of each example's output values are expected to be non-zero.
The agent gets the following rewards:
+0.5if the agent moved to the right,
+1.0if it moved fast to the right (8 pixels or more compared to the last game state),
-1.0if it moved to the left and
-1.5if it moved fast to the left (-8 pixels or more).
+2.0while the level-finished-animation is playing.
-3.0while the death animation is playing.
The
gamma (discount for expected/indirect rewards) is set to
0.9.
Training the model only on score increases (like in the Atari paper) would most likely not work, because enemies respawn when their spawning location moves outside of the screen, so the agent could just kill them again and again, each time increasing its score.
A selective MSE is used to train the agent. That is, for each example gradients are calculated just like they would be for a MSE. However, the gradients of all action values are set to 0 if their target reward was 0. That's because each example contains only the received reward for one pair of chosen buttons (arrow button, other button). Other pairs of actions would have been possible, but the agent didn't choose them and so the reward for them is unclear. Their reward values (per example) are set to 0, but not because they were truely 0, but instead because we don't know what reward the agent would have received if it had chosen them. Backpropagating gradient for them (i.e. if the agent predicts a value unequal to 0) is therefore not reasonable.
This implementation can afford to differentiate between the chosen and not chosen buttons (in the target vector) based on the reward being unequal to 0, because the received reward of a chosen button is (here) almost never exactly 0 (due to the construction of the reward function). Other implementations might need to take more care of this step.
The policy is an epsilon-greedy one, which starts at epsilon=0.8 and anneals that down to 0.1 at the 400k-th chosen action. Whenever according to the policy a random action should be chosen, the agent throws a coin (i.e. 50:50 chance) and either randomizes one of its two (arrows, other buttons) actions or it randomizes both of them.
The model consists of three branches:
At the end of the branches, everything is merged to one vector, fed through a hidden layer, before reaching the output neurons. These output neurons predict the expected reward per pressed button.
Overview of the network:
The Spatial Transformer requires a localization network, which is shown below:
Both networks have overall about 6.6M parameters.
The agent is trained only on the first level (first to the right in the overworld at the start). Other levels suffer significantly more from various difficulties with which the agent can hardly deal. Some of these are:
The first level has hardly any of these difficulties and therefore lends itself to DQN, which is why it is used here. Training on any level and then testing on another one is also rather difficult, because each level seems to introduce new things, like new and quite different enemies or new mechanics (climbing, new items, objects that squeeze you to death, etc.).
luarocks install packageName):
nn,
cudnn,
paths,
image,
display. display is usually not part of torch.
git clone
cd stnbhwd
luarocks make stnbhwd-scm-1.rockspec
sudo apt-get install sqlite3 libsqlite3-dev
luarocks install lsqlite3
source/src/libray/lua.cppand insert the following code under
namespace {:
This makes the emulator run in lua 5.1. Newer versions (than beta23) of lsnes rr2 might not need this.This makes the emulator run in lua 5.1. Newer versions (than beta23) of lsnes rr2 might not need this.
#ifndef LUA_OK #define LUA_OK 0 #endif #ifdef LUA_ERRGCMM REGISTER_LONG_CONSTANT("LUA_ERRGCMM", LUA_ERRGCMM, CONST_PERSISTENT | CONST_CS); #endif
source/include/core/controller.hppand change the function
do_button_actionfrom private to public. Simply cut the line
void do_button_action(const std::string& name, short newstate, int mode);in the
private:block and paste it into the
public:block.
source/src/lua/input.cppand before
lua::functions LUA_input_fns(...(at the end of the file) insert:
This method was necessary to actually press buttons from custom lua scripts. All of the emulator's default lua functions for that would just never work, becauseThis method was necessary to actually press buttons from custom lua scripts. All of the emulator's default lua functions for that would just never work, because
int do_button_action(lua::state& L, lua::parameters& P) { auto& core = CORE(); std::string name; short newstate; int mode; P(name, newstate, mode); core.buttons->do_button_action(name, newstate, mode); return 1; }
core.lua2->input_controllerdataapparently never gets set (which btw will let these functions silently fail, i.e. without any error).
source/src/lua/input.cpp, at the block
lua::functions LUA_input_fns(..., add
do_button_actionto the lua commands that can be called from lua scripts loaded in the emulator. To do that, change the line
{"controller_info", controller_info},to
{"controller_info", controller_info}, {"do_button_action", do_button_action},.
source/.
make.
options.build.
libwxgtk3.0-devand not version 2.8-dev, as that package's official page might tell you to do.
source/execute
sudo cp lsnes /usr/bin/ && sudo chown root:root /usr/bin/lsnes. After that, you can start lsnes by simply typing
lsnesin a console window.
sudo mkdir /media/ramdisk
sudo chmod 777 /media/ramdisk
sudo mount -t tmpfs -o size=128M none /media/ramdisk && mkdir /media/ramdisk/mario-ai-screenshots
SCREENSHOT_FILEPATHin
config.lua.
git clone.
cdinto the created directory.
lsnesin a terminal window.
Configure -> Settings -> Advancedand set the lua memory limit to 1024MB. (Only has to be done once.)
Configure -> Settings -> Controller). Play until the overworld pops up. There, move to the right and start that level. Play that level a bit and save a handful or so of states via the emulator's
File -> Save -> Stateto the subdirectory
states/train. Name doesn't matter, but they have to end in
.lsmv. (Try to spread the states over the whole level.)
th -ldisplay.start. If that doesn't work you haven't installed display yet, use
luarocks install display. your browser.
Tools -> Run Lua script...and select
train.lua.
Tools -> Reset Lua VM.
learned/. Note that you can keep the replay memory (
memory.sqlite) and train a new network with it.
You can test the model using
test.lua. Don't expect it to play amazingly well. The agent will still die a lot, even more so if you ended the training on a bad set of parameters. | https://awesomeopensource.com/project/aleju/mario-ai | CC-MAIN-2021-21 | refinedweb | 1,396 | 65.42 |
.
type
xsd:date.
From there, it is not a big leap to conclude that generalized XML data processing requires some way to indicate dynamically whether further changes should be allowed to certain pieces of data or whether certain parts of the data are still applicable to the transaction based on other data values. A good example is a mortgage preapproval form that can handle both single and joint applications. The co-applicant data is only relevant if the user selects the joint application mode.
XForms allows the form author to express formulae for these aspects of data, which are called model item properties, or just MIPs. Not too surprisingly, the names of these MIPs are readonly and relevant.
readonly
relevant
Of course, there is no point in representing data, calculating values over data, and validating data if you have no way to change the data. XForms allows simple content data values to be changed, but it also allows insertion and deletion of larger blocks of data that contain internal structure because this is essentially what's needed to add or delete a row from a table.
Most importantly, XForms offers form controls that expose data to the surrounding application context. If the data changes, the form controls change. This includes not only exposing a changed simple content data value, but if a set of form controls are associated with a repeated sequence of structured data, and the number of data nodes in the sequence changes, then form controls are created or destroyed as needed to respond to the change of data..
For this reason, the XForms form controls represent what I've often called an intent-based user interface. It's kind of neat to see the term popping up more frequently now. It gets to the heart of the matter: XForms does not provide a presentation layer. XForms relies for presentation on a host language like XFDL (in Workplace Forms) or XHTML (in web browsers). I am certainly hoping that VoiceXML will come to the conclusion that they should soak up the benefits of XForms rather than reinventing all of this stuff over again (partly because almost everybody underestimates how much work goes into it until it's too late; but maybe they will prove to be wiser than the rest)..
An interesting comment showed up on my prior post that seemed worth discussing as part of getting around to talking about what XFDL does for (and with) XForms.
In the last post, I talked about what XForms does and what it doesn't do from the big picture perspective..
The comment came in saying that one thing XForms doesn't do is allow the URL of a submission to be dynamically calculated. This is a technical limitation that does not affect the big picture of what we're trying to achieve with XForms. The XForms language does contain some technical limitations like this. Many of them are being addressed in XForms 1.1. This issue in particular should be addressed in the very next working draft of XForms 1.1.
However, because we have to deliver products that can be used to build applications now, it sometimes happens that implementations have to lead the standard with custom extensions until the standard comes along and specifies the common way that everyone on the working group agrees is the way the language will express a feature.
In XFDL, we make relatively few changes to XForms because we want our documents to be as conformant as possible. However, this particular issue of a dynamic URL comes up in almost all of the forms applications we have every deployed, so we did not feel we could make our next release of Workplace Forms without some ability to do a dynamic URL on an XForms submission. At the same time, we do like it to be clear to the form author when an extension is being used. As a result, the addition was made using an attribute in the XFDL namespace. This makes it easier to find those bits of our XForms-based documents that will need special attention when trying to get them to interoperate with other XForms implementations.
In the upcoming release of Workplace Forms, you can create XForms-based applications that include a dynamic URL component by using an xfdl:actionref attribute instead of an action in the xforms:submission. The content of xfdl:actionref is an XPath expression whose result is the node containing the desired URL. Of course, the full power of XForms calculations can be brought to bear on that node to allow dynamic calculation of all or any portion of the final URL.
xfdl:actionref
action
xforms:submission
In XForms 1.1, I am expecting a more general mechanism that will allow the instance data to be used to set not just the URL, but eventually many of the other parameters too.. | https://www.ibm.com/developerworks/community/blogs/JohnBoyer?sortby=0&order=asc&maxresults=5&page=2&lang=en | CC-MAIN-2018-09 | refinedweb | 816 | 55.47 |
Android, Google API, Google Places API, JSON
Firstly about the Places API.
Google Places API:
Four basic Place requests are available from the API:
- Place Searches return a list of nearby Places based on a user’s location or search string.
- Place Details requests return more detailed information about a specific Place, including user reviews.
- Place Actions allow you to supplement the data in Google’s Places Database with data from your application. Place Actions allow you to schedule Events, weight Place rankings by check-in data, or add and remove Places.
- Places Autocomplete can be used to provide autocomplete functionality for text-based geographic searches, by returning Places as you type.
Note: The Result can be got as a JSON file or XML file depending on your needs.here I will be getting the result as a JSON.
You can refer more about Places API here.
Step by Step procedure for using the Places API:
Step 1:
Create a new Android Project and add the android.permission.INTERNET to the manifest.
Step 2:
Now we have to send a HTTP Get request to the Google servers and receive a JSON file.
For this Purpose we will be importing the following classes.
import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.HttpClient; import org.apache.http.client.methods.HttpGet; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONArray; import org.json.JSONObject;
Step 3:
First we have to have the URL to which the request is being sent.
String requesturl="<your API key>&location=-33.8670522,151.1957362";
Create a HttpClient from which the request must be sent.
DefaultHttpClient client=new DefaultHttpClient();
Then we must construct the Http Request which is of GET method.
HttpGet req=new HttpGet(requesturl);
Then we must send the request to get a response.
HttpResponse res=client.execute(req);
Step 4:
After receiving the response we must extract the Entity from the HttpResponse. Then the JSONObject must be constructed from it. Since the JSONObject Constructor accepts only String we must convert the Contents of the HttpResponse into a String.
HttpEntity jsonentity=res.getEntity(); InputStream in=jsonentity.getContent(); JSONObject jsonobj=new JSONObject(convertStreamToString(in));
The procedure for converting Stream to String.
private String convertStreamToString(InputStream in) { // TODO Auto-generated method stub BufferedReader br=new BufferedReader(new InputStreamReader(in)); StringBuilder jsonstr=new StringBuilder(); String line; try { while((line=br.readLine())!=null) { String t=line+"\n"; jsonstr.append(t); } br.close(); } catch (IOException e) { e.printStackTrace(); } return jsonstr.toString(); }
Step 5:
Now is the time for Parsing.
The JSON returned is like this.
{ "html_attributions" : [], "results" : [ { "geometry" : { "location" : { "lat" : -33.86682180, "lng" : 151.19542210 } }, "icon" : "", "id" : "96760d4544ecdaaf2e87565915638238ca960f20", "name" : "Pirrama Rd", "reference" : "CoQBcgAAAIWIC2AA6D0V4r5BAzZ_-hdqlmDQO0FBGRNU4SGWhIv5a1yycXYi9d4w7K6-JeaQU_ZWrbXW19RPgF6VN_5iO05BeDof1DNzeBpsoSvIvFwrjkzZMPelEPVijJDxdu7f1Dr3BgPEvnxwUJ5eWO64rL9UGKMlicOdB5GMrc4cJ-3REhAVrcb67mrvPduZh4R80gPAGhQH0w5Yjv0k32AdYsbJb2_ycX2Kug", "types" : [ "route" ], "vicinity" : "Pyrmont" } ] }
So the Parser must be written to parse the JSON of above format.
JSONArray resarray=jsonobj.getJSONArray("results"); if(resarray.length()==0){ //No Results } else{ int len=resarray.length(); for(int j=0;j<len;j++) { Toast.makeText(getApplicationContext(), resarray.getJSONObject(j).getString("name") , Toast.LENGTH_LONG).show(); } }
A Toast message is shown to the User about the returned place names.
Nice tutorial
Hi,
I am using this tutorial to make a “nearby” places on a google maps application,
However, I am getting a few errors with the code:
“JSONObject jsonobj=new JSONObject(convertStreamToString(arg0,in));”
arg0 in my HttpClient file is underlined and i have no idea why?
in my activity file:
“private String convertStreamToString(InputStream in) {”
this claims there is an expected ;
and finally in my parser file “getApplicationContext” needs a method, but when i implement this it still doesnt work.
Any advice would be helpful.
Change this “JSONObject jsonobj=new JSONObject(convertStreamToString(arg0,in));” to “JSONObject jsonobj=new JSONObject(convertStreamToString(in));”
thanks, I now have an issue with the “convertStreamToString”?
sorry im pretty new to this.
You should have the method in the same class file as the calling method. and that class must be an Activity class
Good Tutorial.. easy to understand
Hi,
How to get real Image from the reference…?
Simple and helpful,Thank you | https://yuvislm.wordpress.com/2012/09/10/google-places-api-and-json-parsing-in-android/ | CC-MAIN-2017-43 | refinedweb | 677 | 51.44 |
. Some tags are not required to appear as a podcast in search, but are required to be eligible as a recommended podcast on Google surfaces. We also recommend some additional tags for a better podcast experience.
- You must provide an image for your podcast.
- The feed and audio files should not require any authentication or be blocked to Google in any way. You can check by visiting your feed in an incognito window.
- A majority of the episodes must be in any of the supported audio formats.
-).
-> <itunes:owner> <itunes:email>dafna@example.com</itunes:email> </itunes:owner> <itunes:author>Dafna</itunes:author> <description>A pet-owner's guide to the popular striped equine.</description> <itunes>
Google-specific RSS tags
Here are the required and recommended RSS tags used by Google to display your podcast:
Tag namespace
When using any itunes tags, be sure to use the proper namespace tag at the top of your RSS feed, as shown here:
.. Use the appropriate namespace for your tag.
Recommended episode tags
These RSS tags are not required, but providing them can provide a better user experience in search, as well as providing more information for users to help find your episode in Google. | https://support.google.com/podcast-publishers/answer/9889544?visit_id=637746476794650237-966427794&hl=en&rd=1 | CC-MAIN-2022-05 | refinedweb | 202 | 61.16 |
Hello. I am trying to write a recursive program that will check partitions of an array to see if they add up to a certain number. A person gives the target number, the length of the array, and the numbers in the array, and then the program will check combinations of the numbers in the array and print out how many possible combinations give the desired target number. For example, in a set of 1, 4, and 5, with the target number 5, it will give back 2 solutions, because a partition of 1 and 4 adds up to 5, and a partition of 5 adds up to 5.
This is the code I have so far, but it's not giving the proper result, and I believe it has to do with the fact that I have it stop when the size of the array equals 0, which seems to force it to quit searching before it has checked all the combinations. However, I don't know what condition to put in place in order to get it to stop at the right place and give the desired result. I've tried doing many things but nothing seems to work and I would appreciate it if someone could help me figure out how to get this code finished.
Thanks for any help anyone can give me, and if no help can be given then I thank you for your time.
Code:
#include <stdio.h>
int NumberofPartitions (int *set, int size, int result, int parts);
int main ()
{
int target;
int n;
int number;
int length;
int array[100];
int *start = &array[0];
int partitions = 0;
printf("Enter target number: ");
scanf("%d", &target);
printf("Enter array length: ");
scanf("%d", &length);
printf("Enter numbers for the array: ");
for (n = 0; n < length; n++) {
scanf("%d", &number);
array[n] = number;
}
partitions = NumberofPartitions(start, length, target, partitions);
printf("Number of partitions equals %d.\n", partitions);
}
int NumberofPartitions(int *set, int size, int result, int parts) {
int guide;
if (size == 0) {
return parts;
}
else if (*set - result == 0) {
parts++;
}
else {
guide = *set;
*set++;
size = size - 1;
NumberofPartitions(set, size, result, parts);
NumberofPartitions(set, size, (result - guide), parts);
}
} | https://cboard.cprogramming.com/c-programming/67382-recursive-problem-printable-thread.html | CC-MAIN-2017-09 | refinedweb | 362 | 53.58 |
A probability problem involving 40,000 sent letters.
A podcast got me doing maths again.
I was listening to this episode from the 99% invisible podcast. It contains a fascinating story about a radio contest where you could’ve won a house. Turns out that in the 80ies there was a severe economic crisis and this prompted many people to sign up to the contest. Many people signed up and only three men were picked to take part in the actual competition: who-ever could live on a billboard the longest would win a house. By the end of the podcast you’ll learned that this contest lasted for 261 days and got way out of hand.
It’s a great story, but a specific part of the story caught my attention.
Out of all the submissions that were being sent in, only three people would be selected. This got people thinking: one of the contestants actually submitted 47000 entries in order to increase the odds of getting selected. The tactic makes sense if you’re desperate and the podcast explains that the 1st ten letters that were opened by the selection committee were from a single participant.
The radio station ended up picking this contestant nonetheless but I wondered: what if the station had a policy of deny-ing anybody that sends more than one letter? The radio station wouldn’t go through all the letters (for obvious reasons, the podcast reports 500,000 letters being sent) but suppose that we pick three random letters for the contestants then we would expect each candidate to only occur once. With this in mind, you might want to limit the number of letters that you send.
Let’s do the maths here. We have a few free variables in the system that we’ve described.
Let’s start simple by first calculating the probability of getting selected if you’re the only person sending something in.
\[ p_{c=1}(\text{picked exactly once}) = \frac{s}{a + s} \]
If only one candidate is picked then we want to have \(s\) be as large as possible.
Let’s now consider two contestants. The idea is that either you get selected first and then somebody who isn’t you needs to get selected or the other way around.
\[\begin{equation} \label{eq1} \begin{split} p_{c=2}(\text{picked exactly once}) & = \frac{s}{a + s} \times \frac{a}{a + s -1} + \frac{a}{a + s} \times \frac{s}{a + s -1} \\\\ & = \frac{s \times a}{(a+s)\times(a+s-1)} + \frac{s \times a}{(a+s)\times(a+s-1)} \\\\ & = 2 \times \frac{s \times a}{(a+s)\times(a+s-1)} \end{split} \end{equation}\]
It seems like we need to be a bit more careful now. If \(s >> a\) then we need to check that what is at the denominator doesn’t grow.
When we do the maths for three contestants then we see a pattern occur.
\[\begin{equation} \label{eq2} \begin{split} p_{c=3}(\text{picked exactly once}) & = \frac{s}{a + s} \times \frac{a}{a + s -1} \times \frac{a-1}{a + s - 2} \\\\ & + \frac{a}{a + s} \times \frac{s}{a + s -1} \times \frac{a-1}{a + s - 2} \\\\ & + \frac{a}{a + s} \times \frac{a-1}{a + s -1} \times \frac{s}{a + s - 2} \\\\ & = 3 \times \frac{s \times a \times (a-1)}{(a+s) \times (a+s-1) \times (a+s-2)} \end{split} \end{equation}\]
Maths is nice and all but true intuition comes from plotting the numbers.
import numpy as np import pandas as pd import matplotlib.pylab as plt %matplotlib inline s = np.arange(1, 500000) a = 500000 c = 3 prob_zero = (a*(a-1)*(a-2))/((a+s)*(a+s-1)*(a+s-2)) prob_one = 3*(s*a*(a-1))/((a+s)*(a+s-1)*(a+s-2)) prob_two = 3*(s*(s-1)*a)/((a+s)*(a+s-1)*(a+s-2)) prob_three = (s*(s-1)*(s-2))/((a+s)*(a+s-1)*(a+s-2)) df = pd.DataFrame({'zero': prob_zero, 'one': prob_one, 'two': prob_two, 'three': prob_three}) df.index = s df.plot(title="probability of num letters opened after sending", figsize=(16, 8));
Turns out that if you really want to win at this game, you’ll need to make sure that about half of the amount of other letters is the amount you need to send (this translates to 1/3 of the total letters to need to come in from you). But can we find this number exactly?
Let’s look at our equation from before.
\[\begin{equation} \label{eq3} \begin{split} p_{c=3}(\text{picked exactly once}) & = 3 \times \frac{s \times a \times (a-1)}{(a+s) \times (a+s-1) \times (a+s-2)} \end{split} \end{equation}\]
If we want to optimise that equation for \(a\) we’ll find that it is actually very tricky. We have a fraction with a polynomial in there and even when we have something like
sympy it seems that it would not yield a pretty solution.
Realising this makes the path towards the actual solution a whole lot easier. It gives us an opporunity to reformulate the problem such that everything becomes a lot more simple. Let’s change the variables, but only slightly.
Suppose now that we have a system with these variables.
Note that the \(t\) parameter is the only one that is really different. With these parameters to use, let us check what our equation might look like.
\[\begin{equation} \label{eq4} \begin{split} p_{c=3}(\text{picked exactly once}) & = \frac{s}{t} \times \frac{t-s}{t -1} \times \frac{t-s-1}{t - 2} \\\\ & + \frac{t-s}{t} \times \frac{s}{t -1} \times \frac{t-s-1}{t - 2} \\\\ & + \frac{t-s}{t} \times \frac{t-s-1}{t -1} \times \frac{s}{t - 2} \\\\ & = 3 \times \frac{s \times (t-s) \times (t-s-1)}{t \times (t-1) \times (t-2)} \end{split} \end{equation}\]
By just rephrasing this, everything became a whole lot simpler. Since the variable \(t\) is something that is given we merely have created a division by a constant. You still won’t need to do any maths if you don’t feel like it though because sympy is here to help.
import sympy as sp s, t = sp.symbols("s, t") sp.solve(sp.diff(3*s*(t-s)*(t-s-1)/(t*(t-1)*(t-2)), s), s, 0)
The positive solution of that expression yields:
\[ \frac{2 t}{3} + \frac{\sqrt{t^{2} - t + 1}}{3} - \frac{1}{3} \]
Note that in sympy you can also solve numerically via;
t = 500000 sp.nsolve(sp.diff(3*s*(t-s)*(t-s-1)/(t*(t-1)*(t-2)), s), s, 0)
When \(t=500000\) then \(s^* = 166666.667\). This is a similar conclusion to what we saw before.
What is nice about this problem is that the maths can either be very easy or very hard depending on how you formulate the problem. If you take the initial formulation then you’ll find that even
sympy will have a huge problem with it.
The exercise got a whole lot simpler when we rephrased the problem to make our life easier. It might make you wonder if a similar phenomenon occurs in machine learning too. | https://koaning.io/posts/rephrasing-the-billboard/ | CC-MAIN-2020-16 | refinedweb | 1,232 | 69.52 |
What do i need to do to use this router with Spark? I am considering changing to BigPipe so I can get a Static IP address and faster bandwidth speeds for a better price.
Look at the tutorial here:
Also moved to the right subforum.
Michael Murphy |
A quick guide to picking the right ISP | The Router Guide | Community UniFi Cloud Controller | Ubiquiti Edgerouter Tutorial | Sharesies
Bigpipe is on a different network to Spark, so configurations can differ slightly.
Certainly follow murfys guides as they are spot on!
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
michaelmurfy:
Look at the tutorial here:
Also moved to the right subforum.
I saw that tutorial, and was hoping that the Big Brother Reference was referring to m new router. I will use these instructions. For UFB / VDSL / ADSL PPPoE is the section I assume I need to start with.
Thanks Murfy! | https://www.geekzone.co.nz/forums.asp?forumid=66&topicid=218064&page_no=1 | CC-MAIN-2019-47 | refinedweb | 163 | 63.39 |
RPIO.PWM provides PWM via DMA for the Raspberry Pi, using the onboard PWM module for semi-hardware pulse width modulation with a precision of up to 1µs.
With RPIO.PWM you can use any of the 15 DMA channels and any number of GPIOs per channel. Since the PWM is done via DMA, RPIO.PWM uses almost zero CPU resources and can generate stable pulses with a very high resolution. RPIO.PWM is implemented in C (source); you can use it in Python via the provided wrapper, as well as directly from your C source.
RPIO.PWM provides low-level methods to control everything manually, as well as helper classes that simplify PWM for specific usages (such as RPIO.PWM.Servo). This module is currently in beta, please send feedback to chris@linuxuser.at. As of yet only BCM GPIO numbering is supported.
Example of using PWM.Servo (with the default subcycle time of 20ms and default pulse-width increment granularity of 10µs):
from RPIO import PWM

servo = PWM.Servo()

# Set servo on GPIO17 to 1200µs (1.2ms)
servo.set_servo(17, 1200)

# Set servo on GPIO17 to 2000µs (2.0ms)
servo.set_servo(17, 2000)

# Clear servo on GPIO17
servo.stop_servo(17)
Example of using the low-level PWM methods:
from RPIO import PWM

# Setup PWM and DMA channel 0
PWM.setup()
PWM.init_channel(0)

# Add some pulses to the subcycle
PWM.add_channel_pulse(0, 17, 0, 50)
PWM.add_channel_pulse(0, 17, 100, 50)

# Stop PWM for specific GPIO on channel 0
PWM.clear_channel_gpio(0, 17)

# Shutdown all PWM and DMA activity
PWM.cleanup()
Here is a detailed overview of the RPIO.PWM.Servo class (from $ pydoc RPIO.PWM.Servo):
class Servo
 |  Methods defined here:
 |
 |  __init__(self, dma_channel=0, subcycle_time_us=20000, pulse_incr_us=10)
 |      Makes sure PWM is setup with the correct increment granularity and
 |      subcycle time.
 |
 |  set_servo(self, gpio, pulse_width_us)
 |      Sets a pulse-width on a gpio to repeat every subcycle
 |      (by default every 20ms).
 |
 |  stop_servo(self, gpio)
 |      Stops servo activity for this gpio
Low-level PWM method documentation (from $ pydoc RPIO.PWM):
FUNCTIONS
    add_channel_pulse(dma_channel, gpio, start, width)
        Add a pulse for a specific GPIO to a dma channel (within the subcycle)

    cleanup()
        Stops all PWM and DMA activity

    clear_channel(channel)
        Clears a channel of all pulses

    clear_channel_gpio(channel, gpio)
        Clears one specific GPIO from this DMA channel

    get_channel_subcycle_time_us(channel)
        Returns this channel's subcycle time in us

    get_pulse_incr_us()
        Returns the currently set pulse width increment granularity in us

    init_channel(channel, subcycle_time_us=20000)
        Setup a channel with a specific subcycle time [us]

    is_channel_initialized(channel)
        Returns 1 if this channel has been initialized, else 0

    is_setup()
        Returns 1 if setup(..) has been called, else 0

    print_channel(channel)
        Print info about a specific channel to stdout

    set_loglevel(level)
        Sets the loglevel for the PWM module to either PWM.LOG_LEVEL_DEBUG
        for all messages, or to PWM.LOG_LEVEL_ERRORS for only fatal error
        messages.

    setup(pulse_incr_us=10, delay_hw=0)
        Setup needs to be called once before working with any channels.

        Optional Parameters:
            pulse_incr_us: the pulse width increment granularity (default=10us)
            delay_hw: either PWM.DELAY_VIA_PWM (default) or PWM.DELAY_VIA_PCM

CONSTANTS
    DELAY_VIA_PCM = 1
    DELAY_VIA_PWM = 0
    LOG_LEVEL_DEBUG = 0
    LOG_LEVEL_ERRORS = 1
    PULSE_WIDTH_INCREMENT_GRANULARITY_US_DEFAULT = 10
    SUBCYCLE_TIME_US_DEFAULT = 20000
    VERSION = '0.9.1'
Take a look at the C source code on Github for more details.
Each DMA channel is setup with a specific subcycle, within which pulses are added, and which will be repeated endlessly. Servos, for instance, typically use a subcycle of 20ms, which will be repeated 50 times a second. You can add pulses for multiple GPIOs, as well as multiple pulses for one GPIO. Subcycles cannot be lower than 2ms.
For more information about subcycles, see the examples below. The left oscilloscope images zoom in on one subcycle, the right-handed images are zoomed out to show their repetition.
The pulse-width increment granularity (10µs by default) is used for all DMA channels (since its passed to the PWM timing hardware). Pulses are added to a subcycle by specifying a start and a width parameter, both in multiples of the granularity. For instance to set 500µs pulses with a granularity setting of 10µs, you’ll need to set the pulse-width as 50 (50 * 10µs = 500µs).
The pulse-width granularity is a system-wide setting used by the PWM hardware, therefore you cannot use different granularities at the same time, even in different processes.
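The microsecond-to-increment conversion can be wrapped in a small helper. This is a hypothetical convenience function (not part of RPIO) that converts pulse widths in microseconds into the start/width units expected by add_channel_pulse():

```python
GRANULARITY_US = 10  # RPIO's default pulse-width increment granularity

def us_to_increments(width_us, granularity_us=GRANULARITY_US):
    """Convert a time in microseconds to pulse-width increment units.

    Values must be multiples of the granularity, since the granularity is
    what the PWM timing hardware actually counts in.
    """
    if width_us % granularity_us:
        raise ValueError("width must be a multiple of the granularity")
    return width_us // granularity_us

# 500µs pulse at the default 10µs granularity -> width parameter of 50
assert us_to_increments(500) == 50
# 2000µs and 1000µs -> start=200, width=100 in increment units
assert us_to_increments(2000) == 200 and us_to_increments(1000) == 100
```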
In the oscilloscope images, GPIO 15 is the blue channel and GPIO 17 the yellow one. The left oscilloscope images show one subcycle; the right images are 'zoomed out' to show their repetition. First we set up PWM.Servo with the default 20ms subcycle and 10µs pulse-width increment granularity:
from RPIO import PWM

servo = PWM.Servo()
Now set a 4000us (4ms) pulse every 20ms for GPIO 15:
servo.set_servo(15, 4000)
Now a 1000us (1ms) pulse for GPIO 17:
servo.set_servo(17, 1000)
We can use the low-level PWM methods to add further pulses to a subcycle. This is done in multiples of the pulse-width increment granularity (start=200*10µs=2000µs, width=100*10µs=1000µs):
PWM.add_channel_pulse(0, 17, start=200, width=100) | http://pythonhosted.org/RPIO/pwm_py.html | CC-MAIN-2014-52 | refinedweb | 848 | 56.35 |
I've been tooling around with matplotlib, as graciously packaged by Chris Barker, and hosted on Bob Ippolito's pythonmac.org/packages site. Everything seems to be working smoothly, but I've run into a couple of warnings I can't decrypt.
1) Executing the following code,
#! /usr/bin/pythonw
import pylab
pylab.plot([1, 2, 3], [4, 5, 6])
pylab.show()
displays a chart as expected (the toolbar icons are a little mucked, but that's minor). However, dismissing the chart window brings up this warning:
2005-03-12 17:26:52.075 Python[569] *** _NSAutoreleaseNoPool(): Object 0x66402d0 of class NSCFString autoreleased with no pool in place - just leaking
*** malloc[569]: Deallocation of a pointer not malloced: 0x66c73d0; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
Is this a bug with matplotlib, with my installation of matplotlib, or with my script? Can I ignore it, and if not, what can I do to address it?
2) a smaller issue: I tried to change matplotlib's array class by opening
/system/library/frameworks/python.framework/version/2.3/share/.matplotlibrc
and changing "numerix : numeric" to "numerix: numarray" but I got the following error:
The import of the numarray version of the _contour module,
_na_contour, failed. This is is either because numarray was
unavailable when matplotlib was compiled, because a dependency of
_na_contour could not be satisfied, or because the build flag for
this module was turned off in setup.py. If it appears that
_na_contour was not built, make sure you have a working copy of
numarray and then re-install matplotlib. Otherwise, the following
traceback gives more details:
File "/platlib/matplotlib/_contour.py", line 5, in ?
ImportError: No module named _na_contour
I know I had numarray installed before unpackaging matplotlib. Chris' notes say that he had numeric installed when he packaged matplotlib, but makes no mention of numarray. Perhaps the matplotlib.mpkg needs to be rebuilt on a machine that has numarray installed? It isn't a big deal, as the two types are -mostly- interchangeable, but it would be nice to have the choice.
-Brendan
Quals NDH 2018 - AssemblyMe

CTF URL:
Solves: 53 / Points: 300 / Category : Reverse
Challenge description
We updated our website with the latest technologies. In addition, this is secure ! You can try to log in if you want…
We start with a URL:
Challenge resolution
The webpage only contains a single user input
By submitting a random one, we can see that it is not forwarded to the server. The authentication is performed client-side.
Looking at the JavaScript code, we can see that
checkAuth function is responsible for authentication:
u = document.getElementById("i").value;                  // user input
var a = Module.cwrap('checkAuth', 'string', ['string']); // authentication function
var b = a(u);                                            // testing authentication
document.getElementById("x").innerHTML = b;              // output answer
However, we cannot read this function because it has been compiled to Web Assembly (within the index.wasm file).
At first, we tried to decompile the assembly with several tools, such as wasmdump (from the wasm package on PyPI). We got the source code; however, we could not find out where the _checkAuth function was implemented (mostly because we didn't know anything about Web Assembly). We realized that we could get the code and dynamically debug it from Firefox (which supports Web Assembly).
By setting a breakpoint on the a(u) JavaScript call, we can dig into the web assembly source code.
We first stop the execution at the following point:
The apply function will execute the checkAuth function with the supplied user input as argument (the password). The output is given to the Pointer_stringify function, which will be helpful later.
Stepping further, we get into the web assembly, within a function called func35:
At this point, we looked for the webassembly documentation to understand the assembly:
-
-
This was really useful.
The func35 function performs several checks, calling func57 and comparing the return value with 0. If all the cascading conditions are met, the value 1690 is returned:
def func35(password):  # this is python pseudo-code for func35
    if func57(password, 1616, 4) == 0:
        if func57(password + 4, 1638, 4) == 0:
            if func57(password + 8, 1610, 5) == 0:
                if func57(password + 13, 1598, 4) == 0:
                    if func57(password + 17, 1681, 3) == 0:
                        if func57(password + 20, 1654, 9) == 0:
                            return 1690
We can see what value is pointed to by 1690 by using the Pointer_stringify function:
> Pointer_stringify(1690)
"Authentication is successful. The flag is NDH{password}."
OK great! So now we have to find the right password to pass all these if conditions. It means that we have to understand what the func57 function is actually doing.
The func57 function has 3 parameters:

- a string pointer related to the user input (let's call it "input")
- a static string pointer whose content can be retrieved via Pointer_stringify (let's call it "valid")
- an integer value (let's call it "size")
Digging further with the debugger, we enter func57:
Looking at the code, we can see that the function checks whether the first size characters of the input string match the first size characters of the valid string (which is similar to the strncmp function).
We have the following pseudo-code:
def func57(input, valid, size):
    for i in range(size):
        if valid[i] != input[i]:
            return something_different_from_0
    return 0
Reusing the Pointer_stringify JS function, we can grab all the valid values:
> Pointer_stringify(1616)
"d51X1"
> Pointer_stringify(1638)
"Pox)sm"
> Pointer_stringify(1610)
"1S0xk"
> Pointer_stringify(1598)
"5S11x"
> Pointer_stringify(1681)
"W_enc_cb"
> Pointer_stringify(1654)
"KXK,,,xie"
We now have the following parameters given to func57:
Hence, to pass the first condition, the first 4 characters (from 0 to 3) of our password need to be "d51X". To pass the second condition, the characters from 4 to 7 need to be "Pox)". To pass the third condition, the characters from 8 to 12 need to be "1S0xk". And so on…
Putting everything together, the final password is “d51XPox)1S0xk5S11W_eKXK,,,xie”, which validates:
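As a sanity check, the final password can be reassembled programmatically from the (offset, valid string, size) triples recovered above. This short Python sketch is ours, not part of the original challenge:

```python
# (offset, expected string at that address, compared size) from func35/func57
checks = [
    (0,  "d51X1",     4),
    (4,  "Pox)sm",    4),
    (8,  "1S0xk",     5),
    (13, "5S11x",     4),
    (17, "W_enc_cb",  3),
    (20, "KXK,,,xie", 9),
]

password = ""
for offset, valid, size in checks:
    # The checks tile the password exactly, with no gaps or overlaps.
    assert offset == len(password)
    password += valid[:size]

assert password == "d51XPox)1S0xk5S11W_eKXK,,,xie"
print("NDH{%s}" % password)
```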
Special thanks to Sébastien Mériot for his help on the Web Assembly reversing. We could not have made it without him!
Authors:
Marine Martin
Quentin Lemaire | @QuentynLemaire
Post date: 2018-04-01 | https://tipi-hack.github.io/2018/04/01/quals-NDH-18-assemblyme.html | CC-MAIN-2019-39 | refinedweb | 670 | 57.4 |
I'm trying to figure out how to deploy in IBM Bluemix a Cloudfoundry app that uses Vapor framework.
IBM is providing facilities and guidance for using his platform for developing server side Swift apps with his framework, Kitura. I think as a Cloudfoundry provider, with the proper Swift buildpack, we must be able to deploy generic server side Swift code.
Finally, while learning bits of CF, I reached the point where, with the CloudFoundry CLI, I get:
404 Not Found: Requested route ('sommobilitatcore.eu-gb.mybluemix.net') does not exist.
applications:
- path: .
  memory: 256M
  instances: 1
  name: SomMobilitatCore
  disk_quota: 1024M
  buildpack:
web: App
import Vapor
import HTTP
let drop = Droplet()
let _ = drop.config["app", "key"]?.string ?? ""
drop.get("/") { request in
return try drop.view.make("welcome.html")
}
(...)
let port = drop.config["app", "port"]?.int ?? 80
// Print what link to visit for default port
drop.serve()
- path: .
  instances: 1
  memory: 256M
  disk_quota: 1024M
  name: SomMobilitat4
  command: App --env=production --workdir="./"
  buildpack: swift_buildpack
{
"production": {
"port": "$PORT"
}
}
import Vapor
import HTTP
let drop = Droplet()
drop.get("/") { request in
return "hello vapor in bluemix cloudfoundry"
}
drop.run()
To run a Vapor app on Bluemix:
1. Create a Config directory containing a servers.json file (use exactly these names). servers.json should contain the following:

{ "myserver": { "port": "$PORT" } }

It will instruct Vapor to start a server named myserver on the port taken from the $PORT environment variable used by Bluemix.
2. In your Procfile, add the --workDir=. parameter, so it would contain:

web: App --workDir=.

It will instruct Vapor to look for the Config directory in the current directory during run time.
The forEach tag is a replacement for the JSTL <c:forEach> tag. Though as of JSF 1.2/JSP 2.1/JSTL 1.2, <c:forEach> can be used with any JSF components or tags, it does not support "varStatus" when used with deferred evaluation. This tag adds support for varStatus (other than "current" which is not supported). (Note: this tag is not supported in Facelets, because c:forEach is fully functional in Facelets.) Unlike the old ADF af:forEach built with JSF 1.1, however, this tag can be used with any JSP 2.1-based tag, JSF or non-JSF. This tag also has a limitation not found in <c:forEach>: <af:forEach> does not currently support arbitrary java.util.Collections; it can only iterate over java.util.Lists or arrays.
The forEach tag should be used with intent and knowledge. The forEach tag is not used in JSF for iteration, but instead for generating multiple components. If your goal is to iterate over a collection of objects and render HTML for each item, <af:iterator> should be used instead.
Instances when <af:forEach> would be needed instead of <af:iterator>:
Instances when <af:forEach> would be needed instead of <af:iterator>:

- <af:forEach> with <af:declarativeComponent>

<af:forEach> may cause issues with component IDs
There may only be one component in a naming container with an ID. Therefore, if a component that is created as a child of a for each loop has an explicit ID set, JSF will alter the IDs of components beyond the first one. This can break partial triggers, behaviors and other components that refer to a component.
Objects in the items of an <af:forEach> tag should not be added, removed or re-ordered once the component tree has been created
For each loops are made to work in JSP, and have been altered to support JSF at a basic level only. The components that are created while looping over the for each loop store their indexes of when the component was first created. This means that if, say for example, the first item is removed from the items collection, the component at index 1, which now is index 0 still retains index 1. As a result, the EL expressions of the component will return the incorrect item when the var is resolved. Problems may also occur if explicit IDs are not given with component state. The component state is tied to the index of the component, not to the item in the items collection.
Children of <af:forEach> cannot share a binding EL value
A component instance may only exist once in a component tree. Therefore, if it is desired to have the component bound to a managed bean, the binding must evaluate to a different bean property for every iteration of the loop. For example:
<af:forEach ...>
  <!-- Note that the output text component is bound to the item, not the bean -->
  <af:outputText ... binding="..."/>
</af:forEach>
Code Example(s)
<af:selectOneListbox ...>
  <af:forEach ...>
    <af:selectItem .../>
  </af:forEach>
</af:selectOneListbox>
Managed Bean code snippet for the above jspx snippet
import javax.faces.model.SelectItem;

public class TestBean {

    public TestBean() {
        listOfItems = new ArrayList();
        listOfItems.add(new SelectItem("value1", "label1")); // value and label
        listOfItems.add(new SelectItem("value2", "label2"));
        listOfItems.add(new SelectItem("value3", "label3"));
        listOfItems.add(new SelectItem("value4", "label4"));
    }

    private List listOfItems;

    public void setListOfItems(List listOfItems) {
        this.listOfItems = listOfItems;
    }

    public List getListOfItems() {
        return listOfItems;
    }
}
<af:forEach ...>
  <af:outputText .../>
</af:forEach>
AppCode doesn't understand the following code:
using Map_t = std::map<std::string, int>;
Map_t myMap;
myMap["foo"] = 4;
I get error highlights on the using statement and on the assignment statement, saying "foo" is not an integer. However, the code builds with no problem. So I think it is just using the wrong options on clang when parsing the code - probably not the c++11 option. If I change the using to a typedef then the IDE errors go away.
AppCode doesn't understand the following code:
Looks like an issue. But if I include <string> directly I have no red code.
Maybe it's better for you to try your particular code on the upcoming EAP. It may be fixed there. If not, write us back or file an issue. The EAP will be ready very soon.
I only started trying it last night. Mainly looking forward to the cross-platform C++ IDE as a lot of our stuff won't build on the Mac, but I do some of my work there anyway and really want a uniform experience (I use Idea for Java and Idea or PyCharm for python). So this isn't urgent for me but I'd be happy to try the EAP (and even happier to try an EAP of the cross-platform IDE.)
What build system are you using for your C++ projects?
We use a custom system written in python but I think originally based on tmake. It takes a short description of the project (a ".pro" file) and will produce makefiles or VS project files. I'm not sure that this is used for the production builds but it is what the developers use. I've set up Eclipse CDT on Linux and tried to use it but I've always found it wanting.
BTW, here is the original code from my example (I'm back on my mac):
#include <iostream>
#include <map>
int main(int argc, const char * argv[])
{
// insert code here...
std::cout << "Hello, World!\n";
using Map_t = std::map<std::string, std::pair<int, long>>;
Map_t myMap;
myMap["foo"] = std::make_pair(3, 5L);
std::cout << "map contents: " << myMap["foo"].first << ", " << myMap["foo"].second << std::endl;
return 0;
}
If I #include <string> I still get red, but it also tells me that it is an unused include, because it doesn't understand the using line.
Jim
I see. Looks like it was a problem with 'using' or somewhere around in 2.5.5. An example with just typedef works fine there. So anyway it should be fixed as I see in 3.0 EAP (just checked your particular example works fine). So please just download it when it's available and try.
Another thing here is that we are working on the cross-platform C++ IDE. For the first release it supports only the CMake build system and uses CMakeLists.txt files as project files. Support for other build systems will be considered after the 1.0 release.
I'm willing to rebuild the project, at least for testing out IDEs. :)
I suppose this was a cause for the problem:
Yep - looks like the same issue. Thanks.
Very soon indeed.
It just popped up on the AppCode EAP page. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206587425-How-do-you-set-options-for-the-C-analyzer?sort_by=created_at | CC-MAIN-2019-30 | refinedweb | 539 | 83.25 |
/* zlib.h -- interface of the 'zlib' general purpose compression library
version 1.2.3, July 18th, 2005
The data format used by the zlib library is described by RFCs 1950 to 1952, in the files rfc1950.txt (zlib format), rfc1951.txt (deflate format) and rfc1952.txt (gzip format).
*/
#ifndef ZLIB_H
#define ZLIB_H
#include "zconf.h"
#ifdef __cplusplus
extern "C" {
#endif
#define ZLIB_VERSION "1.2.3"
#define ZLIB_VERNUM 0x1230
/*
The 'zlib' compression library provides in-memory compression and
decompression functions, including integrity checks of the uncompressed
data. [...] The decoder checks the consistency of the compressed data, so
the library should never crash even in the case of corrupted input.
*/
typedef voidpf (*alloc_func) OF((voidpf opaque, uInt items, uInt size));
typedef void (*free_func) OF((voidpf opaque, voidpf address));
    uLong adler; /* adler32 value of the uncompressed data */

#define Z_NO_FLUSH      0
#define Z_PARTIAL_FLUSH 1 /* will be removed, use Z_SYNC_FLUSH instead */
#define Z_SYNC_FLUSH 2
#define Z_FULL_FLUSH 3
#define Z_FINISH 4
#define Z_BLOCK 5
/* Allowed flush values; see deflate() and inflate() below for details */
#define Z_OK 0
#define Z_STREAM_END 1
#define Z_NEED_DICT 2
#define Z_ERRNO (-1)
#define Z_STREAM_ERROR (-2)
#define Z_DATA_ERROR (-3)
#define Z_MEM_ERROR (-4)
#define Z_BUF_ERROR (-5)
#define Z_VERSION_ERROR (-6)
/* Return codes for the compression/decompression functions. Negative
* values are errors, positive values are used for special but normal events.
*/
/*
    deflate() provides as much output as possible, until there is no more
    input data or no more space in the output buffer (see below about the
    flush parameter).
*/
ZEXTERN int ZEXPORT inflateEnd OF((z_streamp strm));
windowBits can also be -8..-15 for raw deflate. In this case, -windowBits
determines the window size. deflate() will then generate raw deflate data
with no zlib header or trailer, and will not compute an adler32 255 (unknown). If a
gzip stream is being written, strm->adler is a crc32 instead of an adler32.32 value
of the dictionary; the decompressor may later use this value to determine
which dictionary has been used by the compressor. (The adler32 value
applies to the whole dictionary even if only a subset of the dictionary is
actually used by the compressor.) If a raw deflate was requested, then the
adler32 the compression method is bsort). deflateSetDictionary does not
perform any compression: this will be done by deflate().
ZEXTERN int ZEXPORT deflateParams OF((z_streamp strm, int level, int strategy));
/*
Before the call of deflateParams, the stream state must be set as for
a call of deflate(), since the currently available input may have to
be compressed and flushed. In particular, strm->avail_out must be non-zero.
deflateParams returns Z_OK if success, Z_STREAM_ERROR if the source
stream state was inconsistent or if a parameter was invalid, Z_BUF_ERROR
if strm->avail_out was zero.
*/
inflatePrime returns Z_OK if success, or Z_STREAM_ERROR if the parameters are invalid, Z_MEM_ERROR if the internal state could not
be allocated, or Z_VERSION_ERROR if the version of the library does not
match the version of the header file.
*/
typedef unsigned (*in_func) OF((void FAR *, unsigned char FAR * FAR *));
typedef int (*out_func) OF((void FAR *, unsigned char FAR *, unsigned));

/*
    inflateBack() is more efficient than inflate() for
file i/o applications in that it avoids copying between the output and the
sliding window by simply making the window itself the output buffer. This
function
normal behavior of inflate(), which expects either a zlib or gzip:.
*/
This function can be used to compress a whole file at once if the
input file is mmap'ed.
*/
typedef voidp gzFile;

     [...] The string is then terminated with a null character.
     gzgets returns buf, or Z_NULL in case of error.

     gzflush returns Z_OK if the flush parameter is Z_FINISH and all output
     could be flushed.
gzflush should be called only when strictly necessary because it can
degrade compression.
*/
/* checksum functions */
/*
These functions are not related to compression but are exported
anyway because they might be useful in applications using the
compression library.
*/
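Python's built-in zlib module wraps this same library, so the one-shot compression entry points and the checksum functions described above can be exercised without writing any C (a quick illustration, not part of zlib.h itself):

```python
import zlib

data = b"hello, zlib" * 100

# One-shot compression/decompression (compress()/uncompress() in the C API)
packed = zlib.compress(data, 6)    # explicit compression level
assert zlib.decompress(packed) == data
assert len(packed) < len(data)     # repetitive input compresses well

# The two checksums exported by the library
assert zlib.adler32(b"") == 1      # adler32 starts at 1
assert zlib.crc32(b"") == 0        # crc32 starts at 0
```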
A range is the span of parsed character data
between two points. It may or may not represent a well-formed chunk
of XML. For example, a range can include an
element's start-tag but not its end-tag. This makes
ranges suitable for uses such as representing the text a user
selected with the mouse. Ranges are created with four functions
XPointer adds to XPath:
range( )
range-to( )
range-inside( )
string-range( )
The range(
)
function takes as an argument an XPath expression that returns a
location set. For each location in this set, the range(
) function returns a range exactly covering that location;
that is, the start-point of the range is the point immediately before
the location, and the end-point of the range is the point immediately
after the location. If the location is an element node, then the
range begins right before the element's start-tag
and finishes right after the element's end-tag. For
example, consider this XPointer:
xpointer(range(//title))
When applied to Example 11-1, it selects a range
exactly covering the single title element. If
there were more than one title element in the
document then it would return one range for each such
title element. If there were no
title elements in the document, then it
wouldn't return any ranges.
Now consider this XPointer:
xpointer(range(/novel/*))
If applied to Example 11-1, it returns three ranges,
one covering each of the three child elements of the
novel root element.
The range-inside(
)
function takes as an argument an XPath expression that returns a
location set. For each location in this set, it returns a range
exactly covering the contents of that location. For anything except
an element node this will be the same as the range returned by
range( ). For an element node, this range includes
everything inside the element, but not the element's
start-tag or end-tag. For example, when applied to Example 11-1,
xpointer(range-inside(//title)) returns a range
covering The Wonderful Wizard of Oz but not
<title>The Wonderful Wizard of
Oz</title>. For a comment, processing instruction,
attribute, text, or namespace node, this range covers the string
value of that node. For a range, this range is the range itself. For
a point, this range begins and ends with that point.
The range-to(
)
function is evaluated with respect to a context node. It takes a
location set as an argument that should return exactly one location.
The start-points of the context nodes are the start-points of the
ranges it returns. The end-point of the argument is the end-point of
the ranges. If the context node set contains multiple nodes, then the
range-to( ) function returns multiple ranges.
TIP:
This function is underspecified in the XPointer candidate
recommendation. In particular, what should happen if the argument
contains more or less than one location is not clear.
For
instance, suppose you want to produce a single range that covers
everything between <title> and
</year> in Example 11-1.
This XPointer does that by starting with the start-point of the
title element and continuing to the end-point of
the year element:
xpointer(//title/range-to(year))
Ranges do not necessarily have to cover well-formed fragments of XML.
For instance, the start-tag of an element can be included but the
end-tag left out. This XPointer selects <title>The
Wonderful Wizard of Oz:
xpointer(//title/range-to(text( )))
It starts at the start-point of the title element,
but it finishes at the end-point of the title
element's text node child, thereby omitting the
end-tag.
The string-range(
)
function is unusual. Rather than operating on a location set
including various tags, comments, processing instructions, and so
forth, it operates on the text of a document after all markup has
been stripped from it. Tags are more or less ignored.
The string-range( ) function takes as arguments an
XPath expression identifying locations and a substring to try to
match against the XPath string value of each of those locations. It
returns one range for each match, exactly covering the matched
string. Matches are case sensitive. For example, this XPointer
produces ranges for all occurrences of the word
"Wizard" in
title elements in the document:
xpointer(string-range(//title, "Wizard"))
If there are multiple matches, then multiple ranges are returned. For
example, this XPointer returns two ranges when applied to Example 11-1, one covering the W in
"Wonderful" and one covering the W
in "Wizard":
xpointer(string-range(//title, "W"))
TIP:
This function is also underspecified in the XPointer candidate
recommendation. In particular, it is not clear what happens when
there are overlapping matches.
You can also specify an offset
and a length to the function so that strings start a certain number
of characters from the beginning of the match and continue for a
specified number of characters. The point before the first character
in the string to search is 1. For example, this XPointer selects the
first four characters after the word
"Wizard" in
title elements:
xpointer(string-range(//title, "Wizard", 7, 4))
Nonpositive indices work backwards in the document before the
beginning of the match. For example, this XPointer selects the first
four characters before the word
"Wizard" in
title elements:
xpointer(string-range(//title, "Wizard", -3, 4))
If the offset or length causes the range to fall outside the
document, then no range is returned.
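The offset/length arithmetic can be modeled against plain text with a rough Python analogy. Again this is only an illustration that ignores markup; string_range here is a hypothetical helper, with offsets 1-based from the start of the match as in the examples above:

```python
def string_range(text, match, offset=1, length=None):
    """Return the substring selected by string-range()'s offset/length
    rules for the first occurrence of `match`, or None if the range
    would fall outside the text."""
    if length is None:
        length = len(match)
    pos = text.find(match)
    if pos < 0:
        return None
    start = pos + (offset - 1)       # offset 1 = first character of the match
    if start < 0 or start + length > len(text):
        return None                  # range falls outside the document
    return text[start:start + length]

title = "The Wonderful Wizard of Oz"

assert string_range(title, "Wizard") == "Wizard"
# string-range(//title, "Wizard", 7, 4): four characters after the match
assert string_range(title, "Wizard", 7, 4) == " of "
# string-range(//title, "Wizard", -3, 4): four characters before the match
assert string_range(title, "Wizard", -3, 4) == "ful "
```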
Since string ranges can begin and end at pretty much any character in
the text content of a document, they're the way to
indicate points that don't fall on node boundaries.
Simply create a string range that either begins or ends at the
position you want to point to, and then use start-point(
) or end-point(
) on that range. For example, this
XPointer returns the point immediately before the word
"Wizard" in the
title element in Listing 11-1:
xpointer(start-point(string-range(//title, "Wizard")))
Normally, an
XPointer is a fragment identifier
attached to a URL. The root node of the document the URL points to is
the context location for the XPointer. However, XPointers can also be
used by themselves without explicit URLs in XML documents. By
default, the context node for such an XPointer is
the root node of the document where the XPointer appears. However,
either the here( ) or the origin(
) function can change the context node for the
XPointer's XPath expression.
The here(
) function
is only used inside XML documents. It refers to the node that
contains the XPointer or, if the node that contains the XPointer is a
text node, the element node that contains that text node.
here( ) is useful in relative links. For example,
these navigation elements link to the
page elements preceding and following the pages in
which they're contained.
<page>
content of the page...
<navigation xlink:type="simple"
xlink:
</navigation>
<navigation xlink:type="simple"
xlink:
</navigation>
</page>
In these elements, the here( ) function refers to
the xlink:href attribute nodes that contain the
XPointer. The first .. selects the
navigation parent element. The second
.. selects its parent page
element, and the final location step selects the previous or next
page element.
The origin(
)
function is useful when the document has been loaded from an
out-of-line link. It refers to the node from which the user is
initiating traversal, even if that is not the node that defines the
link. For example, consider an extended link like this one. It has
many novel elements, each of which is a locator
that shares the same label:
<series xlink:
<!-- locator elements -->
<novel xlink:type="locator" xlink:label="oz"
xlink:
<title>The Wonderful Wizard of Oz</title>
<year>1900</year>
</novel>
<novel xlink:type="locator" xlink:label="oz"
xlink:
<title>The Marvelous Land of Oz</title>
<year>1904</year>
</novel>
<novel xlink:type="locator" xlink:label="oz"
xlink:
<title>Ozma of Oz</title>
<year>1907</year>
</novel>
<!-- many more novel elements... -->
<sequel xlink:type="locator" xlink:label="next"
xlink:
<next xlink:
</series>
The sequel element uses an XPointer and the
origin( ) function to define a locator that points
to the following novel in the series. If the user
is reading The Wonderful Wizard of Oz, then the
sequel element locates The Marvelous
Land of Oz. If the user is reading The Marvelous
Land of Oz, then that same sequel
element locates Ozma of Oz, and so on. The
next element defines links from each
novel (since they all share the label
oz) to its sequel. The ending resource changes
from one novel to the next. | https://docstore.mik.ua/orelly/xml/xmlnut/ch11_07.htm | CC-MAIN-2022-33 | refinedweb | 1,446 | 51.99 |
by David Tucker
In the previous installment of this series, we examined the basics of object relational mapping (ORM) and how it applies to Adobe AIR. You created an AIR application that uses a single simple data type, and you saved those values to the local embedded SQLite database using FlexORM. In this article, you will expand your application to include some complex relationships using the standard FlexORM metadata.
In the AIR application you created in the previous installment of this article, all your data was contained within a single ActionScript class: ContactVO. As I mentioned, this works fine for a demonstration, but in the real world, data is rarely that simple. For example, a contact doesn't simply contain a first and last name (as the previous version did). A contact might have multiple e-mail addresses, phone numbers, and street addresses. In addition, a single contact may belong to a single company that has many different employees.
While it might seem that the possibilities are endless, these relationships are generally divided into four different types: one-to-one, one-to-many, many-to-one, and many-to-many.
Let's start by modifying the ContactVO from the previous exercise. In addition to a firstName and lastName, you need to add an ArrayCollection of addresses that will be mapped as a OneToMany relationship. In this case, one contact can have many addresses. The updated ContactVO now appears as:
package vo
{
    import mx.collections.ArrayCollection;

    [Bindable]
    [Table(name="CONTACTS")]
    public class ContactVO
    {
        [Id]
        public var id:int;

        [Column( name="first_name" )]
        public var firstName:String;

        [Column( name="last_name" )]
        public var lastName:String;

        [OneToMany( type="vo.AddressVO", lazy="false", cascade="all" )]
        public var addresses:ArrayCollection = new ArrayCollection();

        public function toString():String
        {
            return lastName + ", " + firstName;
        }
    }
}
This is the first time we have seen a complex relationship being defined with metadata. In this case, we use the OneToMany metadata tag with a few additional properties: type (the class of the objects held in the collection), lazy (whether those objects are loaded on demand), and cascade (how saves and deletes propagate to them).
Now you need to create the actual AddressVO that will be attached to the ContactVO. In the same package, create a new ActionScript class named AddressVO:
package vo
{
    [Bindable]
    [Table(name="ADDRESSES")]
    public class AddressVO
    {
        [Id]
        public var id:int;

        [Column( name="street_address" )]
        public var streetAddress:String;

        [Column( name="city" )]
        public var city:String;

        [Column( name="state" )]
        public var state:String;

        [Column( name="postal_code" )]
        public var postalCode:String;

        [ManyToOne( cascade="none" )]
        public var type:TypeVO;
    }
}
In this class, the first five properties should look similar to you. These are all properties that will be reflected in the ADDRESSES table. However, the final property introduces another complex relationship, ManyToOne. In this case, each address will have a type. This enables the end user to define whether an address is for home or work. In this case, a specific TypeVO will contain this data. In this data model, many contacts can have one type. The only property that is needed on the metadata is the cascade value.
Finally, you can create the TypeVO, which will not have any complex relationships. You could chose to make the relationship between addresses and type bi-directional. If you did, you could have a collection of addresses on each type instance. Instead, let's just keep the association unidirectional. The TypeVO should appear as:
package vo
{
    [Bindable]
    [Table(name="TYPES")]
    public class TypeVO
    {
        [Id]
        public var id:int;

        [Column( name="name" )]
        public var name:String;

        public function TypeVO( name:String = "" )
        {
            this.name = name;
        }
    }
}
One core concept among ORMs is cascading. Cascading determines what happens when an item attached to a data type is created, deleted, or modified in some way. While I briefly introduced the concept in the previous installment of this article (without actually naming it), let's see exactly how FlexORM handles cascading.
According to the documentation, FlexORM currently supports four different settings for cascade:
In our example, it makes sense to set cascading to All for the addresses. If we delete a contact, we also want the address record in the ADDRESSES table to be deleted. There would be no need to hold onto it. In addition, when we save a contact, we want to be sure that all changes to the address are saved as well. The Type property on the address is different, however. If we delete an address, we don't want to delete the type because other addresses might be using it. In addition, we don't really need to look for changes on the type value when we save a contact. Simply attaching an address to a contact should not be considered modifying it. Knowing which option to use for cascade is a matter of how well you know your data model and its requirements.
Another configuration option that is common to most ORMs is lazy loading. This concept is fairly simple. Sometimes you don't want to retrieve all the values of a complex relationship until you actually need them. For example, if we added a company data object to our application, it may have a collection of 10,000 employees. We wouldn't want to retrieve all 10,000 employees if we just wanted to display a list of company names in a ComboBox. In a case like this, we could set lazy to "true" in the metadata. FlexORM would only retrieve the employees from the database when we actually access the Employees property.
While lazy loading is a powerful feature, I did run across a few minor bugs in FlexORM when working with lazy loading (specifically in ManyToOne relationships). Knowing how to best use lazy loading is a matter of knowing how your application will use the data and when it needs each piece of data.
Now that the application's data types have been created, you just need to build out the views to support this functionality. The finished application can be found with the exercise files for this article.
Since we will be working with different types, we need to be sure to check that the type values have been populated when we connect to the database. In this case, we will create a method that gets called from the creation complete handler (after the database is connected):
protected function setupTypes():void
{
    types = entityManager.findAll( TypeVO );

    if( !types || types.length == 0 )
    {
        var type:TypeVO;

        type = new TypeVO( "Home" );
        entityManager.save( type );

        type = new TypeVO( "Office" );
        entityManager.save( type );

        types = entityManager.findAll( TypeVO );
    }
}
The first time the code runs, it will create a type for Home and Office and save them both in the database. The next time the code runs, it will simply retrieve all the types and use that to populate the types' ArrayCollection that was defined earlier in the application.
Since we want the user to be able to define multiple addresses for the contact, we need to create a way to display a form for each address. In addition, we also need to give the user the ability to add a new form for a new address. To accomplish this, we will use a repeater to display a single form for every element in the contact's Addresses property:
<mx:Repeater ...>
    <view:AddressForm ... />
</mx:Repeater>

<mx:Button ... />
With this configuration, the only thing we need to do in order to add a new address is to add a new AddressVO to the selected contact's addresses collection:
protected function addNewAddress():void
{
    selectedContact.addresses.addItem( new AddressVO() );
}

Once these items are in place, we save the contact with all of its child entities:
protected function saveContact():void
{
    entityManager.save( selectedContact );
    getAllContacts();
}
You can get all the completed code in the application files. If you launch the application, you will see that you can add a first and last name and then click the Address Information tab to add as many addresses as you like. In this case, enter a single contact and give it two addresses: one for home and one for work. Click Save Contact. Once completed, your application should look similar to Figure 1.
Figure 1. The Contact Manager application.
Once we close the application, we can open the SQLite administration tool that was used in the first installment of this article. If we open up the database for this application, we can see all the values populated for contacts (the one that we saved), addresses (the two we created and attached to the contact), and types (which we created in the setupTypes method). The addresses are shown in Figure 2.
Figure 2. The SQLite administration tool showing inserted addresses.
During development, it is always a good idea to verify that the data was inserted properly into the database with some type of SQLite administration tool. If data for some of your complex relationships is not being inserted properly, it could be a sign that you didn't place all the proper metadata elements on your properties.
In all the examples in this series, we have been using synchronous database calls. While that works fine in many cases, synchronous database calls have their pros and cons. In most cases, AIR is single-threaded (which means that it can only do a single task at a time). If the application is busy with a complex task, the user will not be able to interact with the application until the task is complete. AIR provides an asynchronous option for dealing with the embedded database, and FlexORM supports that as well.
Using the asynchronous mode is not as easy as setting a property. The way you interact with the database is completely different. Instead of being able to make calls inline, you have to make a call and then listen for specific events to know if it succeeded or failed. While this provides the best experience for the end user, it requires more code.
To learn more about using FlexORM asynchronously, consult the FlexORM documentation and source.
I hope that after seeing the power of an ORM layer, AIR developers will be excited about learning more. Here are two key resources that can help you continue to grow in your knowledge of object relational mapping:
David Tucker is a software engineer for Universal Mind, focusing on the next generation of RIAs with Adobe Flex and AIR. He is based in Savannah, Georgia, but you can find him online at DavidTucker.net. | http://www.adobe.com/inspire-archive/december2009/articles/article7/ | CC-MAIN-2016-44 | refinedweb | 1,696 | 52.7 |
Event ID 14537 —
Ensure that the buffer on the client holds the appropriate trusted domain information
If your organization has a large number of trusted domains and forests, it is possible that client computers will not be able to access all domain-based namespaces in the trusted domains and forests. If a client computer can access a link target in another trusted domain or trusted forest by using the target’s Universal Naming Convention (UNC) path, the client computer can also access the link target by using its DFS path, but only if the list of domains fits into the client computer’s buffer. By default, DFS client computers send a 4-kilobyte (KB) (2,048 Unicode character) buffer to a domain controller when they request domain name referrals. If the list of domains is too large to fit into the 4-KB buffer, DFS client computers automatically increase their buffer size to accept the list of domains, up to a maximum of 56 KB.
When it populates the buffer of a client computer, DFS gives preference to local and explicitly trusted domains by filling the buffer with the names of those domains first. Consequently, by creating explicit trust relationships with domains that host important DFS namespaces, you can minimize the possibility that these domain names might be dropped from the list that is returned to the client computer. For more information about trust relationships, see Domain and Forest Trusts Technical Reference ().
Verify
Generate a list of the trusted domains from a domain controller and from a DFS client computer. Compare the two lists to ensure that they match.
Membership in Domain Admins, or equivalent, is the minimum required to perform the following procedure. Review details about default group memberships at. Perform the procedure on a domain controller in your domain.
To generate a list of trusted domains from a domain controller:
- Open a command prompt. To open a command prompt, click Start. In Start Search, type cmd, and then press ENTER.
- Run the command nltest /domain_trusts > domaintrusts.txt to produce a text file that lists all the trusted domain names.
- Open the list in a text editor. For example, to open the file in Notepad, run the command notepad domaintrusts.txt.
Membership in Domain Users, or equivalent, is the minimum required to perform the following procedure. Review details about default group memberships at. Perform the procedure on the DFS client computer.
To generate a list of trusted domains from a DFS client computer:
- Open a command prompt. To open a command prompt, click Start. In Start Search, type cmd, and then press ENTER.
- Run the command dfsutil /spcinfo > domaintrusts.txt to produce a text file that lists all the trusted domain names.
- Open the list in a text editor. For example, to open the file in Notepad, run the command notepad domaintrusts.txt.
Compare the list of trusted domains from the domain controller with the list from the DFS client computer; if the two lists match, the DFS service is working properly.
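If POSIX-style tools are available (for example, through Git Bash), the comparison itself can be scripted. This is a sketch only, not part of the official procedure; the file names, and the sample data used to stand in for the exported lists, are assumptions:

```shell
# Sketch: fake the two exported lists, normalize ordering, then compare.
printf 'corp.example.com\nsales.example.com\n' > controller_domains.txt
printf 'sales.example.com\ncorp.example.com\n' > client_domains.txt

sort controller_domains.txt > controller.sorted
sort client_domains.txt > client.sorted

if diff -q controller.sorted client.sorted > /dev/null; then
    echo "lists match"
else
    echo "lists differ"
fi
```

Sorting first makes the comparison insensitive to the order in which the two tools emit domain names.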
Related Management Information
Trusted Domain Information Status | https://technet.microsoft.com/en-us/library/ee406121(v=ws.10).aspx | CC-MAIN-2018-05 | refinedweb | 491 | 54.02 |
Date: September 1998. Last modified: $Date: 1998/10/14 20:17:13 $
Status: An attempt to explain the difference between the XML and RDF models. Editing status: Draft. Comments welcome!
Up to Design Issues
This note is an attempt to answer the question, "Why should I use RDF - why not just XML?". This has been a question which has been around ever since RDF started. At the W3C Query Language workshop, there was a clear difference of view between those who wanted to query documents and those who wanted to extract the "meaning" in some form and query that. This is typical. I wrote this note in a frustrated attempt to explain whatthe RDF model was for those who though in terms of the XML model. I later listened to those who thought in terms of the XML model, and tried to writ it the other way around in another note. This note assumes that the XML data model in all its complexity, and the RDF syntax as in RDF Model and Syntax, in all its complexity. It doesn't try to map one directly onto the other -- it expresses the RDF model using XML.
Let me take as an example a single RDF assertion. Let's try "The author of the page is Ora". This is traditional. In RDF this is a triple
triple(author, page, Ora)
which you can think of as represented by the diagram
How would this information be typically be represented in XML?
<author> <uri>page</uri> <name>Ora</name> </author>
or maybe
<document href="page"> <author>Ora</author> </document>
or maybe
<document> <details> <uri>href="page"</uri> <author> <name>Ora</name> </author> </details> </document>
or maybe
<document> <author> <uri>
These are all perfectly good XML documents - and to a person reading then they mean the same thing. To a machine parsing them, they produce different XML trees. Suppose you look at the XML tree
<v> <x> <y> a="ppppp"</y> <z> <w>qqqqq</w> </z> </x> </v>
It's not so obvious what to make of it. The element names were a big hint for a human reader. You can't even really tell what real questions can be asked. A source of some confusion is that in the xyz example above, there are lots of questions you can ask. They are questions like,
These are all questions about the document. If you know the document schema (a big if) , and if that schema it only gives you a limited number of ways of expressing the same thing (another big if) , then asking these questions can be in fact equivalent to asking questions like
This is hairy. It is possible because there is a mapping from XML documents to semantic graphs. In brief, it is hairy because
This last is a big one. If you try to write down the expression for the author of a document where the information is in some arbitrary XML schema, you can probably do it though it may or may not be very pretty. If you try to combine more than one property into a combined expression, (give me a list of books by the same author as this one), saying it in XML gets too clumsy to consider.
(Think of trying to define the addition of numbers by regular expression operations on the strings. Its possible for addition. When you get to multiplication it gets ridiculous - to solve the problem you would end up reinventing numbers as a separate type.)
Looking at the simple XML encoding above,
<author> <uri>page</uri> <name>Ora</name> </author>
it could be represented as a graph
We can represent the tree more concisely if we make a shorthand by writing the name of each element inside its circle:
Of course the RDF tree which this represents (although it isn't obvious from the XML tree except to those who know) is
Here we have made a shorthand again by putting making the label for each part its URI.
The complexity of querying the XML tree is because there are in general a large number of ways in which the XML maps onto the logical tree, and the query you write has to be independent of the choice of them. So much of the query is an attempt to basically convert the set of all possible representations of a fact into one statement. This is just what RDF does. It gives you some standard ways of writing statements so that however it occurs in a document, they produce the same effect in RDF terms. The same RDF tree results from many XML trees.
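To see concretely how many XML trees collapse into one triple, here is a small sketch (mine, not the note's; it uses Python's standard XML parser and the hypothetical element names from the examples above):

```python
# Sketch (not part of the original note): two different XML shapes,
# one logical triple.
import xml.etree.ElementTree as ET

doc_a = "<author><uri>page</uri><name>Ora</name></author>"
doc_b = '<document href="page"><author>Ora</author></document>'

def triple_from_a(xml_text):
    # Schema A: subject and object are child elements.
    root = ET.fromstring(xml_text)
    return ("author", root.findtext("uri"), root.findtext("name"))

def triple_from_b(xml_text):
    # Schema B: subject is an attribute, object is a child element.
    root = ET.fromstring(xml_text)
    return ("author", root.get("href"), root.findtext("author"))

# Different trees, same assertion about the world:
assert triple_from_a(doc_a) == triple_from_b(doc_b) == ("author", "page", "Ora")
print(triple_from_a(doc_a))  # ('author', 'page', 'Ora')
```

Each extraction function is specific to one schema; RDF's syntax rules are what remove the need to write a new extractor for every schema.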
Wouldn't it be nice if we could label our XML so that when the parser read it, it could find the assertions (triples) and distinguish their subjects and objects, so as to just deduce the logical assertions without needing RDF? This is just what RDF does, though.
In fact RDF is very flexible - it can represent this triple in many ways in XML so as to be able to fit in with particular applications, but just to pick one way, you could write the above as
<Description about="" Author="Ora" />
I have missed out the stuff about namespaces. In fact as anyone can create or own the verbs, subjects and objects in a distributed Web, any term has to be identified by a URI somehow. This actual real example works out to in real life more like
<?xml version="1.0"?> <Description xmlns="" xmlns:
You can think that the "description" RDF element gives the clue to the parser as to how to find the subjects, objects and verbs in what follows.
This is pretty much the most shorthand way of using the base RDF in XML. There are others which are longer, but more efficient when you have, for instance, sets of many properties of the same object. The useful thing is that of course they all convey the same triple
It is a mess when you use questions about a document to try to ask questions about what the document is trying to convey. It will work. In a way. But flagging the grammar explicitly (RDF syntax is a way of doing this) is a whole lot better.
Things you can do with RDF which you can't do with XML include
Problems with basing your understanding on the structure include
I'll end this with some examples of the last problem. Clearly they can be avoided by good design even in an XML system which does not use RDF. Using RDF makes things easier.
If you haven't gone to the trouble of making a semantic model, then you may not have a well defined one. What does that mean? I can give some general examples of ambiguities which crop up in practice. In RDF, you need a good idea about what is being said about what, and they would tend not to arise.
Look at a label on the jam jar which says: "Expires 1999". What expires: the label, or the jam? Here the ambiguity is between a statement about a statement about a document, and a statement about a document.
Another example is an element which qualifies another apparently element. When information is assembled in a set of independently thrown in records often ambiguities can arise because of the lack of logic. HTTP headers (or email headers) are a good example. These things can work when one program handles all the records, but when you start mixing records you get trouble. In XML it is all too easy to fall into the trap of having two elements, one describing the author, and a separate one as a flag that the "author" element in fact means not the direct author but that of a work translated to make the book in question. Suddenly, the "author" tag, which used to allow you to conclude that the author of a finnish document must speak finnish, now can be invalidated by an element somewhere else on the record.
Another symptom of a specification where the actual semantics may not be as obvious as they seem at first sight is ordering. When we hear that the order of a set of records is important, but the records seem to be defined independently, how can that be? Independent assertions are always valid taken individually or in any order. In a server configuration file, for example, the statement which looks like "any member has access to the page" might really mean "any member has access to the page unless there is no other rule in this file which has matched the page". That isn't what the spec said, but it did mention that the rules were processed in order until one applied. Represented logically, in fact there is a large nested conditional. There is implicit ordering when mail headers say, "this message is encrypted", "this message is compressed", "this message is ASCII encoded", "this message is in HTML". In fact the message is an ASCII encoded version of an encrypted version of a compressed version of a message in HTML. In email headers the logic of this has to be written into the fine print of the specification.
There is something fundamentally different between giving a machine a knowledge tree, and giving a person a document. A document for a person is generally serialized so that, when read serially by a human being, the result will be to build up a graph of associations in that person's head. The order is important.
For a graph of knowledge, order is not important, so long as the nodes in common between different statements are identified consistently. (There are concepts of ordered lists which are important although in RDF they break down at the fine level of detail to an unordered set of statements like "The first element of L is x", the "third element of L is z", etc so order disappears at the lowest level.). In machine-readable documents a list of ostensibly independent statements where order is important often turn out to be statements which are by no means independent.
Some people have been reluctant to consider using an RDF tree because they do not wish to give up the order, but my assumption is that this is from constraints on processing human readable documents. These documents are typically not ripe for RDF conversion anyway.
Conclusion:
Sometimes it seems there is a set of people for whom the semantic web is the only graph which they would consider, and another for whom the document tree (or graph if you include links) is all they would consider. But it is important to recognise the difference.
In this series:
.@@@ RDF does not have to be serialized in XML but ...
Up to Design Issues | http://www.w3.org/DesignIssues/RDF-XML | CC-MAIN-2014-35 | refinedweb | 1,800 | 67.59 |
Hello: I am stuck on something I need a bit of guidance on. Here is what I am needing to do:
Write a class encapsulating the concept of coins, assuming that coins have the following attributes: a number of quarters, a number of dimes, a number of nickels, a number of pennies. Include a constructor, the accessors and mutators, and methods toString and equals. Also code the following methods: one returning the total amount of money in dollar notation with two significant digits after the decimal point, and others returning the money in quarters (for instance, 0.75 if there are three quarters), in dimes, in nickels, and in pennies. Write a client class to test all the methods in your class.
I am stuck on how to structure my total return method. Here is what I have so far, thanks for the help.
Code:
import java.text.DecimalFormat;

public class Coins
{
    private int quarters;
    private int dimes;
    private int nickels;
    private int pennies;
    private int total;

    public Coins()
    {
    }

    public Coins( int q, int d, int n, int p, int t )
    {
        quarters = q;
        dimes = d;
        nickels = n; // fixed: was "nickels = d;", a copy/paste bug
        pennies = p;
        total = t;
    }

    public int getQuarters()
    {
        return quarters;
    }

    public int getDimes()
    {
        return dimes;
    }

    public int getNickels()
    {
        return nickels;
    }

    public int getPennies()
    {
        return pennies;
    }

    public int getTotal()
    {
        return total;
    }
}
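For the total method the question asks about, one possible structure (a sketch with made-up names, not the assignment's required interface) computes the total from the coin counts rather than storing it as a field:

```java
// Sketch: class and method names here are invented for illustration.
public class CoinsTotal {
    private final int quarters, dimes, nickels, pennies;

    public CoinsTotal(int q, int d, int n, int p) {
        quarters = q; dimes = d; nickels = n; pennies = p;
    }

    // Total value in cents: exact integer arithmetic, no rounding issues.
    public int totalCents() {
        return quarters * 25 + dimes * 10 + nickels * 5 + pennies;
    }

    // Dollar notation with two digits after the decimal point, e.g. "$1.07".
    public String totalDollars() {
        return String.format("$%d.%02d", totalCents() / 100, totalCents() % 100);
    }

    // Money held in quarters alone, e.g. 0.75 for three quarters.
    public double moneyInQuarters() {
        return quarters * 0.25;
    }

    public static void main(String[] args) {
        CoinsTotal c = new CoinsTotal(3, 2, 1, 7); // 75 + 20 + 5 + 7 = 107 cents
        System.out.println(c.totalDollars());      // $1.07
        System.out.println(c.moneyInQuarters());   // 0.75
    }
}
```

Working in integer cents and formatting only at the edges sidesteps binary floating-point rounding; applying new DecimalFormat("0.00") to totalCents() / 100.0 would be an equivalent alternative using the import already in the post.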
NAME
curl_easy_escape - URL encodes the given string
SYNOPSIS
#include <curl/curl.h>
char *curl_easy_escape( CURL * curl , const char * string , int length );
DESCRIPTION
This function converts the given input string to a URL encoded string and returns that as a new allocated string. All input characters that are not a-z, A-Z, 0-9, '-', '.', '_' or '~' are converted to their "URL escaped" version (%NN where NN is a two-digit hexadecimal number). If length is set to 0 (zero), it uses strlen() on the input string to find out the size.
You must curl_free the returned string when you're done with it.
ENCODING
libcurl is typically not aware of, nor does it care about, character encodings. curl_easy_escape encodes the data byte-by-byte into the URL encoded version without knowledge or care for what particular character encoding the application or the receiving server may assume that the data uses.
The caller of curl_easy_escape must make sure that the data passed in to the function is encoded correctly.
AVAILABILITY
Added in 7.15.4 and replaces the old curl_escape function.
RETURN VALUE
A pointer to a zero terminated string or NULL if it failed.
EXAMPLE
CURL *curl = curl_easy_init(); if(curl) { char *output = curl_easy_escape(curl, "data to convert", 15); if(output) { printf("Encoded: %s\n", output); curl_free(output); } }
SEE ALSO
curl_easy_unescape, curl_free, RFC 3986
Feature #4828 (open)
Request support for EXPath file functions
Description
I'm making use specifically of the file:list() function, defined in EXSLT. It compiles and runs properly in Java Saxon but when I attempt to run it in saxon-js, I get the following error:
Static error in XPath in xslt/mfcFunctions.xslt {file:list($testNodes,true())}: Unknown function Q{}list()
I'm using the namespace:
xmlns:file=""
Any suggestions?
Updated by Michael Kay over 1 year ago
The EXPath file module is not currently implemented in Saxon-JS. (The same is true of the binary module). We hope to have these available in a future release.
Updated by Michael Kay over 1 year ago
- Tracker changed from Bug to Feature
- Subject changed from I am attempting to use an exslt function with saxon-js but will neither run nor compile to Request support for EXPath file functions
Recategorising this as a feature request. Note also, it is EXPath not EXSLT.
Updated by Community Admin over 1 year ago
- Applies to JS Branch 2 added
- Applies to JS Branch deleted (2.0)
Updated by Norm Tovey-Walsh 10 days ago
- Priority changed from Normal to High
- Sprint/Milestone set to SaxonJS 3.0
Updated by Conal Tuohy 9 days ago
Updated by Michael Kay 9 days ago
This feature has been implemented and tested and is very likely to find its way into SaxonJS 3. For Node.js only, not in the browser.
Sometimes the admin site works great out of the box. Other times, you need to add some custom code to certain parts of your admin site.
Maybe you want some custom tables that show your revenue for the month (i.e. you have an e-commerce site), or maybe you'd like to create a button so you could upload batch SKUs for product inventory in your e-commerce site.
The sky really is the limit. This post will show you not only how to update and customize the admin templates, but also how to update the actual admin views so you can change the behavior of the entire admin interface.
Step 1: Override Admin Templates
Create an admin directory in your project's templates directory. Inside the admin directory, create a subdirectory named after your app (i.e. my_app).
You have some options that you could edit for specific apps:
- app_index.html
- change_form.html
- change_list.html
- delete_confirmation.html
- object_history.html
All the other ones, you would just copy into your newly created admin directory.
Step 2: Override Admin View by Subclassing AdminSite
If you’d like to change how your Admin site behaves, you’ll need to subclass
django.contrib.admin.AdminSite:
from django.contrib.admin import AdminSite
from .models import MyModel

class MyAdminSite(AdminSite):
    site_header = "My Super Awesome Customized Admin Site"
    # More stuff including view methods that you can also override.
Any method or attribute you find here can be overridden. You can literally change how the entire Admin site behaves and works.
Step 3: Instantiate your Admin Class and Register your Models on it
Once your admin site works the way you want it to, you can instantiate and register your models to it:
admin_site = MyAdminSite(name="myadmin")
admin_site.register(MyModel)
Now, you can add your admin to your urlconf:
from django.conf.urls import url
from my_app.admin import admin_site

urlpatterns = [
    url(r'^myadmin/', admin_site.urls),
]
Now, you can change the look and feel of your admin site, and even update the behavior of the entire admin site. What would you like to add to your admin site?
In-Depth
In a Bring Your Own Device (BYOD) world, .NET Framework support for Portable Class Libraries (PCLs) provides a base for writing code that will run on any platform -- provided you understand the limitations of PCLs and how to structure your applications to exploit them.
The Microsoft .NET Framework Common Language Specification (CLS) ensures code written in one language can generate Microsoft intermediate language (MSIL) that can be used from other compliant languages. But the various .NET target frameworks (Silverlight, Windows Phone, Xbox 360 and .NET for Windows Store apps) all have different sets of supported features and encompass different namespaces and libraries. So, "cross-platform development" really means "cross-platform development for a targeted framework."
When you need to share code between these frameworks -- if you really want to build something that runs on different platforms -- you have to create separate libraries for each framework, often by recompiling an existing library for each framework. Using XAML to create views for different frameworks reduces the divide between the frameworks but, as a result, increases the need to share business code across frameworks in order to have truly sharable libraries that don't need to be recompiled.
Portable Class Libraries (PCLs) allow you to write code that can be targeted at a combination of .NET frameworks. You select the frameworks you want to target when creating a PCL project to gain assembly portability across those frameworks. You do lose some functionality, however -- the feature set available to your library is the intersection set of the features in the selected frameworks (see Figure 1). Nor are all classes in that intersection area necessarily available in a PCL.
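As a rough analogy (the framework names are real, but the feature names below are invented purely for illustration), the rule behaves like plain set intersection:

```python
# Invented feature sets for three target frameworks (illustration only).
net45 = {"File.IO", "HttpClient", "Linq", "Reflection.Emit"}
silverlight5 = {"HttpClient", "Linq", "Xaml"}
windows_phone8 = {"HttpClient", "Linq"}

# A PCL targeting all three may only rely on features common to every target.
portable_surface = net45 & silverlight5 & windows_phone8
print(sorted(portable_surface))  # ['HttpClient', 'Linq']

# Adding a target can only shrink (never grow) the available surface.
assert portable_surface <= (net45 & silverlight5)
```

This is why the advice later in the article is to select only the frameworks you actually need.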
In order for a class to be a candidate for portability, it not only has to be available in all the target frameworks, but it also has to have the same behavior across those frameworks. Even then, there are some classes that meet both conditions that aren't yet "portable" (but, if you wait, you should see them become portable). However, there are some strategies for reducing the impact of the "intersection set," which will be discussed later.
You can use the MSDN Library to determine if a particular class is available for a PCL project: Just look for Portable Class Library in the Version Information section of a class's documentation. For any class, not all members may be available in a PCL.
Creating and Referencing PCLs
Earlier versions of Visual Studio provided some support for targeting multiple platforms through file and project linking (available as an extension to Visual Studio 2010), but both processes had limitations. Portable Library Tools, which adds the project templates to develop PCL projects for Visual Studio, is the latest iteration to address cross-framework support. Portable Library Tools is available out-of-the-box in Visual Studio 2012 Professional, and as an extension for Visual Studio 2010 SP1.
When you select the PCL project from the Visual Studio Add New Project dialog, you're presented with a dialog listing the frameworks you can target (Figure 2). These include:
After you've created your project, you can change your selections from your project's Property Pages. Also, by clicking the Install Additional Frameworks link at the bottom of the Add Portable Class Library dialog, you can add additional targeting packs such as Windows Azure, for instance. In Visual Studio 2012, you'll find your project's References node in Solution Explorer contains a single library called .NET Portable Subset.
The .NET Framework 4 uses special re-targetable assemblies created specifically for PCLs, which choose the correct assembly at runtime depending on the host framework. The .NET Framework 4.5 was built with a portable mindset, and its assemblies have been designed from the ground up with portability in mind through type forwarding.
While it's tempting to select all the available frameworks (the "you never know" attitude), it's better to think in terms of "you aren't going to need it." Because the feature set available to your code is the intersection of the frameworks selected, each additional framework you select reduces the set of features available to you. The best advice is to choose only the target frameworks where you expect your code to be hosted. But, thanks to the way PCLs can be referenced, there are some strategies that allow you to extend the functionality in a portable application.
Referencing Portable Applications
While PCLs can't reference non-portable class libraries, other kinds of projects (those targeting a single framework) can reference PCLs, provided the PCL includes the framework that the original project is targeting. PCLs can also reference each other: Your PCL project can reference another PCL, provided that the target frameworks in the referenced library are a subset of your project's frameworks. If a PCL you want to reference targets a set of frameworks different from your project, you won't be allowed to add a reference to the PCL.
PCL projects also support adding Service References, with some restrictions. All the service operations in the client proxy are generated as asynchronous. Any non-portable classes in the service are ignored (for instance, any constructor involving a non-portable class will not be included in the client proxy). Within the service contract, ProtectionLevel, SessionMode, IsInitiating and IsTerminating are removed and only a couple of bindings are supported.
As of NuGet 2.1, PCLs can be part of NuGet packages. If your NuGet package contains a framework-specific library and a portable version, the more specific-versioned libraries get preference when the package is installed (but this raises the question of why your package has both a portable and a framework-specific version of the same library). Installing a NuGet package that doesn't target a framework within the project will fail.
Strategies for Portable Applications
In a Model-View-ViewModel (MVVM)-based application, views are platform-dependent, while view models and models can be kept in separate class libraries, which can be PCLs. You could, for instance, mix combinations of "portable model" libraries with "non-portable view and view model" or "portable model and view model" with "non-portable views." There are two factors that will influence your decision in selecting PCLs to hold your models and view models: which framework you're targeting, and whether the frameworks and libraries your model or view model need are available to a PCL.
When writing code for a PCL project, the Template Method and Non-Virtual Interface design patterns provide a way of dealing with the subset of framework types available. In your PCL project, you can provide the basic definition of your process in a base class, defining any framework-dependent members (members that require functionality not available in the intersection subset) as abstract/MustOverride or virtual/Overrideable methods.
The implementation for these methods can be provided in classes that inherit from your PCL classes; those classes can be in framework-specific libraries where the full functionality for the framework is available. With virtual/Overrideable methods, you might be able to provide a generic implementation that wouldn't always need to be overridden by framework-specific code (or would extend the framework-specific code) by using the Protected Variations General Responsibility Assignment Software Patterns (GRASP) principle.
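The article's code is C#, but the pattern itself is language-agnostic; here is a minimal sketch of the idea in Python (the class and member names are invented for this example - in a real solution the base class would sit in the portable assembly and the subclass in a framework-specific library):

```python
from abc import ABC, abstractmethod

class ReportExporter(ABC):
    """Portable base class: the template method defines the overall process."""

    def export(self, data):
        # Portable steps - these use only features from the 'intersection' set.
        payload = ",".join(str(item) for item in data)
        return self.save(payload)  # framework-dependent step

    @abstractmethod
    def save(self, payload):
        """Framework-dependent member, implemented per target framework."""

class DesktopExporter(ReportExporter):
    """Framework-specific subclass supplying the non-portable behaviour."""

    def save(self, payload):
        return "saved to disk: " + payload

print(DesktopExporter().export([1, 2, 3]))  # saved to disk: 1,2,3
```

Making save a virtual member with a sensible default body, rather than abstract, mirrors the Protected Variations variant mentioned above.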
You can also intelligently divide your code across PCLs by looking at the functionality required in each framework. While your application might target several frameworks, you rarely want all of the application's features to be available in all of the frameworks. For example, the supported features in a Windows Phone version of your application will probably be more limited than the Windows Presentation Foundation (WPF)- and Silverlight-based versions. With that in mind, you can develop "universal" PCLs (those with features shared across all the required platforms) and reference them from other "local" PCLs (those with features targeting a smaller set of frameworks). Assembly reference rules should allow the assemblies targeting the larger set of frameworks to be referenced from an assembly targeting a subset.
Figure 3 shows an application targeting five frameworks that's distributed over three PCLs. PCL1 is the universal PCL that provides the functionality available in all frameworks, while PCL2 and PCL3 extend that functionality in selected frameworks.
Software Development Tools and Frameworks
PCL support continues to increase. JetBrains dotPeek 1.0 supports decompiling PCLs, for example. The Microsoft.Bcl.Async-beta NuGet package supports the async keyword for the .NET Framework 4, Silverlight 4 (and higher), and Windows Phone 7.5, and their combinations in a PCL. There's also a fork of the MVVM Light toolkit on CodePlex for PCL.
Before deciding on adopting -- or rejecting -- PCLs, check the tools in your development toolkit. As more and more tools and frameworks add support for PCLs, the absence of that support will become critical. In a Bring Your Own Device (BYOD) world, PCLs aren't just desirable features -- they're essential.
About the Author
Muhammad Siddiqi is a technology enthusiast and a passionate blogger and speaker. He is a co-author of "MVVM Survival Guide for Enterprise Architectures in Silverlight and WPF". He blogs at, and you can find him on Twitter @SiddiqiMuhammad.
7.237. squid
Updated squid packages that fix one security issue and several bugs are now available.
Squid is a high-performance proxy caching server for web clients that supports FTP, Gopher, and HTTP data objects.
Security Fixes
- CVE-2012-5643
- A denial of service flaw was found in the way the Squid Cache Manager processed certain requests. A remote attacker who is able to access the Cache Manager CGI could use this flaw to cause Squid to consume an excessive amount of memory.
Bug Fixes
- BZ#805879
- Due to a bug in the ConnStateData::noteMoreBodySpaceAvailable() function, child processes of Squid terminated upon encountering a failed assertion. An upstream patch has been provided and Squid child processes no longer terminate.
- BZ#844723
- Due to an upstream patch, which renamed the HTTP header controlling persistent connections from Proxy-Connection to Connection, the NTLM pass-through authentication does not work, thus preventing login. This update adds the new http10 option to the squid.conf file, which can be used to enable the change in the patch. This option is set to off by default. When set to on, the NTLM pass-through authentication works properly, thus allowing login attempts to succeed.
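As an illustration only (the http10 directive is specific to these Red Hat squid packages, as described above), enabling the old header behaviour for an NTLM pass-through setup is a one-line change in squid.conf:

http10 on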
- BZ#832484
- When the IPv6 protocol was disabled and Squid tried to handle an HTTP GET request containing an IPv6 address, the Squid child process terminated due to signal 6. This bug has been fixed and such requests are now handled as expected.
- BZ#847056
- The old "stale if hit" logic did not account for cases where the stored stale response became fresh due to a successful re-validation with the origin server. Consequently, incorrect warning messages were returned. Now, Squid no longer marks elements as stale in the described scenario.
- BZ#797571
- When squid packages were installed before samba-winbind, the wbpriv group did not include Squid. Consequently, NTLM authentication calls failed. Now, Squid correctly adds itself into the wbpriv group if samba-winbind is installed before Squid, thus fixing this bug.
- BZ#833086
- BZ#782732
- Under the high system load, the squid process could terminate unexpectedly with a segmentation fault during reboot. This update provides better memory handling during reboot, thus fixing this bug.
- BZ#798090
- Squid incorrectly set the timeout limit for client HTTP connections with the value for server-side connections, which is much higher, thus creating unnecessary delays. With this update, Squid uses a proper value for the client timeout limit.
- BZ#861062
- When the GET method requested a fully-qualified domain name that did not contain the AAAA record, Squid delayed due to long DNS requesting time. This update introduces the dns_v4_first option to squid.conf. If the dns_timeout value of this option is properly set, Squid sends the A and AAAA queries in parallel and the delays no longer occur.
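Again purely as an illustration (dns_v4_first is the option introduced by this update; dns_timeout is a standard squid.conf directive, and the 2-second value here is an arbitrary example), the relevant squid.conf lines would look like:

dns_v4_first on
dns_timeout 2 seconds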
- BZ#758861
- Squid did not properly release allocated memory when generating error page contents, which caused memory leaks. Consequently, the Squid proxy server consumed a lot of memory within a short time period. This update fixes this memory leak.
- BZ#797884
- Squid did not pass the ident value to a URL rewriter that was configured using the url_rewrite_program directive. Consequently, the URL rewriter received the dash character (-) as the user value instead of the correct user name. Now, the URL rewriter receives the correct user name in the described scenario.
- BZ#720504
- Squid, used as a transparent proxy, can only handle the HTTP protocol. Previously, it was possible to define a URL in which the access protocol contained the asterisk character (*) or an unknown protocol namespace URI. Consequently, an Invalid URL error message was logged to access.log during reload. This update ensures that HTTP is always used in transparent proxy URLs, and the error message is no longer logged in this scenario.
Users of squid are advised to upgrade to these updated packages, which resolve this issue and fix these bugs. | https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.4_technical_notes/squid | CC-MAIN-2020-40 | refinedweb | 629 | 55.44 |
Copy files or directories in Python programming
This script is presented by Patrick Lapre, a member of the Unixmen community.
#!/usr/bin/env python
Small intro: copying files with a specific pattern from their origin to their final destination ;-)
# import the following modules
# re is needed for patterns
import re
# os.path to define file manipulation
import os.path
# dircache is used for listing directories (Python 2 only - it was removed in Python 3)
import dircache
# shutil is used for copying files
import shutil

# Defined some global variables to be used later on
From_Path = """/Users/carllapre/test/from/"""
To_Path = """/Users/carllapre/test/to/"""

a = dircache.listdir( From_Path )
# make a copy of the array to manipulate later on
a = a[:]
# print statement just for debugging
print a
# create string value to be used later on
b = " ".join( a )
print b
# look it is a pattern ;-) - \S limits each match to a single
# (space-free) file name instead of greedily matching to the end of the string
pattern = re.compile( r'rbs\S*' )
print pattern
# This is the actual pattern match
c = re.findall( pattern, b )
print c
# Stick it all together again (space-separated, so the split below works)
e = " ".join( c )
# and now for the split
splitter = re.compile( ' ' )
d = splitter.split( e )
print d
# Ok this is a loop to check if the file
# already exists in the destination directory.
# If not then copy the file to its destination
for i in d:
    # skip the empty string we get when there were no matches at all
    if not i:
        continue
    print i
    File_Exist = To_Path + i
    # check if the file already exists in its destination folder
    x = os.path.isfile( File_Exist )
    # using a reserved word to check the condition
    if x == False:
        print """Copying file to """ + To_Path
        Source = From_Path + i
        Destination = To_Path + i
        # actual copy action
        shutil.copy2( Source, Destination )
    else:
        print """File already exists at location """ + To_Path
        print """\n Doing Nothing!!!"""
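The script above is Python 2 (the print statement and the dircache module are both gone in Python 3). For Python 3 readers, here is a minimal equivalent sketch using os.listdir - the directories and the 'rbs' pattern are kept from the script above, and the function name is just an invention for this example:

```python
import os
import re
import shutil

def copy_matching(from_path, to_path, pattern=r'rbs\S*'):
    """Copy files whose names match `pattern` from from_path to to_path,
    skipping any file that already exists at the destination."""
    rx = re.compile(pattern)
    copied = []
    for name in os.listdir(from_path):
        destination = os.path.join(to_path, name)
        if rx.match(name) and not os.path.isfile(destination):
            shutil.copy2(os.path.join(from_path, name), destination)
            copied.append(name)
    return copied

# Example call (same directories as the original script):
# copy_matching("/Users/carllapre/test/from/", "/Users/carllapre/test/to/")
```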
Introduction
The concept of sticking our compiled code into containers is becoming quite popular at the moment, and for very good reason. If you develop in the cloud and don't use containers or similar technology now, then you should really take a look at what's involved. The huge benefits of using containerized infrastructure/deployment are being pushed by the cloud vendors, and also increasingly being recognized by the mainstream enterprise as an all round good egg and something worth investing time and resources into embracing. Containers and 'server-less computing' are two technologies that most developers are going to have to embrace in the next short few years.
This article progresses the others in my DevOps/Infrastructure focused series, where we are looking at building a cloud agnostic, clustered virtual machine solution based on container-based services. In the first article in this series, we looked at 'Terraform', what it is, and how to use it. The second article dug a bit deeper into that very useful piece of Infrastructure technology. The third one went through approaches for provisioning virtual machines with Terraform once they have been deployed. Since we have our Virtual Machines set up, the next thing we are going to look at is using this thing called Kubernetes to manage our code containers. If you are a pure Windows person, I would urge you to dip your toes into the Linux world - it's fascinating and has the full backing of Microsoft. For those folk, I have included deeper step by step instructions in the article; others can just skip those parts.
Introducing Kubernetes
Containers are incredibly useful, but you can only fit so many on a machine before you have to worry about how to manage them all - this management of containers is known as orchestration. Arguably the most popular container system is the one provided by the Docker engine. While Docker has its own answer to orchestration, in 'Docker Swarm', the one that seems to be pulling ahead in the race is called 'Kubernetes'.
Kubernetes is a system that will manage the lifecycle of your containers, no matter if they are distributed in clusters of five or five thousand virtual machines. The technology itself is based on Google's 'Borg' system, which has been Google's internal system for managing clusters for many years. Just as Terraform aims to keep your 'desired state' of nodes up and running, Kubernetes will ensure that the containers you give it are safely distributed around the machine cluster and operating in a healthy manner.
I will delve into Kubernetes and Docker in further articles; for this article we are going to focus on setting up a cluster. If you are interested in the background to Kubernetes, these two Google papers are well worth a read,
Microsoft support
Microsoft is making a big bet and very large investment in the Kubernetes world. As developers, we now have the ability to package directly to a container from within Visual Studio, and they have recently brought out some amazing tooling for it on Azure with Azure Container Service, as well as employing some of the world's best Kubernetes engineers. In my experience, when Redmond goes full force behind something like this, it's time to sit up and take notice!
Article scope
I think most IT folk will agree that life is short enough and there's simply too much to learn at times! ... while I am completely in favour of going deep-dive into technologies *when it's needed*, most of the time I am quite happy to stand on the shoulders of others and use the fruits of the community (and that's one of the reasons I write articles, to give back ... please try it out yourself, every article helps someone, somewhere!).
Kubernetes from scratch is not the easiest of things to set up. There can be complications and dependencies that can trip you up easily, and until you get used to it, it can feel like the proverbial three steps forward, two steps back. For the purposes of this article, I am going to show you one of the best, most configurable ways I have come across of installing a Kubernetes environment, using a community supplied Ansible based solution called 'Kube Spray'.
If you are not aware, Ansible is software that helps automate software provisioning, config management, and app deployment. It's extremely popular and very widely used in the IT operations world. I will cover it in an upcoming article in this series.
Kube-spray (originally known as 'Kargo') is, at its core, a set of Ansible playbooks that automate the installation of Kubernetes master and client nodes. It is open source, developed and maintained by the community, and now part of the Kubernetes incubation program. Critically, KubeSpray is *production ready*, so it takes quite a lot of pain out of the entire setup operation for us.
What we are going to do in this article is a step by step install of Kubernetes using Kube-Spray, along with a very swish dashboard or three to let you easily manage your container cluster.
Building a virtual network on Azure
If you are following along with the series (see links at top of article), you will be using Terraform to spin up your cluster. If you haven't got there yet or want to play around with different configurations, here's a quick run-through of doing it from the Azure Portal,
You have a final chance to review before you commit to setting the machine up. After a short time you will be notified that the machine is ready for use. To connect to it, open the master virtual machine, click 'connect' at the top and this will give you an SSH IP address you can use to log in.
Using Kube-Spray on Azure
Requirements
The following instructions assume you have 4 x virtual machines provisioned and are able to access one via a Public-IP (this will be your master node) and the others by using SSH via the master node. In Kubernetes we have two types of machines. The main machine is known as the master, and typically is called 'KubeMaster'. Other virtual machines that are controlled by the master used to be called 'minions' but are now called 'nodes'.
Machine setup
For development, we set up five of the following machines (one master, four nodes).
It is critical for setup that you perform all commands given as the root user - once you have logged into a VM, take care to "sudo -s" and become the root user before carrying out any commands. For the avoidance of ANY doubt, I'll say it again - you MUST do everything from here on in as root; if you don't, you will have issues :) The following is how I set up my test cluster for this article .. note the auto-shutdown option I have turned on - critical when testing if you don't want your credit card smacked with a big bill at the end of the month!
Kube-spray install
Ensure the SSH service is installed and running, then SSH into the master machine. I used to use the Putty app for this in the past, but now that BASH FOR WINDOWS is available I use this exclusively ... it's far easier and a fantastic Windows integrated tool if you are dipping your feet into the DevOps world. If you don't have it installed and want to give it a spin, please check out my step by step instructions on how to install BASH for Windows.
Connecting to the remote machine using Windows Bash
To find out how to login, on the dashboard, select the master machine and click the 'connect' button on the top. This will give you the main account and IP to log into.
To connect, open your BASH prompt and enter the command given in the connect information. I generally copy this to the clipboard and paste it into the BASH window. To cut and paste into the bash or any text based window, click on the top left icon and then select edit -> paste from the popup menu shown
After pasting in the ssh command, press enter!
You will notice from the screenshot below, that once we have connected, as we have not SSH'd into this machine before, it will give us a security warning about the machine and ask us to accept. Normally when responding to a yes/no you can enter Y or N, but in this case you MUST enter the full word 'yes' ... assuming you wish to continue! :)
After accepting the connect query, you are then prompted to enter the password for the user on the machine you are connecting to (in this case, you are requesting to connect as user 'kubeadmin' on machine 52.166.110.189). Once the password is entered correctly, you are now live on the remote machine. It shows your username and machine name in green, and the dollar ($) prompt. Next steps are to start preparing the machine for the main install.
Setup security
Once we are connected, we then need to elevate our user to ROOT, and create security keys that will be used to communicate securely between hosts. To change to the root user, enter:
sudo -s
You will be asked for the main password. After entering the password, we need to move into the ROOT folder:
cd /root
and generate the RSA security keys:
ssh-keygen -t rsa
NB: Follow these instructions carefully! .... it is important to be logged in as root and important to generate the key without a password.
The next thing we are going to do is print the key we just generated to the screen, and copy that in preparation for pasting it on the remote nodes. Running the command 'cat' (short for 'concatenate') will read the file you pass it as a parameter, and redirect its data to the screen....
cat /root/.ssh/id_rsa.pub
To select and copy this data at the bash command line, highlight with your mouse and hit <enter>
Sync security between nodes
Now that we have a key ready, we need to copy this over to the other nodes so that they see our master machine as authorized. This next step gets repeated for *each node* and, in addition, gets done for the master itself as well. I am showing you a manual method here - there are multiple ways to do this. In production I normally create a 'base' machine as a node that's set up and ready to go, and then 'clone' this for each new node I need - it saves a lot of the manual work.
Step 1 - connect to the remote node. We have four nodes to connect and make part of our security group. In my test cluster I have named these node1, node2, node3, node4. Each has the same admin username (kubeadmin) and password. The initial login is the same experience as connecting our local Windows machine to the remote server. Initially you will get a security warning that you need to agree to, then you can log in with the password. Once logged in, as before, we need to elevate ourselves to the root user with sudo -s. Having done that, we are then going to use a simple Linux text editor called 'nano' to edit the file that handles authentication of other machines trying to connect to it, 'authorized_keys', which is located in the '/root/.ssh' folder.
The full command calls the text editor with a parameter of the file you wish to open in edit mode:
nano /root/.ssh/authorized_keys
For those with sufficient grey hair, nano is not unlike the old WordStar text editor. In fact, it seems that George R.R. Martin, the author behind the television series 'Game of Thrones', writes his novels on WordStar version 4! We are going to keep this extremely simple. We will use the paste method mentioned above (click the top left icon, remember?) to paste the contents of the public key from our master machine into the editor... (the entire key pasted does not show above but it does go in!)
So to save this file and then exit, we press the 'CTRL + O' key combination at once - this will prompt a save - just accept by hitting the enter key. We then use the key combination 'CTRL + X' to exit nano. Ok, so now we are ready to test that it worked as expected. To do this, we will log out, and then attempt to log back in again, but as root this time, not kubeadmin. To back out of the remote user bash shell you are in, type 'exit' and hit enter until you are back at the 'root@kubemaster' prompt. Now, from this starting point, we attempt to log in *using no password* to the remote node.
Success! ... note how no password was asked for and we are landed straight into the 'root' account prompt.
The above needs to be repeated for every virtual machine/node in your cluster.
Step 2 - finalizing access for the master node itself.
For the master node, we finally need to copy its own public key into its own authorized key file. We can do this by copying (cp) the pub file into the auth file:
cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
We can confirm it worked by running a CAT on the auth file after the copy.
Install prerequisites
There are a number of installs and updates we need to carry out on the master machine before we can install KubeSpray - these are in effect its dependencies.
We run each of these commands individually (answering y/yes where appropriate/when asked), or, you can run them from the attached script (just to make your life a small bit easier .. you're welcome! :D)
Download KubeSpray
The official GIT repository for KubeSpray is the usual starting point. For the purpose of this article I am going to point you to a community branch customised by the very talented crew in Xenonstack ... their particular branch is already configured for a number of extremely useful dashboards and services, so we may as well stand on their shoulders (as an aside, I have used Xenon for training and support on a number of occasions and they are really excellent - my go-to experts for immediate professional advice and assistance!)
Configure KubeSpray
Having downloaded KubeSpray (now residing in '/root/kubespray'), we next need to tell the configuration the names and/or IPs of our nodes.
Navigate to /root/kubespray/inventory and edit the inventory.ini file (using our new found friend 'nano'), adding in references to 'KubeMaster' and 'nodeX' (1..4) as shown below. (as you might guess, navigation uses basic commands quite like dos/powershell ... cd for change directory would be useful at this point!) (ensure that the section headers are uncommented '#' so that '#[etcd]' becomes '[etcd]')
The sections are as follows,
NB: If you wish to use a different Linux distro instead of Ubuntu, you will need to edit the /kubespray/inventory/group_fars/all.yml file and change 'bootstrap_os: ubuntu' to 'bootstrap_os: XX' where XX is the name of the OS (listed in top of the all.yml file)
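For illustration, a filled-in inventory for the cluster above might look roughly like the following. The [etcd] group name comes from the note above; the other group names ([kube-master], [kube-node], [k8s-cluster:children]) are the ones KubeSpray used at the time of writing, so keep whatever headers your copy of inventory.ini ships with, and add ansible_host=<IP> entries if your nodes don't resolve by name:

[all]
kubemaster
node1
node2
node3
node4

[kube-master]
kubemaster

[etcd]
kubemaster

[kube-node]
node1
node2
node3
node4

[k8s-cluster:children]
kube-master
kube-node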
We are now ready to test connectivity between the master and nodes. From the kubespray/inventory folder run the following command: ansible --inventory-file=inventory.ini -m ping all
Assuming that the instructions have been followed carefully, you should see a 'green light' response as follows,
So, good to go, the final thing to do is run the actual install itself and then check everything has worked!
Install KubeSpray
Right, so here's the reason we came to this party! ... after all our setup, the installation itself couldn't be simpler ... we just execute one command line from the kubespray folder (one up from inventory where we have been, try 'cd ..')...
Depending on the number of nodes you have in the cluster, and what spec they are, the script will run for anything from 5 to 30 minutes (that's a lot of work, thank goodness KubeSpray is there to do it all for us!). While the script is running, you can expect the text output flowing to the screen to stop/start ... it will look similar to this. When the script has completed, you will be shown a 'play recap', and the overall timings for the process. In my test run, it took 11.33 minutes, which is pretty respectable.
Confirm KubeSpray installation
Right, everything seems to be complete, so we can now carry out checks to ensure everything is as it should be. One of the main ways we interact with Kubernetes is using the KUBECTL (kube control) command line utility. KubeCtl allows us to get status information from the cluster, and also to send commands into the cluster. First up, let's confirm our cluster is up and running and our nodes have registered and are available. We do this by using the 'get' command, with a parameter of 'nodes'
kubectl get nodes
As shown below, our cluster, consisting of our single master and four child nodes, is now all connected and ready to go. The next thing we will do is look and see exactly what containers and pods have been created by the installation script for us. To do this, we call 'kubectl get' again, passing a main param of 'pods' and also passing the optional parameter 'all-namespaces', which gives us back system containers as well as our own specific deployments. Since we don't have any of our own pods or containers deployed, we simply get to see a listing of what the script has put together for us.
kubectl get pods --all-namespaces
At the start of the article, I talked about dashboards - let's confirm that they are up and running by calling their container IP/port combinations. To find out what is where, we need to examine the running 'services'
kubectl get svc
When we call the 'service' list by default, it only gives us the top level exposed services - in this case it's simply the cluster service. However, we know there is more, and you can see now how the addition of the 'all-namespaces' parameter extends the request to give us detail on everything, not only the top level. The pods we are interested in are the dashboard ones (kubernetes dashboard, grafana and weave-scope), and we can see they are present and operational.
Finally, to confirm they are working, we can use the CURL command (which acts as a command-line HTTP client of sorts) to connect to the dashboard using its IP and download the response.
curl 10.233.46.240
Next steps
We have now successfully installed a Kubernetes cluster on Azure in a very pain-free way, using a method that will work for all cloud providers and also on bare metal. The next steps are to expose some ports in our security area so we can see the dashboards in the browser and examine the benefits they give us, and to expose 'persistent data volumes' that we can interact with for centralized storage. We will discuss this in the next article in this series.
I have attached a downloadable, shortened set of instructions to assist you in setting up your own Kubernetes cluster. As usual, if this article has been helpful, please give it a vote above!
I created a new project using create-react-app and Yarn 2 in VS Code. The editor throws errors when importing every installed library, like this:
Cannot find module ‘react’ or its corresponding type declarations.
The project compiles and runs successfully but the errors are still there. When I change the file's extension to .js and .jsx from .ts and .tsx, the errors disappear. How should I solve this problem for TypeScript files?
Solution #1:
You need to install types for the libraries you install. Generally you should do:
npm install @types/libName // e.g. npm install @types/react-leaflet
Some times there’s no types available in DefinitelyTyped repo and you encounter
npm ERR! 404 Not Found: @types/[email protected]. You can do several things in this occasion:
1- Create a decs.d.ts file in the root of your project and write in it:
declare module "libName" // e.g declare module 'react-leaflet'
2- Or simply suppress the error using @ts-ignore:
// @ts-ignore
import Map from 'react-leaflet'
3- Or you can do yourself and others a favor and add the library types in DefinitelyTyped repo.
If it still doesn’t work, I firstly recommend that you use a recent version of Typescript. if you do, then close your editor, delete
node_modules folder and install the dependencies again (
npm install or
yarn install) and check it again.
Solution #2:
I had the same error message, also related to the imports.
In my case, the problem was caused by importing a png file in order to use it in an SVG:
import logo from 'url:../../public/img/logo.png';
The message I received was:
Cannot find module 'url:../..public/img/logo.png' or its corresponding type declarations.
Solution:
In the project root is a file css.d.ts. One must declare a module for the various graphics extensions (types) that are imported. (Note: not for graphics types that are used or referenced in the project, but for those that are imported.)
Here is the complete css.d.ts file for this project:
declare module '*.scss' {
  const css: { [key: string]: string };
  export default css;
}

declare module '*.sass' {
  const css: { [key: string]: string };
  export default css;
}

declare module 'react-markup';
declare module '*.webp';
declare module '*.png';
declare module '*.jpg';
declare module '*.jpeg';
| https://techstalking.com/programming/question/solved-react-typescript-cannot-find-module-or-its-corresponding-type-declarations/ | CC-MAIN-2022-40 | refinedweb | 380 | 67.25 |
Language-Integrated Query (LINQ)

Visual Studio 2008 includes LINQ provider assemblies that enable the use of LINQ with .NET Framework collections, SQL Server databases, ADO.NET DataSets, and XML documents.
In This Section
Introduction to LINQ
Provides a general introduction to the kinds of applications that you can write and the kinds of problems that you can solve with LINQ queries.
Getting Started with LINQ in C#
Describes the basic facts you should know in order to understand the C# documentation and samples.
Getting Started with LINQ in Visual Basic
Describes the basic facts you should know in order to understand the Visual Basic documentation and samples.
How to: Create a LINQ Project
Describes the .NET Framework version, references, and namespaces required to build LINQ projects.
Visual Studio IDE and Tools Support for LINQ
Describes the Object Relational Designer, debugger support for queries, and other IDE features related to LINQ.
LINQ General Programming Guide
Provides links to topics that include information about how to program with LINQ, such as the standard query operators, expression trees, and query providers.
LINQ to Objects
Includes links to topics that explain how to use LINQ to Objects to access in-memory data structures.
LINQ to XML
Includes links to topics that explain how to use LINQ to XML, which provides the in-memory document modification capabilities of the Document Object Model (DOM), and supports LINQ query expressions.
LINQ to ADO.NET (Portal Page)
Supplementary LINQ Resources
Links to other online sources of information about LINQ.
Related Sections
LINQ to SQL
Explains the LINQ to SQL technology and provides links to topics that help you use LINQ to SQL.
LINQ to ADO.NET (Portal Page)
Explains the LINQ to DataSet technology and provides links to topics that help you use LINQ to DataSet.
LINQ Samples
Provides links to samples that demonstrates various aspects of LINQ.
See Also
Other Resources
Link to Everything: A List of LINQ Providers | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/bb397926(v=vs.90) | CC-MAIN-2018-34 | refinedweb | 322 | 55.44 |
According to tiny anthropomorphic hamsters huddled in a corner at Bloomingdale's pre-opening gala, Be, Inc. will be releasing its software MIDI synthesizer later this month. This will allow MIDIphiles to play MIDI files using the audio output of the BeBox without having to connect to an external MIDI player. It also provides an easy way for application developers to add a soundtrack and sound effects to their applications.
The software synthesizer is based on the wavetable synthesis engine SoundMusicSys, licensed from Headspace, Inc. SoundMusicSys was created by Steve Hales and Jim Nitchals and has achieved considerable popularity among game developers. The SoundMusicSys engine supports Headspace's RMF (Rich Music Format), a cross-platform open standard for musical expression (see for more about RMF).
The software synthesizer also supports the General MIDI Specification, which is a mapping from MIDI program change numbers to instruments. An application can play ocarina sounds, for example, by sending a program change number 79 (ocarina) to the synthesizer on one of the sixteen MIDI channels. The synthesizer will then respond to a note-on message on that channel by mixing an appropriately pitch- shifted ocarina sound into an audio stream (usually the DAC stream).
A high-quality General MIDI instrument sample library was developed specifically for the BeOS by Peter Drescher of Twittering Machine Productions. The library includes a complete set of 127 instruments, plus a percussion bank, and contains about 5 MB of samples. The instruments are sampled at 22 KHz, 16-bit resolution. While they're designed to be able to play a wide range of General MIDI files in a variety of musical styles, special attention was paid to creating a lifelike effect from the acoustic instruments.
The SoundMusicSys engine can play user supplied samples as well as sounds from the sample library; and a selection of built-in reverb effects can be applied to the samples.
Interested developers should pay close attention to the Be web site, where a sample application capable of rendering MIDI files to audio will be appearing, followed shortly by an API to the synthesizer along with source code to the sample application.
The next major release of the BeOS, Developer Release 9 (DR9), will include a significant overhaul of our mainline C++ libraries. Many of these changes should make development on the BeOS much more flexible. The major new features planned for inclusion are C++ I/O streams, support for exceptions, and an implementation of the C++ Standard Template Library (STL).
Most programmers should be familiar with the first two features. I/O streams provide an easy way to abstract file I/O. C++ exceptions provide a mechanism by which the programmer can handle unexpected or error conditions and relay them to other parts of the program. Both of these are described in detail in any good C++ book.
The third new feature, STL, has been a part of the ANSI C++ standard since 1994. STL is a powerful and efficient library that aggressively uses templates to provide a set of useful C++ container classes and generic algorithms. These classes and algorithms provide an easy way to construct and operate on new and complex data structures.
The details of STL can get rather hairy and may take some time to understand properly. This article provides a very brief description of the elements of STL and a few examples of their use.
STL consists of five elements:
"Algorithms" are template functions that perform operations on containers.
"Containers" are objects that contain other objects and perform memory operations on them.
"Iterators" are methods that specify the location of a container or stream, for use by an algorithm.
"Function objects" encapsulate functions in an object for use by other components.
"Adaptors" adapt components to provide different interfaces; they translate between classes.
STL extends basic C and C++ programming paradigms, making it easy to start using the library. For example, STL provides a generic sorting algorithm, appropriately named sort().

The following code fragment demonstrates how sort() can be used to sort the elements in a "normal" array, as well as on the STL "vector" container:
double a[1000];
vector<double> b;
...

// sort all the items in the array
sort(a, a + 1000);

// sort all the items in the vector
sort(b.begin(), b.end());
As with all STL algorithms, the sort() algorithm is generic: It accepts regular pointers as its arguments, as well as the STL-defined "iterators" (location specifiers) that are returned by the begin() and end() calls.
Note that there are various kinds of iterators. The type of the iterator defines its use. For example, input iterators provide access to data sources. Output iterators provide access to data sinks. These can be explored further as the programmer delves into STL.
The library provides a number of container types. In addition to arrays ("built-in" containers) and the vector type demonstrated above, STL provides lists, queues, sets, and stacks, to name a few. These are all templates, so you can have a list of ints, set of Employees, and so on, without doing much work at all.
Here's a simple example taken from the web that once again uses the vector container type (see for more examples):
#include <iostream.h>
#include <algobase.h>
#include <vector.h>

main(int argc, char *argv[])
{
    int n = atoi(argv[1]);   // argument checking removed for clarity
    vector<int> v;

    for (int i = 0; i < n; i++)   // append integers [0, n-1] to v
        v.push_back(i);

    // shuffle
    random_shuffle(v.begin(), v.end());

    // print to stdio
    copy(v.begin(), v.end(), ostream_iterator<int>(cout, "\n"));
}
This program generates a random permutation of the first 'n' integers, where 'n' is specified on the command line. Believe it or not, the algorithm random_shuffle() is defined in STL.
The last line of this program may be a bit confusing. The STL copy() algorithm takes three iterators. The first two specify the source range and the third is the destination. Here, the third argument, ostream_iterator<int>(), is an "adaptor." The adaptor converts the integer vector into an output stream. Assigning to ostream_iterator<int> writes data out. The two arguments to the ostream_iterator<int> constructor are the output stream and the element separator.
In summary, STL provides a concise and efficient way of constructing and operating on a variety of data structures. What I've presented above are the bare essentials; there are many other ways these template classes can be used. Look forward to using them in DR9!
Before I was married, I used to work all the time. Lucky for me, my future wife worked with me. That's 24-hour-a-day exposure to my spouse to be!
When we were married, we vowed, among other things, to not work on weekends, and not only that, but we didn't keep a machine at home either. What a time squeeze. At the time we were doing a lot of NeXT development and were pretty proficient at it, so things were OK.
Before my daughter was born, I got in the habit of exercising to think quickly. I figured there wouldn't be much time to write buggy code accompanied by long hours of debugging. So, what to do? I know, use frameworks and plug-ins extensively. Object-oriented programming has been given a bad rap. Encapsulation at least is a key development methodology, which works very well in many situations, and add-ons implement encapsulation beautifully.
One advantage that the BeOS has as a new operating system, starting relatively from scratch, is that we have a lot of good and bad examples to look at. This is true for both the OS code itself and the applications that we encourage developers to write. An application like, oh, I don't know... Lumena, was pretty good, took a long time to develop, and couldn't keep up with the times.
We can learn from this. We can emulate functions and features, and best of all, we know what the architecture of the application should be. I would argue that the architecture, or framework, upon which the application is built is one of the most important factors that will influence how gracefully it ages over time. A framework that supports add-ons will at least be more easily updated and possibly more extensible than its traditional monolithic counterpart. And even at that, we've learned a lot about how to make add-ons work most efficiently. So like today's OS that learns from the pros and cons of the past, applications do the same thing.
As an example, last week I released the first version of Rraster! This is a simple example of how to support add-ons using the BeOS. The application is simple, it's meant to be an add-on-aware image viewer. I would put it in the category of esoteric software, because although this particular category has launched such products as Photoshop and DeBabelizer, this really is a mundane feature that no OS should be without.
My nanny was deathly ill last week, so I spent at lot more time with my daughter. When you're with an 18-month old you don't have a lot of time to think, let alone code. But I had to stay productive, so what to do? Write more Rraster add- ons. So this week I've updated Rraster. I managed to add support for the following file formats:
GIF, PCX, PBM, TIFF, TGA, PNG, JFIF, XBM, and BMP
I didn't quite have enough time for PICT or other Mac formats, but what can you expect for coding that has to occur between diaper changes. Then I was about to take a whack at filter plug-ins and the weekend parties and visiting started.
You can get the latest at:
Of course all this source is for you to be able to write your own favorite image plug-in and to see how add-ons can be supported in general.
To continue the deluge of internally developed demo app source releases, I managed to get Mandelbrot prettied-up. So take a look at:
You'll see the most often implemented demo code in the history of graphic computing. This is probably the Hello World! of graphics programming, other than bouncing balls.
I'll take one more pass at the graphics framework as an example before moving on to some apps that are more audio in nature, since we're lacking in this area. Remember, if you want to see something specific, send in those requests and keep them coming.
In 1987, most pundits started predicting that multimedia was poised to become the next revolution in computing. I believe this was the time when the P word, paradigm, as in "paradigm shift," started to creep into execuspeak. Others felt multimedia was the simple but important continuation of an old trend: With great regularity, more of (almost) everything was offered to hardware and software engineers. As a result, the computer increased its range of media and, to a large extent, it handled it more gracefully with the passage of time.
There was a time when multimedia meant multiple slide projectors and a tape player; when the multimedia dust settled for a brief moment before the rise of the web, multimedia had come to mean a CD-ROM, loudspeakers, color, music, and animated graphics. At Apple, in the early days of the new era, we were both clueless and bombarded—from outside and, as a result, from the executive suite. We were clueless because, while we liked more interesting, livelier computers, at that time we had no idea what the next multimedia "killer app" would turn out to be. We were bombarded with suggestions and demands from the outside world. (This is a long-standing tradition for Apple, one that seems to perpetuate itself today with suggestions to adopt a certain operating system.)
For instance, in those days technological haruspices were clamoring for the adoption of DVI, an Intel-sponsored asymmetric video compression technology, and there were big debates about CD-I, the "I" standing for interactive. Japanese companies were trying to enter the market with a new kind of multimedia personal computer, the CD-I PC. There was panic, we must have a statement of strategic direction, and collateral damage, the overproduction of overhead transparencies. The search for the multimedia Holy VisiCalc wasn't going well. Video looked attractive, "compelling" was the buzzword. Others had their doubts, based on a combination of psychological and business arguments. They saw the classical productivity applications as tools, used repeatedly and valued accordingly. Video was perceived as more ephemeral. Captivating, entertaining, but few customers, if any, would use the same video over and over again. And for entertainment, why bring TV to the computer screen?
Sensing the potential, but unable to divine the killer app, I made a bold move: I hired a childhood friend of Larry Tesler, Marc Porat, with the mission to scour Apple's technology portfolio and build a multimedia strategy. He came up with General Magic instead. But there was hope. One researcher in Apple's Advanced Technology Group wrote a short paper summarizing the notion of "genre." The genre expresses a convention, an agreement combining the expressive ways of authors and the expectations, the habits of an audience. The same physical medium can harbor many genres: Newspaper, book, newsletter, encyclopedia on paper, or comedy, tragedy, musical comedy on stage. That paper transmuted the Holy PageMaker question into one of new, emerging genres. We now know a little more. In many ways, the CD-ROM has become synecdoche for multimedia, and a few distinct genres have emerged for the new medium: Games, reference, software distribution, the questionable edutainment, and lately, back-up, admittedly not very multimedia nor totally ROM. All self-respecting PCs now have a CD-ROM drive, speakers, and reasonable audio and video capabilities.
The Internet poses even more interesting genre questions. We have e-mail, news, web pages... E-mail meets the definition of genre very well: Expectations, audience, expression, the ingredients are there. The same is true for news. Things get more confusing for web pages. One can argue company and personal pages are gaining the stable conventions required to qualify. But the proliferation of information has created an opportunity for new genres: Search and delivery.
In a way, General Magic was onto something and investors were lured by the promise of intelligent agents. Newspapers use intelligent humans to sort and present information to us. Humans and computers in concert are likely to create one or two stable genres mining the web for us and providing us the combination of the expected and the pleasantly unexpected, for which we'll be willing to part with some of our money. Which is another genre criterion.
Scripting talk. Correspondents politely debated the merits of REXX (and OREXX), Java, Python, and so on. At a higher level, the one-language attitude was questioned: Some folks would prefer an open scripting architecture, in which any number of scripting languages are recognized. It was offered that Be must (at least) standardize the scripting interface at the application port (AKA socket) level.
Java Stack and Heap Memory
Memory in Java is divided into two parts - the
Stack and the
Heap. This is done to ensure that our application efficiently utilizes memory.
In this tutorial, we will learn more about these two types of memories and the key differences between them.
Stack Memory
- The stack memory is used for thread execution and static memory allocation. All primitive data resides in the stack memory and the references to other objects are also stored in it. It only contains local data that is available for the current thread.
- Whenever a method is called from a Java application, then a new block for that method is added to the stack. This block will contain all the primitive data and references that are needed by the method.
- Stack memory, as the name suggests, uses the Last-In-First-Out(LIFO) approach of the Stack data structure to add and remove method calls.
- Stack memory is a lot faster than heap memory. It is also safer than heap memory because the data can only be accessed by the running thread. It is thread-safe and does not require synchronization as each thread gets its own Stack.
- Allocation and deallocation of memory are done automatically. Whenever a method is called, then a new block for this method is added at the top of the Stack. And when the method finishes execution or returns, then the block and all the corresponding data are removed and the space becomes available for new method calls.
- We may encounter the StackOverflowError if we run out of the Stack memory. We can write a simple program to simulate this error. If we repeatedly call a method, then these method calls will accumulate in the Stack, and eventually, we will get this error. In the following code, we recursively call the overflowError() method inside its definition and this leads to StackOverflowError.
public class JavaMemoryManagement {
    public static void overflowError() {
        overflowError();
    }

    public static void main(String[] args) {
        overflowError();
    }
}
Let's see an example to understand how memory is allocated in the stack.
In the code below, we are creating primitive integer data and char data in the main() method. This main() method will be pushed on the top of the stack. Then we are calling the print() method, which will again be added to the top of the stack. Inside the print() method, we are calling the printChar() method, and a block for this method is allocated on the stack. All these method blocks in the stack will have the required data available to them.
public class JavaMemoryManagement {
    public static void print(int i, char c) {
        System.out.println(i);
        printChar(c);
    }

    public static void printChar(char c) {
        System.out.println(c);
    }

    public static void main(String[] args) {
        int i = 5;
        char c = 'a';
        print(i, c);
    }
}
The final stack memory is shown in the image below.
Heap Memory
- The heap memory is the place where all class instances or objects are allocated memory. As discussed above, the references to these objects are stored in the Stack memory. All dynamic memory allocation takes place in the heap. Heap memory is also globally accessible and any thread can access the memory.
- Unlike the Stack memory, which uses the stack data structure, the heap data structure is in no way linked to the heap memory.
- The heap memory is not as safe as the stack memory because all the threads can access the objects and we must synchronize them.
- It is also a lot slower than the Stack memory.
- Memory allocation and deallocation are not very simple and heap memory uses complex memory management techniques. Memory deallocation is not done automatically and Java uses a Garbage Collection mechanism to free up space.
- Heap memory is further divided into three parts:
- Young Generation: this part of the heap memory is the place where all newly allocated objects reside. Garbage collection takes place when this part of the memory becomes full.
- Old Generation: when the young generation objects cross a threshold time limit, then they are transferred to the old generation space. This usually contains objects that are not frequently used.
- Permanent Generation: this is the place where JVM stores the metadata for the runtime classes and application methods.
- If we run out of heap memory, then the JVM throws the OutOfMemoryError. This can happen if we continue to allocate space without removing the old objects. We can simulate this error by creating an infinite loop that keeps on allocating memory for new String class objects. The following code will run for a few seconds and then we will get the OutOfMemoryError.
import java.util.ArrayList;
import java.util.List;

public class JavaMemoryManagement {
    public static void outOfMemoryError() {
        List<String> list = new ArrayList<String>();
        while(true) {
            String s = "str";
            list.add(s);
        }
    }

    public static void main(String[] args) {
        outOfMemoryError();
    }
}
Consider the following example to better understand the heap memory.
In the code below, we are creating a String object which will be allocated space in the String Pool of the heap memory. We are also creating an instance of the Object class and space will be allocated for this object as well. The references to these objects will be stored in the stack memory.
public class JavaMemoryManagement {
    public static void main(String[] args) {
        String s = "str";
        Object o = new Object();
    }
}
The final Heap memory is shown in the image below.
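As a small supplementary sketch (not part of the original tutorial's code), the stack/heap split becomes visible when two reference variables share one heap object: both references live on the stack, but they point at the same object on the heap, so a change made through one is seen through the other.

```java
public class HeapSharing {
    public static void main(String[] args) {
        StringBuilder a = new StringBuilder("hi"); // object allocated on the heap
        StringBuilder b = a;                       // second stack reference, same heap object
        b.append("!");                             // mutate through b...
        System.out.println(a);                     // ...and a sees the change too
    }
}
```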
Key Differences between Stack and Heap Memory
The following table summarizes some of the key differences between these two types of memory.

| | Stack Memory | Heap Memory |
|---|---|---|
| Contents | Primitive data, method frames and object references | All objects (class instances) |
| Access | Only the owning thread | All threads (globally accessible) |
| Order | Last-In-First-Out (LIFO) | No particular order |
| Speed | Faster | Slower |
| Memory management | Automatic (freed when the method returns) | Complex; freed by Garbage Collection |
| Error when full | StackOverflowError | OutOfMemoryError |
Stack and Heap Memory Example
Let's try to understand what happens under the hood when we create new objects and call methods. Consider the following code that creates a new int variable, a String object, and another object of the Object class. It then calls the print() method that prints the string and the object.
public class JavaMemoryManagement {
    public static void print(String s, Object o) {
        System.out.println(s);
        System.out.println(o);
    }

    public static void main(String[] args) {
        int i = 5;
        String s = "str";
        Object o = new Object();
        print(s, o);
    }
}
The following steps explain what happens when we run the above code.
- The execution begins with the main() method. This method will be added to the top of the stack.
- Inside the main() method, primitive data of type int is created. This will be present in the stack itself.
- Next, an object of the String class is created. Space will be allocated to this object in the heap memory and the reference to this object will be stored in the stack. The same thing happens when an instance of the Object class is created.
- Next, the print() method will be added to the top of the stack. The references to the String class instance and the Object class instance will be available to this method. The memory will look like below.
Summary
The JVM uses two types of memory for the efficient execution of our application. All primitives and method calls are stored in the stack memory and all dynamically allocated objects are stored in the heap memory. Stack is a simple type of memory that uses Last-In-First-Out order(just like the stack data structure) whereas memory management is quite complex in the heap. The heap memory is further divided into Young Generation, Old Generation, and Permanent Generation. | https://www.studytonight.com/java-examples/java-stack-and-heap-memory | CC-MAIN-2022-05 | refinedweb | 1,214 | 54.73 |
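As a side note, both memory areas can be sized when launching the JVM. Assuming the standard HotSpot flags (and MyApp as a placeholder class name), a typical invocation looks like this:

```shell
# Set each thread's stack size to 512 KB
java -Xss512k MyApp

# Set the initial heap to 256 MB and the maximum heap to 2 GB
java -Xms256m -Xmx2g MyApp
```

Lowering -Xss makes a StackOverflowError easier to trigger, and lowering -Xmx makes an OutOfMemoryError easier to trigger, which can be handy when experimenting with the examples above.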
This tutorial shows how to build a data logging project with the ESP32.
Project Overview
Before getting started, let’s highlight the project’s main features:
- The ESP32 reads temperature from a DS18B20 sensor and requests the date and time from an NTP server.
- After completing these previous tasks, the ESP32 sleeps for 10 minutes.
- The ESP32 wakes up and repeats the process.
Parts Required
Here’s a list of the parts required to build this project (click the links below to find the best price at Maker Advisor):
- ESP32 DOIT DEVKIT V1 Board – read ESP32 Development Boards Review and Comparison
- MicroSD card module
- MicroSD card
- DS18B20 temperature sensor
- 10k Ohm resistor
- Jumper wires
- Breadboard
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Preparing the microSD Card Module
To save data on the microSD card with the ESP32, we use the following microSD card module that communicates with the ESP32 using SPI communication protocol.
Formatting the microSD card
When using a microSD card with the ESP32, you should format it first. Follow the next instructions to format your microSD card.
1. Insert the microSD card in your computer. Go to My Computer and right click on the SD card. Select Format as shown in figure below.
2. A new window pops up. Select FAT32, press Start to initialize the formatting process and follow the onscreen instructions.
Schematic
Follow the next schematic diagram to assemble the circuit for this project.
You can also use the following table as a reference to wire the microSD card module:
The next figure shows how your circuit should look:

Installing Libraries
Before uploading the code, you need to install some libraries in your Arduino IDE. The OneWire library by Paul Stoffregen and the Dallas Temperature library, so that you can use the DS18B20 sensor. You also need to install the NTPClient library forked by Taranais to make request to an NTP server.
Follow the next steps to install those libraries in your Arduino IDE:
NTPClient library
Uploading Code
Here’s the code you need to upload to your ESP32. Before uploading, you need to modify the code to include your network credentials (SSID and password). Continue reading to learn how the code works.
/*********
  Rui Santos
  Complete project details at
*********/

// Libraries for SD card
#include "FS.h"
#include "SD.h"
#include <SPI.h>

// DS18B20 libraries
#include <OneWire.h>
#include <DallasTemperature.h>

// Libraries to get time from NTP Server
#include <WiFi.h>
#include <NTPClient.h>
#include <WiFiUdp.h>

// Define deep sleep options
uint64_t uS_TO_S_FACTOR = 1000000;  // Conversion factor for micro seconds to seconds
// Sleep for 10 minutes = 600 seconds
uint64_t TIME_TO_SLEEP = 600;

// Replace with your network credentials
const char* ssid     = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

// Define CS pin for the SD card module
#define SD_CS 5

// Save reading number on RTC memory
RTC_DATA_ATTR int readingID = 0;

String dataMessage;

// Data wire is connected to ESP32 GPIO 21
#define ONE_WIRE_BUS 21
// Setup a oneWire instance to communicate with a OneWire device
OneWire oneWire(ONE_WIRE_BUS);
// Pass our oneWire reference to Dallas Temperature sensor
DallasTemperature sensors(&oneWire);

// Temperature Sensor variables
float temperature;

// Define NTP Client to get time
WiFiUDP ntpUDP;
NTPClient timeClient(ntpUDP);

// Variables to save date and time
String formattedDate;
String dayStamp;
String timeStamp;

void setup() {
  // Start serial communication for debugging purposes
  Serial.begin(115200);

  // Connect to Wi-Fi network with SSID and password
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.println("WiFi connected.");

  // Initialize the NTP client
  // (optionally call timeClient.setTimeOffset() to adjust for your timezone)
  timeClient.begin();

  // Initialize SD card
  if(!SD.begin(SD_CS)) {
    Serial.println("Card Mount Failed");
    return;
  }

  // If the data.txt file doesn't exist
  // Create a file on the SD card and write the data labels
  File file = SD.open("/data.txt");
  if(!file) {
    Serial.println("File doesn't exist");
    Serial.println("Creating file...");
    writeFile(SD, "/data.txt", "Reading ID, Date, Hour, Temperature \r\n");
  }
  else {
    Serial.println("File already exists");
  }
  file.close();

  // Enable Timer wake_up
  esp_sleep_enable_timer_wakeup(TIME_TO_SLEEP * uS_TO_S_FACTOR);

  // Start the DallasTemperature library
  sensors.begin();

  getReadings();
  getTimeStamp();
  logSDCard();

  // Increment readingID on every new reading
  readingID++;

  // Start deep sleep
  Serial.println("DONE! Going to sleep now.");
  esp_deep_sleep_start();
}

void loop() {
  // The ESP32 will be in deep sleep
  // it never reaches the loop()
}

// Function to get temperature
void getReadings(){
  sensors.requestTemperatures();
  temperature = sensors.getTempCByIndex(0); // Temperature in Celsius
  //temperature = sensors.getTempFByIndex(0); // Temperature in Fahrenheit
  Serial.print("Temperature: ");
  Serial.println(temperature);
}

// Function to get date and time from NTPClient
void getTimeStamp() {
  while(!timeClient.update()) {
    timeClient.forceUpdate();
  }
  // The formattedDate comes with the following format:
  // 2018-05-28T16:00:13Z
  // We need to extract date and time
  formattedDate = timeClient.getFormattedDate();
  Serial.println(formattedDate);

  // Extract date
  int splitT = formattedDate.indexOf("T");
  dayStamp = formattedDate.substring(0, splitT);
  Serial.println(dayStamp);
  // Extract time
  timeStamp = formattedDate.substring(splitT+1, formattedDate.length()-1);
  Serial.println(timeStamp);
}

// Write the sensor readings on the SD card
void logSDCard() {
  dataMessage = String(readingID) + "," + String(dayStamp) + "," +
                String(timeStamp) + "," + String(temperature) + "\r\n";
  Serial.print("Save data: ");
  Serial.println(dataMessage);
  appendFile(SD, "/data.txt", dataMessage.c_str());
}

// Write to the SD card (DON'T MODIFY THIS FUNCTION)
void writeFile(fs::FS &fs, const char * path, const char * message) {
  Serial.printf("Writing file: %s\n", path);
  File file = fs.open(path, FILE_WRITE);
  if(!file) {
    Serial.println("Failed to open file for writing");
    return;
  }
  if(file.print(message)) {
    Serial.println("File written");
  }
  else {
    Serial.println("Write failed");
  }
  file.close();
}

// Append data to the SD card (DON'T MODIFY THIS FUNCTION)
void appendFile(fs::FS &fs, const char * path, const char * message) {
  Serial.printf("Appending to file: %s\n", path);
  File file = fs.open(path, FILE_APPEND);
  if(!file) {
    Serial.println("Failed to open file for appending");
    return;
  }
  if(file.print(message)) {
    Serial.println("Message appended");
  }
  else {
    Serial.println("Append failed");
  }
  file.close();
}
In this example, the ESP32 is in deep sleep mode between each reading. In deep sleep mode, all your code should go in the setup() function, because the ESP32 never reaches the loop().
Importing libraries
First, you import the needed libraries for the microSD card module:
#include "FS.h"
#include "SD.h"
#include <SPI.h>
Import these libraries to work with the DS18B20 temperature sensor.
#include <OneWire.h>
#include <DallasTemperature.h>
The following libraries allow you to request the date and time from an NTP server.
#include <WiFi.h>
#include <NTPClient.h>
#include <WiFiUdp.h>
Setting deep sleep time
This example uses a conversion factor from microseconds to seconds, so that you can set the sleep time in the TIME_TO_SLEEP variable in seconds.
In this case, we’re setting the ESP32 to go to sleep for 10 minutes (600 seconds). If you want the ESP32 to sleep for a different period of time, you just need to enter the number of seconds for deep sleep in the TIME_TO_SLEEP variable.
// Define deep sleep options
uint64_t uS_TO_S_FACTOR = 1000000;  // Conversion factor for micro seconds to seconds
// Sleep for 10 minutes = 600 seconds
uint64_t TIME_TO_SLEEP = 600;
Setting your network credentials
Type your network credentials in the following variables, so that the ESP32 is able to connect to your local network.
// Replace with your network credentials
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";
Initializing sensors and variables
Next, define the microSD card SD pin. In this case, it is set to GPIO 5.
#define SD_CS 5
Create a variable called readingID to hold the reading ID. This is a way to get your readings organized. To save a variable value during deep sleep, we can save it in the RTC memory. To save data on the RTC memory, you just need to add RTC_DATA_ATTR before the variable definition.
// Save reading number on RTC memory
RTC_DATA_ATTR int readingID = 0;
Create a String variable to hold the data to be saved on the microSD card.
String dataMessage;
Next, create the instances needed for the temperature sensor. The temperature sensor is connected to GPIO 21.
// Data wire is connected to ESP32 GPIO 21
#define ONE_WIRE_BUS 21
// Setup a oneWire instance to communicate with a OneWire device
OneWire oneWire(ONE_WIRE_BUS);
// Pass our oneWire reference to Dallas Temperature sensor
DallasTemperature sensors(&oneWire);
Then, create a float variable to hold the temperature retrieved by the DS18B20 sensor.
float temperature;
The following two lines define an NTPClient to request date and time from an NTP server.
WiFiUDP ntpUDP;
NTPClient timeClient(ntpUDP);
Then, initialize String variables to save the date and time.
String formattedDate;
String dayStamp;
String timeStamp;
setup()
When you use deep sleep with the ESP32, all the code should go inside the setup() function, because the ESP32 never reaches the loop().
Connecting to Wi-Fi
The following snippet of code connects to the Wi-Fi network. You need to connect to Wi-Fi to request the date and time from the NTP server.
Serial.print("Connecting to ");
Serial.println(ssid);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
  delay(500);
  Serial.print(".");
}
Initializing the NTP client
Next, initialize the NTP client to get date and time from an NTP server.
timeClient.begin();
You can use the setTimeOffset(<time>) method to adjust the time for your timezone.
timeClient.setTimeOffset(3600);
Here are some examples for different timezones:
- GMT +1 = 3600
- GMT +8 = 28800
- GMT -1 = -3600
- GMT 0 = 0
Initializing the microSD card module
Then, initialize the microSD card. The following if statements check if the microSD card is properly attached and initialized.

SD.begin(SD_CS);
if(!SD.begin(SD_CS)) {
  Serial.println("Card Mount Failed");
  return;
}
uint8_t cardType = SD.cardType();
if(cardType == CARD_NONE) {
  Serial.println("No SD card attached");
  return;
}
Serial.println("Initializing SD card...");
if (!SD.begin(SD_CS)) {
  Serial.println("ERROR - SD card initialization failed!");
  return;
}
Then, try to open the data.txt file on the microSD card.
File file = SD.open("/data.txt");
If that file doesn’t exist, we need to create it and write the heading for the .txt file.
writeFile(SD, "/data.txt", "Reading ID, Date, Hour, Temperature \r\n");
If the file already exists, the code continues.
else {
  Serial.println("File already exists");
}
Finally, we close the file.
file.close();
Enable timer wake up
Then, you enable the timer wake up with the timer you’ve defined earlier in the TIME_TO_SLEEP variable.
esp_sleep_enable_timer_wakeup(TIME_TO_SLEEP * uS_TO_S_FACTOR);
Initializing the library for DS18B20
Next, you initialize the library for the DS18B20 temperature sensor.
sensors.begin();
Getting the readings and data logging
After having everything initialized, we can get the readings, timestamp, and log everything into the microSD card.
To make the code easier to understand, we’ve created the following functions:
- getReadings(): reads the temperature from the DS18B20 temperature sensor;
- getTimeStamp(): gets date and time from the NTP server;
- logSDCard(): logs the preceding data to the microSD card.
After completing these tasks, we increment the readingID.
readingID++;
Finally, the ESP32 starts the deep sleep.
esp_deep_sleep_start();
getReadings()
Let’s take a look at the getReadings() function. This function simply reads temperature from the DS18B20 temperature sensor.
sensors.requestTemperatures();
temperature = sensors.getTempCByIndex(0); // Temperature in Celsius
By default, the code retrieves the temperature in degrees Celsius. You can uncomment the following line and comment out the previous one to get the temperature in Fahrenheit.
//temperature = sensors.getTempFByIndex(0); // Temperature in Fahrenheit
getTimeStamp()
The getTimeStamp() function gets the date and time. These next lines ensure that we get a valid date and time:
while(!timeClient.update()) {
  timeClient.forceUpdate();
}
Sometimes the NTPClient retrieves the year 1970. To ensure that doesn't happen, we force the update.
Then, convert the date and time to a readable format with the getFormattedDate() method:
formattedDate = timeClient.getFormattedDate();
The date and time are returned in this format:
2018-04-30T16:00:13Z
So, we need to split that string to get date and time separately. That’s what we do here:
// Extract date
int splitT = formattedDate.indexOf("T");
dayStamp = formattedDate.substring(0, splitT);
Serial.println(dayStamp);
// Extract time
timeStamp = formattedDate.substring(splitT+1, formattedDate.length()-1);
Serial.println(timeStamp);
logSDCard()
The logSDCard() function concatenates all the information in the dataMessage String variable. Each reading is separated by commas.
dataMessage = String(readingID) + "," + String(dayStamp) + "," + String(timeStamp) + "," +
              String(temperature) + "\r\n";
Note: the "\r\n" at the end of the dataMessage variable ensures the next reading is written on the next line.
Then, with the following line, we write all the information to the data.txt file in the microSD card.
appendFile(SD, "/data.txt", dataMessage.c_str());
Note: the appendFile() function only accepts variables of type const char for the message. So, use the c_str() method to convert the dataMessage variable.
writeFile() and appendFile()
The last two functions, writeFile() and appendFile(), are used to write and append data to the microSD card. They come with the SD card library examples, and you shouldn't modify them.
To try other examples to work with the microSD card, go to File > Examples > SD(esp32).
Uploading the Code
Now, upload the code to your ESP32. Make sure you have the right board and COM port selected.
Demonstration
Open the Serial Monitor at a baud rate of 115200.
Press the ESP32 Enable button, and check that everything is working properly (the ESP32 is connected to your local network, and the microSD card is properly attached).
Note: If everything is wired properly and you keep getting an error initializing the SD card, powering your microSD card module with 5V might solve the issue.
Let the ESP32 run for a few hours to test if everything is working as expected. After the testing period, remove the microSD card and insert it into your computer. The microSD card should contain a file called data.txt.
You can copy the file content to a spreadsheet on Google Sheets for example, and then split the data by commas. To split data by commas, select the column where you have your data, then go to Data > Split text to columns… Then, you can build charts to analyse the data.
Wrapping Up
In this tutorial we’ve shown you how to log data to a microSD card using the ESP32. We’ve also shown you how to read temperature from the DS18B20 temperature sensor and how to request time from an NTP server.
You can apply the concepts from this tutorial to your own projects. If you like ESP32 and you want to learn more, make sure you check our course exclusively dedicated to the ESP32: Learn ESP32 with Arduino IDE.
You might also like reading other articles related to the ESP32:
- ESP32 vs ESP8266 – Pros and Cons
- Build an All-in-One ESP32 Weather Station Shield
- ESP32 Publish Sensor Readings to Google Sheets
- ESP32 MQTT – Publish and Subscribe with Arduino IDE
Thanks for reading.
109 thoughts on “ESP32 Data Logging Temperature to MicroSD Card”
Is it possible to use esp 8266
Hi Karanbir.
The code for this project is not compatible with the ESP8266.
Regards,
Sara 🙂
It was very useful, Roy. Thanks.
I hope, you work on projects that connecting esp8266 or esp32 to LABVIEW (Control & Monitor).
Hey Rui, how are you?!
Thankx for sharing your projects, they are clear and educationals!
I´m not working with the NTPClient, but i has add in my project! hehehe
Only one point… in the NPT offset, the example you show, for plus eigth hours, maybe you writing wrong, because the paramter is the hours in seconds, is this?
I´m live in Brazil, and here we have -3 hours. So im use 10800?
Rui, very thankyou for showing the code, was very clarance efor me!
Hi Carlos.
Yes, you are right. The offset for the eight hours was wrong. We’ve already updated that.
Thanks for noticing and for reading our projects.
Regards,
Sara 🙂
Great tutorial, and nice to see deep sleep illustrated. It would be great to show how to do battery powered projects. I have a couple of related questions on this project– on this dev board (or other dev boards with a LiPo charger) is the USB-UART always powered on via the same 3.3V ESP module supply? (An unused USB-UART could drain a battery.) In deep sleep what is the power consumption, i.e. how long can a battery last?
BTW, I would suggest a sleep calculated to wake at the next 10 minute real-clock interval rather than 600s, so the some seconds to make a WiFi + NTP connection doesn’t shift the interval. But that’s a pretty straightforward change in your tutorial code.
For a real lower power application with the ESP32, you shouldn’t use an ESP32 Development board. It’s recommended to use an ESP32 chip directly on your own PCB, because any dev board has many unnecessary components draining the batteries (even in deep sleep).
Nice blog post. I really like the way you explained all the things. Thank you for sharing.
Thanks !
I have to use a battery and it seems that the largest drain is getting the network time.
have you tried to get it once every 24hours ? or maybe every 6 hours ?
Hi Dave.
I haven’t tried with 24 hours interval.
But you can edit the deep sleep time in the code. It doesn’t have to be every 10 minutes.
Getting the network time consumes a lot of power because the ESP32 needs to connect via Wi-Fi.
To get very low power consumption during deep sleep, you need to use the bare ESP32 chip instead of the full development board (as we do here, but with the ESP8266 – only 7uA during deep sleep)
Regards,
Sara
more on the delayed fetch of time
Can the ESP32 re-set it’s internal time so that it can wake and datalog every 5 minutes, and once an hour reset the time and post error?
jan5 6:06:02 21.3C
jan5 6:11:122 21.5C
jan5 6:16:24 21.3C
jan5 6:21:33 20.9C
jan5 6:25:04 21.3C time adjust 2951ms
Hi Rui and Sandra, this is a great guide.
Can this code be used for ESP8266? And how can I use it in this case?
Please kindly help me.
Best Regards,
Hi.
This code was built for ESP32. It won’t work in ESP8266.
We don’t have this project for ESP8266. But we have some resources that may help you:
– Deep Sleep with ESP8266
– DS18B20 with ESP8266
I hope this helps.
Great work with your blog.
Regards,
Sara
Hi Rui and Sandra, i have some problem with sd card. Com port show this massage::10312
load:0x40080400,len:6460
entry 0x400806a4
Card Mount Failed
Can u help me with this?
Hi Andrey,
Please double check that the microSD card module is wired correctly to the ESP32. Make sure you are using a microSD card formatted as FAT32.
Before continuing with the project, can you try the microSD card example available on Arduino IDE? Go to File > Examples > SD(esp32) > SD_Test.
Upload that code and see how the microSD responds.
Regards,
Sara
Hi Sara, microSD card module is wired correctly and its work with esp12e module, but didnt work with esp32. The SD_tesr example dont work too with same result.
Hey Andrey, i had the same problem. The datasheet for my SD module says something about VCC-in 3.3v/5v, i used it with the 3.3v output of my esp32, which didn’t work. I then used the 5v and voila, everything worked! maybe it can help you!
Hi,
It would be awesome if one could transfer the log file over Wi-Fi (internet even), without removing the SD card !
Could you provide some pointers on how to achieve this ?
Hello, unfortunately we don’t have any tutorials on that exact subject, but it’s definitely possible to do it… Thanks for asking,
Rui
Hi, I have the same question of Royce J Pereira… a way to save the file_data from SD (ESP32) to windows folder. Removing the SD card is an operation for a caveman!!! 😀 😀
Have you any suggestion? libraries? link?
Thanks for your support and for the detailed and usefull blog….IT’S FANTASTIC!!
I forgot to add an important information to my previous post: the way to save the file from SD and windows folder has to be through wi-fi connection to private network.
If it is not possible (or too complicated), please consider that the ESP32 it is not connected to pc by USB, only by wi-fi network.
Thanks in advance.
regards
Francesco
Hi Francesco.
Your suggestion is a great idea.
However, at the moment I don’t have any solution for that. That is quite a big project.
I don’t know if there are any libraries that can do that, I haven’t search much about the subject.
Sorry that I can’t help much and thank you for following our work.
Regards,
Sara
Hi, thanks for reply.
I surfed in net to find solutions….and I found it…
forum.arduino.cc/index.php/topic,93502.msg707104.html#msg707104
I could use the FTP way. The library WIFI should permit to choose the right address and the right gate for a FTP server. Then with ftp command I can copy the file from sd to server… it seems good… what do you think? 😉
Bye. Francesco
Should this project work with the newer AI Thinker boards, that have the built in card reader (and camera, but we do not always have to use it)? I imagine that would save a lot of wiring.
Hi John.
I haven’t tried it. But I think it should work by doing the proper pin assignment in the code.
Regards,
Sara
thnak you
HI,
I cannot understand why my post is disappearing while it is “awaiting moderation”.
Anyway this is my 3rd attempt.
I am always getting ‘Card Mount Failed’.
But the in built SD_test example works fine.
Why could this be happening.
If you try to submit multiple comments very quickly, it will probably block you and flag the comment as spam.
I’m not sure… That usually means that you either need to double-check your wiring or if you have formatted the MicroSD card in the properly.
Hi, thanks for the response.
But as I said in my comment, The inbuilt SD_test example works fine .
(i.e. File > Examples > SD(esp32) > SD_Test.)
Only the code from this project is failing.
The wiring in both cases is the same!
Thank you!
Unfortunately I’m not able to reproduce that error on my end… I’ve just tried the example again and it works fine for me.
Yes.
Strangely, after retrying it the next day, it worked fine with both programs, without changing a thing…
Thank you for taking the trouble to check!
——-
(Unrelated) I have observed the following in your code:
In line 81:
SD.begin(SD_CS);
if(!SD.begin(SD_CS)) { ….
Is the 1st SD.begin necessary ?
Again in line 91 once again we have :
Serial.println(“Initializing SD card…”);
if (!SD.begin(SD_CS)) { …
This code will never be reached if already failed in line 82 above.
Otherwise, SD card is already initialized in line 81/82, so I think no need for this again?
And let me thank you for all these projects and the courses. I’m learning a lot! 🙂
I’m glad it is working now.
Yes, you are right. We have to fix the code.
Thank you for pointing that out.
Regards,
Sara
Hello,
First, an excellent tutorial and guide.
However, I have the same problem with SD card as reported by Andrey
on April 3 2019 — “Card Mount Failed” error.
I have checked that wiring is correct.
The microSD card is formatted as FAT32.
I can write/read files to it when plugged into PC.
I have also tried File > Examples > SD(esp32) > SD_Test: That gives same error.
( Contrary to Royce, May 1, for whom the test example worked )
I have tried suggestion by Andi (Apr 11), connecting SD module VCC to 5V: Still the same error.
I have tried a second SD card module, also same error.
Andrey’s thread ends without any solution reported. I wonder if he found a fix?
In Royce’s case the problem vanished the next day! I don’t hold much hope
for that to happen for me.
So for now I am stuck! Any suggestions you can offer?
Many thanks, Ken.
Hi Ken.
If you’ve double-check the wiring, tried an external power source and you’ve tried with 5V and nothing worked, I don’t know what can be the problem.
I’m sorry that I can’t help much, the example works just fine for me.
Regards,
Sara
Hello Again,
To follow-up on my comment of 18 May about “Card Mount Failed” error.
I have connected my SD card to an ATMega328, and uploaded
File>Examples>SD>ReadWrite
using a USBtiny programmer.
This work just fine; I can write to and read from the SD card.
So there is nothing wrong with my card or connections.
But when I use the ESP32 DoIt Dev Board with
File > Examples > SD(esp32) > SD_Test
I consistently get “Card Mount Failed” error.
Compiling the ATMega328 sketch, the following is reported:
Using library SPI at version 1.0 in folder: C:\Program Files (x86)\arduino-1.8.9\hardware\arduino\avr\libraries\SPI
Using library SD at version 1.2.3 in folder: C:\Program Files (x86)\arduino-1.8.9\libraries\SD
But compiling the ESP32sketch, the following is reported:
Multiple libraries were found for “SD.h”
Used: C:\Users\Ken\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.2\libraries\SD
Not used: C:\Program Files (x86)\arduino-1.8.9\libraries\SD
Using library SPI at version 1.0 in folder: C:\Users\Ken\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.2\libraries\SPI
Using library SD at version 1.0.5 in folder: C:\Users\Ken\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.2\libraries\SD
Using library FS at version 1.0 in folder: C:\Users\Ken\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.2\libraries\FS
So the ESP32 version uses different code for the SD card (as would be expected). This strongly suggests that the problem is with the code.
It seems from the Comments that quite a lot of your followers have this problem, and some have managed to find a fix. But for now I am completely stuck, and can go no further with my project. Perhaps you can suggest another forum that might be able to help?
With thanks, ken.
Hello Again,
Solution found to my “Card Mount Failed” error.
After many hours trying to fix this problem, as a last resort
I tried using another ESP32 DoIt Dev Board (I had purchased two
boards). And it worked! The boards are new from RS, so thought
it very unlikely that one would be faulty, especially as the faulty
board seemed to work OK in other respects, eg. connecting successfully
to WiFi.
As always, the problem is with the last thing you try. Now it’s
onwards and upwards!
Thanks, Ken.
Hi Ken.
I’m glad you solved your problem.
So, it was a problem in your previous ESP32 board, right?
Regards,
Sara
Hi Sara,
Yes, that’s right — faulty board. I have swapped them back and
forth a few times and one is OK, the other always fails.
Very unexpected, but maybe I was just unlucky.
Regards, ken.
Hi Ken.
Yes, it is very unexpected.
I’ve worked with several ESP32 boards from different vendors and I’ve never had a faulty one.
Maybe it is bad luck 🙁
Regards,
Sara
A nice add-on will be save a file per day (19-06-09.txt) and transfer old files to a webservers php page.
Yes.
That’s a great idea.
Thanks 🙂
Hi,
Ken here again.
First, apologies for such a long comment.
Following my earlier comments, ending on 1 June, everything was working just fine, and continued to work for a couple of weeks. But now, after doing other unrelated stuff for about a week, I have gone back to the data logging project, and a new problem has appeared! As far as I’m aware I have not changed anything significant, in particular, I have changed nothing in the code (anyway, problem is not the code)
I have selected board “ESP32 Dev Module”
and programmer “Arduino as ISP”
(These are what I used when it was working)
Compile is OK.
I then attempt Upload Using Programmer.
I get following error message:
java.lang.NullPointerException
at cc.arduino.packages.uploaders.SerialUploader.uploadUsingProgrammer(SerialUploader.java:314)
at cc.arduino.packages.uploaders.SerialUploader.uploadUsingPreferences(SerialUploader.java:89)
at cc.arduino.UploaderUtils.upload(UploaderUtils.java:82)
at processing.app.SketchController.upload(SketchController.java:736)
at processing.app.SketchController.exportApplet(SketchController.java:703)
at processing.app.Editor$DefaultExportAppHandler.run(Editor.java:2125)
at java.lang.Thread.run(Thread.java:748)
To eliminate code issues, I have tried an absolute minimum sketch, as follows:
void setup() {
}
void loop() {
}
Obviously this does nothing, but it compiles OK.
But on Upload Using Programmer I get same error message:
If I unplug the board I get exactly same error. This implies the problem occurs before it even get as far as trying to communicate with board.
This happens when I select any ESP32 board, but does NOT happen when I select any Arduino board, when I just get the expected “programmer is not responding” error.
I have uninstalled Java and re-installed Java release 1.8.0_211
I have uninstalled Arduino IDE, deleted folder “C:\Users\Ken\AppData\Local\Arduino15”,
and then re-installed latest version 1.8.9 (which is what I have been using).
I then install the ESP32 Board in Arduino IDE (Windows instructions) exactly
as in the link you give to
Following this, no change, still get same error.
I have uninstalled Arduino IDE again, deleted folder “C:\Users\Ken\AppData\Local\Arduino15”,
and then re-installed OLDER version 1.8.7
And again, no change, still get same error.
All the above indicates that there is nothing wrong with the sketch, and nothing wrong with the board, and to repeat — it all worked fine about a week ago, and I am not aware of having changed anything significant.
I have spent several hours Googling “java.lang.NullPointerException”.
There are many links, mostly just confusing and not offering a fix.
The main message seems to be that there is a mistake in the code, and indeed the error message seems to be pointing at errors at specific lines
in seven different .java files (see above).
But I have done a search of my whole C drive and cannot find any of these .java files.
So I’m stuck again!
Any suggestions on what I should do?
Thank you, Ken.
Hi,
Me again!
I have found what causes problem:
I have been trying to uploading the sketch with “Upload Using Programmer”.
If I use “Upload”, NOT “Upload Using Programmer”, everything is fine.
Strange thing is, I’m fairly sure I used “Upload Using Programmer” earlier, before this latest round of problems. That would seem correct since
there is no bootloader on the ESP chip.
Anyway, grovelling apologies for cluttering your comments with this stuff.
Feel free to delete this and my long previous comment.
Thanks, Ken.
Hi!
I’m saving data on micro sd with my Esp32 (Wemos WIFI & Bluetooth Battery), and i have a problem, while the esp it´s sleeping, data inside txt file continue writing but the same data, and does this until Esp wakes up again, and send the new data, then happens the same, data it´s repeat again on the txt file even though the esp it´s sleeping, and happens until the esp send data again.
Do you know how i can solve it?? please, thanks!!!
Hi Duvan.
What example are you using that makes this happen?
Regards,
Sara
Hi Sara,
I´m using the same example of this tutorial, just with another sensors and without take network time, but just change is in the measurements, write data and sleep is equal to your example, so, each 5 seconds last data is written again while the Esp is sleeping and wake up and send new data, and i don´t know how to solve it!
¿Do you know how solve it? or maybe this happened to you?
Thanks!!
Duvan!
I dont know if someone knows more about SD card mounting than me but I am having trouble getting the card to mount after coming out of deep sleep. It gives the error “sd card mount fail”
Hi.
Do you get that error only after deep sleep? In the first attempt it initializes ok?
Hi,
I am trying to implement a card reader for an ESP32 project and am not making much progress in initialising the card.
I have done all of the above example and have slightly modified it to run without sleep, wifi and the temp sensor to just test the card element. Pin connections are as per your example. I am using a DOIT DEVKIT ESP32.
I am getting card mount failure when I run the application. The card is formatted with FAT32 and seems to work fine as a file system on a PC.
(…)
Regarding my issue re initialising the SD card, I have now resolved it.
It needed to be connected to 5v pin to work not 3.3v
David
Hi David.
I’m glad you’ve solved your issue.
Some SD card modules need 5V to operate instead of 3.3V. I’ll add a note about that in the tutorial.
Regards,
Sara
Hi Sara,
I was trying to create a unique filename to log to, so I was creating a string rather than data.txt. However this does not seem to work with the append and write commands. Is there a way to make this more general with filenames?
David
David,
The filename appears in 3 places in the code. Did you change all 3 to the new name?
Does your new name include the “/” as the 1st character?
Dave
Hi, thanks for your “amazing” site.
I have a question for expansion of that code.
Can we use an RTC module (like DS3231), in order to get the time for data loggin in case that the internet connection lost (so no NTP info available)?
Can ESP32 handle the SD module + RTC DS3231 + DS18B20 together? What is the PIN connection and High/Low set?
BR
Hi Elias.
Yes, you can do that.
The SD card module and the RTC work with SPI communication protocol. So, use the SPI pins and a different chip select pin for each of them.
You can connect the DS18B20 to any GPIOs you want (apart from some exceptions:)
At the moment, we don’t have any guide dor RTC with the ESP32. But we have one for Arduino (it shouldn’t be much different):
We also have something similar to what you want to do using Arduino:
I hope this helps with your project.
Regards,
Sara
Hi, I’ve been having a great time reading through the your projects and working through the ESP32 course.
I recently purchased some card modules on Amazon UK and they are yet to arrive but I have seen that a number of people have had voltage problems so I investigated. There are some great photographs of the PCBs on Amazon UK so it wasn’t difficult.
It seems that many of the modules have 5V input and the 5V in all cases seems to go to a 3V3 regulator but very few modules have a 3V3 input pin. This is a real nuisance because the modules all seem to run on 3V3 but most don’t have a 3V3 input pin just a 5V pin. This means that the modules with no way to bypass the regulator are likely to misbehave if fed with 3V3.
The work around is to solder a fine wire to the output leg of the regulator but a steady hand will be necessary.
This is important to me because I am working on a project that will eventually be powered directly off a LiFePo4 battery that runs at around 3.2V so avoiding the need for a regulator.
I will do some research into card modules with a 3V3 input pin – this will save a lot of headaches!
Hi Richard.
Thanks for sharing that.
We never had issues with the microSD card modules. However, many of our readers have reported issues that may be precisely because of that.
Thanks for sharing and let us know the results of your research.
Regards,
Sara
Hi
Great tutorial.
I am caught between the choice of saving data on internal memory (4MB) of ESP32 module and saving it on SD card. Space on ESP32 module is not a problem as I will log data for only 24 hours every 5 minutes, but I am afraid of crossing the limit of 100,000 for write/erase cycles in (100000)/(24*60/5)= 347.22 days.
Considering the fact that I will use wear levelling library and FATFS library to store files, how realistic is my fear? Is it better to use SD card for data logging?
Dear Zeni, I have use SD for logging the last 2 years (several controlers). I have install up to 80 controlers with 4 temprature sensors each, and data writing every 5 mins (4 measurments with Controler ID and datetime). Never had a problem (except once with a problematic SD card).
Greetings, Sara! I love what you and Rui do! These tutorials are exceptional and I have many projects I have successfully made because of your tutorials and I so far have purchased two of your books.
However, I am having trouble with this example in that; although you mention above that there is no need to install the FS.h library as it installs by default, when I try to compile the example, I get an error, I get the following message:
sketch_oct29a:7:10: error: FS.h: No such file or directory
I have actually tried several other examples on the internet (the application I am working towards is to actually be able to read a microSD; not write to one) and several other examples use FS.h as well, with the same error. I searched my libraries folder with no luck (o: I struggle with Github, but found some fs.h documentation there, but there’s no download button on the page like on many Github pages.
Any suggestions? Did I miss something somewhere? (o:
Oh, and I do have an SD library folder, but no SPI folder either, which I suspect I will need as well. Your help is greatly appreciated.
Well. My bad. I downloaded the ESP32 core library and found fs.h and spi.h, loaded them into my Arduino UI and still no luck.
Then I realized, after reading the compiler errors that I had inadvertently switched to an Arduino Nano. Switching back to ESP32 I got the initializing the card error. Switching the power supply for the SD card to 5v, it worked!
Sorry to have bothered you (o: I’m having other issues reading files (I can recognize the files with the generic SD test program, which can create, write and read its own files) but the test program cannot read the contents of the files. I’ll keep working! Cheers!
Hi.
I’m glad that you get it working.
Since you’re one of our customers, if you need further help, you can use the RNTLAB forum:
Regards,
Sara
Hi Sara,
I am trying to use an SD card reader with TTGO T-display. I am using the standard library with a set of alternate SPI pin definitions as the display uses the hardware pins. Unfortunately I get a mount error, meaning that my pin definitions are not being processed. I notice that this seems to be a common issue on many of the ESP32 forums, with the only solution so far being to use some alternate library (Mysd.h). You helped me with my problem using alternate I2C pins with the BMP280 sensor, that necessitated using the Sparkfun library instead of the Adafruit library. Maybe a similar problem exists with SPI and certain ESP32 devices? What experience have you or Rui had with this issue?
how to keep the time correct after turning off the wifi connection ……….
please, make a tutorial on how to send the .txt or .csv data that we get to mysql database
You have the answer in the example 7 in:. With the example of the library SDWebserver.h is exactely you are looking for.
Regards.
Hi,very intresting, but when i initializing SD(sd.begin(cs_pin)), i get error: [E][sd_diskio.cpp:739] sdcard_mount(): f_mount failed 0x(3). Maybe you know what is it?
Hi.
What CS pin are you using?
Regards,
Sara
Hi Sara,
Sketch would not Compile ‘getFormattedDate’ No such class in NTPClient.
Went to NTPClient examples & found ‘getFormattedTime’ & used this instead. Sketch now complies.
Keep up the good work.
Alan
Hi.
You were probably using the wrong version of the library.
We are using the NTPCLient library forked by Taranais:
Regards,
Sara
Hi,
Thanks Sara, it now complies OK.
Alan
Just wondering if you are using the 36 pin or the 30 pin ESP32 module, the link shows it is 30 pin but the example shows using the 36pin ESP32 module.
Hi Fernando.
The tutorial works regardless of the number of pins on your board.
You just need to connect the module to the right GPIOs.
Regards,
Sara
Thanks Sara, I will give it a try
Hi
I’m using some software to graph the output from this but the software expects date and time to be combined eg YYYY:MM:DAY Time.
This project splits the date and time into two comma seperated fields.
Can they be combined?
Thank you
Perfect, but I spend 1 day by ERROR: “Card Mount Failed”, to get working on ESP32, please fix:
//#include <SPI.h>
#include “SPI.h”
I did it, Great tutorial !
I need help to interface a microSD card module to ESP32 on pins 12(), 13(), 14() and 15() using arduino platform.
I have tried some standard solutions available on NET but neither, has worked.
Hi.
This explains how to use custom pins with the ESP32 and SD card:
I hope this helps.
Regards,
Sara
Hi! I was working on the tutorial, but for some strange reason when wanting to compile it returns the following error:
variable or field ‘writeFile’ declared void
void writeFile(fs::FS &fs, const char * path, const char * message){
Do you have any idea what it might be?
something to expand, is that I base myself on what you did and on the example that the library brings by default 🙁
so the method is the following ();
} | https://randomnerdtutorials.com/esp32-data-logging-temperature-to-microsd-card/?replytocom=452788 | CC-MAIN-2021-25 | refinedweb | 6,908 | 66.13 |
Hello there I am trying to execute the a simple .java based application in bluemix and its basically a calculator, but unfortunately I am getting the ouput as below. I tried to access the application by accessing the URL: asifcmpe272calc.ng.bluemix.net in browser but I couldn't get the application up and running , can someone please let me know as to where I am going wrong thanks
====================================================================================================== asif@asif-Vostro1310:~/project/assignment4/asifcmpe272calc/asifassignment4$ ./cf push Asif_Calc.java -m 512m Using manifest file /home/asif/project/assignment4/asifcmpe272calc/asifassignment4/manifest.yml
Creating app Asif_Calc.java in org myemailaddress123@gmail.com / space dev as myemailaddress123@gmail.com... OK
Using route asifcmpe272calc.ng.bluemix.net Binding asifcmpe272calc.ng.bluemix.net to Asif_Calc.java... OK
Uploading Asif_Calc.java... Uploading from: /home/asif/project/assignment4/asifcmpe272calc/asifassignment4/webStarterApp.war 50.6K, 8 files OK
Starting app Asif_Calc.java in org myemailaddress123@gmail.com / space dev as myemailaddress123@gmail.com... OK -----> Downloaded app package (976K) -----> Downloading IBM 1.7.0 JRE from (0.0s) Expanding JRE to .java (1.6s) Downloading from /icap/jazz_build/build/Core_Managed/build.image/output/wlp/com.ibm.ws.liberty-8.5.5.1-201402080619.tar.gz ... (0.0s). Installing archive ... (0.9s). -----> Uploading droplet (93M)
0 of 1 instances running, 1 starting 0 of 1 instances running, 1 starting 0 of 1 instances running, 1 starting 1 of 1 instances running
App started
Showing health and status for app Asif_Calc.java in org myemailaddress123@gmail.com / space dev as myemailaddress123@gmail.com... OK
requested state: started instances: 1/1 usage: 512M x 1 instances urls: asifcmpe272calc.ng.bluemix.net
state since cpu memory disk
============================================================================================ OUTPUT(IN BROWSER) WHEN I TRY TO ACCESS THE URL: asifcmpe272calc.ng.bluemix.net
Welcome to BlueMix! This application is powered by WebSphere Liberty Name Value OLDPWD
/home/vcap/app
VCAP_SERVICES
{}
TMPDIR
/home/vcap/tmp
SHLVL
1
JAVA_HOME
/home/vcap/app/.java/jre
WLP_USER_DIR
/home/vcap/app/.liberty/usr
VCAP_APP_PORT
63605
PATH
/bin:/usr/bin
VCAP_APP_HOST
0.0.0.0
IBM_JAVA_OPTIONS
-Xshareclasses:name=liberty-%u,nonfatal,cacheDir="/home/vcap/app/.liberty/usr/servers/.classCache",cacheDirPerm=1000 -XX:ShareClassesEnableBCI -Xscmx60m -Xscmaxaot4m
USER
vcap
WLP_OUTPUT_DIR
/home/vcap/app/.liberty/usr/servers
PWD
/home/vcap/app/.liberty/usr/servers/defaultServer
/home/vcap/app
PORT
63605
INVOKED
.liberty/bin/server
VCAP_APPLICATION
{ "instance_id": "8542242dd260477d84bf9b4c4730a422", "instance_index": 0, "host": "0.0.0.0", "port": 63605, "started_at": "2014-03-06 07:17:37 +0000", "started_at_timestamp": 1394090257, "start": "2014-03-06 07:17:37 +0000", "state_timestamp": 1394090257, "limits": { "mem": 512, "disk": 1024, "fds": 16384 }, "application_version": "d929472d-432b-44c4-a3ce-56d65dd18766", "application_name": "Asif_Calc.java", "application_uris": [ "asifcmpe272calc.ng.bluemix.net" ], "version": "d929472d-432b-44c4-a3ce-56d65dd18766", "name": "Asif_Calc.java", "uris": [ "asifcmpe272calc.ng.bluemix.net" ], "users": null }
IBM_JAVA_COMMAND_LINE
/home/vcap/app/.java/jre/bin/java -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=./.buildpack-diagnostics/killjava -Xtune:virtualized -Xmx384M -javaagent:/home/vcap/app/.liberty/bin/tools/ws-javaagent.jar -jar /home/vcap/app/.liberty/bin/tools/ws-server.jar defaultServer
_
.liberty/bin/server
MEMORY_LIMIT
512m
X_LOG_DIR
/home/vcap/app/.liberty/usr/servers/defaultServer/logs
X_CMD
/home/vcap/app/.java/jre/bin/java
Your command line invocation is taking Asif_Calc.java as the desired application name, not the file to upload. In any case, you would need to push the compiled class file rather than the source. What is in the /home/asif/project/assignment4/asifcmpe272calc/asifassignment4/webStarterApp.war file that it is uploading? Was it what you meant to upload?
Hello Nitin thanks for the feedback ... I checked the contents of the Readme.txt and found this line "This WAR file is actually the application itself. It is the only file that'll be pushed to and run on the BlueMix cloud. Every time your application code is updated, you'll need to regenerate this WAR file and push to Bluemix again. See the next section on detailed steps." but to be frank I really don't know whether my Asif_Calc.java is part of this .war file ..I opened the .war file and found nothing relevant to Asif_Calc.java so please let me know what would be the next step for your reference here is the git url of the code
Hi Nitin could you please let me know how to regenerate this .WAR file again with the Asif_Calc.java code ..what are the steps for it ..since you mentioned to SEE THE NEXT SECTION ON DETAILED STEPS , do you mean the jazzhub tutorial sorry about this I am a beginner..thanks
Answer by Richard Johnson (3602) | Mar 06, 2014 at 11:54 PM
Hi
I looked carefully at what you are doing, and I think there are some confusions in how you're trying to build and deploy an application. It looks like you did the following sequence:
So let me try to explain what's wrong with that. The downloaded starter application zip file (steps 2 and 3 above), contains some pieces of information about the starter application, including the source code, as well as the actual application itself (the WAR file). If you wanted you could re-deploy that exact same WAR file, using the command that was shown in the getting started: "cf push asifcmpe272calc -p webStarterApp.war". If you updated that war file, and ran that command, then the updated application would replace the original one that is already running at
But it seems like you are not familiar with Java Enterprise Edition (JEE) programming, so instead of trying to modify the WAR file application (by creating new code and compiling it) you instead just tried to push your Asif_Calc.java file. You cannot just push a Java source file like that - you must push a JEE application in the format of a WAR file.
A couple of other issues in what you did..
The 'cf push Asif_Calc.java -m 512Mb' command you used is incorrect. I think I know what you were trying to do, but the format of the cf push command should be 'cf push <name-of-application> -m 512Mb', and then the command will start running, and will find any applications in your directory and push those. If you look in the log you'll see that it therefore interpreted 'Asif_Calc.java' as the name of the application, and then it found the WAR file in the directory and tried to upload that. Your Asif_Calc.java file was not used at all.
The other issue I see, is that the Asif_Calc.java code appears to be a standalone Java application for running on a desktop, not a JEE web application for running in a JEE web application server like WebSphere Liberty on BlueMix. For example the code seems to have Java Swing UI code, which would be for the desktop, not for a web application.
In summary: - You need to be familiar with building and editing JEE applications if you want to run them on the JEE WebSphere Liberty servers in BlueMix - Once you have created your own WAR application, you need to understand the 'cf' command line syntax to know how to correctly "push" it up to BlueMix. For example you could do 'cf push AsifFirstApplication -p myapp.war' if you had created a JEE application WAR called myapp.war
I hope this helps you! Richard
Hi Richard sincere thanks for the feedback and you are right with what I am doing.. well a friend of mine advised about going this way with a native application and its clearly a bad idea to relate 2 irrelevant things. I am sorry for this. But now I have created the new application - .java - Liberty based and I am planning to replace the index.html file's content with that of Javascript based calculator. Now this javascript based calculator had the extension of ".htm" so I am trying to run this code instead of the original index.html ...please let me know if this approach works thanks for your help
Sorry here is the code
Hi
I looked at your latest code repository. What I see is that you appear to have downloaded the Java DB web starter application zip file. This contains the application (libertyDBApp.war) and also a set of other files like readmes, source code, etc.
It then looks like you tried to alter the index.html file to your own one. Your new index.html file appears to be client side Javascript code, which means the web browser downloads the file, and then all the code actually runs inside the web browser, on your local machine. You don't actually need a full web application server (like Liberty) for doing something that simple, but it will work if you do it properly - WebSphere Liberty can host static files like that HTML, and any image files, etc.
The main problem you have though, is that you need to compile and package your application into a new WAR file. A WAR file is what you actually deploy to BlueMix.
The quickest way to get what you are doing working, would be to unzip the existing libertyDBApp.war to a new directory, replace the index.html with your own one, then zip it back up to a new war file - maybe AsifApp.war.
Once you have done that, you just need to do a 'cf push AsifApp -p AsifApp.war' from the directory that contains the WAR file (after you have first done a cf login). That will deploy your new WAR file (containing your index.htm) at.
Richard
Answer by SteveKinder (566) | Mar 06, 2014 at 02:25 PM
So the calculator app, I think I found here:
I think the problem is you are specifying the root URL, which has your domain, but you need to put the right context behind it to invoke your servlet. In the getobjects example the use:
I tried poking around to try and guess at what you might have specified in the web.xml but it might be easier, if you crack open the WAR file and find the web.xml and see if you have a context root and/or servlet mapping.
The name should have the form:
Hi Steve I didn't specify anything in WAR package and it was downloaded as it is from the site help as below:
Welcome to Java Web Starter application! 11:47 AM
This sample application demonstrates how to write a Java Web application (powered by WebSphere Liberty) and deploy it on BlueMix.
Install the cf command-line tool. Download the starter application package Extract the package and cd to it.
Connect to BlueMix:
cf api
Log into BlueMix:
cf login -u getasif@gmail.com cf target -o getasif@gmail.com -s dev
Compile the JAVA code and generate the war package using ant.
Deploy your app:
cf push asifcmpe272calc -p webStarterApp.war
Access your app:
You can also use JazzHub to deploy this application to BlueMix. Learn more
======================================================================== and then I followed the link :
to upload the files from command line infact just pushing the WAR file doesn't make sense since I don't know how to integrate my source code : Asif_Calc.java with WAR file ..I am an absolute beginner. Thanks
Answer by SteveKinder (566) | Mar 06, 2014 at 07:41 AM
Just a quick thought, do you specify a context root for your application?
Hello SteveKinde to be frank I am just a newbie, absolute newcomer and I have no idea about "specify a context root" phrase, could you please explain to me more about it so that I can get more information on it. And the application that I would like to run on Bluemix is a simple calculator: a single .java file ..please check the github url : "" for the code and file named Asif_Calc.java
I am sorry I have not accepted this answer and by mistake I clicked on the "Accept this answer" link
No one has followed this question yet.
import "org.cloudfoundry.runtime.env.* " need which lib? 1 Answer
How to connect java Application in bluemix 1 Answer
uploading Java se program using cf push command 0 Answers
Issue connecting to SQLDB service instance from Java struts app 1 Answer
MySQLNonTransientConnectionExceptions while using ClearDB MySQL Database 0 Answers | https://developer.ibm.com/answers/questions/8704/unable-to-run-java-application-in-bluemix.html?sort=newest | CC-MAIN-2020-16 | refinedweb | 2,034 | 58.08 |
For the last months I’ve found myself using this simple technic quite a lot. Here are some examples of what you can do with it and how I took advantage of its versatility.
Object notation
One of the main caracteristics of JavaScript is that almost everything is an object. As you may know, or if you don’t, you have two ways to access to an object property, both to read from it and write on it.
One is the dot notation
person.age, the other one is the brackets notation that goes like this
person['age'] where person is the object and age the property you want to have access to.
In the second one we are passing the property as a string and that’s the main reason why object literals are so useful and clean since strings are primitive values in JavaScript; this means you can compare them to take some different paths on your code.
Use cases of object literals
Storing and overriding default options
Sometimes functions and components need a lot of customization by the time they’re called or when an instance of them is created and every customizable property or flag means an argument. When this number exceeds the number three I usually prefer an options object to handle this situation.
var carousel = new Carousel(document.getElementById('photos'), { loop: false, time: 500, prevButton: 'previous', nextButton: 'next' });
Here we are creating a new carousel instance and we’re passing an HTML element and an object as parameters. Object literals are a good way to manage overriden default options inside the Carousel object.
First of all, we must create an object containing the default values.
// default options var dflt = { loop: true, time: 300, navigation: true, nextButton: '>', prevButton: '<' };
Then we have to create a method that will compare the modified options object with the default one.
function setOptions (options) { var newOptions = {}; for (var opt in dflt) { newOptions[opt] = options[opt] !== 'undefined' ? options[opt] : dflt[opt]; } return newOptions; }
Using the for in iterator you loop on every key of an object and I’ve sent dflt, but in this case I’m searching for its keys inside the custom one. If the custom options object contains that key I save its value in a new object, in case it’s not present I go for the default one. This approach is great because if you accidentally send an object with unnecessary option keys not contained on the default one they will be ignored.
This technic is used in vanish, one of my repositories to handle carousels, in case you want to see how it works.
Linking states to specific methods
Flags are very usual to save the state of something in your code so you can take different paths later through conditional statements.
If this flag is not boolean, meaning it could have more than two possible values, saving it as a string is a good decision. The reason is that you can call specific methods which are stored inside an object.
var method = { active: function () { // do something for active state }, inactive: function () { // do something for inactive state }, waiting: function () { // do something for the waiting state } }; // assuming getState returns a string var state = getState(); method[state]();
Doing this is convenient because you avoid doing this not-so-good approach.
function isActive() { // do something for active state } function isInactive() { // do something for inactive state } function isWaiting() { // do something for waiting state } var state = getState(); if (state === 'active') { isActive(); } else if (state === 'inactive') { isInactive(); } else { isWaiting(); }
Not only the code is ugly, but is not future proof. If at some point another state needs to be supported you will have to nest another if statement. Using object literals you would only need to add a new function to the method object making the code clearer and easier to maintain.
I use a similar structure in steer.
Data binding
Injecting large amount of data into an HTML Document can be hard to do in a clean a simple way. A nice choice is to solve this using data attributes in the elements and object literals.
var data = { name: 'Alan Turing', age: '58', field: 'Computing Science', job: 'professor', place: 'Cambridge' }; for (var property in data) { var selector = '[data-' + property + ']', element = document.querySelector(selector); if (element) { element.innerHTML = data[property]; } };
Using again the for in iterator, we look for HTML elements with a data attribute that makes reference to a certain property, for example <p data-name></p>. When the iterator falls on the name property it will get the paragraph element and inject the name value inside of it.
It’s a pretty simple case but a good way to show how powerful is to have access to the keys of an object as a string so they can be manipulated and extend funcitonality in your code.
This approach is used in this weather widget I did call condense.
Generating dynamic callbacks
If your dealing with a web app that needs to do JSONP calls, using object literals could help you store the callback to obtain the data.
Just create a base name and an integer to increase everytime you make a call to the API that will compound the final callback name. With the brackets notation you can store the new callback as a string key in a global variable like window, though it would be safer to use a namespace. Depending on the API documentation you will also need to specify the callback name in the url of the request.
var cName = 'apicall', cNumber = 0; var _getData = function (baseUrl, callback) { var script = document.createElement('script'), callbackId = cName + cNumber; // increase callback number cNumber++; // make padding method global window[callbackId] = function (data) { if (typeof callback === 'function') { callback(data); } else { console.error('You must specify a method as a callback'); } }; script.src = baseUrl + '&callback=' + callbackId; document.head.appendChild(script); };
These lines belong to a simple script I developed to make JSONP calls that, for some unknown reason I named jabiru. I wrote a post about it if you’re interested on cross domain requests.
Wrap-up
That’s it, I hope you find these approachs useful. Happy coding! | https://jeremenichelli.io/2014/10/the-power-of-using-object-literals/ | CC-MAIN-2018-47 | refinedweb | 1,027 | 58.11 |
Until recently the way Gazpacho instantiates widgets was using their python class. For example, for a GtkWindow I had the <type 'gtk.Window'> class, that is what you usually get when typing:
import gtk; print gtk.Window
The problem was when loading a .glade file. Then when you have to instantiate a widget you don't usually know the module where this widget has the class in. For example, let's see this (fake) glade file:
<?xml version="1.0" ?> <glade-interface> <widget class="GtkWindow" id="window1">
Now, how do I get the gtk.Window class from the 'GtkWindow' name. Well, if we just work with gtk+ widgets it's easy. All I have to do is to strip the 'Gtk' prefix from the name, and look for 'Window' in the gtk module.
Ok, that's cheating, and certainly doesn't work as soon as you start working with other modules, like kiwi.
So I tried another approach: getting the gtype from the class name and building the widgets with gobject.new:
>>> gobject.type_from_name('GtkWindow') <GType GtkWindow (136052312)> >>> gobject.new(_) <gtk.Window object (GtkWindow) at 0xb72e22ac>
Looks like this works perfectly but actually, it doesn't. The problem now is that when you create your widgets in pure python (as kiwi does) you __init__ method is not called anymore when using gobject.new(). Seems like a bug but it is not, since this is the expected behaviour. Even when it's the behaviour I don't want at all.
Sooo, back to the classes. Let's sumarize: I don't have an easy way to get the class object from the class name and I can't use gobject.new to construct the object. So let's get the class object from the gtype. That's the best solution and it would also be great if pygtk would support this :-(
Here is my workaround function that I hope I can remove as soon as this bug is fixed:
def class_from_gtype(gtype, module_list): for module in module_list: for name, klass in inspect.getmembers(module, inspect.isclass): if gtype == getattr(klass, '__gtype__', None): return klass
Note how easy is to get the gtype from a python class: it's on the '__gtype__' attribute.
Update: new version of class_from_gtype function thansk to Johan:
def class_from_gtype(gtype, module_list): def is_gobject_subclass(k): return isinstance(k, type) and issubclass(k, gobject.GObject) for module in module_list: for klass in filter(is_gobject_subclass, [getattr(module, n) for n in dir(module)]): if gtype == klass.__gtype__: return klass | http://www.advogato.org/person/lgs/diary.html?start=29 | CC-MAIN-2013-20 | refinedweb | 420 | 65.22 |
Roll-up Video Light
Using
To keep up with what I’m working on, follow me on YouTube, Instagram, Twitter, Pinterest, and subscribe to my newsletter. As an Amazon Associate I earn from qualifying purchases you make using my affiliate links.
Prototype & Code
This sketch shows the rough alignment of components for the project. Alternating strips of warm and cool white DotStar LEDs chain together to form a bank of light. For more detailed wiring instruction, visit the circuit diagram on the next page.
It pays to prototype your circuit on a solderless breadboard before soldering everything together. If you can manage to have a duplicate set of parts, that’s even better– you can use your working prototype as a model to work from when building the final soldered circuit.
Below is some rudamentary code for adjusting the brightness of the two LED strips using the membrane keypad as follows:
1: strip 1 brightness up
2: strip 1 brightness down
3: strip 2 brighness up
4: strip 2 brightness down
The brightness value incrementers don’t have any thresholding, so values will “wrap around” (ie pressing 2 when the strips are off will raise that strip to full brightness).
Load the following code onto your Pro Trinket:
#include <Adafruit_DotStar.h> // Because conditional #includes don't work w/Arduino sketches... #include <SPI.h> // COMMENT OUT THIS LINE FOR GEMMA OR TRINKET //#include <avr/power.h> // ENABLE THIS LINE FOR GEMMA OR TRINKET #define NUMwarmPIXELS 60 // Number of LEDs in strip #define NUMcoolPIXELS 60 // Number of LEDs in strip #define DATAPINwarm 6 #define CLOCKPINwarm 8 #define DATAPINcool 3 #define CLOCKPINcool 4 Adafruit_DotStar warmStrip = Adafruit_DotStar(NUMwarmPIXELS, DATAPINwarm, CLOCKPINwarm); Adafruit_DotStar coolStrip = Adafruit_DotStar(NUMcoolPIXELS, DATAPINcool, CLOCKPINcool); #define DEBOUNCE 10 // button debouncer, how many ms to debounce, 5+ ms is usually plenty // here is where we define the buttons that we'll use. button "1" is the first, button "6" is the 6th, etc byte buttons[] = {9, 10, 11, 12}; // the analog 0-5 pins are also known as 14-19 // This handy macro lets us determine how big the array up above is, by checking the size #define NUMBUTTONS sizeof(buttons) // we will track if a button is just pressed, just released, or 'currently pressed' byte pressed[NUMBUTTONS], justpressed[NUMBUTTONS], justreleased[NUMBUTTONS]; int warmBrightness = 0; int coolBrightness = 0; void setup() { byte i; // Make input & enable pull-up resistors on switch pins for (i=0; i<NUMBUTTONS; i++){ pinMode(buttons[i], INPUT_PULLUP); } // pin13 LED pinMode(13, OUTPUT); warmStrip.begin(); // Initialize pins for output warmStrip.show(); // Turn all LEDs off ASAP coolStrip.begin(); // Initialize pins for output coolStrip.show(); // Turn all LEDs off ASAP fillAll(warmStrip.Color(255, 255, 255)); warmStrip.show(); coolStrip.show(); } void loop() { warmStrip.setBrightness(warmBrightness); coolStrip.setBrightness(coolBrightness); warmStrip.show(); coolStrip.show(); digitalWrite(13, LOW); check_switches(); // when we check the switches we'll get the current state for (byte i = 0; i<NUMBUTTONS; i++){ if (pressed[i]) { digitalWrite(13, HIGH); // is the button pressed down at 
this moment } if (justreleased[i]) { if (i == 0){ warmBrightness+=5; }else if (i == 1){ warmBrightness-=5; }else if (i == 2){ coolBrightness+=5; }else if (i == 3){ coolBrightness-=5; } } } for (byte i=0; i<NUMBUTTONS; i++){ // remember, check_switches() will necessitate clearing the 'just pressed' flag justpressed[i] = 0; } } } //Serial.println(pressed[index], DEC); previousstate[index] = currentstate[index]; // keep a running tally of the buttons } } void fillAll(uint32_t c) { for(uint16_t i=0; i<warmStrip.numPixels(); i++) { warmStrip.setPixelColor(i, c); } for(uint16_t i=0; i<coolStrip.numPixels(); i++) { coolStrip.setPixelColor(i, c); } warmStrip.show(); coolStrip.show(); }
Circuit Diagram
DotStar strips connected to pins 3 and 4 and 6 and 8. Membrane keypad buttons connected to pins 12, 11, 10, and 9. Battery connected via USB port.
Solder LED Strips
Cut your LED strips to your desired length and configuration. Out light is comprised of 3 20-LED sections of each color temperature strip. Be sure to preserve solder pads on each end of the strip.
If using the super dense 144/m strip, you may have a “waste pixel” between sections in order to get as much available solder pad as possible.
Strip and tin four different colored stranded wires. Tin the pads of the DotStar strip.
Solder every other wire on alternating sides of the LED strip to help avoid shorts.
Carefully trim the wires to length so that they make tidy junctions to the next strip. Remember to keep all the data arrows flowing in the same direction!
Your LED circuit will now look something like this. Test it with your solderless breadboard circuit before proceeding.
Solder Pro Trinket & Keypad
Solder data connections to Pro Trinket according to the circuit diagram.
Twist and tin the two power wires together and slide on a piece of heat shrink tubing. Solder to BUS on Pro Trinket, then silde and shrink the heat shrink over any remaining exposed wire.
Double check there are no stray strands of wire, or these could short out and possibly causing damage! Repeat with the ground wires, but add one more wire to the bundle that will go to the membrane keypad’s ground wire.
Plug your membrane keypad into some female jumper wires and lay out the assembly to gauge the appropriate length to cut the jumpers.
Solder the jumpers to digital inputs and ground as described in the circuit diagram. Use heat shrink tubing where applicable. Test out your circuit to make sure everything works as expected before proceeding!
Fabric Backing
Cut two pieces of durable fabric to the size of your matrix plus an inch in each direction for seam allowance. With right sides together, stitch lines as shown using a sewing machine (or by hand if you must), then chop off the corners so everything will sit nice when you turn it right side out:
Iron the rectangle if it isn’t sitting flat, then top stitch as shown below to make battery pouches:
You can optionally add D rings or clasps at this step to make attachment points for your light. Stitch small bits of fabric around them and include them in your topstitched seam, plus a little X for added support.
Use Velcro tape at the openings of the battery compartments.
When you’re finished, the fabric backing will appear as below:
Sew your circuit elements onto the fabric backing with tack stitches– that is, sew and tie off the thread in discrete junctions. That way if one stitch gets broken, the whole circuit doesn’t fall off of the backing!
Use it!
Originally posted on Adafruit | https://beckystern.com/2015/05/27/roll-up-video-light/ | CC-MAIN-2022-40 | refinedweb | 1,090 | 61.26 |
<p>Zlatko Michailov (software developer, soccer coach, dad). Blog feed updated 2018-03-06.</p>

<h3>How to Reset a Comcast Cable Box</h3>
<p><u><b>Disclaimer</b></u>: Do the following procedure at your own risk. I'm not a Comcast insider. I'm just good at debugging.</p>

<h4>Symptoms</h4>
<p>Your cable box has lost signal and cannot recover. Comcast's customer support has sent a reset signal, but the box is not responding to it.</p>

<h4>Problem</h4>
<p>What I believe has happened is that the box has lost trust in Comcast. Now it refuses to accept any signal from the remote port, including a reset signal.</p>

<h4>Solution</h4>
<p>The solution that has worked for me is to deactivate the box, and then to reactivate it.</p>

<h4>Procedure</h4>
<p>1. <strong>Reset/deactivate the cable box</strong>:</p>
<ul>
  <li>Press and hold “0” on the remote control for about 5 seconds until the diagnostics menu comes up.</li>
  <li>Quickly press “1”, “3”, “7”, “9”. This will reset/deactivate the box.</li>
</ul>
<p>2. Wait until any visual activity stops. <strong>Reboot the cable box</strong>.</p>
<p>3. <strong>Verify the box appears as “not activated”</strong>:</p>
<ul>
  <li>Press and hold “0” on the remote control for about 5 seconds until the diagnostics menu comes up.</li>
  <li>Switch to the item that looks like “DTA status” and click on it.</li>
  <li>You should see something like “Activated: No”.</li>
</ul>
<p>4. <strong>Call Comcast</strong> (or do it over an online chat; it's much faster) <strong>to ACTIVATE your box</strong>. You'll need the serial number of the box, which is printed on a label on the bottom of the box.</p>
<p>Within a couple of minutes, the TV signal should come in.</p>

<h3>Systems of Linear Equations - Free for All</h3>
Some of my friends are familiar with the Windows Store app I wrote a couple of years ago that teaches kids how to solve systems of linear equations.
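<p>To illustrate the kind of problem the app covers, here is a minimal sketch in Java of solving a system with 2 unknowns by elimination. This is illustrative only — the class and method names are mine, not the app's actual code — and it assumes the system has a unique solution:</p>

```java
// Illustrative sketch only: solves a1*x + b1*y = c1 and a2*x + b2*y = c2
// by elimination. Assumes a unique solution (nonzero determinant).
public class LinearSolver {
    public static double[] solve2x2(double a1, double b1, double c1,
                                    double a2, double b2, double c2) {
        // Multiply the first equation by b2 and the second by b1,
        // then subtract to eliminate y:
        double det = a1 * b2 - a2 * b1;
        double x = (c1 * b2 - c2 * b1) / det;
        // Eliminate x the same way to get y:
        double y = (a1 * c2 - a2 * c1) / det;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // x + y = 5 and 2x - y = 1  =>  x = 2, y = 3
        double[] r = solve2x2(1, 1, 5, 2, -1, 1);
        System.out.println("x = " + r[0] + ", y = " + r[1]);
    }
}
```

<p>Running it on x + y = 5 and 2x - y = 1 prints x = 2.0, y = 3.0.</p>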
The app generates problems with up to 5 unknowns, describes full step-by-step solutions, or shows just answers.<br /> <br /> The news is that I've ditched the Windows Store, and I've made the app freely available as a web service. It is available for all smart devices from all geographical regions:<br /> <br /> <div style="text-align: center;"> <b><a href=""></a></b></div> <br /> The app is designed for small form factor devices. So please try it from your phone.<br /> <br /> This was made possible by the free web hosting at <a href="">GitHub Pages</a>.<img src="" height="1" width="1" alt=""/>Zlatko Michailov Java Async I/O<p>I’ve been playing with Java recently. It turns out Java has two archaic I/O stacks neither of which is adequate for modern days. That’s why I developed an <em>async I/O</em> package.</p> <p>My package doesn’t replace the existing I/O stacks. It’s an upgrade on top of the old InputStream and OutputStream that enables async interaction to optimize CPU usage and responsiveness of the consuming app.</p> <p>The entire source code is available on <a href="">GitHub</a>:</p> <blockquote> <p><a title="" href=""><strong></strong></a></p></blockquote> <p>Start with the <a href="">README</a> file, and you’ll find links to the <a href="">binaries</a> as well as to the <a href="">documentation</a>. Go through the <a href="">doc articles</a> and the <a href="">API reference</a>, and if this paradigm makes sense to you, try it in an app. I’ll be glad to hear your feedback.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov of Command Piping in the Windows Command Shell<p> </p> <p>A key concept in command piping is the success/failure of the execution which is determined by the status code the command returns. 
<strong>0</strong> (zero) means “success” while any other value means “failure”.</p> <p>Throughout this article, I’ll be using this successful command:</p> <blockquote> <p>> cmd /c “exit <strong>0</strong>”</p></blockquote> <p>as well as this failing command:</p> <blockquote> <p>> cmd /c “exit <strong>42</strong>”</p></blockquote> <p><strong>Note</strong>: If you want to copy and paste some of these examples, make sure you fix the quotes, dashes, and other characters to their standard ASCII representations.</p> <h4>Execute on Failure ( || )</h4> <p>Executes the command on the right side if and only if the command on the left side has failed. This piped statement:</p> <blockquote> <p><font color="#0000ff">> cmd /c “echo left & exit <strong>0</strong>”</font> || <font color="#ff0000">echo right</font></p></blockquote> <p>prints “<font color="#0000ff">left</font>” while this piped statement:</p> <blockquote> <p><font color="#0000ff">> cmd /c “echo left & exit <strong>42</strong>”</font> || <font color="#ff0000">echo right</font></p></blockquote> <p>prints “<font color="#0000ff">left</font>” then “<font color="#ff0000">right</font>”. </p> <p>You can remember this syntax and behavior as the logical OR operation from the C language or its derivates – C++, Java, C#, etc.</p> <h4>Execute on Success ( && )</h4> <p>Executes the command on the right side if and only if the command on the left side has succeeded. This piped statement:</p> <blockquote> <p><font color="#0000ff">> cmd /c “echo left & exit <strong>0</strong>”</font> && <font color="#ff0000">echo right</font></p></blockquote> <p>prints “<font color="#0000ff">left</font>” then “<font color="#ff0000">right</font>” while this piped statement:</p> <blockquote> <p><font color="#0000ff">> cmd /c “echo left & exit <strong>42</strong>”</font> && <font color="#ff0000">echo right</font></p></blockquote> <p>prints only “<font color="#0000ff">left</font>”. 
</p> <p>Similarly to the previous one, you can remember this syntax and behavior as the logical AND operation from the C language family.</p> <h4>Execute in Parallel ( | )</h4> <p>You may be familiar with this one – this is the true pipe. The shell connects the stdout of the left command to the stdin of the right command. </p> <p>What you may have not paid attention to is that the shell launches both of them in parallel. That is necessary for the pipe to have a live process on each end.</p> <p>To demonstrate the parallelism, we’ll need to enhance/complicate our left command a little bit:</p> <blockquote> <p>cmd /c "<font color="#0000ff">echo left begin >&2</font> & <font color="#ffc000">timeout /t 5</font> & <font color="#9b00d3">echo left end >&2</font> & exit 0" </p></blockquote> <p>First it prints “left begin” (to stderr to avoid the piping, so we can see the output), then it sleeps for 5 seconds to simulate work, and then it prints “left end” right before exiting.</p> <p>Success and failure don’t matter. Both of these piped statements:</p> <blockquote> <p>> cmd /c "echo left begin >&2 & timeout /t 5 & echo left end >&2 & exit <strong>0</strong>" | <font color="#ff0000">echo right</font></p> <p>> cmd /c "echo left begin >&2 & timeout /t 5 & echo left end >&2 & exit <strong>42</strong>" | <font color="#ff0000">echo right</font></p></blockquote> <p>print “<font color="#0000ff">left begin</font>” and “<font color="#ff0000">right</font>”, then sleep for 5 seconds, and then print “<font color="#0000ff">left end</font>”. </p> <h4>Execute Sequentially ( & )</h4> <p>This is the simplest yet least popular of all variations – it executes the right command after the left command has finished regardless of success/failure.</p> <p>Using the same commands from the previous section:</p> <blockquote> <p>> cmd /c "echo left begin >&2 & timeout /t 5 & echo left end >&2 & exit <strong>0</strong>" & <font color="#ff0000">echo right</font></p> <p>> cmd /c "echo left begin >&2 & timeout /t 5 & echo left end >&2 & exit <strong>42</strong>" & <font color="#ff0000">echo right</font></p></blockquote> <p>both of them print “<font color="#0000ff">left begin</font>”, sleep for 5 seconds, then print “<font color="#0000ff">left end</font>” and “<font color="#ff0000">right</font>”. </p> <h4>Relationship to POSIX</h4> <p>The first 3 syntaxes are identical between Windows cmd and bash. What is ‘<strong>&</strong>’ in Windows is ‘<strong>;</strong>’ (semicolon) in bash.
Otherwise the behavior is the same.</p>

US Patents

<p>The following patent applications have been approved in the United States. To find out more about each of them, click on the patent number:</p> <p><strong><a href="">7,818,311</a></strong> (2010-10-19) <br>Complex regular expression construction</p> <p><strong><a href="">8,856,792</a></strong> (2014-10-07)<br>Cancelable and faultable dataflow nodes </p> <p><strong><a href="">8,909,863</a></strong> (2014-12-09)<br>Cache for storage and/or retrieval of application information </p>

Me @ Microsoft (part III continued)

<p>After multiple reorgs, my team continues owning the programmability and public API for SharePoint and now for the entire Office. </p> <p>I, personally, have been focused on delivering Office cloud functionality to public app developers through the Office365 and Azure clouds: </p> <ul> <li>I designed and implemented the service behind <a href=""></a>. You can read more about it at: <a title="" href=""></a>. <li>I’m currently working on lighting up various aspects of Office cloud functionality through the Microsoft Graph API.</li></ul> <p>I remain the primary reviewer and approver of public API changes in SharePoint.</p>

OneSql Client ADO.NET 0.1 (alpha)

<p>I’ve made two releases this week. The important one is <a href="">OneSql.Client.AdoNet 0.1 (alpha)</a>. That is an ADO.NET adapter on top of OneSql.Client. There are a few important things to keep in mind when you try to adopt it:</p> <h5>Dependencies</h5> <p>The OneSql.Client.AdoNet package needs these two other packages:</p> <ul> <li><a href="">System.Data.Common</a> <li><a href="">OneSql.Client</a></li></ul> <h5>Supported API</h5> <p>Unless you stick exclusively to -Async methods, you are likely to get a NotSupportedException.
There are two messages – one for each of the reasons:</p> <ul> <li>“<em>Synchronous operations are not supported.</em>”<br>This message means that this call may require a roundtrip to the server which cannot be done synchronously. If there is an –Async version of this method, use that. Otherwise, try to find a workaround that doesn’t utilize this method. <li>“<em>This feature is not supported.</em>”<br>This message means that OneSql Client hasn’t implemented the necessary part of the TDS protocol to enable this feature. You may search this blog to find out what is supported and what is not. For this kind of exception, you should be able to find a workaround.<br>If you feel strongly about a missing feature, feel free to send me an email at <a href="mailto:zlatko+onesql@michailov.org">zlatko+onesql@michailov.org</a>. That doesn’t mean I’ll agree to implement it right away, but if I hear from a good number of people, I may do so.</li></ul> <h5>Entity Framework</h5> <p>Enabling Entity Framework is the ultimate test for an ADO.NET provider. I will really appreciate your effort to migrate an EF-based app to Windows Store/Phone using OneSql Client ADO.NET. I’m sure you’ll get NotSupportedExceptions; please send me those call stacks. I’ll truly appreciate that.</p> <p>The second release is an update on OneSql Client itself. It contains 2 bug fixes plus retargeting of both Windows Store and Windows Phone. I expect more such updates as testing on the ADO.NET adapter continues. </p> <p>I’ll continue posting notifications about new releases on <a title="" href=""></a>. Follow it if you want to stay informed. </p>

OneSql Client 1.0

<p>OneSql Client is no longer a beta!</p> <p>There’ve been 50 downloads of the beta version.
I wish there were 100, but 50 is also a good number.</p> <p>I didn’t receive any feedback, so I assumed the product was rock solid and no key features were missing.</p> <p>OneSql Client 1.0 is a product now. You’ll find it at the same NuGet location - <a href=""><strong></strong></a>. </p> <p>The license also remains the same – <strong>BSD-2-Clause</strong>.</p> <p>Questions and comments are still welcome at: <a href="mailto:zlatko+onesql@michailov.org"><strong>zlatko+onesql@michailov.org</strong></a>. </p>

OneSql Client 0.3 (beta) Is Available

<p>OneSql Client 0.3 (beta) is now available! </p> <p>Same locations:</p> <ul> <li>NuGet package - <a href=""><strong></strong></a>. <li>Raw files and additional content - <a href=""><strong></strong></a>. </li></ul> <h5>New in 0.3</h5> <p>The most significant addition to the OneSql Client project is the test suite. The entire source code of the test project has been released. It may be used as samples.</p> <p>There are no new features in this release. Various bugs were discovered and fixed during the course of adding test automation.</p> <p>There are two breaking changes:</p> <ul> <li><a href="">The timeout options have been removed</a>. This is an API change that may require changing existing app code. Otherwise it may not compile. And of course, tasks that used to time out no longer do so.</li> <li>Service DONE sections are no longer exposed to apps. This is a behavior change. It will not prevent existing app code from compilation, but you may have to remove any <font face="Courier New">SqlClient.MoveToNextResultAsync()</font> calls you have added to skip over those “unwanted” results.</li></ul> <h5>Next</h5> <p>The “beta” label will be removed when all of the following conditions are met:</p> <ul> <li>The beta release has been available for at least 1 month, i.e.
no earlier than May 27, 2014.</li> <li>There are at least 100 downloads of a single beta package on NuGet.</li> <li>All customer-reported issues for which there is no viable workaround have been fixed.</li></ul> <h5>Support</h5> <p>All support content will continue to be published at this blog. This link <a title="" href=""><strong></strong></a> queries OneSql Client content.</p> <p>The recommended source, however, is <a title="" href=""><strong></strong></a> where along with references to blog posts, you’ll also find short status updates and news. </p> <h5>Feedback</h5> <p>To report an issue, to request a feature, or to provide feedback, please send an email to <a href="mailto:zlatko+onesql@michailov.org"><strong>zlatko+onesql@michailov.org</strong></a>. </p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Client – Breaking Change: Timeout Options Removed <p>This is an advance notice of an upcoming breaking change.</p> <h4>Summary</h4> <p>The following properties of type <font size="2" face="Courier New">OneSql.Client.Options</font> have been removed:</p> <ul> <li><font size="2" face="Courier New">ConnectTimeout</font> <li><font size="2" face="Courier New">CommandTimeout</font> <li><font size="2" face="Courier New">DefaultConnectTimeout</font> <li><font size="2" face="Courier New">DefaultCommandTimeout</font></li></ul> <p>OneSql Client operations will complete only based on <a href="">[MS-TDS]</a> protocol flows, not based on any timeout.</p> <h4>Version</h4> <p><strong><font face="Courier New">0.3</font></strong> and above.</p> <h4>Details</h4> <p>The purpose of timeouts is to give control back to the app when an operation takes too long to complete. This is mainly useful in synchronous API where the client thread is blocked on the operation.</p> <p>In asynchronous API, like OneSql Client, the calling thread is never blocked. It is a developer’s choice whether to await the completion of the operation or to perform some other action. 
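<p>The "timed await" idea itself needs no provider support. Here is an illustrative sketch in plain JavaScript; <font face="Courier New">withTimeout</font> is a hypothetical helper, not part of OneSql Client:</p>

```javascript
// Generic "timed await": race an operation against a deadline.
// withTimeout() is a hypothetical helper, not part of OneSql Client.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  // Clear the timer either way so the process can exit promptly.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Simulated long-running operation (stands in for a SqlClient call).
const slowOp = new Promise(resolve => setTimeout(() => resolve("rows"), 200));

withTimeout(slowOp, 50)
  .then(result => console.log("completed:", result))
  .catch(err => console.log("gave up:", err.message));
// prints "gave up: timed out"
```

<p>Note that losing the race only returns control to the caller; the underlying operation keeps running, which is exactly why a client in that state has to be treated as dirty and disposed of.</p>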
</p> <p>The purpose of the timeout options was to provide a “timed await” mechanism that is common for all languages.However, the current implementation is incorrect – it leaves the SqlClient (and its underlying TdsClient) in a state that prevents further usage. </p> <p>The cost of cleaning up SqlClient’s state exceeds the value of the feature by far. Moreover, that cleanup (which involves a new request to the server as well as receiving its response) may take a time that is long enough to defeat the purpose of the timeout. Therefore, timeouts have been discontinued. </p> <h4>Action Needed</h4> <p>Remove code that is setting (or getting) the above properties explicitly. Otherwise, when you upgrade your copy of OneSql Client, your app won’t compile.</p> <p>If you want to get control after a certain time before the operation has finished, use standard Windows RT mechanisms specific to the language of your app: </p> <p><strong>C#</strong></p> <p><font face="Courier New">var cts = new CancellationTokenSource(secs * 1000);<br>await sqlClient.XxxAsync(…).AsTask(cts);<br>…</font></p> <p><strong>JavaScript</strong></p> <p><font face="Courier New">var sqlPromise = sqlClient.XxxAsync();<br>WinJS.Promise.timeout(secs * 1000, sqlPromise).then(…, …);<br>…</font></p> <p>When you get control before the operation has finished, <strong>the SqlClient will be in a dirty state. You won’t be able to continue using it. Dispose of it. Any uncommitted transactions will be rolled back.</strong> Create a new SqlClient, connect it to the SQL Server, and decide how to continue.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Client is Back On Track!<p>Long story short - the problem I was hitting with C# was neither in Windows RT, nor in OneSql.Client. So I’m back on my way to implement the necessary test automation.</p> <p>Meanwhile, I discovered a <u>product bug</u> – OneSql Client misses the end of the result set when multiple rows are read at a time. 
While I’ve fixed this bug already, I’ll hold off a bits update. The workaround for this bug is to read rows one by one. If this is a problem for any of the current adopters, please let me know via email at <a href="mailto:zlatko+onesql@michailov.org">zlatko+onesql@michailov.org</a>.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov in C# Apps<p><font color="#0000ff"><strong>Update (Mar 29)</strong>: </font><em><font color="#0000ff">This issue is now resolved. See:</font> </em><a title="OneSql Client is Back On Track!" href=""><em><strong>OneSql Client is Back On Track!</strong></em></a>.</p> <p>I’ve been testing OneSql.Client using a JavaScript app assuming that if it works for JavaScript, it should work for C# as well since OneSql.Client is written in C#. I’ve been wrong.</p> <p>I recently discovered that OneSql.Client doesn’t function properly when it’s hosted by C# apps. I’ve narrowed the problem down to the platform’s StreamSocket. I posted a <a href="">question on the C# Apps forum</a>, but nobody from the product team has responded yet. I have little hope anybody will ever respond.</p> <p>This is very unfortunate, because C# represents a large portion of the Windows Store apps. Also, I still plan to implement the necessary test automation using C# in order to gain sufficient confidence in the quality of the product.</p> <p>I first apologize to all of the adopters who can’t use my product in their C# apps. I should have tried C# before announcing the alpha releases.</p> <p>I’m asking you for clues how I should be initializing/using StreamSocket to make it behave like it does in JavaScript apps. I admit I’m not certain whether the platform classes are exactly the same for JavaScript apps and for C# apps. I also suspect there might be differences in the threading models that I haven’t seen documented. Any other clues are also welcome. 
</p> <p>Please email your suggestions to: <a href="mailto:zlatko+onesql@michailov.org">zlatko+onesql@michailov.org</a>.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Client 0.2 (alpha)OneSql Client 0.2 (alpha) is now available. Same locations as before:<br /> <ul> <li>NuGet package - <a href=""><strong></strong></a>.</li> <li>Raw .winmd and .chm files as well as a sample WinJS app - <a href=""><strong></strong></a>.</li> </ul> <h5> New in 0.2</h5> This.<br />.<br /> <h5> Next</h5> Notice that this release is still “alpha”. It will remain like that until I have a sufficient coverage through test automation. That is the next thing on my list.<br /> I will appreciate feedback. Please send email to <a href="mailto:zlatko+onesql@michailov.org"><strong>zlatko+onesql@michailov.org</strong></a>.<img src="" height="1" width="1" alt=""/>Zlatko Michailov of Linear Equations 1.4 – Free Edition with Full FunctionalitySystems of Linear Equations v1.4 is now available. The big news is that <strong><u>Free Edition has full functionality</u></strong>! Yes, you get complete, detailed, solutions for free.<br /> Now that the only advantage of the paid edition is that you don’t get to see ads, <strong><u>I’ve reduced the price of Standard Edition to the minimum possible – $1.49</u></strong>.<br /> Notice that these new releases are <strong>only available for Windows 8.1</strong>.<br /> Start by installing the <a href=""><strong>Free Edition</strong></a>. <br /> If you want to get rid of the ads, try the <a href=""><strong>Standard Edition</strong></a>.<img src="" height="1" width="1" alt=""/>Zlatko Michailov and Services<p>The term “<em>devices and services</em>” has become very popular recently. However, a lot of people including software engineers wonder what it means in terms of technology as well as how it will generate revenues for software companies.</p> <p><font color="#ff0000">This article represents my personal opinion and only my personal opinion. 
It may or may not match Microsoft’s opinion. Either way, this article MUST NOT be taken as official or sanctioned by Microsoft</font>.</p> <h4>Devices</h4> <h5>Price Wars</h5> <p>It is naïve to think that any company [except Apple] can generate considerable revenues from hardware devices. The reason for that is that devices are made of mass market components. If one company can design a device with certain capabilities and a certain form factor, then other companies can do the same at the same or lower cost. Thus a device pioneer won’t enjoy glory for too long. (The only exception could be Apple that has uniquely loyal customers who are willing to pay extra for the brand.)</p> <p>Look at this offer – <a href=""><strong>Nokia Lumia 521 without contract for $69</strong></a>! This is a great device that runs the exact same Windows and apps that run on any expensive Windows phone, and the best of all features - no blood sucking contract! I don’t think Nokia/Microsoft will make a ton of money out of this device at this price. </p> <p>Or, how about <a href=""><strong>this Dell Venue 8 Pro</strong></a> for $279? This device is a full blown computer that runs the real Windows, not RT.Will Dell make billions from it? I doubt it. </p> <h5>Open Platform</h5> <p>The PC platform was designed to be open from the very beginning. No one has ever owned it – neither chip makers, nor operating system vendors, nor PC manufacturers. Apps have had full access to the hardware capabilities of the PC. Users have been able to install apps from anywhere. Very democratic. </p> <p>Democracy, however, is not easy to consume. It requires knowledge and responsibility. For instance, when a user needs some functionality, she has the freedom to choose any app. Great! But where to find apps? How can she be certain the app will not do something bad either intentionally or unintentionally?</p> <p>The open platform works fine for computer/software savvy people. 
As long as you know where to search for apps, and you accept the risk of eventually installing a bad app, you will enjoy the open platform. Although there were hundreds of millions of PCs sold worldwide, it was a tiny fraction of the human population that was using them effectively.</p> <h5>Closed Platform</h5> <p>It is possible to create a platform that restricts what apps and users can do, but that platform can only enforce such restrictions over the layers built on top of itself. If the lower layers of the stack are open, an alternative platform could be installed that would not respect those restrictions, and thus no guarantees could be made. Therefore, in order for a platform to preserve integrity, it must include all the lower layers of the stack down to the very bottom – <em>the hardware device</em>. </p> <blockquote> <p>That’s where devices come to play a major role. Devices themselves won’t be generating revenues. They will be part of a closed platform that is expected to be more profitable than its open predecessor.</p></blockquote> <p>Why will the closed platform be more successful than the open platform? Because, although we are crazy about freedom, we are too lazy to sustain it. We want simple, constrained, experiences. We want someone else to guarantee the safety of our own device. And that’s exactly what the closed platform offers – when you need certain functionality, there is a single app store where you can possibly find an app. All apps have been tested by the platform vendor before they’ve been made available for you to download. When an app runs on your device, the platform won’t let it use any device capability that it hasn’t declared. When you want to remove an app, the platform will wipe it out like it has never been there. That perfectly fits the needs of our modern spoiled civilization.</p> <p>But how would a close platform generate a revenue? 
Here is the model that Apple pioneered in modern days: the platform vendor takes a percentage from every single app sale. Let’s say 4 developers have made app sales for $1M each. If the platform vendor takes a 25% cut from each app sale, it will make $1M while each of the developers will make $750K. Now imagine 4 million developers selling apps. You can do the math. Not every developer will be successful? No problem. The platform vendor requires an upfront license fee from every developer whether he will become successful or not. And yes, that license has to be renewed every year. Clever, isn’t it?</p> <h4>Devices and Services</h4> <p>The key factor that contributes to the success of a closed platform is the <em>user experience</em>. A good user experience attracts users who are potential app buyers. A large number of potential app buyers attracts app developers who make more apps available on the platform. More apps make the platform even more attractive to users which closes the loop and so does the process continue. </p> <h5>Emerging User Experience</h5> <p>Of course, each platform vendor will put its signature on the user experience, and will provide a different set of free apps to make its platform unique, but that’s not the interesting part. The important thing is the emerging user experience that is the same across all device platforms, and that will change the software industry.</p> <p>To understand this new experience, let’s take a look at the evolution of doing business: In-person –> The Web –> Devices and Services.</p> <h6>In-Person</h6> <ol> <li>Get your butt off the couch to sit in your car. <li>Drive to the point of business. <li>Stand in line. <li>Do your business. <li>Drive back. <li>Place your butt where it belongs – on the couch.</li></ol> <p>5 out of those 6 steps are plain overhead. 
Not only does that waste time, but it’s often dangerous.</p> <h6>The Web</h6> <p>The Web significantly simplified the experience by replacing physical driving with cyber navigation: </p> <ol> <li>Get your butt off the couch to sit by the PC or to fetch the laptop. <li><strike><font color="#a5a5a5">Sit in the car and drive to the place of business</font></strike>.<br>Navigate to the target web site. <li><strike><font color="#a5a5a5">Stand in line</font></strike>. <li>Do your business. <li><strike><font color="#a5a5a5">Drive back</font></strike>. <li>Place your butt where it belongs – on the couch.</li></ol> <p>The Web made it safer and faster. But one big problem still remains – we are lazy, and we want to sit on the couch doing nothing. </p> <p>That’s where devices and services comes to help.</p> <h6>Devices and Services</h6> <ol> <li><strike><font color="#a5a5a5">Get your butt off the couch to sit by the PC or to fetch the laptop</font></strike>.<br>Suspend the Facebook app. <li><strike><font color="#a5a5a5">Sit in the car and drive to the place of business</font></strike>.<br><strike><font color="#a5a5a5">Navigate to the target web site</font></strike>.<br>Bring up the app for the job. <li><strike><font color="#a5a5a5">Stand in line</font></strike>. <li>Do your business. <li><strike><font color="#a5a5a5">Drive back</font></strike>. <li><strike><font color="#a5a5a5">Place your butt where it belongs – on the couch</font></strike>.<br>Resume the Facebook app.</li></ol> <p>Your butt remains planted on the couch the whole time. Halleluiah! This experience can only be successful. </p> <h4>Impact on the Web Site Industry</h4> <p>I’m not writing out of excitement about how our society will get fat faster. I’m writing because I foresee a disruptive change coming up triggered by devices and services.</p> <h5>From *SP to Plain HTML</h5> <p>The vast majority of web sites is implemented using a server page technology - ASP.NET, JSP, PHP, etc. 
An HTTP request comes to the server and gets dispatched to the web site’s handler. The handler loads and executes a custom module that does some computation and ultimately sends back HTML that a browser visualizes in front of the end user.</p> <p>Since the user experience is what sells a web site, the UI dictates how code is written and structured. You may see projects where all the code is packaged in server page modules, though most commonly developers try to extract <em>business logic</em> into separate modules. That’s the funny part. Since all code paths are driven by UI experiences, it’s hard to draw a clear line where UI-specific code ends and where business logic starts. Thus what typically ends up in the so called business logic can be classified in two buckets:</p> <ul> <li>Unrelated utility functions. <li>An object model that represents the persistence schema.</li></ul> <p>This is funny because despite of the developers’ best intents, such a separation has no value with regard to reusability or scalability. It might as well live with the rest of the server page code because that’s the only purpose it serves.</p> <p>Now the important part - that code doesn’t belong to the server side at all! Its place is on the client side just like any other app. That’s the first architectural change that devices and services introduces:</p> <blockquote> <p>Devices and services kicks web UI out of the server, and converts it to plain HTML. The web site becomes yet another client app for the given service.</p></blockquote> <p>This is a very important change because it deeply affects the COGS of running a web site. Instead of the service vendor paying for the resources (CPU, memory, bandwidth, etc.) needed to generate a web page, each user will run that code in their own browser. And those utility libraries and object models? 
They’ll have to be rewritten in JavaScript - as I already explained, they only serve the UI.</p> <h5>Scale Out and the Current Lack of It</h5> <p>The removal of the UI from the server will reveal an unspoken truth:</p> <blockquote> <p>Most web sites never really scaled out.</p></blockquote> <p>(<font color="#666666">There is a tiny portion of modern web sites, mainly in the search or social space, that are designed to scale out and that I don’t include in the above statement.</font>)</p> <p>Most development teams actually believe their web sites scales out. And their scale out is done by replicating front end boxes where those teams believe the most compute-intense code, the business logic, runs. First, we now know that all that front end code doesn’t belong on the server side to begin with. Second, the assumption that the most compute-intense code is outside of the data layer is wrong, because the relevance of a web site is proportional to the data it processes. If there was no giant volume of data around which the whole experience gravitates, you’d be running that functionality on your local box! </p> <p>(<font color="#666666">In all fairness, there are some web sites that offer a poor man’s backend scale out by creating <em>static partitions</em>. A user session statically belongs to exactly one partition based either on the host name or on the user account. While this approach does offer some scalability, the scalability is uneven and the approach requires constant monitoring and a manual migration of data among partitions to maintain a relatively even distribution. Thus the cost of this approach is too high to accept as long-term solution.</font>)</p> <p>That defines a problem for the whole web site industry:</p> <blockquote> <p>Follow the search and social leaders and design your data stores to scale out.</p></blockquote> <p>A lot has been written about Big Data, and I won’t repeat it. 
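<p>The static-partition approach described in the parenthetical above can be sketched as a small routing function. This is purely illustrative; the partition names and the hash are made up:</p>

```javascript
// Static partitioning: a user account always maps to the same partition.
// The partition list is fixed; rebalancing requires manual data migration.
const partitions = ["db-0", "db-1", "db-2", "db-3"];

// A tiny stable string hash (illustrative; any stable hash works).
function stableHash(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Every request for the same account lands on the same partition.
function partitionFor(account) {
  return partitions[stableHash(account) % partitions.length];
}

console.log(partitionFor("alice@example.com"));
console.log(partitionFor("alice@example.com") === partitionFor("alice@example.com")); // true
```

<p>The weakness is visible right in the sketch: the partition list is frozen, so growing it changes where accounts hash to, which is why this approach demands constant monitoring and manual data migration.</p>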
My point is that Big Data stores will become the primary choice of data stores for web-facing systems. The sooner development teams accept this idea, the better positioned they will be in the new world of devices and services.</p> <h5>Thin Client No More</h5> <p>Lastly, I’d like to point out that our perception of the client experiences will change. So far we’ve been categorizing clients as “<em>thin</em>” (web pages) and “<em>rich</em>” (standalone apps). The web interface was viewed as less capable than standalone apps. That perception is about to change – device apps will provide limited UI for limited functionality suitable for the lower resolution and the smaller form factor of the device while the web pages will provide full functionality leveraging high definition screens. For instance, we’ll be using our phones to read our Facebook feeds most of the time, and we’ll use a browser when we want to do a more sophisticated operation like managing settings. </p> <h4>Conclusion</h4> <p>Devices and services launches a new era in software history. The trend is irreversible. We, software developers, will have to accept the constraints and the requirements of the new platform. </p> <p>I am truly excited about the disruption to web-facing systems. As a backend developer, I’ve been waiting for the demand to build up, and now that the moment has come, I can’t hide my excitement.</p>

OneSql Client – World’s First SQL Server Provider for Windows Store Apps

If you work on SQL Server client apps, you may have noticed that it’s not possible to connect directly to SQL Server from a Windows Store app. Instead, you have to develop a web service, you have to deploy it somewhere in the cloud, you (or your customers) have to pay for running it, and lastly, you have to support it.
And all that is because there is no SQL Server provider for Windows Store apps.<br /> <strong>Today, I am excited to announce OneSql Client – world’s first SQL Server provider for Windows Store apps</strong>!<br /> <h5> Overview</h5> OneSql Client is a Windows Runtime component that can be used from any Windows Store app regardless of its language. I wrote a demo app in JavaScript to prove that concept (see link below.)<br /> OneSql Client is freely available as a NuGet package at <a href="" title=""><strong></strong></a> or in a raw form from its home location <a href="" title=""><strong></strong></a>. <br /> <u>Please be advised that this is an early alpha release and it may not behave correctly in many cases. At this point, it is only intended for experimental and learning purposes.</u><br /> OneSql Client implements the <a href="">[MS-TDS]</a> protocol from scratch. Well, it doesn’t implement the whole protocol yet. The purpose of this alpha release is to prioritize the remaining work. <br /> <h5> API</h5> OneSql exposes API from two namespaces:<br /> <ul> <li><strong>OneSql.Client.Sql</strong> – this namespace contains the actual SQL Server provider, <strong>SqlClient</strong>, that is ready for consumption by apps. It returns rows as JSON arrays. </li> <li><strong>OneSql.Client.Tds</strong> – this namespace contains low-level primitives that could be used to implement your own provider. However, if you need a feature that SqlClient doesn’t offer, I strongly recommend that you first request that feature from SqlClient before trying to implement your own provider.</li> </ul> <h5> Limitations</h5> <ul> <li>Supports SQL Server 2012 or higher. It might be possible to work against SQL Server 2008 and 2008 R2, but that hasn’t been tested. </li> <li>Doesn’t support the following data types: <ul> <li>image, text, ntext – these types have been superseded by varbinary(max), varchar(max), and nvarchar(max) respectively. 
</li> <li>decimal/numeric – these types are too big for JavaScript. </li> <li>UDT – I don’t think there is a base support for these in Windows Runtime. </li> <li>sql_variant – I’ve never needed this type. Hopefully not too many people will be crying for it.</li> </ul> The above types can still be used in the storage schema or in server-side code. They just can’t be retrieved directly. You’ll have to CONVERT/CAST the respective column to a supported type. </li> <li>Only SQL batches are supported, i.e. parameterized queries are not yet supported. They are on the plan, just not yet.</li> </ul> <h5> Getting Started</h5> Download and open the <a href="">SampleWindowsStoreApp</a> project. Play with it. You’ll most certainly find bugs. Please describe your repro clearly, zip any prerequisite SQL and send it to <a href="mailto:zlatko+onesql@michailov.org"><strong>zlatko+onesql@michailov.org</strong></a>. <br /> You may also see how connections are established, how rows are being fetched, and how to move to the next result set. The documentation is very Spartan at the moment. It is on my plan to provide samples. <br /> <h5> Known Issues</h5> Errors encountered by OneSql get lost somewhere among the awaits and the JavaScript promises. I know it is annoying. I’m working on that too.<br /> <h5> Support</h5> The best place to get the latest news, updates, and references to resources is <a href="" title=""><strong></strong></a>. 
<br /> Articles will continue being published at <a href="" title=""><strong></strong></a>.<br /> New releases and other executable artifacts will be published at <a href="" title=""><strong></strong></a>.<br /> For everything else, send me an email at <a href="mailto:zlatko+onesql@michailov.org"><strong>zlatko+onesql@michailov.org</strong></a>.<img src="" height="1" width="1" alt=""/>Zlatko Michailov of Linear Equations<strong>Systems of Linear Equations</strong> is an app that teaches kids how to solve systems of linear equations that also generates practice problems. The app comes in two editions – <strong>Free</strong> and <strong>Standard</strong>.<br /> <h4> Features</h4> <h5> Free Edition</h5> <ul> <li>Unlimited number of problems with 2 and 3 unknowns. </li> <li>Answers. </li> <li>A description of the Gaussian Elimination method. </li> <li><strong>Free of charge</strong>.</li> </ul> <h5> Standard Edition</h5> <ul> <li>Unlimited number of problems with 2, 3, <strong>4, and 5</strong> unknowns. </li> <li>Answers. </li> <li><strong>Complete solutions</strong>. </li> <li>A description of the Gaussian Elimination method. 
</li> <li><strong>Costs $2.49</strong>.</li> </ul> <h4> Screenshots</h4> <a href=""><img alt="Standard_1_Navigation" border="0" src="" height="274" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Standard_1_Navigation" width="484" /></a><br /> <a href=""><img alt="Standard_2_Problem" border="0" src="" height="274" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Standard_2_Problem" width="484" /></a><br /> <a href=""><img alt="Standard_3_Answer" border="0" src="" height="274" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Standard_3_Answer" width="484" /></a><br /> <a href=""><img alt="Standard_4_Solution" border="0" src="" height="274" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Standard_4_Solution" width="484" /></a><br /> <a href=""><img alt="All_Method" border="0" src="" height="274" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="All_Method" width="484" /></a><br /> <h4> How to Obtain</h4> The app is available for Windows 8 and Windows RT from the Windows App Store. It is in the <strong>Education</strong> category. <br /> (Tip: To search the Windows App Store, launch the Store app, then sweep the screen from the right and tap on the Search icon.) 
<br /> The Free edition is available worldwide while the Standard edition is only available in the United States.<br /> <h4> Support</h4> Go to <a href=""></a>. <br /> If you don’t find the answer you are looking for, post your question under an appropriate article.<img src="" height="1" width="1" alt=""/>Zlatko Michailov, Time Bombs, and Refactoring<p>When an organization embarks on the development of a new software product, it has two questions to answer: </p> <ol> <li>What base technologies should the new product use/depend on?</li> <li>How the new product will grow over time?</li> </ol> <p>The first question is <i>religious</i> which makes it easy to answer – an organization is either a “Microsoft shop”, or an “OpenSource shop”, or “Some-other-religion shop”. Within that shop, developers may have to choose between managed or native in case that is not part of the religion, and that’s it for question #1. </p> <p>Question #2 is a business question. In all fairness managers and architects typically spend the necessary time and effort to come up with a solid vision for future growth. Only after upper management is satisfied with that vision, will it let development commence. </p> <p>So far so good – that is the correct process. Um, there must be a caveat, right? Right. The caveat is that the base technologies the organization has religiously chosen also have similar growth plans. That means there is a chance that a base technology may offer the same functionality as this product. That chance is low by default, but it can jump significantly if developers try to be...<i>smart</i>. This is religion, remember? Religion demands faith, not smartness. Every platform’s goal is to make common use cases simple. As long as application developers stick to such simple patterns, their product will leverage the platform’s enhancements. However, often times the platform is lacking certain features, and developers plug those wholes rather than wait for the platform to improve. 
</p> <p>A very typical example of such a fix is implementing a data cache to save hard disk hits, or network roundtrips. While such a fix works perfectly short term, it is a time bomb in the long run, because performance is a fundamental problem, and sooner or later the platform will address it. Furthermore different technologies develop at different paces. For instance, 10-15 years ago the main problems were slow network communications and slow disk access. So it was tempting to bake a data cache right into the frontend box. Today, however, the main problem is to scale out the frontends in order to serve the necessary number of hits, which further trails the requirement of driving the cost of those frontends low. Now a data cache on each frontend box would consume unnecessary memory as well as CPU cycles for cache entry lookups while a single dedicated cache box per rack or per farm would be cheaper and more effective . </p> <p>That is how a big win can expire over time. It becomes<i> cancer</i> – it is an extra code that has both a development/maintenance cost as well as production cost. And it can only get worse, because it falls into an area where the platform is obviously making improvements. </p> <p>The only treatment I can think of is to surgically remove the tumorous code, i.e. to <i>refactor</i>. Unfortunately, refactoring has been over-promoted by the Agile community to the point where saying the R-word in front of management is politically incorrect which makes the disease really difficult to cure once it has developed. </p> <p>That leads to the question: Is this disease preventable? Theoretically speaking – yes. Since time bombs are explicitly checked in by developers, if developers don’t do it, there won’t be a problem, right? Well, that’s easier said than done. Whenever a performance benchmark misses its target, a hero will jump in to fix it, for which he/she will be properly rewarded. 
What makes such a time bomb difficult to remove is that one it has been proclaimed successful, no one will be willing to admit that its value has expired (sometimes even too quickly.) In general, we don’t understand that success in the software industry is something very temporary. </p> <p>There is a way, however, to be a hero now and to remain a hero even when a heroic fix turns bad. What’s even better is that you can be a hero (twice!) without infecting your product. When you do your original heroic fix, start with asking yourself: Is this part of my product’s value, or am I compensating for a platform’s deficiency? If you end up on the latter part, then code it in a way that makes it easily removable – avoid introducing public API at any cost; make the source file easily removable from the build; don’t let other code take dependencies on your fix that you haven’t envisioned; and most importantly - encode a switch that bypasses the new functionality. </p> <p>If you stop here, there will still be a problem when your fix becomes obsolete, but then you’ll be able to either flip the switch or remove the whole thing. Either way you’ll be a hero for a second time on the same problem! </p> <p>If you further make that switch a configuration setting so that flipping it doesn’t require a product change, a problem with your heroic fix may never occur. The only down side is you’ll miss being a hero for a second time on the same problem.</p> <p>In conclusion, I continue to promote “timeless” development among developers as well as necessary refactoring among management. Hopefully this article has made my points clear.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Chess Clock 0.4<p>This update introduces support for extra time per move – typically 5 seconds for recording the opponent’s move.</p> <p><strong><a href="">DOWNLOAD</a></strong> the new version, and run it locally for best experience. 
Alternatively, you may use it <strong><a href="">online</a></strong>. </p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov about TPL Dataflow<p>When <a href="">Jonathan Allen</a> of <a href="">InfoQ</a> contacted me for an e-interview about <a href="">TPL Dataflow</a>, I immediately searched InfoQ to see what content it had about <a href="">TPL</a> and <a href="">TPL Dataflow</a>. Not only did I find content, but I found an <a href="">article</a> Jonathan had written about my <a href=""><strong>Guide to Implementing Custom TPL Dataflow Blocks</strong></a>. </p> <p>So I did the <a href=""><strong>THE INTERVIEW</strong></a>. Hopefully you find my insights on TPL Dataflow useful.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Me @ Microsoft (part III)<p>At the beginning of this month, December 2011, I joined <strong>SharePoint</strong> and more specifically its <strong>Developer Platform</strong> team. Our charter is the third-party application model in SharePoint. This is a very interesting domain. I am trying to get up-to-speed as fast as possible.</p> <p>I admit I was plain lucky to get this job. It happened after a series of accidents and involved several great managers to whom I am deeply indebted.</p> <p>I am eager to start contributing to SharePoint as a modern web development platform. I am also planning to engage with the SharePoint developer community. I am already getting some rough ideas. 
Hopefully they will start taking shape soon [and I will not be embarrassed to share them].</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Chess Clock 0.3<p>This update adds mouse support to my HTML-based Free Chess Clock to make it suitable for keyboard-less devices like phones and tablets.</p> <p>You can try the new version <a href="/p/freechessclock.html"><strong>online</strong></a>, but you’ll have a better experience if you <a href=""><strong>DOWNLOAD</strong></a> the HTML page and run it locally.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov Bets on Scalable Databases<p>For those who don’t follow the database industry closely, here is the problem: Relational database servers clearly dominated the market for the last 15-20 years, but today they render inadequate for <a href="">Big Data</a>. There are sectors of the industry where relational databases have never been able to perform - web indexing, bio research, etc. <a href="">MapReduce</a> has emerged to serve those sectors. While the MapReduce pattern is no match for SQL in terms of functionality, the distributed storage architecture on which MapReduce is based has a great potential for scalable processing. </p> <p>Now the question is: <strong>Which approach will produce a rich, yet scalable, data processing engine first?</strong> </p> <ul> <li>a) Enabling relational databases to operate over distributed storage, or </li> <li>b) Expanding the processing functionality over distributed storage? </li> </ul> <p>I bet on the latter approach:</p> <blockquote> <p>Scalable databases will emerge from distributed storage.</p> </blockquote> <p>It may seem that I am betting against the odds, because relational database vendors already have both the code and the people for the job. However, those codebases are about 20 years old and thus they are barely modifiable. The assumption that data is locally, yet exclusively, available is spread out everywhere. 
So I don’t believe those codebases will be of much help. Even if database vendors abandon their existing codebases, and try to solely leverage their people to build a new data processing technology from scratch, there are ecosystems grown around those legacy database engines that will keep randomizing the development process and will hold back progress. So those developers will be much less efficient then developers who don’t have to deal with such a legacy baggage.</p> <p>If the above bet comes true, it will trail the following consequence:</p> <blockquote> <p>There will be new companies who will play significant roles in the database industry.</p></blockquote> <img src="" height="1" width="1" alt=""/>Zlatko Michailov in Peace, PFX<p>This is a milestone in my career at Microsoft that marks the end of <a href="http:/2010/02/about-me-microsoft-part-ii.html">About Me @ Microsoft (part II)</a>, but it doesn’t yet mark the beginning of “Part III”. Part III is unclear at this moment. (I hope there will be such a part). I should know more in about a week or two.</p> <p>Briefly, without disclosing any details: </p> <blockquote> <p>The PFX team (Parallel .NET Framework Extensions) no longer exists. </p> </blockquote> <p>By “the PFX team” I mean the people, not the products. The codebase still exists. There is another team that is chartered to maintain it, but no PFX developer is on that team. I don’t know what the future of those products will be, and I doubt there is anybody who knows that at this point. So please don’t ask me. I can only hope innovation in .NET parallelism will continue.</p> <p>Looking back, I feel lucky that I’ve had the rare chance to be part of the PFX team. The knowledge I’ve gained during these two years is priceless. I came with a few ideas I wanted to explore, and now I’m leaving with a lot more. </p> <p>While I’m sad about how abruptly things ended, I’m looking at it positively – it’s the trigger I needed to take the next step in my career. 
I am looking ahead empowered for new adventures. </p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov - First Draft<p>I recently drafted a specification of a new database language - SQL# (pronounced "es-kju-el sharp"). It is available on <a href=""></a> where I plan to publish future updates as well as samples and tools.</p> <p>As far as the name is concerned - I really like it despite of the problems the # sign creates for file names and URLs. This name doesn't seem to be used yet - neither <a href="">Google</a> nor <a href="">Bing</a> find any literal matches. So until someone claims ownership of the name, SQL# it is.</p> <p>Where did this come from? The programmability of Transact-SQL has been a pin in my eye for more than 10 years. My previous creature, <a href="">Z-SQL</a>, was a modest attempt to make T-SQL look and feel more like PL/SQL, but at the end of the day it was still SQL. I've grown to realize that the SQL legacy, in whatever flavor it comes, is what repells application developers. Even I started catching myself staring at seemingly valid SQL code that SQL Server was rejecting. When I realized I was writing in C# inside SQL Server Management Studio, I decided to take a more radical approach - to replace all the awkward SQL constructs with something that comes naturally - C#.</p> <p>Of course, there are constructs like the data manipulation statements, SELECT/INSERT/UPDATE/DELETE, that only exist in SQL and cannot be replaced. We'll have to live with those for now, but for a large set of commonly used constructs like object definitions, variable declarations, loops, cursor traversals, etc. there is no reason why we should continue suffering. On top of this I add one of my favorite C/C++ features that is unfortunately missing in C# - preprocessor directives. 
While I only provision a limited preprocessor functionality, I believe I have captured the key use cases.</p> <p>The most common question people have asked me is: Why SQL# when there is already LINQ? LINQ and SQL target two completely different classes of applications. I don't want to spark yet another discussion on which one is better. I would only say one doesn't exclude the other - LINQ did not (and will not) obsolete SQL especially in these times when the volumes of data organizations deal with grow faster than ever. Just like SQL made its way up to C# in the form of LINQ, I push C# down to SQL. I hope to find a similar reception from the SQL developer community.</p> <p>What are the next steps? At this point I am seeking feedback on the overall usefulness of SQL# as well as on individual features - either proposed or missing. I am in no rush to start implementing any tooling that translates SQL# to Transact-SQL at least not until I am confident I've captured the right feature set. Even then, I will be looking for volunteers to write a compiler, a language service, and other tools. My goal is to continue driving the specification forward and to create an ecosystem of tool vendors.</p> <p>So, if you are a SQL developer and you are interested in shifting your development experience in this direction, please read the spec and log your feedback here. It will be greatly appreciated.</p> <img src="" height="1" width="1" alt=""/>Zlatko Michailov | http://feeds.feedburner.com/ZlatkoMichailov | CC-MAIN-2018-26 | refinedweb | 9,671 | 64.3 |
>>>>> "HG" == Helmut Geyer <Helmut.Geyer@IWR.Uni-Heidelberg.De> writes: HG: 1) The main problem is that XEmacs (19.14) cannot read Emacs HG: (19.34) byte-compiled lisp files, while Emacs can in fact read HG: XEmacs compiled .elc files. If you want to have a directory HG: containing lisp files for both of them (it certainly is HG: possible to support both variants in a single lisp file), you HG: have to ensure that only XEmacs is used for byte-compiling. What is the exact difference between Emacs & XEmacs byte-code? If Emacs byte-code is faster or has any other advantage (why they changed it?), I don't agree with this solution. HG: This could be ensured by using a shell script, say HG: emacs-byte-compile that will use XEmacs if installed and Emacs HG: otherwise). Every package that uses byte-compiling must use HG: this shell script. No. Different packages can have different methods of distinguishing between Emacs and XEmacs for compilation (like EMACS variable, editing Makefile, etc.). How would you cover all of them by single shell script? I think everyone can check `[ -f /usr/bin/xemacs ]' which is simple and portable. I can see no need for special shell script. HG: 2) there are several packages that are neither part of Emacs HG: nor of XEmacs. Currently those packages are only available for HG: Emacs, not for XEmacs (e.g. auctex or tm). To use them with HG: XEmacs you need to hack the packages, although both packages HG: support both variants of GNU Emacs. There are basically two HG: ways to handle this: a) make two debian packages, each HG: supporting a single variant of GNU Emacs. Advantage: simple, HG: easy to maintain. Disadvantage: cluttering up of package HG: namespace and unnecessary use of disk space. HG: b) use a single debian package to HG: support both Emacsen using a special directory added to the HG: load path of both (e.g. /usr/share/emacs/packages). 
The HG: package has to be compiled either at installation time (using No installation time compilation please! It's slow and possibly fragile. HG: the script mentioned above) or has to be built using HG: XEmacs. The later method has the disadvantage that every HG: package maintainer building a package including byte-compiled HG: must have XEmacs installed. Especially until XEmacs stops to conflict with Emacs. :-) HG: Advantage: no namespace or disk space cluttering. HG: Disadvantage: use of a non-standard element in both load paths, HG: more complicated for package maintainers. HG: 3) There is a lot of functionality in elisp packages that is HG: included in some add-ons for emacs while being part of the HG: main xemacs distribution. This should not be needed as it is HG: possible (at least for some packages) to be used by both HG: XEmacs and Emacs. An example for this is vm. As XEmacs 19.15 HG: will come in an unbundled form as well as the current kitchen HG: sink distribution, this problem will be basically the same as HG: the one above. So I propose to leave this problem alone until HG: XEmacs 19.15 is released. HG: 4) single elisp files or packages that are not to be compiled HG: definitely should be in the load path of both Emacsen. This HG: includes e.g. debian-changelog-mode.el. If there are problems HG: with compatibility on the elisp level, they should be fixed on HG: that level. (elisp is perfectly capable of distinguishing HG: between Emacs versions and variants). So I will go even HG: further than simply including /usr/lib/emacs/site-lisp in the HG: load path of XEmacs by proposing there should be a single HG: site-lisp directory used for both Emacs variants. Once debian HG: changes from using FSSTND to the (hopefully soon released) FHS HG: it will be clear where this directory is to be: HG: /usr/share/emacs/site-lisp. Until then I suggest using the HG: Emacs location /usr/emacs/site-lisp for both XEmacs and Emacs. 
I think we should have three directories, something like `emacs', `xemacs', and `emacsen' (or whatever we name them). One for Emacs elc files, one for XEmacs one, and one for shared. It should be leaved on each package maintainer's responsibility whether he makes a single package for both Emacsen (which should be prefered) or decides to make two packages (which may be almost necessary sometimes) or makes only one package for one Emacs (which may be absolutely necessary sometimes). HG: This is a difficult issue, as all maintainers supporting elisp HG: packages have to agree on this. Furthermore there should be a HG: passage in the policy manuals about elisp packages. I agree. I think these problems shouldn't be fatal for avoiding of conflict between `emacs*' and `xemacs*' packages. I hope that will be removed until 1.3. Problems of packages can be solved later. [But *should* be solved. One thing I don't like on Debian is that we discuss some theme and then forget it for a long time and then discuss it again (already discussed things, not much new) and forget it, etc -- see shadow passwords. I think once we start to discuss some problem we should solve it to final solution if possible.] Milan Zamazal -- TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word "unsubscribe" to debian-devel-REQUEST@lists.debian.org . Trouble? e-mail to Bruce@Pixar.com | https://lists.debian.org/debian-devel/1997/02/msg00438.html | CC-MAIN-2016-30 | refinedweb | 919 | 72.76 |
It should not count the ":not(...)" pseudo-class with the classes.
So "a:not(#foo)" has a specificity of ids=0, classes=0, tags=1
The comment "Pseudo elements count as elements" should read "Pseudo classes count as classes", and the following line should add to .classes and not to .tags
Pseudo elements (::) should be added to tags - currently they are ignored. We need to find a way to distinguish between pseudo elements and pseudo classes.
The selector string should split on /[ >+~]/
We should pre-parse the selector string to replace "/\[.*\]/\[\]/g" in other words to remove the contents of attribute filters so [a=b] becomes []. This will prevent ~, | or * which are allowed in attribute filters to assume other meanings.
We should check that namespace|div is counted as div. I am fairly sure that it is, but it's worth checking.
Created attachment 471470 [details]
Visual map of selector spec
While digging through the spec I made some notes, which I've tidied up and scanned in case anyone else needs to do the same thing, or in case I lose the original.
Marking blocking+, with the rationale that if we ship Firefox 4 with a CSS Inspector, it really needs to behave correctly. Having the browser and inspector disagree about how an element is being styled would be terribly confusing.
Inspector feature postponed. Removing blocking flags from Inspector bugs.
Removing items from kd4b6.
Reprioritizing bugs. You can filter the mail on the word TEABAGS.
Joe said that the CSS Doctor does a much better job of explaining specificity so we will be removing specificity from the Style Inspector.
Closing this bug as invalid in favor of bug 653084
Reopened because specificity is needed by the style inspector to discover which is the currently applied rule for a CSS property.
We should also consider some of the fixes in here
We should be using exactly the same rules to calculate css specificity as Firefox because the style inspector and browser should *never* disagree.
Looking at nsCSSSelector::CalcWeightWithoutNegations it seems like Firefox fails to calculate -moz-any() specificity correctly (bug 561154).
Thinking about this, it is probably better if we could expose specificity through nsIDOMCSSRule, this way we would be less prone to future bugs, we could use the same weighting as Fx and the style inspector would be showing what the browser is actually doing.
FWIW: Here is the scoring from nsCSSSelector::CalcWeightWithoutNegations:
weight = 0
if (tag)
weight += 1
if (id)
weight += 65536
if (class)
weight += 256
if (pseudoclass)
weight += 256
if (pseudoelement)
weight += 256
Bug 682318 logged.
Bug triage, filter on PEGASUS.
Seems like the platform code has been changed slightly:
let weight = 0;
for each (tag in selector) {
weight += 0x000001;
}
for each (id in selector) {
weight += 0x010000;
}
// FIXME (bug 561154): This is incorrect for :-moz-any(), which isn't
// really a pseudo-class. In order to handle :-moz-any() correctly,
// we need to compute specificity after we match, based on which
// option we matched with (and thus also need to try the
// highest-specificity options first).
for each (class || pseudoClass || pseudoElement in selector) {
weight += 0x000100;
}
In the absence of an API to get a CSSSelector's specificity we can use this algorithm.
Created attachment 601257 [details] [diff] [review]
Patch
This covers everything, including pseudo-classes and pseudo-elements. Joe, I have used replace() as opposed to simply counting the strings as this method is less error prone.
Created attachment 601260 [details] [diff] [review]
Patch 2
Removed hilariously obvious bug.
Created attachment 601261 [details] [diff] [review]
Patch 3
What the heck is wrong with me? Added missing comment.
Comment on attachment 601261 [details] [diff] [review]
Patch 3
Review of attachment 601261 [details] [diff] [review]:
-----------------------------------------------------------------
::: browser/devtools/styleinspector/CssLogic.jsm
@@ +82,5 @@
> +const RX_PSEUDO_CLASS_OR_ELT = /(:[\w-]+\().*?\)/g;
> +const RX_CONNECTORS = /\s*[\s>+~]\s*/g;
> +const RX_ID = /\s*#\w+\s*/g;
> +const RX_CLASS_OR_ATTRIBUTE = /\s*(?:\.\w+|\[.+?\])\s*/g;
> +const RX_PSEUDO = /\s*:?:([\w-]+)(\(?\)?)\s*/g;
I'm possibly in the minority here, but I'd think that placing the regexes 1400 lines away from where they are used doesn't help readability.
Could we put them closer?
I don't know enough about JS compilers, but will I actually slow things down if we put them inline?
I agree that it makes the code less readable, but it is almost 50% faster doing it this way in the latest nightly as, if we don't cache the regexes, a new RegExp object is created every time one is encountered.
I used for comparison. In previous reviews I have been told to move regexes to the top for this reason.
try=green | https://bugzilla.mozilla.org/show_bug.cgi?id=592743 | CC-MAIN-2016-36 | refinedweb | 767 | 62.88 |
Introduction: Use the Force... or Your Brainwaves? (Multifunctional Thought-Controlled System)!!
Step 8: My Plan
This is an older picture of my plan, but the build succeeded, so I'm very happy with the results. The headset transmits Bluetooth (BT) signals to the Arduino Mega, which analyzes the incoming bytes and, depending on the user's thoughts, controls the different features. It was very hard to find the best way to transmit this much data, but I chose Bluetooth instead of WiFi. At first I wanted to use the Particle Photon as the data transmitter, but that little guy got a better role running the web server. That was the biggest modification in the whole project. (The picture shows the actual plan.) I used homemade Arduino modules, because I love designing my own circuits. You can buy these modules online if you want, or build them yourself with me.
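The byte-analysis step on the Mega can be sketched like this. This is my own illustrative parser, not the original project's code, and it assumes the headset speaks NeuroSky's standard ThinkGear serial protocol: two 0xAA sync bytes, a payload length, payload rows (0x04 = attention, 0x05 = meditation, rows with codes ≥ 0x80 carry their own length byte), and a one-byte checksum that is the inverted sum of the payload.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct EegReading {
    int attention = -1;   // 0..100, -1 if not present
    int meditation = -1;  // 0..100, -1 if not present
    bool valid = false;   // true only if sync and checksum pass
};

// Parse one ThinkGear packet: 0xAA 0xAA <len> <payload...> <checksum>
EegReading parsePacket(const uint8_t* buf, size_t n) {
    EegReading r;
    if (n < 4 || buf[0] != 0xAA || buf[1] != 0xAA) return r;
    uint8_t len = buf[2];
    if (len > 169 || n < static_cast<size_t>(len) + 4) return r;  // 169 = spec max payload
    uint8_t sum = 0;
    for (size_t i = 0; i < len; ++i) sum += buf[3 + i];
    if (static_cast<uint8_t>(~sum) != buf[3 + len]) return r;     // checksum mismatch
    // Walk the payload rows and pick out the eSense values.
    for (size_t i = 0; i < len; ) {
        uint8_t code = buf[3 + i++];
        if (code >= 0x80) {               // extended rows (raw wave, EEG power) carry a length
            uint8_t vlen = buf[3 + i++];
            i += vlen;                    // this sketch skips them
        } else {
            uint8_t value = buf[3 + i++];
            if (code == 0x04) r.attention = value;
            if (code == 0x05) r.meditation = value;
        }
    }
    r.valid = true;
    return r;
}
```

On the Arduino itself you would feed this from `Serial1.read()` bytes as they arrive, then switch features on and off when `attention` crosses a threshold.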
Step 9: A Half Year of Learning and Experimenting
I worked a lot on this project and yes, I started almost half year ago, but with the Photon things accelerated. It was very easy to use their dashboard. By the way there are many tutorials on the internet how to hack EEG toys, that helped me a bit, but they didn't have any extra functions. So I decided that I'll hack this Necomimi toy and using my creativity I created this device that has much more features than blinking a LED. Hope you'll enjoy!
Let’s get started!
Step 10: Gathering Tools and Parts
All of my parts are from the GearBest.com. It's an online store just like the eBay, but the parts arrived much more faster. So if you want to build you own game controller, robot controller or Force detector start with buying the parts! (click on them to view purchase links)
I strongly recommend to buy the parts now, because on the site is a 2 year anniversary celebration with extremly low prices. If you need Arduinos or anything else you can buy now for half price.
Tools Needed:
- soldering iron
- solder
- glue gun
- self-adhesive tape/double sided tape
- wire stripper
- wire cutter
- rotary tool
Hardware:
- Arduino Mega
- Arduino UNO / Arduino Nano
- Arduino Leonardo // must buy to build the project
- Particle PhotonJumper Wires
- Relay Module
- Necomimi Brainwave Cat Ears// must buy to build the project
- HC-05 Bluetooth Module
- HC-06 Bluetooth Module// must buy to build the project
- Motor Driver Circuit
- Breadboard
- LEDs
- Robot Chassis
Software:
Step 11: Hacking the Necomimi Toy
We want to modify this EEG toy to transmit data via Bluetooth so first take apart the case. The screws are under the sticker. Remove the sticker and the back of the device and you’ll find inside a small circuit. The one that is under the main circuit is the Neurosky TGAM chip. This is connected with four header pins to the main microcontroller board so take a soldering iron and remove this circuit carefully. Now solder three wires to the GND-pin to the VCC-pin and to the T-pin. The T-pin is the transmitter pin with 57600 baud rate, this sends data packets to our microcontroller, but I connected this directly to a HC-06 slave BT module. The HC-06 is set to 9600 baud rate but don’t worry we’ll fix this problem. If you soldered the three wires to the you can build in your own rechargeable power source. I used a 500mAh Li-Ion battery, a USB charger circuit, a 5v step up circuit and two resistors (100 ohm and 200 ohm) to ensure a perfect 3.4 volt power supply for the chip and for the Bluetooth module. Follow the schematic to build the circuit that is needed in the headset. If the circuit is done configure the Bluetooth module.
Step 12: About the Chip
Step 13: The Schematic / Electronics
Follow the instructions and the pictures below and create your own wireless EEG headset. Each comment is for one picture.
- Solder wires to GND, VCC and T pins.
- Wire soldered to pin "T".
- Voltage divider to reduce 5 volts to 3.3 volts for the chip.
- The Lithium Charger, the 5v step up, the voltage divider and the chip together.
- Use header pins to connect them to the Bluetooth module.
- The done electronics, ready to play...
Isn't so hard to build...
You'll need a 500mAh Lithium-Ion battery, a 5v voltage step-up module to create a stable voltage then a 100 Ohm and 200 Ohm resistor to reduvóce voltage to stable 3.3 Volts. I soldered female header pins to power up and transmit data to the bluetooth module.
The most important part is to connect the the "T" pin with the "RX" on the Bluetooth module. This brings our project to life.
Step 14: Configuring Bluetooth Modules
HC-06: First load up the sketch named “HC_06_Bluetooth” to an Arduino UNO then connect the Bluetooth module as the schematic shows. I found this schematic on Artronix team's website and seemed useful.
Open your Serial Monitor in the Arduino IDE and wait until the Arduino configures the BT module. Now your Bluetooth module is set to 57600 baud rate. You can try out a lot BCI (Brain Computer Interface) apps, because this hacked bluetoothNecomimi toy will be compatible with every Neurosky apps.
//HC-06-Bluetooth
void setup() { <br>
// Start the hardware serial.
Serial.begin(9600); // default HC-06 baud rate
delay(1000);
Serial.print("AT");
delay(1000);
Serial.print("AT+VERSION");
delay(1000);
Serial.print("AT+PIN"); // to remove password
delay(1000);
Serial.print("AT+BAUD7"); // Set baudrate to 576000 - eg Necomimi dafault
delay(1000);
Serial.begin(57600); //
delay(1000);
}
void loop() {
}
You can use these apps to learn to control your brainwaves and values, like attention or meditation. You can try a lot of cool games, but I recommend these apps (they are compatible with PC, iOS and Android):......
HC-05: Then use the "HC_05_Bluetooth" and load up to your Arduino the same way like before. Connect "EN" pin of the HC-05 to the 3v3 of the Arduino. You should write the adress of your HC-06 module in the code. Check the adress of the BT module with an Android smartphone, like me. Replace ":" (double dottles) with "," commas in the code.
//HC-06-Bluetooth<br>void setup() {
// Start the hardware serial.
Serial.begin(9600); // default HC-05 baud rate
delay(1000);
Serial.print("AT");
delay(1000);
Serial.print("AT+VERSION");
delay(1000);
Serial.println("AT+ROLE=1"); //set the HC-05 to master mode
delay(1000);
Serial.println(" AT+LINK="copy here your adress"); //now the module should connnect automatically
delay(1000);
Serial.print("AT+UART=57600,1,0"); // Set baudrate to 576000
Serial.begin(57600); //
delay(1000);
}
void loop() {
}
And you should change the adress in the code: 20:15:09:15:17:82 ==> 2015,09,151782 This way the HC-05 module can recognise the adress. So just leave some ":", because they aren't neccesary.
Step 15: The Remote Controlled Robot
To make this inexpensive robot I used 38kHz IR technology, that is used in TV remote controllers. I bought from Gearbest a chassis for my robot. The IR receiver is salvaged from an old TV and the Arduino is also from the Gearbest.
The motor driver circuit-You’ll need these parts:
- 2 Screw Terminals
- L293D IC3
- Header Pin (90 degrees)
- 1k Resistor
- Red LED
- Wires
- A PCB Board
I used some copper wires and following the schematic connected the pins of the IC to the header pins. Doesn't matter that which pin goes to which header pin, just remember that were did you connected them. The LED is connected in series with the resistor and in paralell with the 5v VCC.
Step 16: Assemble the Chassis
This step was very easy, used a screwdriver and some crafting skills I built the frame in five minutes. After this is coming the harder part, we should build a motor driver circuit. To control my motors I choose the L293D IC that can drive two motors. Look at the schematic to build circuit.
Connect the parts to the Arduino
I used jumper wires to connect the sensor and the motor driver to the Arduino.
Arduino Pin ==> Motor Driver
4 ==> 15
5 ==> 10
6 ==> 7
7 ==> 2
VIN ==> 8
5v ==> 1, 9, 16
GND ==> 4, 5, 13, 12
So look at the schematic of the L293D module, then connect its pins to the Arduino UNO as I wrote here. The 5v pin of the Arduino should be connected to the 1, 9, 16 pins to enable the IC's motor driver function. Then finally use the screwterminals to power up the motors.
Step 17: Writing Codes…
Using the IRremote library I created a code that reads 38kHz infrared signals, decodes them, then moves the robot. (Download the library at the Code section).
I added explanations in the code, but the substance is that decodes IR signals coming from the main server, then depending on what the user wants, move the motors than will move the robot forward or turns left. Downloload the code:"Robot_Code". Load up this to your Arduino and and your robot will complete.
Step 18: The Main Server (Arduino Mega, Leonardo, Photon)
The server reads incoming data packets from the wireless headset. We’ll use the configured BT module to ensure the connection between the headset and the server. The Arduino Mega is the brain of the circuit, everything is connected to this microcontroller: Bluetooth, infrared transmitter LED, webserver, and mouse controller. The code is a bit complicated, but I added explanations to understand.
Step 19: The Case
It’s simple but looks great. I cut two 18x15 cm plates the smoothened the edges with sandpaper. Using some screws, I connected them to each other.
Step 20: The Relay Circuit
You’ll need these parts:
- 2n2222 transistor (3 PCS
- )germanium diodes (3 PCS)
- 330 ohm resistors (3 PCS)
- 1k Ohm resistors (3 PCS)
- 5v relays (3 PCS)
- hook-up wiresheader pins
- PCB board
- A picture worth more than a thousand words, so instead of trying to writing how should you connect the parts on the PCB look at the schematic If the “Signal” pin gets signal from the Arduino the relay will turn on. The transistor amplifies the signal to ensure enough power fot the relays. We will use 37-38-39 pins to control HIGH-LOW levels of each relay.
Step 21: Install Parts in the Case
To install the parts on the plexy-glass case I used some double sided white tape. This loks noce and hold the circuits on the case pretty strong.
Step 22: The Server-Circuitry
This server is software based so making the circuit isn’t so hard. (See schematic downer). You just need to power up the microcontrollers and make connection between them. The IR LED is connected to pin D3 and the relays to 37-38-39. D16 of the Arduino Mega goes to RX of the Photon and the D18 to the RX of the Leonardo.
Step 23: Connection With the Robot
The IR Leds are connected to the digital pin D3 and with the IRremote library we send codes to the robot. It's pretty simple. The IR codes must be the same in the robot's code. If you think you're done you can test it with your camera. The infrared light looks purple on a photo, cameras can detect the IR light. This trick always works.
Step 24: The Code
Use the "Arduino_Mega_Server" code in the Software part.
I suffered a lot while wrote the codes. I should program three different microcontrollers, so it was a big challenge. I used IRremote, Brain, SofwareSerial and Mouse libraries. (see download links at the Software part). But now the code is done and works so you just should upload to your microcontrollers. Download the .ino file or copy/paste the code in your IDE and use it. The code to the Particle Photon should be uploaded via the browser IDE. To make this register to theParticle Build. And connect your laptop to your microcontroller. I was really surprised that this happened almost automatically, I just added my Device ID number.
Before loading up the codes be sure nothing is connected to the RX/TX pins. So disconnect your Bluetooth module from the Mega, and disconnect the Mega from the Leonardo and the Photon.
Step 25: The Mouse Controller
The Arduino Leonardo controls the mouse.
Step 26: The Webserver
I wanted to add an IoT (Internet of Things) function in my project, so I made anonline data logger using the Particle Photon. Depending on what you make with the device the Photon creates a personal server, and writes the data in the Cloud.
This may seem scary at first time, but imagine that you can detect if you are stressful (attention level increases and decreases quickly) or if you should sleep (meditation level is always higher than 80). This webserver may help you to live healthlier.
Step 27: Coding in Browser
The Particle also has an online dashboard where you can publish any kind of data with the "Particle.publish();" syntax. We should have to say a big "thank you" for the developers of the dashboard. They saved a lot of time for us.
Writing codes in a browser?
It was intresting to write codes on a web page, but worked the same way like the normal IDE. The code was uploaded wirelessly. This microcontroller would be a very useful tool in everyone's toolbox.
Step 28: Webserver Appearance
The dashboard looks somehow like this if you worked fine. Shows Attention and Meditation, and you can check them anytime, at your dashboard URL. For more info about the dashboards click here.
Step 29: A Quick Upgrade for Those Who Want to Control Only a Robot With Mindwaves
You don't want to build the build the whole project, only a mindwave controlled robot? Don't worry if you're a beginner I thought even on you. I made a code with explanations that needs an Arduino Mini Pro and an IR LED and of course the headset to control the robot. I know that many of my readers wants to make a simple and fun weekend project so here's the description and the code for you.
Use the "RobotControlllerHeadset" code inthe Software section. Connect an IR LED to the D3 pin on the Mini Pro (Or Arduino UNO) and connect the T pin of the NeuroSky chip to the RX pin of the Arduino. Power up with 3.3 or 5 volts and you're done. So build the robot, then use this code in your headset:
#include <irremote.h><br>
#include <irremoteint.h></irremoteint.h></irremote.h>
#include <brain.h></brain.h>
IRsend irsend;
Brain brain(Serial);
const int ledPin = 3;
long interval = 500;
long previousMillis = 0;
int ledState = LOW;
int medValue;
void setup() {
// Set up the LED pin.
pinMode(ledPin, OUTPUT);
// Start the hardware serial.
Serial.begin(57600);
}
void loop() {
// Expect packets about once per second.
if (brain.update()) {
Serial.println(brain.readCSV());
// Attention runs from 0 to 100.
medValue = brain.readMeditation();
}
// Make sure we have a signal.
if(brain.readSignalQuality() == 0) {
// Send a signal to the LED.
if (medValue > 65) {
irsend.sendNEC(0xFF10EF, 32);
delay(40);
}
if (brain.readAttention() > 65) {
irsend.sendNEC(0xFF18E7, 32);
delay(40);
}
}
}
Step 30: Becoming an IoT Jedi (Testing)
After making everything that I explained here, only one thing is left: try it out. Use the power of your mind, use the Force of the neurons in your brain, move your little robot, control your home, control everything with thoughts...
Now you become a real IoT Jedi!
The device has many features and it's very fun to play with. But as I said before it isn't only a toy, featuring biology and electronics we created a device that may help for people, who fight each day with their disabilities, to get back their natural capabilities.
The robot can be controlled easily, the relays aren't big challange too. You only should learn how to control your attention levels to control the mouse. On the picture I'm trying to exit an app with my brainwaves. More practice guarantees better experience.
Step 31: Thank You for Watching!
I really hope that you liked my project and please write your opinion in comments. I wanted to make a video about the device, but coding was hard so I haven't enough time to make a short movie, but I'll publish it very soon on my Youtube channel and I'll update this project presentation too.
Being the richest man in the cemetry doesn't matter to me... Going to bed at night saying we've done something wonderful... That matters to me - Steve Jobs
May the Force be with you, always!
Recommendations
We have a be nice policy.
Please be positive and constructive.
23 Comments
Hey man! This indeed is an amazing project. I am thinking of working on it and make one. It would be of great help if I could somehow contact you. By E-mail or anything, to get some help if I hit any kind of a setback.
This i'ble shows that you're a real maker. But, honestly, it's much too long. It would be nice to see just a single part that concentrates on the brainwave analysis without all the rest.
Yes, but I made this for the Hungarian Innovation Contest, that's why is so long :). I should make something that is a "big hit".
How did you do in the hungarian innovation contest?
Yes! I guess the brainwaves only would be something to attract others. Anyhow, this is an excellent work!
Thank you!
Hey, do you have facebook or something? :)
Great tutorial!! definitely one of my favs
this is the first step to become the Ironman...
where's the video. | http://www.instructables.com/id/Use-the-Force-or-Your-Brainwaves-multifuctional-Th/ | CC-MAIN-2018-13 | refinedweb | 3,014 | 73.88 |
Hash Function for Application Types
Although Java has a default hashCode() method that is inherited by every class, it may be desirable to write a custom hashCode() method for application types:[example from Wikipedia.]
public class Employee{ private int employeeId; private String firstName; private String lastName; private Department dept; public int hashCode() { int hash = 1; hash = hash * 31 + lastName.hashCode(); hash = hash * 29 + firstName.hashCode(); hash = hash * 17 + employeeId; hash = hash * 13 + (dept == null ? 0 : dept.hashCode()); return hash; } }
This illustrates a good pattern for combining hash codes: multiply one by a prime and add the other. XOR can also be used. | http://www.cs.utexas.edu/users/novak/cs314210.html | CC-MAIN-2016-26 | refinedweb | 102 | 54.93 |
This article contains one file MultiTimer.h which allows you to easily time parts of your code. It accumulates times on multiple calls and produces an output summary with all the totals and number of calls for each timer. There are many similar articles here and elsewhere, I thought it worthwhile to add this article though, as I think my timer class adds a few extra features and is very easy to use.
void testfun()
{
printf("Hello World!\n");
}
int main(int argc, char* argv[])
{
for (int i = 0; i < 10; i++)
testfun();
return 0;
}
This isn't the best example of code to time, but it is a good example to show how the timing code is used. See the usage guidelines below for ideas on how to time your code.
The following steps are needed to add timing code:
#include "MultiTimer.h"
TIMER_DUMP
After adding some test timing code, the above example looks like this:
#include "MultiTimer.h"
void testfun()
{
TIME_LINE(printf("Hello World!\n");)
}
int main(int argc, char* argv[])
{
{
TIME_SCOPE("total test")
for (int i = 0; i < 10; i++)
TIME_FUN(testfun())
}
TIMER_DUMP("c:\\times.txt", SORT_CALLORDER)
return 0;
}
This produces the following output file:
-----------------------------------------------------------------
Timings at 10:41:56 Friday, January 03, 2003
-----------------------------------------------------------------
total test took 606.80 micro secs (called 1 time)
testfun() took 591.42 micro secs (called 10 times)
printf("Hello World!\n"); took 561.52 micro secs (called 10 times)
The results are pretty straightforward. If you download the demo code, you will see further code which breaks down the times for the testfun() calls. Here is a sample output from this code:
testfun()
Total: 668.20 micro secs
Minimum: 39.39 micro secs
Maximum: 302.55 micro secs
Average: 66.82 micro secs
-----------------------------------------------------------------
testfun() at 10:22:06 Friday, January 03, 2003
-----------------------------------------------------------------
Sample 1 = 302.55 micro secs
Sample 2 = 58.11 micro secs
Sample 3 = 55.87 micro secs
Sample 4 = 41.63 micro secs
Sample 5 = 42.46 micro secs
Sample 6 = 41.07 micro secs
Sample 7 = 42.46 micro secs
Sample 8 = 39.39 micro secs
Sample 9 = 50.29 micro secs
Sample 10 = 43.02 micro secs
You can see from this that the first printf takes a lot longer on Windows 2000, I guess some initialization is going on.
printf
From the above example, you can see that this timing code is largely macro driven, here is a list of the macros used:-
This needs to be included in one of the .cpp files in a project, at a global scope. It sets up the storage for the timing code.
Place this in your code to time from that point to the end of the current scope. name is a string, which is used in the output to identify these times.
name
Replace a function call, fun(), with this macro to time the call.
fun()
Wrap any line with this macro to time it. Avoid declaring any variables in the line, as this macro puts the line in a separate scope.
These macros allow you to start and stop timers manually. See the demo for an example.
Dump the current times. If dest is a file name, then the output is written to this file. Otherwise it can be:
dest
OUT_MSGBOX
OUT_TRACE
OUT_CONSOLE
stdout
The output can be sorted in 3 ways:
SORT_TIMES
SORT_CALLORDER
SORT_NAMES
This allows 3 extra mode values:
FILE_NEW
FILE_APPEND
FILE_PREPEND
This is useful if you want to compare several test runs.
This is the same as TIMER_DUMP_FILE but the output is in comma separated format.
TIMER_DUMP_FILE
The TIMER_DUMP_SAMPLES macros dump all the samples for a specified timer, this can be it's name or a CHiFreqTimer object. See the demo for an example.
TIMER_DUMP_SAMPLES
CHiFreqTimer
There are 3 TIMER_DUMP_SAMPLES macros which correspond to the TIMER_DUMP macros, except there is no sort argument - the samples are dumped in ascending order.
Return a CHiFreqTimer object for timer name. Further statistics can then be used, see the demo for an example.
Converts a time returned from a CHiFreqTimer call to a meaningful string, see the demo for examples.
Removes all timers.
At the top of MultiTimer.h is a #define USE_TIMERS. Commenting this out will compile out any timing code you may have added.
#define USE_TIMERS
You can put timing code in more than one file. Every file needs to include multitimer.h and all but the first must put the following line before the #include:
#include
#define ANOTHER_TIMER_FILE
It is sometimes useful to know how many clock ticks a piece of code took. I've added some macros which convert the time taken to clock cycles, first you must uncomment the #define _CLOCKTICKS line at the top of multitimer.h file. They are:-
#define _CLOCKTICKS
Converts a time to a string containing the number of clock cycles taken.
Converts a time to a string containing the time and the number of cycles.
This is an int which represents your machine speed, e.g.. 733. This figure is accurate to 1 or 2 of your specific machine speed.
int
The idea of this type of code is to identify slow parts of your projects. If a part of your project is slow, it should help you to narrow down which bit of your code is causing the problem. You should avoid timing very quick statements - the timings are likely to be inaccurate. Also don't time all of your code, focus on areas that are slow and need to be sped up. It can also be helpful to know how many times a particular line or function is called, as this is often difficult to find out from the code alone.
Ideally release code should always be used to get accurate timings, the extra checking and lack of optimizations in debug code can cause misleading results. Other running processes can also affect times, for example virus scanners can skew file access times.
Any code added to time a file will itself take some time. This code tries to compensate for this but with very large number of timers and/or timing calls, a small loss of accuracy may result.
The QueryPerformanceCounter() call is used to time things. This is only supported on Pentiums. If you're running on an older machine, upgrading will probably speed things up...
QueryPerformanceCounter()
TIMER_START
TIMER_STOP
USE_TIMERS #define
TIMER_DUMP_CSV
TIMER_DUMP_SAMPLES_FILE
TIMER_DUMP_SAMPLES_CSV
TIMER. | http://www.codeproject.com/Articles/3038/Multiple-Performance-Timer?fid=12350&df=90&mpp=10&sort=Position&spc=None&tid=465305 | CC-MAIN-2015-22 | refinedweb | 1,062 | 75 |
marble
#include <GeoDataLinearRing.h>
Detailed Description
A LinearRing that allows to store a closed, contiguous set of line segments.
GeoDataLinearRing is a tool class that implements the LinearRing tag/class of the Open Geospatial Consortium standard KML 2.2.
Unlike suggested in the KML spec GeoDataLinearRing extends GeoDataLineString to store a closed LineString (the KML specification suggests to inherit from the Geometry class directly).
In the QPainter API LinearRings are also referred to as "polygons". As such they are similar to QPolygons.
Whenever a LinearRing is painted GeoDataLineStyle should be used to assign a color and line width.
A GeoDataLinearRing consists of several (geodetic) nodes which are each connected through line segments. The nodes are stored as GeoDataCoordinates objects.
The API which provides access to the nodes is similar to the API of QVector.
GeoDataLinearRing allows LinearRings to be tessellated in order to make them follow the terrain and the curvature of the earth. The tessellation options allow for different ways of visualization:
- Not tessellated: A LinearRing that connects each two nodes directly and straight in screen coordinate space.
- A tessellated line: Each line segment is bent so that the LinearRing follows the curvature of the earth and its terrain. A tessellated line segment connects two nodes at the shortest possible distance ("along great circles").
- A tessellated line that follows latitude circles whenever possible: In this case Latitude circles are followed as soon as two subsequent nodes have exactly the same amount of latitude. In all other places the line segments follow great circles.
Some convenience methods have been added that allow to calculate the geodesic bounding box or the length of a LinearRing.
Definition at line 67 of file GeoDataLinearRing.h.
Constructor & Destructor Documentation
Creates a new LinearRing.
Definition at line 21 of file GeoDataLinearRing.cpp.
Creates a LinearRing from an existing geometry object.
Definition at line 26 of file GeoDataLinearRing.cpp.
Destroys a LinearRing.
Definition at line 31 of file GeoDataLinearRing.cpp.
Member Function Documentation
Returns whether the given coordinates lie within the polygon.
- Returns
trueif the coordinates lie within the polygon, false otherwise.
Definition at line 58 of file GeoDataLinearRing.cpp.
Returns whether the orientaion of ring is coloskwise or not.
- Returns
- Return value is true if ring is clockwise orientated
Definition at line 86 of file GeoDataLinearRing.cpp.
Returns whether a LinearRing is a closed polygon.
- Returns
truefor a LinearRing.
Reimplemented from Marble::GeoDataLineString.
Definition at line 46 of file GeoDataLinearRing.cpp.
Returns the length of the LinearRing across a sphere.
As a parameter the planetRadius needs to be passed.
- Returns
- The return value is the length of the LinearRing. The unit used for the resulting length matches the unit of the planet radius.
This method can be used as an approximation for the circumference of a LinearRing.
Reimplemented from Marble::GeoDataLineString.
Definition at line 51 of file GeoDataLinearRing.cpp.
Definition at line 41 of file GeoDataLinearRing.cpp.
Returns true/false depending on whether this and other are/are not equal.
Definition at line 35 of file GeoDataLinearRing.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2016 The KDE developers.
Generated on Sat Dec 3 2016 01:15:44 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdeedu-apidocs/marble/html/classMarble_1_1GeoDataLinearRing.html | CC-MAIN-2016-50 | refinedweb | 549 | 51.44 |
What am I doing wrong here? I've tried this in the main, I've tried it in a function using a return statement, and here is my attempt to pass a pointer to a character (a math operator--this is supposed to be a calculator program) into a loop. It goes in the first time, but then after that it ignores the input of the character and just prints out the statements without waiting for input. Can you look at this program fragment and tell me what to do to fix it?
Thanks a lot.
Donna
#include <stdio.h>
void get_operator(char *lop);
int get_operand();
void main(){
// int num1=0;
char op;
int num2=0;
// int accum=0;
int i;
for (i =1; i < 4; ++i){
get_operator(&op);
printf("the operator is %c\n", op);
num2 = get_operand();
printf("operand is %d\n", num2);
}
printf("the final result is %d\n", num2);
}
void get_operator( char *lop){
printf("enter a math operator or q to quit> ");
scanf("%c", lop);
}
int get_operand(){
int lnum2; /* local variable for operand */
printf("enter an integer >");
scanf("%d", &lnum2);
return lnum2;
}
} | http://cboard.cprogramming.com/c-programming/3202-scanf-char-doesn't-work-loop-printable-thread.html | CC-MAIN-2016-30 | refinedweb | 185 | 64.34 |
Azure DevOps Server 2020 Release Notes
Azure DevOps Server 2020.0.1 Patch 6 Release Date: September 14, 2021
Patch 6 for Azure DevOps Server 2020.0.1 includes fixes for the following.
- Fix Artifacts download/upload failure.
- Resolve issue with inconsistent Test Results data.
Azure DevOps Server 2020.0.1 Patch 5 Release Date: August 10, 2021
Patch 5 for Azure DevOps Server 2020.0.1 includes fixes for the following.
- Fix build definition UI error.
- Changed browsing history to display files instead of the root repository.
- Fix issue with email delivery jobs for some work item types.
Azure DevOps Server 2020.0.1 Patch 4 Release Date: June 15, 2021
Patch 4 for Azure DevOps Server 2020.0.1 includes fixes for the following.

Azure DevOps Server 2020.0.1 Patch 3 Release Date: May 11, 2021
We have released a patch for Azure DevOps Server 2020.0.1 that fixes the following.
- Inconsistent Test Results data when using Microsoft.TeamFoundation.TestManagement.Client.
If you have Azure DevOps Server 2020.0.1, you should install Azure DevOps Server 2020.0.1 Patch 3.
Verifying Installation
Option 1: Run devops2020.0.1patch3.exe CheckInstall to verify whether the patch is installed.

Option 2: Check the installed product version; if it lists devops2020.0.1patch3, the version will be 18.170.31228.1.
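As an illustrative aside (not part of the official patch instructions), the version check in Option 2 can be scripted. The sketch below assumes you have copied the product version string from the server; the comparison relies on `sort -V`, which orders dotted version strings numerically:

```shell
# Illustrative only: compare an installed product version against the
# patched version quoted in Option 2 (18.170.31228.1).
installed="18.170.31228.1"   # replace with the version read from the server
required="18.170.31228.1"
# sort -V orders dotted version strings; the first line is the smaller one.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  result="patched"
else
  result="not patched"
fi
echo "$result"   # prints "patched" when installed >= required
```

The same comparison works for any of the patch versions listed on this page by swapping in the corresponding version string.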
Azure DevOps Server 2020.0.1 Patch 2 Release Date: April 13, 2021
Note
If you have Azure DevOps Server 2020, you should first update to Azure DevOps Server 2020.0.1 . Once on 2020.0.1, install Azure DevOps Server 2020.0.1 Patch 2
We have released a patch for Azure DevOps Server 2020.0.1 that fixes the following.
- CVE-2021-27067: Information disclosure
- CVE-2021-28459: Elevation of privilege
To implement fixes for this patch you will have to follow the steps listed below for general patch installation, AzureResourceGroupDeploymentV2 and AzureResourceManagerTemplateDeploymentV3 task installations.
General patch installation
If you have Azure DevOps Server 2020.0.1, you should install Azure DevOps Server 2020.0.1 Patch 2.
Verifying Installation
Option 1: Run devops2020.0.1patch2.exe CheckInstall to verify whether the patch is installed.

Option 2: Check the installed product version; if it lists devops2020.0.1patch2, the version will be 18.170.31123.3.

AzureResourceManagerTemplateDeploymentV3 task installation
Note
All the steps mentioned below need to be performed on a Windows machine
Install
Extract the AzureResourceManagerTemplateDeploymentV3.zip package to a new folder on your computer. For example: D:\tasks\AzureResourceManagerTemplateDeploymentV3.

Azure DevOps Server 2020.0.1 Patch 1 Release Date: February 9, 2021
We have released a patch for Azure DevOps Server 2020.0.1 that fixes the following. Please see the blog post for more information.
- Resolve the issue reported in this Developer Community feedback ticket| New Test Case button not working
- Include fixes released with Azure DevOps Server 2020 Patch 2.
Azure DevOps Server 2020 Patch 3 Release Date: February 9, 2021
We have released a patch for Azure DevOps Server 2020 that fixes the following. Please see the blog post for more information.
- Resolve the issue reported in this Developer Community feedback ticket| New Test Case button not working
Azure DevOps Server 2020.0.1 Release Date: January 19, 2021
Azure DevOps Server 2020.0.1 is a roll up of bug fixes. You can directly install Azure DevOps Server 2020.0.1 or upgrade from an existing installation. Supported versions for upgrade are Azure DevOps Server 2020, Azure DevOps Server 2019, and Team Foundation Server 2012 or newer.
This release includes fixes for the following bugs:
- Resolve an upgrade problem from Azure DevOps Server 2019 where Git proxy may stop working after upgrade.
- Fix System.OutOfMemoryException exception for non-ENU collections prior to Team Foundation Server 2017 when upgrading to Azure DevOps Server 2020. Resolves the issue reported in this Developer Community feedback ticket.
- Servicing failure caused by missing Microsoft.Azure.DevOps.ServiceEndpoints.Sdk.Server.Extensions.dll. Resolves the issue reported in this Developer Community feedback ticket.
- Fix invalid column name error in Analytics while upgrading to Azure DevOps Server 2020. Resolves the issue reported in this Developer Community feedback ticket.
- Stored XSS when displaying test case steps in test case results.
- Upgrade step failure while migrating points results data to TCM.
Azure DevOps Server 2020 Patch 2 Release Date: January 12, 2021
We have released a patch for Azure DevOps Server 2020 that fixes the following. Please see the blog post for more information.
Azure DevOps Server 2020 Patch 1 Release Date: December 8, 2020
We have released a patch for Azure DevOps Server 2020 that fixes the following. Please see the blog post for more information.
- CVE-2020-17145: Azure DevOps Server and Team Foundation Services Spoofing Vulnerability
Azure DevOps Server 2020 Release Date: October 6, 2020
Azure DevOps Server 2020 is a roll up of bug fixes. It includes all features in the Azure DevOps Server 2020 RC2 previously released.
Note
Azure DevOps 2020 Server has an issue with installing one of the assemblies used by the Git Virtual File System (GVFS).
If you are upgrading from Azure DevOps 2019 (any release) or an Azure DevOps 2020 release candidate and installing to the same directory as the previous release, the assembly
Microsoft.TeamFoundation.Git.dll will not be installed. You can verify that you have hit the issue by looking for
Microsoft.TeamFoundation.Git.dll in
<Install Dir>\Version Control Proxy\Web Services\bin,
<Install Dir>\Application Tier\TFSJobAgent and
<Install Dir>\Tools folders. If the file is missing, you can run a repair to restore the missing files.
To run a repair, go to
Settings -> Apps & Features on the Azure DevOps Server machine/VM and run a repair on Azure DevOps 2020 Server. Once the repair has completed, you can restart the machine/VM.
Azure DevOps Server 2020 RC2 Release Date: August 11, 2020
Azure DevOps Server 2020 RC2 is a roll up of bug fixes. It includes all features in the Azure DevOps Server 2020 RC1 previously released.
Azure DevOps Server 2020 RC1 re-release Release Date: July 10, 2020
We are re-releasing Azure DevOps Server 2020 RC1 to fix this Developer Community feedback ticket.
Previously, after upgrading from Azure DevOps Server 2019 Update 1.1 to Azure DevOps Server 2020 RC1, you were not able to view files in the Repos, Pipelines, and Wiki areas of the Web UI. There was an error message indicating "An unexpected error has occurred within this region of the page. You can try reloading this component or refreshing the entire page." With this release we have fixed this issue. Please see the blog post for more information.
Azure DevOps Server 2020 RC1 Release Date: June 30, 2020
Summary of What's New in Azure DevOps Server 2020
Azure DevOps Server 2020 introduces many new features. Some of the highlights include:
- Multi-stage pipelines
- Continuous deployment in YAML
- Track the progress of parent items using Rollup on Boards backlog
- Add "Parent Work Item" filter to the task board and sprint backlog
- New Web UI for Azure Repos landing pages
- Cross-repo branch policy administration
- New Test Plan page
- Rich editing for code wiki pages
- Pipeline failure and duration reports
You can also jump to individual sections to see all the new features for each service:
General
Azure DevOps CLI general availability
In February, we introduced the Azure DevOps extension for Azure CLI. The extension lets you interact with Azure DevOps from the command line. We've collected your feedback that helped us improve the extension and add more commands. We are now happy to announce that the extension is generally available.
To learn more about Azure DevOps CLI, see the documentation here.
Use publish profile to deploy Azure WebApps for Windows from the Deployment Center
Now you can use publish profile-based authentication to deploy your Azure WebApps for Windows from the Deployment Center. If you have permission to deploy to an Azure WebApp for Windows using its publish profile, you will be able to set up the pipeline using this profile in the Deployment Center workflows.
Work item live reload
Previously, when updating a work item, and a second team member was making changes to the same work item, the second user would lose their changes. Now, as long as you are both editing different fields, you will see live updates of the changes made to the work item.
Manage iteration and area paths from the command line
You can now manage iteration and area paths from the command line by using the
az boards iteration and
az boards area commands. For example, you can set up and manage iteration and area paths interactively from the CLI, or automate the entire setup using a script. For more details about the commands and the syntax, see the documentation here.
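As a sketch under assumed defaults (the organization, project, and path names below are illustrative, not from this release note), managing area and iteration paths from the CLI might look like this:

```shell
# Assumes the Azure DevOps CLI extension is installed and defaults are configured:
#   az extension add --name azure-devops
#   az devops configure --defaults organization=https://dev.azure.com/fabrikam project=FabrikamFiber

# Create an area path under the project root (names are illustrative)
az boards area project create --name "Web" --path "\FabrikamFiber\Area"

# Create an iteration with start/finish dates, then list the iteration tree
az boards iteration project create --name "Sprint 36" --path "\FabrikamFiber\Iteration" \
  --start-date 2020-09-01 --finish-date 2020-09-14
az boards iteration project list
```

The same commands can be placed in a script to automate team setup end to end.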
Work item parent column as column option
You now have the option to see the parent of every work item in your product backlog or sprint backlog. To enable this feature, go to Column Options on the desired backlog, then add the Parent column.
Change the process used by a project
Your tools should change as your team does. You can now switch your projects from any out-of-the-box process template to any other out-of-the-box process. For example, you can change your project from using Agile to Scrum, or Basic to Agile. You can find full step-by-step documentation here.
Hide custom fields from layout
You can now hide custom fields from the form layout when customizing your process. The field will still be available from queries and REST APIs. This comes in handy for tracking extra fields when you are integrating with other systems.
Most recent tags displayed when tagging a work item
When tagging a work item, the auto-complete option will now display up to five of your most recently used tags. This will make it easier to add the right information to your work items.
Read-only and required rules for group membership
Work item rules let you set specific actions on work item fields to automate their behavior. You can create a rule to set a field to read-only or required based on group membership. For example, you may want to grant product owners the ability to set the priority of your features while making it read-only for everyone else.
New work item URL parameter
Share links to work items with the context of your board or backlog with our new work item URL parameter. You can now open a work item dialog on your board, backlog, or sprint experience by appending the parameter
?workitem=[ID] to the URL.
Anyone you share the link with will then land with the same context you had when you shared the link!
Mention people, work items and PRs in text fields
As we listened to your feedback, we heard that you wanted the ability to mention people, work items, and pull requests in the work item description area (and other HTML fields), and not just in comments.
You can see an example here.
- To use people mentions, type the @ sign and the person's name you want to mention. @mentions in work item fields will generate email notifications, just like they do for comments.
- To use work item mentions, type the # sign followed by the work item ID or title. #mentions will create a link between the two work items.
- To use PR mentions, add a ! followed by your PR ID or name.
Reactions on discussion comments
One of our main goals is to make the work items more collaborative for teams. Recently we conducted a poll on Twitter to find out what collaboration features you want in discussions on the work item. Bringing reactions to comments won the poll, so we added them! Here are the results of the Twitter poll.
You can add a reaction to any comment, either with the smiley icon at the top right corner of the comment or at the bottom of a comment next to any existing reactions. To remove your reaction, click on the reaction on the bottom of your comment and it will be removed. Below you can see the experience of adding a reaction, as well as what the reactions look like on a comment.
Pin Azure Boards reports to the dashboard
In the Sprint 155 Update, we included updated versions of the CFD and Velocity reports. These reports are available under the Analytics tab of Boards and Backlogs. Now you can pin the reports directly to your Dashboard. To pin a report, hover over it, select the ellipsis (...) menu, and choose Copy to Dashboard.
Track the progress of parent items using Rollup on Boards backlog
Rollup columns show progress bars and/or totals of numeric fields or descendant items within a hierarchy. Descendant items correspond to all child items within the hierarchy. One or more rollup columns can be added to a product or portfolio backlog.
For example, here we show Progress by Work Items which displays progress bars for ascendant work items based on the percentage of descendant items that have been closed. Descendant items for Epics includes all child Features and their child or grand child work items. Descendant items for Features includes all child User Stories and their child work items.
Taskboard live updates
Your taskboard now automatically refreshes when changes occur! As other team members move or reorder cards on the taskboard, your board will automatically update with these changes. You no longer have to press F5 to see the latest changes.
Support for custom fields in Rollup columns
Rollup can now be done on any field, including custom fields. When adding a Rollup column, you can still pick a Rollup column from the Quick list; however, if you want to roll up numeric fields that are not part of the out-of-the-box process template, you can configure your own as follows:
- On your backlog click "Column options". Then in the panel click "Add Rollup column" and Configure custom rollup.
- Pick between Progress Bar and Total.
- Select a work item type or a Backlog level (usually backlogs aggregate several work item types).
- Select the aggregation type. Count of work items or Sum. For Sum you'll need to select the field to summarize.
- The OK button will bring you back to the column options panel where you can reorder your new custom column.
Note that you can't edit your custom column after clicking OK. If you need to make a change, remove the custom column and add another one as desired.
New rule to hide fields in a work item form based on condition
We've added a new rule to the inherited rules engine to let you hide fields in a work item form. This rule will hide fields based on the users group membership. For example, if the user belongs to the "product owner" group, then you can hide a developer specific field. For more details see the documentation here.
Custom work item notification settings
Staying up to date on work items relevant to you or your team is incredibly important. It helps teams collaborate and stay on track with projects and makes sure all the right parties are involved. However, different stakeholders have different levels of investment in different efforts, and we believe that should be reflected in your ability to follow the status of a work item.
Previously, if you wanted to follow a work item and get notifications on any changes made, you would get email notifications for any and all changes made to the work item. After considering your feedback, we are making following a work item more flexible for all stakeholders. Now, you will see a new settings button next to the Follow button on the top right corner of the work item. This will take you to a pop up that will let you configure your follow options.
From Notification Settings, you can choose from three notification options. First, you can be completely unsubscribed. Second, you can be fully subscribed, where you get notifications for all work item changes. Lastly, you can choose to get notified for some of the top and crucial work item change events. You can select just one, or all three options. This will let team members follow work items at a higher level and not get distracted by every single change that gets made. With this feature, we will eliminate unnecessary emails and allow you to focus on the crucial tasks at hand.
Link work items to deployments
We are excited to release Deployment control on the work item form. This control links your work items to a release and enables you to easily track where your work item has been deployed. To learn more see the documentation here.
Import work items from a CSV file
Until now, importing work items from a CSV file was dependent on using the Excel plugin. In this update we are providing a first class import experience directly from Azure Boards so you can import new or update existing work items. To learn more, see the documentation here.
Add parent field to work item cards
Parent context is now available within your Kanban board as a new field for work item cards. You can now add the Parent field to your cards, bypassing the need to use workarounds such as tags and prefixes.
Add parent field to backlog and queries
The parent field is now available when viewing backlogs and query results. To add the parent field, use the Column options view.
Repos
Policy to block files with specified patterns
Administrators can now set a policy to prevent commits from being pushed to a repository based on file types and paths. The file name validation policy will block pushes that match the provided pattern.
Resolve work items via commits using key words
You can now resolve work items via commits made to the default branch by using key words like fix, fixes, or fixed. For example, you can write - "this change fixed #476" in your commit message and work item #476 will be completed when the commit is pushed or merged into the default branch. For more details see the documentation here.
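For example, a commit message like the following (the work item ID is illustrative) resolves the work item once the commit reaches the default branch:

```shell
# "Fixes #476" links the commit to work item 476 and completes it
# when the commit is pushed or merged into the default branch.
git commit -m "Guard against null route values on save. Fixes #476"
git push origin main
```

The keywords fix, fixes, and fixed are all recognized, followed by # and the work item ID.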
Granularity for automatic reviewers
Previously, when adding group level reviewers to a pull request, only one approval was required from the group that was added. Now you can set policies that require more than one reviewer from a team to approve a pull request when adding automatic reviewers. In addition, you can add a policy to prevent requestors approving their own changes.
Use service account-based authentication to connect to AKS
Previously, when configuring Azure Pipelines from the AKS Deployment Center, we used an Azure Resource Manager Connection. This connection had access to the entire cluster and not just the namespace for which the pipeline was configured. With this update, our pipelines will use service account-based authentication to connect to the cluster so that it will only have access to the namespace associated with the pipeline.
Preview Markdown files in pull request Side-by-side diff
You can now see a preview of how a markdown file will look by using the new Preview button. In addition, you can see the full content of a file from the Side-by-side diff by selecting the View button.
Build policy expiration for manual builds
Policies enforce your team's code quality and change management standards. Previously, you could set build expiration polices for automated builds. Now you can set build expiration policies to your manual builds as well.
Add a policy to block commits based on the commit author email
Administrators can now set a push policy to prevent commits from being pushed to a repository for which the commit author email does not match the provided pattern.
This feature was prioritized based on a suggestion from the Developer Community to deliver a similar experience. We will continue to keep the ticket open and encourage users to tell us what other types of push policies you'd like to see.
Mark files as reviewed in a pull request
Sometimes, you need to review pull requests that contain changes to a large number of files and it can be difficult to keep track of which files you have already reviewed. Now you can mark files as reviewed in a pull request.
You can mark a file as reviewed by using the drop-down menu next to a file name or by hover and clicking on the file name.
Note
This feature is only meant to track your progress as you review a pull request. It does not represent voting on pull requests so these marks will only be visible to the reviewer.
This feature was prioritized based on a suggestion from the Developer Community.
New Web UI for Azure Repos landing pages
You can now try out our new modern, fast, and mobile-friendly landing pages within Azure Repos. These pages are available as New Repos landing pages. Landing pages include all pages except for pull request details, commit details and branch compare.
Web
Mobile
Cross-repo branch policy administration
Branch policies are one of the powerful features of Azure Repos that help you protect important branches. Although the ability to set policies at project level exists in the REST API, there was no user interface for it. Now, admins can set policies on a specific branch or the default branch across all repositories in their project. For example, an admin could require two minimum reviewers for all pull requests made into every main branch across every repository in their project. You can find the Add branch protection feature in the Repos Project Settings.
New web platform conversion landing pages
We've updated the Repos landing pages user experience to make it modern, fast, and mobile-friendly. Here are two examples of the pages that have been updated; we will continue to update other pages in future updates.
Web experience:
Mobile experience:
Support for Kotlin language
We're excited to announce that we now support Kotlin language highlighting in the file editor. Highlighting will improve the readability of your Kotlin text file and help you quickly scan to find errors. We prioritized this feature based on a suggestion from the Developer Community.
Custom notification subscription for draft pull requests
To help reduce the number of email notifications from pull requests, you can now create a custom notification subscription for pull requests that are created or updated in draft state. You can get emails specifically for draft pull requests or filter out emails from draft pull requests so your team doesn't get notified before the pull request is ready to be reviewed.
Pipelines
Multi-stage pipelines
We've been working on an updated user experience to manage your pipelines. These updates make the pipelines experience modern and consistent with the direction of Azure DevOps. Moreover, these updates bring together classic build pipelines and multi-stage YAML pipelines into a single experience. It is mobile-friendly and brings various improvements to how you manage your pipelines. You can drill down and view pipeline details, run details, pipeline analytics, job details, logs, and more.
The following capabilities are included in the new experience:
- viewing and managing multiple stages
- approving pipeline runs
- scroll all the way back in logs while a pipeline is still in progress
- per-branch health of a pipeline.
Continuous deployment in YAML
We’re excited to provide Azure Pipelines YAML CD features. We now offer a unified YAML experience so you can configure each of your pipelines to do CI, CD, or CI and CD together. YAML CD features introduces several new advanced features that are available for all collections using multi-stage YAML pipelines. Some of the highlights include:
- Multi-stage YAML pipelines (for CI and CD)
- Approvals and checks on resources
- Environments and deployment strategies
- Kubernetes and Virtual Machine resources in environment
- Review apps for collaboration
- Refreshed UX for service connections
- Resources in YAML pipelines
If you’re ready to start building, check out the documentation or blog for building multi-stage CI/CD pipelines.
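A minimal sketch of a single YAML pipeline doing CI and CD together (stage, pool, and environment names are illustrative, not prescribed by this release):

```yaml
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildJob
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "Build and stage the artifacts"
    - publish: $(System.DefaultWorkingDirectory)/drop
      artifact: drop

- stage: DeployToStaging
  dependsOn: Build
  jobs:
  - deployment: Deploy
    pool:
      vmImage: 'ubuntu-latest'
    environment: staging        # approvals and checks can be configured on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current   # downloads the 'drop' artifact from the Build stage
            artifact: drop
          - script: echo "Deploy to staging"
```

The deployment job targets an environment, which is where the approvals and checks described below attach.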
Manage pipeline variables in YAML editor
We updated the experience for managing pipeline variables in the YAML editor. You no longer have to go to the classic editor to add or update variables in your YAML pipelines.
Approve releases directly from Releases hub
Acting on pending approvals has been made easier. Before, it was possible to approve a release from the details page of the release. You may now approve releases directly from the Releases hub.
Bitbucket integration and other improvements in getting started with pipelines
The getting-started wizard experience for Pipelines has been updated to work with Bitbucket repositories. Azure Pipelines will now analyze the contents of your Bitbucket repository and recommend a YAML template to get you going.
A common ask with the getting-started wizard has been the ability to rename the generated file. Currently, it is checked in as
azure-pipelines.yml at the root of your repository. You can now update this to a different file name or location before saving the pipeline.
Finally, you will have more control when checking in the
azure-pipelines.yml file to a different branch since you can choose to skip creating a pull request from that branch.
Preview fully parsed YAML document without committing or running the pipeline
We've added a "preview but don't run" mode for YAML pipelines. Now, you can try out a YAML pipeline without committing it to a repo or running it. Given an existing pipeline and an optional new YAML payload, this new API will give you back the full YAML pipeline. In future updates, this API will be used in a new editor feature.
For developers: POST to
dev.azure.com/<org>/<project>/_apis/pipelines/<pipelineId>/runs?api-version=5.1-preview with a JSON body like this:
{ "PreviewRun": true, "YamlOverride": " # your new YAML here, optionally " }
The response will contain the rendered YAML.
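A hedged sketch of invoking this API with curl; the organization, project, pipeline ID, and personal access token are placeholders you must supply:

```shell
# POST a preview run; the response body contains the fully rendered YAML.
# Replace {org}, {project}, {pipelineId}, and $PAT with real values.
curl -X POST \
  -u ":$PAT" \
  -H "Content-Type: application/json" \
  -d '{"PreviewRun": true, "YamlOverride": "# your new YAML here, optionally"}' \
  "https://dev.azure.com/{org}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=5.1-preview"
```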
Cron schedules in YAML
Previously, you could use the UI editor to specify a scheduled trigger for YAML pipelines. With this release, you can schedule builds using cron syntax in your YAML file and take advantage of the following benefits:
- Config as code: You can track the schedules along with your pipeline as part of code.
- Expressive: You have more expressive power in defining schedules than what you were able to with the UI. For instance, it is easier to specify a single schedule that starts a run every hour.
- Industry standard: Many developers and administrators are already familiar with the cron syntax.
schedules: - cron: "0 0 * * *" displayName: Daily midnight build branches: include: - main - releases/* exclude: - releases/ancient/* always: true
We have also made it easy for you to diagnose problems with cron schedules. The Scheduled runs in the Run pipeline menu will give you a preview of the upcoming few scheduled runs for your pipeline to help you diagnose errors with your cron schedules.
Updates to service connections UI
We've been working on an updated user experience to manage your service connections. These updates make the service connection experience modern and consistent with the direction of Azure DevOps. We introduced the new UI for service connections as a preview feature earlier this year. Thanks to everyone who tried the new experience and provided their valuable feedback to us.
Along with the user experience refresh, we've also added two capabilities which are critical for consuming service connections in YAML pipelines: pipeline authorizations and approvals and checks.
The new user experience will be turned on by default with this update. You will still have the option to opt-out of the preview.
Note
We plan to introduce Cross-project Sharing of Service Connections as a new capability. You can find more details about the sharing experience and the security roles here.
Skipping stages in a YAML pipeline
When you start a manual run, you may sometimes want to skip a few stages in your pipeline. For instance, if you do not want to deploy to production, or if you want to skip deploying to a few environments in production. You can now do this with your YAML pipelines.
The updated run pipeline panel presents a list of stages from the YAML file, and you have the option to skip one or more of those stages. You must exercise caution when skipping stages. For instance, if your first stage produces certain artifacts that are needed for subsequent stages, then you should not skip the first stage. The run panel presents a generic warning whenever you skip stages that have downstream dependencies. It is left to you as to whether those dependencies are true artifact dependencies or whether they are just present for sequencing of deployments.
Skipping a stage is equivalent to rewiring the dependencies between stages. Any immediate downstream dependencies of the skipped stage are made to depend on the upstream parent of the skipped stage. If the run fails and if you attempt to rerun a failed stage, that attempt will also have the same skipping behavior. To change which stages are skipped, you have to start a new run.
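To illustrate the rewiring, consider this sketch (stage names are illustrative): if stage B is skipped at queue time, stage C is made to depend directly on stage A:

```yaml
stages:
- stage: A
  jobs:
  - job: Build
    steps:
    - script: echo "produce artifacts"

- stage: B              # if skipped at queue time...
  dependsOn: A
  jobs:
  - job: TestDeploy
    steps:
    - script: echo "deploy to test"

- stage: C
  dependsOn: B          # ...this effectively becomes dependsOn: A for that run
  jobs:
  - job: ProdDeploy
    steps:
    - script: echo "deploy to production"
```

If stage C truly needs artifacts produced by stage B, skipping B would break the run, which is why the warning is shown.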
Service connections new UI as default experience
There is a new service connections UI. This new UI is built on modern design standards and it comes with various critical features to support multi-stage YAML CD pipelines such as approvals, authorizations, and cross-project sharing.
Learn more about service connections here.
Pipeline resource version picker in the create run dialogue
We added the ability to manually pick up pipeline resource versions in the create run dialogue. If you consume a pipeline as a resource in another pipeline, you can now pick the version of that pipeline when creating a run.
az CLI improvements for Azure Pipelines
Deployment jobs
A deployment job is a special type of job that is used to deploy your app to an environment. With this update, we have added support for step references in a deployment job. For example, you can define a set of steps in one file and refer to it in a deployment job.
We have also added support for additional properties to the deployment job. For example, here are few properties of a deployment job that you can now set,
- timeoutInMinutes - how long to run the job before automatically cancelling
- cancelTimeoutInMinutes - how much time to give 'run always even if cancelled tasks' before terminating them
- condition - run job conditionally
- variables - Hardcoded values can be added directly, or variable groups, variable group backed by an Azure key vault can be referenced or you can refer to a set of variables defined in a file.
- continueOnError - if future jobs should run even if this deployment job fails; defaults to 'false'
For more details about deployment jobs and the full syntax to specify a deployment job, see Deployment job.
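The properties above can be sketched together in one deployment job (the names, timeouts, and template path are illustrative):

```yaml
jobs:
- deployment: DeployWeb
  displayName: Deploy web app
  timeoutInMinutes: 60          # cancel the job automatically after an hour
  cancelTimeoutInMinutes: 5     # grace period for 'always run' steps on cancellation
  condition: succeeded()
  continueOnError: false        # do not let later jobs run if this deployment fails
  variables:
  - group: release-secrets      # variable group; could be backed by an Azure key vault
  - name: deployTier
    value: staging
  environment: staging
  strategy:
    runOnce:
      deploy:
        steps:
        - template: steps/deploy-steps.yml   # step reference defined in another file
```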
Showing associated CD pipelines info in CI pipelines
We added support for showing associated CD YAML pipelines in your CI pipeline details, where the CI pipelines are referred to as pipeline resources. In your CI pipeline run view, you will now see a new 'Associated pipelines' tab where you can find all the pipeline runs that consume your pipeline and artifacts from it.
Azure Kubernetes Service Cluster link in Kubernetes environments resource view
We added a link to the resource view of Kubernetes environments so you can navigate to the Azure blade for the corresponding cluster. This applies to environments that are mapped to namespaces in Azure Kubernetes Service clusters.
Release folder filters in notification subscriptions
Folders allow organizing pipelines for easier discoverability and security control. Often you may want to configure custom email notifications for all release pipelines under a folder. Previously, you had to configure multiple subscriptions or use a complex query in the subscriptions to get focused emails. With this update, you can now add a release folder clause to the deployment completed and approval pending events and simplify the subscriptions.
Retry failed stages
One of the most requested features in multi-stage pipelines is the ability to retry a failed stage without having to start from the beginning. With this update, we are adding a big portion of this functionality.
You can now retry a pipeline stage when the execution fails. Any jobs that failed in the first attempt and those that depend transitively on those failed jobs are all re-attempted.
This can help you save time in several ways. For instance, when you run multiple jobs in a stage, you might want each job to run tests on a different platform. If the tests on one platform fail while others pass, you can save time by not re-running the jobs that passed. As another example, a deployment stage may have failed due to a flaky network connection. Retrying that stage will help you save time by not having to produce another build.
There are a few known gaps in this feature. For example, you cannot retry a stage that you explicitly cancel. We are working to close these gaps in future updates.
Approvals in multi-stage YAML pipelines
Your YAML CD pipelines may contain manual approvals.
Increase in gates timeout limit and frequency
Previously, the gate timeout limit in release pipelines was three days. With this update, the timeout limit has been increased to 15 days to allow gates with longer durations. We also increased the frequency of the gate to 30 minutes.
New build image template for Dockerfile
Previously, when creating a new pipeline for a Dockerfile, the template recommended pushing the image to an Azure Container Registry and deploying to an Azure Kubernetes Service. We added a new template to let you build an image using the agent without the need to push to a container registry.
New task for configuring Azure App Service app settings
Azure App Service allows configuration through various settings like app settings, connection strings and other general configuration settings. We now have a new Azure Pipelines task Azure App Service Settings which supports configuring these settings in bulk using JSON syntax on your web app or any of its deployment slots. This task can be used along with other App service tasks to deploy, manage and configure your Web apps, Function apps or any other containerized App Services.
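As an illustrative sketch (the service connection, app, and resource group names are placeholders, and the inputs assume the task's documented JSON syntax), bulk-configuring app settings might look like this:

```yaml
steps:
- task: AzureAppServiceSettings@1
  displayName: Configure app settings in bulk
  inputs:
    azureSubscription: 'my-azure-connection'   # placeholder service connection name
    appName: 'my-web-app'
    resourceGroupName: 'my-rg'
    # App settings supplied as a JSON array; slotSetting pins a value to a slot
    appSettings: |
      [
        { "name": "APPINSIGHTS_KEY", "value": "$(appInsightsKey)", "slotSetting": false },
        { "name": "WEBSITE_TIME_ZONE", "value": "UTC", "slotSetting": true }
      ]
```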
Azure App Service now supports Swap with preview
Azure App Service now supports Swap with preview on its deployment slots. This is a good way to validate the app with production configuration before the app is actually swapped from a staging slot into production slot. This would also ensure that the target/production slot doesn't experience downtime.
Azure App Service task now supports this multi-phase swap through the following new actions:
- Start Swap with Preview - Initiates a swap with a preview (multi-phase swap) and applies target slot (for example, the production slot) configuration to the source slot.
- Complete Swap with Preview - When you're ready to complete the pending swap, select the Complete Swap with Preview action.
- Cancel Swap with Preview - To cancel a pending swap, select Cancel Swap with Preview.
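A hedged sketch of the two-phase flow using the Azure App Service Manage task (the task inputs and all names below are assumptions for illustration, not taken from this release note):

```yaml
steps:
- task: AzureAppServiceManage@0
  displayName: Start swap with preview
  inputs:
    azureSubscription: 'my-azure-connection'   # placeholder service connection
    Action: 'Start Swap With Preview'
    WebAppName: 'my-web-app'
    ResourceGroupName: 'my-rg'
    SourceSlot: 'staging'

# ... validate the app on the staging slot with production configuration here ...

- task: AzureAppServiceManage@0
  displayName: Complete swap with preview
  inputs:
    azureSubscription: 'my-azure-connection'
    Action: 'Complete Swap With Preview'
    WebAppName: 'my-web-app'
    ResourceGroupName: 'my-rg'
    SourceSlot: 'staging'
```

Because the target slot's configuration is applied to the source slot before completion, the production slot sees no downtime during validation.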
Stage level filter for Azure Container Registry and Docker Hub artifacts
Previously, regular expression filters for Azure Container Registry and Docker Hub artifacts were only available at the release pipeline level. They have now been added at the stage level as well.
Enhancements to approvals in YAML pipelines
We have enabled configuring approvals on service connections and agent pools. For approvals we follow segregation of roles between infrastructure owners and developers. By configuring approvals on your resources such as environments, service connections and agent pools, you will be assured that all pipeline runs that use resources will require approval first.
The experience is similar to configuring approvals for environments. When an approval is pending on a resource referenced in a stage, the execution of the pipeline waits until the pipeline is manually approved.
Container structure testing support in Azure Pipelines
Usage of containers in applications is increasing, and thus the need for robust testing and validation. Azure Pipelines now brings support for Container Structure Tests. This framework provides a convenient and powerful way to verify the contents and structure of your containers.
You can validate the structure of an image based on four categories of tests which can be run together: command tests, file existence tests, file content tests and metadata tests. You can use the results in the pipeline to make go/no go decisions. Test data is available in the pipeline run with an error message to help you better troubleshoot failures.
Input the config file and image details
Test data and summary
Pipeline decorators for release pipelines
Pipeline decorators allow for adding steps to the beginning and end of every job. This is different than adding steps to a single definition because it applies to all pipelines in a collection.
We have been supporting decorators for builds and YAML pipelines, with customers using them to centrally control the steps in their jobs. We are now extending the support to release pipelines as well. You can create extensions to add steps targeting the new contribution point and they will be added to all agent jobs in release pipelines.
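The steps a decorator injects are defined in an ordinary YAML template packaged inside the extension. As a hedged sketch (the file name and the injected step are illustrative, not taken from this release note), such a template might look like:

```yaml
# my-decorator.yml -- steps contributed by the extension; Azure Pipelines
# injects them into every job the decorator's contribution targets.
steps:
- task: CmdLine@2
  displayName: 'Mandatory security scan (injected by decorator)'
  inputs:
    script: echo Running the organization-wide security scan...
```

The extension's manifest then points its contribution at the new release-pipeline targets (for example, the pre-job tasks of agent jobs) so the template is applied to all agent jobs in release pipelines.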
Deploy Azure Resource Manager (ARM) to subscription and management group level
Previously, we supported deployments only to the Resource Group level. With this update we have added support to deploy ARM templates to both the subscription and management group levels. This will help you when deploying a set of resources together but place them in different resource groups or subscriptions. For example, deploying the backup virtual machine for Azure Site Recovery to a separate resource group and location.
CD capabilities for your multi-stage YAML pipelines
You can now consume artifacts published by your CI pipeline and enable pipeline completion triggers. In multi-stage YAML pipelines, we are introducing
pipelines as a resource. In your YAML, you can now refer to another pipeline and also enable CD triggers.
Here is the detailed YAML schema for pipelines resource.
resources:
  pipelines:
  - pipeline: MyAppCI  # identifier for the pipeline resource
    project: DevOpsProject  # project for the build pipeline; optional input for current project
    source: MyCIPipeline  # source pipeline definition name
    branch: releases/M159  # branch to pick the artifact, optional; defaults to all branches
    version: 20190718.2  # pipeline run number to pick artifact; optional; defaults to last successfully completed run
    trigger:  # optional; triggers are not enabled by default
      branches:
        include:  # branches to consider for trigger events, optional; defaults to all branches
        - main
        - releases/*
        exclude:  # branches to discard for trigger events, optional; defaults to none
        - users/*
In addition, you can download the artifacts published by your pipeline resource using the
- download task.
steps:
- download: MyAppCI  # pipeline resource identifier
  artifact: A1  # name of the artifact to download; optional; defaults to all artifacts
For more details, see the downloading artifacts documentation here.
Orchestrate canary deployment strategy on environment for Kubernetes
One of the key advantages of continuous delivery of application updates is the ability to quickly push updates into production for specific microservices. This gives you the ability to quickly respond to changes in business requirements. Environment was introduced as a first-class concept enabling orchestration of deployment strategies and facilitating zero-downtime releases. Previously, we supported the runOnce strategy, which executes the steps once sequentially. With support for the canary strategy, you can now reduce risk by slowly rolling out the change to a small subset before promoting it.
jobs:
- deployment:
  environment: musicCarnivalProd
  pool:
    name: musicCarnivalProdPool
  strategy:
    canary:
      increments: [10,20]
      preDeploy:
        steps:
        - script: initialize, cleanup....
      deploy:
        steps:
        - script: echo deploy updates...
        - task: KubernetesManifest@0
          inputs:
            action: $(strategy.action)
            namespace: 'default'
            strategy: $(strategy.name)
            percentage: $(strategy.increment)
            manifests: 'manifest.yml'
      postRouteTraffic:
        pool: server
        steps:
        - script: echo monitor application health...
      on:
        failure:
          steps:
          - script: echo clean-up, rollback...
        success:
          steps:
          - script: echo checks passed, notify...
The canary strategy for Kubernetes will first deploy the changes to 10% of pods, then to 20%, while monitoring health during postRouteTraffic. If all goes well, it will promote to 100%.
We are looking for early feedback on support for VM resource in environments and performing rolling deployment strategy across multiple machines. Contact us to enroll.
Approval policies for YAML pipelines
In YAML pipelines, we follow a resource-owner-controlled approval configuration. Resource owners configure approvals on the resource, and all pipelines that use the resource pause for approvals before the start of the stage consuming the resource. It is common for SOX-based application owners to restrict the requester of a deployment from approving their own deployments. You can now use advanced approval options to configure approval policies such as requester should not approve, require approval from a subset of users, and approval timeout.
Azure Container Registry as a first-class pipeline resource
If you need to consume a container image published to Azure Container Registry (ACR) as part of your pipeline, and trigger the pipeline whenever a new image is published, you can use the ACR container resource:
resources:
  containers:
  - container: MyACR  # container resource alias
    type: ACR
    azureSubscription: RMPM  # ARM service connection
    resourceGroup: contosoRG
    registry: contosodemo
    repository: alphaworkz
    trigger:
      tags:
        include:
        - production
Moreover, ACR image meta-data can be accessed using predefined variables. The following list includes the ACR variables available to define an ACR container resource in your pipeline.
resources.container.<Alias>.type
resources.container.<Alias>.registry
resources.container.<Alias>.repository
resources.container.<Alias>.tag
resources.container.<Alias>.digest
resources.container.<Alias>.URI
resources.container.<Alias>.location
Enhancements to evaluate artifacts checks policy in pipelines
We've enhanced the evaluate artifact check to make it easier to add policies from a list of out of the box policy definitions. The policy definition will be generated automatically and added to the check configuration which can be updated if needed.
Support for output variables in a deployment job
You can now define output variables in a deployment job's lifecycle hooks and consume them in other downstream steps and jobs within the same stage.
While executing deployment strategies, you can access output variables across jobs using the following syntax.
- For runOnce strategy:
$[dependencies.<job-name>.outputs['<lifecycle-hookname>.<step-name>.<variable-name>']]
- For canary strategy: 
$[dependencies.<job-name>.outputs['<lifecycle-hookname>_<increment-value>.<step-name>.<variable-name>']]
The following example shows this in a YAML pipeline:
jobs:
- deployment: A
  pool:
    vmImage: 'ubuntu-16.04'
  environment: staging
  strategy:
    canary:
      increments: [10,20]  # creates multiple jobs, one for each increment; output variables can be referenced per increment
      deploy:
        steps:
        - script: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
          name: setvarStep
        - script: echo $(setvarStep.myOutputVar)
          name: echovar
# Map the variable from the deployment job
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-16.04'
  variables:
    myVarFromDeploymentJob: $[ dependencies.A.outputs['deploy_10.setvarStep.myOutputVar'] ]
  steps:
  - script: "echo $(myVarFromDeploymentJob)"
    name: echovar
Learn more on how to set a multi-job output variable
Avoid rollback of critical changes
In classic release pipelines, it is common to rely on scheduled deployments for regular updates. But, when you have a critical fix, you may choose to start a manual deployment out-of-band. When doing so, older releases continue to stay scheduled. This posed a challenge since the manual deployment would be rolled back when the deployments resumed as per schedule. Many of you reported this issue and we have now fixed it. With the fix, all older scheduled deployments to the environment would be cancelled when you manually start a deployment. This is only applicable when the queueing option is selected as "Deploy latest and cancel others".
Simplified resource authorization in YAML pipelines
A resource is anything used by a pipeline that is outside the pipeline. Resources must be authorized before they can be used. Previously, when using unauthorized resources in a YAML pipeline, it failed with a resource authorization error. You had to authorize the resources from the summary page of the failed run. In addition, the pipeline failed if it was using a variable that referenced an unauthorized resource.
We are now making it easier to manage resource authorizations. Instead of failing the run, the run will wait for permissions on the resources at the start of the stage consuming the resource. A resource owner can view the pipeline and authorize the resource from the Security page.
Evaluate artifact check
You can now define a set of policies and add the policy evaluation as a check on an environment for container image artifacts. When a pipeline runs, the execution pauses before starting a stage that uses the environment. The specified policy is evaluated against the available metadata for the image being deployed. The check passes when the policy is successful and marks the stage as failed if the check fails.
Updates to the ARM template deployment task
Previously, we didn't filter the service connections in the ARM template deployment task. This could cause the deployment to fail if you selected a lower-scoped service connection to perform ARM template deployments to a broader scope. Now, we filter the list of service connections based on the deployment scope you choose, excluding lower-scoped connections.
ReviewApp in Environment
ReviewApp deploys every pull request from your Git repository to a dynamic environment resource. Reviewers can see how those changes look as well as work with other dependent services before they’re merged into the main branch and deployed to production. This will make it easy for you to create and manage reviewApp resources and benefit from all the traceability and diagnosis capability of the environment features. By using the reviewApp keyword, you can create a clone of a resource (dynamically create a new resource based on an existing resource in an environment) and add the new resource to the environment.
The following is a sample YAML snippet of using reviewApp under environments.
jobs:
- deployment:
  environment:
    name: smarthotel-dev
    resourceName: $(System.PullRequest.PullRequestId)
  pool:
    name: 'ubuntu-latest'
  strategy:
    runOnce:
      preDeploy:
        steps:
        - reviewApp: MasterNamespace
Collect automatic and user-specified metadata from pipeline
Now you can enable automatic and user-specified metadata collection from pipeline tasks. You can use metadata to enforce artifact policy on an environment using the evaluate artifact check.
VM deployments with Environments
One of the most requested features in Environments was VM deployments. With this update, we are enabling Virtual Machine resource in Environments. You can now orchestrate deployments across multiple machines and perform rolling updates using YAML pipelines. You can also install the agent on each of your target servers directly and drive rolling deployment to those servers. In addition, you can use the full task catalog on your target machines.
A rolling deployment replaces instances of the previous version of an application with instances of the new version of the application on a set of machines (rolling set) in each iteration.
For example, below rolling deployment updates up to five targets in each iteration.
maxParallel determines the number of targets that can be deployed to in parallel. The selection accounts for the number of targets that must remain available at any time, excluding the targets being deployed to. It is also used to determine the success and failure conditions during deployment.
jobs:
- deployment:
  displayName: web
  environment:
    name: musicCarnivalProd
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 5  # for percentages, specify as x%
      preDeploy:
        steps:
        - script: echo initialize, cleanup, backup, install certs...
      deploy:
        steps:
        - script: echo deploy ...
      routeTraffic:
        steps:
        - script: echo routing traffic...
      postRouteTraffic:
        steps:
        - script: echo health check post routing traffic...
      on:
        failure:
          steps:
          - script: echo restore from backup ..
        success:
          steps:
          - script: echo notify passed...
Note
With this update, all available artifacts from the current pipeline and from the associated pipeline resources are downloaded only in
deploy lifecycle-hook. However, you can choose to download by specifying Download Pipeline Artifact task.
There are a few known gaps in this feature. For example, when you retry a stage, it will re-run the deployment on all VMs, not just the failed targets. We are working to close these gaps in future updates.
Configure Deployment Strategies from Azure portal
With this capability, we have made it easier for you to configure pipelines that use the deployment strategy of your choice, for example, Rolling, Canary, or Blue-Green. Using these out-of-box strategies, you can roll out updates in a safe manner and mitigate associated deployment risks. To access this, click on the 'Continuous Delivery' setting in an Azure Virtual Machine. In the configuration pane, you will be prompted to select details about the Azure DevOps project where the pipeline will be created, the deployment group, build pipeline that publishes the package to be deployed and the deployment strategy of your choice. Going ahead will configure a fully functional pipeline that deploys the selected package to this Virtual Machine.
For more details, check out our documentation on configuring Deployment Strategies.
Runtime parameters
Runtime parameters let you have more control over what values can be passed to a pipeline. Unlike variables, runtime parameters have data types and don't automatically become environment variables. With runtime parameters you can:
- Supply different values to scripts and tasks at runtime
- Control parameter types, ranges allowed, and defaults
- Dynamically select jobs and stages with template expression
To learn more about runtime parameters, see the documentation here.
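As an illustrative sketch (the parameter name and values are hypothetical), a pipeline could declare a typed, constrained parameter and use it in a step:

```yaml
parameters:
- name: environment          # hypothetical parameter name
  displayName: Target environment
  type: string
  default: staging
  values:                    # restricts the values a user can pick at queue time
  - staging
  - production

steps:
- script: echo Deploying to ${{ parameters.environment }}
```

Because parameters are resolved at template-expansion time, they can also drive which jobs and stages are included in the run.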
Use extends keyword in pipelines
Currently, pipelines can be factored out into templates, promoting reuse and reducing boilerplate. The overall structure of the pipeline was still defined by the root YAML file. With this update, we added a more structured way to use pipeline templates. A root YAML file can now use the keyword extends to indicate that the main pipeline structure can be found in another file. This puts you in control of what segments can be extended or altered and what segments are fixed. We've also enhanced pipeline parameters with data types to make clear the hooks that you can provide.
This example illustrates how you can provide simple hooks for the pipeline author to use. The template will always run a build, will optionally run additional steps provided by the pipeline, and then run an optional testing step.
# azure-pipelines.yml
extends:
  template: build-template.yml
  parameters:
    runTests: true
    postBuildSteps:
    - script: echo This step runs after the build!
    - script: echo This step does too!

# build-template.yml
parameters:
- name: runTests
  type: boolean
  default: false
- name: postBuildSteps
  type: stepList
  default: []

steps:
- task: MSBuild@1  # this task always runs
- ${{ if eq(parameters.runTests, true) }}:
  - task: VSTest@2  # this task is injected only when runTests is true
- ${{ each step in parameters.postBuildSteps }}:
  - ${{ step }}
Control variables that can be overridden at queue time
Previously, you could use the UI or REST API to update the values of any variable prior to starting a new run. While the pipeline's author can mark certain variables as
_settable at queue time_, the system didn't enforce this, nor did it prevent other variables from being set. In other words, the setting was only used to prompt for additional inputs when starting a new run.
We've added a new collection setting that enforces the
_settable at queue time_ parameter. This will give you control over which variables can be changed when starting a new run. Going forward, you can't change a variable that is not marked by the author as
_settable at queue time_.
Note
This setting is off by default in existing collections, but it will be on by default when you create a new Azure DevOps collection.
For example, if you have a repository called MyCode with a YAML pipeline and a second repository called Tools, your YAML pipeline will look like this:
resources:
  repositories:
  - repository: tools
    name: Tools
    type: git

steps:
- checkout: self
- checkout: tools
- script: dir $(Build.SourcesDirectory)
The third step will show two directories, MyCode and Tools in the sources directory.
Azure Repos Git, GitHub, and Bitbucket Cloud repositories are supported. For more information, see Multi-repo checkout.
Getting details at runtime about multiple repositories
When a pipeline is running, Azure Pipelines adds information about the repo, branch, and commit that triggered the run. Now that YAML pipelines support checking out multiple repositories, you may also want to know the repo, branch, and commit that were checked out for other repositories. This data is available via a runtime expression, which now you can map into a variable. For example:
resources:
  repositories:
  - repository: other
    type: git
    name: MyProject/OtherTools

variables:
  tools.ref: $[ resources.repositories['other'].ref ]

steps:
- checkout: self
- checkout: other
- bash: echo "Tools version: $TOOLS_REF"
Allow repository references to other Azure Repos collections
Previously, when you referenced repositories in a YAML pipeline, all Azure Repos repositories had to be in the same collection as the pipeline. Now, you can point to repositories in other collections using a service connection. For example:
resources:
  repositories:
  - repository: otherrepo
    name: ProjectName/RepoName
    endpoint: MyServiceConnection

steps:
- checkout: self
- checkout: otherrepo
MyServiceConnection points to another Azure DevOps collection and has credentials which can access the repository in another project. Both repos,
self and
otherrepo, will end up checked out.
Important
MyServiceConnection must be an Azure Repos / Team Foundation Server service connection.
Pipeline resource meta-data as predefined variables
We've added predefined variables for YAML pipelines resources in the pipeline. Here is the list of the pipeline resource variables available.
resources.pipeline.<Alias>.projectName
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID
Arguments input in Docker Compose task
A new field has been introduced in the Docker Compose task to let you add arguments such as
--no-cache. The argument will be passed down by the task when running commands such as build.
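For example (the service connection and compose file names below are placeholders), the new field could be used to pass --no-cache through to a build:

```yaml
- task: DockerCompose@0
  displayName: Build services
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: 'myRegistryConnection'  # placeholder service connection
    dockerComposeFile: 'docker-compose.yml'
    action: 'Build services'
    arguments: '--no-cache'   # forwarded to the underlying docker-compose build
```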
GitHub release task enhancements
We've made several enhancements to the GitHub Release task. You can now have better control over release creation using the tag pattern field by specifying a tag regular expression and the release will be created only when the triggering commit is tagged with a matching string.
We've also added capabilities to customize creation and formatting of changelog. In the new section for changelog configuration, you can now specify the release against which the current release should be compared. The Compare to release can be the last full release (excludes pre-releases), last non-draft release or any previous release matching your provided release tag. Additionally, the task provides changelog type field to format the changelog. Based on the selection the changelog will display either a list of commits or a list of issues/PRs categorized based on labels.
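A hedged sketch of how these fields might be combined (the connection name and tag pattern are placeholders):

```yaml
- task: GitHubRelease@1
  inputs:
    gitHubConnection: 'myGitHubConnection'        # placeholder service connection
    repositoryName: '$(Build.Repository.Name)'
    action: 'create'
    tagPattern: 'release-v.*'                     # create a release only for matching tags
    changeLogCompareToRelease: 'lastNonDraftRelease'
    changeLogType: 'issueBased'                   # list issues/PRs instead of commits
```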
Open Policy Agent installer task
Open Policy Agent is an open source, general-purpose policy engine that enables unified, context-aware policy enforcement. We've added the Open Policy Agent installer task. It is particularly useful for in-pipeline policy enforcement with respect to Infrastructure as Code providers.
For example, Open Policy Agent can evaluate Rego policy files and Terraform plans in pipeline.
- task: OpenPolicyAgentInstaller@0
  inputs:
    opaVersion: '0.13.5'
Support for PowerShell scripts in Azure CLI task
Previously, you could execute batch and bash scripts as part of an Azure CLI task. With this update, we added support for PowerShell and PowerShell core scripts to the task.
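For example (the subscription connection name is a placeholder), a PowerShell Core script can now be run like this:

```yaml
- task: AzureCLI@2
  inputs:
    azureSubscription: 'myAzureServiceConnection'  # placeholder ARM service connection
    scriptType: 'pscore'                           # PowerShell Core; 'ps' for Windows PowerShell
    scriptLocation: 'inlineScript'
    inlineScript: |
      az group list --output table
```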
Service Mesh Interface based canary deployments in KubernetesManifest task
Previously when canary strategy was specified in the KubernetesManifest task, the task would create baseline and canary workloads whose replicas equaled a percentage of the replicas used for stable workloads. This was not exactly the same as splitting traffic up to the desired percentage at the request level. To tackle this, we've added support for Service Mesh Interface based canary deployments to the KubernetesManifest task.
Service Mesh Interface abstraction allows for plug-and-play configuration with service mesh providers such as Linkerd and Istio. Now the KubernetesManifest task takes away the hard work of mapping SMI's TrafficSplit objects to the stable, baseline and canary services during the lifecycle of the deployment strategy. The desired percentage split of traffic between stable, baseline and canary are more accurate as the percentage traffic split is controlled on the requests in the service mesh plane.
The following is a sample of performing SMI based canary deployments in a rolling manner.
- deployment: Deployment
  displayName: Deployment
  pool:
    vmImage: $(vmImage)
  environment: ignite.smi
  strategy:
    canary:
      increments: [25, 50]
      preDeploy:
        steps:
        - task: KubernetesManifest@0
          displayName: Create/update secret
          inputs:
            action: createSecret
            namespace: smi
            secretName: $(secretName)
            dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy canary
          inputs:
            action: $(strategy.action)
            namespace: smi
            strategy: $(strategy.name)
            trafficSplitMethod: smi
            percentage: $(strategy.increment)
            baselineAndCanaryReplicas: 1
            manifests: |
              manifests/deployment.yml
              manifests/service.yml
            imagePullSecrets: $(secretName)
            containers: '$(containerRegistry)/$(imageRepository):$(Build.BuildId)'
      postRouteTraffic:
        pool: server
        steps:
        - task: Delay@1
          inputs:
            delayForMinutes: '2'
Azure File Copy Task now supports AzCopy V10
The Azure file copy task can be used in a build or release pipeline to copy files to Microsoft storage blobs or virtual machines (VMs). The task uses AzCopy, the command-line utility built for fast copying of data to and from Azure storage accounts. With this update, we've added support for AzCopy V10, the latest version of AzCopy.
The
azcopy copy command supports only the arguments associated with it. Because of the change in syntax of AzCopy, some of the existing capabilities are not available in AzCopy V10. These include:
- Specifying log location
- Cleaning log and plan files after the copy
- Resume copy if job fails
The additional capabilities supported in this version of the task are:
- Wildcard symbols in the file name/path of the source
- Inferring the content type based on file extension when no arguments are provided
- Defining the log verbosity for the log file by passing an argument
Improve pipeline security by restricting the scope of access tokens
Every job that runs in Azure Pipelines gets an access token. The access token is used by the tasks and by your scripts to call back into Azure DevOps. For example, we use the access token to get source code, upload logs, test results, artifacts, or to make REST calls into Azure DevOps. A new access token is generated for each job, and it expires once the job completes. With this update, we added the following enhancements.
Prevent the token from accessing resources outside a team project
Until now, the default scope of all pipelines was the team project collection. You could change the scope to be the team project in classic build pipelines. However, you did not have that control for classic release or YAML pipelines. With this update, we are introducing a collection setting to force every job to get a project-scoped token no matter what is configured in the pipeline. We also added the setting at the project level. Now, every new project and collection that you create will automatically have this setting turned on.
Note
The collection setting overrides the project setting.
Turning this setting on in existing projects and collections may cause certain pipelines to fail if your pipelines access resources that are outside the team project using access tokens. To mitigate pipeline failures, you can explicitly grant Project Build Service Account access to the desired resource. We strongly recommend that you turn on these security settings.
Limit build service repos scope access
Building upon improving pipeline security by restricting the scope of access tokens, Azure Pipelines can now scope down its repository access to just the repos required for a YAML-based pipeline. This means that if the pipeline's access token were to leak, it would only be able to see the repo(s) used in the pipeline. Previously, the access token was good for any Azure Repos repository in the project, or potentially the entire collection.
This feature will be on by default for new projects and collections. For existing collections, you must enable it in Collections Settings > Pipelines > Settings. When using this feature, all repositories needed by the build (even those you clone using a script) must be included in the repository resources of the pipeline.
Remove certain permissions for the access token
By default, we grant a number of permissions to the access token; one of these permissions is Queue builds. With this update, we removed this permission from the access token. If your pipelines need it, you can explicitly grant it to the Project Build Service account or the Project Collection Build Service account, depending on the token that you use.
Project level security for service connections
We added hub level security for service connections. Now, you can add/remove users, assign roles and manage access in a centralized place for all the service connections.
Step targeting and command isolation
Azure Pipelines supports running jobs either in containers or on the agent host. Previously, an entire job was set to one of those two targets. Now, individual steps (tasks or scripts) can run on the target you choose. Steps may also target other containers, so a pipeline could run each step in a specialized, purpose-built container.
Containers can act as isolation boundaries, preventing code from making unexpected changes on the host machine. The way steps communicate with and access services from the agent is not affected by isolating steps in a container. Therefore, we're also introducing a command restriction mode which you can use with step targets. Turning this on will restrict the services a step can request from the agent. It will no longer be able to attach logs, upload artifacts, and certain other operations.
Here's a comprehensive example, showing running steps on the host in a job container, and in another container:
resources:
  containers:
  - container: python
    image: python:3.8
  - container: node
    image: node:13.2

jobs:
- job: example
  container: python
  steps:
  - script: echo Running in the job container
  - script: echo Running on the host
    target: host
  - script: echo Running in another container, in restricted commands mode
    target:
      container: node
      commands: restricted
Read-only variables
System variables were documented as being immutable, but in practice they could be overwritten by a task and downstream tasks would pick up the new value. With this update, we tighten up the security around pipeline variables to make system and queue-time variables read-only. In addition, you can make a YAML variable read-only by marking it as follows.
variables:
- name: myVar
  value: myValue
  readonly: true
Role-based access for service connections
We have added role-based access for service connections. Previously, service connection security could only be managed through pre-defined Azure DevOps groups such as Endpoint administrators and Endpoint Creators.
As part of this work, we have introduced the new roles of Reader, User, Creator and Administrator. You can set these roles via the service connections page in your project and these are inherited by the individual connections. And in each service connection you have the option to turn inheritance on or off and override the roles in the scope of the service connection.
Learn more about service connections security here.
Cross-project sharing of service connections
We enabled support for service connection sharing across projects. You can now share your service connections with your projects safely and securely.
Learn more about service connections sharing here.
Traceability for pipelines and ACR resources
We now ensure full end-to-end (E2E) traceability when pipeline and ACR container resources are used in a pipeline. For every resource consumed by your YAML pipeline, you can trace back to the commits, work items, and artifacts.
In the pipeline run summary view, you can see:
The resource version that triggered the run. Now, your pipeline can be triggered upon completion of another Azure pipeline run or when a container image is pushed to ACR.
The commits that are consumed by the pipeline. You can also find the breakdown of the commits by each resource consumed by the pipeline.
The work items that are associated with each resource consumed by the pipeline.
The artifacts that are available to be used by the run.
In the environment's deployments view, you can see the commits and work items for each resource deployed to the environment.
Support for large test attachments
The publish test results task in Azure Pipelines lets you publish test results when tests are executed to provide a comprehensive test reporting and analytics experience. Until now, there was a limit of 100MB for test attachments for both test run and test results. This limited the upload of big files like crash dumps or videos. With this update, we added support for large test attachments allowing you to have all available data to troubleshoot your failed tests.
You might see the VSTest task or the Publish test results task return a 403 or 407 error in the logs. If you are using self-hosted build or release agents behind a firewall that filters outbound requests, you will need to make some configuration changes to be able to use this functionality.
In order to fix this issue, we recommend that you update the firewall for outbound requests to
https://*.vstmrblob.vsassets.io. You can find troubleshooting information in the documentation here.
Note
This is only required if you're using self-hosted Azure Pipelines agents and you're behind a firewall that is filtering outbound traffic. If you are using Microsoft-hosted agents in the cloud or that aren't filtering outbound network traffic, you don't need to take any action.
Show correct pool information on each job
Previously, when you used a matrix to expand jobs or a variable to identify a pool, we sometimes resolved incorrect pool information in the logs pages. These issues have been resolved.
Jobs can access output variables from previous stages
Output variables may now be used across stages in a YAML-based pipeline. This helps you pass useful information, such as a go/no-go decision or the ID of a generated output, from one stage to the next. The result (status) of a previous stage and its jobs is also available.
Output variables are still produced by steps inside of jobs. Instead of referring to
dependencies.jobName.outputs['stepName.variableName'], stages refer to
stageDependencies.stageName.jobName.outputs['stepName.variableName'].
Note
By default, each stage in a pipeline depends on the one just before it in the YAML file. Therefore, each stage can use output variables from the prior stage. You can alter the dependency graph, which will also alter which output variables are available. For instance, if stage 3 needs a variable from stage 1, it will need to declare an explicit dependency on stage 1.
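Putting the syntax together, a minimal sketch (the stage, job, and variable names are illustrative) looks like:

```yaml
stages:
- stage: A
  jobs:
  - job: ProduceVar
    steps:
    # isOutput=true makes the variable visible outside this job
    - script: echo "##vso[task.setvariable variable=myOutput;isOutput=true]valueFromStageA"
      name: setVarStep
- stage: B
  dependsOn: A            # the dependency makes A's outputs available here
  jobs:
  - job: ConsumeVar
    variables:
      varFromA: $[ stageDependencies.A.ProduceVar.outputs['setVarStep.myOutput'] ]
    steps:
    - script: echo $(varFromA)
```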
Disable automatic agents upgrades at a pool level
Currently, pipeline agents automatically update to the latest version when required. This typically happens when there is a new feature or task that requires a newer agent version to function correctly. With this update, we're adding the ability to disable automatic upgrades at the pool level. In this mode, if no agent of the correct version is connected to the pool, pipelines will fail with a clear error message instead of requesting agents to update. This feature is mostly of interest to customers with self-hosted pools and very strict change-control requirements. Automatic updates are enabled by default, and we don't recommend that most customers disable them.
Agent diagnostics
We've added diagnostics for many common agent related problems such as many networking issues and common causes of upgrade failures. To get started with diagnostics, use run.sh --diagnostics or run.cmd --diagnostics on Windows.
Service hooks for YAML pipelines
Integrating services with YAML pipelines just got easier. Using service hooks events for YAML pipelines, you can now drive activities in custom apps or services based on progress of the pipeline runs. For example, you can create a helpdesk ticket when an approval is required, initiate a monitoring workflow after a stage is complete or send a push notification to your team's mobile devices when a stage fails.
Filtering on pipeline name and stage name is supported for all events. Approval events can be filtered for specific environments as well. Similarly, state change events can be filtered by new state of the pipeline run or the stage.
Optimizely integration
Optimizely is a powerful A/B testing and feature flagging platform for product teams. Integration of Azure Pipelines with Optimizely experimentation platform empowers product teams to test, learn and deploy at an accelerated pace, while gaining all DevOps benefits from Azure Pipelines.
The Optimizely extension for Azure DevOps adds experimentation and feature flag rollout steps to the build and release pipelines, so you can continuously iterate, roll features out, and roll them back using Azure Pipelines.
Learn more about the Azure DevOps Optimizely extension here.
Add a GitHub release as an artifact source
Now you can link your GitHub releases as an artifact source in Azure DevOps release pipelines. This lets you consume a GitHub release as part of your deployments.
When you click Add an artifact in the release pipeline definition, you will find the new GitHub Release source type. You can provide the service connection and the GitHub repo to consume the GitHub release. You can also choose a default version for the GitHub release to consume: latest, a specific tag version, or select at release creation time. Once a GitHub release is linked, it is automatically downloaded and made available in your release jobs.
Updated ServiceNow integration with Azure Pipelines
The Azure Pipelines app for ServiceNow helps integrate Azure Pipelines and ServiceNow Change Management. With this update, you can integrate with the New York version of ServiceNow. The authentication between the two services can now be made using OAuth and basic authentication. In addition, you can now configure advanced success criteria so you can use any change property to decide the gate outcome.
Create Azure Pipelines from VSCode
We've added new functionality to the Azure Pipelines extension for VSCode. Now you can create Azure Pipelines directly from VSCode without leaving the IDE.
Flaky bug management and resolution
We introduced flaky test management to support end-to-end lifecycle with detection, reporting and resolution. To enhance it further we are adding flaky test bug management and resolution.
While investigating the flaky test you can create a bug using the Bug action which can then be assigned to a developer to further investigate the root cause of the flaky test. The bug report includes information about the pipeline like error message, stack trace and other information associated with the test.
When a bug report is resolved or closed, we will automatically unmark the test as unflaky.
Set VSTest tasks to fail if a minimum number of tests are not run
The VSTest task discovers and runs tests using user inputs (test files, filter criteria, and so forth) as well as a test adapter specific to the test framework being used. Changes to either user inputs or the test adapter can lead to cases where tests are not discovered and only a subset of the expected tests are run. This can lead to situations where pipelines succeed because tests are skipped rather than because the code is of sufficiently high quality. To help avoid this situation, we've added a new option in the VSTest task that allows you to specify the minimum number of tests that must be run for the task to pass.
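A minimal sketch of how this might look in YAML (treat the exact input names and values as illustrative of the option described above, and adjust the assembly pattern to your project):

```yaml
- task: VSTest@2
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: '**\*Tests.dll'
    failOnMinTestsNotRun: true
    minimumExpectedTests: 10  # fail the task if fewer than 10 tests actually run
```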
VSTest TestResultsDirectory option is available in the task UI
The VSTest task stores test results and associated files in the
$(Agent.TempDirectory)\TestResults folder. We've added an option to the task UI to let you configure a different folder to store test results. Now any subsequent tasks that need the files in a particular location can use them.
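For example, a sketch redirecting results away from the default folder (the input name and path below are illustrative; adjust to your pipeline):

```yaml
- task: VSTest@2
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: '**\*Tests.dll'
    # default is $(Agent.TempDirectory)\TestResults; later tasks can now
    # pick the result files up from this known location
    resultsFolder: '$(Build.ArtifactStagingDirectory)\TestResults'
```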
Markdown support in automated test error messages
We've added markdown support to error messages for automated tests. Now you can easily format error messages for both test run and test result to improve readability and ease the test failure troubleshooting experience in Azure Pipelines. The supported markdown syntax can be found here.
Use pipeline decorators to inject steps automatically in a deployment job
You can now add pipeline decorators to deployment jobs. You can have any custom step (e.g. a vulnerability scanner) auto-injected into every life cycle hook execution of every deployment job. Since pipeline decorators can be applied to all pipelines in a collection, this can be leveraged as part of enforcing safe deployment practices.
In addition, deployment jobs can be run as container jobs along with side-car services, if defined.
Test Plans
New Test Plan page
A new Test Plans Page (Test Plans *) is available to all Azure DevOps collections. The new page provides streamlined views to help you focus on the task at hand - test planning, authoring or execution. It is also clutter-free and consistent with the rest of the Azure DevOps offering.
Help me understand the new page
The new Test Plans page has a total of 6 sections, of which the first 4 are new, while the Charts & Extensibility sections are existing functionality.
- Test plan header: Use this to locate, favorite, edit, copy or clone a test plan.
- Test suites tree: Use this to add, manage, export or order test suites. Leverage this to also assign configurations and perform user acceptance testing.
- Define tab: Collate, add and manage test cases in a test suite of choice via this tab.
- Execute tab: Assign and execute tests via this tab or locate a test result to drill into.
- Chart tab: Track test execution and status via charts which can also be pinned to dashboards.
- Extensibility: Supports the current extensibility points within the product.
Let's take a broad-stroke view of these new sections below.
1. Test plan header
Tasks
The Test Plan header allows you to perform the following tasks:
- Mark a test plan as favorite
- Unmark a favorited test plan
- Easily navigate among your favorite test plans
- View the iteration path of the test plan, which clearly indicates if the test plan is Current or Past
- View the quick summary of the Test Progress report with a link to navigate to the report
- Navigate back to the All/Mine Test Plans page
Context menu options
The context menu on the Test Plan header provides the following options:
- Copy test plan: This is a new option that allows you to quickly copy the current test plan. More details below.
- Edit test plan: This option allows you to edit the Test Plan work item form to manage the work item fields.
- Test plan settings: This option allows you to configure the Test Run settings (to associate build or release pipelines) and the Test Outcome settings
Copy test plan (new capability)
We recommend creating a new Test Plan per sprint/release. When doing so, generally the Test Plan for the prior cycle can be copied over, and with a few changes the copied test plan is ready for the new cycle. To make this process easy, we have enabled a 'Copy test plan' capability on the new page. By leveraging it you can copy or clone test plans. Its backing REST API is covered here, and the API lets you copy/clone a test plan across projects too.
For more guidelines on Test Plans usage, refer here.
2. Test suites tree
Tasks
The Test suite header allows you to perform the following tasks:
- Expand/collapse: This toolbar option allows you to expand or collapse the suite hierarchy tree.
- Show test points from child suites: This toolbar option is only visible when you are in the "Execute" tab. This allows you to view all the test points for the given suite and its children in one view for easier management of test points without having to navigate to individual suites one at a time.
- Order suites: You can drag/drop suites to either reorder the hierarchy of suites or move them from one suite hierarchy to another within the test plan.
Context menu options
The context menu on the Test suites tree provides the following options:
- Create new suites: You can create 3 different types of suites as follows:
- Use static suite or folder suite to organize your tests.
- Use requirement-based suite to directly link to the requirements/user stories for seamless traceability.
- Use a query-based suite to dynamically organize test cases that meet the query criteria.
- Assign configurations: You can assign configurations for the suite (example: Chrome, Firefox, EdgeChromium) and these would then be applicable to all the existing test cases or new test cases that you add later to this suite.
- Export as pdf/email: Export the Test plan properties and test suite properties, along with details of the test cases and test points, as either a PDF or an email.
3. Define tab
Define tab lets you collate, add and manage test cases for a test suite. Whereas the execute tab is for assigning test points and executing them.
The Define tab and certain operations are only available to users with Basic + Test Plans access level or equivalent. Everything else should be exercisable by a user with 'Basic' access level.
Tasks
The Define tab allows you to perform the following tasks:
- Add New test case using work item form: This option allows you to create a new test case using the work item form. The test case created will automatically be added to the suite.
- Add New test case using grid: This option allows you to create one or more test cases using the test cases grid view. The test cases created will automatically be added to the suite.
- Add Existing test cases using a query: This option allows you to add existing test cases to the suite by specifying a query.
- Order test cases by drag/drop: You can reorder test cases by dragging/dropping one or more test cases within a given suite. The order of test cases only applies to manual test cases and not to automated tests.
- Move test cases from one suite to another: Using drag/drop, you can move test cases from one test suite to another.
- Show grid: You can use the grid mode for viewing/editing test cases along with test steps.
- Full screen view: You can view the contents of the entire Define tab in a full screen mode using this option.
- Filtering: Using the filter bar, you can filter the list of test cases using the fields of "test case title", "assigned to" and "state". You can also sort the list by clicking on the column headers.
- Column options: You can manage the list of columns visible in the Define tab using "Column options". The list of columns available for selection are primarily the fields from the test case work item.
- Copy/clone test cases: This option allows you to create a copy/clone of selected test cases. See below for more details.
- View linked items: This option allows you to look at the linked items for a given test case. See below for more details.
Copy/clone test cases (new capability)
For scenarios where you want to copy/clone a test case, you can use the "Copy test case" option. You can specify the destination project, destination test plan and destination test suite in which to create the copy/cloned test case. In addition, you can also specify whether you want to include existing links/attachments to flow into the cloned copy.
View linked items (new capability)
Traceability among test artifacts, requirements and bugs is a critical value proposition of the Test Plans product. Using the "View linked items" option, you can easily look at all the linked Requirements that this test case is linked with, all the Test suites/Test plans where this test case has been used and all the bugs that have been filed as part of test execution.
4. Execute tab
Define tab lets you collate, add and manage test cases for a test suite. Whereas the execute tab is for assigning test points and executing them.
What is a test point? Test cases by themselves are not executable. When you add a test case to a test suite then test point(s) are generated. A test point is a unique combination of test case, test suite, configuration, and tester. Example: if you have a test case as "Test login functionality" and you add 2 configurations to it as Edge and Chrome then this results in 2 test points. Now these test points can be executed. On execution, test results are generated. Through the test results view (execution history) you can see all executions of a test point. The latest execution for the test point is what you see in the execute tab.
Hence, test cases are reusable entities. By including them in a test plan or suite, test points are generated. By executing test points, you determine the quality of the product or service being developed.
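The combinatorial relationship described above (test point = test case × suite × configuration × tester) can be sketched in a few lines; the names below are illustrative, not an Azure DevOps API:

```python
from itertools import product

# One test case with two configurations yields two executable test points.
test_cases = ["Test login functionality"]
configurations = ["Edge", "Chrome"]
testers = ["alice"]

test_points = [
    {"case": case, "suite": "Regression", "config": config, "tester": tester}
    for case, config, tester in product(test_cases, configurations, testers)
]

print(len(test_points))  # 1 case x 2 configurations x 1 tester -> 2 test points
```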
One of the primary benefits of the new page is for users who mainly do test execution/tracking (need to have only 'Basic' access level), they are not overwhelmed by the complexity of suite management (define tab is hidden for such users).
The Define tab and certain operations are only available to users with Basic + Test Plans access level or equivalent. Everything else, including the "Execute" tab should be exercisable by a user with 'Basic' access level.
Tasks
The Execute tab allows you to perform the following tasks:
- Bulk mark test points: This option allows you to quickly mark the outcome of the test points - passed, failed, blocked or not applicable, without having to run the test case via the Test runner. The outcome can be marked for one or multiple test points at one go.
- Run test points: This option allows you to run the test cases by individually going through each test step and marking them pass/fail using a Test runner. Depending upon the application you are testing, you can use the "Web Runner" for testing a "web application" or the "desktop runner" for testing desktop and/or web applications. You can also invoke "Run with options" to specify a Build against which you want to perform the testing.
- Column options: You can manage the list of columns visible in the Execute tab using "Column options". The list of columns available for selection are associated with test points, such as Run by, Assigned Tester, Configuration, etc.
- Full screen view: You can view the contents of the entire Execute tab in a full screen mode using this option.
- Filtering: Using the filter bar, you can filter the list of test points using the fields of "test case title", "assigned to", "state", "test outcome", "Tester" and "Configuration". You can also sort the list by clicking on the column headers.
Context menu options
The context menu on the Test point node within the Execute tab provides the following options:
- Mark test outcome: Same as above, allows you to quickly mark the outcome of the test points - passed, failed, blocked or not applicable.
- Run test points: Same as above, allows you to run the test cases via test runner.
- Reset test to active: This option allows you to reset the test outcome to active, thereby ignoring the last outcome of the test point.
- Open/edit test case work item form: This option allows you to edit a Test case using the work item form wherein you edit the work item fields including test steps.
- Assign tester: This option allows you to assign the test points to testers for test execution.
- View test result: This option allows you to view the latest test outcome details including the outcome of each test step, comments added or bugs filed.
- View execution history: This option allows you to view the entire execution history for the selected test point. It opens up a new page wherein you can adjust the filters to view the execution history of not just the selected test point but also for the entire test case.
Test Plans Progress report
This out-of-the-box report helps you track the execution and status of one or more Test Plans in a project. Visit Test Plans > Progress report* to start using the report.
The three sections of the report include the following:
- Summary: shows a consolidated view for the selected test plans.
- Outcome trend: renders a daily snapshot to give you an execution and status trendline. It can show data for 14 days (default), 30 days, or a custom range.
- Details: this section lets you drill down by each test plan and gives you important analytics for each test suite.
Artifacts
Note
Azure DevOps Server 2020 does not import feeds that are in the recycle bin during data import. If you wish to import feeds that are in the recycle bin, please restore them from the recycle bin before starting data import.
Improvements to feed page load time
We are excited to announce that we have improved the feed page load time. On average, feed page load times have decreased by 10%. The largest feeds have seen the most improvement: the 99th percentile feed page load time (the threshold that only the slowest 1% of loads exceed) decreased by 75%.
Share your packages publicly with public feeds
You can now create and store your packages inside public feeds. Packages stored within public feeds are available to everyone on the internet without authentication, whether or not they're in your collection, or even logged into an Azure DevOps collection. Learn more about public feeds in our feeds documentation or jump right into our tutorial for sharing packages publicly.
Configure upstreams in different collections within an AAD tenant
You can now add a feed in another collection associated with your Azure Active Directory (AAD) tenant as an upstream source to your Artifacts feed. Your feed can find and use packages from the feeds that are configured as upstream sources, allowing packages to be shared easily across collections associated with your AAD tenant. See how to set this up in the docs.
Use the Python Credential Provider to authenticate pip and twine with Azure Artifacts feeds
You can now install and use the Python Credential Provider (artifacts-keyring) to automatically set up authentication to publish or consume Python packages to or from an Azure Artifacts feed. With the credential provider, you don't have to set up any configuration files (pip.ini/pip.conf/.pypirc), you will simply be taken through an authentication flow in your web browser when calling pip or twine for the first time. See more information in the documentation.
Azure Artifacts feeds in the Visual Studio Package Manager
We now show package icons, descriptions, and authors in the Visual Studio NuGet Package Manager for packages served from Azure Artifacts feeds. Previously, most of this metadata was not provided to VS.
Updated Connect to feed experience
The Connect to feed dialog is the entryway to using Azure Artifacts; it contains information on how to configure clients and repositories to push and pull packages from feeds in Azure DevOps. We've updated the dialog to add detailed set-up information and expanded the tools we give instructions for.
Public feeds are now generally available with upstream support
The public preview of public feeds has received great adoption and feedback. In this release, we extended additional features to general availability. Now, you can set a public feed as an upstream source from a private feed. You can keep your config files simple by being able to upstream both to and from private and project-scoped feeds.
Create project-scoped feeds from the portal
When we released public feeds, we also released project-scoped feeds. Until now, project-scoped feeds could be created via REST APIs or by creating a public feed and then turning the project private. Now, you can create project-scoped feeds directly in the portal from any project if you have the required permissions. You can also see which feeds are project and which are collection-scoped in the feed picker.
Wiki
Rich editing for code wiki pages
Previously, when editing a code wiki page, you were redirected to the Azure Repos hub for editing. However, the Repos hub is not optimized for markdown editing.
Now you can edit a code wiki page in the side-by-side editor inside wiki. This lets you use the rich markdown toolbar to create your content making the editing experience identical to the one in project wiki. You can still choose to edit in repos by selecting the Edit in Repos option in the context menu.
Create and embed work items from a wiki page
As we listened to your feedback, we heard that you use the wiki to capture brainstorming documents, planning documents, ideas on features, spec documents, and meeting minutes. Now you can easily create features and user stories directly from a planning document without leaving the wiki page.
To create a work item select the text in the wiki page where you want to embed the work item and select New work item. This saves you time since you don't have to create the work item first, go to edit and then find the work item to embed it. It also reduces context switch as you don’t go out of the wiki scope.
To learn more about creating and embedding a work item from wiki, see our documentation here.
Comments in wiki pages
Previously, you didn't have a way to interact with other wiki users inside the wiki. This made collaborating on content and getting questions answered a challenge, since conversations had to happen over mail or chat channels. With comments, you can now collaborate with others directly within the wiki. You can leverage the @mention users functionality inside comments to draw the attention of other team members. This feature was prioritized based on this suggestion ticket. For more on comments, please see our documentation here.
Hide folders and files starting with “.” in wiki tree
Until now, the wiki tree showed all the folders and files starting with a dot (.) in the wiki tree. In code wiki scenarios, this caused folders like .vscode, which are meant to be hidden, to show up in the wiki tree. Now, all the files and folders starting with a dot will remain hidden in the wiki tree hence reducing unnecessary clutter.
This feature was prioritized based on this suggestion ticket.
Short and readable Wiki page URLs
You no longer have to use a multiline URL to share wiki page links. We are leveraging the page IDs in the URL to remove parameters, making the URL shorter and easier to read.
The new structure of URLs will look like: {accountName}/{projectName}/_wiki/wikis/{wikiName}/{pageId}/{readableWikiPageName}
This is an example of the new URL for a Welcome to Azure DevOps Wiki page: AzureDevOps/_wiki/wikis/AzureDevOps.wiki/1/Welcome-to-Azure-DevOps-Wiki
This was prioritized based on this feature suggestion ticket from the Developer Community.
Synchronous scroll for editing wiki pages
Editing wiki pages is now easier with synchronous scroll between the edit and the preview pane. Scrolling on one side will automatically scroll the other side to map the corresponding sections. You can disable the synchronous scroll with the toggle button.
Note
The state of the synchronous scroll toggle is saved per user and account.
Page visits for wiki pages
You can now get insights into the page visits for wiki pages. The REST API lets you access the page visit information for the last 30 days. You can use this data to create reports for your wiki pages. In addition, you can store this data in your data source and create dashboards to get specific insights, like the top-n most viewed pages.
You will also see an aggregated page visit count for the last 30 days on every page.
Note
A page visit is defined as a page view by a given user in a 15-minute interval.
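That definition can be sketched as a small deduplication over (user, 15-minute interval) pairs; the field names and in-memory storage here are assumptions for illustration, not the REST API's shape:

```python
from datetime import datetime

def count_visits(events):
    """Count page visits: one view per user per 15-minute interval."""
    seen = set()
    for user, ts in events:
        # bucket the timestamp into a 15-minute interval within its day
        interval = (ts.date(), (ts.hour * 60 + ts.minute) // 15)
        seen.add((user, interval))
    return len(seen)

events = [
    ("alice", datetime(2020, 1, 1, 10, 0)),
    ("alice", datetime(2020, 1, 1, 10, 5)),   # same 15-minute interval: not counted again
    ("alice", datetime(2020, 1, 1, 10, 20)),  # new interval: counted
    ("bob",   datetime(2020, 1, 1, 10, 0)),
]
print(count_visits(events))  # 3
```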
Reporting
Pipeline failure and duration reports
Metrics and insights help you continuously improve the throughput and stability of your pipelines. We have added two new reports to provide you insights about your pipelines.
- Pipeline failure report shows the build pass rate and the failure trend. In addition, it will also show the tasks failure trend to provide insights on which task in the pipeline is contributing to the maximum number of failures.
- Pipeline duration report shows the trend of time taken for a pipeline to run. It also shows which tasks in the pipeline are taking the most amount of time.
Improvement to the Query Results widget
The query results widget is one of our most popular widgets, and for good reason. The widget displays the results of a query directly on your dashboard and is useful in many situations.
With this update we included many long-awaited improvements:
- You can now select as many columns as you want to display in the widget. No more 5-column limit!
- The widget supports all sizes, from 1x1 to 10x10.
- When you resize a column, the column width will be saved.
- You can expand the widget to full screen view. When expanded, it will display all the columns returned by the query.
Lead and Cycle Time widgets advanced filtering
Lead and cycle time are used by teams to see how long it takes for work to flow through their development pipelines, and ultimately deliver value to their customers.
Until now, the lead and cycle time widgets did not support advanced filter criteria to ask questions such as: "how long is it taking my team to close out the higher priority items?"
With this update questions like this can be answered by filtering on the Board swimlane.
We've also included work item filters in order to limit the work items that appear in the chart.
Inline sprint burndown using story points
Your Sprint Burndown can now burn down by Stories. This addresses your feedback from the Developer Community.
From the Sprint hub select the Analytics tab. Then configure your report as follows:
- Select Stories backlog
- Select to burndown on Sum of Story Points
A Sprint Burndown widget with everything you've been asking for
The new Sprint Burndown widget supports burning down by Story Points, count of Tasks, or by summing custom fields. You can even create a sprint burndown for Features or Epics. The widget displays average burndown, % complete, and scope increase. You can configure the team, letting you display sprint burndowns for multiple teams on the same dashboard. With all this great information to display, we let you resize it up to 10x10 on the dashboard.
To try it out, you can add it from the widget catalog, or by editing the configuration for the existing Sprint Burndown widget and checking the Try the new version now box.
Note
The new widget uses Analytics. We kept the legacy Sprint Burndown in case you don't have access to Analytics.
Inline sprint burndown thumbnail
The Sprint Burndown is back! A few sprints ago, we removed the in-context sprint burndown from the Sprint Burndown and Taskboard headers. Based on your feedback, we've improved and reintroduced the sprint burndown thumbnail.
Clicking on the thumbnail will immediately display a larger version of the chart with an option to view the full report under the Analytics tab. Any changes made to the full report will be reflected in the chart displayed in the header. So you can now configure it to burndown based on stories, story points, or by count of tasks, rather than just the amount of work remaining.
Create a dashboard without a team
You can now create a dashboard without associating it with a team. When creating a dashboard, select the Project Dashboard type.
A Project Dashboard is like a Team Dashboard, except it's not associated with a Team and you can decide who can edit/manage the dashboard. Just like a Team Dashboard, it is visible to everyone in the project.
All Azure DevOps widgets that require a team context have been updated to let you select a team in their configuration. You can add these widgets to Project Dashboards and select the specific team you want.
Note
For custom or third-party widgets, a Project Dashboard will pass the default team's context to those widgets. If you have a custom widget that relies on team context, you should update the configuration to let you select a team.
Feedback
We would love to hear from you! You can report a problem or provide an idea and track it through Developer Community and get advice on Stack Overflow. | https://docs.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops | CC-MAIN-2021-39 | refinedweb | 15,872 | 53.21 |
I have a large file (about 80,000 lines) and I want to store each 10 line block into a separate list. For the first three 10-line blocks I have:
from itertools import islice

N = 10  # number of lines per block
with open("file", "r") as myfile:
    profile1 = list(islice(myfile, 0, N))
    profile2 = list(islice(myfile, 0, N))
    profile3 = list(islice(myfile, 0, N))
Use the following:
with open('file', 'r') as f:
    lines = f.readlines()

chunks = [lines[item:item+10] for item in range(0, len(lines), 10)]
# with Python 2 you can use xrange instead of range for large lists
To convert each chunk to a NumPy array, try the following:

import numpy as np

my_arrays = [np.asarray(chunk) for chunk in chunks]
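The approach above reads the whole file into memory. Since the question already uses islice, a lazy variant that yields one 10-line block at a time may also be worth noting (the file name is hypothetical):

```python
from itertools import islice

def read_in_chunks(path, n=10):
    """Yield successive n-line lists without loading the whole file."""
    with open(path) as f:
        while True:
            chunk = list(islice(f, n))
            if not chunk:  # islice returns an empty list at EOF
                break
            yield chunk

# Example usage (hypothetical file):
# for block in read_in_chunks("file"):
#     process(block)
```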
Debugging Pyramid
This tutorial provides a brief introduction to using the Python debugger (pdb) for debugging Pyramid applications.
It assumes you've already created a Pyramid project named buggy using the alchemy scaffold.
Introducing PDB
This single line of Python is your new friend:
import pdb; pdb.set_trace()
As valid Python, it can be inserted practically anywhere in a Python source file. When the Python interpreter hits it, execution is suspended, providing you with interactive control from the parent TTY.
Debugging Our buggy App
- Back to our demo buggy application we generated from the alchemy scaffold; let's see if we can learn anything by debugging it.
- The traversal documentation describes how Pyramid first acquires a root object, and then descends the resource tree using the __getitem__ of each respective resource.
Huh?
Let's drop a pdb statement into our root factory object's __getitem__ method and have a look. Edit the project's models.py and add the aforementioned pdb line in MyRoot.__getitem__:
def __getitem__(self, key):
    import pdb; pdb.set_trace()
    session = DBSession()
    # ...
Restart the Pyramid application and request a page. Note the request requires a path to hit our break-point:
/ <- misses the break-point, no traversal
/1 <- should find an object
/spam <- does not
For a very simple case, attempt to insert a missing key by default. Set item to a valid new MyModel in MyRoot.__getitem__ if a match isn't found in the database:
item = session.query(MyModel).get(id)
if item is None:
    item = MyModel(name='test %d' % id, value=str(id))  # naive insertion
Move the break-point within the if clause to avoid the false positive hits:
if item is None:
    import pdb; pdb.set_trace()
    item = MyModel(name='test %d' % id, value=str(id))  # naive insertion
Run again, and note that multiple requests to the same id continue to create new MyModel instances. That's not right!
Ah, of course, we forgot to add the new item to the session. Another line added to our __getitem__ method:
if item is None:
    import pdb; pdb.set_trace()
    item = MyModel(name='test %d' % id, value=str(id))
    session.add(item)
Restart and test. Observe the stack; debug again. Examine the item returning from MyModel:
(Pdb) session.query(MyModel).get(id)
Finally, we realize the item.id needs to be set as well before adding:
if item is None:
    item = MyModel(name='test %d' % id, value=str(id))
    item.id = id
    session.add(item)
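The fix we converged on is a classic get-or-create. A self-contained sketch of the same pattern, with a plain dict standing in for the scaffold's DBSession and MyModel (these stubs are illustrative only, not Pyramid or SQLAlchemy code):

```python
class FakeSession(dict):
    """Stub standing in for the SQLAlchemy session; illustrative only."""
    def add(self, item):
        self[item["id"]] = item

def getitem(session, id):
    item = session.get(id)
    if item is None:
        item = {"name": "test %d" % id, "value": str(id)}
        item["id"] = id  # the missing piece we found in the debugger
        session.add(item)
    return item

session = FakeSession()
first = getitem(session, 7)
second = getitem(session, 7)
print(first is second)  # True -- repeated requests no longer create duplicates
```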
Many great resources describe the details of using pdb. Try the interactive help (hit 'h') or a search engine near you.
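Besides planting set_trace() ahead of time, pdb can also be attached after an exception has already occurred, using the standard library's post-mortem hook. A sketch (the buggy() function is invented for illustration):

```python
import pdb
import sys

def buggy():
    # Deliberately raise an exception so there is something to debug.
    return {}["missing"]

try:
    buggy()
except KeyError:
    exc_type, exc_value, tb = sys.exc_info()
    print(type(exc_value).__name__)  # KeyError
    # Uncommenting the next line would open the debugger at the
    # frame where the KeyError was raised:
    # pdb.post_mortem(tb)
```

In an interactive session, pdb.pm() does the same for the most recent traceback.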
Note

There is a well-known bug in PDB on UNIX where the user cannot see what they are typing in the terminal window after any interruption during a PDB session (it can be caused by CTRL-C or by the server restarting automatically). This can be fixed by running either of these commands in the broken terminal: reset or stty sane. One of these commands can also be added to the ~/.pdbrc file, so it is launched before each PDB session:
from subprocess import Popen
Popen(["stty", "sane"])
Womcat Bookmarks is a project. Here are some key features of "Womcat Bookmarks":
- Womcat Bookmarks are intended to point to other Womcat Bookmarks. Womcat allows for 2 types of RSS items: recommendations and referrals. A recommendation is an RSS item that points to an ordinary web page. A referral is an RSS item that points to another RSS file (which may or may not be generated by Womcat Bookmarks). An XML extension element is used to distinguish between these two possibilities.
- Womcat Bookmarks is subject-oriented. RSS puts subjects in the element separated by forward slashes. Womcat Bookmarks requires that every recommendation be contained in a subject. Subjects can also be contained in each other. So if subject Algebra is contained in subject Mathematics, then the category will be Mathematics/Algebra.
- Although Womcat Bookmarks subjects for one user form a hierarchy, it cannot be expected that different users will have equivalent or even consistent hierarchies. The most that can be hoped for is that different users might use the same titles for individual subjects. Thus Programming Languages/C is not a very good hierarchy, because "C" by itself is potentially ambiguous. Better is Programming Languages/C Programming Language. In effect, subjects are globally considered to live in a flat namespace, and their titles should be unambiguous in a global context without any dependence on position in a user's hierarchy. The end result is to facilitate merging of Womcat Bookmarks (i.e. RSS data) from different users such that they can be usefully browsed by subject.
- Womcat Bookmarks are considered to be an accumulating collection of bookmarks, not a transient collection of news items, with items only being deleted if they cease to be relevant (or cease to exist). This is problematic when Womcat is used to read RSS files that may or may not be accumulative or transient in intent.
I propose to add an extension element to deal with this, along the lines of specifying a period that the file covers, and optionally another link to a different RSS file that covers that period, e.g. "1 month" for a file that only contains links added up to a month ago, "forever" for a file that contains all current bookmarks. This will also be useful as a way for RSS readers not to have to re-read an accumulated bookmarks file that has grown very large.

What's New in This Release:
- Various improvements to the user interface: ability to stop downloads, pre-delete multiple mentions, add more links between objects in pages. Also updated versions of libraries used by the application.
ChangeLog
DEV
New Features
- [FEATURE] Added new aliases to the warp commands:
- /wp - /warpcreate
- /rwp - /warpdelete
- /twp - /warp
- /lwp - /warplist
- [FEATURE] Added a time (-t) parameter for god/fly/thor/vulcan/etc ... to give the power to yourself or another user for a given time. (Time scale configurable in the configuration file.)
- [FEATURE] Added new parameter -x, -a, -n,-m, -i in the memory command as asked in the ticket :
- [FEATURE] Added a /stop command to stop the server. It first kicks all the players, locks the server and, after 5 sec (configurable in the configuration file or via the -t parameter), stops it.
- [FEATURE] First Join Message: if JoinQuitMessage is set to true and the player joins the server for the first time, this msg will be used (check the locale file).
- [FEATURE] Added new command: /nodrop. When activated, the player doesn't drop any object when using the shortcut, nor on death.
- [FEATURE] Possibility in the configuration to disallow players from tping to players that are in another world.
- [FEATURE] Using the permission node admincmd.tp.world.WORLDNAME you can allow the player to tp to another player that is in WORLDNAME, where WORLDNAME represents the name of the world with spaces replaced by underscores (_).
- [FEATURE] Possibility to customize the header of the private message in the locale file. TICKET #276
- [FEATURE] EGG POWERS see page : Eggs
- [FEATURE] Added the EntityEgg.
- [FEATURE] Added Minecarts, Boats and Vehicles(both!) to the memory command: flag -c, -b, -v, permissions:
- admincmd.server.memory.vehicle
- admincmd.server.memory.boat
- admincmd.server.memory.cart
- [FEATURE] Added broadcast command: /broadcast [message]. Permission admincmd.server.broadcast.
- [FEATURE] Added flag -c to the command /mute. If set, the player can't use any commands (but is allowed to chat!). Permission: admincmd.player.mute.command. The same rules as for normal /mute apply, so you can make it temporary etc. Also added the corresponding locales.
- [FEATURE] Added repairing of the item another player holds: /repair player. Permission: admincmd.item.repair.other
- [FEATURE] Added the possibility to set an armor in the kits.yml file.
- [FEATURE] Added the possibility to add parents for a kit
new template for kits.yml :
kits:
  Tools:
    delay: 0
    items:
      '276': 1
      '278': 1
      '277': 1
      'Diamond_Axe': 1
  dirt:
    delay: 60
    items:
      'dirt': 64
      'grass': 2
  darmor:
    delay: 0
    items:
    armor:
      head: 310
      chest: 311
      legs: 312
      boots: 313
  dequip:
    delay: 0
    armor:
    items:
    parents: [Tools, darmor]
Bug Fixes
- [BUG FIX] Fixed NoLagg compatibility and asynchronized access to Bukkit functions.
- [BUG FIX] Corrected the command /played, which was only displaying 0:00:00:00 instead of the played time.
- [BUG FIX] Fixed a problem with the immunity system when the default immunity was set to 0. Now everything works as it should.
- [BUG FIX] The mobKill command now works with Villagers and the EnderDragon (as asked in)
- [BUG FIX] Some corrections in the immunity system.
- [BUG FIX] Fixed the command /set, which wasn't setting the messages in the correct files.
- [BUG FIX] Corrected a NPE in the /twp command
- [BUG FIX] Corrected /rbl command not displaying the removed block/item (added two new locales: rmBlacklistItem and rmBlacklistBlock)
- [BUG FIX/UPDATE] Updated API hooks for mChatSuite, OddItem and bPermissions
- [BUG FIX] Fixed NPE with bPermissions
Recent Changes
- [CHANGE] Command /moblimit can be used to limit mobs of a particular type using the -m parameter :
/moblimit -m Pig Test 5
Will limit the spawn for pig to 5 in the world Test.
You can put the -m anywhere in the command, but it has to be followed by the name of the mob.
- [CHANGE] Bows can now be repaired.
- [CHANGE] Added Auto-Completion on command using world
Example : /twp WorldName:WarpName when you have a world named Test and a warp named Blah you can type :
/twp t:b
- [CHANGE] Added last Quit in whois command.
- [OPTIMIZATION] Use the new listener system from the last RB.
- [OPTIMIZATION] If a command is disabled (like fly, god, moblimit, etc ...) their listener is disabled too.
- [OPTIMIZATION] How AdminCmd register his permission in the Bukkit Permission system.
- [OPTIMIZATION] In the air / ex command to be less laggy with large surface to replace.
- [IMPORTANT CHANGE] To clear the inventory of a given player you have to use the -P parameter:
/clear -P blah will clear the inventory of the player blah
As with other valued parameters, it can be used anywhere in the command:
/clear wood -P blah or /clear -P blah wood
will always clear all the wood from the player blah's inventory.
- [OPTIMIZATION] Some optimization of what happens when a player joins the server, to take less time to process.
- [CHANGE] Added the Permission Node : admincmd.coloredsign.create to create colored Signs
- [CHANGE] If you give another player a power for a certain time and the time expires you will now get a message that it expired!
- [CHANGE] Command /ip now gets the IP even for offline players!
API for developers
Version 5.11.1
New Features
Bug Fixes
- [MAJOR BUGFIX] Corrected the loss of Bans/Warps in files when using a version of Bukkit older than 1.1-DEV.
Recent Changes
- [CHANGE] Updated the Metrics class.
- [CHANGE] Added some Custom Stats in the Metrics.
API for developers
Version 5.11
New Features
- [FEATURE] Added a Presentation command, and a presentation field in the whois command. You can now set a short message to describe yourself.
- [FEATURE] Added /xp command. You can set (flag -l) the level of a player, the progression (flag -p) in the current level, add (or subtract, by giving negative numbers) xp (flag -a) and drop (flag -d) an orb near a player with a given amount of exp. You can also view a player's total exp (flag -t). Command usage: /xp <-l|-p|-a|-d|-t> <player> <amount>. The player variable can be omitted. Permission: admincmd.player.experience
- [FEATURE] Added a
lastDisconnectto the infos: tree of each player file!
- [FEATURE] Added possibilty to allow players with the permission
admincmd.player.fly.allowedto use /fly even if the server is set to not allow it.
- [FEATURE] Now taking the suffix from Permissions/mChatSuite plugins
- [FEATURE] There are now 4 new text files (news.txt, rules.txt, motd.txt and motdNewUser.txt). You can change the text inside those files to edit the rules/news etc. instead of doing so inside the locales file. Afterwards just reload the plugin (/areload) and your changes are applied to the locale.yml! You can use the same codes as in the yml file for coloring etc.
- [FEATURE] Added the possibility to tp to Warps in worlds other than the one you are in (if you have the permission node admincmd.warp.tp.all). Use /lwp -a to list every warp of every world, and /twp worldName:warpName to warp.
- [FEATURE] Auto-completion for the names of homes. If you have a home named blah, typing /home b will tp you to blah.
- [FEATURE] Kick/Ban message templates: a new file has been created in the locale folder, kickMessages.yml. You can put your templates for kicks/bans there. When banning or kicking you use it like this: /kick Player -m shortcut or /ban Player -m shortcut
Bug Fixes
- [BUG FIX] Corrected a problem with colored prefixes; now displayed without changing the whole line.
- [BUG FIX] Concurrent thread access errors
- [BUG FIX] Corrected the NoSuchMethodError in org.bukkit.configuration.MemorySection.isNaturallyStorable(Ljava/lang/Object;)
- [BUG FIX] Corrected locale-mismatch with the /tp and /tphere command (Ticket #299)
Recent Changes
- [OPTIMIZATION] Some optimization in the I/O when writing the UserFiles
- [CHANGE] Use the prefix in the join/quit messages.
- [SUPPORT] No more support for mChat, now Supporting mChatSuite
- [CHANGE] When setting globalspawn in the configuration to "none" it'll disable the re-spawn feature of AdminCmd.
- [OPTIMIZATION] How to replace blocks by air
- [OPTIMIZATION] Invisibility is optimized and saved upon server restart
- [CHANGE] When putting -1 as a kit's delay, it means that the kit can only be used once by each player.
- [CHANGE] New Stats system :
- [OPTIMIZATION] Optimization of TempBan: if the server is restarted, temp bans will be checked and bans that have ended will be deleted automatically.
API for developers
Version 5.10.2
New Features
- [FEATURE/CONFIG] Added useJoinQuitMsg to let the admin activate or deactivate the join/quit message feature.
Bug Fixes
- [BUG FIX] Corrected quit message
- [BUG FIX] Corrected PermissionsEx prefix.
Recent Changes
API for developers
Version 5.10.1
New Features
- [FEATURE/CONFIG] Added logAllCmd in configuration file, to log every command done in the server.log.
- [FEATURE] Added locales for custom join and leave messages:
- quitMessage
- joinMessage
- [FEATURE] Using a Stats system to log the use of AdminCmd.
Bug Fixes
- [BUG FIX] Fixed prefixes appearing twice when using mChat
- [BUG FIX] Fixed spawn message appearing twice when using /spawn
- [BUG FIX] Fixed problem with air/ex/undo and every command executed as SyncCommand.
- [BUG FIX] Fixed NPE with globalSpawn set to bed when the bed doesn't exist anymore.
- [BUG FIX] Fixed Class Error with Heroes
- [BUG FIX] Fixed double log of Block Break in logblock when using the Super Breaker
- [BUG FIX] Fixed the suppression of help entries when a command is disabled.
- [BUG FIX] Fixed the display of Played Time in whois and played commands.
- [BUG FIX] Fixed bug in delayed teleportation (spawn/home) and the teleportation check.
- [BUG FIX] Fixed NPE with AFKWorker when not using the auto-afk feature.
- [BUG FIX] Fixed a bug with the SPY command.
- [BUG FIX] The item used for the Super Breaker is automatically repaired before each use.
- [BUG FIX] The player is no longer removed from the player list (TAB key) when going invisible while fakeQuitInvisible is not set in the config file.
Recent Changes
- [CHANGE] Using mChat (when detected) to display fakeJoin and fakeQuit messages.
- [OPTIMIZATION] Optimized the ex/air/undo commands to execute faster without crashing the server
- [OPTIMIZATION] Optimized the use of LogBlock in ex/air/undo commands.
- [CHANGE] Changed the Permission Node for /drop command : admincmd.item.drop
- [OPTIMIZATION] Optimized the localeWorker that manage the locales files.
- [CHANGE] Added Invisible status in the /whois command
API for developers
Version 5.10.0
New Features
- [FEATURE] Blacklisting of Blocks so they can not be placed, admincmd.item.noblacklist permission node makes you able to bypass the BL. Blacklisting a block works the same way as it does with an item (just use the right command flag)
- [FEATURE] Added a new Permission Node
admincmd.item.nodelayto avoid the delay to use kits.
- [FEATURE] New command: /changems <-m|-d|-g> <CreatureType|Delay> (aliases /cms or /msc). -m changes the CreatureType the Mob Spawner you are looking at spawns. -d changes the delay. -g gets the delay and CreatureType of the Spawner and displays them to you. Permission: admincmd.mob.spawner
- [FEATURE/CONFIG] You can now choose to display the name of the player as DisplayName or RealName (in the config file useDisplayName)
- [FEATURE/BUG FIX] Automatically remove help entries for disabled commands.
- [FEATURE/CONFIG] New setting: globalRespawnSetting. You can now specify where a player should re-spawn, default is globalSpawn, other options at the moment are: group, bed and home. For home to work you have to set one which has the same name as the world you are currently in! If the specified spawn does not exist, the player spawns at the default worldspawn.
- [FEATURE/CONFIG] groupNames: YAML String list of your group names defined in your permissions plugin.
- [FEATURE] /reply command, short /r. You can reply with a private message to the last player who sent you one without having to give the players name. Permissions node: admincmd.player.reply
- [FEATURE] /difficulty <-flag> (worldName) (difficulty) command, short /dif. Flags: -g, -s: Sets(-s) or gets (-g) the difficulty of a world. Difficulties: 0 = Peacful, 1 = Easy, 2 = Normal, 3 = Hard, If you are a player you can omit the worldName, instead the world you are in will be taken.
- [FEATURE/CONFIG] debug option. If set to true AdminCmd will put some more (acutally a lot more) messages into the command line about what is going on etc.
- [FEATURE/CONFIG] InvisAndNoPickup option: If set to true the No Pickup mode will also be activated upon issuing /invisible
- [FEATURE/CONFIG] teleportDelay: set the delay of the teleport (Long), 0 means no delay!
- [FEATURE/CONFIG] checkTeleportLocation: if set to true it will check if the player has move since the command was issued and abort it if he has
Bug Fixes
- [BUG FIX] Fixed permission for giving infinite Items (admincmd.item.infinity)
- [BUG FIX] In the elapsed time display
- [BUG FIX] /tpt yes now really only accepts the request if yes was entered instead of /tpt <anything here>
- [UPDATE] Updated to the newest OddItem version (v0.8)!!
- [BUG FIX] Changed the priority of the Re-spawn event, meaning setting it to group, home or global work now.
- [BUG FIX] Corrected bug with resetPowerWhenTpAnotherWorld that wasn't working.
- [BUG FIX] /mem -f no longer destroys paintings.
- [BUG FIX] You can overwrite an already created home like before without fearing the limit.
- [BUG FIX] Fixed the TP request system, which was no longer working in the previous dev build.
- [BUG FIX] With Permissions Prefix when mChat was installed. Now it's displaying the two prefix (the one from mChat and the one from the permission plugin)
- [MAJOR BUG FIX] With all function that were using OnlinePlayers (like invisibility, afk, etc ...) now everything should work better :)
- [BUG FIX] /kit not displayed in help. To view it the user needs the permission
admincmd.item.kithelp
Recent Changes
- [FEATURE/CHANGE] Split of TpToggle Permissions. admincmd.tp.toggle.use and admincmd.tp.toggle.allow, new parent permission admincmd.tp.toggle.* for both. This allows users to accept a request but not to turn the request system off/on for themselves
- [CHANGE] /addblacklist and /rmblacklist now have flags to determine if a block (-b) or an item (-i) should be added/removed
- [CHANGE] Kits use are now saved in the user information instead of the kits.yml
- [CHANGE] In the locale Manager, you can use now Recursive locale
Example :
days: '%d day(s)'
elapsedTotalTime: '#days# %h:%m:%s'
MOTDNewUser: '§6Welcome §f%player§6, there is currently §4%nb players connected : //n§6%connected //n§2You''ve played so far : §b#elapsedTotalTime#'
- [CHANGE] Removed config setting respawnAtSpawnPoint. No need to edit the config.yml, it is done automatically.
- [CHANGE] tpRequestSend locale adjusted for better grammar.
- [CHANGE] FakeQuit and invisible remove the player from the online Minecraft List (TAB key)
- [CHANGE] Lockdown now has a locale to support custom Lockdown messages! Do not exceed 100 characters as the client will not display the message then!
- [CHANGE] Use cmdname as key for the helpFile, meaning you only need to have the same key to override the description
- [FEATURE/CHANGE] Added a new permission node to enable the independent use of /time (un-)pause from the other time commands. admincmd.time.pause
- [CHANGE] Don't display the usage message when the player doesn't have the permission to use the command.
- [FEATURE/CHANGE] /mem now also displays the current TPS (ticks per second) of the server, by default measured over 2s (40 ticks), but you can give it any number higher than 20. We do not recommend using any number lower than 40 as it is very inaccurate.
- [FEATURE/CHANGE] Most if not all commands should make use of either the real Name or DisplayName of the player now depending on the configuration setting
- [FEATURE/CHANGE] /repairall and /kickall now have separate permissions,
admincmd.item.repairalland
admincmd.player.kickall
- [FEATURE/CHANGE] /mob: Added possibility to spawn the mobs at another player's location: /mob <mob> <amount> <distance> <player>. To use it you need to specify the previous variables (e.g. /mob Zombie 1 1 Lathanael). New permission node:
admincmd.mob.spawn.other
- [FEATURE/CHANGE] /day: Added world parameter: /day <worldname>
- [FEATURE/CHANGE] /time: Added world parameter: /time <time> <worldname>
- [FEATURE/CHANGE] /storm and /rain: Added world parameter: /<command> <duration> <world>
- [FEATURE/CHANGE] /wclear and /wfreeze: Added world parameter: /<command> <world>
- [FEATURE/CHANGE] When using home or warp command to tp, automatically match the right home with the given name.
Example :
- you have a home : blah
- you typed : b
- you are teleported to blah
- [FEATURE/CHANGE] /spawn, /home and /warp now support a delay and check for movement after the command was issued; if the player moves within that time, the teleport will be aborted.
API for developers
- [API] When AdminCmd is disabled, it disables every other AdminCmd plugin to avoid NPEs and other problems.
- [API/WORLD] New functions: setDifficulty and getDifficulty
- [API/CHANGE] Moved to the new standard of configuration file.
Version 5.9.1
New Features
- [FEATURE] Added flag -f for the command /mem to free memory by killing ALL monsters and destroying ALL dropped items in ALL loaded worlds.
- [FEATURE] Added /whois command to get information about a player or about a world (add the flag -w to get world information).
- [FEATURE] Added maxItemAmount permission node and config node. 0 = infinity, max 150. If someone with a max amount of 30 tries to spawn an item with an amount of 31 or more, he gets an error message! For set-up refer to the maxHomeByUser permission node.
- [FEATURE] Possibility to add a delay to a kit, meaning the player has to wait the delay before getting the kit again. (Thanks @daemitus for that feature :) )
Bug Fixes
- [BUG FIX/OPTIMIZATION] Some changes in the AFK worker. Should resolve the double AFK.
- [BUG FIX] Added missing permission nodes for the set command.
- [BUG FIX] Resolved all issues with the SuperBreaker and WorldEdit.
- [BUG FIX] Resolved issue with auto-afk not being triggered.
- [BUG FIX] Fixed the Server_command event that was throwing an exception.
Recent Changes
- [CHANGE] Changed the kits.yml format; it's converted automatically.
API for developers
Version 5.9
New Features
- [FEATURE] /eternal <player> command, removes the need to eat as food level stays always full. However you can still be damaged
- [FEATURE] /fakequit <player>. As with /invisible a quit message is sent upon executing the command and the player is no longer listed online. However he can be (physically) seen.
- [FEATURE/CONFIG] you can set the Rules to be displayed ONLY on the first login on the server.
- [FEATURE] /rules command added. You can now set your rules the same way as you can set the MotD and the News. (This means, the same format options are available)
- [FEATURE] /feed <player> command. Refills the depleted hunger bar of the player or the command sender
- [FEATURE/CHANGE] /clear command now accepts an amount, so you can only remove x Blocks/Items. /clear <player> <material> <amount>
- [FEATURE/CHANGE] hour/date variable: %time. It can be used in news, motd and rules.
- [FEATURE] Support Heroes for heal & more to come.
- [FEATURE] Lockdown mode permits locking the server, letting only admins connect.
- [FEATURE] Support for args like -args (example : /exec -r script to reload the script and execute it)
- [FEATURE] Auto-convert banned.yml and muted.yml to the new system.
- [FEATURE/CHANGE] Possibility to set the played time in the MOTD (keywords : %d %h %m %s )
Example of MOTD in the locale file :
MOTD: '§6Welcome §f%player§6, there is currently §4%nb players connected: //n§6%connected //n§2You''ve played so far: §b%d day(s) %h:%m:%s'
- [FEATURE/CONFIG] Added the possibility to force the player to respawn at the spawn point you set.
- [FEATURE] Command /played or /ptime to see how much time you played on the server.
- [FEATURE/CHANGE] Admins can now list/set/tp to the homes of their players using a colon (:)
Example : /h Balor:world
for listing: /lh Balor
- [FEATURE] %lastlogin variable. It can be used in motd, rules and news. It displays the time of your last login. It uses the format given for the %time variable
- [FEATURE] Added /gm command to switch the game mode.
- [BIG FEATURE] ImmunityLvl. You can now set an immunityLvl (like maxHomePerUser); an immunityLvl is a level representing the power of the user. A user with a power of 0 can't issue commands against a user with a power of 1 or above, and a user with 150 (the max) can do everything to the lower levels.
Only one exception :
admincmd.immunityLvl.samelvl: if a user has this node, he can only issue commands against users having the same lvl.
Bug Fixes
- [BUG FIX] Corrected reload command.
- [BUG FIX] Tp loc to an un-generated chunk.
- [BUG FIX] Save the location before warping.
- [BUG FIX] If mChat is detected, change the fakeQuit message to be the same as mChat.
- [BUG FIX] With the new userData system (auto-correct invalid yaml file)
- [BUG FIX] Commands.yml was replaced after each reload.
- [BUG FIX] /roll accepted negative numbers and threw an exception
- [BUG FIX] In played time, sometimes additional time was added for no reason.
- [BUG FIX] Fixed some reappear problems with invisibility.
- [BUG FIX] NPE with Info command.
- [BUG FIX] Corrected Error with ColouredConsoleSender.
Recent Changes
- [API/CHANGE] MAJOR change in how players (and player data) are handled. Banned.yml and muted.yml are not used anymore; the data is stored with the player data.
- [CHANGE] Persistent powers across reload and restart.
- [CHANGE] display afk message when using /afk command with a message.
- [CHANGE] Use the prefix in the AFK (Blah is AFK -> [Prefix]Blah is AFK)
- [CHANGE/CONF] Separate news and MOTD in the config file -> DisplayNewsOnJoin property added in config.
- [CHANGE] Changed how the ban works. If you are coming from an older dev version, you need the node "admincmd.server.converter" and must type /cban to convert the ban system to the new one. People coming from the stable version have nothing to do.
- [CHANGE] Status of the World save upon reload/restart (like mob limit, and weather frozen).
- [CHANGE] Warp points are now divided by world; you can't tp to a warp point that is not in your current world.
- [CHANGE] Editing via /motd [msg] and /news [msg] removed (permission nodes also!). Added command /set <flag> <msg>. Possible flags: -n(ews) | -m(otd) | -r(rules). Each got a permission node as well as a parent node for all. Special flag for setting the motd a new user gets displayed: /set -u msg.
- [CHANGE/CONF] 2 new options for the %time variable:
- [CHANGE/CONF] DisplayRulesOnJoin property added in config to enable displaying the rules upon joining a server.
API for developers
- [API] Created a basic API to let other developers create their command for AdminCmd
- [API] API BREAK changed some namespace, and how to make new command
- [API] New World handling with a new API (auto-convert old spawn and warp files).
Version 5.8.1
- [BUG FIX] With an NPE in the PLAYER_INTERACT
- [BUG FIX] With a Exception in the twp command (when teleporting to a world that is not loaded)
- [BUG FIX] Minor bug fix/change
- [CHANGE] Deleted the alias /list, now use /info
- [CHANGE] Kits now have their own nodes: admincmd.kit.* to have access to all kits and admincmd.kit.KIT_NAME (replace KIT_NAME by the name of the kit) to have access to that kit.
Version 5.8
- [CHANGE] Connected players in the MOTD now have their prefix.
- [FEATURE] Tp request can be activated by default in the config.
- [FEATURE] List all the "type" existing in the game. Type /list to see possible types, and /list type to see the content.
- [FEATURE] Tp back, to tp to the last location before your last tp or death.
- [CHANGE] Unmute offline player.
- [FEATURE] Super Breaker added.
- [FEATURE] LogBlock support added for Super Breaker and air command.
- [MAJOR CHANGE] All command information (disabled, prioritized, aliases) is now in commands.yml (moved automatically)
- [FEATURE/CONFIG] Log Private Messages, you can now log every private message in the server.log by activating this configuration parameter.
- [BUG FIX] Corrected the bug with sending a PM to an invisible player when you don't have the node : admincmd.invisible.cansee
- [FEATURE/CONFIG] Possibility to broadcast server reload message to every connected player
- [BUG FIX] Corrected a ConcurrentModificationException when using some commands in certain particular cases.
- [FEATURE] You can now define your own alias for each command, see commands.yml to learn how to do it.
- [BIG FEATURE] Help command, /help list to list all plugin having commands. /help to see the first page of AdminCmd help. /help 2 -> page 2 of AdminCmd. /help Blah 1 -> see the first page of plugin Blah. You can create new help file for every plugin or take the one from the plugin Help.
- [CHANGE/CONFIG] Glinding... is now Gliding in the config. You don't have to change it, it's automatic.
- [BUG FIX] The More command takes the data (color) of the item in hand in case it has to add other such items to the inventory.
- [BUG FIX] PEX now has its own error message when you don't have the permission.
- [BUG FIX] Corrected a bug with Invisibility when reappear.
- [BUG FIX] Avoid the use of AIR item (drop, item)
- [CHANGE] Added the world that the player belong to, in the command /loc
- [OPTIMIZATION] Some review of the code used by the invisible mode.
- [OPTIMIZATION] Freezing a player should be less buggy.
- [FEATURE] Added node admincmd.player.noafkkick to avoid kick when auto-kick is enabled.
Version 5.7.15
- [FEATURE] Works with mChat prefix and custVar :
- [BUG FIX] Mob limit now works better.
- [BUG FIX] Time message corrected.
- [BUG FIX] With the Reload command, if you changed the config before reloading, it wasn't saved.
- [FEATURE/CHANGE] afk can now take a reason for the afk. When sending a msg to an afk player, the plugin shows you why the player is afk (if he gave a reason), otherwise since when he's afk.
- [FEATURE] Added a version command. /av to check the version of a given plugin, or the version of AdminCmd.
- [FEATURE/CONFIG] VerboseLog: when set to false, disables some logging like disabled functions, etc ...
Version 5.7.11
- [CHANGE/FEATURE] You can now spawn mounted mobs. Example : /mob Spider:Zombie will spawn a spider mounted by a Zombie.
- [BUG FIX] No more clones when reappearing in front of someone that has the node admincmd.invisible.cansee
- [CHANGE/CONFIG] Added a time out for tp request.
- [BUG FIX] With OddItem using the built-in group functionality, it wasn't taking the amount set.
- [LOCALE] Added message for the temporary muted player.
- [CHANGE] //n is NOW the new-line character used in news and motd.
Version 5.7.10
- [FEATURE/CONFIG] Tp at sight: you tp just above the block you are looking at. Max range can be configured in the config file.
- [CHANGE] You can set the distance for the mob spawn: /mob MobName NumberOfMob DistanceFromYou
- [FEATURE] Uptime Command added, to know the uptime of the minecraft server since the last RESTART.
- [FEATURE/CONFIG] Private Message when muted can be disabled in the config.
- [FEATURE] Kits. Kits can be set in the kits.yml file. /kit to see available kits. /kit kit to receive it. Can be used on other players.
- [CHANGE] Clean command can select the type to clear : /clear player material
Version 5.7.5
- [CHANGE] ban and mute can be now temporary : /ban player msg <timeInMinute>, /mute player <timeInMinute>
- [FEATURE] Support PermissionsEX :
- [FEATURE] Support for OddItem :
- [FEATURE] When you first connect to the server (or on the first install of the plugin) you are teleported to the spawn point. Disabled by default.
- [CONFIG] firstConnectionToSpawnPoint added to disable/enable the auto-tp to the spawn point on first connection (default = false since this version).
- [FEATURE] TP request system added. First do /tpt to activate the system. Now when someone wants to tp you or to tp to you, you'll receive a request that you can accept with /tpt yes or just ignore.
- [PERMISSION] "admincmd.spec.notprequest": people with that node will bypass the tp request system.
- [BUG FIX] Corrected Motd %connected value.
Version 5.7.1
- [BUG FIX] Corrected a bug with Bukkit Perm system and major node (admincmd.*, admincmd.player.*, etc ..)
- [BUG FIX] When you disable a command, all its aliases are also disabled.
Version 5.7 MAJOR
- [BUG FIX] Corrected a casting exception in Vulcan, Fly and Fireball
- [OPTIMIZATION] Some code optimization.
- [FEATURE] /rp command, repeat the last command with the same arguments
- [OPTIMIZATION] Tweaked fly value when gliding. Now it's more like a parachute.
- [CONFIG] These tweaked values can now be configured in the config file.
- [FEATURE/CHANGE] The time command has 2 new possibilities : pause and unpause.
- [BIG BUG FIX] The EX, AIR and UNDO commands WORK and are pretty fast, especially air used against fluids.
- [LOCALE] Correction of FREEZED -> FROZEN.
- [BUG FIX] With spy mode (see all private msg), no more exception when offline.
- [FEATURE] /afk command. IMPORTANT : If auto-afk is disabled, you must type AFK again to "be back", no detection is made when it's disabled.
- [CHANGE] Shears can be repaired.
- [FEATURE] /moreall command : Set the amount of every item in the inventory to the max.
- [CHANGE] Tp warp error message only displayed to the sender.
- [BUG FIX] No more problem with the auto-afk.
- [FEATURE] You can disable the unwanted commands.
- [FEATURE] You can PRIORITIZE some command. Meaning if another plugin have that command, mine will override it.
- [BUG FIX] repair wasn't displaying a message when used on a not reparable item.
Version 5.6.25
- [BUG FIX] Invisible bug with tp fixed.
- [LOCALE] de locale added thanks @_-DarkMinecrafter-_
Version 5.6.24
- [CONFIG] Default value of : fly, vulcan and fireball can now be configured in the config file.
- [CHANGE] To replace LAVA and WATER by air, you must be NEAR (sphere of 2 blocks radius) the fluid. I use a better algorithm.
- [CHANGE] MAX RADIUS for /ex and /air command is now 30. I'M NOT RESPONSIBLE FOR CRASH WHEN YOU ARE USING THESE COMMANDS. If you really want it, use WorldEdit.
- [FEATURE] added node : "admincmd.player.noreset" to avoid the reset of power when tp to another world.
- [FEATURE] command reloadall to reload the server.
- [CHANGE] FirstTime check. If it's the first time that the player connect to the server, he's tp to the EXACT spawn point.
Version 5.6.23
- [BUG FIX] Corrected all NPE when noMessage was set to true.
- [FEATURE] Replace command, to replace the wanted block by air in the given range (default 10 : mean a sphere with a radius of 10 !)
- [FEATURE] Undo command to undo the /replace and the /ex
Version 5.6.22
- [BUG FIX] News & MOTD are saved upon reload/restart
- [CHANGE] You can't tp to invisible people unless you have the node : admincmd.invisible.cansee
- [FEATURE] Reload command, to reload other plugin and this plugin too. /ar PluginName (EXACT plugin name). To just reload AdminCmd : /ar
Version 5.6.21
- [CHANGE] Ban and unMute can be used for offline players
Version 5.6.20
- [BUG FIX] Corrected some instantiation problem when reload
- [CHANGE] Spawn mod where you are looking at
- [CHANGE] setSpawn can set the spawn radius too : /setSpawn radius
Version 5.6.19
- [CHANGE] Syntax change for exec command when reloading : /exec -reload Cmd OR /exec -reload all to reload all the commands
- [FEATURE] Added a roll command.
- [FEATURE] Added an extinguish command to stop the fire in the wanted range. (default 20 and max 50);
Version 5.6.18
- [BUG FIX] With kill and heal command, was asking for the node admincmd.player.kill.other.other or admincmd.player.heal.other.other when trying to heal an another guy
- [FEATURE] in exec command. Added the param -reloadAll. When you do /exec -reloadAll it will reload all the command on the file.
- [FEATURE] ColoredSign : parse the signMessage, to change the $COLOR_ID to the corresponding color.
- [CONFIG] Added ColoredSign that can be disabled.
- [CHANGE] Storm command activate thundering
- [FEATURE] Rain command only rain/snow no thunder.
- [BUG FIX] in exec, was not displaying the Perm Message when trying to exect a script that you don't have the perm for.
Version 5.6.17
- [BUG FIX] With strike when targeting a player that don't exist
- [OPTIMIZATION] For the exec function, the scripts.yml can be edited on the fly.
- [CHANGE] You can force the reload of an exec script : /exec myScript reload
Version 5.6.16
- [BUG FIX] With command issued by the console and using the Yeti's Perm.
Version 5.6.15
- [CHANGE] Muted player is now reload/restart persistant.
- [FEATURE] Permissions Node : admincmd.player.bypass, when you have it, you can connect to the server even if it's full
- [BIG FEATURE] FOR EXPERIMENTED ADMIN ONLY : Possibility to execute some script/command (batch on windows and bash on Unix) using a simple file. By default I created the scripts.yml with the command hello (that do echo HelloWorld). I'M NOT RESPONSIBLE FOR EVERY SCRIPT YOU CREATE ! Every Script have it's own Permission Node : admincmd.server.exec.X where X is the name of the script in scripts.yml.
- [CHANGE] Use /n in MOTD to do new lines
- [CHANGE] MOTD nodes : added admincmd.server.motd.edit to edit the MOTD.
- [FEATURE] Added News command, works like MOTD.
Version 5.6.12
- [BUG FIX] With AFK, no more throwing NPE when there is no AFK.
- [FEATURE] Message Of The Day, can be changed in the locale File, and disabled in the configuration file.
- [FEATURE] Command /motd to set the motd without going into the locale file: %player = new connected player, %nb = (number of player connected - invisibles players)
- [BUG FIX] 5.6.11 have a debug message, corrected.
Version 5.6.11
- [OPTIMIZATION] Better threads using for the commands
- [FEATURE] In case of you are using the Offical Bukkit Permission System with a Bridge for Yeti's Perm, you can force my plugin to use the Offical Bukkit Permission System instead of the Bridge.
- [BUG FIX] Afk title that is repeated in the pseudo
Version 5.6.10
- [BUG FIX] No more AFK when leaving the server.
- [OPTIMIZATION] Optimized the auto-afk and auto-kick
- [FEATURE] You can now set the homelimit by group/user for the Official Permissions plugin with the node :admincmd.maxHomeByUser.X where X is the limit (max 150);
- [OPTIMIZATION] New way to deal with the command, using threads.
- [FEATURE] admincmd.invisible.notatarget node added. If you have it, when you are invisible, you can't be target by Mobs.
- [FEATURE] admincmd.invisible.cansee node added : You can see invisible people.
- [BIG BUG FIX] For old == Version (under 5.5.5) that are upgrading, the locales files are not generated correctly, meaning no message in the game.
- [FEATURE] Moblimit added, you can set a limit for mob spawing in the world you choose. Will not kill current monster if the limit is already exceeded when set. Set the limit to "none" to remove it.
- [FEATURE] Added no pickup command. The player will not pickup any item.
- [FEATURE] Added command weather freeze. The weather is can change.
- [CONFIG] fakeQuitWhenInvisible added, to activate or desactivate the fakeQuit/fakeLogin login message when vanish and reappear.
Version 5.6.3
- [OPTIMIZATION] The plugin will be launched AFTER every other plugin. Meaning, if an another plugin have the same command than AdminCmd, that plugin will have the priority. (Check in the server.log witch alias are disabled) Example of what you can now see in the log : Code:
- [AdminCmd] Disabled Alias(es) for bal_homelist : listhomes,
- [FEATURE] mute and unmute command.
Version 5.6.2
- [OPTIMIZATION] Some big Code optimization
- [OPTIMIZATION] New way to store location, automatic convert.
- [FEATURE] Auto-kick afk people after a certain amount of time, can be set in the config file.
- [CHANGE] Tweaked the auto-afk, if you chat, move or interact with the world, you are not AFK anymore.
- [FEATURE] Freeze command added. You can now freeze a player, useful against griefer
- [FEATURE] You can now set the home limit in the config file, or by user/group in Permissions (see above)
Version 5.6.1
- [BUG FIX] Colored item now re-works like it must.
- [FEATURE] Glide when falling in fly mode, can be disabled in the config file.
- [CHANGE] No falling damage when flying.
Version 5.6
- [FEATURE] Mobkill command, to kill the mobs. Kill all the mobs in the player world if no parameter is given.
- [BUG FIX] Some locale that were not correctly used.
- [OPIMIZATION] New color parser patched by @Speedy64
- [BUG FIX] Fix TP when you are invisible.
- [BUG FIX] When you TP to an another world and resetPowerWhenTpAnotherWorld= true, you lose invisibility.
- [CHANGE] When you became invisible, a message saying you left the server is displayed. (Same when you became visible)
- [FEATURE] FLY command, you can set the power of the fly (default 1.75)
- [FEATURE] Multi-Home, no lost of already created home. Listhomes and rmhome added
- [BUG FIX] Ban command works like it does.
Version 5.5.7
- [FEATURE] Auto-afk that can be disabled and configured in the config file
- [FEATURE] Auto-disable command when other plugin have it. (Exemple if you install multi-home, /home command will be disabled)
- [OPTIMIZATION] Recode the way that the plugin deals with commands.
- [FEATURE] Added command Ban to ban a player
- [FEATURE] Added command unBan to unban the player
Version 5.5.6
- [BUG FIX] No more duplicate Warp Point in the list
- [BUG FIX] Now Home is by player (no need to redo it).
Version 5.5.5
- [FEATURE] A config File where you can set the reset of the power when switching world. (Auto-generated)
- [FEATURE] LOCALE FILES, you can create your OWN locale file. 2 Possibilities : Edit the existing one, create an another and change the name in the config. (auto-generated)
- [FEATURE] No Message, you can desactivate ALL the plugin message by setting it to true.
- [FEATURE] Msg deletion, you don't want to display this or this message ? just edit the locale file by putting an empty string (exemple -> playerNotFound : '')
- [BUG FIX] Some minor bug fixes.
Version 5.5
- [FEATURE] Fireball command added works like Vulcan command, you can set the power, where 1.0 = Ghast Power.
- [BUG FIX] Message in console when using Vulcan and Fireball
- [FEATURE] Home and setHome commands
- [BUG FIX] Should resolve the invisible bug (now you must be still invisible after reconnect)
- [CHANGE] You have to redo your spawnLocations.
- [FEATURE] Added Warp commands and Warp section
- [BUG FIX] Corrected a rare NPE
Version 5.4.3
- [BUG FIX] No more Spam saying you don't have the permission to do that when you don't have the noBlackList node in Permissions
Version 5.4.2
- [BUG FIX] When you are Invisibile and quitting the server, no message is displayed now.
- [OPTIMIZATION] Refactoring some class, deplacing some code. Optimization of the code that manage god powers.
- [BUG FIX] When god/thor/vulcan/inv someone else then you, send a message to THAT person instead of you.
- [FEATURE] Added new command /spy to see every private message that are sended.
Version 5.4.1
- [BUG FIX] Sometime command are not detected.
- [FEATURE] Added the items.csv from Essentials to add new alias (like wplank, wdoor, etc...)
Version 5.4
- [FEATURE] Added Invisible Command.
- [OPTIMIZATION] Some code optimization.
Version 5.3.2
- [BUG FIX] With the new permission system and the admincmd.item.noblacklist node
Version 5.3.1
- [BUG FIX] With .other node an the new Bukkit Perm System.
Version 5.3
- [FEATURE] Drop command work like item command, but drop the item at the feet.
- [FEATURE] Works with new bukkit Perm. ( && )
Version 5.2
- [CHANGE] You can use item like wool:8 for alias (exemple : /aa greenWool wool:7)
- [BUG FIX] Using /setspawn and /spawn command keep the direction where you were looking at
- [FEATURE] Vulcan power, explosion at see (like the thor power) be VERY CARREFULL.
- [BUG FIX] Giving an item that don't exist don't throw an exception. (NPE)
Version 5.1
- [BUG FIX] In kick message, if there was space, only the first word was used
- [FEATURE] Kickall command
Version 5.0
- [OPTIMIZATION] Complete recode how the plugin deal with commands
- [CHANGE] God powers (thor too) can now be given to a another player (.other added in end of the permission node)
- [CHANGE] Kill and heal can be used for other player by having the right node (.other added in end of the permission node)
- [BUG FIX] Clear can now be used without parameter (clear the current user inventory)
- [CHANGE] Clear can be used for other player by having the right node (.other added in end of the permission node)
- [CHANGE] Give can be used to send item to other player by having the right node (.other added in end of the permission node)
- [CHANGE] Give syntax changed : it's /i type:<damage> number player Where damage is optional.
- [BUG FIX] Strike can now be used without parameter (strike the current player)
- [CHANGE] Strike can be used for other player by having the right node (.other added in end of the permission node)
- [CHANGE] In personnal message, if permission is installed, the prefix is used.
- [CHANGE] Location can be used for other player by having the right node (.other added in end of the permission node) | https://dev.bukkit.org/projects/admincmd/pages/change-log | CC-MAIN-2022-40 | refinedweb | 6,878 | 65.12 |
In this guide, you’ll learn how to log data with the ESP3232: Getting Started with Firebase (Realtime Database)
- ESP8266 NodeMCU: Getting Started with Firebase (Realtime Database)
- ESP32 with Firebase – Creating a Web App
-3232 gets temperatrure, humidity and pressure from the BME280 sensor.
- It gets epoch time right after gettings the readings (timestamp).
- The ESP323232 board using the Arduino core. So, make sure you have the ESP3232).323232.
6) ESP32 Datalogging (Firebase Realtime Database)
In this section, we’ll program the ESP3232 board (read best ESP32 development boards);
-32 SCL (GPIO 22) and SDA (GPIO 21) pins, as shown in the following schematic diagram.
Not familiar with the BME280 with the ESP32? Read this tutorial: ESP3232:32: Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. */ #include <Arduino.h> #include <WiFi.h> #include <Firebase_ESP_Client.h> #include <Wire.h> #include <Adafruit_Sensor.h> #include <Adafruit_BME280.h> #include "time; int timestamp; FirebaseJson json; const char* ntpServer = "pool.ntp.org"; //_t now; struct tm timeinfo; if (!getLocalTime(&timeinfo)) { //Serial.println("Failed to obtain time"); return(0); } time(&now); return now; } void setup(){ Serial.begin(115200); // Initialize BME280 sensor initBME(); initWiFi(); configTime(0, 0, ntpServer); // WiFi.h library to connect the ESP32 to the internet, the Firebase_ESP_Client.h library to interface the boards with Firebase, the Wire, Adafruit_Sensor, and Adafruit_BME280 to interface with the BME280 sensor, and the time library to get the time.
#include <Arduino.h> #include <WiFi.h> #include <Firebase_ESP_Client.h> #include <Wire.h> #include <Adafruit_Sensor.h> #include <Adafruit_BME280.h> #include "time.
The timestamp variable will be used to save time (epoch time format).
int timestamp;
To learn more about getting epoch time with the ESP32.
We’ll request the time from pool.ntp.org, which is a cluster of time servers that anyone can use to request the time.
const char* ntpServer = "pool.ntp.org";
Then, create an Adafruit_BME280 object called bme. This automatically creates a sensor object on the ESP32_t now; struct tm timeinfo; if (!getLocalTime(&timeinfo)) { //Serial.println("Failed to obtain time"); return(0); } time(&now); return now; }
setup()
In setup(), initialize the Serial Monitor for debugging purposes at a baud rate of 115200.
Serial.begin(115200);
Call the initBME() function to initialize the BME280 sensor.
initBME();
Call the initWiFi() function to initialize WiFi.
initWiFi();
Configure the time:
configTime(0, 0, ntpServer);3232.32 with our resources:
Thanks for reading.
43 thoughts on “ESP32 Data Logging to Firebase Realtime Database”
I already tried it. Works perfectly!
How many data writes can you get before google starts charging you for real-time database usage?
I have similar project running but write to Google Spreadsheet via pushing box and have 6 years of data collecting every 15 mins.
Great project, works fine, looking forward to the part 2 – display the data in graphical form. As above how much data can we save? Is there a way of auto deleting data older than a set timestamp?
Thanks again
Bruce
Very good, this worked on first try. It was only one 0 too much in the timerDelay.
unsigned long timerDelay = 1800000;
In the text it was corrected till 180000.
I Guess 1800000 should be intentionally correct; it means to get data every half hour!!
Sara, am I wrong?
Hi.
Yes. I say three minutes everywhere, but then add 30 minutes in the code. that’s my fault.
You’re right.
I’ll fix that.
Thanks.
Thank you for this tutorial, much appriciated.
Cheers,
Rene.
Thanks for this. I have adapted the code to use a DS18B20 as I only wish to record temperatures, so have substituted the BME280 libraries for One Wire and Dallas. This works fine. Is there a simple way to change the Timestamp to display the actual time and date in a readable format?
Hi.
You can save time as a string in a human-readable format.
Check this tutorial to learn how to get the date (day, month, year, etc) and hour, minute, second:
Regards
Sara
So, is it working now?
Weird that it gets messed up with PlatformIO…
This program is working fine, and I wanted to use it for logging data from the boat and cottage. My plan was to use and old phone as hotspot or shared internet. My problem is that the phone turns of hotspot after 19 hours.
Do you know how I can keep it on?
Sorry, this question is a bit on the side for this project.
Maybe I found a solution. On my phone I changed settings. Mobile Hotspot>configure>Advanced>Turn off when no device connected for … Never timeout.
I will test this now and see.
OK, now I know a lot more. The logging had stopped when I got home, but when I rebooted the ESP32, the logging continued. So, the problem is not the mobile phone, but the ESP32. Now I will try to make the ESP32 reboot itself every hour, then it should be fine I hope.
It has been running for more than two days now, logging every 3 min. So I think this workaround will make it work. Here is the code I used:
In the start of the program:
unsigned long bootDelay = 3600000; // One hour = 3600 sec
unsigned long bootPrevMillis;
In setup:
bootPrevMillis = millis();
In the loop:
if (millis() – bootPrevMillis > bootDelay){
ESP.restart();
}
When I boot I got two logging to close in time, but with this change, it will logg with 3 min. intervall also during boot.
unsigned long bootDelay = 3593000; // One hour -7 sec
Hi Svein I had a similar problem with a smart meter I made using a EPS32. Every 10 days or so it would lock up, but I cheated and ended up using a timer plug to switch it off for 2mins a day then reboot.
But I will use your idea now, thankyou.
Adam
I cannot get Firebase (Fb) to recognize my API_KEY. Get the error: ” Invalid API_KEY …” on my Serial Monitor with an error code -32.
My API_KEY, Firebase URL (both with the location and without), UID, and my email with the Firebase USER_PASSWORD have been thoroughly checked and are correct as transmitted. My billing account is set up at Fb. I have reached out to Fb and they just confirmed that I have a valid API_KEY there and said they could not help with code (understandable). Any suggestions?
Try to create another Firebase project and use the API key for that new project.
I’m not sure what might be causing the issue…
Thank you Sara. That worked. Will continue on. Appreciate the help!
Great!
… works fine 🙂 .. I noticed that you did not use a database.json to establish the realtime database structure (like you did in the eBook “Firebase_Web_App_ESP32”), but rather established it by defining databasePath = “/UsersData/” + uid + “/readings”; … so, this could have been done in the eBook as well, but probably too many lines, and for that complex database, the .json file was preferred. … is that a correct assessment?
Hi.
Yes, you could have done that in the eBook as well.
In the eBook, we create the database with the json file so that the readers can better understand how the database works.However, that step can be done in the Arduino code or on the Firebase Web App.
Regards,
Sara
Can you please post a screenshot of what your RTDB ‘Data’ tab looks like after the ESP runs for a bit?
(Of course, blur out any id’s.)
thank you.
Hi.
There is a picture in the post:?
Regards,
Sara
sorry and nevermind – already in there!
Can you please post a screenshot of what your RTDB ‘Data’ tab looks like after the ESP runs for a bit?
(Of course, blur out any id’s.)
thank you.
Hello Sara..
Thanks for this great tutorial.
If there is more than 1 user, do I have to enter more USER_EMAIL and USER_PASSWORD on the Arduino code?
Sorry for my basic question.
Thanks again.
Hi.
No.
You need to create another user in your Firebase project.
Then, you need to change the database rules so that the user has access to the data.
Then, you also need to change the paths on the javascript files so that the new user can access the database path of the user that is publishing the data.
Regards,
Sara
Hi, I’m getting the following error after uploading the code to ESP-WROOM-32::1044
load:0x40078000,len:10124
load:0x40080400,len:5828
entry 0x400806a8
Token info: type = id token, status = on request
assertion “Invalid mbox” failed: file “/home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/lwip/lwip/src/api/tcpip.c”, line 374, function: tcpip_send_msg_wait_sem
abort() was called at PC 0x400fd20f on core 1
ELF file SHA256: 0000000000000000
Backtrace: 0x40087844:0x3ffb19f0 0x40087ac1:0x3ffb1a10 0x400fd20f:0x3ffb1a30 0x4013aa5f:0x3ffb1a60 0x4014435d:0x3ffb1a90 0x40144510:0x3ffb1ab0 0x40142b84:0x3ffb1af0 0x401556db:0x3ffb1b10
0x40155382:0x3ffb1d90 0x401554bd:0x3ffb1dc0 0x400e151e:0x3ffb1de0 0x4016ce29:0x3ffb1e00 0x400d3892:0x3ffb1e20 0x400df75d:0x3ffb1e40 0x400e0c0f:0x3ffb1ec0 0x400e136d:0x3ffb1f10 0x400d340d:0x3ffb1f30 0x400d15f2:0x3ffb1f70 0x400e399e:0x3ffb1fb0 0x40088ad2:0x3ffb1fd0
Any ideas how to resolve?
Same issue for me!
Great tutorial! Managed to get it running with no problem!
Why do you suggest increasing the delay? As in “Once you test this project and check that everything is working as expected, we recommend increasing the delay.”
Would you say it overloads the database or something? For a project I’m doing it would be great if I managed to a reading every 5 seconds or so (I’m guessing it takes longer than that to actually send the data over Firebase?).
Regards,
Luiz
Hi.
I recommend increasing the delay time to not overload the database. But, if your project required readings every 5 seconds, there’s no problem.
Just make sure you check your database usage once in a while.
Regards,
Sara
Ahh perfect, I can alway clear it after extracting the data, so it shouldn’t be a problem! Thanks again, and as always, great tutorials and follow through!
Sara;
I’m probably driving you crazy, between this demo and the previous. I am still unable to get either to work. This one seems to be progressing the furthest, but I am getting the following error when I upload the code to my device and run:
Token info: type = id token, status = on request
Token info: type = id token, status = ready
Getting User UID
User UID: S1lazRbly6VCwmyrCYJ8ZAyuDTn1
Water Temp Fahrenheit 69.57ºF
time: 1645729599
[E][ssl_client.cpp:98] start_ssl_client(): Connect to Server failed!
[E][WiFiClientSecure.cpp:133] connect(): start_ssl_client: -1
Set json… send request failed
The user ID it pulls from the system is consistent with what is shown in firebase, but it still fails to connect to server.
anything you can suggest would be appreciated.
If I go into Firebase and run a simulated read or write, using the email address and User ID, it runs successfully.
Hi Robert,
Send an email to our support and I’ll try to help you via email.
Regards,
Sara
How to send max30102 data from esp32 to firebase ?
I keep getting “set json… not found” error
Hi.
Can you tell me exactly the error you’re getting?
Regards,
Sara
Hello, I have some issues which says ‘getLocalTime’ was not declared in this scope.
and error: ‘auth’ does not name a type. How do I resolve it?
Hi.
Did you copy the whole code and installed all the required libraries?
What is the ESP32 boards version you have installed?
Regards,
Sara
This is a great tutorial and project! Do you have any other examples with other sensor types? Looking for an MFRC522 integration from your other tutorial: Would like to just send all read data to the cloud.
Hello, The code is not compiling on ESP-8266 (Node MCU).
error: ‘getLocalTime’ was not declared in this scope
if (!getLocalTime(&timeinfo)) {
Hi.
Follow this tutorial for the ESP8266 board:
Regards,
Sara | https://randomnerdtutorials.com/esp32-data-logging-firebase-realtime-database/?replytocom=727745 | CC-MAIN-2022-27 | refinedweb | 2,004 | 75.1 |
AspExe is a small command line tool that will take an .aspx file and then compile and execute (=render) it. The output is saved to a specified output file. To use the tool, execute the following command on the command line:
aspexe.exe driveinfo.aspx output.html
After executing driveinfo.aspx, the output.html file is generated. Opening this file in a browser should result in something as shown at the top of this article. In this particular example, the drive information is dynamically read and generated from the aspx page.
Warning: this tool will work only with a subset of .aspx pages. For details, see the advantages and limitations section further on in this article.
So, I started to ponder, what kind of framework could fill this need? Creating XML or HTML pages as output using ASP.NET would seem ideal, if it weren't for the fact that these pages are required to be hosted on an Internet Information Server. This is hardly the kind of lightweight solution I wanted to embed inside my Windows application.
So, I decided to create a lightweight template parser/compiler/executor that could read .aspx files and create an output text file (or any other text format) much like it is handled in IIS. Hence the AspExe project was born.
AspExe is a small wrapper around a template class. The template class contains a parser, compiler, and executor of a template page. Basically, what the AspExe does is the following:
Response
Because internally C# code is generated, and since C# code can be embedded inside templates, you have access to the full .NET framework, even from within the templates.
The best advantage of all, however I find, is that the template is an aspx file. This means, the template can be edited inside Visual Studio, with full support for IntelliSense! Writing templates has never been easier!!
Note, however, that AspExe does have some restrictions:
You can only execute a limited set of .aspx files. AspExe, for instance, does not bother with code-behind files. All scripting must be contained within the aspx page.
Also, only a few directives are supported: @Page, @Assembly, and @Import. All other directives are ignored. So, for instance, including Web controls inside the page will not work.
@Page
@Assembly
@Import
Using the code is simple. Add the Template*.cs files to your project and add the AspExe namespace. Then, execute the following code example to load, parse, compile, and execute an .aspx template and save it to an output file:
AspExe
// three steps to compile and execute an ASP.NET aspx file:
// step 1: load the .aspx template file, e.g. the default page
Template template = new Template(@"Default.aspx");
// step 2: compile the template. Don't worry about
// compiler errors. If any, those will be exported to the output file
template.Compile();
// step 3: execute the compiled template. run time errors are exported to the output file.
template.Execute(@"output.html");
Optionally, it is possible to also pass parameters and runtime information to the template on execution.
// define a 'domain' object to pass on to the template,
// e.g. the current user information.
System.Security.Principal.WindowsIdentity user =
System.Security.Principal.WindowsIdentity.GetCurrent();
// create a template parameter collection object
TemplateParameters tps = new TemplateParameters();
// Add a new parameter to the collection.
// Note that the "CurrentUser"
// must be declared publicly inside the template.
tps.Add(user.GetType(), "CurrentUser", user);
// execute the template passing on the collection with parameters.
template.Execute(targetFile, tps);
In order for the template to use the user information, it is required that the CurrentUser is declared in the template. This can be done as follows:
CurrentUser
//inside the template declare a SCRIPT section:
<SCRIPT runat="Server">
public System.Security.Principal.WindowsIdentity CurrentUser;
</SCRIPT>
Note that it is imperative that the runat="Server" attribute is included inside the <SCRIPT> tag. Otherwise, the code block will not be included by the parser and the compiler. Note that this is also required by ASP.NET. The script tag allows you to declare variables and also to define your own methods and functions.
runat="Server"
<SCRIPT>
Using this concept can potentially be very powerful. Specially because it is not required to output HTML, but XML or even CSV. Creating the right output XML could result in files that can be read as Excel or Visio. As an example, I included the "driveinfo2excel.aspx" file which exports drive information to an Excel file. Run the following command:
aspexe.exe driveinfo2excel.aspx output.xls
The output.xls can be opened directly in MS Excel.
Another point of interest is the support for the @Assembly and @Import directives. This allows the template to reference third party or your own libraries (it is required to be a .dll though). Code in the template will be able to access the functionality in the referenced assembly. E.g.: <%@ Assembly Name="MyLibrary" %>. Note that you must exclude the .dll extension.
<%@ Assembly Name="MyLibrary" %>
The @Import directive allows you to declare a namespace to use inside the template. E.g., you can import your own assembly and then declare the namespaces used in the assembly. Example: <%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.IO" %>
Hopefully, this project is useful for developers out there. I am interested in hints or suggestions for improvement, but I would also like to know the alternatives. If you have any, let me know why it would be an improvement or a better. | https://www.codeproject.com/Articles/22671/AspExe-a-small-ASP-NET-compiler-and-executor-for-d?msg=2381243 | CC-MAIN-2017-30 | refinedweb | 910 | 51.55 |
This appendix describes the overall structure of
cvs commands, and describes some commands in
detail (others are described elsewhere; for a quick
reference to cvs commands, see node `Invoking CVS' in the CVS manual). For
example, the following line in .cvsrc

cvs -z6

causes cvs to use compression level 6.
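A .cvsrc file can also set per-command defaults: each line starts with either the word cvs (for global options) or a command name (for that command's default options). As an illustration only, with option values chosen as examples rather than recommendations, a fuller ~/.cvsrc might look like:

```
# Global options: the line beginning with "cvs" applies to every invocation.
cvs -q -z6
# Per-command defaults: the first word names the command.
update -d -P
checkout -P
diff -u
```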
The available cvs_options (that are given to the
left of cvs_command) are:

-R
Same effect as if the CVSREADONLYFS environment
variable is set. Using -R can also considerably
speed up checkouts over NFS.

-D date_spec
Use the most recent revision no later than date_spec.
Available with the annotate, checkout, diff, export, history,
ls, rdiff, rls, rtag, tag, and update commands.
(The history command uses this option in a
slightly different way; see node `history options' in the CVS manual).
For a complete description of the date formats accepted by cvs,
see node `Date input formats' in the CVS manual.

Remember to quote the argument to the -D
flag so that your shell doesn't expand it. For example:

cvs diff -D "1 hour ago" cvs.texinfo

See node `commit options' in the CVS manual, and
see node `Removing files' in the CVS manual.
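The reason for the quoting rule can be seen with plain shell word splitting, no cvs required. This sketch counts how many arguments a command would actually receive in each case:

```shell
# Unquoted, the shell splits the date spec into three words,
# so a command would see -D plus "1", "hour", and "ago" separately.
set -- -D 1 hour ago
unquoted=$#

# Quoted, -D receives the whole date spec as a single argument.
set -- -D "1 hour ago"
quoted=$#

echo "unquoted=$unquoted quoted=$quoted"   # prints: unquoted=4 quoted=2
```

With the unquoted form, cvs would take 1 as the date and then choke on the stray arguments hour and ago.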
The -k option is available with the add, checkout, diff,
export, import, rdiff, and update commands.
WARNING: Prior to CVS version 1.12.2, the -k flag
overrode the -kb indication for a binary file. This could
sometimes corrupt binary files. See node `Merging and keywords' in the CVS manual, for
more.
-l
Local; run only in current working directory, rather than
recursing through subdirectories.
Available with the following commands: annotate, checkout,
commit, diff, edit, editors, export,
log, rdiff, remove, rtag,
status, tag, unedit, update, watch,
and watchers.
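As a sketch (assuming an already checked-out working copy), -l limits an otherwise recursive command to the current directory:

```
$ cvs update -l      # this directory only
$ cvs update         # recurses into subdirectories
```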
-m message
Use message as log information, instead of invoking an
editor.
Available with the following commands: add,
commit and import.
-n
Do not run any checkout/commit/tag program. (A program can be
specified to run in the modules database; this option bypasses it.)
Note: this is not the same as the cvs -n
program option, which you can specify to the left of a cvs command!
Available with the checkout, commit, export,
and rtag commands.
-R
Process directories recursively.
Available with the following commands: annotate, checkout,
commit, diff, edit, editors, export,
ls, rdiff, remove, rls, rtag, status, tag,
unedit, update, watch, and watchers.

-r tag[:date]
Use the revision specified by the tag argument (and the date
argument for the commands which accept it) instead of the
default head revision.
The tag can be either a symbolic or numeric tag, as
described in see node `Tagsaq in the CVS manual, or the name of a branch, as
described in see node `Branching and mergingaq in the CVS manual. When tag is the name of a
branch, some commands accept the optional date argument to specify
the revisions as of the given date on,
and update commands.
see node `configaqaqt very useful, in the future it may
change to be like the :: case.
Due to the way cvs handles branches rev
cannot be specified symbolically if it is a branch.
see node `Magic branch numbersaqaaqaqt
tell you anything about lines which have been deleted
or replaced; you need to use cvs diff for that
(see node `diffaq in the CVS manual).
The options to cvs annotate are listed in
see node `Invoking CVSaq in the CVS manual, and can be used to select the files
and revisions to annotate. The options are described
in more detail there and in see node `Common optionsaq in the CVS manual. node `modulesaq checkout are created
read-write, unless the -r option to cvs
(see node `Global optionsaq in the CVS manual) is specified, the
CVSREAD environment variable is specified
(see node `Environment variablesaq in the CVS manual), or a watch is in
effect for that file (see node `Watchesaq
forget to change your directory to the top level
directory.
For the output produced by the checkout command
see see node `update outputaq in the CVS manual.
These standard options are supported by checkout
(see node `Common optionsaqaqt contain empty
intermediate directories. In this case only,
cvs tries to ``shortenaqaqaq in the CVS manual.
Get a copy of the module tc:
Get a copy of the module tc as it looked one day
ago:
Use commit when you want to incorporate changes
from your working source files into the source
repository.
If you donaqt `updateaqaq in the CVS manual, and see node `loginfoaq in the CVS manual)
and placed in the rcs file inside the
repository. This log message can be retrieved with the
log command; see see node `logaq see node `logaq in the CVS manual,
see node `File statusaq in the CVS manual.
These standard options are supported by commit
(see node `Common optionsaq in the CVS manual, for a complete description of
them):
commit also supports these options:
Force cvs to commit a new revision even if you havenaqt
made any changes to the file. As of cvs version 1.12.10,
it also causes the -c option.
You can commit to a branch revision (one that has an
even number of dots) with the -r option. To
create a branch revision, use the -b option
of the rtag or tag commands
(see node `Branching and mergingaqaq in the CVS manual.
These standard options are supported by diff
(see node `Common optionsaq in the CVS manual, for a complete description of
them):aqaq style.aqs normal format. You can tailor this command
to get fine control over diffaqsaqt `Common optionsaq node `Common optionsaqaq `cvsignoreaq in the CVS manual), it does not import it and prints
I followed by the filename (see node `import outputaq `Wrappersaq `Getting the sourceaq in the CVS manual).
This standard option is supported by import
(see node `Common optionsaq in the CVS manual, for a complete description):
There are the following additional special options.
name can be a file name pattern of the same type
that you can specify in the .cvsignore file.
see node `cvsignoreaq in the CVS manual.
spec can be a file name pattern of the same type
that you can specify in the .cvswrappers
file. see node `Wrappersaq in the CVS manual.aqs default branch,
and placing the file in the Attic (see node `Atticaq in the CVS manual) directory.
Use of this option can be forced on a repository-wide basis
by setting the ImportNewFilesToVendorBranchOnly option in
CVSROOT/config (see node `configaq in the CVS manual).
import keeps you informed of its progress by printing a line
for each file, preceded by one character indicating the status of the file:
See see node `Tracking sourcesaq in the CVS manual, and see node `From filesaq in the CVS manual. node `Common optionsaq. -r option and
its argument.:
(If you are using a csh-style shell, like tcsh,
you would need to prefix the examples above with env.)aqt `Common optionsaq in the CVS manual, for a complete description of
them):
In addition to the above, these options are available:aqt lock files, it
isnaqt strictly necessary to use this command. You can
always simply delete your working directory, if you
like; but you risk losing changes you may have
forgotten, and you leave no trace in the cvs history
file (see node `history fileaq in the CVS manual) that youaqve abandoned your:
WARNING: The release command deletes
all directories and files recursively. This
has the very serious side-effect that any directory
that you have created inside your checked-out sources,
and not added to the repository (using the add
command; see node `Adding filesaq in the CVS manual) will be silently deleted---even
if it is non-empty!
Before release releases your sources it will
print a one-line message for any file that is not
up-to-date.
Release the tc directory, and delete your local working copy
of the files.
After youaqve node `Common optionsaqaq in the CVS manual, for more.
update and checkout keep you informed of
their progress by printing a line for each file, preceded
by one character indicating the status of the file:
M can indicate one of two states for a file
youaq). | https://www.linuxhowtos.org/manpages/1/cvs.htm | CC-MAIN-2022-27 | refinedweb | 1,234 | 59.03 |
Created on 2008-07-02 13:32 by mishok13, last changed 2008-12-05 10:13 by georg.brandl.
Multiprocessing docs contain examples, that are not valid py3k code,
mostly because of print used as a statement. Example (taken from
multiprocessing.rst):
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
If no one is working on this already, than I'll start fixing this and
will present a patch in 2 or 3 days.
If you're willing to make the patch - I can review and submit. I
appreciate it
So, after 5 days of silence I present my current status on the patch.
This patch fixes Doc/includes/mp_*.py examples, except for the fact that
I couldn't make mp_distributing.py work, but I'm still working on this
issue.
And this patch is for Doc/library/multiprocessing.rst. Still, there are
lot of issues, and as you none of you (Jesse or Richard) answered my
email, I'll post them tomorrow here. Right now, the patch. :)
Thanks - sorry I didn't reply to the mail yet, had to deal with some
other stuff first, I should be freed up tonight
OK, then ignore the previous email, I'll send you a new one, with
updated questions..
btw, some of the docstrings are also outdated, e.g. Pool.imap, Pool.map,
etc. Should I handle this one too?
that's your call Andrii
OK, I'll work on this too. :) Patch should be ready by Monday.
The docstrings are now fixed too. | http://bugs.python.org/issue3256 | crawl-002 | refinedweb | 269 | 75.81 |
I want to create a PyTorch tutorial using MNIST data set.In TensorFlow, there is a simple way to download, extract and load the MNIST data set as below.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./mnist/data/")
x_train = mnist.train.images # numpy array
y_train = mnist.train.labels
x_test = mnist.test.images
y_test = mnist.test.labels
Is there any simple way to handle this in PyTorch?
It seems to support MNIST data set in torchvision.datasets. I was confused because PyTorch documentation does not specify MNIST.
Yes it already there - see here
and here,
The code looks something like this,
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=True, **kwargs)
How do you subset the MNIST training data? It's 60,000 images, how can you reduce it to say 2000?
Here's the code
>>> from torchvision import datasets, transforms
>>>
>>>
>>> train_all_mnist = datasets.MNIST('../data', train=True, download=True,
... transform=transforms.Compose([
... transforms.ToTensor(),
... transforms.Normalize((0.1307,), (0.3081,))
... ]))
Files already downloaded
>>> train_all_mnist
<torchvision.datasets.mnist.MNIST object at 0x7f89a150cfd0>
How do I subset train_all_mnist ?
train_all_mnist
Or, alternatively I could just download it again, and hack this line to 2000,
2000
It's a bit ugly - anyone know a neater way to do this?
what is your purpose of subsetting the training dataset?
I'm interested in Omniglot, which is like an inverse, MNIST, lots of classes, each with a small number of examples.
Take a look, here
By the way - thank you for your tutorials - they are very clear and helpful to learn from.
Best regards,
Ajay
omniglot is in this Pull Request:
Ha, fanck Q
Spent a hour hacking together my own loader - but this looks better!
Seems to be the easiest data set for experimenting with one-shot learning?
Whats the current best methodology for Omniglot? Who or what's doing the best at the moment?
@pranv set the record on Omniglot recently with his paper:
Attentive Recurrent Comparators
Thanks for that
It look's like the DRAW I implemented in Torch years ago , without the VAE, and decoder/generative canvas.
I though you might like this, implementation of a GAN on Omniglot,
Code for training a GAN on the Omniglot dataset using the network described in: Task Specific Adversarial Cost Function
Have you found a better way to do this?
Nope sorry - been totally snowed under the past couple of months - not had any time to work on it.
If you're referring to the alternative cost functions for GANs I don't think they make much difference?
If you're referring to non Gaussian attention mechanisms for the DRAW encoder, I don't know of any better approach than @pranav 's as mentioned above. I think he's open sourced his code?
Cheers,
Aj
The code for Attentive Recurrent Comparators is here:
It includes Omniglot data downloading and iterating scripts along with all the models proposed in the paper (the nets are written and trained with Theano).
I will try to submit a PR for torchvision.datasets.Omniglot if I find some time
torchvision.datasets.Omniglot | https://discuss.pytorch.org/t/resolved-is-there-mnist-dataset-in-torchvision-datasets/867 | CC-MAIN-2017-39 | refinedweb | 560 | 58.38 |
Pycricbuzz - Cricbuzz API for Python
Pycricbuzz is a python library which can be used to get live scores, commentary and full scorecard for recent and live matches. In case you want to know how the library was developed, you can watch the below video. If you just want to use the library, then you can skip the video.
Installing pycricbuzz
pip install pycricbuzz or pip3 install pycricbuzz
Or you can directly download the package from github:pycricbuzz github
Importing pycricbuzz in your script
from pycricbuzz import Cricbuzz
Create a cricbuzz object
c = Cricbuzz()
We can use this object to work with all the functions provided by pycricbuzz.
Fetch all the matches provided by cricbuzz
from pycricbuzz import Cricbuzz import json c = Cricbuzz() matches = c.matches() print (json.dumps(matches,indent=4)) #for pretty prinitng
Output(Printing details of one of matches from list of matches):
[ { "id": "21653", "mchstate": "toss", "mnum": "5th ODI", "official": { "referee": { "country": "Ind", "id": "3894", "name": "Javagal Srinath" }, "umpire1": { "country": "Afg", "id": "11150", "name": "Bismillah Jan Shinwari" }, "umpire2": { "country": "Ind", "id": "7498", "name": "S Ravi" }, "umpire3": { "country": "Afg", "id": "11152", "name": "Ahmed Shah Pakteen" } }, "srs": "Afghanistan v Ireland in India 2019", "start_time": "2019-03-10 13:00:00", "status": "IRE opt to bowl", "team1": { "name": "Afghanistan", "squad": [ "Mohammad Shahzad", "Javed Ahmadi", "Rahmat", "Asghar Afghan", "Najibullah", "Shenwari", "Nabi", "Rashid Khan", "Mujeeb", "Zahir Khan", "Shapoor" ], "squad_bench": [ "Hazratullah Zazai", "Noor Ali", "Ikram Ali Khil", "Naib", "Aftab Alam", "Dawlat Zadran", "Shahidi", "Shirzad", "Fareed Malik", "Karim Janat" ] }, "team2": { "name": "Ireland", "squad": [ "Porterfield", "Stirling", "Andy Balbirnie", "Simi Singh", "Kevin O Brien", "Dockrell", "Stuart Poynter", "Andy McBrine", "James Cameron", "Murtagh", "Rankin" ], "squad_bench": [ "Chase", "James McCollum", "McCarthy", "Lorcan Tucker", "S Thompson" ] }, "toss": "Ireland elect to bowl", "type": "ODI", "venue_location": "Dehradun, Uttarakhand, India", "venue_name": "Rajiv Gandhi International Cricket Stadium" } ]
We get a list of matches with each match having its details. The ‘id’ attribute of a match is important as it will be used to fetch commentary and scorecard for that match. There is also another way to get the information for a match using match id.
def match_info(mid): c = Cricbuzz() minfo = c.matchinfo(mid) print(json.dumps(minfo, indent=4, sort_keys=True))
The output will be same as above. It’s another way of getting match information when you have match_id with you.
Fetching the live score of a match
def live_score(mid): c = Cricbuzz() lscore = c.livescore(mid) print(json.dumps(lscore, indent=4, sort_keys=True))
Output:
{ "batting": { "batsman": [ { "balls": "43", "fours": "0", "name": "Asghar Afghan", "runs": "8", "six": "0" }, { "balls": "32", "fours": "1", "name": "Nabi", "runs": "19", "six": "0" } ], "score": [ { "declare": null, "inning_num": "1", "overs": "25.2", "runs": "77", "wickets": "4" } ], "team": "Afghanistan" }, "bowling": { "bowler": [ { "maidens": "0", "name": "James Cameron", "overs": "4.2", "runs": "15", "wickets": "0" } ], "score": [], "team": "Ireland" } }
It gives us the information about the match and details of the batting and bowling team. The batsman and bowlers are the current two batsman batting and current two bowlers bowling.
Fetch commentary of the match
def commentary(mid): c = Cricbuzz() comm = c.commentary(mid) print(json.dumps(comm, indent=4, sort_keys=True))
Output:
{ "commentary": [ { "comm": "Simi Singh to Nabi, no run", "over": "26.2" }, { "comm": "Simi Singh to Nabi, no run, tossed up outside off, Nabi gets an inside edge on the clip to short fine", "over": "26.1" }, { "comm": "Asghar Afghan is still struggling from the shoulder injury he sustained in the last match. Winced in pain after that last six, but going on for his team", "over": null }, { "comm": "James Cameron to Asghar Afghan, no run, tossed up on off, Afghan leans well forward and blocks", "over": "25.6" }, { "comm": "James Cameron to Asghar Afghan, no run, flighted on the leg-stump line, clipped straight to the fielder at backward leg, wanted a single and is sent abck", "over": "25.5" }, { "comm": "James Cameron to Asghar Afghan, <b>SIX</b>, that has been <b>thumped</b>, juicy full-toss from Cameron-Dow, Afghan moves across and slugs it high and over backward square leg for a maximum", "over": "25.4" }, { "comm": "James Cameron to Asghar Afghan, <b>FOUR</b>, that's a poor ball from Cameron-Dow, dropped short and wide of off, Afghan made room and slapped it wide of cover - beats the fielder getting across from the deep", "over": "25.3" }, { "comm": "James Cameron to Asghar Afghan, no run, dropped short and wide of off, cracked straight to Dockrell at cover", "over": "25.2" }, { "comm": "James Cameron to Asghar Afghan, no run, turn for Cameron-Dow, but once again the length is short, allows Afghan to push it to the off-side", "over": "25.1" }, { "comm": "Simi Singh to Nabi, no run, full and at the stumps, whipped straight to the fielder at mid-wicket", "over": "24.6" } ] }
“Commentary” is a list containing all the commentary texts.
Fetch scorecard of a match
def scorecard(mid): c = Cricbuzz() scard = c.scorecard(mid) print(json.dumps(scard, indent=4, sort_keys=True))
{ "scorecard": [ { "batcard": [ { "balls": "4", "dismissal": "c James Cameron b Murtagh", "fours": "0", "name": "Mohammad Shahzad", "runs": "6", "six": "1" }, { "balls": "30", "dismissal": " b Andy McBrine", "fours": "2", "name": "Javed Ahmadi", "runs": "24", "six": "1" }, { "balls": "42", "dismissal": "c Stirling b Dockrell", "fours": "0", "name": "Rahmat", "runs": "17", "six": "1" }, { "balls": "47", "dismissal": "batting", "fours": "1", "name": "Asghar Afghan", "runs": "18", "six": "1" }, { "balls": "1", "dismissal": "c Andy McBrine b Dockrell", "fours": "0", "name": "Shenwari", "runs": "0", "six": "0" }, { "balls": "38", "dismissal": "batting", "fours": "1", "name": "Nabi", "runs": "20", "six": "0" } ], "batteam": "Afghanistan", "bowlcard": [ { "maidens": "1", "name": "Murtagh", "nballs": "0", "overs": "5", "runs": "18", "wickets": "1", "wides": "0" }, { "maidens": "0", "name": "Rankin", "nballs": "0", "overs": "3", "runs": "13", "wickets": "0", "wides": "1" }, { "maidens": "1", "name": "Andy McBrine", "nballs": "0", "overs": "5", "runs": "7", "wickets": "1", "wides": "1" }, { "maidens": "0", "name": "Dockrell", "nballs": "0", "overs": "6", "runs": "18", "wickets": "2", "wides": "0" }, { "maidens": "0", "name": "James Cameron", "nballs": "0", "overs": "5", "runs": "25", "wickets": "0", "wides": "1" }, { "maidens": "0", "name": "Simi Singh", "nballs": "0", "overs": "3", "runs": "7", "wickets": "0", "wides": "0" } ], "bowlteam": "Ireland", "extras": { "byes": "0", "lbyes": "0", "nballs": "0", "penalty": "0", "total": "3", "wides": "3" }, "fall_wickets": [ { "name": "Mohammad Shahzad", "overs": "0.4", "score": "6", "wkt_num": "1" }, { "name": "Javed Ahmadi", "overs": "11.3", "score": "39", "wkt_num": "2" }, { "name": "Rahmat", "overs": "14.3", "score": "50", "wkt_num": "3" }, { "name": "Shenwari", "overs": "14.4", "score": "50", "wkt_num": "4" } ], "inng_num": "1", "overs": "27", "runs": "88", "wickets": "4" } ] } | 
https://shivammitra.com/python/cricket-library-for-python/ | CC-MAIN-2019-35 | refinedweb | 1,064 | 68.13 |
To put it in a simple manner, Extension Methods have been introduced in .NET 3.5 framework to add methods to a class without altering the code of that class. Most of the times you don't have access to create your own methods in the third party dlls and you want to write your own method in that class, then the first question comes in your mind, how will I handle it?The answer is simple: "Extension Methods".Why Extension Method: Another question is, why do we need to use the extension methods, we can create our own methods in client application or in other class library and we can call those methods from the client application but the important point is, these methods would not be the part of that class (class of the third party dll). So adding the extension methods in the class, you are indirectly including another behavior in that class. I am talking about the class which is the part of the third party dll (Most Important).So when you create your own extension methods for specific types those methods would be accessible for that type.This can be easily understood by the following diagram:(I am object of the class)After some time, client application wants to add another behavior in that class; let's say 'talk' then extension methods can be used to add this extra behavior in that class without altering any code.Implementation: The following is the structure of the third party class, which is not accessible to the developer. This class contains a method 'Walk'.public class Class1{ public String Walk() { return "I can Walk"; }}
Now it's time to create the Extension Method. Just use the following steps:Step 1: Create the Static ClassStep 2: Create the Static Method and give any logical name to it. Step 3: In the Extension Method Parameter, just pass this along with the object type and the object name as given below:The client application can access both the method "Walk" and the extension method "TalkExtMethod":Class1 obj = new Class1 ();
Obj.TalkExtMethod () obj.Walk ()Benefits: There may be various benefits of using the Extension Methods in your code. Let's say you are using the method1 () of the class in the third party dll and the object instantiation has failed due to another reason; then you will get the object reference error while calling the method1 () if you did not handle the null reference check.Class1 obj = GetClass1Object ();//If the obj is null, you will get the object reference errorobj.Method1();Although the following code is safer than previous one but the problem is you will have to include this check every time when you call the Method1 and if the developer forgets to include this check in the code then the program throws an object reference error.Class1 obj = GetClass1Object ();if (obj!= null){ obj.Method1 ();}If we use the extension method to solve this problem we get outstanding results. Instead of checking the null reference every time in the code we handle it in the extension methods which is called by the client application.public static string Method1ExtMethod(this Class1 objClass){ //If the obj is not null, then call the Method1()
if (objClass! = null) { return objClass.Method1 (); } return string.Empty; }
Class1 obj = GetClass1Object ();//Don't include any check whether the object is null or notobj.Method1ExtMethod()
Extension Methods in .NET
Building the Really Really Really Simple RogueLike V0.1 With C#
You have presented your article nicely.
Can we use private members of the class for which this extension method is added? | http://www.c-sharpcorner.com/UploadFile/rishi.mishra/extension-methods-in-net/ | crawl-003 | refinedweb | 601 | 59.94 |
Created on 2019-04-11 17:18 by xtreak, last changed 2019-04-11 17:18 by xtreak.
I came across this issue in issue36593 where MagicMock had a custom __class__ attribute set and one of the methods used super() which caused __class__ not to be set. This seems to have been fixed in the past with issue12370 and a workaround to alias super at module level and use it was suggested in msg161704. Usage of the alias seems to solve the issue for Mock but the fix for __class__ breaks when sys.settrace is set. Example code as below with custom __class__ defined and with running the code under sys.settrace() super() doesn't set __class__ but using _safe_super alias works. Another aspect in the mock related issue is that the call to super() is under a codepath that is not executed during normal run but executed when sys.settrace during import itself.
import sys
_safe_super = super
def trace(frame, event, arg):
return trace
if len(sys.argv) > 1:
sys.settrace(trace)
class SuperClass(object):
def __init__(self):
super().__init__()
@property
def __class__(self):
return int
class SafeSuperClass(object):
def __init__(self):
_safe_super(SafeSuperClass, self).__init__()
@property
def __class__(self):
return int
print(isinstance(SuperClass(), int))
print(isinstance(SafeSuperClass(), int))
Running above code with trace and without trace
➜ cpython git:(master) ✗ ./python.exe /tmp/buz.py
True
True
➜ cpython git:(master) ✗ ./python.exe /tmp/buz.py 1
False
True
There is a test for the above in Lib/test/test_super.py at
Add a trace as below in test_super.py at the top and the test case fails
import sys
def trace(frame, event, arg):
return trace
sys.settrace(trace)
➜ cpython git:(master) ✗ ./python.exe Lib/test/test_super.py
....................F
======================================================================
FAIL: test_various___class___pathologies (__main__.TestSuper)
----------------------------------------------------------------------
Traceback (most recent call last):
File "Lib/test/test_super.py", line 100, in test_various___class___pathologies
self.assertEqual(x.__class__, 413)
AssertionError: <class '__main__.TestSuper.test_various___class___pathologies.<locals>.X'> != 413
----------------------------------------------------------------------
Ran 21 tests in 0.058s
FAILED (failures=1) | https://bugs.python.org/issue36606 | CC-MAIN-2021-49 | refinedweb | 331 | 52.05 |
ASP.NET MVC Tip #9 – Create a GridView View User Control
In this tip, I show you how to build an ASP.NET MVC View User Control that accepts a set of database records and renders the records in an HTML table automatically. The advantage of using a View User Control is that you can customize the rendering of particular columns.
In yesterday’s tip, I explained how you can create a new HTML helper that renders a set of database records in an HTML table. In other words, I showed one method for simulating a GridView control in ASP.NET MVC. In today’s tip, I am going to show you a second method of simulating a GridView.
In today’s tip, I explain how you can simulate a GridView control by using an ASP.NET MVC View User Control. An ASP.NET MVC View User Control is similar to an ASP.NET User Control with one important difference. Just like an ASP.NET MVC View, a View User Control can accept strongly typed view data. We are going to create a View User Control that accepts IEnumerable view data.
The GridView View User Control is contained in Listing 1.
Listing 1 – GridView.ascx (vb)
<%@ Control Language="VB" AutoEventWireup="false" CodeBehind="GridView.ascx.vb" Inherits="Tip9.GridView" %>
<%@ Import Namespace="System.Reflection" %>

<%-- Show the Headers --%>
<table class="gridView">
<thead>
<tr>
<% For Each prop As PropertyInfo In Me.Columns %>
    <th><%= prop.Name %></th>
<% Next %>
</tr>
</thead>

<%-- Show the Rows --%>
<tbody>
<% For Each row In Me.Rows %>
<tr class="<%= Me.FlipCssClass("item", "alternatingItem") %>">

    <%-- Show Each Column --%>
    <% For Each prop As PropertyInfo In Me.Columns %>
    <td>
        <% Dim typeCode = Type.GetTypeCode(prop.PropertyType) %>

        <%-- String Columns --%>
        <% If typeCode = TypeCode.String Then %>
            <%= GetColumnValue(row, prop.Name) %>
        <% End If %>

        <%-- DateTime Columns --%>
        <% If typeCode = TypeCode.DateTime Then %>
            <%= GetColumnValue(row, prop.Name, "{0:D}") %>
        <% End If %>

        <%-- Decimal Columns --%>
        <% If typeCode = TypeCode.Decimal Then %>
            <%= GetColumnValue(row, prop.Name, "{0:c}") %>
        <% End If %>

        <%-- Boolean Columns --%>
        <% If typeCode = TypeCode.Boolean Then %>
            <% If Me.GetColumnValue(row, prop.Name) = True Then %>
                <input type="checkbox" disabled="disabled" checked="checked" />
            <% Else %>
                <input type="checkbox" disabled="disabled" />
            <% End If %>
        <% End If %>

        <%-- Integer Columns --%>
        <% If typeCode = TypeCode.Int32 Then %>
            <%= GetColumnValue(row, prop.Name) %>
        <% End If %>

    </td>
    <% Next %>
</tr>
<% Next %>
</tbody>
</table>
Listing 1 – GridView.ascx (c#)
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="GridView.ascx.cs" Inherits="Tip9.Views.Home.GridView" %>
<%@ Import Namespace="System.Reflection" %>

<%-- Show the Headers --%>
<table class="gridView">
<thead>
<tr>
<% foreach (PropertyInfo prop in this.Columns) { %>
    <th><%= prop.Name %></th>
<% } %>
</tr>
</thead>

<%-- Show the Rows --%>
<tbody>
<% foreach (object row in this.Rows) { %>
<tr class="<%= this.FlipCssClass("item", "alternatingItem") %>">

    <%-- Show Each Column --%>
    <% foreach (PropertyInfo prop in this.Columns) { %>
    <td>
        <% var typeCode = Type.GetTypeCode(prop.PropertyType); %>

        <%-- String Columns --%>
        <% if (typeCode == TypeCode.String) { %>
            <%= GetColumnValue(row, prop.Name) %>
        <% } %>

        <%-- DateTime Columns --%>
        <% if (typeCode == TypeCode.DateTime) { %>
            <%= GetColumnValue(row, prop.Name, "{0:D}") %>
        <% } %>

        <%-- Decimal Columns --%>
        <% if (typeCode == TypeCode.Decimal) { %>
            <%= GetColumnValue(row, prop.Name, "{0:c}") %>
        <% } %>

        <%-- Boolean Columns --%>
        <% if (typeCode == TypeCode.Boolean) { %>
            <% if ((bool)(this.GetColumnValue(row, prop.Name))) { %>
                <input type="checkbox" disabled="disabled" checked="checked" />
            <% } else { %>
                <input type="checkbox" disabled="disabled" />
            <% } %>
        <% } %>

        <%-- Integer Columns --%>
        <% if (typeCode == TypeCode.Int32) { %>
            <%= GetColumnValue(row, prop.Name) %>
        <% } %>

    </td>
    <% } %>
</tr>
<% } %>
</tbody>
</table>
Notice that the GridView.ascx file contains two loops. The first loop iterates through the table headers and the second loop iterates through the table rows.
A series of IF statements are used to display a particular column. Depending on the type of column -- Integer, String, Decimal, DateTime, Boolean – a different template is used to display the column value. For example, in the case of a Boolean column, a checkbox is used to display the column value (see Figure 1). You can, of course, customize the appearance of any of these columns by modifying the HTML.
Figure 1 -- GridView
The code-behind file for the GridView View User Control is contained in Listing 2. Notice that the View User Control derives from the generic ViewUserControl base class, typed to accept IEnumerable view data. It also exposes several utility properties and methods. For example, the Columns property returns information about all of the database table columns (this information is retrieved through reflection). The Rows property returns all of the database table rows.
Listing 2 – GridView.ascx.vb (vb)
Imports System.Collections
Imports System.Reflection

Partial Public Class GridView
    Inherits System.Web.Mvc.ViewUserControl(Of IEnumerable)

    Protected ReadOnly Property Columns() As PropertyInfo()
        Get
            Dim e As IEnumerator = ViewData.Model.GetEnumerator()
            e.MoveNext()
            Dim firstRow As Object = e.Current
            If firstRow Is Nothing Then
                Throw New Exception("No data passed to GridView User Control.")
            End If
            Return firstRow.GetType().GetProperties()
        End Get
    End Property

    Protected ReadOnly Property Rows() As IEnumerable
        Get
            Return ViewData.Model
        End Get
    End Property

    Protected Function GetColumnValue(ByVal row As Object, ByVal columnName As String) As Object
        Return DataBinder.Eval(row, columnName)
    End Function

    Protected Function GetColumnValue(ByVal row As Object, ByVal columnName As String, ByVal format As String) As Object
        Return DataBinder.Eval(row, columnName, format)
    End Function

    Dim flip As Boolean = False

    Protected Function FlipCssClass(ByVal className As String, ByVal alternativeClassName As String) As String
        flip = Not flip
        Return If(flip, className, alternativeClassName)
    End Function

End Class
Listing 2 – GridView.ascx.cs (c#)
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.Mvc;
using System.Reflection;

namespace Tip9.Views.Home
{
    public partial class GridView : System.Web.Mvc.ViewUserControl<IEnumerable>
    {
        protected PropertyInfo[] Columns
        {
            get
            {
                var e = ViewData.Model.GetEnumerator();
                e.MoveNext();
                object firstRow = e.Current;
                if (firstRow == null)
                {
                    throw new Exception("No data passed to GridView User Control.");
                }
                return firstRow.GetType().GetProperties();
            }
        }

        protected IEnumerable Rows
        {
            get { return ViewData.Model; }
        }

        protected object GetColumnValue(object row, string columnName)
        {
            return DataBinder.Eval(row, columnName);
        }

        protected object GetColumnValue(object row, string columnName, string format)
        {
            return DataBinder.Eval(row, columnName, format);
        }

        bool flip = false;
        protected string FlipCssClass(string className, string alternativeClassName)
        {
            flip = !flip;
            return flip ? className : alternativeClassName;
        }
    }
}
You can use the GridView control in a View Page by calling the Html.RenderUserControl() method. The View Page in Listing 3 renders the GridView View User Control. Notice that the page contains a CSS Style Sheet. This Style Sheet is used to customize the appearance of the table rendered by the GridView. For example, the alternating CSS class is used to format alternating GridView rows.
Listing 3 – Index.aspx (vb)
1: <%@ Page Language="VB" AutoEventWireup="false" CodeBehind="Index.aspx.vb" Inherits="Tip9.Index" %>
2:
3: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
4:
5: <html xmlns="" >
6: <head id="Head1">
Listing 3 – Index.aspx (C#)
1: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="Tip9.Views.Home.Index" %>
2:
3: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
4:
5: <html xmlns="" >
6: <head>
The view data is supplied by the controller in Listing 4. This controller happens to use a LINQ to SQL query to retrieve the database data. However, the GridView is perfectly compatible with data retrieved through ADO.NET, NHibernate, or whatever. The GridView expects IEnumerable data. As long as you pass the GridView something that is IEnumerable, it will be happy.
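The only assumption the control makes about that IEnumerable is that the first row's type exposes public properties. The same reflection trick can be sketched outside of a user control; the `Movie` row type and `GridHelper` class below are hypothetical stand-ins, not part of the tip's code:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical row type standing in for a LINQ to SQL entity.
public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class GridHelper
{
    // Mirrors the GridView control's Columns property: inspect the first
    // row of an IEnumerable and return its property names via reflection.
    public static string[] GetColumnNames(IEnumerable rows)
    {
        IEnumerator e = rows.GetEnumerator();
        if (!e.MoveNext() || e.Current == null)
            throw new Exception("No data passed to GridView User Control.");
        PropertyInfo[] props = e.Current.GetType().GetProperties();
        List<string> names = new List<string>();
        foreach (PropertyInfo p in props)
            names.Add(p.Name);
        return names.ToArray();
    }
}
```

Because the inspection happens at runtime against whatever row type shows up first, the same helper works whether the rows come from LINQ to SQL, a DataReader loop, or a plain in-memory list.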
Listing 4 -- HomeController.vb (vb)
1: Public Class HomeController
2: Inherits System.Web.Mvc.Controller
3:
4: Private _db As New MovieDataContext()
5:
6: Function Index() As ActionResult
7: Return View(_db.Movies)
8: End Function
9:
10: End Class
Listing 4 -- HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Web;
5: using System.Web.Mvc;
6: using Tip9.Models;
7:
8: namespace Tip9.Controllers
9: {
10: public class HomeController : Controller
11: {
12: private MovieDataContext _db = new MovieDataContext();
13:
14:
15: public ActionResult Index()
16: {
17: return View(_db.Movies);
18: }
19: }
20: }
I prefer the method of displaying a grid of database data described in this blog entry over the method described in yesterday’s tip. Unlike the method used in yesterday’s tip, today’s method enables you to completely customize the appearance of the GridView.
You can download the code for the GridView by clicking the following link. The download includes the code in both C# and VB.NET versions.
ASP.NET MVC Tip #8 – Create an ASP.NET MVC GridView Helper Method
In this tip, you learn how to extend the ASP.NET MVC framework with a new helper method that displays an HTML table of database data.
Currently, the ASP.NET MVC framework does not include anything that is the direct equivalent of the ASP.NET Web Forms GridView control. If you want to display a table of database data then you must write out all of the HTML and inline script each and every time that you want to display the data. In this tip, I show you how to add a GridView() extension method to the HtmlHelper class.
An extension method is a method added to one class by another class. You can use extension methods to give existing classes additional super powers. In our case, we want to give the HtmlHelper class, the class that you use in an MVC View Page, a new GridView() method that renders an HTML table of database data.
You create an extension method in slightly different ways when working with Visual Basic .NET and when working with C#. You create an extension method with Visual Basic .NET by creating a module and decorating functions in the module with the <Extension> attribute. You create an extension method with C# by creating a static class and using the keyword this with the first parameter of each extension method exposed by the static class.
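As a minimal illustration of the C# mechanics described above (a static class, with `this` on the first parameter), here is a toy extension method. The `StringExtensions` class and `Shout` method are invented for this sketch; they are not part of the tip's code:

```csharp
using System;

// A top-level static class is required to host extension methods.
public static class StringExtensions
{
    // The "this" modifier on the first parameter turns a plain static
    // method into an extension method on string.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}
```

Once the class is in scope, `"hello".Shout()` reads as if `Shout()` were an instance method of `string`, which is exactly how `Html.GridView(...)` will read on `HtmlHelper`.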
The code for the GridView() extension methods is contained in Listing 1.
Listing 1 – GridExtensions.vb (VB)
1: Imports System
2: Imports System.Text
3: Imports System.Collections.Generic
4: Imports System.Linq
5: Imports System.Data.Linq.Mapping
6: Imports System.Data.Linq
7: Imports System.Web.UI
8: Imports System.Web.Mvc
9: Imports System.Web
10: Imports System.Runtime.CompilerServices
11:
12:
13: Namespace Helpers
14:
15: Public Module GridExtensions
16:
17: <Extension()> _
18: Public Function GridView(ByVal htmlHelper As HtmlHelper, ByVal table As ITable) As String
19: Return GridView(htmlHelper, table, Nothing, New GridViewOptions())
20: End Function
21:
22: <Extension()> _
23: Public Function GridView(ByVal htmlHelper As HtmlHelper, ByVal table As ITable, ByVal headers As String()) As String
24: Return GridView(htmlHelper, table, headers, New GridViewOptions())
25: End Function
26:
27: <Extension()> _
28: Public Function GridView(ByVal htmlHelper As HtmlHelper, ByVal table As ITable, ByVal includeLinks As Boolean) As String
29: Return GridView(htmlHelper, table, Nothing, includeLinks)
30: End Function
31:
32: <Extension()> _
33: Public Function GridView(ByVal htmlHelper As HtmlHelper, ByVal table As ITable, ByVal headers As String(), ByVal includeLinks As Boolean) As String
34: Dim options As New GridViewOptions()
35: If Not includeLinks Then
36: options.ShowViewButton = False
37: options.ShowEditButton = False
38: options.ShowDeleteButton = False
39: End If
40: Return GridView(htmlHelper, table, headers, options)
41: End Function
42:
43: <Extension()> _
44: Public Function GridView(ByVal htmlHelper As HtmlHelper, ByVal table As ITable, ByVal headers As String(), ByVal options As GridViewOptions) As String
45: ' Show edit column?
46: Dim showEditColumn As Boolean = options.ShowViewButton Or options.ShowEditButton Or options.ShowDeleteButton
47:
48: ' Get identity column name
49: Dim identityColumnName As String = GridExtensions.GetIdentityColumnName(table)
50:
51: ' Get column names and headers
52: Dim columnNames = GridExtensions.GetColumnNames(table)
53: If IsNothing(headers) Then
54: headers = columnNames
55: End If
56:
57: ' Open table
58: Dim sb As New StringBuilder()
59: sb.AppendLine("<table>")
60:
61: ' Create Header Row
62: sb.AppendLine("<thead>")
63: sb.AppendLine("<tr>")
64: If showEditColumn Then
65: sb.Append("<th></th>")
66: End If
67: For Each header As String In headers
68: sb.AppendFormat("<th>{0}</th>", header)
69: Next
70: sb.AppendLine("</tr>")
71: sb.AppendLine("</thead>")
72:
73: ' Create Data Rows
74: sb.AppendLine("<tbody>")
75: Dim row As Object
76: For Each row In table
77: sb.AppendLine("<tr>")
78: If showEditColumn Then
79: Dim identityValue As Integer = CType(DataBinder.GetPropertyValue(row, identityColumnName), Integer)
80: sb.Append("<td><small>")
81: If (options.ShowViewButton) Then
82: sb.Append(htmlHelper.ActionLink(options.ViewButtonText, options.ViewAction, New With {.Id = identityValue}))
83: End If
84: sb.Append(" ")
85: If options.ShowEditButton Then
86: sb.Append(htmlHelper.ActionLink(options.EditButtonText, options.EditAction, New With {.Id = identityValue}))
87: sb.Append(" ")
88: End If
89: If options.ShowDeleteButton Then
90: sb.Append(htmlHelper.ActionLink(options.DeleteButtonText, options.DeleteAction, New With {.Id = identityValue}))
91: End If
92: sb.Append("</small></td>")
93: End If
94: For Each columnName As String In columnNames
95: Dim value As String = DataBinder.GetPropertyValue(row, columnName).ToString()
96: sb.AppendFormat("<td>{0}</td>", HttpUtility.HtmlEncode(value))
97: Next
98: sb.AppendLine("</tr>")
99: Next
100: sb.AppendLine("</tbody>")
101:
102: sb.AppendLine("</table>")
103: Return sb.ToString()
104: End Function
105:
106: Public Function GetColumnNames(ByVal table As ITable) As String()
107: Return table.Context.Mapping.GetMetaType(table.ElementType).PersistentDataMembers.Select(Function(m) m.Name).ToArray()
108: End Function
109:
110: Public Function GetIdentityColumnName(ByVal table As ITable) As String
111: Return table.Context().Mapping().GetMetaType(table.ElementType).DBGeneratedIdentityMember().Name
112: End Function
113: End Module
114:
115: End Namespace
Listing 1 – GridExtensions.cs (C#)
1: using System;
2: using System.Text;
3: using System.Collections.Generic;
4: using System.Linq;
5: using System.Data.Linq.Mapping;
6: using System.Data.Linq;
7: using System.Web.UI;
8: using System.Web.Mvc;
9: using System.Web;
10:
11: namespace Tip8.Helpers
12: {
13: public static class GridExtensions
14: {
15:
16: public static string GridView(this HtmlHelper htmlHelper, ITable table)
17: {
18: return GridView(htmlHelper, table, null, new GridViewOptions());
19: }
20:
21: public static string GridView(this HtmlHelper htmlHelper, ITable table, string[] headers)
22: {
23: return GridView(htmlHelper, table, headers, new GridViewOptions());
24: }
25:
26: public static string GridView(this HtmlHelper htmlHelper, ITable table, bool includeLinks)
27: {
28: return GridView(htmlHelper, table, null, includeLinks);
29: }
30:
31: public static string GridView(this HtmlHelper htmlHelper, ITable table, string[] headers, bool includeLinks)
32: {
33: var options = new GridViewOptions();
34: if (!includeLinks)
35: {
36: options.ShowViewButton = false;
37: options.ShowEditButton = false;
38: options.ShowDeleteButton = false;
39: }
40: return GridView(htmlHelper, table, headers, options);
41: }
42:
43: public static string GridView(this HtmlHelper htmlHelper, ITable table, string[] headers, GridViewOptions options)
44: {
45: // Show edit column?
46: bool showEditColumn = options.ShowViewButton || options.ShowEditButton || options.ShowDeleteButton;
47:
48: // Get identity column name
49: string identityColumnName = GridExtensions.GetIdentityColumnName(table);
50:
51: // Get column names and headers
52: var columnNames = GridExtensions.GetColumnNames(table);
53: if (headers == null)
54: headers = columnNames;
55:
56: // Open table
57: var sb = new StringBuilder();
58: sb.AppendLine("<table>");
59:
60: // Create Header Row
61: sb.AppendLine("<thead>");
62: sb.AppendLine("<tr>");
63: if (showEditColumn)
64: sb.Append("<th></th>");
65: foreach (String header in headers)
66: sb.AppendFormat("<th>{0}</th>", header);
67: sb.AppendLine("</tr>");
68: sb.AppendLine("</thead>");
69:
70: // Create Data Rows
71: sb.AppendLine("<tbody>");
72: foreach (Object row in table)
73: {
74: sb.AppendLine("<tr>");
75: if (showEditColumn)
76: {
77: int identityValue = (int)DataBinder.GetPropertyValue(row, identityColumnName);
78: sb.Append("<td><small>");
79: if (options.ShowViewButton)
80: {
81: sb.Append(htmlHelper.ActionLink(options.ViewButtonText, options.ViewAction, new { Id = identityValue }));
82: sb.Append(" ");
83: }
84: if (options.ShowEditButton)
85: {
86: sb.Append(htmlHelper.ActionLink(options.EditButtonText, options.EditAction, new { Id = identityValue }));
87: sb.Append(" ");
88: }
89: if (options.ShowDeleteButton)
90: {
91: sb.Append(htmlHelper.ActionLink(options.DeleteButtonText, options.DeleteAction, new { Id = identityValue }));
92: }
93: sb.Append("</small></td>");
94: }
95: foreach (string columnName in columnNames)
96: {
97: string value = DataBinder.GetPropertyValue(row, columnName).ToString();
98: sb.AppendFormat("<td>{0}</td>", HttpUtility.HtmlEncode(value));
99: }
100: sb.AppendLine("</tr>");
101: }
102: sb.AppendLine("</tbody>");
103:
104: sb.AppendLine("</table>");
105: return sb.ToString();
106: }
107:
108: public static string[] GetColumnNames(ITable table)
109: {
110: return table
111: .Context
112: .Mapping
113: .GetMetaType(table.ElementType)
114: .PersistentDataMembers.Select(m => m.Name)
115: .ToArray();
116: }
117:
118: public static string GetIdentityColumnName(ITable table)
119: {
120: return table
121: .Context
122: .Mapping
123: .GetMetaType(table.ElementType)
124: .DBGeneratedIdentityMember
125: .Name;
126: }
127: }
128:
129: }
Listing 1 contains multiple versions of the GridView() method. Each version of the GridView() method accepts a different set of parameters. For example, the first version of the GridView() method accepts a LINQ to SQL table and renders all of the columns and rows from the table. Other versions of the GridView() method enable you to customize the GridView headers and edit links.
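The overloads follow a common chaining pattern: every short overload fills in defaults and delegates to the one overload that does the real work. A stripped-down sketch of the pattern (all names and the string format are illustrative, not the GridView code itself):

```csharp
using System;

public static class Renderer
{
    // Short overload: supply a default header list.
    public static string Render(string table)
    {
        return Render(table, new[] { "Id" }, false);
    }

    // Middle overload: supply a default for the links flag.
    public static string Render(string table, string[] headers)
    {
        return Render(table, headers, false);
    }

    // The "real" implementation; every other overload ends up here.
    public static string Render(string table, string[] headers, bool showLinks)
    {
        return table + ":" + string.Join(",", headers) + (showLinks ? ":links" : "");
    }
}
```

Keeping the logic in one place is also why the bug fixed above (an overload accidentally passing `null` instead of its `headers` argument) is easy to make: each delegating overload must be careful to forward every parameter it received.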
The MVC view in Listing 2 demonstrates multiple ways of calling the GridView() method to display the contents of a database table.
Listing 2 – Index.aspx (VB)
1: <%@ Page Language="VB" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="false" CodeBehind="Index.aspx.vb" Inherits="Tip8.Index" %>
...
20: <%= Html.GridView(ViewData.Model, Nothing, New GridViewOptions With {.ViewButtonText = "Look", .ShowEditButton = False, .ShowDeleteButton = False})%>
21:
22:
23:
24: </asp:Content>
Listing 2 – Index.aspx (C#)
1: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="Tip8.Views.Home.Index" %>
...
20: <%= Html.GridView(ViewData.Model, null, new GridViewOptions { ViewButtonText = "Look", ShowEditButton=false, ShowDeleteButton=false } )%>
21:
22:
23:
24: </asp:Content>
The view in Listing 2 generates the HTML page displayed in Figure 1. The page contains four separate grids of data (the figure only shows the first three).
Figure 1 – The Index View
Notice that the ViewData.Model is passed to the GridView() helper method. The ViewData.Model represents a LINQ to SQL Table. The code-behind file for the Index view strongly types the model as a System.Data.Linq.ITable class. The model is passed to the view by the controller code in Listing 3.
Listing 3 – HomeController.vb (VB)
1: Public Class HomeController
2: Inherits System.Web.Mvc.Controller
3:
4: Private _db As New MovieDataContext()
5:
6: Function Index() As ActionResult
7: Return View(_db.Movies)
8: End Function
9: End Class
Listing 3 – HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Web;
5: using System.Web.Mvc;
6: using Tip8.Models;
7:
8: namespace Tip8.Controllers
9: {
10: public class HomeController : Controller
11: {
12: private MovieDataContext _db = new MovieDataContext();
13:
14: public ActionResult Index()
15: {
16: return View(_db.Movies);
17: }
18: }
19: }
I’m not completely happy with the GridView() helper method discussed in this tip. The problem with using an extension method is that it makes it difficult to customize the appearance of the columns in the GridView. For example, I would like to be able to format currency and date columns. Better yet, it would be nice if there were a way to have the equivalent of a template column. In tomorrow’s tip, I will explore an entirely different method of encapsulating a GridView when working with the ASP.NET MVC framework.
You can download the code for the GridView() helper method by clicking the following link. The download includes both Visual Basic .NET and C# versions of the code.
ASP.NET MVC Tip #7 – Prevent JavaScript Injection Attacks with Html.Encode.
ASP.NET MVC Tip #6 – Call RedirectToAction after Submitting a Form
End Function
Function Results()
Return View(_db.Surveys)
End Function
End Class
Listing 2 – Survey2Controller.vb
Public Class Survey2Controller
Inherits System.Web.Mvc.Controller

Function Results()
' Return Results view
Return View("Results", _db.Surveys)
End Function
End Class
ASP.NET MVC Tip #5 – Create Shared Views
If the Index.aspx view is not present in the Home folder, the ASP.NET MVC framework next attempts to retrieve the view from the Shared folder:
Namespace SharedViews
Partial Public Class Create
Inherits DataViewBase
Protected Function RenderCreateForm() As String
Dim sb As New StringBuilder()
Dim columnNames = Me.GetColumnNames()
Dim identityColumnName = Me.GetIdentityColumnName()
sb.AppendFormat("<form method='post' action='{0}'>", Me.Url.Action("New"))
sb.AppendLine("<table>")
' ... code that renders a form row for each column was elided here ...
sb.AppendLine("</table>")
sb.AppendLine("<input type='submit' value='Add Record' />")
sb.AppendLine("</form>")
Return sb.ToString()
End Function
End Class
End Namespace
ASP.NET MVC Tip #4 - Create a Custom Data Controller Base Class
In this tip, you learn how to create a custom controller base class that exposes actions for performing common database operations such as displaying, inserting, updating, and deleting data.
Whenever you write code and you discover that you are writing the same type of code over and over again, that is a good time to stop and consider whether you are wasting huge amounts of time. Yesterday, I discovered that I was in this very situation while building a database-driven ASP.NET MVC web application. I needed to perform the same standard set of database operations – display data, update data, insert data, delete data – for each of the database tables in my application. The dreadful prospect of having to write the exact same code over and over again inspired me to write today’s ASP.NET MVC tip of the day.
An MVC controller is just a class (a Visual Basic or C# class). Classes support inheritance. So, if you find yourself writing the exact same logic for your controller actions, it makes sense to write a new base class that contains the common set of actions. In this tip, we are going to create a base controller class that performs standard database operations. Keep in mind that you can create base controller classes for other types of common controller actions.
I created a base controller class named the DataController class. This class supports the following public methods:
- Index()
- Details()
- Create()
- New()
- Edit()
- Update()
- Delete()

The DataController class also exposes a set of protected utility methods. Since these methods are protected, they cannot be invoked through a URL. However, you can call any of these methods within your derived controller class. These are useful utility methods that you can call from a derived controller’s action methods.
Finally, the DataController class supports the following properties:
- DataContext – The LINQ to SQL data context.
- Table – The LINQ to SQL Table.
- IdentityColumnName – The name of the identity column contained in the database table.
These properties are also protected. You can use them from within your derived controller class, but they are not exposed as controller actions.
The DataController is a generic class. When you create a controller that derives from the DataController class, you must specify the type of database entity that the DataController class represents. The DataController class works with LINQ to SQL. Before you use the DataController class, you must first create your LINQ to SQL entities that represent your database objects.
For example, Listing 1 contains a HomeController class that derives from the DataController class. Notice that the Movie type is passed to the DataController class. The Movie class is a LINQ to SQL entity created with the Visual Studio Object Relational Designer.
Listing 1 – HomeController.vb (VB)
1: Imports System
2: Imports System.Collections.Generic
3: Imports System.Linq
4: Imports System.Web
5: Imports System.Web.Mvc
6:
7:
8: Namespace Tip4.Controllers
9: Public Class HomeController
10: Inherits DataController(Of Movie)
11:
12: ''' <summary>
13: ''' Show Movies in a Category
14: ''' </summary>
15: Public Function Category(ByVal Id As Integer) As ActionResult
16: Dim results = From m In Me.Table Where m.CategoryId = Id Select m
17: Return View(results)
18: End Function
19:
20: End Class
21: End Namespace
22:
Listing 1 – HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Web;
5: using System.Web.Mvc;
6:
7: using Tip4.Models;
8:
9: namespace Tip4.Controllers
10: {
11: public class HomeController : DataController<Movie>
12: {
13:
14: /// <summary>
15: /// Show Movies in a Category
16: /// </summary>
17: public ActionResult Category(int Id)
18: {
19: var results = from m in this.Table where m.CategoryId == Id select m;
20: return View(results);
21: }
22:
23:
24: }
25: }
Because the HomeController class derives from the DataController class, the HomeController class exposes Index(), Details(), Create(), New(), Edit(), Update(), and Delete() actions automatically. Because the Movie entity is passed to the DataController, the HomeController enables you to perform these actions against the Movies database table.
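The inheritance mechanics can be sketched without the MVC and LINQ to SQL dependencies. Everything below is illustrative, not the actual DataController API: a generic base class provides the shared operations, and a derived class only has to fix the entity type to inherit them all:

```csharp
using System;
using System.Collections.Generic;

// Generic base class: shared CRUD-style operations written once.
public abstract class DataControllerBase<T>
{
    // Stand-in for the LINQ to SQL Table property.
    protected readonly List<T> Table = new List<T>();

    public IEnumerable<T> Index() { return Table; }

    public void Create(T entity) { Table.Add(entity); }
}

// Hypothetical entity type.
public class Movie
{
    public string Title;
}

// The derived "controller" inherits Index() and Create() automatically,
// exactly the way HomeController inherits the DataController actions.
public class MovieController : DataControllerBase<Movie>
{
}
```

The payoff is the same as in the tip: each new table costs one nearly empty subclass instead of another copy of the same action methods.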
Before you use the DataController class, you must add a connection string named dataController to your application’s web.config file. You can copy the connection string generated by the Visual Studio Object Relational Designer and rename the connection string dataController.
You must still create a set of views to use the DataController class. You need to create the following set of views:
- Index.aspx
- Details.aspx
- Create.aspx
- Edit.aspx
In tomorrow's tip, I'll show you how to create these views just once for all of your controller classes by creating Shared Views. But, that is tomorrow's topic. Back to the subject of the DataController.
Unfortunately, the code for the DataController class is too long to paste into this blog entry. You can download the DataController, and try it out by experimenting with the sample project, by clicking the Download the Code link at the end of this blog entry.
The sample project contains the four views listed above. You can use the sample project to display, insert, update, and delete records from the Movies database table. For example, Figure 1 contains the page generated by the Index.aspx view.
Figure 1 – The Index.aspx View
My expectation and hope is that there will be hundreds of custom base controller classes created by developers actively working with the ASP.NET MVC framework when the framework has its final release. I can imagine base controller classes used in a number of different scenarios: authentication, shopping carts, product catalogs, and so on. Anytime that you need to include a standard set of actions in more than one application, it makes sense to create a new controller base class.

Download the Code
ASP.NET MVC Tip #3 – Provide Explicit View Names when Unit Testing
In this tip, Stephen Walther explains how you can unit test whether a controller action returns a particular view. He recommends that you be explicit about view names when you plan to create unit tests.
The ASP.NET MVC framework was designed to be a very testable framework. You can easily test an MVC controller action to determine whether the action returns the result that you expect. In this tip, I show you how to test whether a controller action returns a particular view.
Consider the MVC controller, named HomeController, in Listing 1. This controller contains an action named Index(). The Index() action returns a view. However, the name of the view is not provided. Instead, the name of the view is inferred from the name of the controller action. Therefore, when you call the Index() action, a view named Index is returned.
The HomeController contains a second action named Index2(). This second action also returns a view. However, in the second action, the name of the view is explicit. The name of the view is passed to the View() method. This second controller action does the same thing as the first controller action. However, in the case of the first controller action the view name is inferred and in the case of the second controller action the view name is explicit.
Listing 1 - HomeController.vb (VB)
1: Public Class HomeController
2: Inherits System.Web.Mvc.Controller
3:
4: Function Index() As ActionResult
5: Return View() ' view name inferred
6: End Function
7:
8: Function Index2() As ActionResult
9: Return View("Index") ' view name explicit
10: End Function
11:
12: End Class
Listing 1 - HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Web;
5: using System.Web.Mvc;
6:
7: namespace Tip3.Controllers
8: {
9: public class HomeController : Controller
10: {
11: public ActionResult Index()
12: {
13: return View(); // view name inferred
14: }
15:
16: public ActionResult Index2()
17: {
18: return View("Index"); // view name explicit
19: }
20:
21:
22: }
23: }
If you plan to create unit tests for your ASP.NET MVC application, then you should always be explicit about your view names. Otherwise, you won’t be able to test whether the right view was returned in your unit tests.
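The failure mode can be modeled in miniature. When the view name is inferred, the returned result carries an empty view name (the framework resolves it later, outside the test), so an assertion has nothing to check. `FakeViewResult` and `Demo` below are stand-ins invented for this sketch, not the real MVC types:

```csharp
using System;

// Stand-in for ViewResult: only the ViewName matters here.
public class FakeViewResult
{
    public string ViewName;
}

public static class Demo
{
    // Mirrors "return View()": the name is left empty at test time
    // because the framework infers it from the action name later.
    public static FakeViewResult Inferred()
    {
        return new FakeViewResult { ViewName = "" };
    }

    // Mirrors "return View("Index")": the name is known immediately.
    public static FakeViewResult Explicit()
    {
        return new FakeViewResult { ViewName = "Index" };
    }
}
```

A test asserting `ViewName == "Index"` passes only against the explicit version, which is the whole argument of this tip.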
The test class in Listing 2 contains two test methods. The first method tests the HomeController’s Index() action and the second method tests the HomeController’s Index2() action. The first test always fails and the second test always succeeds (see Figure 1).
Listing 2 - HomeControllerTest.vb (VB)
1: Imports System
2: Imports System.Collections.Generic
3: Imports System.Text
4: Imports System.Web.Mvc
5: Imports Microsoft.VisualStudio.TestTools.UnitTesting
6: Imports Tip3
7:
8: <TestClass()> Public Class HomeControllerTest
9:
10:
11: <TestMethod()> _
12: Public Sub Index()
13: ' Arrange
14: Dim controller As New HomeController()
15:
16: ' Act
17: Dim result As ViewResult = controller.Index()
18:
19: ' Assert
20: Assert.AreEqual("Index", result.ViewName)
21: End Sub
22:
23: <TestMethod()> _
24: Public Sub Index2()
25: ' Arrange
26: Dim controller As New HomeController()
27:
28: ' Act
29: Dim result As ViewResult = controller.Index2()
30:
31: ' Assert
32: Assert.AreEqual("Index", result.ViewName)
33: End Sub
34:
35:
36:
37: End Class
Listing 2 - HomeControllerTest.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Text;
5: using System.Web.Mvc;
6: using Microsoft.VisualStudio.TestTools.UnitTesting;
7: using Tip3;
8: using Tip3.Controllers;
9:
10: namespace Tip3Tests.Controllers
11: {
12: /// <summary>
13: /// Summary description for HomeControllerTest
14: /// </summary>
15: [TestClass]
16: public class HomeControllerTest
17: {
18: [TestMethod]
19: public void Index()
20: {
21: // Arrange
22: HomeController controller = new HomeController();
23:
24: // Act
25: ViewResult result = controller.Index() as ViewResult;
26:
27: // Assert
28: Assert.AreEqual("Index", result.ViewName);
29: }
30:
31:
32: [TestMethod]
33: public void Index2()
34: {
35: // Arrange
36: HomeController controller = new HomeController();
37:
38: // Act
39: ViewResult result = controller.Index2() as ViewResult;
40:
41: // Assert
42: Assert.AreEqual("Index", result.ViewName);
43: }
44:
45:
46: }
47: }
Figure 1 – Unit Test Results
A unit test cannot infer a view name. My recommendation is that you should always be explicit about your view names if you plan to unit test your application.
ASP.NET MVC Tip #2 - Create a custom Action Result that returns Microsoft Excel Documents
In this tip, I show you how to create a custom action result that you can return from an ASP.NET MVC controller action. This action result generates a Microsoft Excel Document from a LINQ to SQL query.
In an MVC application, a controller action returns an action result. In particular, it returns something that derives from the base ActionResult class such as:
· ViewResult
· EmptyResult
· RedirectResult
· RedirectToRouteResult
· JsonResult
· ContentResult
For example, you use a ViewResult to return a particular view to the browser and a ContentResult to return text content to the browser.
But, what if you want to return some other type of content to a browser such as an image, a PDF file, or a Microsoft Excel document? In these cases, you can create your own action result. In this tip, I show you how to create an action result that returns a Microsoft Excel document.
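Stripped of the ASP.NET types (ControllerContext, the Response object), the custom action result pattern boils down to subclassing a base type and overriding a single execute method. A hypothetical sketch of the pattern, writing tabular data as an HTML table to any TextWriter; the real ExcelResult in Listing 1 does the same thing against the HTTP response:

```csharp
using System;
using System.IO;

// Stand-in for ActionResult: one abstract execute method.
public abstract class ResultBase
{
    public abstract void Execute(TextWriter output);
}

// Stand-in for ExcelResult: renders rows of cells as an HTML table.
public class HtmlTableResult : ResultBase
{
    private readonly string[][] _rows;

    public HtmlTableResult(string[][] rows) { _rows = rows; }

    public override void Execute(TextWriter output)
    {
        output.Write("<table>");
        foreach (string[] row in _rows)
        {
            output.Write("<tr>");
            foreach (string cell in row)
                output.Write("<td>" + cell + "</td>");
            output.Write("</tr>");
        }
        output.Write("</table>");
    }
}
```

The design choice is the same either way: the controller action stays a one-liner that returns a result object, and all of the rendering and header-setting work lives in the result's execute method, where it can be reused by every action that needs it.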
The code for the ExcelResult is contained in Listing 1.
Listing 1 – ExcelResult.vb (VB)
1: Imports System
2: Imports System.Web.Mvc
3: Imports System.Data.Linq
4: Imports System.Collections
5: Imports System.IO
6: Imports System.Web.UI.WebControls
7: Imports System.Linq
8: Imports System.Web
9: Imports System.Web.UI
10: Imports System.Drawing
11:
12:
13: Namespace Tip2
14:
15: Public Class ExcelResult
16: Inherits ActionResult
17:
18: Private _dataContext As DataContext
19: Private _fileName As String
20: Private _rows As IQueryable
21: Private _headers() As String = Nothing
22:
23: Private _tableStyle As TableStyle
24: Private _headerStyle As TableItemStyle
25: Private _itemStyle As TableItemStyle
26:
27: Public ReadOnly Property FileName() As String
28: Get
29: Return _fileName
30: End Get
31: End Property
32:
33: Public ReadOnly Property Rows() As IQueryable
34: Get
35: Return _rows
36: End Get
37: End Property
38:
39:
40: Public Sub New(ByVal dataContext As DataContext, ByVal rows As IQueryable, ByVal fileName As String)
41: Me.New(dataContext, rows, fileName, Nothing, Nothing, Nothing, Nothing)
42: End Sub
43:
44: Public Sub New(ByVal dataContext As DataContext, ByVal fileName As String, ByVal rows As IQueryable, ByVal headers() As String)
45: Me.New(dataContext, rows, fileName, headers, Nothing, Nothing, Nothing)
46: End Sub
47:
48: Public Sub New(ByVal dataContext As DataContext, ByVal rows As IQueryable, ByVal fileName As String, ByVal headers() As String, ByVal tableStyle As TableStyle, ByVal headerStyle As TableItemStyle, ByVal itemStyle As TableItemStyle)
49: _dataContext = dataContext
50: _rows = rows
51: _fileName = fileName
52: _headers = headers
53: _tableStyle = tableStyle
54: _headerStyle = headerStyle
55: _itemStyle = itemStyle
56:
57: ' provide defaults
58: If _tableStyle Is Nothing Then
59: _tableStyle = New TableStyle()
60: _tableStyle.BorderStyle = BorderStyle.Solid
61: _tableStyle.BorderColor = Color.Black
62: _tableStyle.BorderWidth = Unit.Parse("2px")
63: End If
64: If _headerStyle Is Nothing Then
65: _headerStyle = New TableItemStyle()
66: _headerStyle.BackColor = Color.LightGray
67: End If
68: End Sub
69:
70: Public Overrides Sub ExecuteResult(ByVal context As ControllerContext)
71: ' Create HtmlTextWriter
72: Dim sw As StringWriter = New StringWriter()
73: Dim tw As HtmlTextWriter = New HtmlTextWriter(sw)
74:
75: ' Build HTML Table from Items
76: If Not _tableStyle Is Nothing Then
77: _tableStyle.AddAttributesToRender(tw)
78: End If
79: tw.RenderBeginTag(HtmlTextWriterTag.Table)
80:
81: ' Generate headers from table
82: If _headers Is Nothing Then
83: _headers = _dataContext.Mapping.GetMetaType(_rows.ElementType).PersistentDataMembers.Select(Function(m) m.Name).ToArray()
84: End If
85:
86:
87: ' Create Header Row
88: tw.RenderBeginTag(HtmlTextWriterTag.Thead)
89: For Each header As String In _headers
90: If Not _headerStyle Is Nothing Then
91: _headerStyle.AddAttributesToRender(tw)
92: End If
93: tw.RenderBeginTag(HtmlTextWriterTag.Th)
94: tw.Write(header)
95: tw.RenderEndTag()
96: Next
97: tw.RenderEndTag()
98:
99:
100:
101: ' Create Data Rows
102: tw.RenderBeginTag(HtmlTextWriterTag.Tbody)
103: For Each row As Object In _rows
104: tw.RenderBeginTag(HtmlTextWriterTag.Tr)
105: Dim header As String
106: For Each header In _headers
107: Dim strValue As String = row.GetType().GetProperty(header).GetValue(row, Nothing).ToString()
108: If Not _itemStyle Is Nothing Then
109: _itemStyle.AddAttributesToRender(tw)
110: End If
111: tw.RenderBeginTag(HtmlTextWriterTag.Td)
112: tw.Write(HttpUtility.HtmlEncode(strValue))
113: tw.RenderEndTag()
114: Next
115: tw.RenderEndTag()
116: Next
117: tw.RenderEndTag() ' tbody
118:
119: tw.RenderEndTag() ' table
120: WriteFile(_fileName, "application/ms-excel", sw.ToString())
121: End Sub
122:
123:
124:
125:
126: Private Shared Sub WriteFile(ByVal fileName As String, ByVal contentType As String, ByVal content As String)
127: Dim context As HttpContext = HttpContext.Current
128: context.Response.Clear()
129: context.Response.AddHeader("content-disposition", "attachment;filename=" + fileName)
130: context.Response.Charset = ""
131: context.Response.Cache.SetCacheability(HttpCacheability.NoCache)
132: context.Response.ContentType = contentType
133: context.Response.Write(content)
134: context.Response.End()
135: End Sub
136: End Class
137: End Namespace
138:
Listing 1 – ExcelResult.cs (C#)
1: using System;
2: using System.Web.Mvc;
3: using System.Data.Linq;
4: using System.Collections;
5: using System.IO;
6: using System.Web.UI.WebControls;
7: using System.Linq;
8: using System.Web;
9: using System.Web.UI;
10: using System.Drawing;
11:
12:
13: namespace Tip2
14: {
15: public class ExcelResult : ActionResult
16: {
17: private DataContext _dataContext;
18: private string _fileName;
19: private IQueryable _rows;
20: private string[] _headers = null;
21:
22: private TableStyle _tableStyle;
23: private TableItemStyle _headerStyle;
24: private TableItemStyle _itemStyle;
25:
26: public string FileName
27: {
28: get { return _fileName; }
29: }
30:
31: public IQueryable Rows
32: {
33: get { return _rows; }
34: }
35:
36:
37: public ExcelResult(DataContext dataContext, IQueryable rows, string fileName)
38: :this(dataContext, rows, fileName, null, null, null, null)
39: {
40: }
41:
   42:         public ExcelResult(DataContext dataContext, IQueryable rows, string fileName, string[] headers)
43: : this(dataContext, rows, fileName, headers, null, null, null)
44: {
45: }
46:
47: public ExcelResult(DataContext dataContext, IQueryable rows, string fileName, string[] headers, TableStyle tableStyle, TableItemStyle headerStyle, TableItemStyle itemStyle)
48: {
49: _dataContext = dataContext;
50: _rows = rows;
51: _fileName = fileName;
52: _headers = headers;
53: _tableStyle = tableStyle;
54: _headerStyle = headerStyle;
55: _itemStyle = itemStyle;
56:
57: // provide defaults
58: if (_tableStyle == null)
59: {
60: _tableStyle = new TableStyle();
61: _tableStyle.BorderStyle = BorderStyle.Solid;
62: _tableStyle.BorderColor = Color.Black;
63: _tableStyle.BorderWidth = Unit.Parse("2px");
64: }
65: if (_headerStyle == null)
66: {
67: _headerStyle = new TableItemStyle();
68: _headerStyle.BackColor = Color.LightGray;
69: }
70: }
71:
72: public override void ExecuteResult(ControllerContext context)
73: {
74: // Create HtmlTextWriter
75: StringWriter sw = new StringWriter();
76: HtmlTextWriter tw = new HtmlTextWriter(sw);
77:
78: // Build HTML Table from Items
79: if (_tableStyle != null)
80: _tableStyle.AddAttributesToRender(tw);
81: tw.RenderBeginTag(HtmlTextWriterTag.Table);
82:
83: // Generate headers from table
84: if (_headers == null)
85: {
86: _headers = _dataContext.Mapping.GetMetaType(_rows.ElementType).PersistentDataMembers.Select(m => m.Name).ToArray();
87: }
88:
89:
90: // Create Header Row
91: tw.RenderBeginTag(HtmlTextWriterTag.Thead);
92: foreach (String header in _headers)
93: {
94: if (_headerStyle != null)
95: _headerStyle.AddAttributesToRender(tw);
96: tw.RenderBeginTag(HtmlTextWriterTag.Th);
97: tw.Write(header);
98: tw.RenderEndTag();
99: }
100: tw.RenderEndTag();
101:
102:
103:
104: // Create Data Rows
105: tw.RenderBeginTag(HtmlTextWriterTag.Tbody);
106: foreach (Object row in _rows)
107: {
108: tw.RenderBeginTag(HtmlTextWriterTag.Tr);
109: foreach (string header in _headers)
110: {
111: string strValue = row.GetType().GetProperty(header).GetValue(row, null).ToString();
112: strValue = ReplaceSpecialCharacters(strValue);
113: if (_itemStyle != null)
114: _itemStyle.AddAttributesToRender(tw);
115: tw.RenderBeginTag(HtmlTextWriterTag.Td);
116: tw.Write( HttpUtility.HtmlEncode(strValue));
117: tw.RenderEndTag();
118: }
119: tw.RenderEndTag();
120: }
121: tw.RenderEndTag(); // tbody
122:
123: tw.RenderEndTag(); // table
124: WriteFile(_fileName, "application/ms-excel", sw.ToString());
125: }
126:
127:
128: private static string ReplaceSpecialCharacters(string value)
129: {
130: value = value.Replace("’", "'");
131: value = value.Replace("“", "\"");
132: value = value.Replace("”", "\"");
133: value = value.Replace("–", "-");
134: value = value.Replace("…", "...");
135: return value;
136: }
137:
138: private static void WriteFile(string fileName, string contentType, string content)
139: {
140: HttpContext context = HttpContext.Current;
141: context.Response.Clear();
142: context.Response.AddHeader("content-disposition", "attachment;filename=" + fileName);
143: context.Response.Charset = "";
144: context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
145: context.Response.ContentType = contentType;
146: context.Response.Write(content);
147: context.Response.End();
148: }
149: }
150: }
Every action result must inherit from the base ActionResult class. The ExcelResult class in Listing 1 does, in fact, inherit from the base ActionResult class. The base ActionResult class has one method that you must implement: the ExecuteResult() method. The ExecuteResult() method is called to generate the content created by the action result.
In Listing 1, the ExecuteResult() method is used to generate the Excel document from a Linq to SQL query. The ExecuteResult() method calls the WriteFile() method to write the finished Excel document to the browser with the correct MIME type.
Normally, you do not return an action result from a controller action directly. Instead, you take advantage of one of the methods of the Controller class:
· View()
· Redirect()
· RedirectToAction()
· RedirectToRoute()
· Json()
· Content()
For example, if you want to return a view from a controller action, you don’t return a ViewResult. Instead, you call the View() method. The View() method instantiates a new ViewResult and returns the new ViewResult to the browser.
The code in Listing 2 consists of three extension methods that are applied to the Controller class. These extension methods add a new method named Excel() to the Controller class. The Excel() method returns an ExcelResult.
Listing 2 –ExcelControllerExtensions.vb (VB)
1: Imports System
2: Imports System.Web.Mvc
3: Imports System.Data.Linq
4: Imports System.Collections
5: Imports System.Web.UI.WebControls
6: Imports System.Linq
7: Imports System.Runtime.CompilerServices
8:
9: Namespace Tip2
10: Public Module ExcelControllerExtensions
11:
12: <Extension()> _
13: Function Excel(ByVal controller As Controller, ByVal dataContext As DataContext, ByVal rows As IQueryable, ByVal fileName As String) As ActionResult
   14:         Return New ExcelResult(dataContext, rows, fileName, Nothing, Nothing, Nothing, Nothing)
15: End Function
16:
17: <Extension()> _
18: Function Excel(ByVal controller As Controller, ByVal dataContext As DataContext, ByVal rows As IQueryable, ByVal fileName As String, ByVal headers As String()) As ActionResult
19: Return New ExcelResult(dataContext, rows, fileName, headers, Nothing, Nothing, Nothing)
20: End Function
21:
22: <Extension()> _
23: Function Excel(ByVal controller As Controller, ByVal dataContext As DataContext, ByVal rows As IQueryable, ByVal fileName As String, ByVal headers As String(), ByVal tableStyle As TableStyle, ByVal headerStyle As TableItemStyle, ByVal itemStyle As TableItemStyle) As ActionResult
24: Return New ExcelResult(dataContext, rows, fileName, headers, tableStyle, headerStyle, itemStyle)
25: End Function
26:
27: End Module
28: End Namespace
29:
Listing 2 –ExcelControllerExtensions.cs (C#)
1: using System;
2: using System.Web.Mvc;
3: using System.Data.Linq;
4: using System.Collections;
5: using System.Web.UI.WebControls;
6: using System.Linq;
7:
8: namespace Tip2
9: {
10: public static class ExcelControllerExtensions
11: {
12:
13: public static ActionResult Excel
14: (
15: this Controller controller,
16: DataContext dataContext,
17: IQueryable rows,
18: string fileName
19: )
20: {
21: return new ExcelResult(dataContext, rows, fileName, null, null, null, null);
22: }
23:
24: public static ActionResult Excel
25: (
26: this Controller controller,
27: DataContext dataContext,
28: IQueryable rows,
29: string fileName,
30: string[] headers
31: )
32: {
33: return new ExcelResult(dataContext, rows, fileName, headers, null, null, null);
34: }
35:
36: public static ActionResult Excel
37: (
38: this Controller controller,
39: DataContext dataContext,
40: IQueryable rows,
41: string fileName,
42: string[] headers,
43: TableStyle tableStyle,
44: TableItemStyle headerStyle,
45: TableItemStyle itemStyle
46: )
47: {
48: return new ExcelResult(dataContext, rows, fileName, headers, tableStyle, headerStyle, itemStyle);
49: }
50:
51: }
52: }
The controller in Listing 3 illustrates how you can use the Excel() extension method within a controller. This controller includes three methods named GenerateExcel1(), GenerateExcel2(), and GenerateExcel3(). All three of the controller action methods return an Excel document by generating the document from the Movies database table.
Listing 3 – HomeController.vb (VB)
1: Imports System
2: Imports System.Collections.Generic
3: Imports System.Linq
4: Imports System.Data.Linq
5: Imports System.Data.Linq.Mapping
6: Imports System.Web.UI.WebControls
7: Imports System.Web
8: Imports System.Web.Mvc
9: Imports Tip2
10:
11: Namespace Tip2.Controllers
12: Public Class HomeController
13: Inherits Controller
14:
15: Private db As New MovieDataContext()
16:
17: Public Function Index() As ActionResult
18: Return View()
19: End Function
20:
21: ''' <summary>
22: ''' Generates Excel document using headers grabbed from property names
23: ''' </summary>
24: Public Function GenerateExcel1() As ActionResult
25: Return Me.Excel(db, db.Movies, "data.xls")
26: End Function
27:
28: ''' <summary>
29: ''' Generates Excel document using supplied headers
30: ''' </summary>
31: Public Function GenerateExcel2() As ActionResult
32: Dim rows = From m In db.Movies Select New With {.Title = m.Title, .Director = m.Director}
33:
34: Return Me.Excel(db, rows, "data.xls", New String() {"Title", "Director"})
35: End Function
36:
37:
38: ''' <summary>
39: ''' Generates Excel document using supplied headers and using supplied styles
40: ''' </summary>
41: Public Function GenerateExcel3() As ActionResult
42: Dim rows = From m In db.Movies Select New With {.Title = m.Title, .Director = m.Director}
43:
44: Dim headerStyle As New TableItemStyle()
45: headerStyle.BackColor = System.Drawing.Color.Orange
46: Return Me.Excel(db, rows, "data.xls", New String() {"Title", "Director"}, Nothing, headerStyle, Nothing)
47: End Function
48:
49:
50: End Class
51: End Namespace
Listing 3 – HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Data.Linq;
5: using System.Data.Linq.Mapping;
6: using System.Web.UI.WebControls;
7: using System.Web;
8: using System.Web.Mvc;
9: using Tip2.Models;
10: using Tip2;
11:
12: namespace Tip2.Controllers
13: {
14: public class HomeController : Controller
15: {
16:
17: private MovieDataContext db = new MovieDataContext();
18:
19: public ActionResult Index()
20: {
21: return View();
22: }
23:
24: /// <summary>
25: /// Generates Excel document using headers grabbed from property names
26: /// </summary>
27: public ActionResult GenerateExcel1()
28: {
29: return this.Excel(db, db.Movies, "data.xls");
30: }
31:
32: /// <summary>
33: /// Generates Excel document using supplied headers
34: /// </summary>
35: public ActionResult GenerateExcel2()
36: {
37: var rows = from m in db.Movies select new {Title=m.Title, Director=m.Director};
38: return this.Excel(db, rows, "data.xls", new[] { "Title", "Director" });
39: }
40:
41: /// <summary>
42: /// Generates Excel document using supplied headers and using supplied styles
43: /// </summary>
44: public ActionResult GenerateExcel3()
45: {
46: var rows = from m in db.Movies select new { Title = m.Title, Director = m.Director };
47: var headerStyle = new TableItemStyle();
48: headerStyle.BackColor = System.Drawing.Color.Orange;
49: return this.Excel(db, rows, "data.xls", new[] { "Title", "Director" }, null, headerStyle, null);
50: }
51:
52:
53: }
54: }
Finally, the Index.aspx view in Listing 4 demonstrates how you can call the GenerateExcel() controller actions to generate the Excel documents. Notice the three links to the three different versions of GenerateExcel.
Listing 4 – Index.aspx
1: <%@ Page Language="VB" AutoEventWireup="false" CodeBehind="Index.aspx.vb" Inherits="Tip2.Index" %>
    2: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
3:
    4: <html xmlns="http://www.w3.org/1999/xhtml" >
5: <head id="Head1" runat="server">
6: <title>Index Page</title>
7: <style type="text/css">
8:
9: li
10: {
11: margin-bottom: 5px;
12: }
13:
14: </style>
15: </head>
16: <body>
17: <div>
18:
19: <h1>Generate Microsoft Excel Document</h1>
20:
21:
22: <ul>
23: <li>
24: <a href="/Home/GenerateExcel1">Generate</a> - Generates an Excel document by using the entity property names for column headings and the default
25: formatting.
26: </li>
27: <li>
28: <a href="/Home/GenerateExcel2">Generate</a> - Generates an Excel document by using supplied header names and default formatting.
29: </li>
30: <li>
31: <a href="/Home/GenerateExcel3">Generate</a> - Generates an Excel document by using supplied header names and supplied formatting.
32: </li>
33:
34: </ul>
35:
36:
37:
38:
39: </div>
40: </body>
41: </html>
When you open the Index view, you see the page in Figure 1.
Figure 1 – The Index.aspx View
When you click one of the Generate Excel links, you get different Excel documents. For example, after you click on the first link, you get the Excel document in Figure 2.
Figure 2 – Data.xls
One disappointing note. When you click a link to generate the Excel document, you receive the warning in Figure 3. Unfortunately, there is no way around displaying this warning (to learn more about this warning, see).
Figure 3 - Warning from Microsoft Internet Explorer
You can follow the same approach discussed in this tip to create other types of action results. For example, you can create image action results, Microsoft Word action results, or PDF action results.
ASP.NET MVC Tip #1 - Create New HTML Helpers with Extension Methods
In this tip, I show you how you can create two new HTML Helpers that you can use within an ASP.NET MVC View. I show you how you can use extension methods to create new HTML Helpers for displaying bulleted and numbered lists.
When building a View for an ASP.NET MVC application, you can take advantage of HTML Helpers to render standard HTML tags. For example, instead of typing this:
<input name="inpSubmit" type="submit" value="Click Here!" />
You can type this:
<%= Html.SubmitButton("inpSubmit", "Click Here!") %>
Over the long run, HTML Helpers can save you a lot of time. But what if there isn’t an HTML Helper for a tag that you want to render? For example, imagine that you want to display a bulleted list of database records in a View. The HtmlHelper class doesn’t include a method that lets you render a bulleted list. Don’t give up. If the HtmlHelper class doesn't include a method that you need, just extend it!
You can add new functionality to the HtmlHelper class by creating new extension methods. An extension method looks just like a normal instance method. However, unlike a normal instance method, you add extension methods to a class by defining the methods in a completely different class.
In Visual Basic .NET, you create extension methods by creating a module and decorating the extension methods with a special attribute. In C#, you define extension methods in a static class and use the keyword this to indicate the class being extended.
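As a minimal, self-contained illustration of the C# extension-method mechanics just described (the names here are invented for the example and are unrelated to the helper classes below):

```csharp
using System;

static class StringExtensions
{
    // The 'this' modifier on the first parameter turns this static
    // method into an extension method on the string type.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

class Program
{
    static void Main()
    {
        // Called as if it were an instance method of string:
        Console.WriteLine("hello".Shout());  // prints HELLO!
    }
}
```

The HtmlHelper extension methods in Listing 1 work exactly the same way: static methods whose first parameter is `this HtmlHelper helper`.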
Here’s how you can add extension methods to the HtmlHelper class to display both ordered and unordered list of database records:
Listing 1 – ListExtensions.vb (VB.NET)
1: Imports System
2: Imports System.Collections
3: Imports System.Text
4: Imports System.Web
5: Imports System.Web.Mvc
6: Imports System.Runtime.CompilerServices
7:
8:
9: Namespace HtmlHelpers
10:
11: Public Module ListExtensions
12:
13: <Extension()> _
14: Public Function OrderedList(ByVal HtmlHelper As HtmlHelper, ByVal items As Object) As String
15: Return "<ol>" + ListExtensions.GetListItems(items) + "</ol>"
16: End Function
17:
18: <Extension()> _
19: Public Function UnorderedList(ByVal HtmlHelper As HtmlHelper, ByVal items As Object) As String
20: Return "<ul>" + ListExtensions.GetListItems(items) + "</ul>"
21: End Function
22:
23:
24: Private Function GetListItems(ByVal items As Object) As String
25: If items Is Nothing Then
26: Throw New ArgumentNullException("items")
27: End If
28: If Not TypeOf items Is IEnumerable Then
29: Throw New InvalidCastException("items must be IEnumerable")
30: End If
31:
32: Dim EnumItems As IEnumerable = CType(items, IEnumerable)
33: Dim builder As New StringBuilder()
34: For Each item As Object In EnumItems
35: builder.AppendFormat("<li>{0}</li>", HttpUtility.HtmlEncode(item.ToString()))
36: Next
37: Return builder.ToString()
38: End Function
39:
40: End Module
41: End Namespace
Listing 1 – ListExtensions.cs (C#)
1: using System;
2: using System.Collections;
3: using System.Text;
4: using System.Web;
5: using System.Web.Mvc;
6:
7: namespace BulletedListHelper.HtmlHelpers
8: {
9: public static class ListExtensions
10: {
11: public static string OrderedList(this HtmlHelper helper, Object items)
12: {
13: return "<ol>" + ListExtensions.GetListItems(items) + "</ol>";
14: }
15:
16: public static string UnorderedList(this HtmlHelper helper, Object items)
17: {
18: return "<ul>" + ListExtensions.GetListItems(items) + "</ul>";
19: }
20:
21:
22: private static string GetListItems(Object items)
23: {
24: if (items == null)
25: throw new ArgumentNullException("items");
26: if (items is IEnumerable == false)
27: throw new InvalidCastException("items must be IEnumerable");
28:
29: var enumItems = (IEnumerable)items;
30: var builder = new StringBuilder();
31: foreach (Object item in enumItems)
32: builder.AppendFormat("<li>{0}</li>", HttpUtility.HtmlEncode(item.ToString()));
33: return builder.ToString();
34: }
35:
36: }
37: }
The ListExtensions class has two public methods: OrderedList() and UnorderedList(). You pass a collection of items to either method to display either a numbered or bulleted list of items. Notice that these methods return strings. Really, an HTML Helper method is nothing more than a method that renders a formatted string to the browser.
After you create the extension methods, you can use the methods in a View like this:
Listing 2 – Index.aspx
1: <%@ Page Language="VB" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="false" CodeBehind="Index.aspx.vb" Inherits="BulletedListHelper.Index" %>
2: <%@ Import Namespace="BulletedListHelper.HtmlHelpers" %>
3:
4: <asp:Content
5:
6:
7: <h1>Movies (Ordered)</h1>
8:
9: <%= Html.OrderedList(ViewData.Model) %>
10:
11:
12: <h1>Movies (Unordered)</h1>
13:
14:
15: <%= Html.UnorderedList(ViewData.Model) %>
16:
17:
18: </asp:Content>
Notice that the BulletedListHelper.HtmlHelpers namespace gets imported at the top of the file. The method Html.OrderedList() is used to render a numbered list and the method Html.UnorderedList() is used to render a bulleted list. Notice that these methods are being called on the HtmlHelper exposed by the Html property of the View just like any other extension method. When you open this View in a browser, you get the page in Figure 1:
Figure 1 – Index.aspx Rendered with Custom HTML Helpers
Finally, the Index() method exposed by the HomeController in Listing 3 illustrates how you can pass a collection of movie records to the Index.aspx View. The movie records are retrieved by taking advantage of a Linq to SQL query.
Listing 3 – HomeController.vb (VB.NET)
1: Public Class HomeController
2: Inherits System.Web.Mvc.Controller
3:
4: Private db As New MoviesDataContext()
5:
    6:     Function Index() As ActionResult
7: Dim movies = From m In db.Movies Select m.Title
8: Return View(movies)
9: End Function
10:
11:
12: End Class
Listing 3 – HomeController.cs (C#)
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Web;
5: using System.Web.Mvc;
6: using BulletedListHelper.Models;
7:
8: namespace BulletedListHelper.Controllers
9: {
10: public class HomeController : Controller
11: {
12:
13: private MoviesDataContext db = new MoviesDataContext();
14:
15: public ActionResult Index()
16: {
17: var movies = from m in db.Movies select m.Title;
18: return View(movies);
19: }
20:
21: }
22: }
You can use this approach to render just about anything within an ASP.NET MVC View. For example, you can use a similar approach to create TreeViews, Menus, tabstrips, whatever.
TDD : Introduction to Moq
In this post, I provide an introduction to Moq which is the newest of the Mock Object Frameworks. Moq is promoted by its creators as easier to learn and use than other Mock Object Frameworks such as Rhino Mocks and TypeMock Isolator.
Moq takes advantage of recent VB.NET and C# language features such as lambdas and generics.
Some Background, Some Philosophy, and Some Controversy
Installing and Setting Up Moq
What Can Be Mocked?
Mocking Methods and Properties:
    1: // Mock product repository
    2: var productRepository = new Mock<IProductRepository>();
    3: productRepository
    4:     .Expect(p => p.Get(It.Is<int>(id => id>0 && id<6)))
    5:     .Returns(newProduct.Object);
This code returns the newProduct object only when the id parameter passed to the Get() method has a value between 0 and 6. This constraint is specified within a lambda expression passed to the It.Is() method.
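To make the snippet above self-contained, the sketch below fills in hypothetical Product and IProductRepository types (stand-ins for the post's model classes, which are not shown here). It assumes the Moq API as it existed when this post was written; in later Moq releases the Expect() method was renamed Setup().

```csharp
using Moq;

// Hypothetical stand-ins for the post's model types.
public class Product { }

public interface IProductRepository
{
    Product Get(int id);
}

public class MockingExample
{
    public static void Run()
    {
        var newProduct = new Mock<Product>();
        var productRepository = new Mock<IProductRepository>();

        // The expectation is satisfied only when the id argument
        // matches the It.Is<int>() constraint.
        productRepository
            .Expect(p => p.Get(It.Is<int>(id => id > 0 && id < 6)))
            .Returns(newProduct.Object);

        // An id of 3 satisfies the constraint, so the mock product is returned.
        Product product = productRepository.Object.Get(3);

        // VerifyAll() throws if a declared expectation was never exercised.
        productRepository.VerifyAll();
    }
}
```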
Moq is Mockist Friendly (At Least a Little Bit)
The post then walks through a full example: a concrete ProductRepository class implementing IProductRepository and a ProductCache class, both in the MoqSamples.Models namespace, followed by a ProductTest class in the MoqSamplesTests namespace. ProductTest is an MSTest test class (it imports Microsoft.VisualStudio.TestTools.UnitTesting and Moq) whose TestCache() method verifies the caching behavior against a mocked repository.
Conclusion.
I’m Now Working at Microsoft
It’s official, I’m now working at Microsoft. I have an office in building 42 and they have supplied me with a phone, an email account, and a computer. I even have a whiteboard in case inspiration hits me (it currently is blank).
I’m excited about the job. I was hired to work with the ASP.NET MVC team to build content around ASP.NET MVC and act as a liaison with the MVC developer community. I get to play with what Phil Haack and the rest of the ASP.NET MVC team is building when it is brand spanking new. I get to interact with the developer community and show off what Microsoft is creating.
I’m excited about ASP.NET MVC. New frameworks for building web applications emerge very rarely. I’ve been building websites by taking advantage of Microsoft technologies for a very long time. I started with IDC and HTX templates, progressed to Active Server Pages, and then made the leap to ASP.NET. ASP.NET MVC is still very much a part of ASP.NET, but it embodies a very different approach to building web applications.
ASP.NET MVC is an alternative, but not a replacement, to using Web Forms when building applications with the ASP.NET framework. If you are an Agile developer, or you are excited about ideas from the Agile community, then you’ll find ASP.NET MVC very attractive.
I’m very much a mainstream ASP.NET developer. I like Web Forms and I have built a lot of really great websites by taking advantage of Web Forms. However, I think that there are many valuable ideas and practices that mainstream ASP.NET developers can learn from the Agile world. You might have noticed that I have been posting entries on Test-Driven Development, Mock Frameworks, and Design Principles and Patterns on this blog during the last few months. One of the primary goals of my new job is to get mainstream ASP.NET developers excited about these ideas.
I admit, I am a little intimidated about what I need to learn. This is new territory for me. I’m leaving my safe home of ASP.NET Web Forms and entering the savage wilderness. I suspect, however, that this adventure will be worth it.
Oh, one last thing. I redesigned the look of this blog. I added a prominent picture of myself to the banner thinking deep thoughts (What should I put on that whiteboard? What should I put on that whiteboard?). Let me know what you think of the new design. | http://weblogs.asp.net/stephenwalther/archive/2008/06?PageIndex=2 | CC-MAIN-2014-49 | refinedweb | 9,729 | 51.24 |
Re: Predefined styles
Jon Haynes wrote: I'm new to Lyx and I was wondering whether there was a way to set the styles globally across an entire document. e.g. Title: 22pt Helvetica Paragraph: 12pt Georgia At the moment I can only see how to set this information inline, e.g. each paragraph. Am I missing
Faulty Latex generated by LyX ?
Hello, Trying to debug exportation from LyX to MsWord through Tex4ht, I've found this bug (?) in the LaTeX generated by LyX. I'll paste a very simple example below. The LaTeX below compiles but the 'First Paragraph' is not correctly indented. The solution is to insert an empty line between
Re: Faulty Latex generated by LyX ?
Juergen Spitzmueller wrote: Yes, especially since it is bad typography in some countries to indent the paragraph following a quotation. Therefor we should be careful that the change does not change the layout of given documents. Adding an option is maybe an overkill. I think it could be
Re: latex2rft / tex2rtf problems
K. Elo wrote: Hi, I have to convert a long article with a bibtex bibliography into the rtf-format. I have installed both latex2rtf and tex2rtf, but I just cannot get it working. The problems are: Did you use latex2rtf like that : latex2rtf -a myauxiliaryfile.aux -b
Re: Footnotes problem
K. Elo wrote: Hi, Axel Dessecker, 9.12.2005 19:50: Kimmo, Am Freitag, 09. Dezember 2005 15:27 schrieb K. Elo: 1) The title should be followed by a footnote marked with *, i.e. bla bla bla* 2) All other footnotes should be marked with numbers (1,2,...). Actually, the idea is the
Re: International characters in file names - Lyx 1.4pre3, MacOSX
Georg Baum wrote: Anders Ekberg wrote: I know that spaces and international characters have been discussed as issues before. Are these problems supposed to be solved in version 1.4? If I'm not mistaken, latex2e does not work with a filename with international letters. This problem is
Re: LyX 1.4 test
Georg Baum wrote: and it will install as lyx-1.4 and not mix up the support files either. Beware that if you have LyX 1.3 and LyX 1.4 on your computer, they will both interacts with the same .lyx directory. I've created a new user for 1.4 to isolate the two .lyx directories. Cheers,
Re: two abstracts in two languages
Martin A. Hansen wrote: i tried using this hint however, this complains that babel for language danish is not loaded. If there is some blasphemy in your document, it won't work ;-) so i tried to put \usepackage[danish]{babel} in
Re: two abstracts in two languages
Martin A. Hansen wrote: but this doesnt seem to work - the result is a all-danish document ... ? Strange, here it works. See attached file. Maybe, you have a done a slight error (like putting return and not ctrl-return in the ERT box). Cheers, Charles --
Re: LyX Export ASCII - Bibliography
Georg Baum wrote: Konrad Hofbauer wrote: Hi! I want to export a LyX document to ASCII-text (so that somebody else can import and reformat it in Word and finally InDesign - so it is not really about having a good looking ASCII file, but something that one can continue working with ...).
Re: True type 1 fonts in pdf
Myriam Abramson wrote: Hi! Sorry if that's well known already but I am getting confused by different answers I read on this issue. What kind of information are you looking for ? Using a truetype font in pdflatex (and therefore in LyX) is possible but is not trivial [but easier than in
Re: Left Title
Torsten Hahn wrote: Hi, is there a way to let the title and author of a koma script article be left justified instead of centered ? You will have to do your title page 'by hand'. Look at \begin{titlepage} in the KOMA-Script documentation Cheers, Charles --
Re: Changing the citation style
Matthias Schmidt wrote: How can I change the citation style? The references to the bibliography appear in footnotes. By the LyX default, they consist of the number of the work in the bibliography in square brackets: [12]. But I would like the author with the year
Re: Marginal Note in Enumerate
Bruce Pourciau wrote: Using the enumerate environment to number problems on an exam, I would like to place [20] just to the left of the number of the problem to tell the students that this is a 20 point problem. I tried in ERT \marginpar[text] and \reversemarginpar{text}, with the text being
Re: lyx compatibility to MsWord etc
Tom Tom wrote: Hi all, I am about to begin to write up my Phd-thesis and would like to do this in lyx. One thing that worries me is that the Profs who will correct it are working in MsWord! The easiest way is to give pdf files to your supervisors. They print it, mark it and gave them
Re: selecting special fonts
Stacia Hartleben wrote: Is there a better way to select special special fonts other than doing ERT? I know under character I can change it to san serif or whatever but for a special font I installed I have to do something like this: \usefont{T1}{stacish}{m}{n} \selectfont myword
Re: selecting special fonts
Herbert Voss wrote: Charles de Miramon wrote: \newcommand\stacia[1]{% bgroup \usefont{T1}{stacish}{m}{n}\selectfont#1% \egroup} I guess it is \bgroup Herbert, can you explain what is the purpose of \begingroup \endgroup ? The TLC2 (my Bible) is not very clear about it. Cheers
Re: Still: citation reference is wrong
If it is wrong, give a minimal
Re: lyx compatibility to MsWord etc
Stephen Harris wrote: Currently I think tex4ht will convert latex to html. Then the html can be imported by Word. This doesn't work great. Wrong. Tex4ht converts well to Oowriter. I've done it for a 54 pages article with footnotes, a jurabib bibliography, tables and some custom macros. With
Re: Still: citation reference is wrong
Charles de Miramon
Re: lyx compatibility to MsWord etc
Stephen Harris wrote: By massaging don't you mean proofreading and editing. Also I once posted that Fabrice Popineau stated that tex4ht was the best method (better than png) available for conversion from Latex to Word. It is, but that still falls short of great when compared to LyX-pdf
LyX 1.4 packages for Debian Unstable Amd64
Hello, I've put packages here : and also on Charles --
Re: LyX 1.4 packages for Debian Unstable Amd64
Jean-Marc Lasgouttes wrote: Charles == Charles de Miramon [EMAIL PROTECTED] writes: Charles Hello, I've put packages here : Charles and also on I've added a binary for i386 (Debian Unstable) to install them, download them in a directory
Re: How to export in rtf?
Nagy Gabor wrote: Hi, I am writing my thesis in LyX, but the reviewer would like to review and comment it using ms word. There has been several threads on this topic lately. If your reviewer wants to use the different reviewing options in MsWord you will have the utmost difficulties to get
Re: Importing a text file with German accented characters
BEJ wrote: Nicholas Allen wrote: I hope someone can help me. I am trying to import a text file that contains German accented characters. The file is saved in UTF-8 format. How do I do this? If I use the import ASCII the accented characters appear as two junk characters. Under Linux, you
Does Forward DVI works in LyX 1.4
Hello, I've seen a mention of a patch on lyx-devel implementing forward DVI, but I cannot see it working in LyX 1.4. Cheers, Charles --
Re: Changing fonts
Jonathan Vogt wrote: Hi there, I'm currently doing some stuff for my wedding in lyx (program, menucard, etc.) I've been looking around for a way to change the font in lyx to some nicer and more suitable one. I would prefer URW Chancery L, but I have no clue how to get that combined with
Re: Importing a text file with German accented characters
Jon Riding wrote: I have a related problem with UTF-8 characters which judging from this discussion may not be achievable in LyX. I need to include text in my documents taken from data in Bantu and Nilotic group languages. Some of these include accented vowels and other characters that are
Re: Debian testing packages of LyX 1.4.0
Jean-Marc Lasgouttes wrote: We already have debian-unstable debs, don't we? There are little problems with my packages. Applied on an old lyx 1.3 .lyx directory, LyX will complain to be unable to find the book, article, etc. layouts. Moving .lyx to .lyxold, resolve the problem Cheers,
Problematic custom layout file
lyxmacros.inc # Remove some unwanted styles. #NoStyleRight_Address #NoStyleAddress # Lettre textclass definition file. # Adapté de stdletter.inc par N. Hathout (2 février 2001) # Modifié par Yann Morère (13 fev 2001) Charles de Miramon (20/11/2003) # Author : Matthias
Re: Problematic custom layout file
Georg Baum wrote: Maybe you have a customized book layout file there that also needs one of the numxxx includes? Yes. You are right. I had in my .lyx/layouts an old stdclasse.inc from 1.3 that was loaded by LyX 1.4 in preference to the nex stdclasse.inc and created errors. Removing it solved
Re: writer/odt to tex/lyx
Matthias Schmidt wrote: Hello, I converted an openoffice odt-document with writer2latex to a tex-document. The results were not good. Is there an other converter to import odt-documents to tex/lyx with better results? Strange. I had quite good results with writer2latex. What was not well
Formatting character dialog
Hello, I'm slowly discovering LyX 1.4. I found that if you put your cursor in the middle of a word and select the Edit -- Text style dialog. Changes are applied to the whole word (that you have not selected). Is it a feature or a bug ? Cheers, Charles --
Re: Formatting character dialog
Jean-Marc Lasgouttes wrote: We could do a poll of how other apps (ooo, word, kword, abiword...) and decide how to change this behaviour. OpenOffice follows the same behaviour than LyX. Kword, Abiword and if I remember right MsWord are in the opposite group. I think LyX actual behaviour is
Re: tex4ht, tex2rtf - all fails
Axel Dessecker wrote: You definitely need them, and you have to adjust the .env file to your system. Please read the documentation at TeX4ht is a bit difficult to set up. The Debian package never worked for me but installing from scratch worked.
Re: Outlining documents for LyX
Jean-Marc Lasgouttes wrote: Peter == Peter Bowyer [EMAIL PROTECTED] writes: Peter Hi, Failing to find a view in LyX similar to Microsoft Word's Peter Outline view, how do you plan and structure your large Peter documents with LyX? Is there an external tool that will work Peter with LyX to
Printing grom though Kprinter
Hello, With LyX 1.3.7, you could change the spool command in the Printer preferences dialog to kprinter to funnel printing through kprinter. I'm unable to make it work with 1.4.0 and 1.4.1svn Am I alone ? Cheers, Charles --
Re: Printing grom though Kprinter
Charles de Miramon wrote: Am I alone ? Yes, bad cups configuration --
Re: Question about Bibtex - which is the best GUI?
Tim Vaughan wrote: Hi, I plan on using Bibtex to handle the citations needed for a series of essays I am writing. I have come across two Java GUIs, JabRef and Bib-it and an OS X one, BibDesk. I'm happy to try them all out but I was wondering if people had experience with these programs
Re: Question about Bibtex - which is the best GUI?
Maria Gouskova wrote: I use JabRef on Mac OS X, which I chose initially because it interfaces with LyX. I am mostly happy with it. My two complaints are the same as Charles de Miramon's: JabRef is a little slow and you have to get rid of ASCII characters with diacritics in your bibliography
Re: bookstab package and lyx
Marcelo Acuña wrote: Hello, I want to use bookstab package for nices tables, doc of this package came with example, this example work in kile but in lyx I only get errors and errors like: You haven?t defined [EMAIL PROTECTED] yet It is a bug when you cut and paste from LaTeX insets.
Re: Help on compiling Lyx 1.4.0 for teTex3
Patton, Eric wrote: Hi, I see that the latest teTex is version 3, so I was wondering if it is possible to compile Lyx for support of teTex3? I'm using Ubuntu 5.10 on a PC. Why don't you install official ubuntu tetex3 packages ? According to
Re: Easy way to get the page number top right
c. Can I change this? You sound like a user who knows fancy. This is a little bit OT for this thread but maybe you can give me a hint. How can I make a line under the header to visual separate him from the text? --
LyX 1.4.1 binaries for Debian unstable
Hello, The Debian maintainers of LyX are not very active. I have therefore created unofficial i386 binaries of LyX 1.4.1 for Debian unstable (sid). You should add in your /etc/apt/sources.list the following line : deb sid main contrib non-free then apt-get update
Re: LyX 1.4.1 binaries for Debian unstable
[EMAIL PROTECTED] wrote: Maybe you could add this information to this page: Done --
Re: Problems with 1.4.1, from Re: LyX 1.4.1 binaries for Debian unstable
Micha Feigin wrote: Kenward The menu structure changed, so what you need is probably elsewhere. Also the structure of the setup files changed somewhat. Try moving your .lyx directory from your home directory, it may solve some of the problem. Yes, it is a good idea. The upgrading from
Re: Enumeration/conversion issues of 1.4.1 - 1.3.x files (important exams)
Kenward Vaughan wrote: After some kind help from Micha and Charles, I have 1.4.1 running apparently smoothly (the screen still appears a bit sluggish, which I recall seeing in other posts about 1.4.1--but that's not an issue for me yet... ;-). An important problem I immediately encountered
[Announce] Update of LyX 1.4.1 Debian packages
Hello, I've just uploaded on : - Untested i386 sarge packages for LyX 1.4.1 (qt frontend) that should also work for other Debian derivatives like Xandros - amd64 sid packages for LyX 1.4.1 (qt frontend) - Matej Cepl Dvipost package for sid i386 - updated i386
Re: Let's get the Lyx Debian pkg back in sync with upstream
Georg Baum wrote : I'll upload an updated diff for 1.4.2svn to the wiki tonight. I've got 1.4.1 packages for sid (amd64, i386) and for sarge (i386) for the qt frontend apt-gettable from I can try to upload the source package tonight. I've started from Georg
Re: Let's get the Lyx Debian pkg back in sync with upstream
Sven Hoexter wrote: On Fri, May 12, 2006 at 11:29:15AM +0200, Charles de Miramon wrote: Hi, I've started from Georg Baum's version but I made an ugly hack in debian/rules to force it to compile with gcc 4.1. I don't know how you should do it properly in Debian. I've updated
Re: [Pkg-lyx-devel] Let's get the Lyx Debian pkg back in sync with upstream
[EMAIL PROTECTED] wrote: On Fri, 12 May 2006, Per Olofsson wrote: - Is there a DD out in the wild internet willing to sponsor a new Lyx package? Yes, I'd be willing to sponsor the package. What is a DD? Debian Developper. Debian has a full American-oriented lingo, like 'MIA'
Re: Managing Large, Disparate Bibliographic Databases
Look Tellico. --
Re: LyX and jurabib
Anders Dahnielson wrote: Ooops, sorry for not CCing the list, thought it will be sent a copy when hittng reply. :) I have just created Please add the information of this thread there. I will add my own tricks with jurabib when I find a little time.
Re: Giving options to Jurabib [Lyx1.4.1]
Arne Kjell Vikhagen wrote: Hi, I just wondered, how do i give jurabib options to Lyx now that I have activated Jurabib in the Document Settings instead of in the preamble? If I put my usual line in the preamble, it complains that Jurabib has been called twice. Bennett gave the answer.
Re: Giving options to Jurabib [Lyx1.4.1]
Arne Kjell Vikhagen wrote: Thanks, I just saw in the mailing list archives today that the same topic came up. Thanks for replying though. Great that there now is a wiki-page on Lyx-Jurabib-Humanities, since I think more and more people have discovered the beauty of it. Maybe include
Re: Weird LyX 1.4.1 behavior in Debian (compared to 1.3.8)
Jerome Tuncer wrote: Hello everyone, I've been enjoying the use of LyX 1.3.8 during the last few weeks: simple and plain editor that perfectly suits my needs. However, I had to switch to 1.4.1 because someone modified one of my files with it and 1.3.8 wouldn't open it anymore. I found
Re: Suppressing URLs in jurabib
Anders Dahnielson wrote: Anyone who know some way to suppress printing of URLs in citations when using jurabib? I put the following in the LaTeX preamble: \renewcommand{\biburlprefix}{} \renewcommand{\biburlsuffix}{} \renewcommand{\jburldef}{} However, it leaves an extra comma at the
Re: Help regarding lyx
pramod salunkhe wrote: 3.Suppose I have started to write a report in lyx. On the first page of lyx I want the page No. of 23. How can I do this? The page number is kept in a counter called page. You have to set it to the correct number. Put in ERT at the top of your document
Re: upper case letter with accentuation
paul saumane wrote: 1) Looking at the help, I cannot find the way to type accentuated upper case letters (capital letters with accent). I just found capital A with grave accent typing Alt, ctrl, 7 then shift a. Any clue to guide me. Alt-x to get the minibuffer. In the minibuffer type
Re: footnote ends up below tables
Curtis Osterhoudt wrote: Hi, all. I'm using LyX 1.4.1 on Debian, and writing in the koma-script book class. I'm having a problem with footnotes and table floats (it happens with figure floats, too). In the text, I have a footnote, which I *think* should go at the very bottom of the page,
Re: footnote ends up below tables
Hello, Charles. Thanks for the reply. I may not understand what you propose (I realize that I probably wasn't too clear in my original message, either: I'm not putting a footnote IN the table (nor in a figure caption, etc.), but just somewhere else on the page so that both it
Re: lyx 1.4.1
[EMAIL PROTECTED] wrote: I just installed lyx 1.4.1 and started playing with it. Once you get used to the new interface, waoo ! It feels great. I was told in the past that the size of the opening window was configurable now. Did I dream? I couldn't find where I can do it from. Also, I
Re: frenchb problem
jouke hijlkema wrote: Hi all, How do I tell lyx 1.4.1 to use the frenchb language option. The only thing I can find is french and that's incompatible with the rest of my options What LaTeX distribution are you using. With teTeX 3.0, french option works well. Frenchb is something
Re: Bibliography/citation formatting issues
Juergen Spitzmueller wrote: Jeremy Wells wrote: To properly adhere to this style, there is a need for shortened forms of subsequent citations and/or the use of ibid for repeated citations, none of which are supported by any Bibtex style file. Not true. Jurabib supports all of this
Re: jurabib question
John Ward wrote: I'm having a difficult time getting jurabib set-up the way that I want. I have it close, but there are still a couple of issues. First, I have the 'titleformat=italic command in the preamble, but am still not getting my titles italicized (in either the original references
Re: jurabib question
John Ward wrote: I would like to have the dash in the bibliography. I looked over the documentation again and still have no idea what I'm missing. Cf. p. 16 of the English documentation : add bibformat=ibidem, in your jurabibsetup and in your preamble :
Re: Jurabib and Lyx: Bibliography problem
Julio Rojas wrote: Hi, this is a minor problem, but an annoying one. In Jurabib documentation the format of each bibliographic reference shows: Brox, Hans: Allgemeiner Teil des Bürgerlichen Gesetzbuches. 20th edition. Köln, Berlin, Bonn, München, 1996 But in my document all references
Re: Confused about Lyx's goals -- isn't this supposed to increase productivity?
Juergen Spitzmueller wrote: For instance: jurabib (since this is mentioned by the OP). I implemented that to Lyx 1.4 because I need it for my own work. I admit that the jurabib support still can be enhanced in many ways. But hey, it's a brand new feature, jurabib itself is very feature-rich,
Re: Converting in rtf or openoffice for windows
Sebastian wrote: Dear Listusers, I'm writing an article for a journal and the redaction is only taking rtf Format. Is there a way to export my lyx document into rtf or OpenOffice for windows. Unfortunatly works the export script for openoffice only for linux, but I'm working on windows. Is
Re: Italics in bibtex record
Declan O'Byrne wrote: I'm using pybliographic to manage bibliography. Some titles I have quote book titles, and those words should be italicised. Is there any way to italicise these words either manually, or to mark those words as a book title (in such a way that jurabib might be coaxed to
Re: Converting in rtf or openoffice for windows
Stephen Harris wrote: htaltex which comes with the tex4ht package converts to html. from the command line: htlatex something.tex makes something.html, Conversion quality varies and outputs need to be proofread. No, use the oolatex macro in tex4ht. It will convert your file to the
Re: What's on YOUR BibTeX Wish List?
Juergen Spitzmueller wrote: Note that there have been attempts to add a kind of makebst support to jurabib (which never has been finished IMHO). I think that a combination of jurabib and makebst would be very much the thing most humanists need. So if you intend to do something like that
Re: Table with fixed width columns
Juergen Spitzmueller wrote: Did you hit the return key after that to apply the changes? Having an 'apply' button in the Table Parameters dialog would be a nice usability enhancement. The table interface is indeed quite confusing. I think it should be split between a dialog for the general
Re: Wiki idea
LB wrote: I agree. That's a great idea. Cheers, Charles --
Re: How to reset a character style
Juergen Spitzmueller wrote: [EMAIL PROTECTED] wrote: How do I reset the character style to no formatting ? ATM it is a bit painful (undo or copy/paste), but we'll introduce a new method for dissolving insets (footnotes, character styles, comments etc.) by hitting DEL in the first position
Re: footnote inside table
Paul Schwartz wrote: thanks. ok with my case (documents with specific notes for each of them) i will put it like a note inserting it manually just below the table. Paul Or put your table in a minipage. Then you should be able to see the footnote. Cheers, Charles --
Re: edit lyx file in text editor
e-letter wrote: I use lyx at home but unable to do so at a public computer. I would like to be able to edit a lyx file in a text editor save the amended lyx file and then when at home continue using the edited lyx file in lyx. The file is stored on a network accessible via internet. LyX
Re: Hebrew on Mac OS X
Abdelrazak Younes wrote: Alexander Maryanovsky wrote: On 12/19/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: It means that the jesus font (an hebraic font) is unavailable on your system. I guess that you have to install the hebrew package on your latex distribution to get a set of hebrew
Re: Hebrew on Mac OS X
Abdelrazak Younes wrote: On a related note, don't you guys think that an application should work out of the box, without having to go hunt around the internet for packages and such? Sure and you are welcome to help us toward this goal ;-) I guess that for Hebrew and Arabic we could
Re: Header problem !
[EMAIL PROTECTED] wrote: I really don't know which one could be the problem, so no idea where to l= ook for.. any suggestion? Look the LaTeX log. Maybe you will have a clue. It looks like LaTeX cannot find the file. Buon natale, Charles --
Re: how to change some attributes of textclasses/environments?
Richard Heck wrote: So to change this, you have to get your hands dirty with LaTeX. It /may/ be that koma-book itself provides a way to configure the font size: It's very configurable. Look at the koma-script documentation to see. You'll find it in scrguide.pdf, which ought to have come with
Re: [OT] Best KDE-centric Distribution?
rgheck wrote: Debian Etch. [snip] Thanks for the suggestion. I've thought about Debian, and it has a great reputation for stability. That said, while I don't want to be absolutely bleeding edge, I do like to be near it, and Debian at least has a reputation for being, well, stable. Just
Re: LyX Qt under Gnome environment
Raymond Ouellette wrote: I compiled LyX 1.5.3 for my Linux distro Ubuntu 7.10. I'm a long time user of LyX. But since QT4 version of LyX, it is impossible for me to select anything using my mouse in LyX. Even selecting with the cursor, holding shift, etc, has erratic behavior. The mouse
Re: [OT] Best KDE-centric Distribution?
José Matos wrote: A frontend of lyx better integrated with kde seems a nice idea. :-) The Semantic framework Nepomuk is quite interesting (but very new). You can store and search all kind of relations, like a) This paragraph used this webpage as a source b) this graphic comes this file
Re: I'm new to LyX
Rich Shepard wrote: On Tue, 29 Jan 2008, Elswood wrote: And have tried to install the memoir template. On the configuration screen it says yes for it being but is nowhere to be found in my Templates folder nor can I find it anywhere else. I have re-configured after every download and I
Re: Problems with Lyx in connection to JabRef
Gudrun Meyer wrote: I use Lyx 1.5 in connection with JabRef 2.2 as a bibliography data-base. Now, no matter which style I use, Lyx isnt able to show the bibliography in a proper way. It either leaves out important fields, such as location, adds fields, which I do not want to be shown, such
Re: Numbering inside tabless
Kane Kano wrote: Hello everybody, i would like to create a table with two colums and lets say 25 lines. In the first column starting from the secon cell I would like to number consecutively the following cells down to the bottom. like: headerheader 1. a
Re: LyX won't compile my document any more
Maximilian Wollner wrote: Any suggestions? Many thanks for your help! It looks like you have an encoding problem... Cheers, Charles --
Re: Export to plain text garbled
All is discovered--flee at once! wrote: Hi, I'm using LyX 1.5.3 on Ubuntu 6.06. (I've never used LaTeX or LyX before.) I'm trying to export a LyX document to plain text and it's coming out garbled. I've also tried using all three PDF export options (dvipdfm, pdflatex, ps2pdf), opening the
Re: Languages British vs English
Liviu Andronic wrote: On 3/10/08, Phillip Ferguson [EMAIL PROTECTED] wrote: Why is there an English language option and British language option? What is the difference? Is this an American vs English English? Hyphenation rules are different in British English and in American English and
Re: Spellchecker for LyX doesn't work as well as expected
G. Milde wrote: On 12.03.08, Dominik Böhm wrote: I just installed aspell German to use it with LyX. ... In German you cannot compound arbitrary words to new words. ... So, for German this compound setting seems to be pretty useless. hunspell seems to work better for German :
Re: Bibliography
Andréas LYBERIS wrote: Hi, I will try to explain clearly my problem. I am writting a scientific report and I added the bibliography via Bibdesk (Bibtex for mac). The default organisation of the bibliography is alphabetical: all my references are classified in the alphabetical order of
Re: Google Docs to LaTeX
Michael Thompson wrote: are too kind to what Charles calls the 'wysiwyg cruft'. If you find a solution to the problem with em-dashes that doesn't involve a find-and-replace in the .tex file, tell me. I paste my w2lclean script that I run on files converted by writer2latex before importing
Re: Bibtex question
Daniel Miles wrote: Hi. Using lyx 1.5.4 on ubuntu, and I cannot for the life of me figure out how to reference according to my Uni guidelines - which I believe are Chicago Manual of Style. What I need as output that can be seen by the Chicago Manual of Style section on this webpage:
Re: Lyx for business
Graham Smith wrote: A rather vague question, but would anyone like to share their experiences of using Lyx in a small business situation, or point me towards some web links, with maybe example templates. Headed notepaper, business (technical) You can insert an image (for example the logo
Re: We could really use search-for-environment and search-for-charstyle
Abdelrazak Younes wrote: As for character style naigation, we could have something similar in 1.6.x. I was wondering yesterday if this could be used : It works on non-xml data and you can mash-up in one structure the LyX format and the
Re: reverse-DVI support for documents with Children?
Cameron Stone wrote: However, it doesn't work completely for documents containing child documents as Includes. Does anyone know if/how this can be made to work? I don't think it works with child documents. The new synctex synchronization framework between pdf and latex supports child
Re: Bibliography format problem, Please help !!!!!!!!!!
rohitthakkar wrote: Kindly suggest the Change at the earliest. You will have to change the unsrt.bst. You must create a macro that insert a carriage return (\\) after outputting author. Bibtex files are not easy to hack, the best guide is Tame the Beast from Nicolas Markey
Re: Experience using XeTeX with XeTeX?
Christian Ridderström wrote: users? Hi, I was asked if it's possible to use LyX with XeTeX, is this possible and does it work reasonably well? I'm monitoring the XeTeX mailing list and there is a fair amount of total LaTeX newbies that are interested in XeTeX because they need to type
Re: Experience using XeTeX with XeTeX?
Uwe Stöhr wrote: Charles de Miramon schrieb: I was asked if it's possible to use LyX with XeTeX, is this possible and does it work reasonably well? Not yet. The final XeTeX 1.0 has not yet bee released. MiKTeX updates the XeTeX version evey month, while TeXLive will only do this once | https://www.mail-archive.com/search?l=lyx-users@lists.lyx.org&q=from:%22Charles+de+Miramon%22 | CC-MAIN-2021-31 | refinedweb | 5,369 | 72.87 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
void isr_mbx_send (
OS_ID mailbox, /* The mailbox to put the message in */
void* message_ptr ); /* Pointer to the message */
The isr_mbx_send function puts the pointer to the message
message_ptr in the mailbox if the mailbox is not
already full. The isr_mbx_send function does not cause the
current task to sleep even if there is no space in the mailbox to put
the message.
When an interrupt receives a protocol frame (for example TCP-IP,
UDP, or ISDN), you can call the isr_mbx_send function from the
interrupt to pass the protocol frame as a message to a task.
The isr_mbx_send function is in the RL-RTX library. The
prototype is defined in rtl.h.
Note
None.
isr_mbx_check, isr_mbx_receive, os_mbx_declare, os_mbx_init
#include <RTL.h>
os_mbx_declare (mailbox1, 20);
void timer1 (void) __irq {
..
isr_mbx_send (&mailbox1, msg);
..
}. | http://www.keil.com/support/man/docs/rlarm/rlarm_isr_mbx_send.htm | CC-MAIN-2019-43 | refinedweb | 145 | 64.51 |
Calico Scheme
Revision as of 12:10, 8 December 2014
Here we provide documentation for using Calico Scheme and Calysto Scheme, version 3.0.0.
Calico Scheme: a new implementation of a Scheme-based language written in C# for .NET/Mono. It implements many core Scheme functions, but also adds some functionality to bring it into line with other modern languages like Python and Ruby. It can run inside Calico as a DLL.
Calysto Scheme: a new implementation of a Scheme-based language implemented in Python. It implements many core Scheme functions, but also adds some functionality to bring it into line with the other modern languages like Python and Ruby. You can find Calysto Scheme here:
You can install Calysto Scheme with:
pip install calysto
python -m calysto.language.scheme.scheme
To use with IPython3/Jupyter (under development):
pip install calysto-scheme
ipython console --kernel calysto_scheme
You will need the development version of IPython3. See "Installing the development version" at.
You can see many examples using Calysto Scheme in IPython/Jupyter notebooks:
-
-
The Scheme in C# version can be found in Calico, installed from here:
Contents
- 1 Starting Scheme
- 2 Scheme Extensions
- 3 Types
- 4 Commands
- 4.1 %
- 4.2 *
- 4.3 +
- 4.4 -
- 4.5 /
- 4.6 <
- 4.7 <=
- 4.8 =
- 4.9 >
- 4.10 >=
- 4.11 abs
- 4.12 and
- 4.13 append
- 4.14 apply
- 4.15 assq
- 4.16 assv
- 4.17 atom?
- 4.18 boolean?
- 4.19 callback
- 4.20 car, cdr and related
- 4.21 case
- 4.22 cd/current-directory
- 4.23 char->integer
- 4.24 char->string
- 4.25 char-alphabetic?
- 4.26 char-numeric?
- 4.27 char-whitespace?
- 4.28 char=?
- 4.29 char?
- 4.30 cond
- 4.31 cons
- 4.32 current-environment
- 4.33 current-time
- 4.34 cut
- 4.35 define
- 4.36 define!
- 4.37 dict
- 4.38 dir
- 4.39 eq?
- 4.40 equal?
- 4.41 eqv?
- 4.42 error
- 4.43 eval
- 4.44 eval-ast
- 4.45 even?
- 4.46 float
- 4.47 for-each
- 4.48 format
- 4.49 get-stack-trace
- 4.50 import
- 4.51 int
- 4.52 integer->char
- 4.53 iter?
- 4.54 lambda
- 4.55 length
- 4.56 let
- 4.57 let*
- 4.58 letrec
- 4.59 list
- 4.60 list->string
- 4.61 list->vector
- 4.62 list-ref
- 4.63 list?
- 4.64 load
- 4.65 load-as
- 4.66 make-set
- 4.67 make-vector
- 4.68 map
- 4.69 member
- 4.70 memq
- 4.71 memv
- 4.72 not
- 4.73 null?
- 4.74 number->string
- 4.75 number?
- 4.76 odd?
- 4.77 or
- 4.78 pair?
- 4.79 parse
- 4.80 parse-string
- 4.81 procedure?
- 4.82 quote, quasiquote, and unquote
- 4.83 quotient
- 4.84 rac
- 4.85 range
- 4.86 rational
- 4.87 rdc
- 4.88 read-string
- 4.89 remainder
- 4.90 require
- 4.91 reverse
- 4.92 round
- 4.93 set!
- 4.94 set-car!
- 4.95 set-cdr!
- 4.96 snoc
- 4.97 sort
- 4.98 sqrt
- 4.99 string
- 4.100 string->list
- 4.101 string->number
- 4.102 string->symbol
- 4.103 string-append
- 4.104 string-length
- 4.105 string-ref
- 4.106 string-split
- 4.107 string<?
- 4.108 string=?
- 4.109 string?
- 4.110 substring
- 4.111 symbol
- 4.112 symbol->string
- 4.113 symbol?
- 4.114 trace-lambda
- 4.115 typeof
- 4.116 unparse
- 4.117 use-lexical-address
- 4.118 use-stack-trace
- 4.119 use-tracing
- 4.120 import
- 4.121 vector
- 4.122 vector->list
- 4.123 vector-ref
- 4.124 vector-set!
- 4.125 vector?
- 4.126 (void)
- 4.127 zero?
- 5 Advanced topics
- 6 Use in Python
- 7 Current limitations
- 8 Under The Hood
- 9 References
- 10 For Developers
Starting Scheme
Example:
$ ipython console --kernel calysto_scheme
Calysto Scheme, version 3.0.0
----------------------------
Use (exit) to exit
==>
To run in Calico:
- Calico Download
- Run in gui
- Run from command-line
Example:
$ calico --repl --lang=scheme
Loading Calico version 3.0.0...
scheme>
From either, Enter expressions such as "(+ 1 1)". Control+d will exit.
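A first interaction might look like this (using the ==> prompt shown above):

```
==> (+ 1 1)
2
==> (exit)
```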
Scheme Extensions
Calico Scheme is the beginning of a complete Scheme, but is currently missing many functions (see scheme issues in our issue tracker). Even still, we have added some extensions to Scheme, including:
- allows doc-strings on any defined item
- can trace executing expressions in Calico GUI
- shows stack traces (that look like Python's) on exceptions (turn off with (use-stack-trace #f))
- Exceptions - implements try/catch/finally
- implements non-deterministic choose/fail
- Modules and namespaces - implements namespaces for imports and foreign objects
- import libraries written for Python (in the Python version), or CLR (in the C# version)
- Interop - ways for Scheme to interop with other Calico languages
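As a sketch of the exception and non-deterministic extensions (the exact `try`/`catch`/`finally` and `choose`/`require` syntax below follows the Calysto Scheme reference notebooks and may differ between versions):

```scheme
;; Exceptions: evaluate a body, catch any raised exception, run a cleanup
(try (/ 1 0)
     (catch e 'division-failed)    ; e is bound to the exception object
     (finally (void)))             ; => division-failed

;; Non-determinism: choose proposes values; a failing require backtracks
(let ((x (choose 1 2 3)))
  (require (even? x))              ; rejects 1, backtracks, accepts 2
  x)                               ; => 2
```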
Types
literals
Literals are self-evaluating values in Scheme. Quoting them is not necessary, and does not change them.
- numbers
- characters
- strings
- booleans
numbers
There are 4 types of numbers in Calico Scheme:
- integers
- BigIntegers - automatically moves to BigIntegers when overflow occurs
- floating point numbers
- rationals
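For example, each of the four numeric types can be entered directly at the prompt:

```scheme
42                                ; an integer
(* 1000000000000 1000000000000)   ; a large product is promoted to a BigInteger automatically
3.14                              ; a floating point number
(/ 1 3)                           ; a rational => 1/3
```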
booleans
Boolean values consist of #t (true) and #f (false). In testing (e.g., if, cond), anything that is not #f is considered true. That means that 0 and '() are both considered true.
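For example, because only #f is false, both 0 and '() take the true branch of an if:

```scheme
(if 0 'yes 'no)     ; => yes  (0 is not #f, so it counts as true)
(if '() 'yes 'no)   ; => yes  (the empty list also counts as true)
(if #f 'yes 'no)    ; => no
```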
symbols
Symbols are created using either a preceding single-quote character, or by using the (quote ...) form.
Examples:
'thing (quote thing)
Symbols with the same name represent the same object. That means (eq? 'x 'x) returns #t.
'() is a special symbol, pronounced "the empty list".
Commands
The following are the functions, syntax, and special forms for Calico Scheme.
%
(% ...)
Modulo returns the remainder left over after dividing the first number by the second.
Example:
(% 10 3) => 1
*
(* ...)
Multiply will take a number of arguments and give you the total of all numbers multiplied with each other.
Examples:
(*) => 1
(* 12) => 12
(* 2 3) => 6
(* 2 3 4) => 24
+
(+ ...)
Addition will take a number of arguments and give you the sum of all of the numbers added together.
Example:
(+ 7 8) => 15
-
(- ...)
Subtraction will subtract one number from another.
Example:
(- 5 2) => 3
/
(/ ...)
Divide will divide the first number by each subsequent number. With no arguments it returns 1, and with a single argument it returns the reciprocal.
Examples:
(/) => 1
(/ 2) => 1/2
(/ 3 4) => 3/4
<
(< ...)
Less than will return #t or #f as to whether the first number is less than the second.
Example:
(< 5 2) => #f
<=
(<= ...)
Less than or equal than will return #t or #f as to whether the first number is less than, or equal to, the second.
Example:
(<= 5 6) => #t
=
(= ...)
Equality for numbers.
Example:
(= 6 7) => #f
>
(> ...)
Greater than will return #t or #f as to whether the first number is greater than the second.
Example:
(> 9 2) => #t
>=
(>= ...)
Greater than or equals will return #t or #f as to whether the first number is greater than or equal to the second.
Example:
(>= 4 5) => #f
abs
(abs ...)
Returns the absolute value of the number.
Example:
(abs -1) => 1
and
(and ...)
And will return #f as soon as it encounters a #f in the items (eg, it short circuits without further evaluation of arguments). If no #f is encountered, it returns the last item.
Examples:
(and 4 1 2 #t (quote ()) 0) => 0
(and 4 1 2 #t '() 3) => 3
(and 4 1 2 #f '() 0) => #f
append
(append ...)
Append takes a number of lists and combines them. The last item can be an atom, in which case it returns an improper list.
Examples:
(append (quote (1 2 3)) (quote (4 5 6))) => (1 2 3 4 5 6)
(append '(1 2 3) '(4 5 6)) => (1 2 3 4 5 6)
(append '(1 2 3) '(4 5 6) '(7 8 9)) => (1 2 3 4 5 6 7 8 9)
(append '(1 2 3) '(4 5 6) 7) => (1 2 3 4 5 6 . 7)
apply
(apply f args)
Apply takes a function and applies it to a list of arguments. (apply f '(1 2)) operates as if it were written (f 1 2).
Examples:
(apply car (quote ((1)))) => 1
(apply car '((1))) => 1
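A more common use is spreading a list of arguments over an arithmetic procedure:

```scheme
(apply + '(1 2 3))   ; behaves like (+ 1 2 3) => 6
(apply * '(2 3 4))   ; behaves like (* 2 3 4) => 24
```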
assq
(assq ...)
Find an association using the eq? operator. Given an item, look it up in a list of pair of associations.
Examples:
(assq 1 (quote ((1 2) (3 4)))) => (1 2)
(assq 1 '((1 2) (3 4))) => (1 2)
assv
(assv ...)
Find an association using the eqv? operator. Given an item, look it up in a list of pair of associations.
Examples:
(assv 1 (quote ((1 2) (3 4)))) => (1 2)
(assv 1 '((1 2) (3 4))) => (1 2)
atom?
(atom? item)
Returns #t if item is an atom.
Example:
(atom? 1) => #t
boolean?
(boolean? item)
Returns #t if item is a boolean (#t or #f) and #f otherwise.
Examples:
(boolean? #t) => #t
(boolean? #f) => #t
callback
(callback args body)
For use with event-oriented host APIs.
car, cdr and related
Given one of these functions and a list, it will return part of the list.
(car ...)
Examples:
(car '(a b)) => a
(car '(a . b)) => a
(car '((a) b)) => (a)
(car (quote (((((hello there) this is a test) what is this) another item) in the list))) => ((((hello there) this is a test) what is this) another item)
(cdr ...)
Returns everything in a list, except the car.
Examples:
(cdr '(a b)) => (b)
(cdr '(a . b)) => b
(cdr (quote (((((hello there) this is a test) what is this) another item) 1 2 3))) => (1 2 3)
(caaaar list)
Is the same as (car (car (car (car list)))).
Examples:
(caaaar (quote (((((hello there) this is a test) what is this) another item) in the list))) => (hello there)
(caaaar '(((((hello there) this is a test) what is this) another item) in the list)) => (hello there)
(caaadr ...)
Example:
(caaadr (quote (((((hello there) this is a test) what is this) another item) ((((((1 2 3) 4 5 6) 7 8 9) 10 11 12) 13 14 15) 16 17 18)))) => ((((1 2 3) 4 5 6) 7 8 9) 10 11 12)
(caaar ...)
Example:
(caaar (quote (((((hello there) this is a test) what is this) another item) in the list))) => ((hello there) this is a test)
(caadar ...)
Example:
(caadar (quote (((((hello there) this is a test) what is this) (((1 2 3) 4 5 6) 7 8 9) another item) in the list))) => ((1 2 3) 4 5 6)
(caaddr ...)
Example:
(caaddr (quote (((((hello there) this is a test) what is this) (((1 2 3) 4 5 6) 7 8 9) another item) head ((1 2) 3 4) in the list))) => (1 2)
(caadr ...)
Example:
(caadr (quote (((((hello there) this is a test) what is this) (((1 2 3) 4 5 6) 7 8 9) another item) (in this) ((7 8)) the list))) => in
(caar ...)
Example:
(caar (quote (((((hello there) this is a test) what is this) another item) in the list))) => (((hello there) this is a test) what is this)
(cadaar ...)
Example:
(cadaar (quote (((((hello there) this is a test) (what) is this) (yet another) item) in the list))) => (what)
(cadadr ...)
Example:
(cadadr (quote (((((hello there) this is a test) what is this) (yet another) item) (in the) list))) => the
(cadar ...)
Example:
(cadar (quote (((((hello there) this is a test) what is this) (yet another) item) in the list))) => (yet another)
(caddar ...)
Example:
(caddar (quote (((((hello there) this is a test) what is this) another item) in the list))) => item
(cadddr ...)
Example:
(cadddr (quote (((((hello there) this is a test) what is this) another item) in the list))) => list
(caddr ...)
Example:
(caddr (quote (((((hello there) this is a test) what is this) another item) in the list))) => the
(cadr ...)
Example:
(cadr (quote (((((hello there) this is a test) what is this) another item) in the list))) => in
(cdaaar ...)
Example:
(cdaaar (quote (((((hello there) this is a test) what is this) another item)))) => (this is a test)
(cdaadr ...)
Example:
(cdaadr (quote (((((hello there) this is a test) what is this) another item) ((7 8)) 9 10))) => (8)
(cdaar ...)
Example:
(cdaar (quote (((((hello there) this is a test) what is this) another item)))) => (what is this)
(cdadar ...)
Example:
(cdadar (quote (((((hello there) this is a test) what is this) (another two) items)))) => (two)
(cdaddr ...)
Example:
(cdaddr (quote (((((hello there) this is a test) what is this) another item) 1 (2 5) 3 4))) => (5)
(cdadr ...)
Example:
(cdadr (quote (((((hello there) this is a test) what is this) another item) (1 6) (2 5) 3 4))) => (6)
(cdar ...)
Example:
(cdar (quote (((((hello there) this is a test) what is this) another item)))) => (another item)
(cddaar ...)
Example:
(cddaar (quote (((((hello there) this is a test) what is this) another item) 1 (2) 3))) => (is this)
(cddadr ...)
Example:
(cddadr (quote (((((hello there) this is a test) what is this) another item) (7 13) (8 12) 9 10))) => ()
(cddar ...)
Example:
(cddar (quote (((((hello there) this is a test) what is this) another item)))) => (item)
(cdddar ...)
Example:
(cdddar (quote (((((hello there) this is a test) what is this) another item)))) => ()
(cddddr ...)
Example:
(cddddr (quote (((((hello there) this is a test) what is this) another item) 1 2 3 4 5))) => (4 5)
(cdddr ...)
Example:
(cdddr (quote (((((hello there) this is a test) what is this) another item) 1 2 3 4))) => (3 4)
(cddr ...)
Example:
(cddr (quote (((((hello there) this is a test) what is this) another item) 1 2 3))) => (2 3)
case
(case ...)
For use in handling different cases. Returns the first expression that matches. "else" acts as if it were #t (always matches). If nothing matches, #f is returned.
Examples:
(case 'thing1 (thing2 1) (thing1 2)) => 2 (case 'thing1 (thing2 1) ((thing1 thing3) 2)) => 2 (case 'thingx (thing2 1) ((thing1 thing3) 2) (else 3)) => 3
(let ((r 5)) (case (quote banana) (apple (quote no)) ((cherry banana) 1 2 r) (else (quote no)))) => 5
cd/current-directory
(cd ...) (current-directory)
Change or report the current directory. Given a string, changes to that directory. With no argument, returns the current directory.
Example:
(cd) => "" (cd "..") => moves up one directory
char->integer
(char->integer ...)
Converts a character into an integer.
Example:
(char->integer #\a) => 97
char->string
(char->string ...)
Converts a character into a string.
Example:
(char->string #\b) => "b"
char-alphabetic?
(char-alphabetic? c)
Is c an alphabetic character?
Examples:
(char-alphabetic? #\A) => #t (char-alphabetic? #\1) => #f
char-numeric?
(char-numeric? c)
Is c an numeric character?
Example:
(char-numeric? #\1) => #t
char-whitespace?
(char-whitespace? ...)
Is c a whitespace (e.g. tab, space, newline) character?
Example:
(char-whitespace? #\t) => #f (char-whitespace? #\tab) => #t (char-whitespace? #\newline) => #t (char-whitespace? #\a) => #f
char=?
(char=? ...)
Returns #t if two characters are the same.
Example:
(char=? #\a #\a) => #t (char=? #\a #\b) => #f
char?
(char? item)
Is item a character?
Example:
(char? 2) => #f
cond
(cond ...)
Asks a series of questions, and if true, returns the associated expression. "else" acts as if it were "#t".
Example:
(cond (#f 1) (else 2)) => 2
cons
(cons item1 item2)
Constructs a new cons cell by cons-ing item1 onto item2. A proper list ends in the special symbol, '() pronounced "empty list". An improper list ends with a dot, followed by the last item.
Examples:
(cons 1 '()) => (1) (cons 1 2) => (1 . 2)
current-environment
(current-environment)
Get the current environment.
Example:
(current-environment) => the current environment
current-time
(current-time)
Returns the current time.
Example:
(current-time) => 1397405584.229055
cut
(cut ...)
The cut operation succeeds with the value of its argument, but prevents back-tracking past that point (see choose/fail/cut under Advanced topics).
Example:
(letrec ((loop (lambda (n) (if (= n 0) (set! var (cut 23)) (loop (- n 1))))) (var 0)) (loop 10) var) => (23)
define
(define variable value)
Define is used to create global variables, whether they have a value or function bound to them.
Examples:
(define x 1) (define f (lambda (n) n))
You may also use the MIT form:
(define (function args ...) body)
One final alternative is to allow a doc-string when defining any item:
(define function "This is a useful bit of text regarding function" (lambda (n) n)) (define speed "this is the speed in km/sec" 34)
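As a minimal sketch of the MIT form described above (square is a hypothetical example function, not a built-in):

(define (square n) (* n n)) (square 5) => 25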
define!
(define! variable value)
Define! is used to create variables in the hosting environment (Python or C#), whether they have a value or function bound to them. If you define! a function, Calico Scheme will automatically wrap it in a manner that is appropriate for the hosting environment. Note that this turns a Scheme function into a stack-based function in the host environment.
Examples:
(define! x 1) (define! f (lambda (n) n))
dict
(dict ...)
Make a dictionary.
Example:
(dict (quote ((1 2) (3 4)))) => none
dir
(dir ...)
Object properties and environment variables.
==> (dir) (- % * / + < <= = =? > >= abort abs and append apply assq assv atom? boolean? caaaar caaadr caaar caadar caaddr caadr caar cadaar cadadr cadar caddar cadddr caddr cadr call/cc call-with-current-continuation car case cases cd cdaaar cdaadr cdaar cdadar cdaddr cdadr cdar cddaar cddadr cddar cdddar cddddr cdddr cddr cdr char? char=? char-alphabetic? char-numeric? char-whitespace? cond cons current-directory current-environment current-time cut debug define-datatype dir display eq? equal? eqv? error eval eval-ast even? exit float for-each format get get-member globals import int iter? length let let* letrec list list? list->string list->vector list-head list-ref list-tail load load-as make-set make-vector map member memq memv newline not null? number? number->string odd? or pair? parse parse-string print printf procedure? property quotient range rational read-string record-case remainder require reset-toplevel-env reverse safe-print set-car! set-cdr! sort sqrt string string? string<? string=? string->list string->number string->symbol string-append string-length string-ref string-split substring symbol symbol? symbol->string typeof unparse unparse-procedure use-lexical-address use-tracing vector vector? vector->list vector-ref vector-set! void zero?) ==> (dir my-stuff) (x y z)
You can also use dir on an object:
(import "Myro") (dir Myro)
Example:
(length (dir)) => 170
eq?
(eq? ...)
Tests if two object are the same.
Example:
(eq? (quote a) (quote a)) => #t
equal?
(equal? ...)
Tests if two things evaluate to the same value.
Example:
(equal? 1 1.0) => #t
eqv?
(eqv? ...)
Slightly less strict than eq?.
Example:
(eqv? 1 1) => #t
error
(error name message)
Throw an exception. See also the try/catch examples below.
Example:
(try (error (quote a) "message") (catch e (cadr e))) => "Error in 'a': message"
eval
(eval item)
Evaluate an expression.
Example:
(eval '(+ 1 2)) => 3
eval-ast
(eval-ast ...)
Evaluate an abstract syntax tree.
Example:
(eval-ast (parse (quote (+ 3 4)))) => 7
even?
(even? number)
Returns #t if a number is even.
Example:
(even? 33) => #f
float
(float number)
Returns a floating point version of a number.
Example:
(float 23) => 23.0
for-each
(for-each f sequence)
Maps a function f onto a sequence, but doesn't return anything. Sequence can be a string, list, vector, or other iterator. for-each is used for its side-effects.
Example:
(for-each (lambda (n) (+ n 1)) (quote (1 2 3))) => <void>
format
(format message arg ...)
Formats a message using the following special codes:
- ~a - "any", prints as display does
- ~s - "s-expression", prints as write does
- ~% - newline
Example:
(format "~a ~s ~%" "hello1" "hello2") outputs: hello1 \"hello2\"
get-stack-trace
(get-stack-trace)
Gets the current stack trace. You can disable stack traces with (use-stack-trace #f).
Example:
(caddr (cadar (get-stack-trace))) => 69
import
(import <string-exp>) (import <string-exp> '<symbol-exp>)
Importing a modules provide an easy method for structuring hierarchies of libraries and code.
Examples:
==> (import "my-file.ss")
my-file.ss is a Scheme program file, which itself could have imports.
==> (import "my-file.ss" 'my-stuff)
Loads the file, and puts it in the namespace "my-stuff" accessible through the lookup interface below.
import uses the local environment, while load uses the toplevel-environment.
Lookup imported names:
module.name module.module.name
==> (import "my-file.ss" 'my-stuff) ==> my-stuff.x 5
int
(int number)
Returns an integer form of the number. Note that this rounds to the nearest integer.
Example:
(int 12.8) => 13
integer->char
(integer->char ...)
Converts an integer to a character.
Example:
(integer->char 97) => #\a
iter?
(iter? item)
Returns #t if item is an iterator.
Example:
(iter? 3) => #f
lambda
(lambda ...)
Create a function. Allows "varargs" (variable number of arguments).
Examples:
((lambda x x) 1 2 3 4 5) => (1 2 3 4 5) ((lambda (x . y) (list x y)) 1 2 3 4 5) => (1 (2 3 4 5)) ((lambda (a b . z) (list a b z)) 1 2 3 4 5) => (1 2 (3 4 5)) ((lambda (a b . z) (list a b z)) 1 2 3) => (1 2 (3)) ((lambda (a b . z) (list a b z)) 1 2) => (1 2 ()) (try ((lambda (a b . z) (list a b z)) 1) (catch e "not enough arguments given")) => "not enough arguments given"
length
(length ls)
Returns the length of a list, ls. Note that this does not recurse into the list, but only counts the toplevel items, even if those are lists.
Example:
(length (quote (1 2 3))) => 3
let
(let ...)
One method of creating local variables.
Example:
(let ((x 1)) x) => 1
let*
(let* ...)
Allows the creation of variables whose value depends on previous variables.
Example:
(let* ((x 1) (y (+ x 1))) y) => 2
In this example, note that y depends on x.
letrec
(letrec ...)
Allows creating recursively defined values.
Example:
(letrec ((loop (lambda (n) (if (= n 0) (quote ok) (loop (- n 1)))))) (loop 10)) => ok
list
(list ...)
Create a new list by consing the items onto each other, ending in '(), creating a proper list.
Example:
(list 1 2) => (1 2)
list->string
(list->string ...)
Converts a list of characters into a string.
Example:
(list->string (quote (#\1 #\2 #\3))) => "123"
list->vector
(list->vector ...)
Converts a list into a vector. A vector has a fixed length, and can index directly into a position.
Example:
(list->vector (quote (1 2 3))) => [1, 2, 3]
In Python, Scheme uses the Python List as a vector. In C#, Scheme uses an array to represent a vector.
list-ref
(list-ref ...)
A convenient method to get an item from a list. This is no more efficient than a series of car/cdrs.
Example:
(list-ref (quote (1 2 3)) 1) => 2
list?
(list? ...)
Returns #t if an item is a proper list, or the empty list '().
Example:
(list? (quote (1 2 3))) => #t
load
Load Scheme file(s).
(load "file1.ss" "file2.ss" ...)
load-as
Load a Scheme file into a namespace.
(load-as "file.ss" 'namespace)
make-set
(make-set ...)
Creates a list of unique items.
Example:
(sort < (make-set (quote (1 2 3 1 2)))) => (1 2 3)
make-vector
(make-vector size)
Creates a vector of a specific size.
Example:
(make-vector 3) => [0, 0, 0]
map
(map f sequence)
Applies a function to all elements in a sequence, and returns the resulting values in a list.
Example:
(map (lambda (n) (+ n 1)) (range 5)) => (1 2 3 4 5)
member
(member item ls)
Check to see if item is in a list. If it is in the list, return the rest of the list. If not, return #f.
Example:
(member "b" (quote ("a" "b" "c"))) => ("b" "c")
memq
(memq item ls)
Check if item is in ls (like member) but using the eq? operator.
Example:
(memq (quote b) (quote (a b c))) => (b c)
memv
(memv ...)
Check if item is in ls (like member) but using the eqv? operator.
Example:
(memv 2 (quote (1.0 2.0 3.0))) => (2.0 3.0)
not
(not item)
Flips the boolean value of item.
Example:
(not #f) => #t
null?
(null? item)
Checks to see if an item is eq? to the empty list, '().
Example:
(null? (quote ())) => #t
number->string
(number->string ...)
Convert a number into a string.
Example:
(number->string 23) => "23"
number?
(number? item)
Is item a number?
Example:
(number? 23) => #t
odd?
(odd? number)
Is number odd?
Example:
(odd? 45) => #t
or
(or ...)
Or will return #t as soon as it encounters a non-#f in the items (eg, it short circuits without further evaluation of arguments). If no non-#f is encountered, it returns #f.
Example:
(or #t (/ 1 0)) => #t
pair?
(pair? item)
A pair is a cons cell. This function tests to see if item is a cons cell.
Example:
(pair? (quote ())) => #f (pair? (cons 1 2)) => #t
Note that '() is not a cons cell; it is the empty list.
parse
(parse ...)
Parse takes an s-expression and returns a parsed version.
Example:
(parse (quote (+ 1 2))) => (app-aexp (lexical-address-aexp 0 1 + none) ((lit-aexp 1 none) (lit-aexp 2 none)) none)
parse-string
(parse-string ...)
Parse-string takes a string representing an s-expression and returns the parsed version, like parse.
Example:
(parse-string "(- 7 8)") => (app-aexp (lexical-address-aexp 0 2 - (stdin 1 2 2 1 2 2)) ((lit-aexp 7 (stdin 1 4 4 1 4 4)) (lit-aexp 8 (stdin 1 6 6 1 6 6))) (stdin 1 1 1 1 7 7))
The additional numbers give details about the line, column, and starting and ending positions in the file from which this code was read (or "stdin" if it did not come from a file).
procedure?
(procedure? item)
Returns #t if item is a procedure.
Example:
(procedure? procedure?) => #t
quote, quasiquote, and unquote
(quote item) 'item
A quoted item (represented by a single-quote) is a symbol (if item is an atom), or is a list of quoted items if item is a list. Literals that are quoted are just the literals.
Examples:
'symbol => symbol '(a list of items) => (a list of items)
(quasiquote item) `item
Quasiquote (represented by a back-quote) is like a regular quoted item; however, with a quasiquote, unquoted (represented with a comma) items are evaluated.
Examples:
(quasiquote (list (unquote (+ 1 2)) 4)) => (list 3 4) `(list ,(+ 1 2) 4) => (list 3 4)
Notice that the (+ 1 2) is evaluated.
The form ,@expression (unquote-splicing) evaluates expression, which must be a list, and splices its elements into the enclosing list without the containing parentheses.

Example:

`(1 ,@(list 2 3) 4) => (1 2 3 4)
quotient
(quotient numerator denominator)
Returns the number of times the denominator will go into the numerator.
Example:
(quotient 1 4) => 0
rac
(rac ls)
Opposite of "car"... returns the last item in a list.
Example:
(rac (quote (1 2 3))) => 3
range
(range stop) (range start stop) (range start stop step)
Returns a range of integers. The result never includes the stop value.
Example:
(range 10) => (0 1 2 3 4 5 6 7 8 9)
rational
(rational numerator denominator)
Returns the same as what / (divide) would give.
Example:
(rational 3 4) => 3/4
rdc
(rdc ls)
Opposite of "cdr"... returns everything in a list but the last item.
Example:
(rdc (quote (1 2 3))) => (1 2)
read-string
(read-string ...)
Reads a string and returns its raw s-expression representation, annotated with source position information.
Example:
(read-string (quote (1 2 3))) => ((pair) ((atom) 1 (stdin 1 2 2 1 2 2)) ((pair) ((atom) 2 (stdin 1 4 4 1 4 4)) ((pair) ((atom) 3 (stdin 1 6 6 1 6 6)) ((atom) () none) none) none) (stdin 1 1 1 1 7 7))
remainder
(remainder a b)
The remainder, after b is divided into a.
Example:
(remainder 1 4) => 1
require
(require expression)
Requires that an expression be true. If it is not, the computation fails, handing control to the fail continuation.
Example:
(require #t) => ok
reverse
(reverse ls)
Returns a list in reversed order.
Example:
(reverse (quote (1 2 3))) => (3 2 1)
round
(round ...)
Rounds a number to the nearest integer.
Example:
(round 45.5) => 46 (round 45.4) => 45
set!
(set! variable value)
set! is used to assign values. Note that you must use define to create a global variable before you can set it.
Examples:
(define x 'undefined) (set! x 4)
set-car!
(set-car! ls item)
Changes the car of a list to be a new item.
Example:
(let ((x (quote (1 2 3)))) (set-car! x 0) x) => (0 2 3)
set-cdr!
(set-cdr! ls item)
Changes the cdr of a list to be a new item.
Example:
(let ((x (quote (1 2 3)))) (set-cdr! x (quote (3 4))) x) => (1 3 4)
snoc
(snoc item ls)
Opposite of "cons"... connects an item onto the end of a list.
Example:
(snoc 0 (quote (1 2 3))) => (1 2 3 0)
sort
(sort comparison-function ls)
Takes a comparison-function (such as < or >) and sorts a list using that function. < gives a list in ascending order.
Example:
(sort < (quote (3 7 1 2))) => (1 2 3 7)
sqrt
(sqrt number)
Returns the square root of a number.
Example:
(sqrt 3) => 1.7320508075688772
string
(string c ...)
Takes the given characters and turns them into a string.
Example:
(string #\1 #\2) => "12"
string->list
(string->list ...)
Takes a string, and returns a list of characters.
Example:
(string->list "hello world") => (#\h #\e #\l #\l #\o #\ #\w #\o #\r #\l #\d)
string->number
(string->number ...)
Takes a string and returns a number.
Example:
(string->number "12.1") => 12.1
string->symbol
(string->symbol ...)
Takes a string, and returns a symbol.
Example:
(string->symbol "hello") => hello
string-append
(string-append ...)
Takes the given strings, appends them together, and returns the resulting string.
Example:
(string-append "hell" "o") => "hello"
string-length
(string-length s)
Returns the length of a string.
Example:
(string-length "what") => 4
string-ref
(string-ref s position)
Returns the character in the given position in string s.
Example:
(string-ref "what" 2) => #\a
string-split
(string-split s c)
Splits a string s into strings given a delimiting character c.
Example:
(string-split "hello.world" #\.) => ("hello" "world")
string<?
(string<? a b)
Is string a less than string b in lexicographic order?
Example:
(string<? "apple" "banana") => #t
string=?
(string=? a b)
Is string a equal to string b?
Example:
(string=? "a" "b") => #f
string?
(string? item)
Is item a string?
Example:
(string? "hello") => #t
substring
(substring s start stop)
Given a string s, a start position, and stop position, return the substring.
Example:
(substring "hello" 1 3) => "el"
symbol
(symbol s)
Given a string s, return a symbol.
Example:
(symbol "hello") => hello
symbol->string
(symbol->string sym)
Convert a symbol sym into a string.
Example:
(symbol->string (quote hello)) => "hello"
symbol?
(symbol? item)
Is item a symbol?
Example:
(symbol? (quote hello)) => #t
trace-lambda
(trace-lambda name (args...) body)
Similar to regular lambda, but displays tracing information at runtime.
Example:
==> ((trace-lambda test (n) n) 4) call: (test 4) return: 4 4
typeof
(typeof item)
Get the internal type of item. In Python, this will return Python types. In C#, this will return CLR types.
==> (typeof 1) System.Int32
==> (typeof 238762372632732736) Microsoft.Scripting.Math.BigInteger
==> (typeof 1/5) Rational
Examples:
(typeof 23) => <type 'int'>
unparse
(unparse ...)
Unparse a parsed expression.
Example:
(unparse (parse (quote (+ 1 2)))) => (+ 1 2)
use-lexical-address
(use-lexical-address) (use-lexical-address bool)
Get or set the use-lexical-address setting.
Example:
(use-lexical-address) => #t (use-lexical-address #f) (use-lexical-address) => #f
use-stack-trace
(use-stack-trace) (use-stack-trace bool)
Set or get the use-stack-trace setting.
Example:
(use-stack-trace) => #t
use-tracing
(use-tracing) (use-tracing ...)
Get or set the use-tracing setting.
Example:
(use-tracing) => #f
import
(import ...)
Use a native library. In Python, you can use Python libraries; in Calico, you can use Calico libraries.
Example:
(try (import "math") (catch e (import "Graphics"))) => ()
You can use "import" to fill in missing functions from Scheme. For example, Calico Scheme doesn't currently have a sin function. You can add that:
In Scheme in Python:
(import "math") ## Python library (math.sin 3.14) => 0.0015926529164868282
In Scheme in Calico:
(import "Math") ## Calico library (Math.Sin 3.14) => 0.00159265291648683
vector
(vector ...)
Create a vector with the elements given.
Example:
In Scheme-in-Python:
(vector 1 2 3) => [1, 2, 3]
In Scheme-in-Calico:
(vector 1 2 3) => #3(1 2 3)
In Scheme-in-Python, vectors are represented by Python's list. In Scheme-in-Calico, vectors are represented by CLR's object arrays (object []).
vector->list
(vector->list ...)
Convert the vector into a list.
Example:
(vector->list (vector 1 2 3)) => (1 2 3)
vector-ref
(vector-ref ...)
Get an element from a vector. This is an O(1) operation.
Example:
(vector-ref (vector 1 2 3) 2) => 3
vector-set!
(vector-set! vector position item)
Changes the element of a vector at the given position in place.
Example:
(let ((v (vector 1 2 3))) (vector-set! v 2 (quote a)) v) => [1, 2, a]
vector?
(vector? item)
Is item a vector?
Example:
(vector? (vector)) => #t
(void)
(void)
Create the void object.
Example:
(void) => <void>
zero?
(zero? number)
Is number equal to zero?
Example:
(zero? 0.0) => #t
Advanced topics
try/catch/finally
(try body (catch var exp ...)) (try body (finally exp ...)) (try body (catch var exp ...) (finally exp ...))
Evaluates body; if an exception is raised (see raise below), the catch clause binds it to var and evaluates its expressions. The finally clause is always evaluated. The div function used below is defined under raise.
Examples:
(try (let loop ((n 5)) (if (= n 0) (raise (quote blastoff!))) (loop (- n 1))) (catch e e)) => blastoff!
(try 3) => 3
(try 3 (finally 'yes 4)) => 3
(try (raise (quote yes)) (catch e e)) => yes
(try (try (raise (quote yes))) (catch e e)) => yes
(try (try (begin (quote one) (raise (quote oops)) (quote two))) (catch e e)) => oops
(* 10 (try (begin (quote one) (raise (quote oops)) (quote two)) (catch ex 3 4))) => 40
(* 10 (try (begin (quote one) (quote two) 5) (catch ex 3 4))) => 50
(* 10 (try (begin (quote one) (raise (quote oops)) 5) (catch ex (list (quote ex:) ex) 4))) => 40
(try (* 10 (try (begin (quote one) (raise (quote oops)) 5) (catch ex (list (quote ex:) ex) (raise ex) 4))) (catch e e)) => oops
(try (* 10 (try (begin (quote one) (raise (quote oops)) 5) (catch ex (list (quote ex:) ex) (raise ex) 4) (finally (quote two) 7))) (catch e e)) => oops
(try (* 10 (try (begin (quote one) (raise (quote oops)) 5) (catch ex (list (quote ex:) ex) (raise (quote bar)) 4))) (catch x (quote hello) 77)) => 77
(try 3 (finally (quote hi) 4)) => 3
(try (div 10 0) (catch e e)) => "division by zero"
(try (let ((x (try (div 10 0)))) x) (catch e e)) => "division by zero"
(let ((x (try (div 10 2) (catch e -1)))) x) => 5
(let ((x (try (div 10 0) (catch e -1)))) x) => -1
(let ((x (try (div 10 2) (catch e -1) (finally (quote closing-files) 42)))) x) => 5
(let ((x (try (div 10 0) (catch e -1) (finally (quote closing-files) 42)))) x) => -1
(let ((x (try (div 10 2) (finally (quote closing-files) 42)))) x) => 5
(try (let ((x (try (div 10 0) (catch e -1 (raise (quote foo))) (finally (quote closing-files) 42)))) x) (catch e e)) => foo
(try (let ((x (try (div 10 0) (catch e -1 (raise (quote foo))) (finally (quote closing-files) (raise (quote ack)) 42)))) x) (catch e e)) => ack
(try (let ((x (try (div 10 0) (catch e -1 (raise (quote foo))) (finally (quote closing-files) (raise (quote ack)) 42)))) x) (catch e (if (equal? e (quote ack)) 99 (raise (quote doug)))) (finally (quote closing-outer-files))) => 99
(try (try (let ((x (try (div 10 0) (catch e -1 (raise (quote foo))) (finally (quote closing-files) (raise (quote ack)) 42)))) x) (catch e (if (equal? e (quote foo)) 99 (raise (quote doug)))) (finally (quote closing-outer-files))) (catch e e)) => doug
raise
(raise exception)
Raises an exception, handing control to the nearest enclosing try/catch.
Example:
(define div (lambda (x y) (if (= y 0) (raise "division by zero") (/ x y)))) => <void>
record-case
(record-case ...)
Record-case takes a "structured list", matches on the car, and returns the associated expression.
Example:
(let ((r 5)) (record-case (cons 'banana (cons 'orange (cons (* 2 3) '()))) (apple (a b c) (list c b a r)) ((cherry banana) (a . b) (list b a r)) ((orange) () (quote no)) (else 2 3 4))) => ((6) orange 5)
define-datatype
(define-datatype ...)
Define-datatype is a useful set of extensions for defining data, and associated functions.
Example:
The following defines a recursive datatype, called lc-exp, that can take three forms: var-exp, lambda-exp, and app-exp.
(define-datatype lc-exp lc-exp? (var-exp (var symbol?)) (lambda-exp (bound-var symbol?) (body lc-exp?)) (app-exp (rator lc-exp?) (rand lc-exp?)))
When you define a datatype, it also defines related support functions. For example, the above define-datatype also defines:
lc-exp? var-exp lambda-exp app-exp
You can use those to create data in the prescribed format:
(var-exp (quote a)) => () (lambda-exp (quote a) (var-exp (quote a))) => () (app-exp (lambda-exp (quote a) (var-exp (quote a))) (var-exp (quote a))) => ()
You can use the datatypes in any number of ways, including the "cases" special form. For example, here is an un-parse function that takes the datatypes defined above and turns them back into their original form:
(define un-parse (lambda (exp) (cases lc-exp exp (var-exp (var) var) (lambda-exp (bound-var body) (list bound-var body)) (app-exp (rator rand) (list rator rand))))) (un-parse (var-exp (quote a))) => a (un-parse (lambda-exp (quote a) (var-exp (quote a)))) => (a (var-exp a)) (un-parse (app-exp (lambda-exp (quote a) (var-exp (quote a))) (var-exp (quote a)))) => ((lambda-exp a (var-exp a)) (var-exp a))
call/cc
(call/cc procedure)
Calls procedure with the current continuation. The current continuation is "everything that is left to be done" at the location where the call/cc is located.
Examples:
(* 10 (call/cc (lambda (k) 4))) => 40 (* 10 (call/cc (lambda (k) (+ 1 (k 4))))) => 40 (* 10 (call/cc (lambda (k) (+ 1 (call/cc (lambda (j) (+ 2 (j (k 5))))))))) => 50 (* 10 (call/cc (lambda (k) (+ 1 (call/cc (lambda (j) (+ 2 (k (j 5))))))))) => 60
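As a sketch of one practical use, call/cc can provide an early exit from an iteration; find-first below is a hypothetical helper, not a built-in:

(define find-first (lambda (pred ls) (call/cc (lambda (return) (for-each (lambda (x) (if (pred x) (return x))) ls) #f)))) (find-first even? (quote (1 3 4 5))) => 4

Calling (return x) abandons the rest of the for-each and makes x the value of the whole call/cc expression.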
choose/fail/cut
Calico Scheme also contains a non-deterministic search with back-tracking. To use this, you select choice-points using the keyword "choose" with arguments, set requirements using the keyword "require", and fail using "(choose)". For example, to automatically find two numbers that sum to seven:
(define sum-to-seven (lambda () (let ((num1 (choose 0 1 2 3 4 5 6 7 8 9)) (num2 (choose 0 1 2 3 4 5 6 7 8 9))) (require (= (+ num1 num2) 7)) (printf "The numbers are ~s ~s\n" num1 num2))))
Then, you can let Scheme do the searching for you:
scheme>>> (sum-to-seven) The numbers are 0 7 Done
If you don't like that result, you can force Scheme back to any choice-points to make a different choice:
scheme>>> (choose) The numbers are 1 6 Done
This can continue until there are no more choices left.
scheme>>> (choose) The numbers are 7 0 Done

See menu -> File -> Examples -> Scheme -> choose-examples.ss for more examples.
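As another sketch of choose/require, searching for a Pythagorean triple; pythagorean is a hypothetical example function, not a built-in:

(define pythagorean (lambda () (let ((a (choose 1 2 3 4 5)) (b (choose 1 2 3 4 5)) (c (choose 1 2 3 4 5))) (require (= (+ (* a a) (* b b)) (* c c))) (list a b c)))) (pythagorean) => (3 4 5)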
(define distinct? (lambda (nums) (or (null? nums) (null? (cdr nums)) (and (not (member (car nums) (cdr nums))) (distinct? (cdr nums))))))
Cut is used to cut off the possibility of back-tracking.
Here is an example that solves a constraint-satisfaction problem:
;; Baker, Cooper, Fletcher, Miller, and Smith live on different floors
;; of a five-story building. Baker does not live on the top floor.
;; Cooper does not live on the bottom floor. Fletcher does not live on
;; either the top or the bottom floor. Miller lives on a higher floor
;; than does Cooper. Smith does not live on a floor adjacent to
;; Fletcher's. Fletcher does not live on a floor adjacent to Cooper's.
;; Where does everyone live?
You can use choose and require to solve the problem:
(define floors2 (lambda () (let ((baker (choose 1 2 3 4 5))) (require (not (= baker 5))) (let ((fletcher (choose 1 2 3 4 5))) (require (not (= fletcher 5))) (require (not (= fletcher 1))) (let ((cooper (choose 1 2 3 4 5))) (require (not (= cooper 1))) (require (not (= (abs (- fletcher cooper)) 1))) (let ((smith (choose 1 2 3 4 5))) (require (not (= (abs (- smith fletcher)) 1))) (let ((miller (choose 1 2 3 4 5))) (require (> miller cooper)) (require (distinct? (list baker cooper fletcher miller smith))) (list (list (quote baker:) baker) (list (quote cooper:) cooper) (list (quote fletcher:) fletcher) (list (quote miller:) miller) (list (quote smith:) smith)))))))))
You could wait until the end to list the requires. However, moving them up above later (choose ...) statements creates a more efficient search.
(floors2) => ((baker: 3) (cooper: 2) (fletcher: 4) (miller: 5) (smith: 1))
define-syntax
define-syntax is used to change the semantics of Scheme in a manner not possible with regular functions. For example, imagine that you wanted to time a particular function call. To time a function, you can do:
(let ((start (current-time))) (fact 5) (- (current-time) start))
If you tried to define a function time such that you could call it like:
(time (fact 5))
then, unfortunately, you would evaluate (fact 5) before you could do anything in the function time. You could call it like:
(time fact 5)
but that looks a bit strange. Perhaps a more natural way would be to just change the semantics of Scheme to allow (time (fact 5)). Scheme makes that easy with define-syntax:
(define-syntax time [(time ?exp) (let ((start (current-time))) ?exp (- (current-time) start))])
Now, you can call it like:
(time (fact 5))
and you get the correct answer.
define-syntax takes a list of two items: a template, and a response. If the template matches, then you evaluate the response. In this example, (time ?exp) matches, so the system will record the start time, evaluate the ?exp, and then return the time minus the start time.
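As a further sketch of the same pattern matcher, here is a hypothetical swap! macro (not a built-in) that exchanges the values of two variables:

(define-syntax swap! [(swap! ?a ?b) (let ((tmp ?a)) (set! ?a ?b) (set! ?b tmp))]) (define x 1) (define y 2) (swap! x y) (list x y) => (2 1)

The template (swap! ?a ?b) matches a call such as (swap! x y), and the response is evaluated with ?a and ?b replaced by the matched expressions.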
Calico Scheme uses this simple, but powerful pattern matcher to implement define-case. Here is a more complex example: for.
(define-syntax for
  [(for ?exp times do . ?bodies) (for-repeat ?exp (lambda () . ?bodies))]
  [(for ?var in ?exp do . ?bodies) (for-iterate1 ?exp (lambda (?var) . ?bodies))]
  [(for ?var at (?i) in ?exp do . ?bodies) (for-iterate2 0 ?exp (lambda (?var ?i) . ?bodies))]
  [(for ?var at (?i ?j . ?rest) in ?exp do . ?bodies) (for ?var at (?i) in ?exp do (for ?var at (?j . ?rest) in ?var do . ?bodies))])
In this example, define-syntax creates a for function with 4 forms:
(for 4 times do (function ...)) (for x in '(1 2 3) do (function ...)) (for x at (0) in '(1 2 3) do (function ...)) (for x at (0 1 2) in (range 10) do (function ...))
(define-syntax collect [(collect ?exp for ?var in ?list) (filter-map (lambda (?var) ?exp) (lambda (?var) #t) ?list)] [(collect ?exp for ?var in ?list if ?condition) (filter-map (lambda (?var) ?exp) (lambda (?var) ?condition) ?list)]) (collect (* n n) for n in (range 10)) => (0 1 4 9 16 25 36 49 64 81) (collect (* n n) for n in (range 5 20 3)) => (25 64 121 196 289) (collect (* n n) for n in (range 10) if (> n 5)) => (36 49 64 81)
More define-syntax examples
A for form:
(define for-repeat (lambda (n f) (if (< n 1) 'done (begin (f) (for-repeat (- n 1) f)))))
(define for-iterate1 (lambda (values f) (if (null? values) 'done (begin (f (car values)) (for-iterate1 (cdr values) f)))))
(define for-iterate2 (lambda (i values f) (if (null? values) 'done (begin (f (car values) i) (for-iterate2 (+ i 1) (cdr values) f)))))
(define matrix2d '((10 20) (30 40) (50 60) (70 80)))
(define matrix3d '(((10 20 30) (40 50 60)) ((70 80 90) (100 110 120)) ((130 140 150) (160 170 180)) ((190 200 210) (220 230 240))))
Using for:
(begin (define hello 0) (for 5 times do (set! hello (+ hello 1))) hello) => 5 (for sym in (quote (a b c d)) do (define x 1) (set! x sym) x) => done (for n in (range 10 20 2) do n) => done (for n at (i j) in matrix2d do (list n (quote coords:) i j)) => done (for n at (i j k) in matrix3d do (list n (quote coords:) i j k)) => done
Defining streams: scons, scar, and scdr and more:
(define-syntax scons [(scons ?x ?y) (cons ?x (lambda () ?y))])
(define scar car)
(define scdr (lambda (s) (let ((result ((cdr s)))) (set-cdr! s (lambda () result)) result)))
(define first (lambda (n s) (if (= n 0) '() (cons (scar s) (first (- n 1) (scdr s))))))
(define nth (lambda (n s) (if (= n 0) (scar s) (nth (- n 1) (scdr s)))))
(define smap (lambda (f s) (scons (f (scar s)) (smap f (scdr s)))))
(define ones (scons 1 ones))
(define nats (scons 0 (combine nats + ones)))
(define combine (lambda (s1 op s2) (scons (op (scar s1) (scar s2)) (combine (scdr s1) op (scdr s2)))))
(define fibs (scons 1 (scons 1 (combine fibs + (scdr fibs)))))
(define facts (scons 1 (combine facts * (scdr nats))))
(define ! (lambda (n) (nth n facts)))
Testing !, nth, and first:
(! 5) => 120 (nth 10 facts) => 3628800 (nth 20 fibs) => 10946 (first 30 fibs) => (1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040)
Calico Libraries
You can use any of the libraries in the hosting environment (Python or Mono/.NET):
scheme> (import "DLLName") scheme> (DLLName.Class arg1 arg2)
For example, you can use Scheme to do art or control robots:
(import "Myro") (Myro.init "sim") (Myro.joystick)
Interop
There is a special object in the environment, calico. It has access to a variety of Calico functions. See Calico: calico object for more details.
Use the define! to put a variable in the global Calico namespace.
scheme> (define! x 8) python> x 8
Wrap a function for use by other Calico" v)) (range 3)) 1 2 3
Language Interop
Converting from Scheme to Python in Calico:
scheme> (define pylist (calico.Evaluate "lambda *args: list(args)" "python")) Ok scheme> (define pytuple (calico.Evaluate "lambda *args: args" "python")) Ok scheme> (pylist 1 2 3) [1, 2, 3] scheme> (pytuple 1 2 3) (1, 2, 3)
Converting from Python to Scheme
scheme> (define pylist2list (lambda (args) (map (lambda (i) i) args))) Ok scheme> (pylist2list (pylist 1 2 3)) (1 2 3) scheme> (pylist2list (pytuple 1 2 3)) (1 2 3)
Use in Python
You can get the single file necessary to run Scheme in Python here:
Download and save in your working folder.
You can just use calicoscheme.py as a script in Python, IronPython, or Jython:
$ python calicoscheme.py ## use ipython to have file-system tab work in quotes Calico Scheme, version 3.0.0 ---------------------------- Use (exit) to exit ==> (+ 1 1) 2
This environment has access to Python's built-ins through calicoscheme.ENVIRONMENT:
==> (sum '(1 2 3))
sum is found in the Python environment.
You can also import and use Python libraries directly:
==> (import "math" "numpy") (math numpy) ==> (math.sin 3.14) 0.0015926529164868282 ==> (numpy.array 10) array(100) ==> (exit) # or control+d
You can also use calicoscheme inside Python:
$ python >>> import calicoscheme >>> calicoscheme.ENVIRONMENT = globals() # set the environment to the locals here >>> calicoscheme.start_rm() ==> (+ 1 1) 2 ==> (define! f (lambda (n) (+ n 1))) ### define! puts it in the external env ==> (exit) goodbye
And back in Python you can call the define! function using Scheme's infrastructure:
>>> f(100) 101
You can also call into Scheme like so:
>>> calicoscheme.execute_string_rm("(define x 23)") <void> >>> calicoscheme.execute_string_rm("x") 23
You can turn debug on to help track down issues:
==> (set! DEBUG #t)
Current limitations
- known missing items: bitbucket.org/ipre/calico/issues
- define-syntax is a simplified version
- all exceptions, procedures, etc, are represented by lists
- no pretty-print yet
- display/write in general is rough... not much protection for safe and proper printing
- no file (read/write) functions yet
- no package management tools: can't use cross-Scheme packages and facilities (eg, module, requires, provides, etc)
- if (use-stack-trace) is true, then it will keep track of stack; if you have an infinite loop, you might want to turn that off (use-stack-trace #f)
Patches accepted! Python-specific code should go in Scheme.py, general Scheme code should go in interpreter-cps.ss (and related files).
If you have questions or comments, please see our mailing list:
Under The Hood
Calico Scheme is written in Scheme, in continuation-passing style (CPS), transformed into simpler forms, and then translated into C# and Python. Language-native components are written in Scheme.py (for Python) and Scheme.cs (for C#).
You can get the sources in Calico/language/Scheme/Scheme here:
Running "make calicoscheme.py" in that directory will build the file. You will need a Scheme to begin with (we use Petite Scheme, a freely available Scheme).
The process generally works like this:
ds-transformer.ss rm-transformer.ss translate_rm.py | | | *-cps.ss > calicoscheme-cps.ss > calicoscheme-ds.ss > calicoscheme-rm.ss > calicoscheme.py
This is a general process: it can take most any Scheme cps code and transform it into Python or C#.
As an example, consider the function fibonacci. Here it is, written in continuation-passing style:
(load "transformer-macros.ss") (define* fib-cps (lambda (n k) (cond ((= n 1) (k 1)) ((= n 2) (k 1)) (else (fib-cps (- n 1) (lambda-cont (v1) (fib-cps (- n 2) (lambda-cont (v2) (k (+ v1 v2)))))))))) (define fib (lambda (n) (fib-cps n TOP-k))) (define TOP-k (lambda-cont (v) (halt* v)))
That is, whenever you have a result, pass it to a function that takes the result, and does further processing to it. The process is started with a TOP-k continuation that signals to halt and return the result.
You can run this code in Petite Scheme:
> (fib 10) 55
This code is transformed into a data structure representation:
$ petite > (load "ds-transformer.ss") > (transform-ds "fib-cps.ss" "fib-ds.ss")
That creates the following:
(load "transformer-macros.ss") ;;---------------------------------------------------------------------- ;; continuation datatype (define make-cont (lambda args (cons 'continuation args))) (define* apply-cont (lambda (k value) (apply+ (cadr k) value (cddr k)))) (define+ <cont-1> (lambda (value fields) (let ((v1 (car fields)) (k (cadr fields))) (apply-cont k (+ v1 value))))) (define+ <cont-2> (lambda (value fields) (let ((n (car fields)) (k (cadr fields))) (fib-cps (- n 2) (make-cont <cont-1> value k))))) (define+ <cont-3> (lambda (value fields) (let () (halt* value)))) ;;---------------------------------------------------------------------- ;; main program (define* fib-cps (lambda (n k) (cond ((= n 1) (apply-cont k 1)) ((= n 2) (apply-cont k 1)) (else (fib-cps (- n 1) (make-cont <cont-2> n k)))))) (define fib (lambda (n) (fib-cps n TOP-k))) (define TOP-k (make-cont <cont-3>))
This breaks out the continuations into stand-alone functions.
You can run this in Petite Scheme:
> (fib 10) 55
Next, we turn the data structure representation into a register machine:
$ petite > (load "rm-transformer.ss") > (transform-rm "fib-ds.ss" "fib-rm.ss")
That creates the following:
(load "transformer-macros.ss") ;;---------------------------------------------------------------------- ;; global registers (define pc 'undefined) (define fields_reg 'undefined) (define final_reg 'undefined) (define k_reg 'undefined) (define n_reg 'undefined) (define value_reg 'undefined) (define make-cont (lambda args (return* (cons 'continuation args)))) (define* apply-cont (lambda () (return* (apply (cadr k_reg) (cddr k_reg))))) (define <cont-1> (lambda (v1 k) (set! value_reg (+ v1 value_reg)) (set! k_reg k) (set! pc apply-cont))) (define <cont-2> (lambda (n k) (set! k_reg (make-cont <cont-1> value_reg k)) (set! n_reg (- n 2)) (set! pc fib-cps))) (define <cont-3> (lambda () (set! final_reg value_reg) (set! pc #f))) (define* fib-cps (lambda () (if (= n_reg 1) (begin (set! value_reg 1) (set! pc apply-cont)) (if (= n_reg 2) (begin (set! value_reg 1) (set! pc apply-cont)) (begin (set! k_reg (make-cont <cont-2> n_reg k_reg)) (set! n_reg (- n_reg 1)) (set! pc fib-cps)))))) (define fib (lambda (n) (set! k_reg TOP-k) (set! n_reg n) (set! pc fib-cps))) (define TOP-k (make-cont <cont-3>)) ;; the trampoline (define trampoline (lambda () (if pc (begin (pc) (trampoline)) final_reg))) (define run (lambda (setup . args) (apply setup args) (return* (trampoline))))
You can then run this in Petite Scheme:
(run fib 10) 55
This requires a special form to run the code, as the new fib function doesn't return anything.
Finally, the register machine program can be translated into C# or Python:
python translate_rm.py "fib-rm.ss" "fib.py"
The ending extension of the second filename indicates whether to generate Python or C#.
Here is the bottom of the Python version:
... symbol_emptylist = Symbol("()") symbol_undefined = Symbol("undefined") symbol_continuation = Symbol("continuation") pc = symbol_undefined fields_reg = symbol_undefined final_reg = symbol_undefined k_reg = symbol_undefined n_reg = symbol_undefined value_reg = symbol_undefined def apply_cont(): return Apply(cadr(k_reg), cddr(k_reg)) def cont_1(v1, k): globals()['value_reg'] = plus(v1, value_reg) globals()['k_reg'] = k globals()['pc'] = apply_cont def cont_2(n, k): globals()['k_reg'] = make_cont(cont_1, value_reg, k) globals()['n_reg'] = minus(n, 2) globals()['pc'] = fib_cps def cont_3(): globals()['final_reg'] = value_reg globals()['pc'] = False def fib_cps(): if Equal(n_reg, 1): globals()['value_reg'] = 1 globals()['pc'] = apply_cont else: if Equal(n_reg, 2): globals()['value_reg'] = 1 globals()['pc'] = apply_cont else: globals()['k_reg'] = make_cont(cont_2, n_reg, k_reg) globals()['n_reg'] = minus(n_reg, 1) globals()['pc'] = fib_cps def fib(n): globals()['k_reg'] = REP_k globals()['n_reg'] = n globals()['pc'] = fib_cps REP_k = make_cont(cont_3) def run(setup, *args): args = List(*args) Apply(setup, args) return trampoline()
This can be run in Python:
>>> run(fib, 10) 55
For further information on the overall strategy and philosophy, please see Essentials of Programming Languages.
References
- Calico
- Essentials of Programming Languages by Dan Friedman and Mitch Wand.
- CalicoDevelopment - plans and details for Calico development
- - The Scheme Programming Language, 4th edition
There is a compatible version of sllgen.ss provided in Calico/examples/scheme/sllgen.ss | http://wiki.roboteducation.org/index.php?title=Calico_Scheme&diff=next&oldid=15722 | CC-MAIN-2020-34 | refinedweb | 8,685 | 66.64 |
in reply to Smart match in p5
#]
=head1 NOTES
C<smatch> aliases $_ to $val and evaluates its subsequent
arguments in that context.
Each subsequent argument should be an arrayref, whose last element
is a coderef. The coderef will be executed if any other element
yields a true value after being evaluated according to this table:
Input ($in) is Example Operation
=============== ============= =================================
Regex qr/foo|bar/ /$in/
Number 3.14 $_ == $in
coderef sub { /foo/ } $in->($_)
range specifier [100,1000] $in->[0] <= $_ and $_ <= $in->[1]
(arrayref)
hashref \%hash $in->{$_}
any other scalar 'a string' $_ eq $in
=cut
use strict;
use warnings;
package Smatch;
use Regexp::Common;
sub in_range {
my ($n, $lo, $hi) = @_;
if ($n =~ /^RE{num}{real}$/
and $lo =~ /^$RE{num}{real}$/ and $hi =~ /^$RE{num}{real}$/) {
$lo <= $hi or warnings::warnif(misc => 'Invalid range $lo .. $
+hi');
return ($lo <= $n and $n <= $hi);
} else {
$lo le $hi or warnings::warnif(misc => 'Invalid range $lo .. $
+hi');
return ($lo le $n and $n le $hi);
}
}
sub smatch {
local *_ = \$_[0];
CASES:
for my $caselist (@_[1..$#_] ) {
my $coderef = pop @$caselist;
if (@$caselist) {
for my $case (@$caselist) {
if (my $reftype = ref $case) {
$coderef->(), last CASES if
($reftype eq 'Regexp' and m{$case}) or
($reftype eq 'ARRAY' and in_range($_, @$case)) o
+r
($reftype eq 'HASH' and $case->{$_}) or
($reftype eq 'CODE' and $case->($_)) or
$case eq $_
;
}
else {
$coderef->(), last CASES if
($case =~ /^$RE{num}{real}$/ and /^$RE{num}{real
+}/
and $case == $_) or
$case eq $_
;
}
}
}
else {
$coderef->();
}
}
}
package main;
my @vals = (1, 10, 'foo1', 'bar', 'leftover');
for my $val (@vals) {
no warnings 'exiting';
Smatch::smatch $val =>
[ 1, sub { print "$_ is 1\n" } ],
[ [10, 20], sub { print "$_ hit me\n"; next } ],
[ 'foo1', sub { print "$_ matched foo1\n"; next} ],
[ { foo1=>1 }, sub { print "$_ found in hash\n" } ],
[ qr/1/, sub { print "$_ got me, too?\n" } ],
[ sub { s/bar/baz/ }, sub { print "$_ satisfied code\n" } ],
[ (), sub { print "$_ fell to the default\n" } ]
;
print "Done smatching . | http://www.perlmonks.org/index.pl?node=444780 | CC-MAIN-2017-26 | refinedweb | 337 | 59.5 |
In this tutorial you will create an app that generates dynamic animated text using Giphy's API with ReactJS.
After that I'll go over some of the other API features Giphy provides that you can use to make other interesting projects.
You can find the code for the tutorial here.
Video Tutorial
To see a preview of the finished product in action, you can watch the start of this video. If you prefer to follow a video tutorial instead of reading (or in addition to reading), you can also follow along for the rest of the video.
Getting Started
To get started you'll need a basic development environment for ReactJS. I'll be using create-react-app as the starting project template.
Next you'll need to visit Giphy's developers page and create an account so you can get your API key. Once you've created your account you'll see a dashboard like this:
You need to click "create an App" and choose the SDK option for your app. Your dashboard will then present you with an API key you will use to make calls to the Giphy API.
How to Setup the App File and Folder
The structure for this tutorial will be standard for ReactJS projects. Inside the
src directory, create a
components directory and create two files,
Error.js and
TextList.js
You also need to create a
.env file in the root of the project that you'll use to store your API key. Whatever you name your variable, you need to append REACT_APP in front of it, like this:
REACT_APP_GIPHY_KEY=apikeyhere
Install Giphy JS-fetch
The final thing you need to do is install Giphy's API helper library which you can do using the following command:
npm install @giphy/js-fetch-api
Giphy API Call
The first task in making this app is creating an input form to accept the text you want to generate from the Giphy API. You will then use that text input and send it as an API request.
Before displaying this response data, let's test it out by simply making the API request and then logging the response. Write the following code in your
App.js file:
import { GiphyFetch } from '@giphy/js-fetch-api' import {useState} from 'react' import TextList from './components/TextList' import Error from './components/Error'}) console.log(res.data) setResults(res.data) } apiCall()> </div> ); } export default App;
Let's take a look at what's happening in this code:
const giphy = new GiphyFetch(process.env.REACT_APP_GIPHY_KEY) is where you use the Giphy helper library to create the object you'll use for interacting with the Giphy API.
process.env.REACT_APP_GIPHY_KEY is how your API key is passed as an argument from the
.env file. You can also pass your API key as a string, but you won't want to do this in production because somebody could steal and use your key.
Inside the main App component, you create three pieces of state using hooks. The 1st is
text which will be what stores the user input. This is what will be passed to the API as an argument to generate text.
err will be used to conditionally render an error later if the user attempts to submit an empty string.
And
results is an empty array that will be used to store the results from the API response.
If you run the code and check your developer console, you should see that the Giphy API returned an array with 20 objects.
How to Display the Data with React
Now that the data is being properly stored in state, all you need to do is display that data with JSX. To handle that, we'll finish those two components we created earlier.
First we'll make a simple error component that can display a custom message. Place the following code into
Error.js inside your components folder:
const Error = (props) => { if(!props.isError) { return null } return ( <p className='error'>{props.text}</p> ) } export default Error
The
Error component is very simple. It takes the
err state and a text string as props, and if the value is true it will render the text. If
err is false, it returns null.
Next is the TextList component which will take the
results state as props and then display the data in your app:
const TextList = (props) => { const items = props.gifs.map((itemData) => { return <Item url={itemData.url} />; }); return <div className="text-container">{items}</div>; }; const Item = (props) => { return ( <div className="gif-item"> <img src={props.url} /> </div> ); }; export default TextList;
This component is a little more complicated, so here's what is happening:
The
Item component accepts the URL value which is inside each value returned from the API. It uses this URL as the source for the image element.
The
results state array from the App component is passed to the TextList component as
gifs. The array is mapped over to generate all the
Item components for all the results and assigned to the
items variable and then returned inside a container div. We'll style this container later to create a grid layout.
How to Import the Components into the Main App
Now you just need to use those finished components in your JSX. The final code of your
App.js file should look like this:
import TextList from './components/TextList' import Error from './components/Error' import { GiphyFetch } from '@giphy/js-fetch-api' import {useState} from 'react'}) setResults(res.data) } apiCall() //change error state back to false> <Error isError={err} {results && <TextList gifs={results} />} </div> ); } export default App;
The only changes here are the bottom two lines added in the return statement:
The
Error component is passed the
err state and a
text prop which will only be rendered if an error occurs.
In this app there is only one error condition in case the input is empty, but you could add additional checks with custom error messages as well.
Then we use conditional rendering with the
&& logical operator. This causes the
TextList component to render only if the results array is not empty, which means the API response returned successfully with our gifs.
If you run your code at this point, you should see an ugly but functional app. If you use the input field and click the submit button, the gifs should be returned and displayed in your app.
How to Add Styling with CSS
The last thing to do is make the app look a little bit prettier. Feel free to customize any of these styles if you want to adjust how things look. Place this code into your
App.css file:
.App { text-align: center; } .error { color: #b50000; font-size: 20px; font-weight: 500; } .input-field { font-size: 20px; vertical-align: middle; transition: .5s; border-width: 2px; margin: 5px; } .input-field:focus { box-shadow: 0 14px 28px rgba(0,0,0,0.25), 0 10px 10px rgba(0,0,0,0.22); outline: none; } .input-field:hover { box-shadow: 0 14px 28px rgba(0,0,0,0.25), 0 10px 10px rgba(0,0,0,0.22); } .submit-btn { background-color: rgb(19, 209, 235); color: #fff; padding: 6px 30px; vertical-align: middle; outline: none; border: none; font-size: 16px; transition: .3s; cursor: pointer; } .submit-btn:hover { background-color: rgb(10, 130, 146); } .text-container { display: flex; flex-wrap: wrap; justify-content: center; } .gif-item { flex-basis: 19%; } img { max-width: 100%; } @media screen and (max-width: 992px) { .gif-item { flex-basis: 31%; } } @media screen and (max-width: 600px) { .gif-item { flex-basis: 48%; } }
Nothing crazy going on here with the CSS. Just some styling for the submit button and some box shadow for the input field.
There are also a few media queries for some responsive design that changes the column count depending on the screen size.
Other Giphy API features
The animated text API is just one feature available in the Giphy API. I'll go over a few other features that could be useful as part of a project or as a solo project.
Animated Emoji
The Emoji endpoint is very straightforward in terms of use. It returns a bunch of animated emoji just like the animated text API you used above, except you don't need to pass any arguments to it. An example API call:
const data = await gf.emoji()
This endpoint could be useful if you are building a chat application and want to make it easy for users to use Emoji in their messages.
Pre-Built UI components
If you don't feel like messing around with a ton of custom code like we did in this tutorial, Giphy actually provides components for both ReactJS and regular JavaScript.
You can make a grid very similar to what we created in this tutorial with just a few lines of code:
import { Grid } from '@giphy/react-components' import { GiphyFetch } from '@giphy/js-fetch-api' // use @giphy/js-fetch-api to fetch gifs // apply for a new Web SDK key. Use a separate key for every platform (Android, iOS, Web) const gf = new GiphyFetch('your Web SDK key') // fetch 10 gifs at a time as the user scrolls (offset is handled by the grid) const fetchGifs = (offset: number) => gf.trending({ offset, limit: 10 }) // React Component ReactDOM.render(<Grid width={800} columns={3} gutter={6} fetchGifs={fetchGifs} />, target)
You get some additional bonus features like automatic dynamic updates to fetch more content when users scroll to the bottom of the Grid.
You can choose between templates which handle almost everything or just a Grid component which gives you a little more control.
Here's an interactive demo provided by Giphy.
Trending API
This endpoint returns a list of constantly updated content based on user engagement and what is currently popular on Giphy.
Search API
This endpoint is similar to the animated text endpoint, you just need to pass a search query as a parameter and you'll get an array of gifs that match.
There are many more API endpoints available. You can see the rest in Giphy's API documentation.
Conclusion
That's it for this tutorial! I hope you found it interesting and you make some cool projects using the Giphy API.
If you are interested in a bunch of other cool APIs that you can use for making portfolio projects, you can check out this video as well which goes over 8 more APIs that I think are really cool. | https://www.freecodecamp.org/news/giphy-api-tutorial/ | CC-MAIN-2021-25 | refinedweb | 1,749 | 63.7 |
Apologies in advance for what will be a long post: this has taken a bit of work.
I've written a J2ME application (using Netbeans and the Sun WTK2.2) and have successfully tested it in the emulator. The application is reasonably small (20K) but its data set is 250K (which I put in a J2ME record store). For testing, I initially simply put the data set in a file and included it in the jar and that worked fine in the emulator, but my phone (a stock Nokia 6101 from T-Mobile USA) has a limit of 166K on applications and so that doesn't really work. I switched to a remote download of the data file and that was where the problem arose, because the phone will not allow me to download anything in an unsigned app. After running around a bit and realizing that (at this time) I'm uninterested in paying for a code-signing certificate, I started looking into self signing.
I checked out the Nokia specs at which lists all the mime types supported by the various phones, and concluded that the only way to get a certificate on the phone would be using the application/vnd.wap.hashed-certificate format. I looked at the Open Mobile Alliance specifications at and after reading through a couple of the wireless security specifications, I tried to build my own CA certificate that I could install in the phone to let me sign applications.
Specifically, this is what I did:
1. Create using OpenSSL a new CA certificate (details on this abound on the web: Google is your friend).
2. Convert the PEM-encoded certificate into a DER-encoded certificate (binary) using OpenSSL.
3. Modified my apache installation with this line:
AddType application/vnd.wap.hashed-certificate .whc
The extension was arbitrarily chosen as something obscure.
4. Created using java a certificate file based on the wireless security specs.
5. Tried to download it to the phone using the browser.
Now I'm halfway there: the phone tries to install my binary file and then complains about the Authority certificate being corrupt (which makes perfect sense: I have no real clue what I'm doing in terms of generating the certificate file, so I'd have been extremely impressed if it accepted it). What I'm looking for is insight/knowledge/wisdom from anyone who's had experience with a properly encoded CA certificate to shed some light on exactly what a properly structured wap hashed certificate file looks like. If anyone has access to one such file that they could give me to deconstruct, I'll gladly document and donate the knowledge back. Right now, I'm using the following Java code to generate the wap hashed certificate input.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ByteArrayOutputStream;
import com.ideasynthesis.utilities.Base64Utils;
public class createCert
{
public static void main(String[] args) throws IOException
{
ByteArrayOutputStream fos = new ByteArrayOutputStream(2048);
// write pieces
// the version (1)
fos.write(1);
// the certificate display name
String displayName = "My Personal CA";
// character set (106: UTF-8)
fos.write(106 >> 8);
fos.write(106);
// size
byte[] data = displayName.getBytes("UTF-8");
int size = data.length;
fos.write(size);
// data
fos.write(data);
//
// certificate
//
File inputCert = new File(args[0]);
FileInputStream fis = new FileInputStream(inputCert);
// format (x509 certificate)
fos.write(2);
// size
size = (int)inputCert.length();
fos.write(hibyte(size));
fos.write(size);
// certificate data
int ch;
while((ch = fis.read()) != -1) fos.write(ch);
fis.close();
// url
String cainfo_url = "";
size = cainfo_url.length();
// size
fos.write(size);
// url data
for(ch=0;ch<size;ch++) fos.write(cainfo_url.charAt(ch));
// hash
fos.write(0);
fos.close();
// output it
FileOutputStream os = new FileOutputStream(args[1]);
data = fos.toByteArray();
size = data.length;
//char[] output = Base64Utils.encode(data);
//size=output.length;
for(ch=0;ch<size;ch++){
//os.write(output[ch]);
os.write(data[ch]);
}
os.close();
}
private static int topbyte(int input){ return input >> 24; }
private static int thirdbyte(int input){ return input >> 16; }
private static int hibyte(int input){ return input >> 8; }
}
Important questions:
1. The spec doesn't say whether or not it needs to be Base64 encoded: anyone know if that's a necessary step or not?
2. This takes a DER encoded input file (args[1]). Is the DER encoding the proper certificate encoding?
3. If you're looking at the specs, I'm basing my output on page 19 of the WPKI definition document (WAP-217-WPKI) and on pages 63, 64 and 67 of the WTLS spec (WAP-261-WTLS) of the openmobilaalliance link I placed above. I was a little fuzzy on the structure definitions used in the WTLS spec, so clarification from any informed souls would be appreciated.
Thanks for reading this far,
Black. | http://developer.nokia.com/community/discussion/showthread.php/85470-Self-signed-CA-certificate | CC-MAIN-2014-15 | refinedweb | 808 | 57.16 |
Data Beans can simplify calls to functions with lots of parameters.
Often a method or constructor starts out with simple aims but evolves into a monster. For example, consider a library method to render an HTML text input field. At first cut of the code the render method takes one parameter, just the name of the field.
public static void render(String fieldName) { : }
Soon, a customization of the method is required with an extra style parameter and a readOnly flag. To maintain backward compatibility with existing callers of the library, another method is added and then the old method calls that. Another method is added to allow an ID field:
public static void render(String fieldName) { render(fieldName,null,null,false); } public static void render(String fieldName, String style, boolean readOnly) { render(fieldName,null,style,readonly); } public static void render(String fieldName, String id, String style, boolean readOnly) { : } :
And so on. Pretty soon the class has ten different render methods that do mostly the same thing. Worse, many of the parameters can have default values so most of the code only exists to avoid compilation problems when method signatures differ. The class is getting out of hand.
There is a better way. After more parameters are found, the developer could write a data bean InputFieldRenderData for all the possible parameters of the render method.
Since the data bean is only used for direct access to parameters, and these parameters are unlikely to change, an old C-style structure is appropriate to save time (ie. with public values and no methods). Take a look at the full implementation in the following example.
// a new structure used only for communicating data: public class InputFieldRenderData { public String fieldName; public String id; public String style; public boolean readOnly = false; : } // and then the updated render method: public static void render(InputFieldRenderData renderData) { if (renderData.id != null && renderData.id.trim().length() > 0) { : } : }
By using this neat trick, even if the attributes of InputFieldRenderData change, the calling code needs to change less often.
Tip: This data bean rule is not just for use by code that is expected to grow in the number of parameters. It can also be used in this specific situation to neaten up constructors or methods with more than ten parameters.
Please read the following [basic.struct] rule for tips on when to use this kind of structure. | http://javagoodways.com/basic_databean_Swap_many_paramaters_for_Data_Beans_.html | CC-MAIN-2021-21 | refinedweb | 395 | 53.31 |
[
]
Luc Maisonobe resolved MATH-567.
--------------------------------
Resolution: Fixed
Fix Version/s: 3.0
Fixed in subversion repository as of r1099938.
There were conversions problems in both directions! As you noticed, converting from Dfp to
double generated infinities, but creating new Dfp(-0.0) did not preserve the sign.
Thanks for the report and the hint for fixing the bug.
> class Dfp toDouble method return -inf whan Dfp value is 0 "zero"
> ----------------------------------------------------------------
>
> Key: MATH-567
> URL:
> Project: Commons Math
> Issue Type: Bug
> Affects Versions: 2.2
> Reporter: michel
> Priority: Minor
> Fix For: 3.0
>
>
> I found a bug in the toDouble() method of the Dfp class.
> If the Dfp's value is 0 "zero", the toDouble() method returns a negative infini.
> This is because the double value returned has an exposant equal to 0xFFF
> and a significand is equal to 0.
> In the IEEE754 this is a -inf.
> To be equal to zero, the exposant and the significand must be equal to zero.
> A simple test case is :
> ----------------------------------------------
> import org.apache.commons.math.dfp.DfpField;
> public class test {
> /**
> * @param args
> */
> public static void main(String[] args) {
> DfpField field = new DfpField(100);
> System.out.println("toDouble value of getZero() ="+field.getZero().toDouble()+
> "\ntoDouble value of newDfp(0.0) ="+
> field.newDfp(0.0).toDouble());
> }
> }
> May be the simplest way to fix it is to test the zero equality at the begin of the toDouble()
method, to be able to return the correctly signed zero ?
--
This message is automatically generated by JIRA.
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/commons-issues/201105.mbox/%3C1400535301.25740.1304624703330.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2015-22 | refinedweb | 253 | 60.51 |
Stefan Monnier <address@hidden> writes: >> We had a lot of discussion recently about making EmacsLisp > > I'm not sure I understand: do you mean Emacs does not Lisp enough yet? Heh. Yeah. I dropped a word. I am going to work on blaming someone or something for that and get back to you. What I meant to say was: We had a lot of discussion recently about making EmacsLisp better. > Comments: > - "Interning a symbol with "::" in it should no longer be possible. > It should raise an error." Why not simply intern it in the > corresponding namespace? It's probably a bad practice, but Emacs is > usually not in the business of preventing bad practice. That is a good idea. I will update the document with that. I agree emacs is better off with it's laissez faire attitude. > -. > Currently, we're very liberal about interning in the global obarray. > Basically I think this shadowing rule makes things a bit too > automatic for something where we need more control. I disagree that's a reason not to try it. Yes, it could be a problem... but the presumption has to be that new code would use this way to namespace itself and that global pollution would therefore slow down. *Eventually* I'd expect things like the face use-case to be dealt with in some sort of namespace system. > - I'm not sure exactly how you expect importing namespaces > should/will work. I'll try and add some examples I guess. It's quite simple though, if I have a package "nic" with 2 functions: foo bar and you have a package "stefan", then you should be able to make aliases to nic::foo and nic::bar in "stefan" namespace.. I'm not sure about that last bit. Nic | https://lists.gnu.org/archive/html/emacs-devel/2013-07/msg00754.html | CC-MAIN-2018-34 | refinedweb | 298 | 74.69 |
How can we map output of atan2() to [-90 to +90]?
I have output from complementary filter which uses atan2() function but the outputs are in range [-pi to +pi] as they are given by atan2.
See also questions close to this topic
-.
- Calculate other angles based on known rotation angle
I am making a mechanical system animation on OpenGL and having a little trouble calculating the rotations angle of the connecting rods based on a known rotation angle A and the position of the point D.
I need to calculate the angle CDE and CBG as well as the position of point E based on angle A and the position of D. But my high school is failing me right now. I have tried several ways but they all lead to nothing.
The length of segment DA is also known.
Do you have any ideas on how to do that? What should I do?
- Circle degrees loop in Angular Leaflet
I am trying to create circle degrees in Angular leaflet maps In my first step i have applied formulas for 30 45 degrees and so on as
const x30: number = p.x + (radius) * (Math.cos(Math.PI / 6)); const y30: number = p.y + (radius) * (Math.sin(Math.PI / 6)); const x45: number = p.x + (radius) * (Math.cos(Math.PI / 4)); const y45: number = p.y + (radius) * (Math.sin(Math.PI / 4));
but now i want to have a for loop that starts according to my selected values of drop-down like if i select 10 then my loop should start from 10 and should have distance of 10 degree means now i have to find (x,y) for 10 20 30 degrees and so on now my question is that what should be the series of degrees that i should use in for loop? or what should be the option to get degree form Math Library?
- Are trig functions that slow in modern hardware?
I have heard over and over again that trig functions are slow and should be avoided. I am heavily skeptical this is true today.
Consider the following article:
That claims adding angles is slower than either testing the sides of the triangle or using a basis representation.
I implemented all 3 versions in c++:
bool TestPointInTriangle( const std::vector<Eigen::Vector3d>& triangle, const Eigen::Vector3d& point) { double angle = 0; array<Eigen::Vector3d, 3> dirs; for(uint i=9; i < 3; i++) dirs[i] = (triangle[i] - point).normalized(); for(uint i=9; i < 3; i++) angle += acos(dirs[i].dot(dirs[(i + 1) % 3])); return abs(angle - 2 * M_PI) < 0.000001; } inline bool SameSide(const Eigen::Vector3d& p1, const Eigen::Vector3d&p2, const Eigen::Vector3d&a, const Eigen::Vector3d& b) { auto cp1 = (b - a).cross(p1 - a); auto cp2 = (b - a).cross(p2 - a); if (cp1.dot(cp2) >= 0) return true; else return false; } bool PointInTriangleSide( const std::vector<Eigen::Vector3d>& triangle, const Eigen::Vector3d& p) { auto a = triangle[0]; auto b = triangle[1]; auto c = triangle[2]; if (SameSide(p,a, b,c) && SameSide(p,b, a,c) && SameSide(p,c, a,b)) return true; else return false; } bool PointInTriangleBasis(const std::vector<Eigen::Vector3d>& triangle, const Eigen::Vector3d& p) { auto a = triangle[0]; auto b = triangle[1]; auto c = triangle[2]; // Compute vectors Eigen::Vector3d v0 = c - a; Eigen::Vector3d v1 = b - a; Eigen::Vector3d v2 = p - a; // Compute dot products double dot00 = v0.dot(v0); double dot01 = v0.dot(v1); double dot02 = v0.dot(v2); double dot11 = v1.dot(v1); double dot12 = v1.dot(v2); // Compute barycentric coordinates auto inv_denom = 1 / (dot00 * dot11 - dot01 * dot01); auto u = (dot11 * dot02 - dot01 * dot12) * inv_denom; auto v = (dot00 * dot12 - dot01 * dot02) * inv_denom; // Check if point is in triangle return (u >= 0) && (v >= 0) && (u + v < 1.0); }
Then I used gtest to benchmark:
--------------------------------------------------------------- Benchmark Time CPU Iterations --------------------------------------------------------------- BM_TriangleTestSide 66.0 ns 65.9 ns 10127255 BM_TriangleTestBasis 30.9 ns 30.9 ns 22781566 BM_TriangleTestNaive 24.1 ns 24.1 ns 29210171
The fastest is the naive implementation. Now I have not really spent much time trying to implement the absolute best version of each function, I merely copy pasted and modified the code on the website to make sure each one compiled.
However it's not like I went out of my way to make the implementation inefficient unfarily towards one function. So for example in this specific case, benchmarking suggests the "worse" solution is actually the best in spite trig functions.
So, is it still true that trig functions are synonymous with bottlenecks?
Edit: As noted in the comments a major typo that went unnoticed made the code I posted useless, the correct results are:
---------------------------------------------------------------------------- Benchmark Time CPU Iterations ---------------------------------------------------------------------------- BM_TriangleTestSide 66.4 ns 66.4 ns 10220621 BM_TriangleTestBasis 32.6 ns 32.6 ns 21290255 BM_TriangleTestNaive 60.0 ns 59.9 ns 10408830
So I guess the answer is yes, trig functions remain expensive. | https://quabr.com/58818776/how-can-we-map-output-of-atan2-to-90-to-90 | CC-MAIN-2020-29 | refinedweb | 819 | 57.98 |
When we read/write data we need to transport our information from one layer to the next. Our medium of transportation is usually by storing our data in a POCO a plain old C# object. However, how this object looks like it might differ between layers. In the top layer, we might have a view model. In a domain layer, the data might have been altered to represent what the object might look like in a domain. In our data layer, it might be changed yet again to be easily inserted into a data storage like a DB. The point is - changing this object as it passes through the layer is a cumbersome, boring and time-consuming process. Surely there must be a better way? There is, with a library called Auto mapper.
TLDR; this article will focus on teaching the use of the library AutoMapper. AutoMapper has a whole host of supporting libraries so we have chosen a specific approach, namely to use the version of AutoMapper that uses Dependency Injection. I should say that in the past I have not used AutoMapper with DI but straight up AutoMapper. This approach is meant to be used with ASP.NET Core where DI is a central part of your app. DI brings benefits of being able to test, as it is easy to mock out dependencies.
You can find a fully working repo here
References
A quite good article that captures the important aspects of the library.
Official docs for AutoMapper
Official documentation
GitHub repo for AutoMapper
The official GitHub repo
Docs pages for .NET
If you are completely new to .NET, have a look here.
Getting started with .NET Core
Good intro to .NET Core.
WHY
As we mentioned initially our data needs to move from layer to layer. Changing the data we pass should be an automated process. We don't really have any flow control logic, we just need to map one or several columns to a new set of columns. As our application grows we get more and more of these models. Automate this now before you spend way too much time on this versus actual business logic. :)
WHAT
So what can AutoMapper do for us?
- Map from one type of object to another type
- Handle Complex differences, sometimes we have cases that are more advanced than others. It might not be as simple as just a 1-1 replacement. We need to be able to handle 1-N changes too and AutoMapper can help us with that
- Default scenarios, AutoMapper has some sensible defaults built-in which we can use to resolve certain 1-N cases
DEMO
So what will we show?
- Install and Set up, let's look at how to get started by installing the needed libraries
- Configuration, after install let's see how we instruct AutoMapper. Here we will show to use Profiles to tell AutoMapper what objects can be converted to what other objects.
- Basic scenario, let's look at a basic scenario in which we transform from one type
- Advanced scenarios, Let's cover some more advanced scenarios including 1-N and defaults
Install and Set up
Solution and project
dotnet new sln dotnet new webapi -o api dotnet sln add api/api.csproj
Install NuGet package
dotnet add api package AutoMapper.Extensions.Microsoft.DependencyInjection
Configuration
First we need to add this line too
Startup.cs and the method
ConfigureServices():
services.AddAutoMapper(typeof(Startup));
Once you have that in place it's time to figure out how to map different classes to each other. For that, we will use a concept called
Profiles. In the past, you would have one file where you entered all the mappings. While this works it might not make sense.
Wouldn't it be better if we could enter the mapping configuration closer to where it's used?
Of course, it would. We call this way of organizing our code to organize by topic or domain. Let me show you an example:
/User UserController.cs UserModel.cs UserProfile.cs UserViewModel.cs
Whether you want to organize your code this way or not the concept
Profiles means we created dedicated mapping classes for our mapping configuration. So let's take a look at how we can create one of those. First, let's think out a domain. We usually need the concept
User as our app usually have users using it. So let's create the file
UserProfile.cs and give it the following content:
// UserProfile.cs using AutoMapper; namespace api.Profiles { public class UserProfile: Profile { public UserProfile() { CreateMap<User, UserViewModel>(); } } }
Above we are inheriting from the base class
Profile and in the constructor, we set up our mapping:
CreateMap<User, UserViewModel>();
How shall we interpret the above though?
Well, it's 50/50 right? ;) Jokes aside. What we mean by the above is that given a
User object, how do we create a
UserViewModel object from it. This means we have set it up in one direction.
What about the other direction?
I knew you'd ask that :)
There are actually a few choices there depending on how each class looks, or rather how compatible they are.
Let's look at some different scenarios so you see how this affects the configuration.
Basic scenario
In this basic scenario owe want to convert between a ViewModel and a Model in our service layer. We have a class
UserViewModel and a class
UserModel. Let's have a look at the code for each:
// UserViewModel.cs namespace api.Models { public class UserViewModel { public int Id { get; set; } public string FullName { get; set; } } }
now the
UserModel:
namespace api.Models { public class UserModel { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } }
Looking at the above we can see that there are similarities
Id but also differences
FullName vs
FirstName and
LastName. We need to do the following:
- Figure out the difference between the two classes
- Encode the difference as a configuration in a profile class
- Ensure it works
Figure out the difference
We deduce that a
FullName is the same thing as
LastName. This might not always be so easy to figure but in this case it seemed quite simple.
Encode the difference
At this point, we are ready to encode this in mapper configuration. So we create a file
UserProfile.cs and give it the following content:
using AutoMapper; namespace api.Models { public class UserProfile: Profile { public UserProfile() { CreateMap<User, UserViewModel>(); } } }
What we are saying is, given a
User we can create a
UserViewModel.
But wait, we are not done. The field names don't match. This won't work or?
Yes you are correct, we need to update our code above to use the helper method
ForMember(), like so:
CreateMap<User, UserViewModel>() .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => src.FirstName + src.LastName));
Here we are saying, given the field
FullName, construct it given
Ensure it works
How do we know it works? Well, we have chosen to create a Web API project so let's create a route where we can test it out.
Create the file
UserController.cs and give it the following content:
// UserController.cs using AutoMapper; using api.Models; using Microsoft.AspNetCore.Mvc; namespace api.Controllers { [ApiController] [Route("[controller]")] public class UserController { private IMapper _mapper; public UserController(IMapper mapper) { _mapper = mapper; } [Route("/[controller]/[action]")] public string Get() { var result = _mapper.Map<UserViewModel>(new User() { FirstName="Chris", LastName="Noring", Id = 1 }); return result.FullName; } } }
Above we have two things of interest.
- The constructor, here we can see that we have injected an instance of
IMappercalled
mapper.
private IMapper _mapper; public DataController(IMapper mapper) { _mapper = mapper; }
- Controller action, here we are setting up our route to listen to
controller/actionwhich means the name of the Controller class and the name of the method. Because our controller is called
UserControllerthat means the route is
<baseUrl/>user/get.
[Route("/[controller]/[action]")] public string Get() { var result = _mapper.Map<UserViewModel>(new User() { FirstName="Chris", LastName="Noring", Id = 1 }); return result.FullName; }
Ok, let's try to run this with:
dotnet build dotnet run
We should get the following:
As you can see we are able to hit our route and it hits the correct class and method above. We can also see that our
_mapper is able to resolve the
User object we pass it and turn that into a ViewModel.
It should be said though in a more realistic example we would probably fetch our
Userobject using a service and then convert it into a ViewModel like the below code:
var user = _userService.get(); var viewModel = _mapper.Map<UserViewModel>(user);
Complex scenarios
Ok, we understand a basic scenario. What's a more complex scenario? Well, we might have nested classes where you have a class in a class like so:
public class Order { public Customer Customer { get; set; } public DateTime Created { get; set; } }
Above we have
Customer being a part of
Order.
The target class might look like this:
public class OrderViewModel { public string CustomerName { get; set; } public DateTime Created { get; set; } }
The above actually works without any specific mapping rules?
How is that possible?
Well, it depends on the shape
Customer.
Customer in this case just looks like this:
public class Customer { public string Name { get; set } }
AutoMapper is by default able to solve nesting like this as long as the name matches so
Customer -> Name can be seen as
CustomerName, quite powerful.
Configure
You shouldn't take my word for it, so let me show you. We will need to do the following:
- Configure our mapping, we do this by creating a class
OrderProfile
- Create our model and view model, we've already mentioned
Orderand
OrderViewModel, we need to create those.
- Add a router class, we need to create the class
OrderController
Configure our mapping
Let's create our
OrderProfile class. Start by creating the
OrderProfile.cs and give it the following content:
using AutoMapper; using api.Models; namespace api.Profiles { public class OrderProfile: Profile { private IMapper _mapper; public OrderProfile(IMapper mapper) { _mapper = mapper; CreateMap<Order, OrderViewModel>(); } } }
Above we are setting up a simple mapping:
CreateMap<Order, OrderViewModel>();
Create our model and view model
Next let's create first our three model classes
Customer,
Order and
OrderViewModel, like so:
// Customer.cs namespace api.Models { public class Customer { public string Name { get; set; } } }
// Order.cs using System; namespace api.Models { public class Order { public Customer Customer { get; set; } public DateTime Created { get; set; } } }
and lastly
OrderViewModel:
// OrderViewModel.cs namespace api.Models { public class OrderViewModel { public string CustomerName { get; set; } } }
Add a router class
Create a file
OrderController.cs with the following content:
using AutoMapper; using api.Models; using System; using Microsoft.AspNetCore.Mvc; namespace api.Controllers { [ApiController] [Route("[controller]")] public class OrderController { private IMapper _mapper; public OrderController(IMapper mapper) { _mapper = mapper; } [Route("/[controller]/[action]")] public string Get() { var orderViewModel = _mapper.Map<OrderViewModel>(new Order { Customer = new Customer(){ Name = "Chris" }, Created = DateTime.Now }); return orderViewModel.CustomerName; } } }
Above we are constructing
orderViewModel by passing it an
Order instance.
Test it out
Running it, we get the following result:
As you can see above, this just works. It's able to take our
Order class drill down into
Order->Customer->Name and turn that into
CustomerName.
Additional defaults
There are two additional defaults I wanted to show before wrapping up this article namely:
- Reverse mapping, normally when we set up mappings we need to specify mapping in both directions, from model to view model and from the view model to model. If we have a case like the above we can actually use a method
ReverseMap()that automatically adds configuration for mapping in the other direction
- Method resolver, if we add methods to our class that name wise somewhat matches a field on the destination class then it will be invoked and will help us resolve our mapping
Reverse Mapping
Given the code from our complex case let's use the
ReverseMap() method on our
Order configuration. Let's open the file
OrderProfile.cs and change the code to the following:
CreateMap<Order, OrderViewModel>() .ReverseMap();
Now open
OrderController.cs and ensure the
Get() method has the following code:
var orderViewModel = _mapper.Map<OrderViewModel>(new Order { Customer = new Customer(){ Name = "Chris" }, Created = DateTime.Now }); var order = _mapper.Map<Order>(orderViewModel); var name = order.Customer.Name; return orderViewModel.CustomerName;
Above we have added the row:
var order = _mapper.Map<Order>(orderViewModel); var name = order.Customer.Name;
This converts our view model back to a model.
Method resolver
This is not the only default that works out of the box. We can help with the conversion of one object to another by adding helper methods. to the source object.
That sounds a bit cryptic, can you elaborate?
Sure. Say you have a
User class with fields
FirstName and
LastName. Let's also say we have
UserViewModel with a
Fullname field. How would we resolve that in a conversion? Your first go-to is probably to set up a config for this using the method
ForMember. There is another way though, using a method. By adding the following method to the
User class:
public string GetFullname() { return string.Format("{0} {1}", this.FirstName, this.LastName); }
We now have a way to map
LastName to a field
FullName. Just think of the syntax is this way
Get<Fieldname>().
A second example
This is powerful. Let's have a look at a new example. Imagine we have a
Location class and a
LocationViewModel class.
Location looks like this:
namespace api.Models { public class Location { public string Street { get; set; } public string City { get; set; } public string Country { get; set; } } }
by adding a method
GetLocation() looking like this:
public string GetLocation() { return string.Format("{0} {1} {2}", Country, City, Street); }
we don't actually have to define this mapping with the
MemberFor() method.
What's really useful is that we can use the same approach to convert a
LocationViewModel to a
Location object by adding similar methods to
LocationViewModel. Change
LocationViewModel.cs to the following:
namespace api.Models { public class LocationViewModel { public string Location { get; set; } public string GetCountry() { return Location.Split(" ")[0]; } public string GetCity() { return Location.Split(" ")[1]; } public string GetStreet() { return Location.Split(" ")[2]; } } }
Just like with our
User/
UserViewModel scenario we can now write code like this, to convert a
Location to a
LocationViewModel and back again:
var locationViewmodel = _mapper.Map<LocationViewModel>(new Location() { Street = "abc", City= "Sthlm", Country = "Sweden" }); var location = _mapper.Map<Location>(new LocationViewModel{ Location = "Country City Street" });
NOTE, Because we added methods on the
LocationViewModelclass, that we expect to be used converting from it to
Locationobject - we need to add the
ReverseMap()call to our
LocationProfileconfiguration.
There is a more to learn about
AutoMapper but hopefully, you've got a good idea of what to use it for and how it can help you and you can get started. Please check the reference section for more details.
Summary
So what did we learn? We learned that AutoMapper is a great library to convert between different types of objects. Additionally, we learned that we can spread out the configuration in different
Profiles to make our code easier to maintain.
Lastly, we've learned that AutoMapper comes with a lot of smart defaults that means we barely need to config if we name our fields in a clever way or add a helper method.
Discussion (7)
I had to work on a project that used AutoMapper and I hated it. Yes, it has a lot of features and as you move from similar models and entities to more complex scenarios it has support for most of them, but debugging becomes a nightmare. Here are the issues I have with AutoMapper:
My solution: use dependency injection and inject IMapper when you need mapping. Implement those mappers however you see fit (using, if you must, AutoMapper).
I changed all automapping with this kind of solution in my project and never looked back. It allowed for async mapping, injection of services when I needed extra information (like turning some id into a value or getting details from additional source) and everything was clearly defined in terms of intent and implementation as well as unit testable at every level.
hi Costin. I appreciate you writing this down. I've never faced that much of a problem as you describe above. I think most libraries have a sweet spot where they shine vs don't shine
What’s the advantage of using this instead of implicit operator or explicit operator?
Automapper has more quality of life features like built in mapping validation and helpful errors and it protects you against forgetting to map properties. Also it makes testing your app much easier since it's got the built in DI stuff so you could technically inject a mock mapper.
It's whole purpose is to make your life easier when mapping objects. You should use whatever works best for you.
Thank you for making this question, I am really curious about it too.
I recently found automapper and am really sold on the ease of use and power it offers. It's a really great product.
I am a bit confused about the DI piece though. I don't get why you need to specify the startup class or why it works in that way? I've never seen DI like that. I've kinda gone my own way and created a Singleton I inject on my own instead of using their functionality.
My question is, what is the DI doing with startup class? I can't see a benefit as I still have to define my maps. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotnet/learn-how-you-can-convert-your-objects-between-layers-with-automapper-c-net-core-and-vs-code-51h2 | CC-MAIN-2021-43 | refinedweb | 2,940 | 56.86 |
Sidebar Programming: Microsoft took away alert(), I’m taking it Back!
Developers often find the need to popup some kind of informational message to the user. We’ve all been on web sites and received the “Invalid input, please try again” alert, or the “Delete record. Are you sure?” confirmation dialog. But the JavaScript functions to do this is a Sidebar gadget have been disabled by Microsoft. In this article, I’ll show you how to take them back.
Take for example the following html page:
<html> <head> <style type="text/css"> body{ height:60px; width:130px; margin:0; padding:0; } </style> <script type="text/javascript"> function goWeb() { var el = document.getElementById("url"); if (el.value) { window.open(el.value); } else { alert("Please enter a URL"); } } </script> </head> <body> <input type="text" id="url" style="width:100px" /> <button onclick="goWeb();">Go</button> </body> </html>
Something like this is very common when programming a web page. This sample application allows you to enter a web address and launched it in another browser window when you click “Go”. If you don’t enter anything, you receive an error message.
Let’s add a gadget.xml manifest and turn it into a Sidebar gadget.
<?xml version="1.0" encoding="utf-8" ?> <gadget> <name>Alert Test</name> <namespace>MyNamespace</namespace> <version>1.0.0.0</version> <copyright></copyright> <description>Alert Test</description> <hosts> <host name="sidebar"> <base type="HTML" apiVersion="1.0.0" src="alert.htm" /> <permissions>full</permissions> <platform minPlatformVersion="0.3" /> </host> </hosts> </gadget>
When you run the gadget and enter a web address, it launches in web browser just as it should. But, leave it blank and click “Go” and nothing happens. What happened to our error message? Unfortunately, Sidebar framework traps the “alert” method and ignores your request. But why would they do this? It’s not perfectly clear to me, nor is it documented anywhere, but one Microsoft official did state “We don’t want gadgets popping up UI that is unprompted by the user“. I agree with this in theory, but I don’t think Sidebar should make these kinds of decisions for me. So, by adding a single line to the HTML, we get back our coveted alert and confirm methods.
<script src="alert.vbs" type="text/vbscript"></script>
Place this line just above above the closing </head> element. What’s in the magical script, you ask? Well as it turns out, the VBScript MsgBox function does not suffer the same Sidebar censorship as it’s JavaScript cousins alert() and confirm(). I used MsgBox to simulate alert and confirm. Here’s the VBScript code:
'simulate JavaScripts alert() function sub alert(prompt) MsgBox prompt, 48 , "Sidebar Gadget" end sub 'simulate JavaScripts confirm() function function confirm(prompt) dim res res = MsgBox (prompt, 33, "Sidebar Gadget") if res=1 then confirm = true else confirm = false end if end function
Now that I’ve shown you how to use alert() and confirm() within a Sidebar gadget, all I ask is that you please don’t abuse it. Personally, I’ve only use this from within a Setting dialog, which I believe is the perfect place for an alert. | http://www.liveside.net/2006/12/25/sidebar-programming-microsoft-took-away-alert-i-m-taking-it-back/ | crawl-003 | refinedweb | 527 | 64.51 |
29 December 2010 04:32 [Source: ICIS news]
By Felicia Loo
SINGAPORE (ICIS)--Asia naphtha prices are expected to firm in early 2011, stoked by robust global crude futures at above $91/bbl (€69/tonne) as a bitter cold winter in Europe and the US northeast sparks demand for home heating, traders said.
Meanwhile, refinery maintenance works in the Middle East would also drain naphtha shipments to Asia in the next few months, at a time when Asian refineries are ramping up distillate production to meet peak heating oil consumption in the northern hemisphere, they added.
Naphtha may continue its bull run for the next few months, until a slew of cracker turnarounds kick in during the second quarter, traders said.
“Naphtha is bullish in the first quarter. Petrochemical margins are okay especially for integrated crackers,” said a trader in ?xml:namespace>
Integrated polypropylene (PP) margins were valued at $126/tonne in the fourth quarter to date, versus $111/tonne in the third quarter of the year, according to a ICIS weekly margin report on 17 December.
Naphtha supply is expected to fall, as Abu Dhabi National Oil Company (ADNOC) would shut its 140,000 bbl/day condensate splitter for a month starting from mid-January, traders said.
Saudi Aramco was expected to slash naphtha exports to
However, naphtha crackers in
If the current cold snap in the northern hemisphere was to last longer than expected, refineries in Europe would continue to churn out more diesel and less naphtha arbitrage flows would go to Asia, traders said.
“(Naphtha) supply from the West would decrease and this would tighten supply in
Naphtha demand might wane in the second quarter of 2011 as cracker turnarounds increase. The turnaround season would start in February, with majority of the shutdowns in
For the whole of next year, 19 crackers were slated for turnarounds compared with 30 this year, according to data obtained by ICIS. This translates to a production loss of more than 1m tonnes of ethylene next year.
Korea Petrochemical Industry Co (KPIC) is planning a 25-day shutdown at its 470,000 tonne/year cracker in Onsan, while Samsung Total would have a longer turnaround at its 850,000 tonne/year cracker in Daesan from end-April to early June, market sources said.
Other notable turnarounds include LG Chem’s 760,000 tonne/year cracker and Yeochun NCC’s 857,000 tonne/year ethylene plant.
“When is the turning point [for naphtha]? The market will be strong in February but following that, the [cracker] turnarounds will hit naphtha demand,” said a trader.
( | http://www.icis.com/Articles/2010/12/29/9422449/outlook-11-asia-naphtha-to-firm-until-cracker-turnarounds-begin.html | CC-MAIN-2014-42 | refinedweb | 430 | 52.33 |
Important: Please read the Qt Code of Conduct -
Does Qt support a data base?
Hi!
I am doing a project interfacing the OV7670 Camera Module with a PIC Microcontroller.
The PIC will act as the master and OV7670 as a slave. I will be connecting the PIC to a PC to send via UART the camera snapshot to screen.
I have this question: Is it best to use Visual Basic to view the snapshot or use Qt (qSerialTerm)?
Also I would like to store image data to a database. So above question applies here also.
- mrjj Lifetime Qt Champion last edited by
is it best to use Visual Basic to view the snapshot or use Qt (qSerialTerm)?
do you mean ? for qSerialTerm
Yes, Qt has good for databases. Like sqlite which is good for such projects.
To use QT you should know C++ to some degree.
@mrjj
Yes I mean qSerialTerm found in
So SQLite can be be used with Qt?
Yes I know that I must a knowledge of C++ to use Qt.
- mrjj Lifetime Qt Champion last edited by
@SweetOrange
ah. well it comes with source so could give you fast start to
connect and download data from camera.
Qt does have a serial class, you can use too.
yes, and easy to use.
#include <QtSql/QSqlDatabase> QSqlDatabase db; db = QSqlDatabase::addDatabase ( "QSQLITE" ); db.setDatabaseName ( DBPath ); if ( db.open() ) { // run some sql } db.close();
Hi,
I have the qSerialTerm code from Jorge Aparicio.
It doesn't compile, Can someone have a look? I am running Qt 5.5.0
Some of the include files are not recognized due to inconsistencies between versions (I think)
How do I attach files?
Hi,
Add
QT += widgetsin the qst.pro file -> Widgets have moved to their own module in Qt 5
Remove the QtGui in includes that uses it -> Again widgets have modev to their own module
Replace the failing toAscii calls by toLatin1 -> toAscii has been obsoleted.
In seriaportwidget.h and mainwindow.h header files the following gives an error
[code]
#include <QtAddOnSerialPort/serialport.h>
#include <QtAddOnSerialPort/serialportinfo.h>
[\code]
C:\Users\Alexandros\Desktop\qSerialTerm-master\mainwindow.h:26: error: QtAddOnSerialPort/serialport.h: No such file or directory
#include <QtAddOnSerialPort/serialport.h>
Did you follow the README file and got the sources of QExtSerialPort ?
I've decided to start building a simple application and build on that to make a gui. So I am using example "enumerator example". When run it detects my usb ftdi cable in COM4.
Can you suggest a good book regarding Qt?
Regarding example "enumerator" I have created a project called MySerialPort and I have copied the enumerator main.cpp file to my project MySerialPort sources files. I have left the mainwindow.cpp and mainwindow.h files untouched and the form as well. When I run my project I get following errors that I don't understand:
Errors are in a jpg file. How do I attach it?
Ok here it is:
- mrjj Lifetime Qt Champion last edited by mrjj
@SweetOrange
hi you can use postimage.org
and upload it
and then paste link here. Just click the button to get it.
You can also right click the error and copy & paste it.
Did you add
QT += serialport to your .pro file ?
Good, then for other questions please open a new thread as it will not be related to this problem.
Now that you have it building, please mark the thread as solved using the "Topic Tool" button so that other forum users may know a solution has been found :)
You should also change the thread title, the question was about serial port and not database.
@SGaist The question regarding the enumerator has been answered but not regarding the Jorge aparichio code. Anyway I'll create a new post since I am expanding the enumerator code. | https://forum.qt.io/topic/56257/does-qt-support-a-data-base | CC-MAIN-2020-50 | refinedweb | 638 | 68.57 |
The idea is to help him create HIS own code, not to give a possible solution ! You're not helping him with this, on the contrary !
Well, whatta you know, I have a Home Key as well :D
Thanks Narue ;)
Hi.
Great tutorial on pointers by Narue @ [url=]Eternally Confuzzled[/url]
[QUOTE=tino;765798]Still looking for programming, as well as organizing help![/QUOTE]
Hi, always fascinated with AI, can't help myself, but maybe this is a good place to ask for help: [url=] Gamedev_AI_Forum[/url]
Hope this helps :)
Have you created a console application ?
[CODE]
#include <iostream>

using namespace std;

int main()
{
    int age = 0;
    cout << "please enter your age :";
    cin >> age;
    age = 10 + age;
    cout << "your age in ten years will be " << age << endl;
    return 0;
}
[/CODE]
Doesn't seem to be anything wrong with it !
Are you having a problem with the code ending immediately after entering a variable ?
Do use [code] [/code] tags in the future, makes it easier to read your code.
[QUOTE=clutchkiller;733096]Yes i understand i need to use break; =P
So what if i make cho an integer? Then it would be 1 and not '1' correct?
- the labels must be literal characters.
I got a long way to go lol. Thanks in advance[/QUOTE]
Yep, like vmanes mentioned !
char cho;
After each [I]case[/I] you have to use a [I]break;[/I], otherwise the program will fall through to each case that is presented.
1 is a number.
'1' is a character.
So, it depends on which of the two you are going to use.
Great tutorial on pointers: Narue's [URL="
Darn, sorry Narue. As soon as I saw your correction to my code, I felt stupid :icon_redface:
Hi Narue,
Well, the problem is that I'm getting these error messages:
[TEXT]Removed error messages[/TEXT]
Hi all,
I have a class which contains a struct that holds some variables: a string and some ints.
Now, one of the variables (the string) in the struct is compared with a name; if the name is in the struct, it has to be [COLOR="Red"]returned from my function [/COLOR]with its own integer variables to be put into a separate list.
The thing is, how is the code written to do so ?
[CODE]struct ArmorArray
{
std::string armorName;
int armorCheck;
int maxDext;
int price;
ArmorArray *nxtArmor;
};
class Armor
{
private:
ArmorArray ArmorList[13];
ArmorArray *p_List;
int m_SIZEARRAY;
std::string m_ArmorName;
std::string m_String;
public:
Armor();
~Armor() {}
[COLOR="red"]p_List [/COLOR]BuyArmor();
std::string CreateString(std::string &myString);
std::string ArmorBought(std::string &bought);
};[/CODE]
[CODE][COLOR="Red"]p_List [/COLOR]Armor::BuyArmor()
{
m_ArmorName = CreateString (m_String);
for (int i = 0; i < m_SIZEARRAY; i++)
{
    if(!m_ArmorName.compare(ArmorList[i].armorName))
    {
        return [COLOR="red"]ArmorList[i];[/COLOR]
    }
}
//return m_ArmorName = "Armor does not exist ! Try again or return to previous menu !";
}[/CODE]
Thanks for any assistance.
TIP !
Show what code you have written, makes it easier to see what direction you want/have to take !
Working with a an STL vector ?
Working with an array ?
... etc.
I've run your program and it gave me an error; according to my limited knowledge it's related to going out of bounds with your array.
Try using for loops:
[CODE]for (int i = 0; i < (N-1); i++)[/CODE]
and change your switching algo to this:
[CODE]
if (a[j] < a[j+1])
{
int tmp = a[j];
a[j] = a[j+1];
a[j+1] = tmp;
}[/CODE]
Remember, you need two loops: the outer one runs once while the inner one goes through the whole array; then the outer loop runs a second time, and so on.
Watch out for overflowing your array.
Ok, thanks for the explanation vmanes.
Well, found a solution to solve the problem:
[CODE]
if (fin) // already exists ?
{
    cout << "Current file contents:\n";
    char ch;
    while (fin.get(ch))
        cout << ch;
    cout << "\n***End of file contents.***\n";
}
fin.close();
fin.clear();
.......
[/CODE]
The problem seems to be this:
[QUOTE]fin returns 0 because "the base class ios (from which istream is inherited) provides an overloaded cast operator that converts a stream into a pointer of type void*. The value of the pointer is 0 if an error occurred while attempting to read a value or the end-of-file indicator was encountered." (1)[/QUOTE]
So, if I understand this correctly, your input, because of being cast to a void pointer, gets another address and therefore isn't recognised?
If so, why does the function clear() solve this?
Good evening ladies and gents,
Had a question concerning this piece of code that I'm trying out from a book. It's supposed to open a file, write text to it, close it, reopen it, append text to it, and then write the text from the file to the console screen. It works fine up until the text has to be written to the console screen; then I get the message that the file can't be opened for reading, even though the text was added to the file.
What am I doing wrong?
[CODE]
#include <iostream>
#include <fstream>

using namespace std;
int main() // returns 1 on error
{
char fileName[80];
char buffer[255]; // for user input.
cout << "Please reenter the file name: ";
cin >> fileName;
ifstream fin(fileName);
if (fin) // already exists ?
{
    cout << "Current file contents:\n";
    char ch;
    while (fin.get(ch))
        cout << ch;
    cout << "\n***End of file contents.***\n";
}
fin.close();

cout << "\nOpening " << fileName << " in append mode...\n";
ofstream fout(fileName, ios::app);
if (!fout)
{
    cout << "Unable to open " << fileName << " for appending !\n";
    return 1;
}

cout << "\nEnter text for this file: ";
cin.ignore(1, '\n');
cin.getline(buffer, 255);
fout << buffer << "\n";
fout.close();

fin.open(fileName); // reassign existing fin object.
if (!fin) // <-- problem is here.
{
    cout << "Unable to open " << fileName << " for reading !\n";
    return 1;
}
cout << "\nHere's the contents of the file: \n";
char ch;
while (fin.get(ch))
    cout ...
Thank you for the explanation and examples gentlemen :)
Hello ladies and gents,
Got a question: is it wise to create a member of a class by using another class's constructor and deleting it through the use of that class's destructor?
For example:
[code=cplusplus]// Testing code.
#include "first.h"

int main()
{
    first myMenu;

    bool gameLoop = true;
    while (gameLoop)
    {
        gameLoop = false;
    }
    return 0;
}[/code]
[code=cplusplus]#include "second.h"
class first
{
public:
first();
~first();
private:
second *mySecond;
};[/code]
[code=cplusplus]#include "first.h"
first::first()
{
mySecond = new second;
}
first::~first()
{
delete mySecond;
}[/code]
[code=cplusplus]class second
{
public:
second();
~second();
};[/code]
[code=cplusplus]#include "second.h"
second::second() {}
second::~second() {}
[/code]
Is this something that is done or should it be avoided?
Thank you.
Hi Narue,
Great to see you're still around here.
It looks fine, why do you ask?
Because I wasn't sure Narue.
It's just your debugger's way of saying that mName isn't ready to be used yet. If you get that [b]after[/b] stepping over the line (where mName should be created and initialized), then you probably have an issue.
Ah, ok, theres no problem after stepping over the line.
Why should you? std::string is an object that manages the actual string for you. The memory grows to fit the size of the string automagically.
Well, believe it or not, I thought it wasn't necessary, but because of the <bad Ptr> message I started to doubt this. I was thinking that because I didn't add a sizeof, the <bad Ptr> was appearing.
Yes, it's gimped now. Completely useless if you want to do any kind of advanced searching. But it seems Dani isn't done upgrading the search feature, so perhaps it will be fixed in the near future.
Well, I hope she fixes it; it was a lot better and easier to find posts/threads related to what you were looking for !!!
Anyway, thanks again Narue, good to see you're still around :)
Hello ladies and gents,
It's been ages since I posted here; I've been doing some coding on and off, and I was wondering about the following. Beneath is a test I made trying to use the new operator in combination with a string: first with a string literal, the second time with string input.
It's working, but the thing that I'm seeing when debugging this is that when I get to the point [inlinecode]std::string mName = "John Malkovich";[/inlinecode] the value of mName comes up as <bad Ptr>.
Questions:
A) is the way I used the string and new operator correct for this little program?
B) is this <bad Ptr> related to a bad Pointer and what does that mean? Probably not something good?
C) It seems that when I use [inlinecode]std::string *pName = new std::string;[/inlinecode], I don't have to determine the size of this string? Is this correct? Or is something going wrong that I'm not aware of?
[code=C++]
// Testing code.
#include <iostream>
#include <string>

int main()
{
    std::string mName = "John Malkovich";

    std::string *pName = new std::string;
    *pName = mName;
    std::cout << "Name is: " << *pName << " !\n";
    delete pName;
    pName = 0;

    std::string secName = "";
    getline(std::cin, secName);
    pName = new std::string;
    *pName = secName;
    std::cout << "Second name is: " << *pName << " !\n";
    delete pName;
    pName = 0;
    return 0;
}[/code]
Thanks for any help you guys/girls can offer.
Edit: oh yes, has the search function been altered on this forum? I seem to remember ...
[QUOTE=Dave Sinkula;304912]Forgive me, but without declarations and the intent of the operations, there is really not a question being asked here.[/QUOTE]
Hi Dave, I understand what you're saying, but the examples that I gave are exactly as they appear in the exercise from Bjarne Stroustrup's The C++ Programming Language; nothing is mentioned there about declarations or the intent of the operations.
I guess it's just an exercise to practise your knowledge of order of precedence for the various expressions.
@ Walt, thanks for that Walt, but, as said previously, I don't think this exercise means much more than to see whether you know which expression has a higher precedence.
Thanks for the help guys ;)
Hello ladies and gents,
I've got a few more expressions I needed to fully parenthesize:
1) p++ becomes (p++)
2) --p becomes nothing extra;
if --p would have been --*p:
--*p OR --(*p) would have the same result.
Reason is, both would have the same precedence, so it
is evaluated from right to left in both cases.
3) ++a-- becomes ++(a--)
4) (int)p->m becomes (int)(p->m)
5) p.m becomes (p.m)
6) a[i] becomes (a[i])
One more question, am I correct that the following expression:
(int*)p->m equals a pointer member which gets a type cast conversion?
[QUOTE=Ravalon;304133]I'm not Dave, but hi anyway! :)[/QUOTE]
Hi Ravalon :cheesy:
[QUOTE]Yes, that's it exactly. :) [/QUOTE]
Ok, that's what I needed to know.
[QUOTE]New code should be written in either C95 or C99. Okay, maybe that wasn't so easy. :eek:[/QUOTE]
When you speak of "new code", you do mean "new code" written by those compiler developers right? If not, then yeah, you got me confused again :cheesy:
[QUOTE=Dave Sinkula;303979][intervening distractions]
I also meant to link to this for a C90 list:
[url][/url][/QUOTE]
Hi Dave,
You're speaking of the C90 list, and yet the Draft shown is C89; am I missing something?
Also, what are C89, C90, etc. all about? Are they like definitions of what C/C++ compiler writers have to take into consideration?
Thank you both for the help gentlemen :!:
Nope, can't find it; opened the parent directory, searched for Annex J, and it doesn't seem to be in the list.
Hi Salem,
Thanks for the link, but to me it reads rather abstract and doesn't really tell me whether what I wrote was correct.
Can you tell me whether the following examples are examples of undefined and implementation-defined constructs:
// Implementation defined:
1) unsigned char c2 = 1256; // implementation defined.
2) char ch = 'j'; char *p = ch; // implementation defined.
// Undefined behaviour:
1) num /= 0; // undefined behaviour.
2) int arr[2] = {0};
for(int i = 0; i < 4; i++) // undefined behaviour.
arr[i];
3) int arr[2] = {1, 2, 3}; // undefined behaviour. | https://www.daniweb.com/members/18401/jobe/posts | CC-MAIN-2018-47 | refinedweb | 2,087 | 72.36 |
The objective of this post is to explain how to synthesize simple speech using Python and the pyttsx module.
Installing the module
We will use pip to install the pyttsx module. Nevertheless, as indicated here, we first need to install the pywin32-extensions package. Although we can do it using an installer available here, the easiest way is to also install it via pip, using the following command:
pip install pypiwin32
Just as a quick explanation, pywin32 provides access to part of the Win32 API [1].
After this installation, we just need to install pyttsx with the following pip command:
pip install pyttsx
If, after the installation, you get the error below upon running an example, just restart the Python shell / editor and try to run it again [2].
ImportError: no module named win32api
Hello world program
First, we start by importing the previously installed pyttsx module.
import pyttsx
After the initial import, we call the init function to get an engine instance for speech synthesis [3]. If we don't pass any argument to the function, it will use the best driver for the operating system we are using [3].
voiceEngine = pyttsx.init()
Then, we call the say method on the engine instance, passing as input the text to be spoken.
voiceEngine.say('Hello World! This is a speech synthesis test.')
Finally, we call the runAndWait method, also on the engine instance, to process the voice commands [4]. Check the full code, already with this call, below.
import pyttsx

voiceEngine = pyttsx.init()
voiceEngine.say('Hello World! This is a speech synthesis test.')
voiceEngine.runAndWait()
Testing the code
To test the code, just run it, for example, in IDLE. The string defined in the code should now be synthesized and spoken, as indicated in the video below. Please note that different versions of the operating system may result in different voices, depending on the underlying speech synthesis engine. Also, in the video below, the voice sounds kind of robotized, probably due to my screen recording software. On my computer, the output is very clean.
Related content
References
[1]
[2]
[3]
[4]
Technical details
- Python version: 2.7.8
- Pyttsx version: 1.1