text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
I have a timestamp of form
[10/15/11 11:55:08:992 PDT] . . . log entry text . . .
I expect I can try the following specifier in props.conf file for the above Oct 10th 2011 date format:
TIMEPREFIX = ^.
MAXTIMESTAMPLOOKAHEAD = 22
TIMEFORMAT = %y/%d/%m %k:%M:%S
But for dates where the day of the month of log entry is less than 10 I hve something like:
[12/8/11 11:55:08:992 PDT] . . . log entry text . . .
My understanding is %d works for a two digit day format, but I don't see a good option when day can be two digits or a single non-padded digit day of month representation.
Suggestions?
I believe unfortunately that the "%e" opption still winds up with two characters.
Though lot of python tutorials do not mention it, when the day number is less than 10
"%e" seems to front pad with a blank, where "%d" frontpads with a zero.
As is born out by the folowing ksh and python script content and output.
#----------------------
#!/bin/ksh
# kshdatewithdand_e
# If current day of the month is greater than 9 then print date time out
# for the 9th of the month. Otherwise print out current date time
#
DAY=
date +%e
if [ $DAY -gt 9 ]
then
let BACK=$DAY-9
else
BACK=0
fi
date -d "$BACK days ago" +"%y/%d/%m %k:%M:%S"
date -d "$BACK days ago" +"%y/%e/%m %k:%M:%S"
# END
SAMPLE OUTPUT:
11/09/12 10:50:15
11/ 9/12 10:50:15
#----------------------
#!/usr/bin/python
# pythondatewithdand_e"
# Using hard coded date here
#
import time
t = (2011, 12, 9, 17, 3, 38, 1, 48, 0)
t = time.mktime(t)
print time.strftime("%y/%d/%m %k:%M:%S", time.gmtime(t))
print time.strftime("%y/%e/%m %k:%M:%S", time.gmtime(t))
# END
SAMPLE OUTPUT:
11/09/12 23:03:38
11/ 9/12 23:03:38
#----------------------
Unless splunk does something special for "%e" different than python or ksh,
it seems this would still not match for a single character day in date field
I have not had a chance to experiment further so is still conjecture on my part.
Yes - it does not do what it is supposed to do. I want to extract the day from "Aug 18 17:11:16" and "Aug 8 17:11:16". %e is not white space padded.
Hi, not that I've tried it, but
%e might work for you.
According to
%d - day of the month (01 to 31) %e - day of the month (1 to 31)
Hope this helps,
Kristian | https://community.splunk.com/t5/Getting-Data-In/How-do-I-configure-timestamp-extraction-where-day-may-be-one-or/td-p/32505 | CC-MAIN-2020-40 | refinedweb | 432 | 80.82 |
using which you can aggregate a related set of actions into a single unified
explaination
explaination can you give me the explaination of this coding.
import java.util.*;
public class StudentMarks{
double totalMarks;
String grade;
public void setTotalMarks(double totalMarks){
this.totalMarks=totalMarks;
}
public
struts - Struts
can we get the struts.jar files and could u give the exact link to get that jar...struts hi,
what is meant by struts-config.xml and wht are the tags... of struts, u understand wverything
goto google type for strut-blank.jar
Struts - Struts
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good explaination and example? thanks in advance. Hi Friend,
It is not thread
Login Action Class - Struts
Login Action Class Hi
Any one can you please give me example of Struts How Login Action Class Communicate with i-batis
Struts 2 Actions
request.
About Struts Action Interface
In Struts 2 all actions may implement... the Action interface and
simple POJO classes can be used as action.
Struts 2....
Different uses of Struts 2 Action
Struts 2 Action class can be used
struts
struts hi
Before asking question, i would like to thank you for clarifying my doubts.this site help me a lot to learn more & more technologies like servlets, jsp,and struts.
i am doing one struts application where i
Hi - Struts
://
Thanks. Hi Soniya,
We can use oracle too in struts...Hi Hi friends,
must for struts in mysql or not necessary... know it is possible to run struts using oracle10g....please reply me fast its
Single thread model in Struts - Struts
Single thread model in Struts
Hi Friends,
Can u... me. Hi
Struts 1 Actions are singletons therefore they must... for that Action. The singleton strategy restricts to Struts 1 Actions and requires
Struts - Struts
Struts Dear Sir , I m very new to struts how to make a program in struts and how to call it in action mapping to to froward another location....
thanks and regards
Sanjeev Hi friend,
For more information
Struts Built-In Actions
Struts Built-In Actions
... actions shipped with Struts APIs. These
built-in utility actions provide different...;
to combine many similar actions into a
single action
Struts2 Actions
.
However with struts 2 actions you can get different return types other than... generated by a Struts
Tag. The action tag (within the struts root node of ... Action interface
All actions may implement
Struts - Struts
Struts Hi,
I m getting Error when runing struts application.
i... for more information.
Thanks...
/WEB-INF/struts-config.xml
1
Struts Architecture - Struts
Struts Architecture
Hi Friends,
Can u give clear struts architecture with flow. Hi friend,
Struts is an open source...
developers to adopt an MVC architecture. Struts framework provides three key
Struts - Struts
Struts Hi All,
Can we have more than one struts-config.xml in a web-application?
If so can u explain me how with an example?
Thanks in Advance.. Yes we can have more than one struts config files..
Here we hi
can anyone tell me how can i implement session tracking... one otherwise returns existing one.then u can put any object value in session.... you can do like this....
session.setAttribute("",);
//finally u can remove
no action mapped for action - Struts
no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld
struts - Struts
struts Hi,
i want to develop a struts application,iam using eclipse... for struts ,its well and good
by the way one point i should say you can put...)If possible explain me with one example.
Give me reply as soon as possible.
Thank
Writing Actions
Writing Actions
The Action is responsible for controlling of data flow within an application.
You can make any java class as action. But struts have some... using struts you can either user Action interface or
use ActionSupport class
struts Hi,
Here my quation is
can i have more than one validation-rules.xml files in a struts application
struts - Struts
the form tag...
but it is giving error..
can you please give me solution...struts hi..
i have a problem regarding the webpage
in the webpage... is taking different action..
as per my problem if we click on first two submit
Struts Tutorial
, Architecture of Struts, download and install struts,
struts actions, Struts Logic Tags... :
Struts provides the POJO based actions.
Thread safe.
Struts has support... the
information to them.
Struts Controller Component : In Controller, Action class
Struts Dispatch Action Example
function. Here in this example
you will learn more about Struts Dispatch Action... Struts Dispatch Action Example
Struts Dispatch Action
Java - Struts
in DispatchAction in Struts.
How can i pass the method name in "action...****
Please give me the suggestion.
And i have one more doubt i.e. how can i use more the one action button in the form using Java script.
please give me
Implementing Actions in Struts 2
Implementing Actions in Struts 2
Package com.opensymphony.xwork2 contains... in this method. When an
action is called the execute method is executed. You can...;roseindia" extends="struts-default">
<action name="
Hi.. - Struts
Hi..
Hi Friends,
I am new in hibernate please tell me.....if i am using hibernet with struts any database pkg is required or not.....without any database package using maintain data in struts+hiebernet....please help
Hi... - Struts
Hi... Hi,
If i am using hibernet with struts then require... more information,tutorials and examples on Struts with Hibernate visit... of this installation Hi friend,
Hibernate is Object-Oriented mapping dispatch action - Struts
Struts dispatch action i am using dispatch action. i send the parameter="addUserAction" as querystring.ex:
at this time it working fine... not contain handler parameter named 'parameter'
how can i overcome can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">...*;
import org.apache.struts.action.*;
public class LoginAction extends Action
Struts Forward Action Example
..
Here in this example
you will learn more about Struts Forward Action... Struts Forward Action Example
...). The ForwardAction is one of the Built-in Actions
that is shipped with struts framework
Struts - Framework
to learn
and can u tell me clearly sir/madam? Hi friend... using the View component. ActionServlet, Action, ActionForm and struts-config.xml...Struts Good day to you Sir/madam,
How can i start how to set value in i want to set Id in checkBox from the struts action. Hi friend,
For more information,Tutorials and Examples on Checkbox in struts visit to : <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>...;gt;
<html:form
<pre>
Struts - Struts
Struts Can u giva explanation of Struts with annotation withy an example? Hi friend,
For solving the problem visit to :
Thanks
Struts 1 Tutorial and example programs
.
- STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS...;
Aggregating Actions In Struts Revisited -
In my previous article Aggregating Actions in Struts , I have given a brief idea of how
Hi - Struts
Hi Hi Friends,
Thanks to ur nice responce
I have sub package in the .java file please let me know how it comnpile in window xp please give the command to compile LookupDispatchAction Example
;
Struts LookupDispatch Action... function. Here in this example you will learn more about Struts LookupDispatchAction... in the action tag through struts-config.xml file). Then this matching key is mapped... please give some idea for installed tomcat version 5 i have already tomcat 4.1 please help me. its very urgent Hi friend,
Some points to be remember
Struts - Struts
compelete code.
thanks Hi friend,
Please give details with full....
Struts1/Struts2
For more information on struts visit to : Hello
I like to make a registration form in struts inwhich
different kinds of actions in Struts
different kinds of actions in Struts What are the different kinds of actions in Struts
if we have 2 struts-config files - Struts
same perameter mapping to differet action classes what will happen? Hi,
No we cannot have 2 struts-config.xml files. In one case we can use 2 struts-cofig.xml files with some minor change in the file name. i e we can use
Struts - Struts
.
Thanks in advance Hi friend,
Please give full details with source code to solve the problem.
For read more information on Struts visit...Struts Hello
I have 2 java pages and 2 jsp pages in struts
Calling Action on form load - Struts
Calling Action on form load Hi all, is it possible to call... Hi friends,
When the /editRegistration action is invoked... this attribute and set it to false .
Hi friends,Yes. If your Action does not need any data
Struts Tag Lib - Struts
Struts Tag Lib Hi
i am a beginner to struts. i dont have... use the custom tag in a JSP page.
You can use more than one taglib directive..., sun, and sunw etc.
For more information Dear Sir ,
I am very new in Struts and want to learn about validation and custom validation. U have given in a such nice way to understand but little bit confusion,Plz could u provide the zip for address
multiboxes - Struts
onclick event is coded in javascript)...
Can u please give me a solution either in javascript code or in struts bean. Hi friend,
Code to solve
Struts
Struts in struts I want two struts.xml files. Where u can specify that xml files location and which tag u specified
struts - Struts
of a struts application Hi friend,
Code to help in solving... are different.So,
it can be handle different submit button :
class MyAction extends... super.execute();
}
}
For more information on struts2 visit to :
http
action tag - Struts
action tag Is possible to add parameters to a struts 2 action tag? And how can I get them in an Action Class. I mean: xx.jsp Thank--
.
-----------------------------------------------
Read for more information.... application,and why would you use it? Hi mamatha,
The main aim of the MVC... all the operations that can be applied to transform that object. It only
Developing Struts Application
outline of Struts, we can
enumerate the following points.
All requests... of the servlets or Struts Actions...
All data submitted by user are sent... it to a specified instance of Action
class.(as specified in struts-config.xml
struts
struts hi
i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same.
thanks Please visit the following link:
Struts Tutorials
MVC - Struts
MVC CAN ANYONE GIVE ME A REAL TIME IMPLEMENTATION OF M-V-C ARCHITECTURE WITH A SMALL EXAMPLE...... Hi friend,
Read for more information.
Thanks
struts
struts i have no any idea about struts.please tell me briefly about struts?**
Hi Friend,
You can learn struts from the given link:
Struts Tutorials
Thanks
java - Struts
friend.
what can i do.
In Action Mapping
In login jsp
Hi friend,
You change the same "path" and "action" in code :
In Action Mapping
In login jsp
For read more information on struts
java - Struts
java how can i get dynavalidation in my applications using struts... in the Struts config :
*)The Form Bean can be used...:
For more information on struts visit to :
struts
struts hi
i would like to have a ready example of struts using"action class,DAO,and services"
so please help me
Struts
Struts How Struts is useful in application development? Where to learn Struts?
Thanks
Hi,
Struts is very useful in writing web... applications.
You can learn Struts at our Struts tutorials section.
Thanks
struts
struts Hi,
1) can we write two controller classes in struts... is not valid? Can I access DB from processPreprocess method.
Hi... will abort request processing.
For more information on struts visit
configuration - Struts
configuration Can you please tell me the clear definition of Action....
Action class:
An Action class in the struts application extends Struts...://
Struts(1.3) action code for file upload
Struts(1.3) action code for file upload Hi All,
I want to upload... application using HttpUrlConnection.
How can i write my struts(1.3) action code... is post to struts action.
Thanks
Hi,
thanks for a quick reply
validation - Struts
validation Hi,
Can you give me the instructions about... single time only.
thank you Hi friend,
Read for more information,
Thanks how to handle exception handling in struts reatime projects?
can u plz send me one example how to deal with exception?
can u plz send me how to develop our own exception handling... as validation disabled
plz give me reply as soon as possible. Hi friend...
---------------------------------
Visit for more information.
Struts Books
are rolling, you can get more details from the Jakarta Struts documentation or one... for more experienced readers eager to exploit Struts to the fullest.
... Edition maps out how to use the Jakarta Struts framework, so you can solve
Struts example - Struts
Struts example how to run the struts example Hi,
Do you know the structure of the struts?
If you use the Eclipse IDE,you can easily... for more information:
Thanks
Error - Struts
Error Hi,
I downloaded the roseindia first struts example... create the url for that action then
"Struts Problem Report
Struts has detected.... If you can please send me a small Struts application developed using eclips.
My
exception - Struts
:
can anybody help me
regards,
Sorna
Hi friend,
Here...exception Hi,
While try to upload the example given by you in struts I am getting the exception
javax.servlet.jsp.JspException: Cannot
Why Struts 2
. Struts
2 tags are more capable and result oriented. Struts 2 tag markup can... core interfaces are HTTP
independent. Struts 2 Action classes...;
- Actions are simple POJOs. Any java class with execute() method can
be used
struts internationalisation - Struts
struts internationalisation hi friends
i am doing struts... problem its urgent Hi friend,
Plz give full details and Source code to solve the problem :
For more information on struts visit
Struts Tutorials
types of cleanup you can do to improve your Struts configurations.
Prerequisites... application programming.With the Validator, you can validate input in your Struts... module based configuration. That means we can have multiple Struts configuration
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/4312 | CC-MAIN-2016-18 | refinedweb | 2,409 | 67.86 |
Python: decorate __call__ and normal function with the same decorator?
Is it possible in Python 2.6-2.7 to use the same decorator for the following task:
class ComplextCallableObject(object): def __call__(self, a1, a2): pass def simple_function(a1, a2): pass
Both
ComplextCallableObject.__call__
and
simple_function
have the same arguments, but
__call__
also has one
self
for the first arg. In the decorator,
wrap_me
I need access for the wrapped args functions.
source to share
Unfortunately, at the time of definition (a class block in this case), the code cannot determine how the function will be used other than by naming convention. Modify your example a bit:
class ComplextCallableObject(object): def __call__(self, a1, a2): pass #... def simple_function(tgt, a1, a2): pass ComplextCallableObject.anInstanceMethod = simple_function ComplextCallableObject.anClassMethod = classmethod(simple_function) ComplextCallableObject.aStaticMethod = staticmethod(simple_function)
In this case, it
simple_function
implements a function that takes a target and two parameters, an instance method that takes two parameters, a class method that takes two parameters, and a static method that takes a target and two parameters. But this usage is not related only after the function is defined. Both
staticmethod
and
classmethod
return a different type of object, so you can talk about it if needed.
If you want to use a convention, you can check the name of the first argument to the function to see if it is
self
:
def wrap_me(fn): names = fn.func_code.co_varnames if names and names[0]=='self': print 'looks like an instance method' else: print 'looks like a function' return fn
source to share
def wrap_me(*args): a1, a2 = args if len(args) == 2 else args[1:] ...
Of course, you will need to modify the return function to accept an argument
self
if
wrap_me
called in a class method as well.
By the way, if you are not using
self
inside the function you are decorating, it really should be a static method.
source to share
This can be pretty silly, but it can work in the simplest case too:
In [1]: def wrap_me(func): ...: def wrapped(*args): ...: print 'arg1 is', args[-2] ...: print 'arg2 is', args[-1] ...: func(*args) ...: return wrapped ...: In [2]: class ComplexCallableObject(object): ...: @wrap_me ...: def __call__(self, a1, a2): ...: print 'class object called' ...: In [3]: @wrap_me ...: def simple_function(a1, a2): ...: print 'function called' ...: In [4]: simple_function('A', 'B') arg1 is A arg2 is B function called In [5]: o = ComplexCallableObject() In [6]: o('A', 'B') arg1 is A arg2 is B class object called
source to share | https://daily-blog.netlify.app/questions/1891523/index.html | CC-MAIN-2021-43 | refinedweb | 416 | 65.52 |
On Monday 12 March 2012, Rafael J. Wysocki wrote:> Please pull Renesas SoC updates for v3.4 since commit> fde7d9049e55ab85a390be7f415d74c9f62dd0f9> > Linux 3.3-rc7> > with top-most commit 2854903ad1329d09d7ec35639fff0949e45d496d> > ARM: mach-shmobile: default to no earlytimer> > from the git repository at:> > git://git.kernel.org/pub/scm/linux/kernel/git/rafael/renesas.git soc> > They include:> > * The rename of shmobile struct clk_ops to struct sh_clk_ops to avoid> possible future name space collision with common struct clk code.> > This also affects drivers that are shared with the sh architecture,> so the branch containing this part of the material, clk_ops-rename,> will be merged into the Paul Mundt's sh tree if necessary.> > * Introduction of L2 Cache support for r8a7779.> > * Conversion of the mach-shmobile subarch to properly use a per-SoC> map_io and separate init_early callback for early serial console> support on platforms where that is possible.> > Magnus Damm is the author of all the changes.> Thanks for rebasing this, Olof will merge this soon. Note that the__io() issue has turned out to be more urgent than I first thoughtwhen we discussed it, so it would be good to apply the patch belowon top of your series. Arnd8<-----ARM: shmobile: remove additional __io() macro usesetup-r8a7779.c has grown a new user of the __io() macro. Rob Herring'sPIO cleanup series already gets rid of all other uses in shmobile, sowe should ensure that this one gets removed as well.Signed-off-by: Arnd Bergmann <arnd@arndb.de>diff --git a/arch/arm/mach-shmobile/setup-r8a7779.c b/arch/arm/mach-shmobile/setup-r8a7779.cindex ce57d90..9545d82 100644--- a/arch/arm/mach-shmobile/setup-r8a7779.c+++ b/arch/arm/mach-shmobile/setup-r8a7779.c@@ -246,7 +246,7 @@ void __init r8a7779_add_standard_devices(void) { #ifdef CONFIG_CACHE_L2X0 /* Early BRESP enable, Shared attribute override enable, 64K*16way */- l2x0_init(__io(0xf0100000), 0x40470000, 0x82000fff);+ l2x0_init((void __iomem __force *)(0xf0100000), 0x40470000, 0x82000fff); #endif r8a7779_pm_init(); | http://lkml.org/lkml/2012/3/13/255 | CC-MAIN-2016-30 | refinedweb | 317 | 57.37 |
Next: cpg-ref-SP_get_number_codes, Previous: cpg-ref-SP_get_list_n_bytes, Up: cpg-bif
SP_get_list_n_codes()
#include <sicstus/sicstus.h> int SP_get_list_n_codes(SP_term_ref term, SP_term_ref tail, size_t n, size_t *w, char *s);
Copies into
s the encoded string representing the
character codes in the initial elements of list
term, so
that at most
n bytes are used. The number of bytes actually
written is assigned to
*w.
tail is set to the remainder
of the list. The array
s must have room for at least
n bytes.
Please note: The array
s is never
NUL-terminated. Any zero character codes in the list
term will be converted to the overlong UTF-8 sequence
0xC0 0x80.
Zero if the conversion fails (as far as failure can be detected), and a nonzero value otherwise.
Accessing Prolog Terms. | https://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/cpg_002dref_002dSP_005fget_005flist_005fn_005fcodes.html | CC-MAIN-2015-18 | refinedweb | 131 | 57.87 |
Im trying to write a selection sort with ascending and descending options
selection sort pseudocode
insertion sort
selection sort program in c
analysis of selection sort
challenge: implement swap khan academy answers
asymptotic runtime of selection sort
selection sort in c++
I have a selection sort method to sort my objects by there year variable. I got it working to sort in ascending order, I can't seem to get the descending order working. It would be awesome if somebody could look at the code and possibly point me in the right direction
public static void sortYears(ArrayList<Movies3> list, int ad){ int max, min, i, j; Movies3 temp; if(ad == 1){ for (i = 0; i < list.size() - 1; i++){ max = i; for (j = i + 1; j < list.size(); j++){ if (list.get(max).getYear() > list.get(j).getYear()){ max = j; } } temp = list.get(i); list.set(i, list.get(max)); list.set(max, temp); } }else if(ad == 2){ for (i = 0; i < list.size() - 1; i++){ min = i; for (j = i + 1; j > list.size(); j++){ if (list.get(min).getYear() < list.get(j).getYear()){ min = j; } } temp = list.get(i); list.set(i, list.get(min)); list.set(min, temp); } } }
for (j = i + 1; j > list.size(); j++){
The predicate should be
j < list.size(); instead of
>, otherwise your loop would never iterate as
i+1 always
<=n, so
j is always
<=n
Sorting (article) | Selection sort, Sorting a list of items into ascending or descending order can help either a human or a I am having trouble with the second step of he implement swap challenge. If that doesn't make sense, keep looking, and break the code into parts. so when you try to modify temp , you are really modifying the array that temp is Go uses the testing package to write tests and benchmarks. These are the results for benchmarks of the various selection sort functions. Each benchmark sorts an array of 1,024 random integers from descending to ascending order and then sorts it from ascending to descending order.
Replace direct comparisons like
list.get(max).getYear() > list.get(j).getYear() with a Comparator:
comparator.compare(list.get(max).getYear(), list.get(j).getYear()) > 0
You can then easily achieve inverted sorting with Comparator.reversed()
Selection sort pseudocode (article), This algorithm is called selection sort because it repeatedly selects the to see each step of the algorithm, and then try "Automatic" once you understand it to see the One of the steps in selection sort is to find the next-smallest card to put into its I'm sure you could do it, but there's a better way. Show formatting options. Selection Sort is an algorithm that works by selecting the smallest element from the array and putting it at its correct position and then selecting the second smallest element and putting it at its correct position and so on (for ascending order).
Your variable names and scopes really confusing, a lot of duplicated code.
for (j = i + 1; j > list.size(); j++) - this line of code will never be executed in majority of cases.
This s a fix for your descending order:
// the same walk as for ASC but reversed comparison for (int i = 0; i < list.size() - 1; i++) { candidateIndex = i; for (int j = i + 1; j < list.size(); j++) { if (list.get(candidateIndex).getYear() < list.get(j).getYear()) { candidateIndex = j; } } temp = list.get(i); list.set(i, list.get(candidateIndex)); list.set(candidateIndex, temp); }
You definitely need to look at Comparator:.
I will write a full example using comparators:
import java.util.ArrayList; import java.util.Arrays; import java.util.Comparator; import java.util.List; public class Main { /** * Defining comparator for ascending order by default */ public static final Comparator<Movies3> COMPARATOR = (m1, m2) -> m1.getYear() - m2.getYear(); public static void main(String[] args) { List<Movies3> movies = new ArrayList<>( Arrays.asList(new Movies3(1990), new Movies3(1995), new Movies3(2000))); sortYears(movies, true); System.out.println(movies); sortYears(movies, false); System.out.println(movies); } public static void sortYears(List<Movies3> list, boolean asc) { int candidateIndex; // index of candidate whatever min or max Movies3 temp; Comparator<Movies3> comparator; if (asc) { comparator = COMPARATOR; } else { comparator = COMPARATOR.reversed(); // switch to DESC order } for (int i = 0; i < list.size() - 1; i++) { candidateIndex = i; for (int j = i + 1; j < list.size(); j++) { if (comparator.compare(list.get(candidateIndex), list.get(j)) > 0) { candidateIndex = j; } } temp = list.get(i); list.set(i, list.get(candidateIndex)); list.set(candidateIndex, temp); } } }
Output:
[year 1990, year 1995, year 2000] [year 2000, year 1995, year 1990]
You can also let your class implement Comparable to define natural ordering for it and use it instead of
Comparator.
6.4, Write the starting (unsorted) array elements horizontally at the top of the paper. By default, std::sort sorts in ascending order using operator< to Rewrite the selection sort code above to sort in descending order (largest numbers first). I'm trying to have some code mix up the array so I can sort it again, Selection sort can work either way - the normal sort is ascending value (a-z), but descending can be done by a simple change of the comparison it makes. In the first example, it found the lowest char in the word, and sorted each word. In the second example, it finds the lowest remaining word, in the list of words.
I suggest you that your class Movies3 must implement the interface Comparable and use the sort method of the java class List and create a custom Comparator. I think is the better and more elegant way to do it.
It could be something like this:
For the Movie3 class
public class Movie3 implements Comparable<Movie3> { private int year; private String author; private String genre; public Movie3(int year, String author, String genre) { super(); this.year = year; this.author = author; this.genre = genre; } /** * @return the year */ public int getYear() { return year; } /** * @param year the year to set */ public void setYear(int year) { this.year = year; } /** * @return the author */ public String getAuthor() { return author; } /** * @param author the author to set */ public void setAuthor(String author) { this.author = author; } /** * @return the genre */ public String getGenre() { return genre; } /** * @param genre the genre to set */ public void setGenre(String genre) { this.genre = genre; } public String toString(){ StringBuilder sb = new StringBuilder(); sb.append("Year: "+this.getYear()); sb.append("Author: "+this.getAuthor()); sb.append("Genre: "+this.getGenre()); return sb.toString(); } public int compareTo(Movie3 m) { return Integer.compare(this.year, m.year); } }
On the other hand the Custom comparator is simply:
import java.util.Comparator; public class MovieYearComparator implements Comparator<Movie3> { private boolean reverse; public MovieYearComparator(boolean reverse) { super(); this.reverse = reverse; } @Override public int compare(Movie3 m1, Movie3 m2) { if (reverse) return m1.getYear() < m2.getYear() ? 1 : m1.getYear() == m2.getYear() ? 0 : -1; else return m1.getYear() < m2.getYear() ? -1 : m1.getYear() == m2.getYear() ? 0 : 1; } }
And finally the test:
import java.util.ArrayList; import java.util.List; import data.Movie3; import data.MovieYearComparator; public class test { public static void main(String[] args) { // TODO Auto-generated method stub List<Movie3> movies = new ArrayList<Movie3>(); movies.add(new Movie3(1000,"sds","sdf")); movies.add(new Movie3(1001,"sds","sdf")); movies.add(new Movie3(2001,"sds","sdf")); movies.add(new Movie3(2444,"sds","sdf")); movies.add(new Movie3(1002,"sds","sdf")); movies.add(new Movie3(1003,"sds","sdf")); System.out.println(movies.toString()); boolean reverse = true; movies.sort(new MovieYearComparator(!reverse)); System.out.println(movies.toString()); movies.sort(new MovieYearComparator(reverse)); System.out.println(movies.toString()); } }
Selection sort algorithm with increasing/decreasing sort options , The problem is that in your inner loop, you make that index-loop Here's an example of Go code for a basic selection sort algorithm (benchmarked as peterSO): to ascending order and then sorts it from ascending to descending order. Let's try to formulate when 2 elements have to be swapped inside the C Program for Selection Sort is used to read the array from user using for loop and sort it in ascending order using Selection sort algorithm and prints it.. Example: int f = first(temp, 0, m - 1, A2[i], m); Sort the rest of the numbers present in HashMap and put in output array. Selection sort in descending order. we cannot work out from that little what you are trying to do. sorting grid data in ascending and descending order. Advertise
Selection Sort, Value to be subtracted from array elements to make sum of all elements equals Find the winner of the Game · Rank of all elements in a Stream in descending The selection sort algorithm sorts an array by repeatedly finding the minimum In every iteration of selection sort, the minimum element (considering ascending 2 Im trying to write a selection sort with ascending and descending options May 3 '18 0 Assert statement with char values Jun 7 '18 0 How would I call a constructor of an abstract class inside of an extending class constructor Apr 16 '18
Sorting in Python: Selection & Merge Sort, Output: The X array sorted in descending or ascending order. In this lesson, we will present two sorting algorithms: A) Selection sort,. B) Merge sort. For each The Sort Text dialog box. In the Sort Type drop-down list, select the type of sorting you want Word to perform. For instance, if the first information in your text list represents a date, you would choose Date as the Sort Type. Using the radio buttons, indicate whether the sort should be Ascending or Descending. Click on OK.
- The AD variable is ascending or descending, 1 = ascending and 2 = descending
- In your 2nd if-statement you're using
for(j = i + 1; j > list.size(); j++)this line alone will evaluate to
falsebecause j will most likely be lower than the size of the list in most cases and therefore the whole loop won't be executed.
- Why don't you use Comparator?
- instead of passing
reverseflag you can use
Comparator.reverseOrder()
- Thanks Ruslan. I was looking for something similar at the java class List. | https://thetopsites.net/article/50152773.shtml | CC-MAIN-2021-25 | refinedweb | 1,683 | 56.55 |
Wikiversity:Colloquium/archives/September 2008
From Wikiversity
Official
Wikipedia discussion
We've run across a class of editors at Wikipedia who want to do something with the software. I've explained to the professor of the class that Wikiversity is a free textbook project and that there are other services like ScribbleWiki for just having one's own Wiki. He seems to have taken an interest in Wikiversity though and has asked some rather specific questions. Could someone better acquainted with this project stop over there w:en:Wikipedia:Administrators'_noticeboard#It.27s_that_time_of_year_again.2C_more_college_classes_to_keep_an_eye_on... and explain things to him? Thanks. MBisanz 08:56, 3 September 2008 (UTC)
- I left a message over there, but that thread is hellishly long and I couldn't really connect together what you were saying with what the thread says. Perhaps the course facilitator could post a short summary of any outstanding questions here? --McCormack 09:09, 3 September 2008 (UTC)
- "Wikiversity is a free textbook project" <-- that sounds like Wikibooks. Wikiversity has a much broader mandate...Wikiversity participants are free to explore new ways to use wiki technology to promote learning. We have Topic:Sandbox Server 0.5 and a sandbox wiki for software experiments. --JWSchmidt 14:56, 3 September 2008 (UTC)
I never wrote "textbook project"; what I meant was lecture notes or class notes, not textbook. The quotation above ("Wikiversity is a free textbook project") was not by me. There is a misunderstanding here. (I am real busy so I don't always monitor the wiki pages, unless some students inform me about some messages) Eml4500.f08 20:58, 3 September 2008 (UTC)
- Unless you have a specific reason for wanting to use Wiki markup as your scripting language, you might be better off using Google Docs, Google Knol, or Blogger. I have found Google Knol to be very easy to use. —Moulton 21:06, 3 September 2008 (UTC)
- Knol and wikiversity are very different in their basic design, structures, purposes and communities. In particular, Knol follows a rigid forum structure and does not allow editors structure the website or its contents in the way they want. As in the google tradition of emphacising stronger search in favour of better structure, there are no categories, subpages or templates which are instrumental in our community interactions. Hillgentleman|Talk 01:32, 4 September 2008 (UTC)
- If, as the professor says above, all the students require is a place to write lecture notes or class notes (as opposed to constructing a full-blown online course), then Wiki is probably not the best structure for that limited purpose. I use Blogger for writing up notes on some episodic lesson. —Moulton 01:52, 4 September 2008 (UTC)
The two courses by one instructor are now at WikiVersity. See User:Eas4200c.f08 and User:Eml4500.f08. The learning resources are to be created over a series of separate classes. This first class will probably be more about the course than creating learning resources, but as the course is repeated over and over, the learning resources should become a greater and greater part of the projects. I would like to be sure that the instructor and students understand that everything they write here is copy-left. Has anyone verified that is so? WAS 4.250 22:34, 3 September 2008 (UTC)
- The professor has his own mediawiki wiki for personal use, so I suspect he has a good understanding of the wiki way (since the GFDL ships with all copies of the software). MBisanz 02:27, 4 September 2008 (UTC)
Greetings everyone. I quote here my first answer to an inquiry over at wikipedia (Eml4500.f08 (talk) 15:30, 2 September 2008 (UTC)) for your info in case you did not see it: "... some goals related to wikipedia are (1) introduction to the use of wikipedia for learning and research, (2) to train future contributors to wikipedia, and (3) to create and develop open course contents for wiki sites such as wikiversity." You can change "wikipedia" to "wikipedia/wikiversity".
Yes, I do know the wiki way, and want to train students to use mediawiki in general as a tool to collaborate and then to contribute to wikipedia and wikiversity. Another advantage of learning mediawiki is to learn to write equations in the latex way (almost) for those interested in using latex, which I use almost everyday.
And yes, I can install mediawiki as I did for my mediawiki site, but I don't have time and resources to do this installation and the system management (back up the databases, managing accounts, find a computer and set up a backup server in case my main server goes down, etc.). As I said, I am real busy with many tasks to do beside teaching these two courses with close to 200 students. Occasionally, I would drop by this wiki page when necessary... Thank you. Eml4500.f08 13:13, 4 September 2008 (UTC)
- Please Be bold and let your students edit pages in the Wikiversity main namespace. A copyright violation can be corrected just as easily on a main namespace page as on a user subpage. I doubt if there is a way to prevent your students from editing the main namespace. Anyhow, your students are welcome to edit in the main namespace as long as they are learning. --JWSchmidt 15:53, 4 September 2008 (UTC)
Message from the editor
I got an eMail from the editor, Erkan Yilmaz, But there doesn't seem to be a return eMail address, unless I just send it to wiki@wikimedia.org.
I go to the websites listed in the email: mail: >> a general page about Wikiversity. wiki: >> a general page about Erkan
How do I send a response to Erkan? --IHSscj 21:36, 2 September 2008 (UTC)
- You can write here or at User talk:Erkan Yilmaz or email. I guess this is about this comment (you can also reply there which is best, so others don't ask again same questions in the future again) ? Or you can contact me also by chat (see signature), ----Erkan Yilmaz uses the Wikiversity:Chat (try) 21:41, 2 September 2008 (UTC)
I was tryiong to respond to your request: "perhaps you could provide more info about the edits ?" What info were U thinking of? Since U sent that eMail, I have finished editing the document, except that it still needs the Appendix. The Appendix, a speech by Willie Lynch, is available on a website, so that will present problems w/ permission. But they might just give us (the author & I) permission.
- As discussed: please contact the OTRS team. Thanks, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:52, 7)
Separation of Church and State
I was just thinking about the whole "anti-ID cabal" issue that Moulton and JWSchmidt have gotten into lately, as well as the Darwinism as Religion and related projects, and this got me to thinking about religion and secularism in general. I think it might be interesting to have a Separation of Church and State project where issues of religion vs. secularity are discussed; what is the ideal balance? What is proper "separation"? Examples: In France, it is illegal to cover your head in school because this violates the separation of Church and State in France. In Canada, it's illegal for Muslim women to vote covered; in both of these cases, Secularism trumps Freedom of Religion. The issue is not so clear, however: In the United States, religious groups regularly participate in politics, and political figures regularly talk about their religious persuasion. Congress prays, and the currency says "In God We Trust". In Switzerland, they would call this a lack of separation of Church and State. However, in Switzerland, certain ministers of religion are paid by the Swiss government; they are paid out of funds collected specially from religious tax-payers, but they are on the government's payroll. In the United States, this would be seen as a lack of separation of Church and State. In both countries, some of the things done in France and elsewhere are considered violations of the Freedom of Religion. At what point do Church and State become too separated? Has China gone too far in forbidding public sharing of most religion, going so far to arrest and detain those who have pictures of the Dalai Lama? At what point do the two become too conflated? Has Iran gone too far in upholding Sharia law by sentencing to death a group of converts to Christianity? These are all questions I'd like to explore, but I don't feel I could do another project justice at the moment; I already have my hands full. The Jade Knight 08:04, 6 September 2008 (UTC)
- Please see this article on The Separation of Church and State, which I wrote in the wake of the Kelo Decision a few years ago. —Moulton 11:37, 6 September 2008 (UTC)
- You seem to highly value your own work, I've noticed. The Jade Knight 13:44, 6 September 2008 (UTC)
- It's just an observation. I write, of course, but when it comes to learning materials I'm a bit more eclectic. The Jade Knight 01:39, 7 September 2008 (UTC)
- "Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship, and observance." I suppose some people adopt the position that freedom of religion means they have the right and duty to kill, but does freedom of religion "trump" other social obligations? Another example is polygamy. If polygamy is your "religious practice" does that mean you can practice it in a country where secular law makes polygamy illegal? --JWSchmidt 14:20, 6 September 2008 (UTC)
- Well, isn't that the question, though? What about illegal drug use? Sometimes exemptions are given to religions (for example—alcohol laws and Communion/Eucharist), sometimes they are not (polygamy is a case in point). What makes this more difficult for a Christian society is the fact that the Bible has a clear example of religiously motivated civil disobedience being praised. But moreover, what should the law be? What is proper separation? What is not? When and why should religious exemption to law be given? The Jade Knight 01:39, 7 September 2008 (UTC)
- What assumptions? It is a simple question. Perhaps you are asking, what are my assumptions about the purpose of the law, or the role of religion, or the inherent ethicality of the interaction of religion and government? I do not understand. The Jade Knight 12:33, 7 September 2008 (UTC)
- Your question is a simple one to state, but it's only meaningful and answerable if the underlying assumption is valid. I put it to you that there is an assumption underlying your simple question that you have not recognized or reckoned. Were you to do that in a conscientious manner, I dare say you would be obliged to withdraw your question in favor of a more meaningful an interesting one. —Moulton 01:37, 8 September 2008 (UTC)
- Right now I'm only waiting for you to tell me what this underlying assumption is which I have not recognized or reckoned. The Jade Knight 02:32, 8 September 2008 (UTC)
- Your question, "What should the law be?" assumes that the architecture of humankind's socio-cultural regulatory structure should be expressed in terms of rules or laws. If you ask, "What should the architecture the regulatory structure be?" you will no longer get the same answer that occurred to Hammurabi of Babylonia some 3750 years ago. See, for example, this analysis of the fundamental flaw in Hammurabi's idea. If you prefer a less scholarly treatment of the same insight, see this muse instead. —Moulton 11:17, 8 September 2008 (UTC)
- The discussion: should mankind be ruled by governments or even the rule of law? is beyond the scope of the question I'm asking here; certainly, if there were no laws, there would be no laws for or against the establishment or religion. However, there are laws, and will continue to be laws for the foreseeable future. Therefore, the underlying assumption is certainly valid for my purposes; I am not asking what should be in a lawless Utopian society, but rather what should be within the context of current political frameworks, and perhaps more appropriately, how beneficial and/or effective are the various positions on this issue found in various national models? The Jade Knight 11:39, 8 September 2008 (UTC)
- The answer to your question, as originally stated, is "Mu." The Assumption of Hammurabi is now known to be an utterly idiotic relic of the past. Your better question, "What should be within the context of current political frameworks?" does have an answer. What should be developed within the context of current political frameworks is a forward-looking program to evolve humankind's anachronistic and demonstrably dysfunctional socio-cultural regulatory system to a functional regulatory model, grounded in academically defensible theory and practice. —Moulton 11:58, 8 September 2008 (UTC)
- I'm not sure the Maoist/Bolshevik model is really a separation of church and state, at least not in the sense used in the US. --SB_Johnny talk 13:05, 7 September 2008 (UTC)
Something like 2500 years ago Herodotus observed that "Custom (culture) is king of all". Logic and evidence will not change what is acceptable or not acceptable in various countries with regard to religion versus government. In fact in most of the world at most times, religion and culture have been so intermixed as to be indistinguishable. Is bowing one's head while praying religion or culture? Is Sunday in the US being part of the weekend religion or culture? Revealed religions have some parts of culture that are due to religious texts, but even then culture picks and chooses which parts of revelations "from God" to respect and which to ignore. The Bible clearly says to not allow witches to live. Seen anyone killing people who declare themselves to be a witch lately? Thought not. WAS 4.250 00:15, 8 September 2008 (UTC)
- We still have witch hunts. We just don't crush them under a ton of stones anymore. Now we crush them under a ton of words. —Moulton 01:37, 8 September 2008 (UTC)
- I thought Herodotus was speaking about habit, rather than culture? At any rate, I think religion becomes culture, and sometimes culture can become religion, as well, though I think it's more the former than the latter. But concepts of separation of Church and State I think are largely based on tradition, and perhaps just as much, fear, in the US at the moment.. The Jade Knight 02:32, 8 September 2008 (UTC)
- The observation that culture becomes religion can be seen in cultures like the English Language Wikipedia, where the culture has regressively evolved into the Internet's premier Massive Multiplayer Online Narcissistic Wounding and Mugging Game. —Moulton 11:17, 8)
Comment from User:Paul
My name is Paul and I would like to tell you a little about myself and how I got involved in Abaya School in Ethiopia. Two years ago I was invited to help with the setting up and running of the Arba Minch Festival of a Thousand Stars. Arba Minch is a city located in the province SNNPS (Southern Nations Nationalities and People's State) in South Ethiopia. Through my involvement in the festival I got to know the head teacher, staff and pupils of Abaya School, also in Arba Minch. I returned the following year to see if I could help the school and discovered that they have 2210 pupils and fewer books than I have in my office. I did what I was able to do during that visit which was a drop in the ocean. I managed to get a few hundred random books shipped to them with the help of the British Council and bought them some much needed equipment. Books, however, are not practical due to the remoteness of the region and the high cost of shipping. I promised to return this year and try to take the project forward. I have invested in a 320MG portable drive and my idea was to fill it with Wikiversity and related content. Teachers are currently relying on memories of books they borrowed during their training to develop lessons. My vision is to make Wikiversity available to them. They have two good computers and I am hoping to take some laptops with me. There is no internet connection and one is probably not going to be available in the foreseeable future due to political constraints, and technical problems like the lack of phone lines and unreliable electricity source. I am not well educated myself and my computer skills are somewhat limited. For this reason I need help in deciding what to download and how to achieve it.
Retrieved from ""(The preceding unsigned comment was added by Paul (talk • contribs) .)
- Great question and need. Perhaps this should be created as a separate project. Wikiversity is a very promising place, but also quite nascent. Given the practical needs here, I wonder whether to start with some Wikipedia CDs might be useful resources? w:Wikipedia:Wikipedia-CD/Download. I would like to think that the Wikiversity community could make a practical difference, Paul. Please let us know anything we can do. -- Jtneill - Talk - c 11:21, 9 September 2008 (UTC))
Was now added to Wikimania 2009, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:44, 9 September 2008 (UTC)
Lets become a researcher
You dont need to have several digrees or to know a lot. Now you can participate in the project of Plant tissue culture lab and became a researcher. Just study a few pages on Wikipedia. The aim of this project to offer the participants to discover what is a science and and the same time to discover undiscovered things. Dont be affraid and come, the biorobot will do everything for you, you are warmly welcome!.--Juan 15:34, 9 September 2008 (UTC)
VotApedia
This is a MediaWiki we could explore collaboration with:. "VotApedia is an audience response system that doesn't require issuing clickers or need specialist infrastructure.". It can use mobile phones instead of clickers for collecting audience responses. The system is developed by CSIRO, which is the major government-funded science-research body in Australia. -- Jtneill - Talk - c 03:03, 11 September 2008 (UTC)
Disturbed
I am disturbed by much of the recent happenings on Wikiversity; just as Wikiversity has been disturbed by the presence and conduct of the "ethics" project, and some of its participants and offshoots. We've come to a situation where people refuse to talk to each other, verbally attack each other personally, and generally act as if civility is not a core Wikiversity/Wikimedia principle (which it is).
I am disturbed by what has been evoked in the name of "learning" and "scholarly" practice. I think it is all too easy to justify a learning/scholarly project by saying that it attempts to facilitate learning about a given topic, but I think it is also necessary to specify how this resource intends to facilitate learning. (I place emphasis on the word "how": by what process?; under what framework?) And, very importantly, it is also important to reflect on how a learning resource might contribute to an actual practice which hinders the intended learning, or which has other negative, perhaps unintended, consequences. An inquiry into ethics on the English Wikipedia could be a very interesting and productive learning project; and the same could be said of an inquiry into the actions of an individual on Wikiversity. However, what has been done on either front recently seems like a platform to make sneering and often veiled (however thinly) accusations against people, which does nothing to promote civility or scholarly practice.
I am disturbed that "action research" has been degraded to the point that it is seen as a synonym for "abnormally disruptive behaviour" [1] - though, from what he has seen, I don't blame Salmon of Doubt from drawing that conclusion. I would like to reclaim action research as a practice fitting for genuine collaborative learning, premised on honest and open discussion. To do this, (amongst other things) we need to build self-awareness of how our actions affect others - and not simply critique others' actions.
Many other Wikiversity participants are disturbed - SB_Johnny saying "our well is being poisoned" [2]. Wikiversity has been notably tolerant of material of questionable educational use - and this may well be a product of our intention to include as many types of learning resources, activities, participants, and communities as we can. But even within our very broad scope and pedagogical outlook, we surely cannot continue to tolerate actions that contribute to what I would call a toxic culture within Wikiversity. (Is it ironic that the ethics project has arisen out of a certain toxicity within Wikipedia?) There are clearly boundaries for what can promote learning - or, to put it another way, there are clearly actions that do not facilitate the creation of a scholarly environment.
I've just returned to Wikiversity after a week's holiday - and things are markedly worse than when I left (and they were bad enough at that time). Actually, I'm not just disturbed by what I've seen - I'm disgusted. This is not what we set up Wikiversity for. I urge everyone involved in the ethics project and deletion discussions - and anyone else who is interested - to commit to an open, honest, and reflective learning process about recent events on Wikiversity, and to develop out of this a set of principles about what is acceptable and unacceptable on Wikiversity. Cormaggio talk 14:11, 10 September 2008 (UTC)
- I am more perplexed than disturbed, but only because I have studied and struggled with the toxicity of the WP culture longer than many others here. I see the problems of the erosion of civility to be appropriate subjects for study within the context of a study of Applied Ethics. The instances of incivility that you allude to are ethical conundrums that we explore in our Colloquium Series enroute to devising best ethical practices for dealing with them.
- I endorse this proposal. If you can suggest the appropriate project page on Wikiversity where we can undertake this exercise, I will initiate the discussion to develop a Community Social Contract along the lines that you envision.
- If you can suggest the appropriate project page on Wikiversity where we can undertake the exercise you propose, I will initiate the discussion to develop a Community Social Contract in pursuit of mutually agreeable terms of engagement. —Moulton 17:39, 10 September 2008 (UTC)
- One (fun-ish) way I approach this kind of stuff (mostly in my own mind) is by assuming and expecting that everyone's behaviour will be irrational and uncivil. Anything that isn't, then, is a pleasant surprise. To explain this perspective a little more, see this op-ed piece by Hugh Mackay (an Australian social psychologist) on irrationality]. It starts off with an invite to "Try this simple experiment, for just a week. Assume that all the people you encounter - family, friends, colleagues, fellow road-users - are irrational beings...". Otherwise, folks, I fear that we will be perpetually disappointed in one another! -- Jtneill - Talk - c 03:08, 11 September 2008 (UTC)
- I really appreciate your comments here. I'm aware that I'm one of the people involved in this problem, and I'd be more than happy to discuss this. I agree that this is a problem, and I, frankly, don't know what to do about it. A new example of this sort of problem has been explored over at Albanian sea port history. Compare the page before (but after JWSchmidt got to it) and after my edits; at Student Union I tried one approach: reverting. At this page, I've tried another approach. Neither seems desireable, in my opinion. The Jade Knight 03:48, 11 September 2008 (UTC)
Wikiversity needs a "professional detachment" policy; the problem with the ethics project is that it has none. Hesperian 07:15, 11 September 2008 (UTC)
- How could you possibly enforce such a policy? The Jade Knight 07:31, 11 September 2008 (UTC)
- I would argue that we should not allow our principles to be subdued by pragmatic issues like enforceability. i.e. our principles are our principles are our principles, whether we can enforce them or not. But I also think that a policy statement on this is itself a baby step towards enforcement, because it gives people something that they can point to as a reason why a course may be inappropriate. You gotta start somewhere. Hesperian 13:01, 11 September 2008 (UTC)
Thanks for the comments, and a special grin to Jtneill. :-) I've started a project about Learning from conflict and incivility - I urge everyone involved, and anyone who is interested, to participate. Suggestions are very welcome - I wonder how we could capture more suggestions like Hesperian has offered? Cormaggio talk 11:00, 11 September 2008 (UTC)
Function / expression problem
I've been on #MediaWiki IRC trying to working out some stuff which I'll dump below, with names edited out - am hoping this might tease a solution out of someone! :)
<jtneill> wondering how i could get the mathematical addition of two parameters {{PAGESIZE:User:{{{1}}}}} and {{PAGESIZE:User talk:{{{1}}}}} <jtneill> here's the template i'm working on : <1> {{#expr:{{PAGESIZE:User:{{{1}}}}}+{{PAGESIZE:User talk:{{{1}}}}}}} <jtneill> fantastic thankyou - let me try that :) <jtneill> hmmm... this works to work when i do a simple example in the template; but when it for a larger page i get "Expression error: Unrecognised punctuation character ","" <jtneill> here's the page with the error: <2> jtneill, the numbers can't have any commas in them. Try, IIRC, 0 . . . but I have no idea if that works. <2> Or is it :R? <2> Or does that not work at all? <2> There should be some way to get it, anyway. <jtneill> aha, no commas, gotcha, lemme try some more <jtneill> @2 no luck with R or :R but now i'll go hunting some more about #expr <jtneill> i read but didn't a way to strip the commas for {{#expr:{{PAGESIZE:User:{{{1}}}}}+{{PAGESIZE:User talk:{{{1}}}}}}} to work <3> there should be a string replace function IIRC <jtneill> IIRC sounds good (don't know what it is) but i'm now searching <3> if i remember correctly <jtneill> maybe i can use #replace ?: <3> sounds feasible <jtneill> hmmm... looks like the StringFunction extension is not on Wikiversity or Wikipedia
-- Jtneill - Talk - c 14:31, 11 September 2008 (UTC)
Category redirects
Dear all! Just a short note: Please use {{Category redirect}} instead of the common redirect (#REDIRECT[[]]) on category pages. If you don't use this "soft type redirect" some pages should be lost in the system (well, maybe not lost, just very hard to find). For example look at: Category:European history (yes, you need to click the redirect page) I didn't corrected this, that you can see what I meant. Hopefully I will also return to editing soon after this really hectic autumn. Greetings, --Gbaor 13:07, 11 September 2008 (UTC)
- What does {{Category redirect}} do, exactly? The Jade Knight 13:38, 11 September 2008 (UTC)
- If I understand correctly the {{Category redirect}} is a way to redirect users to the correct categories - duplicated categories which have no use on wikiversity could use the {{Category redirect}} link - see [[Category:Sciences]] which is a duplicate of [[Category:Science]] and the Sciences category includes that particular template which is more advance than the standard template. Dark Mage 17:40, 11 September 2008 (UTC)
- But what does it do? Is it just a pretty picture and some text? The Jade Knight 06:18, 12 September 2008 (UTC)
- A redirect just redirects you to the correct pages or categories, although it does include text and an image - but that's common in a template, think of Template:Delete - that itself includes text why?, well for one it insures people how to use that particular template, and explains to them what they should do - that redirect template is no different than the standard deletion template, as with the image in the templates - users do that so it doesn't lets say look bland or boring like a number of templates do on Wikiversity. Text is required in a lot of templates, some of them inside the actual template is only instructions on what to do, or certain users create his/her own templates in there userpage or subpages - view my userpage for example, that itself is a userpage template although yes it was pre-made on Meta-Wiki but mine is a modified version and I've changed it to the way I like it, others on there userpage or subpages include some templates which allows them to place text in them - I don't think there's anything more to say about what a redirect link does, it just redirects you to the correct page. Dark Mage 11:04, 12 September 2008 (UTC)
- So, the category redirect template is different from a normal redirect template only in that:
- It does not redirect you.
- It contains more text (and stuff).
- Yes? (I've posted a request for coded category redirects; it would be wonderful if we could actually get functional category redirects. The Jade Knight 11:35, 12 September 2008 (UTC)
- That is true, it doesn't redirect automatically to the correct category with the new template - though the standard one #REDIRECT[[]] does still work. Dark Mage 11:40, 12 September 2008 (UTC)
- The problem is that even if you make a category page redirect to another category page, the pages in the category aren't redirected. I don't think "functional" category redirects are coming any time soon... this has been on the wishlist of the Commoners for years (for example, it would be great if you could categorize as "butterfly", "mariposa", or "schmetterlingen" and have them all end up in the same category). Commons employs bots to switch pages from redirect categories to real categories. There is, however, a change in the interface on commons that lets you see if a category has content (number of files, subcats, and pages), so perhaps we could get that enabled here? --SB_Johnny talk 11:54, 12 September 2008 (UTC)
- The "category redirect" template is a kind of "soft redirect" (i.e. you need to click the link to go to the page it redirects to). The reason why it should be used on category pages is, that if you use the common redirect, than it takes you to an another category, than you want to go. Example: if you go to British Empire and click its category Category:European history, you find yourself in Category:European History (note the small, but important difference between these two cat.s), and this page is not there. This is because this page is categorized elsewhere, along with few another pages. So to sum up: {{category redirect}} is used to direct readers attention to the correct (unified) category, even if there are slight differences between spellings. For the future also the "wrong" categories should be kept, because there is a good chance, that someone categorizes a page as "European History", and in this case he will know what to do. (inserted again after edit conflict :)) --Gbaor 11:57, 12 September 2008 (UTC)
- Yes, I see. I suppose this makes unfortunate sense. The Jade Knight 12:14, 12 September 2008 (UTC)
TinyURLs
I get a spam block on trying to add URLS like this (http)://tinyurl.com/5pusva to pages. Not a big deal, but it would be nice to be able to use tinyurls. Any suggestions? -- Jtneill - Talk - c 06:23, 12 September 2008 (UTC)
- I'm guessing this is something you'll need to talk to a Bureaucrat about. Anyone? The Jade Knight 06:48, 12 September 2008 (UTC)
- Tiny url is blacklisted on meta:spam blacklist. You may try to ask for it to be locally whitelisted. Hillgentleman|Talk 07:52, 12 September 2008 (UTC)
- Aha, thanks. Have added a request for local whitelist to meta:MediaWiki_talk:Spam-whitelist#tinyurl.com_for_Wikiversity. -- Jtneill - Talk - c 17:16, 12 September 2008 (UTC)
- I would not recommend that. Tinyurl would allow someone to bypass any legitimate blacklist entry. Just use the full url to the site instead. --mikeu talk 02:37, 13 September 2008 (UTC)
- Thanks, Mike, appreciate it. In the end, I got a way sorted on that link to the meta discussion listed above. Wasn't thinking straight at 3am! :). All I need to do is create a shorter named page as a redirect. I started down this path because of problems emailing a long Wikiversity page name. -- Jtneill - Talk - c 02:44, 13 September 2008 (UTC)
Database error
Last couple of edits, I've been getting:
Database error From Wikiversity A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: (SQL query hidden) from within function "ExternalStoreDB::store". MySQL returned error "1290: The MySQL server is running with the --read-only option so it cannot execute this statement (10.0.2.108)".
-- Jtneill - Talk - c 15:30, 12 September 2008 (UTC)
- Which page and which edits did you do ? I just did a small test edit on sandbox, error didn't happen. Let's see if this can be reproduced (to report at bugzilla), ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:32, 12 September 2008 (UTC)
- In regard to Jtneill's defence I've been receiving some error messages earlier on today - all I did was just viewing my watchlist and clicked something like (→Conflict and incivility: new section) but on the arrow and received something like what Jtneill received, thought I might have been the only one receiving the message but it looks like other's have been receiving similar messages. Dark Mage 18:29, 12 September 2008 (UTC)
Second opinion on categories
As I review and revise the History cat and subcats, I expect I'm going to have categorizing questions. I'll list some of them here; I'd like additional opinions, and I doubt many people are watching that cat pages. So, first one:
- Category talk:Historical Subjects by Topic The Jade Knight 10:35, 13 September 2008 (UTC)
New one:
- Category talk:Art history The Jade Knight 10:47, 13 September 2008 (UTC)
- Category talk:History of ideas The Jade Knight 10:49, 13 September 2008 (UTC)
- Perhaps I'm being obtuse, but what are the questions precisely? thanks - KillerChihuahua 13:45, 13 September 2008 (UTC)
- At the links; I'm posting the links here to get attention to them. The Jade Knight 04:11, 14 September 2008 (UTC)
- Replied, hope it will be of help, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 11:23, 14 September 2008 (UTC)
New probationary custodian
Please feel free to comment/ask questions about User:Ottava Rima at Wikiversity:Probationary custodians.
--JWSchmidt 04:10, 14 September 2008 (UTC)
Wikimedia Server Error's
Was with wikimedia these days, I've been getting a number of server error's even when posting this comment or monitoring user activity from the Recent Changes, I've been getting them. Dark Mage 15:20, 14 September 2008 (UTC)
Albert Einstein
I invite all to participate or comment on Albert Einstein as describe in the proposal. Dzonatas 17:15, 14 September 2008 (UTC)
Extension to the civility policy
Below is an urgent draft extension to the civility policy, governing outing and mentioning the names of other users in inappropriate places. Please discuss and vote. The events of the last few weeks have been bizarre and unacceptable, and it is incredible how editors of high standing have been drawn by others into talking about each other all the time like a bunch of schoolgirls. We are all responsible for this, because even those of us who haven't been talking about each other are nevertheless responsible for letting the others get away with it. Wikiversity needs a return to a much higher ethical standard. If you feel this policy affects you, you're probably right - it affects us all. Give yourself a good caning for your sins, then vote for the policy and stick to the policy. If you don't vote for this, then reflect on how Wikiversity has become the laughing stock of the rest of the Wikimedia Foundation. Yes - the policy is severe - and we need it to correct ourselves. I've added voting and discussion sub-sections below the green box. --McCormack 17:20, 14 September 2008 (UTC)
1st draft
2nd draft
The second draft is the version we are currently working on. Please feel welcome to edit this so that we can form consensus about what it should be. --McCormack 06:39, 15 September 2008 (UTC)
Voting on the first draft of the civility policy extension
Please only vote in this section. Discuss in section below.
Oppose Changing from Strong support to oppose due to reading comments below. Dark Mage 17:36, 14 September 2008 (UTC)
Support --mikeu talk 17:44, 14 September 2008 (UTC)
Oppose -- I can't accept this as-is. With a few changes, I may support. Dzonatas 17:50, 14 September 2008 (UTC)
Oppose See comments below. —Moulton 18:30, 14 September 2008 (UTC)
- This is not how Wikiversity makes policy and I won't dignify it with a vote. If anyone takes these proposals seriously, move them to Wikiversity talk:Civility and Wikiversity talk:Privacy policy and let the community participate in the normal process of policy creation. --JWSchmidt 19:33, 14 September 2008 (UTC)
- Please see Wikiversity:Policies/How Wikiversity makes policy. --McCormack 06:59, 15 September 2008 (UTC)
- I have reservations that the unintended consequences of the above will represent a net loss to the project, however I'm a big fan of trying things out to see how they work. In particular, I think it's healthy to give things a go, even if we honestly feel they're bad ideas - we'd probably learn something. I'd support a month's trial (or somesuch) with a sunset clause (writ in stone).. then let's talk about it :-) Privatemusings 21:00, 14 September 2008 (UTC)it strikes me there may be 'autonomy of the community' issues emerging at the mo too, 'how Wikiversity does things' seems to be up for analysis a bit....?
Oppose Let's not make rash decisions in haste, as has already been done to lead to these sort of situations. Much more discussion is needed. --dark
lama 21:08, 14 September 2008 (UTC)
- I'm going to vote anyway:
Oppose; though I support the "Outing" policy, the rest is far too severe; I would support the rest, perhaps as a guideline, but definitely not as policy. The Jade Knight 09:03, 15 September 2008 (UTC)
Winding the first vote up
It's clear we need to work on this some more. I've created a second draft on a subpage and also opened up a talk page for discussion. --McCormack 06:39, 15 September 2008 (UTC)
Discussion of civility policy extension
Changes that I feel are needed:
- Outing: What is described in the proposal only covers a portion of what outing is about. The proposal describes breaking anonymity rather than outing itself. Outing happens when someone's reputation is being ruined or statements that are made to form out-groups. People tend to break anonymity to name people, which have already been subjected to such outing statements. I hate to see this be supported and misinform what outing is about. People will misuse the word more, and that would lead to more problems. Perhaps, a definition should remain on its own page: WV:OUTING.
- The appropriate and not appropriate: I generally agree with what is listed, but the style it is listed can easily lead to loopholes. The spirit is obviously there. Consider some recent experiences and you find those that argue to the letter. There is some other policies here that are not being fully supported since they haven't been fully improved as requested, yet they obviously carry a spirit to them. I have seen comments made to the effect that people will just not even abide by such policy until it is perfected supported by everyone. I don't have quick response to re-layout the appropriate and not-appropriate items, so, sorry, I'm just going to mention it for now.
Dzonatas 18:05, 14 September 2008 (UTC)
- This has not been carefully thought through. Most editors who do not self-disclose on-wiki have, over the years, published off-wiki disclosures in many venues including foundation mailing lists, off-wiki forums, other wikis (including old archival wikis) personal blogs and web sites, in Facebook and sites like Linked-In, in IRC or Skype sessions, in other e-mail, etc. In many cases it's a trivial connect-the-dots puzzle with no more than a few dots. Are you gonna bar the first such dot in a connect-the-dots chain? Usually the first dot is Google with the user's on-wiki avatar name. The famous Kevin Bacon game reveals that almost any page on the Internet can be reached from any other page in a chain of no more than 6 clicks total. You cannot legislate against solving trivial connect-the-dots puzzles that any child could solve. The proposal doesn't nothing more than create a billyclub for adversarial editors to bash each other with. Moulton 18:29, 14 September 2008 (UTC)
- Proposed addition. Special exception: Wikiversity users who have called another Wikiversity participant a "troll" and/or said to another Wikiversity participant "I'm not going to talk to you" are not protected by this policy.
This special exception to the proposed policy is need in order to prevent a few Wikiversity participants for continuing their practice of disrupting the project while refusing to discuss their own bad behavior. Frankly, I would prefer a stronger version that says "Wikiversity participants who call fellow participants a "troll" and/or who say "I refuse to talk to you" are expected to retract such statements and apologize. Those who fail to apologize will be asked to leave the project. --JWSchmidt 18:43, 14 September 2008 (UTC)
- It's entirely possible that people may need to call a spade a spade from time to time. It's also entirely possible that people can in good faith come to the realization that conversation with another user will not be constructive. --SB_Johnny talk 21:44, 14 September 2008 (UTC)
- I'm also a big fan of labeling things, and if you are working with a newly observed plant being added into the Bloom Clock project then it might be okay to apply a label and move on. I do not understand how that strategy works in a learning community where everything depends on collaboration. People seem to often apply the label "troll" when they feel that another person is persistently off topic. In my experience, feeling that another person is persistently off topic often arises from an honest difference in opinion about what is on topic. At Wikiversity we have a mission and it should be possible for people to keep talking and explore honest differences in opinion about which topics fit with the project's mission. --JWSchmidt 22:38, 14 September 2008 (UTC)
- What about labelling in general? What if you call an entire group of Wikiversity participants as "trolls"? (Ie, "those that work over at Topic:Psychology are trolls" or "Those who support forking this page are trolls", etc.) And what about other names, besides trolls, which seem to have negative connotations? Is it okay to call people morons? Racists? Barbarians? The Jade Knight 09:08, 15 September 2008 (UTC)
Japan high school songs
Comparing Okanosata's contributions en:Special:Contributions/Okanosato and betawikiversity:User_talk:Miyazaki, I see much similarity. Hillgentleman|Talk 02:13, 15 September 2008 (UTC)
- Hillgentleman, you seem to be on top of this. I'll take your lead on what to do next. --HappyCamper 03:58, 15 September 2008 (UTC)
- In August, Miyazaki and users from the same ip-address (usercheck requested on meta) had been uploading school songs and baseball scores on beta:. We were concerned about the copy-rights of the songs, and that study projects in Japanese should be hosted in ja:. We tried to talk to them but they refused every attempt to communication. We are told that this behaviour has appeared in other wikis also (either wikibooks:ja: or wikipedia:ja:). We blocked Miyazaki, Miyazaki1, Miyazaki2, for limited terms, and a few more. But they/s/he kept coming back. So in the end I put all such titles in a cascade (see betawikiversity:special:prefixindex/wikiversity:nospam/) and then used the titleblacklist also (only as a temporary measure). However, it is up to the English wikiversity to decide what to do with these materials. If there are students interested in Japanese school songs, there may be something than we can do about it, if we can settle the copy-right problems. Please feel free to ask me on beta if you have more questions. Hillgentleman|Talk 05:20, 15 September 2008 (UTC)
Student union Participants
Due to me updating the Student union page and also re-organizing it all I've created a Participants list including a list for users who are willing to maintain the Student Union project - if anyone is currently participating in the project please add your username to the Participants list - I've explained it in the Introduction, this is only optional but will easily identify those who are taking part of the project and those who are maintaining the pages. Dark Mage 20:26, 15 September 2008 (UTC)
Moulton's three day task
Moulton has completed his three day task and I have started the trial Peer Review process now. I have introduced Part One at this time. Tomorrow, I will introduce Part Two, and then Friday I will introduce Part Three. If you would like to participate, do not feel rushed, as I will wait until Monday to start the community wide discussion on the matter, so you can take your time.
Links: Ottava Rima's Exercise and Peer Review. I would ask that all responses are kept in separate subheading following the same format at this time, and not to discuss other responses until the end of the process when this is opened for a larger, guided discussion and analysis. Ottava Rima (talk) 20:26, 17 September 2008 (UTC)
Teaching Assistant in France Survival Guide/Directory
Was anyone aware we were hosting this?! It isn't connected to any current Wikiversity projects, but is, in fact, tied to b:Teaching Assistant in France Survival Guide. I'm willing to bet most of the users that come to that page have no idea of what's even available here at Wikiversity (or perhaps even that they're at Wikiversity!) The Jade Knight 11:17, 16 September 2008 (UTC)
- In principle, I would think that interaction b/w WMF project pages would actually be desirable. Whether it is in the specific case may well be up for discussion. But I suspect WB wouldn't host learning-related content, whereas WV might well be happy to host such a professional network list which is related back to a WB book. -- Jtneill - Talk - c 12:03, 16 September 2008 (UTC)
- I'm not complaining about the interaction; I'm just thinking we ought to do something to encourage more interaction. The Jade Knight 12:38, 16 September 2008 (UTC)
I actually found it several weeks ago, but I didn't know there is some connection with any other project or resource. Now it is also categorized at WV.--Gbaor 05:55, 18 September 2008 (UTC)
Cleaning up Ethical Management of the English Language Wikipedia
Please note that this section has been moved to Wikiversity:Colloquium/Wikipedia Ethics. This note is for archiving purposes. Hillgentleman|Talk 12:48, 18 September 2008 (UTC)
Not using unified account
I have problems with the comment 'not using unified account' in my user file 'my preferences'. I wanted to fill in the table 'Login unification status' nn.wikipedia.org (home wiki). But my normal password (the only one I use with wikipedia or wikiversity is not accepted. Can you help? --Roger 14:32, 18 September 2008 (UTC)
- Good on you, Roger, for finding the Colloquium! I found it a scary place at first :) (and sometimes it still is!). Anyway, see if this helps m:Help:Unified login and let us know here if it doesn't. -- Jtneill - Talk - c 14:41, 18 September 2008 (UTC)
-- Roger to Jtneill. Thanks for your help, I tried to unify my login but didn't manage to. There must be another Roger in Wikipedia somewhere. So I'm now Roger in wv (wikiversity) and Roger18 in wp (wikipedia). I hope sometime I will have a unified account with one name everywhere in the wikies.--88.64.84.130 18:28, 18 September 2008 (UTC)
User:Ottava Rima
As per recent events, I hereby give permission to any Bureaucrat to request a revoke of my Custodian privileges. This authority was reserved for JWSchmidt during my Candidate for Custodianship process. Because of JWS being blocked, I feel that my Custodianship could come under question. I was first brought over by Moulton and JWSchmidt to provide advice on a situation that later brought about the above user's censure by this community. I feel that my origins here and my process to becoming a custodian may be tainted by this, and I would seek to remedy this immediately. If the community feels that they can trust me and wish for me to take on the mantel that was originally asked of me sometime in the future, then I will be willing to enter into the process once again. There are many who many look at some of the recent events as a fundamental problem that needs to be addressed, and I do not wish to add any problems in the remedying of this. I hope that the matter will be solved quickly, and that the community's decision will be the best for all involved. I would like to apologize to the staff here, to the Wikimedia Foundation, to Jimbo Wales, and to other users who may have had to deal with this recent situation, and I apologize if I have contributed any to the furtherance of this situation, as has been alleged by at least one participant in this community. I will still be willing to help out and contribute in any way that I can, and I am willing to take all steps necessary to further the ends of this community. Ottava Rima (talk) 22:05, 19 September 2008 (UTC)
- I am happy to act as a stand-in mentor for Ottava for the time being. I will be his mentor over the long term as well, but recommend he find someone with more keyboard time (I'm busy this time of year). --SB_Johnny talk 00:12, 20 September 2008 (UTC)
- Thank you. I would be willing to take on multiple mentors if there are those who would be willing. Ottava Rima (talk) 00:18, 20 September 2008 (UTC)
- I'm glad you've found another Mentor and that you're so willing to resolve this, Ottava. The Jade Knight 02:19, 20 September 2008 (UTC)
Referencing, how-tos
Hi, I have 2 questions:
- How is it with referencing here on Wikiversity? In my opinion, all content here should be refed the same way as it is for instance on Wikipedia. How can one learn from original research?
- Does WV accomodate how-tos? If so, where?
Thanx for answers! Regards --Kozuch 07:30, 20 September 2008 (UTC)
- Couple of thoughts on this; referencing is somewhat looser here on WV than on WP (like many things) and although that struck me as a little bit odd at first, I've come to realise that it is far better to accept a half-decent reference in whatever format, then go about tidying later. So, Kozuch, don't be put off - feel free to wikify referencing you see here and we can do with some of the consistency lessons and guidelines from WP. But there will need to be some freedom and flexibility. For example, I myself use and encourage use of APA style with my students. -- Jtneill - Talk - c 07:50, 20 September 2008 (UTC)
- Indeed—and one can learn from original research quite easily; all research was once original. Wikiversity provides many different ways of exploring things. Now, if you see material which appears to be inaccurate, you may wish to bring it up respectfully. If it's something in the School of History, I'd be happy to look into it personally. As for your other question: It does accomodate how-to's, though you'll find more of those on Wikibooks. The Jade Knight 08:30, 20 September 2008 (UTC)
Wikimedia Server
I think wikimedia is playing up again, When I was going to respond to Moultons comment on his talkpage - it looks like it won't work for some reason due to this message:
Unable to store text to external storage Backtrace:
- 0 /usr/local/apache/common-local/php-1.5/includes/Revision.php(724): ExternalStore::randomInsert('?????W?%?????Z*...')
- 1 /usr/local/apache/common-local/php-1.5/includes/Article.php(1501): Revision->insertOn(Object(DatabaseMysql))
- 2 /usr/local/apache/common-local/php-1.5/includes/Article.php(1355): Article->doEdit('==Archives==??*...', '/* Question for...', 102)
- 3 /usr/local/apache/common-local/php-1.5/includes/EditPage.php(1013): Article->updateArticle('==Archives==??*...', '/* Question for...', true, true, false, '#Question_for}
Should this be reported to Bugzilla, because of this it seems I cannot post a reply in that particular section? Dark Mage 07:41, 20 September 2008 (UTC)
- I just got a similar message on trying to post a message to User talk:Draicone. After getting it, I went "back" in my browser, hit save again, and it worked OK.
Internal error From Wikiversity Jump to: navigation, search
Unable to store text to external storage
Backtrace:
- 0 /usr/local/apache/common-local/php-1.5/includes/Revision.php(724): ExternalStore::randomInsert('?Z?n?F???=EE?,)...')
- 1 /usr/local/apache/common-local/php-1.5/includes/Article.php(1501): Revision->insertOn(Object(DatabaseMysql))
- 2 /usr/local/apache/common-local/php-1.5/includes/Article.php(1355): Article->doEdit('Welcome to Wiki...', '==Moodle== ...', 98)
- 3 /usr/local/apache/common-local/php-1.5/includes/EditPage.php(1013): Article->updateArticle('Welcome to Wiki...', '==Moodle== ...', false, true, false, '#installing}
-- Jtneill - Talk - c 12:17, 20 September 2008 (UTC)
I had this kind of error 5 times in a row (by going "back" and retrying -> repeatedly) on trying to edit Social psychology (psychology)/Assessment/Essay/Topics. I gave up on the edit, refreshed the page, and then did the edit again, and it saved OK. Ironically, then, same problem on trying to post this... -- Jtneill - Talk - c 13:38, 20 September 2008 (UTC)
- Hmm, this problem seems to be Global - see. Dark Mage 20:32, 20 September 2008 (UTC)
International law and advices
--59.96.99.158 10:39, 21 September 2008 (UTC)
Hello, for anything to do with off-topic discussions please go to the Help Desk, but if you require information about Law please see Law. Dark Mage 10:02, 22 September 2008 (UTC)
Is it possible to usurp accounts?
Is it possible to usurp accounts on Wikiversity in order to deal with SUL conflicts? I've looked all over and cannot find the page to request it. Matty (temp) 08:56, 22 September 2008 (UTC)
- Welcome, the best place to go to for Usurp request or Changing your username is Wikiversity:Changing username a Bureaucrat will then process the request if it qualifies for usurp - however due to some situations happening on the site it maybe longer than usual before the request gets filled. Dark Mage 09:37, 22 September 2008 (UTC)
Respect people
Please see Wikiversity:Respect people and vote on the talk page. This proposed policy compliments Wikiversity:Civility by addressing issues concerning respecting people (including both other Wikiversity participants and people you write about). --mikeu talk 02:24, 22 September 2008 (UTC)
- Shouldn't this also be added to Wikiversity:Announcements since the page connects to a lot of userpages even learning resources - many people may not view the Colloquium. Dark Mage 19:20, 22 September 2008 (UTC)
Block of Moulton for incivility
After discussion with other admins, in which I was requested to personally make this block, I have indef blocked Moulton from this project. It is my belief that he was not here in a good faith effort to create learning materials, but rather was here to carry out his ongoing campaign against people who he thinks treated him unfairly at Wikipedia. After reviewing his case at Wikipedia, I think this is clearly not the case: he was properly blocked at Wikipedia, and should be blocked on sight from any Wikimedia project where he surfaces with a similar agenda.
I would recommend that a significant number of the attack pages be deleted, and the project protected at least for now, pending a good community discussion of what something like this should look like.
There are always difficult growing pains for young commuities; I have seen it in many languages and many projects. I encourage Wikiversity to review the "ethics" project - which, it seems to me could be an interesting project if handled appropriately - with an eye towards developing principles for dealing with such projects in the future. One idea that I would like to propose is an explicit ban on "case studies" using real examples of non-notable people, in exchange for hypotheticals. I would also like to encourage you to consider clarifying the scope of Wikiversity to make it more clear that it is not a place for people to come and build attack pages in the guise of learning materials.
In any event, I hope that my action here will be viewed as helpful. I did not act quickly, but only after discussion with important people, and only after hearing that 3 bureaucrats support this action. It is not my intention to be the "God King" of Wikiversity, although I do request that this block only be overturned upon a very careful consideration of the possible implications for the future of the project.
The first major internal conflict and ban is always tough. My thoughts are with you, and I wish you well.--Jimbo Wales 19:18, 14 September 2008 (UTC)
- I guess there goes my project that I assigned to Moulton to try to get him to understand his actions and to start having him contribute in a meaningful way. The original proposal for it can be found here. I wish this would have come after the three day period and after the Peer Review process would have begun. Ottava Rima 19:21, 14 September 2008 (UTC)
- I support this block - albeit reluctantly. I feel that many of Moulton's actions within Wikiversity have been profoundly uncivil - even though others have been very welcome. I am dismayed by the tone of activity since the introduction of the "ethics" project - much of which I have felt to be deeply unethical (such as the posting of personal information, and, of course, making what seem to be deliberately provocative and uncivil comments). I am still very much interested in exploring how we could make a contribution to Wikipedia by studying it - however, we still need to put a lot of thought into it to make it workable in practice. I think this is just the beginning of a long learning process for this community, and hopefully the beginning of the end of the divisive activity we've seen of late. Cormaggio talk 19:41, 14 September 2008 (UTC)
- While I'm sad to see him blocked and think he does have some valid points to make, I also think he very largely brought his own situation upon himself by his own insistence on taking a hostile attitude towards anybody and everybody who opposed him. Some of those opponents have used unfair tactics against him too, but that doesn't excuse his own bad behavior. But as for the proposal regarding case studies only using fictional cases instead of real ones with real names, there's the "no win situation" in that, while if you use real names you risk unfairly dragging people's names through the mud and stirring up unnecessary drama around them, but on the other hand when you use made-up cases, you risk being criticized for contriving a case that supports whatever point you're trying to make, possibly with unrealistic attributes not resembling anything in the real world. Dtobias 19:45, 14 September 2008 (UTC)
- I largely agree with Dtobias. The Jade Knight 09:12, 15 September 2008 (UTC)
(Edit conflict) Jimbo, this was completely uncalled for and is a great miscarriage of justice. This was not deserved and had little justification. There was no attempt on your part to negotiate with Moulton, just a secret discussion with people who were probably on your side or members of the ID clique.
This was a unilateral abuse, based on a discussion that is not linked or was secret. It was not right to go blocking him, this is total abuse. Please rethink your actions. I am not familiar with the full situation, but I know that it is wrong for you, because you are the co-founder of Wikipedia, to go to another project and appoint yourself presiding lord. Your discussion with very important people illustrates a clear association with a clique of anti-Moultoners and a clear intent to harm. I hope you rethink your decision and unblock this editor.
I have seen some of Moulton's edits here and they seem like genuine efforts to expose corruption. Just because he does not expose it while bowing down to authoritarianism in Wikipedia does not mean that he should be banned for dissenting. You won't ban Just 'zis Guy for being the rude troll that he is, but you hesitate not when banning someone who disagrees with your posse. Jonas Rand 68.96.213.118 19:48, 14 September 2008 (UTC)
- To clear up some potential confusion from the above - Jimbo has had prior involvement with this issue, and such correspondence has been kept (to this point) mostly off of Wikiversity. It came out then that many people disagreed with what Moulton was posting here (including links to other websites). If you agree with Jimbo or not on the issue, it is at least fair to acknowledge that Jimbo did not come out of no where, but has been connected in some way to this for a few weeks now and was one of many interested and involved parties. Ottava Rima (talk) 20:06, 14 September 2008 (UTC)
And to you Dark Mage: Merely sucking up to Jimbo as a loyal follower, just because you trust him as infallible and because Moulton "exposed" people's real names is not right. You need to think for yourself, not with Jimbo. As for KillerChihuahua's name, it was revealed (I believe) on Amazon.com, and on Wikipedia-Watch.org. Jimbo, why don't you ban User:Salmon of Doubt, as an obvious troll account used to harass? Jonas 68.96.213.118 19:54, 14 September 2008 (UTC)
- Reluctant support from me too: Moulton has a very hard time getting along with others, and while he's been recieving abundant good advice on the irc channel, he hasn't been following it (I'm afraid he's getting bad advice too, but that's a different issue). I hope Moulton will take the opportunity to give Ottama Riva's approach a try, using his talk page as his place to post. I hope the rest of us can take the opportunity to do some serious house cleaning and make a serious effort at creating a policy structure that allows the kind of work Moulton has been engaged in, while at the same time maintaining a welcoming environment where people will not feel that the community turns a blind eye on inappropriate behavior. --SB_Johnny talk 19:57, 14 September 2008 (UTC)
- SB_Johnny, I agree with you, except that I think he shouldn't be forced to use his talk page. I think he should be unblocked, and then he can try OR's mentorship. But he shouldn't be banninated forever. 68.96.213.118 20:01, 14 September 2008 (UTC)
- I generally agree with you, SB_Johnny, though I'm more neutral as to the block; I dislike that it came about in such a fashion, though I do agree that Moulton had engaged in some behaviors which contributed to it. I would like to see him given some method of contributing productively in a limited fashion so that an unblock could be created at the earliest reasonable opportunity. I dislike seeing users blocked from a project, when they even give a reasonable semblance of productive contribution. The Jade Knight 09:21, 15 September 2008 (UTC)
- 20:34, 14 September 2008 (UTC)
- I'll agree to support Moulton's unblock if he follows JWSchmidt's advice I and other editors don't want users being blocked if there is another way round the block like what JWSchmidt stated then lets do it so long as Moulton agrees to it and other editors, in regard to 68.96.213.118 comments regarding my previous comments I am not loyal to anyone and my support for this block is very weak, the reason why I asked Jimbo if he'll be active here is because he is more experienced in these sort of matters and have a huge say in the foundation and it's policies - if we're not sure on what policy is acceptable to the site then we could ask Jimbo what is acceptable under the foundations rules that's all, so think before you start assuming that I'm loyal to someone when I'm not loyal to anyone, nor do I intervene. Dark Mage 20:43, 14 September 2008 (UTC)
- EditConflict Point taken, statement retracted. Just call me Jonas. I think people should be more willing to negotiate with Moulton and stop being crybabies when someone calls KC by her first name, open to all. KC didn't block or warn Moulton, or censor the name, where did she say not to use it? If this incident is enough to warrant an indef block, people need to grow thicker skin. 68.96.213.118 21:21, 14 September 2008 (UTC)
- I oppose any unblock before the dust settles. I just got of the phone with Moulton, and he's not taking it personally or hurtfully. Let's think, 'k? --SB_Johnny talk 21:15, 14 September 2008 (UTC)
- Did he say he accepted the block? Did he care if he was unblocked? 68.96.213.118 21:21, 14 September 2008 (UTC)
- It is very unfortunate when an editor with the potential to be a valuable contributor to wikiversity is blocked. I had hoped that it would not come down to this, but I do support jimbo's action. The incivility has been more than a distraction, it has been an impediment to learning. It is also disappointing to see a learning project strive for scholarly discourse, but fall far short of that goal. We can do better than this. --mikeu talk 21:30, 14 September 2008 (UTC)
I've only been active here for under a week, and have tried to immerse myself into the Wikiversity culture as much as possible. My initial assessment is that this Ethics project, and specifically Moulton, has been allowed to dominate the entire English Wikiversity, at the expense of other learning projects here. Some of the sysops here, specifically JWSchmidt, appear to be reveling in the opportunity to denounce and investigate the sysops of English Wikipedia, and that is an indictment on them, as English Wikiversity will one day (hopefully) be as big English Wikipedia and I can assure the sysops here that they will find they will not always look like saints as they try to keep things orderly. I applaud Jimbo stepping in and saying "enough is enough". Moultons time and energy are not wanted if they remain focused on pulling apart his "bad" English Wikipedia block. The Ethics project can be profitable, but not if it is a vehicle to obtain retribution against English Wikipedia. If Moulton wants restitution, then he can obtain that by working on other Wikiversity learning projects, or other WMF projects. An unblock should not be done until this 3 day Ottava Rima trial is finished, and only consider if Moulton voluntarily takes a break from the Ethics project for at least a week, and then keeps it on the backburner for at least a month. John Vandenberg (chat) 22:35, 14 September 2008 (UTC)
- "Moulton, has been allowed to dominate the entire English Wikiversity" <-- Been allowed? At Wikiversity the participants study the topics that they are interested in. Speaking only for myself, I've studied Wikipedia for years, and I intend to continue studying Wikipedia. I worked to develop Wikipedia Studies at Wikiversity long before I knew the Moulton existed. When I learned of the existence of the Ethical Management of the English Language Wikipedia project I investigated the project and started participating. I firmly believe that I would have joined a similar project had it not included Moulton. I hope we can persuade Moulton to move past his interest in the issue of anonymity. I think he feels strongly that it is ethically irresponsible for Wikipedia to allow editors to put false information into BLPs while being protected by anonymity. I'm interested in thinking about and discussing ways to improve BLPs at Wikipedia and I hope Moulton can redirect his energies towards collaborative efforts to improve Wikipedia. "specifically JWSchmidt, appear to be reveling in the opportunity to denounce" <-- It would be interesting to know the prism through which John Vandenberg is ascertaining such "appearances". I have publicly (in wiki) described my feelings about my study project. Those feelings can best be described as feeling sick while reading the edit history of Wikipedia pages where the BLP policy was violated. Yes, I feel sick when I see biased BLPs that exist only to put a false negative label on a person. This is not a matter I can "revel in". John Vandenberg <-- please, can you tell me exactly what gives the appearance of "reveling"? As a Wikipedia editor I have devoted a significant amount of time to biographies, including working to correct problems in BLPs. I am studying problems in BLPs and thinking about ways to improve Wikipedia BLPs. "Moultons time and energy are not wanted if they remain focused on pulling apart his "bad" English Wikipedia block." <-- Moulton's interests are describe clearly on his user page. --JWSchmidt 03:07, 15 September 2008 (UTC)
- It is great to see that Moulton has been blocked, his efforts have clearly been disruptive in my opinion. I've seen how he's moved from project to project (not just Wikipedia and Wikiversity) as he's been blocked. John Vandenberg makes some good points but, whilst I've not being following every twist and turn of this story, I can't say I'd like to see Moulton editing anytime soon. I have to take JWSchmidt's comments with a pinch of salt considering his clear negative attitude towards Wikipedia editors in general. It is a great shame that he and perhaps others have allowed Wikiversity to be distracted from writing true educational resources by "studying" the behaviour of Wikipedia editors. I look forward to seeing users refocus their efforts in more productive ways as a result of this block. I thank Jimbo for taking this action. Adambro 08:56, 15 September 2008 (UTC)
- "The first major internal conflict and ban is always tough." But, don't cry. Now that it's done, every time afterward is going to feel good, even amazingly pleasurable. Sounds like Wikiversity just got its cherry popped. -- Thekohser 11:16, 15 September 2008 (UTC)
- It doesn't make me "feel good" to see a contributer blocked, as I and others have stated above. I see that there were many chances, and warnings, to participate in learning projects in a civil manner before any action was taken. Suggesting that this block would give someone pleasure might be construed by some as not assuming good faith, esp. given the expressions of regret posted in this thread which clearly indicate otherwise. --mikeu talk 14:29, 15 September 2008 (UTC)
Adambro, if Moulton had the level of power as say, Jpgordon, and you were a nobody, then you would be blocked for incivility towards Jpgordon, if Jpgordon wasn't blocked, but if you said you would be glad if he was. And, Moulton or JWSchmidt didn't have a "bad attitude to Wikipedia editors in general". They exposed corruption and got censored for it, like what would happen in a Sovjet prison camp.
Adam, your comment is quite rude to anyone who gives criticism of Wikipedia without fluffing up Jimbo, saying that this is proper treatment, for all their chances to come to an end, and be banned for an infinite time, even when another editor has a plan laid out to mentor them. Your comments are unhelpful and are not making the situation better. In your view, every critic of Wikipedia should be banned. This is not in the spirit of accepting criticism.
Yours,
Jonas Rand 18:30, 15 September 2008 (UTC)
Personally, I don't think this was handled very well. I'm not surprised Moulton was blocked, and the ethics project did need to be revamped, but the swoop-in block and decrees by Jimbo were, from my perspective, not a good method. It would be one thing if Jimbo regularly addressed civility issues, but this seems pretty one-sided considering the crap that editors regularly get away with on WP.
I know that the over-the-top inclusionism that has historically been the rule on Wikiversity has made it difficult to deal with the issues with the ethics project, but was this really a good answer? Sχeptomaniacχαιρετε 21:09, 15 September 2008 (UTC)
- I agree with Sχeptomaniac that this was not handled very well at all. As a custodian myself I had no idea that blocking Moulton was really even being considered. While I was getting a bit tired of the dominant discourse around the Ethics project it was my perogative to follow along if I wanted to or not. The fact that Jimmy Wales can enter into WV, create an account and with his first 'edit' block a new yet somewhat prolific editor based on reasons brought with him from WP just doesn't seem right to me. Countrymike 21:24, 15 September 2008 (UTC)
- I also agree that this was not handled very well at all - though I fully believe that it was justified. Incidentally, Moulton was also blocked from the wikiversity-en IRC channel yesterday. I'm going to develop a page about this, where I'll try to be as transparent and thorough about the circumstances leading up to Moulton's blocks, at User:Cormaggio/Moulton's block. Cormaggio talk 08:41, 16 September 2008 (UTC)
- Although a number of users are against the block and those who are in favour, it has come to my attention that the foundations policy states that revealing any personal/private information goes against the very core of it's policies including violating the Data Protection Act which strictly forbids people in doing this no matter how Moultan gained the information the policy and the law still applies, although my support for the block is very weak - though I'm now starting to support the block which Jimbo has done, IP 68.96.213.118 your comments are welcome since you've been participating in the dicussion - though next time if you join in the discussion please don't be uncivil towards other editors who have the right to express his/her view in this even in the first comment which you've mentioned about me - that was both uncivil and unprovoked, I'll be willing to forget that comment if you start to remain civil to other editors (including) me - if Moulton does get unblocked for what any reason I may oppose or support it depending on the situation though at the moment I'll support this block. Dark Mage 10:11, 16 September 2008 (UTC)
What's going on here?
If, as Jimbo's comments regarding the block suggests, custodians privately asked him to make the block, rather than do it themselves, it does not bode well for the health of this project, IMO. I find it very troubling that no-one who asked for the block privately has disclosed that information. Is the only way to deal with disputes here to go behind others' backs? Sχeptomaniacχαιρετε 00:00, 17 September 2008 (UTC)
- I consider Jimbo's threat to ban me for reverting Centaur of Attention (almost a month earlier) to be way out of line as well. I'm not alone in regarding Centaur's edits as bordering on vandalism as well as attempting to impose a BADSITES-style policy on this site, and I'm extremely disappointed that Jimbo is backing his side against mine. Dtobias 00:07, 17 September 2008 (UTC)
- I agree that it would have been better if a block request could have been made more transparently. It is possible that Moulton's "chummy" nature with JWSchmidt intimidated some users, as JWSchmidt seems to fully support what Moulton is doing, and JWSchmidt is a custodian. The Jade Knight 06:49, 17 September 2008 (UTC)
- I have considered that it may relate to JWSchmidt's behavior, in which case the blame would partly lie with him for his part in developing the atmosphere. However, if fear of crossing one person is derailing things that badly, then it just underscores that there's a greater problem. Is there a process for addressing off-course projects and concerns about editors before it comes to deleting/blocking? I couldn't find one, so I would think it probably needs to be developed or better articulated. Sχeptomaniacχαιρετε 20:08, 17 September 2008 (UTC)
To diffuse this from getting out of hand above, I would like to state that I support the block, and that my only concern was timing. Although I was not consulted by Jimbo, I would have wholeheartedly agreed with the action at this time. Ottava Rima (talk) 19:55, 17 September 2008 (UTC)
- I don't mean to be too confrontational. I'm not poking at things in order to agitate (though I know that can happen), but because I perceive something is wrong here, and I would like to narrow down what it is. Sχeptomaniacχαιρετε 20:08, 17 September 2008 (UTC)
"Moulton's "chummy" nature with JWSchmidt intimidated some users, as JWSchmidt seems to fully support what Moulton is doing, and JWSchmidt is a custodian" <-- I'd like someone to explain in plain English what is being insinuated here. If anyone cares about facts please read the truth here or come to my talk page and ask me a question or two. I'll also repeat here what I said above on this page, 21:05, 17 September 2008 (UTC)
Moulton's talk page
Despite Moulton being blocked, he continues to make use of his talk page, why? If it isn't considered appropriate to allow him to edit here then I don't see why it is appropriate for Wikiversity to serve as a means of communication for him. Can't this page be protected? Adambro 18:24, 22 September 2008 (UTC)
- It can, of course, be protected. We haven't figured out yet what's to become of the "ethics project" yet, or whether Moulton can be accomodated without causing unwanted damage to the project as a whole. There are a number of issues we're trying to address (including this one), and your input would be warmly welcomed. --SB_Johnny talk 19:00, 22 September 2008 (UTC)
- Moulton's talk page was protected earlier today. This was done to prevent posting of personal information such as email addresses. There is now a discusion at Wikiversity:Request custodian action/Moulton's talk page to involve the community in how we should proceed. We are asking everyone to share their thoughts and opinions on this. Moulton's talk page will remain protected during that discussion. --mikeu talk 20:35, 23 September 2008 (UTC)
Checkuser note
Just to note that in the light of User:JWSchmidt having his checkuser status removed, User:SB Johnny has also has this status removed as there is a minimum of two checkusers required at a project. Stifle 14:10, 24 September 2008 (UTC)
- Update: per the results of the recent Nomination for CheckUser a request has been submitted to grant CheckUser access to User:Emesee and reinstate User:SB Johnny. The request is currently "On hold pending identification" --mikeu talk 14:37, 24 September 2008 (UTC)
- Other notes: User:SB Johnny requested his access be removed, until such time as another CU could be elected. Checkuser access had also been removed from User:Erkan Yilmaz at his own request. --mikeu talk 14:49, 24 September 2008 (UTC)
What should we do about JWSchmidt?
There is a very long thread on the topic: What should we do about JWSchmidt?
Wikiversity:Community Review#JWSchmidt
Inquiry
There is a very long thread here, ranging over a wide variety of topics: Jon Awbrey thread.
Removal of JWSchmidt's custodian status
- Discussion moved to Wikiversity:Request custodian action#Removal_of_JWSchmidt.27s_custodian_status
Music and life
Just wanted to share Alan Watts on Music and Life: (2:20 mins). -- Jtneill - Talk - c 14:11, 25 September 2008 (UTC)
- Nice! Do you want to become YouTube hunter? :)--Gbaor 06:37, 26 September 2008 (UTC)
Content organisation matrix - difficulty and topic
I am quite confused by the various prefixes - there are Portals, Schools, Topics etc., some even redirection to each other. IMHO what whole Wikiversity needs is just to order all content into two dimension matrix: by "difficulty" and by "topic" (=real topic, not current Topic: preffix). Topic is something you dont need to take care about, it is set by a page name. Difficulty is what you have to take care about - so I would much rather introduce preffixes like "Primary:", "Secondary:" or "Tertiary:" to show the difficulty of the page on a topic, so that everybody would recognize it immediately from the URL. I know this is rather a complex stuff, but there ought to be a discussion about it. I feel Wikiversity is developing rapidly, so please tell me whether this has been discussed before.--Kozuch 10:55, 20 September 2008 (UTC)
- This is a rather "vexed" issue; see Wikiversity:Namespaces for an overview and Wikiversity:Vision/2009. More specifically, feel free to contribute your ideas to Wikiversity:Vision/2009/Namespace reform. The problem probably with namespaces by difficulty is that much content could also cut-across levels or is informal (not necessarily fitting into any of these), so we tend to use categories to indicate difficulty (see Help:Resources by educational level), although this type of categorisation is not yet widely used. But no-one I think is entirely happy with the current namespaces. My personal preference would be less in School: and Topic:, and more in the mainspace, but that's far from a shared view. -- Jtneill - Talk - c 11:06, 20 September 2008 (UTC)
- There can be many pages on a topic with different difficulty levels. Personally I don't like organization by school levels because of the overlap and dependency on geographical location which I think can make pages appropriate for any of those categories, making the categorization potentially useless for people trying to find pages on topics for their ability level. I think difficulty level should instead refer to beginner/novice/rookie/easy, intermediate/average, and master/expert/profession/hard. I also believe namespaces should have general uses, which I don't think difficulty levels satisfies, because more than one namespace would be needed to satisfy it. --dark
lama 13:02, 20 September 2008 (UTC)
- Kozuch, The school: and topic: namespaces came from the days when wikiversity was sitting in wikibooks. Such pages differ from "main namespace" pages, which host actualy contents, in that they are to be "community spaces": they are where you can organise study groups and learning activities (which are integral parts of wikiversity's mission). The traditional seperation of "primary", "secondary" and "tertiary" eductions are sometimes artificial, and may due to historical or psychological inertia (there are people who are comfortable with university mathematics and find secondary school history difficult; and then we often see motivated "secondary" students learning "advanced" topics, by themselves, or sometimes in summer schools). It would be great to see if Wikiversity (as we have see in wikipedia also, with a more specific scope) can ignore such articial barriers.
- It is useful for everyone to classify topics with levels of difficulties, but a system with "lists of prerequisites" would be more informative than the rough labelling of "primary", "secondary", "tertiary" and "professional". Hillgentleman|Talk 15:51, 21 September 2008 (UTC)
Success and failure...
Wikipedia’s Jimmy Wales on wiki success and failure (podcast) Emesee 06:21, 29 September 2008 (UTC)
Paid for own content in wikiversity from a private university
Hi, I like to pose a question of whether this is acceptable or ethical. I been creating content here Open_Source_ERP and a personal contact from a local university asked that i write materials for their degree programmes. Can i continue developing those materials online in wikiversity and still collect payments when they are used for such purposes? They accepted my OSS spirit of sharing knowledge and do not mind the materials been copyleft as well. They agree that their business is in making money out of direct classroom in-house education, conducting exams and teacher coachings and not monopolizing the materials i create. So is that ok? I don't mind writing offline, but online sharing is where my passion is. And now getting paid for real work is how i believe OS culture of Free as in air but not lunch is. --Red1 04:59, 27 September 2008 (UTC)
- I cannot speak for the Foundation or the rest of the community here at Wikiversity, but especially considering that this is for a degree program, it seems it very might fall within the scope of Wikiversity (Wikiversity:Scope). At first glance, it seems to seem OK and acceptable. And as far as being ethical, that too at first glance, seems to seem OK. What comes to my mind (although we could make this as complicated, which I personally don't care to), is if everyone benefits and if no one is harmed? It seems like we (all the stakeholders) might benefit, and I have not thus far seen any harm. Emesee mobi 05:42, 27 September 2008 (UTC)
- There's certainly nothing wrong with getting paid to write free content :-). The license we use (GFDL) does allow commercial use of content developed here,meaning that anyone can print it out or burn it to a disk and sell it. The only issues are that since it's a wiki, sooner or later someone will take an interest in it and you'll have collaborators (generally a good thing, but sometimes you'll spend nearly as much time figuring out the collaborating part as in the creating part), and that the printed versions need to include a copy of the GFDL (not exactly a huge hurdle, but it's 7 pages or so in print). I know some Wikipedians have created cd-r versions of the encyclopedia, and I believe Whiteknight has been doing some research into finding grants to support the creation of Wikibooks materiel. --SB_Johnny talk 07:26, 27 September 2008 (UTC)
- But since this content is being developed for a certain program and probably used for a variety of courses, could it not be "protected", but certainly forked, if other users desired to adapt it to their own specific needs? Emesee mobi 07:36, 27 September 2008 (UTC)
- Been 'forked' is part of the concept of been open source, and i would welcome it,as long as the original source is stated somewhere. But can my sponsor who paid me be named as the copyleft holder or sponsor in my link above? Then the sponsor at least has positioned its merit, and any further copy or reference of it makes that even better - Red1 08:27, 27 September 2008 (UTC)
- Well, the copyleft "holder" is in a sense actually the Free Software Foundation (the authors of the GFDL), and the authors of the text. It's a bit more complicated than that, of course, but IANAL :-). The iffy thing is that Wikimedia projects do not allow "invariant sections", meaning you can't guarantee that someone re-using the content will discuss their endorsement (and that's probably a good thing, because it could be changed into almost anything over time). There's nothing wrong with noting their sponsorship of the materials though. I'll ask around and see if I can find someone a bit more versed in this. --SB_Johnny talk 08:32, 27 September 2008 (UTC)
- Ok, got one good suggestion. You could use an account named <your name, writing for company name>, and then have that on your userpage with a note that that's how you would like your contributions attributed. Would that work? --SB_Johnny talk 08:43, 27 September 2008 (UTC)
- Thanks for these responses that gives me some good sense. I might tinkle with some tabs or boxes that says, 'the author teaches this in University...' or 'conducted in the University of..', something not to the effect of 'direct' commercial advertising. The whole idea is to encourage Universities to keep on contributing to their own good image as well as Wikiversity in this mutual horizon. There will be equal risk that the participators ensure good quality and proper acceptance. I will explore as time goes by as what can be good AUP. - Red1 00:22, 29 September 2008 (UTC)
Some folks have run into trouble on Wikipedia with this
My head is spinning a little bit. Have any of you ever heard of this guy who launched a business called "MyWikiBiz", where the GFDL content was paid for, but only published to a commercial site. Then, when other Wikipedians in good standing copied it into Wikipedia (with due attribution), Jimmy Wales went ape shit? You can read about MyWikiBiz on Wikipedia. And, you might want to invite JzG and Calton here to comment, because in my experience, it's their way or the highway. -- Thekohser 13:44, 27 September 2008 (UTC)
- Full disclosure. I saw this thread on Recent Changes and raised it to Greg's attention. Greg has battled Jimbo on this very issue for a very long time. It's one of the recurring unsettled issues that has long been highlighted on Wikipedia Review. Because it's an unsettled policy in which Jimbo personally interfered (just as he personally interfered with the Ethics Project here), I thought it best that those who are naive on the subject be made aware that this issue has deep and disturbing legs. Moulton 14:42, 27 September 2008 (UTC)
- Thanks for clarifying that. Wikiversity is more akin to Wikibooks on these sorts of issues (many fully-written books have been donated to Wikibooks over the years)... as long as it's quality materials with a compatible license and fit within our scope, the problems Thekohser experienced on WP would probably not come up here (assuming of course I understand the problems). --SB_Johnny talk 15:34, 27 September 2008 (UTC)
- In all seriousness, I suppose it is a credible argument to state that something would be acceptable in the Wikiversity environment that might be engaged with hostility on Wikipedia. Another notch in the "win" column for Wikiversity, and one of the reasons I still participate here. It seems to me, the less involvement Jimmy has with a project or enterprise, the more peaceful and successful it has a chance to be. -- Thekohser 16:00, 27 September 2008 (UTC)
- Sponsorship of materials could be regulated. All kinds of abuse is possible, when contents are sponsored. At the moment our community is too small, so we can see the sponsorship of material with money by a university as part of a gentlemen's agreement for the moment.--Daanschr 07:53, 28 September 2008 (UTC)
In my opinion, Thekohser's difficulties at Wikipedia are more a function of his personality than of the paid nature of his formerly proposed activities there. He often notes that there does exist paid activity at Wikipedia and tries to portray that as inconsistent, when actually what it shows is that it is not simply being paid activity that was (and is) the difficulty with his behavior at Wikipedia. Kind of like how Moulton condemns Wikipedia for being rules based and also for not following its own rules. People who condemn something for both being X and not being X are being emotional and not logical. WAS 4.250 14:08, 28 September 2008 (UTC)
- In my opinion, WAS 4.250's theory of my difficulties at Wikipedia are about as tangible as his real-name identity. In other words, worthless. My "difficulty" became unworkable on the night of October 4, 2006, when Jimmy Wales reversed himself on an agreement he had proposed to me. Set aside all of the grumblings from JzG, Calton, you, and others, WAS 4.250, you weren't able AT ANY POINT between about August 20 and October 3, 2006 to stop my business from authoring GFDL content and getting paid for it. There were no "difficulties" until Jimmy Wales had an unflattering breakdown and deleted a perfectly viable and acceptable unpaid article about Arch Coal, which he mistakenly thought was paid for. In the big scheme of things, the joke was on Wales, and by you perpetuating this myth that I was having "difficulties" conducting business prior to October 4th, you're making yourself a part of the joke. I'm a bit surprised, frankly, WAS 4.250 -- you're normally more sage than this. -- Thekohser 19:37, 29 September 2008 (UTC)
- The hypocrisy and double standards at Wikipedia are an observation that the rules exist not for the purpose of crafting an orderly process, but for the purpose clobbering one's opponents in the daily dramas of deciding what content to include in the online encyclopedia. There is nothing wrong with rules if one is seeking to define a game which is played on a level playing field. But I am not aware of any theory to suggest that an authentic encyclopedia can be crafted by means of such a game (even if it were a fair game). —Moulton 14:18, 28 September 2008 (UTC)
- Hypocrisy and double standards exist everywhere. There is no place they do not exist. Your ignorance of "any theory to suggest that an authentic encyclopedia can be crafted by means of such a game" does not indicate that it can not occur. You do not know everything. Wikipedia is indeed crafted in a game-like way and millions of people find it useful and many studies have shown it to be more accurate than some widely used sources and only a little less accurate than Britannica. (I know Britannica disputes this but then they would wouldn't they?) WAS 4.250 14:47, 28 September 2008 (UTC)
Class at en.wp
Hi again, I seem to have run across another class using en.wp at [3]. It looks like they need to test the wiki software. Can someone here contact them to get in touch with their professor and move their activities here from Wikipedia? Thanks. MBisanz 17:22, 29 September 2008 (UTC)
- Thanks for the note. I left messages for two of the students who have edits. link 1 , link 2 --mikeu talk 18:01, 29 September 2008 (UTC)
Sharing on wikipedia.
Hello.
I just had an idea which is that we can make a page or something where all the people here would share their photoshop projects and info about how they could do that and so on. There r also many things we can share, such as powerpoint presentations, photos of models and such things..But I am not really sure if we can upload such things on wikipedia, especially that some of these have large sizes and so on...I hope that I hear from many people soon!..THANKS! --unknown001 12:12, 21 September 2008 (UTC) SEEMS NO ONE IS INTERESTED>>UH OH :( --unknown001 18:30, 2 October 2008 (UTC)
- Sounds like a great idea! Why don't you start it over at Photoshop or Presenting or whatnot? Now, you cannot upload such things onto Wikipedia, but it's perfectly appropriate (IMO, at least) to include them here. The Jade Knight (d'viser) 06:02, 3 October 2008 (UTC)
Reading Wikiversity on a Palm
I tried and can not get Plucker to create an ebook out of Wikiversity pages for me to read. I would love to be able to read the pages on my palm without using its browser. Is there any way to get palm readable ebooks of pages or a way to get Plucker to work with Wikiversity? (this question from new user: User:Sclewin)
- Sclewin, It is an interesting idea. I don't know about Plucker, but may "printable versions" (e.g. [4])work? Hillgentleman|Talk 17:57, 16 September 2008 (UTC)
- Maybe we need to look into an extension like this mw:Extension:Collection? It seems to work on -- Jtneill - Talk - c 08:42, 28 September 2008 (UTC)
- Jtneill, I haven't used that extension, Here is one way (which may not be the simplest) to do it if you are interested:
- 1. set up your own wiki (if you haven't done it, and if you use windows, try mw:wiki on a stick, choose WOS which works better)
- 2. export the pages that you want to your wiki
- 3. install the extensions that you like, including the "extension:collection" that you want to use
- 4. play with your pages with your extensions. Hillgentleman|Talk 09:36, 28 September 2008 (UTC)
- If printable version works for you, a lower-tech way is to create a list of pages (or a list of categories), and use a robot to fetch the html of these pages. Hillgentleman|Talk 09:39, 28 September 2008 (UTC)
Is it possilbe to get something like for wikiversity? It is very hard to read wikiversity pages on mobile devices. --A3pbe 19:09, 4 October 2008 (UTC) | http://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/September_2008 | crawl-002 | refinedweb | 16,675 | 58.52 |
Created on 2011-09-06 18:37 by Claudiu.Popa, last changed 2020-11-30 02:13 by Ark-k.
> inspect.getsource called with a class defined in the same file fails
> with TypeError: <module '__main__' (built-in)> is a built-in class
The error message makes me think that getsource(__main__) was used, not getsource(SomeClass). Can you check again?
Yes. On Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32, the result for the following lines:
import inspect
class A:
pass
inspect.getsource(A)
is:
Traceback (most recent call last):
File "E:/Scripts/Snippets/test_inspect_bug.py", line 4, in <module>
inspect.getsource(A)
File "C:\Python32\lib\inspect.py", line 694, in getsource
lines, lnum = getsourcelines(object)
File "C:\Python32\lib\inspect.py", line 683, in getsourcelines
lines, lnum = findsource(object)
File "C:\Python32\lib\inspect.py", line 522, in findsource
file = getsourcefile(object)
File "C:\Python32\lib\inspect.py", line 441, in getsourcefile
filename = getfile(object)
File "C:\Python32\lib\inspect.py", line 406, in getfile
raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <module '__main__' (built-in)> is a built-in class
>>>
I forgot to mention that I executed this code directly in IDLE. It seems to work perfectly on command line though.
> It seems to work perfectly on command line though.
If the code is saved in a file, yes, but not in an interactive interpreter. This is not actually related to IDLE, but to the fact that inspect.getsource merely finds the __file__ attribute of the module object for its argument. If a module object has no file, the error message indicates that it’s a built-in module (like sys), but this fails to take into account the special __main__ module in an interactive interpreter.
It might be worth it to improve the error message, and in any case the documentation can be improved.
In duplicate Issue24491, zorceta notes: "Both python.exe and IDLE can't. IPython is able to, as it inserts REPL input into linecache."
When provided object is not from a file, like input in interactive shell, `inspect` internals will check for it in `linecache`, which official Python shell and IDLE won't put interactive shell input into, yet. This can be simply solved.
Whether interactive shell input can be put into `linecache` may be a problem, but it'll make life easier, as interactive shell saves time from edit-save-run 'loop'.
btw, I changed the title, since I don't think, what original author thought need to be documented, is absolutely right.
> When provided object is not from a file
should be
'When `inspect` can't find the source file of provided object'.
My mistake.
I just ran into this issue trying to introspect an IPython session, in which case the __main__ module doesn't have a file associated with it.
But it turns out that methods defined in a class do have source code associated with them, so it's possible to add a workaround for the common case where a class actually has methods.
Code:
The problem being discussed here just came up on Stack Overflow today:
The cause of the incorrect error message is pretty clear. The relevant code from `inspect.getfile` should do something better when the object has a `__module__` attribute, but the module named (when looked up in `sys.modules`) does not have a `__file__` attribute. Currently it says the module is a builtin class, which is total nonsense.
A very basic fix would be to have an extra error case:
if isclass(object):
if hasattr(object, '__module__'):
object = sys.modules.get(object.__module__)
if hasattr(object, '__file__'):
return object.__file__
raise TypeError() # need a relevant message here!!!
raise TypeError('{!r} is a built-in class'.format(object))
It might be easier to make a meaningful message if the code after the first `if` didn't overwrite `object` with the module.
But, really, it would be nice to figure out a better fix, which would make the relevant inspect functions actually work for classes defined interactively in the `__main__` module.
So, what would be the right approach here? Store the interactive session's input text in memory?
Probably. Figure out a protocol to inject them into linecache, perhaps. But I'm not sure such a thing would be accepted. If you can figure out a way to make it work at least theoretically, it would probably be best to talk about it on python-ideas first.
In the meantime it would be nice to improve the error message, which is what we should use this issue for.
See how IPython stores source from interactive input and why it's not appropriate for vanilla REPL IMO.
Do we really need to say that getsource(object) can only get the object's source if it is accessible from the object? Getsource also fails if a module is loaded from a .pyc with not corresponding .py available.
The problem is not the call being in __main__. When I put the three lines (with the 3rd wrapped with print()) in an IDLE editor and run, and re-inspect, I get
======================== RESTART: F:\Python\a\tem3.py ========================
class A:
pass
>>> inspect.getsource(A)
'class A:\n pass\n'
Ditto if I run > py -i -m a.tem3
If I continue in IDLE's Shell
>>> class B: pass
>>> inspect.getsource(B)
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
inspect.getsource(B)
File "F:\dev\37\lib\inspect.py", line 973, in getsource
lines, lnum = getsourcelines(object)
File "F:\dev\37\lib\inspect.py", line 955, in getsourcelines
lines, lnum = findsource(object)
File "F:\dev\37\lib\inspect.py", line 812, in findsource
raise OSError('could not find class definition')
OSError: could not find class definition
If I enter the three lines above in a fress python or IDLEs shell, I get the TypeError above.
IDLE does store interactive inputs into linecache, so that tracebacks contain the offending line (unlike interactive python). But it does so on a statement by statement basis, so that each entry is treated as a separate file. In a traceback for an exception in a multiline statement, the line number is relative to the statement.
>>> def f():
# line2 of f
1/0
>>> f()
Traceback (most recent call last):
File "<pyshell#13>", line 1, in <module>
f()
File "<pyshell#12>", line 3, in f
1/0
ZeroDivisionError: division by zero
Interactive python displays '<stdin>' as the file for all entries. IDLE numbers them, so previous statements remained cached. I consider enhanced interactive tracebacks to be an important feature.
But I don't see how to attach individual pseudofile names to classes and functions so that getsource could find their source lines.
This is also an issue even for non-interactive scenarios:
When doing `python -c '<some code>'` inspect.getsource does not work and there are no stack traces.
Perhaps this case will be easier to fix? | https://bugs.python.org/issue12920 | CC-MAIN-2021-21 | refinedweb | 1,169 | 65.93 |
This is a follow-up to this an earlier thread about the
problem with
ShapeRoi:
The core problem is that
ShapeRoi doesn’t handle “open”
rois properly – things like
Lines and polyline
PolygonRois.
I’ve also created a github issue about this:
If people think that it makes sense to address this and
that this is the right approach, I will try to finish up
a replacement version of ShapeRoi.java (following the
draft
ShapeRoi2).
Currently
ShapeRoi2 is supposed to work correctly for the
“basic”
ShapeRoi operations, and generally behave no worse
than
ShapeRoi. (Constructing a
ShapeRoi2 from a
Shape is not expected to work properly with this version.)
Please let me know if you find any issues with the core
functionality of
ShapeRoi2.
Here are two test images that illustrate some of the
problems with
ShapeRoi and compare
ShapeRoi with a
draft of the proposed fix.
The first image shows
ShapeRoi performing correct set
arithmetic with closed rois (in this case
EllipseRois):
The second image shows how
ShapeRoi fails with an open
roi (in this case a
Line) while the proposed fix seems
to work:
These two test images are laid out as follows:
The first three rows show how two rois are rendered by:
- the “classical” roi (viz.
EllipseRoi,
Roi, and
Line);
ShapeRoi(constructed from the classical roi);
ShapeRoi2, the proposed fix for
ShapeRoi.
The orange outlines are boundaries drawn by
roi.drawPixels (ip). The cyan solid shapes are
from
ip.fill (roi).
The next three rows (rows 4 through 6) are again classical,
ShapeRoi, and
ShapeRoi2, now illustrating set arithmetic.
Classical rois don’t do set arithmetic, so they are just
the two rois drawn on top of one another to guide the eye.
For the
ShapeRoi and
ShapeRoi2 rows (rows 5 and 6), the
five columns (of boundary / interior pairs) are the results
of the
or() (union),
and() (intersection),
xor()
(symmetric difference), and the two orders of
not() (A-B
and B-A) operations.
In the first image everything works as expected. In the
second image, the
Line roi displays correctly as the
classical
Line and as the fixed
ShapeRoi2 constructed
from the
Line. But when a
ShapeRoi is constructed from
Line, it displays only with
drawPixels(), not with
fill(), and vanishes entirely when used in set arithmetic.
Here is the jython script that generates these (and other)
test images:
from java.awt import Color from ij import IJ from ij.gui import EllipseRoi from ij.gui import Line from ij.gui import PointRoi from ij.gui import PolygonRoi from ij.gui import Roi from ij.gui import ShapeRoi from ij.gui import ShapeRoi2 def setRoiCentroidLocation (roi, xc, yc): x1 = int (roi.getBounds().getX()) y1 = int (roi.getBounds().getY()) xd = xc - int (round (roi.getContourCentroid()[0])) yd = yc - int (round (roi.getContourCentroid()[1])) roi.setLocation (x1 + xd, y1 + yd) return roi # image size iWidth = 1024 iHeight = 640 # roi locations hBase = 50 # centroid-x of first roi vBase = 60 # centroid-y of first roi hFOff = 85 # offset of "fill" from "drawPixels" hOff = 205 # x-offset for next roi column vOff = 95 # y-offset for next roi row vXOff = 315 # y-offset for first set-arithmetic row cBase = 375 # centroid-y of first "combo" roi # rois roisA = [] roisB = [] titles = [] roisA.append (EllipseRoi (0, 0, 50, 50, 0.2)) # ellipse roisB.append (EllipseRoi (0, 50, 50, 0, 0.2)) # ellipse titles.append ('ellipses') roisA.append (Roi (0, 0, 50, 30, 25)) # rounded-rectangle roisB.append (Line (0, 50, 50, 0)) # line titles.append ('rounded-rectangle / line') ptx = [ 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 10, 14, 10, 14, 10, 14, 10, 14, 10, 14, 20, 24, 20, 24, 20, 24, 20, 24, 20, 24 ] pty = [ 0, 0, 10, 10, 20, 20, 30, 30, 40, 40, 1, 1, 11, 11, 21, 21, 31, 31, 41, 41, 2, 2, 12, 12, 22, 22, 32, 32, 42, 42 ] roisA.append (PointRoi (ptx, pty, len (ptx))) # points roisB.append (Roi (0, 1, 50, 20)) # rectangle titles.append ('points / rectangle') pgx = [ 25, 50, 50, 25, 0, 0 ] pgy = [ 5, 20, 30, 45, 30, 20 ] roisA.append (PolygonRoi (pgx, pgy, Roi.POLYGON)) # polygon (hexagon) plx = [ 20, 40, 10, 30, 30, 10, 40, 20 ] ply = [ 20, 0, 0, 20, 30, 50, 50, 30 ] roisB.append (PolygonRoi (plx, ply, Roi.POLYLINE)) # polyline titles.append ('polygon / polyline') roisA.append (Line (0, 50, 50, 0)) # line roisB.append (Line (0, 0, 50, 50)) # line titles.append ('lines -- intersecting') roisA.append (Line (0, 51, 51, 0)) # line roisB.append (Line (0, 0, 51, 51)) # line titles.append ('lines -- "missed" intersection') ops = ['or', 'and', 'xor', 'A-B', 'B-A'] for i in range (len (titles)): imp = IJ.createImage (titles[i], 'RGB ramp', iWidth, iHeight, 1) ip = imp.getProcessor() ip.multiply (0.125) ip.add (31.0) # draw rois for ir in [0, 1]: r = roisA[i] if ir == 0 else roisB[i] for j in range (3): # "classical", ShapeRoi, ShapeRoi2 cx = hBase + ir * hOff cy = vBase + j * vOff rd = r.clone() rf = r.clone() setRoiCentroidLocation (rd, cx, cy) setRoiCentroidLocation (rf, cx + hFOff, cy) if j == 1: rd = ShapeRoi (rd) rf = ShapeRoi (rf) if j == 2: rd = ShapeRoi2 (rd) rf = ShapeRoi2 (rf) ip.setColor (Color.orange) rd.drawPixels (ip) ip.setColor (Color.cyan) ip.fill (rf) # draw set-arithmetic rois io = 0 for op in ops: for j in range (3): rad = roisA[i].clone() raf = roisA[i].clone() rbd = roisB[i].clone() rbf = roisB[i].clone() cx = hBase + io * hOff cy = vBase + vXOff + j * vOff setRoiCentroidLocation (rad, cx, cy) setRoiCentroidLocation (raf, cx + hFOff, cy) setRoiCentroidLocation (rbd, cx, cy) setRoiCentroidLocation (rbf, cx + hFOff, cy) if j == 0: if op == 'or': ip.setColor (Color.orange) rad.drawPixels (ip) rbd.drawPixels (ip) ip.setColor (Color.cyan) ip.fill (raf) ip.fill (rbf) else: if j == 1: rad = ShapeRoi (rad) rbd = ShapeRoi (rbd) raf = ShapeRoi (raf) rbf = ShapeRoi (rbf) if j == 2: rad = ShapeRoi2 (rad) rbd = ShapeRoi2 (rbd) raf = ShapeRoi2 (raf) rbf = ShapeRoi2 (rbf) if op == 'or': rad.or (rbd) raf.or (rbf) rxd = rad rxf = raf if op == 'and': rad.and (rbd) raf.and (rbf) rxd = rad rxf = raf if op == 'xor': rad.xor (rbd) raf.xor (rbf) rxd = rad rxf = raf if op == 'A-B': rad.not (rbd) raf.not (rbf) rxd = rad rxf = raf if op == 'B-A': rbd.not (rad) rbf.not (raf) rxd = rbd rxf = rbf ip.setColor (Color.orange) rxd.drawPixels (ip) ip.setColor (Color.cyan) ip.fill (rxf) io += 1 imp.show()
To run this script you will need
ShapeRoi2. Here is its
jar file, shape_roi_2.jar:
shape_roi_2.jar (13.9 KB)
Add it to a directory from which Fiji / ImageJ loads classes.
(I put it in the plugins directory.)
For completeness, the code for ShapeRoi2.java appears below.
(It is renamed and posted as ShapeRoi2_java.tif to get by the
forum limitations.) It is almost entirely copy-pasted from the
original ShapeRoi.java, with changes isolated to the
roiToShape() method, primarily in the new
if (!roi.isArea()) if-block.
ShapeRoi2_java.tif (51.4 KB)
Thanks, mm | https://forum.image.sc/t/proposed-fix-for-shaperoi-issue/29391 | CC-MAIN-2019-39 | refinedweb | 1,172 | 67.96 |
2 days now, it is messing up my -app.xml file.
Im targeting AIR 2.5 where versioNumber tag is ment to be used,
but FP ALWAYS adds by itself version tag as well, and the complaining about illegal attributes in XML
descriptor file. Then i have to manually remove <version> tag and i can not compile my app as AIR.
What im trying to do is to target Playbook.
For some reason my apps are now aligned center-center on PB simulator and not TL (as spoken in Flash terms).
I think (belive) that "aligning" problem has to do something with entire project being messed up or using wrong
settings. And targeting 2.0 doent work since PlayBook need 2.5 namespace in descriptor...
what is this "bug" or what the heck am i doing wrong ?
Really old thread here, checking in. | https://forums.adobe.com/thread/822379 | CC-MAIN-2018-13 | refinedweb | 143 | 76.72 |
: (Score:3, Insightful)
You actually prefer XML???????
Re: (Score:1)
Using XML is like sticking your nuts in a vice and squeezing them until they burst. Although in the end it's still more pleasant than using: (Score:1)
How so, specifically? I've never had an issue with it, but then I don't use bullshit scripting languages that force me to do lots of XML processing, let my tools do it for me.
So maybe you should rephrase - if you're using c. 1992 scripting languages, XML is total shit.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
But it's easier now.
Re: (Score:2)
But the problem is -- as I mentioned in a post further up the page -- that JSON throws away some data type information. So when I use JSON, I have to reconstruct some of my data types when I use from_json. But I don't have to do that with XML.
And that's definitely a problem with JSON, not Ruby.
Re: (Score:3, Interesting)
"The funny thing is that, at the end of the day, JSON and XML are the same thing, only syntactically different."
Yes, exactly. But XML is readable by people. JSON is not. Just try to read any big dataset in JSON, especially if it's minified. Good luck. At least with XML you have a shot.
Having said that: there are lots of good tools for converting from one to the other, so it could be a lot worse.
You make a good point about standards and validation, though, too. That's why business data interchanges are generally built on XML, and not JSON. Even though JSON is generally more efficient.
Re: (Score:1)
Yes, exactly. But XML is readable by people
No it isn't. Neither of them is. Not directly. They are *both* readable by humans with a good browser/editor. Tell the editor guys to get crackin' if they haven't already. When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it. Visual Studio even does that with C for cryin' out loud. Class graph browsers for C++ have been out for like... forever. I don't work with XML or JSO
Re: (Score:2)
"No it isn't. Neither of them is. Not directly."
Yes, it is. If you don't believe me, I have posted a link to a simple example below. Not only is the JSON harder to read, it throws away data type information unnecessarily.
"When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it."
This is old stuff. TextMate (just for one example), has been doing that for a long time. Here's an example [postimg.org]. Notice how the line numbers skip where the code is collapsed.
You can also open XML in Firefox, and again it does exactly the same thing: you can expand and collapse levels at will.
Re: (Score:2)
XML is much more mature. XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language.
Which means that XML will still be around in 10 years, and can safely be used today for major projects.
Re: (Score:3)
Re: (Score:1)
IF the situation calls for HUMANS to read the data, I sure as hell do prefer XML. No contest. JSON is virtually unreadable.
Like I said: it's fine for computer data interchange, but when it comes to human intervention, give me XML any day.
I'm not claiming XML is perfect, by any stretch of the imagination. But when humans rather than computers need to deal with the data, it beats the shit out of JSON.
Re: (Score:1)
How is JSON hard to read? It's just lists of key/value pairs
Re: (Score:1, Insightful)
Really. Non of that totally unnecessary tag BS inherited from a printer definition spec (of all absurd things.) And key/value pairs are a hell of a lot easier to insert into a database in addition to being easier to read.
Re: (Score:3)
"Really. Non of that totally unnecessary tag BS inherited from a printer definition spec (of all absurd things.) And key/value pairs are a hell of a lot easier to insert into a database in addition to being easier to read."
Key-value pairs are a tiny subset of all data types. There are many data types that they have to struggle to represent very well. And when they try, the result (to the human eye) is a huge mess.
You're entitled to your opinion of course. But I think you're looking at it from a very narrow perspective. Have you ever actually had to program for the exchange of complex data sets? By that I mean something quite a bit more involved than a web store?
Re: (Score:2)
"Yes. I'm lucky LISP can parse XML since they are really only just a special case of S-Expressions. Once out of that horrid mess of printer tags it was much more straightforward to validate them and insert them in all their complexity into a nicely normalized relational database."
You are conflating XML and SGML. While technically XML is a subset of SGML, it doesn't contain "printer tags". It literally doesn't have any. XML tags are strictly data description.
Saying that XML is SGML is kind of like saying "car" is LISP. The former is a clearly-specified tool used for certain specific things. The latter is a generalized tool for many things. You wouldn't write an entire language like LISP to perform the function car performs. Nor would you write a specification like SGML (which does
Re: (Score:2)
"No it is not weird. XML is weird because it contains and is based on printer control cruft. Lots of printer control cruft. An unnecessary tag is a tag is a tag is a fucking tag."
It does nothing of the sort. XML is a data description language. It's parent language -- SGML -- had a LOT of printer specification stuff in it. But XML has NONE. Not one little bit.
Jeez, guy. Pick up a book.
"An unnecessary tag is a tag is a tag is a fucking tag."
Then show me how to do the same thing without those tags that you call "unnecessary". Where are you going to get the information necessary to validate your data?
I linked to an example further up the page. XML preserved the data type, while JSON just turns any data it doesn't understand into a stri
Re: (Score:2)
XML has structures, standards, validation and flexibility that JSON sorely lacks. As someone else wrote above, the main thing JSON has going for it is that it's already JavaScript. Big deal.
I linked to a clear example further up. XML preserved my simple data structure. JSON threw away information about my data that I would have to supply myself later, if I were to use JSON to
Re: (Score:2)
Neither JSON nor XML is easily writable without special tools.
YAML attempts to be writable, but the grammar and parser are huge and slow.
RSON [google.com] is a superset of JSON that is eminently readable/writable, and much simpler than YAML, allowing, for example, for human-maintained configuration files.
The reference Python parser operates about as fast as the unaccelerated Python library pure JSON parser.
Re: (Score:2)
"Neither JSON nor XML is easily writable without special tools. "
Sure they are. Take just about any object in Ruby and call [object].to_xml or [object].to_json.
More relevant to the discussion though, I think, is what someone else said above:
"XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language. All JSON really has going for it is that it's already JavaScript."
I would have to say the same for RSON.
While it is true that they are syntactical versions of one another, XML is far less ambiguous. In a way, XML versus JSON is a lot like Java versus JavaScript. The former have more tightly defined specifications, and less ambiguity. (I.e., Java will not let you treat a string like an integer
Re: (Score:2)
But I guess I suck at that myself, since we're obviously not communicating properly.
Obviously there are libraries in all sorts of languages to read/write both.
Re: (Score:3)
You actually prefer XML???????
Yes, as I deal in data interchange all the time, XML is great as it allows schema definition/sharing (XSD) and XSLT is a mature transformation language, that, after many years in the woods, is now available with functional capabilities (XSLT v3.0).
The only problem we have is that often, endpoint partners/vendors don't provide the XSD, nor do they share how they plan to validate files we send them. Or they ignore our XSD. But I still can't imagine things would be better if JSON were the interchange format.
Re: (Score:2)
Re: (Score:2)
Does JSON support namespaces? AFAIK it doesn't, and that would seem to make it suitable only for fairly simple data interchange and not really scalable. if it is a bit ugly!
I know it's bad-form replying to my own post, but it does appear that there is some kind of namespacing going on in the OData spec [oasis-open.org]. Does anyone know if this namespacing is part of the JSON standard, or is it just a convention that OASIS are using?
:D
Eitherway, I still prefer XML!
Re: (Score:1)
Yay! (Score:3)
Good!!!
}
They cracked the code on good web programming standards lol.
Re: (Score:1)
If (PlatformIndepenentProgrammingRelated == True) And (RelatedToJava == False) {
Good!!!
}
They cracked the code on good web programming standards lol.
string message = "";
If (PlatformIndepenentProgrammingRelated &&
!RelatedToJava)
{
message = "Good!!";
}
Re: (Score:1)
std::string message because I hate the guy who gets stuck maintaining things
Considering its the std:: c++ library, you should enable it correctly in stdafx.h for your whole project.
#include
using namespace std;
Re: (Score:2)
Ok, you have types with leading lowercase letters, variables with both leading uppercase and lowercase letters, an "If" keyword with a captial "I" as in Microsoft BASIC, and you initialize a string unnecessarily. Please turn in your geek cred card.
:-)
He's logged in with a Google+ account. He never HAD one. In fact he actually likes beta!
Re: (Score:1)
Ok, you have types with leading lowercase letters, variables with both leading uppercase and lowercase letters. with a captial "I"
Copy and paste for the win.
and you initialize a string unnecessarily.
Necessarily, i initialize the string correctly and give it a NULL value, before i go ahead and play with it.
I'd love to see the values of your variables. Wonder how many of them are uninitialized and causing havoc in your code.
Re: (Score:2)
O'Data (Score:5, Funny)
An Irish android? How appropriate!
I'm not clear.... (Score:2)
I'm not clear here, isn't that the purpose of TCP/IP?
Re: (Score:2)
TCP is for reliable in order transmission/reception of octects.
Re: (Score:2)
TCP is for reliable in order transmission/reception of octects.
...and standardizes nothing about the content of those octets, so, as you suggest, TCP, by itself, is insufficient to "[simplify] data sharing across disparate applications in enterprise, cloud, and mobile devices".
Oh the irony (Score:4, Interesting)
At the link for the specifications OData JSON Format Version 4.0 [oasis-open.org]
The documents that are tagged as Authoritative are
.doc, not even .docx
Re: (Score:2, Interesting)
Oh the irony
History, not irony.
Microsoft took over OASIS in 2006 as part of their campaign to scuttle open document formats. They're still running the show there.... [zdnet.com]
Reinvention of RDF + SPARQL (Score:2)
Re: (Score:3)
SPARQL appears to be read only, and to be restricted to data in kvp or 3-tuples.
OData supports mutable entities, change and request batching, and http GET semantics for data access. It would appear to map much better to real-world databases and business use-cases.
Re: (Score:3)
Re: (Score:2)
You could be right.
OData predates SPARQL 1.1, however, and supported all CRUD operations from its inception.
Re: (Score:1)
What is OData? Why should you care? (Score:5, Informative)
OData is (now) a standard for how applications can exchange structured data, oriented towards HTTP and statelessness.
OData consumers and producers are language and platform neutral.
In contrast to something like a REST service, for which clients must be specifically authored and the discovery process is done by humans reading an API doc, ODATA specifies a URI convention and a $metadata format that means OData resources are accessed in a uniform way, and that OData endpoints can have their shape/semantics programmatically discovered.
So for instance, if you have entity named Customer hosted on [foo.com], I can issue an HTTP call like this:
GET... [foo.com]
and get your customers.
furthermore, the metadata document describing your customer type will live at
foo.com/myODataFeed/$metadata
... which means I can attach to it with a tool and generate proxy code, if I like. It makes it easy to build a generic OData explorer type tool, or for programs like Excel and BI tools to understand what your data exposes.
Suppose that your Customers have have an integer primary key, (which I discovered from reading $metadata), and have a 1:N association to an ORders entity. I can therefore write this query:
GET... [foo.com]
.. and get back the Orders for just customer ID:1
I can add additional operators to the query string, like $filter or $sort, and data-optimization operators like $expand or $select.
OData allows an arbitrary web service to mimic many of the semantics of a real database, in a technology neutral way, and critically, in a way that is uniform for anonymous callers and programmatically rigorous/discoverable.
Examples of OData v3 content are available here:... [odata.org]
OData V4 is a breaking protocol change from V3 and prior versions, but has been accepted as a standard
And, shameless plug: If you want to consume and build OData V1/V2/V3 services easily, check out Visual Studio LightSwitch
:)
Re: (Score:2)
Sounds neat but doesn't solve my JSON problems.
One project might use "customer" another "client" or "businessname". Each of these may have a "description", "overview", "synopsis" and a "type"/"kind"/"businesstype" field.
So code discovery of data doesn't work unless we have agreed to standardized field names in advance, but now there's always exceptions to look out for and name conflicts.
Now even if we know the names of every field, how do we know exactly what sort of data will be returned? A name alone is
Re: (Score:2)
I suggest you look at the $metadata document for the service I linked to.
The property names, conceptual storage types, relationship info, etc, is all in there.
I'm not sure what problem you're trying to solve, exactly.
Then use XML (Score:2).
JSOAP? (Score:1)
Microsoft. (Score:2)
Know who leads the OData brigade? Microsoft. Get your crying ready, neckbeards.
On a more serious note, OData is awesome. If you've ever tried to provide a good data query API (supporting boolean syntax arbitrary queries) via a web service it's not easy. OData does it very well.
Sure, you'll get some whining from people who don't understand it that it forces you to expose your data model to the outside world, but it does absolutely no such thing. You can, should you choose, expose a complete abstraction
Re: (Score:2)
Sold!!! (Score:1)
REST buzzword (Score:2)
Representational state transfer
- from wikipedia
its a framework ? | https://news.slashdot.org/story/14/03/17/1720214/oasis-approves-odata-40-standards-for-an-open-programmable-web | CC-MAIN-2017-39 | refinedweb | 2,682 | 66.23 |
Last week, I gave a webinar on the topic of asynchrnous exceptions in Haskell. If you missed the webinar, I encourage you to check out the video. I've also made the slides available.
As is becoming my practice, I wrote up the content for this talk in the style of a blog post before creating the slides. I'm including that content below, for those who prefer a text based learning method.
Runtime exceptions are common in many programming languages today. They offer a double-edged sword. On the one hand, they might make it (arguably) easier to write correct code, by removing the burden of checking return codes for every function. On the other hand, they can hide potential exit points in code, possibly leading to lack of resource cleanup.
GHC Haskell ups the ante even further, and introduces asynchronous exceptions. These allow for very elegant concurrent code to be written easily, but also greatly increase the surface area of potentially incorrect exception handling.
In this talk today, we're going to cover things from the ground up:
- Defining different types of exceptions
- Correct synchronous exception handling
- How bottom values play in
- Basics of async exceptions
- Masking and uninterruptible masking
- Helper libraries
- Some more complex examples
In order to fully address asynchronous exceptions, we're going to have to cover a lot of topics that aren't specifically related to asynchronous exceptions themselves. Don't be surprised that this won't seem like it has anything to do with async at first, we will get there.
Two important things I'd like everyone to keep in mind:
- Most of the time, simply using the appropriate helper library will work, and you won't have to remember all of the details we discuss today. It's still worthwhile to understand them.
- This talk is taking for granted that runtime synchronous and asynchronous exceptions are part of GHC Haskell, and discuss how best to work with it. There are lots of debates about whether they're a good idea or not, and when they should and shouldn't be used. I'm intentionally avoiding that for today's topic.
To whet your appetite: by the end of this talk, you should bad able to answer—with a few different reasons—why I've called this function
badRace:
Motivating example
Most complexity around exceptions pops up around scarce resources, and allocations which can fail. A good example of this is interacting with a file. You need to:
- Open the file handle, which might fail
- Interact with the file handle, which might fail
- Close the file handle regardless, since file descriptors are a scarce resource
Pure code
Exceptions cannot be caught in pure code. This is very much by design, and fits in perfectly with the topic here. Proper exception handling is related to resource allocation and cleanup. Since pure code cannot allocate scarce resources or clean them up, it has no business dealing with exceptions.
Like all rules, this has exceptions:
- You can still throw from pure code
- You can use
unsafePerformIOfor allocations
- Memory can be allocated implicitly from pure code
- Not a contradiction! We don't consider memory a scarce resource
- If you really want, you can catch exceptions from pure code, again via
unsafePerformIO
But for the most part, we'll be focusing on non-pure code, and specifically the
IO monad. We'll tangenitally reference transformers later.
The land of no exceptions
Let's interact with a file in a theoretical Haskell that has no runtime exceptions. We'll need to represent all possible failure cases via explicit return values:
openFile :: FilePath -> IOMode -> IO (Either IOException Handle) hClose :: Handle -> IO () -- assume it can never fail usesFileHandle :: Handle -> IO (Either IOException MyResult) myFunc :: FilePath -> IO (Either IOException MyResult) myFunc fp = do ehandle <- openFile fp ReadMode case ehandle of Left e -> return (Left e) Right handle -> do eres <- usesFileHandle handle hClose handle return eres
The type system forces us to explicitly check whether each function succeeds or fails. In the case of
usesFileHandle, we get to essentially ignore the failures and pass them on to the caller of the function, and simply ensure that
hClose is called regardless.
Land of synchyronous exceptions
Now let's uses a variant of Haskell which has synchronous exceptions. We'll get into exception hierarchy stuff later, but for now we'll just assume that all exceptions are
IOExceptions. We add in two new primitive functions:
throwIO :: IOException -> IO a try :: IO a -> IO (Either IOException a)
These functions throw synchronous exceptions. We'll define synchronous exceptions as:
Synchronous exceptions are exceptions which are generated directly from the
IO actions you are calling.
Let's do the simplest transformation from our code above:
openFile :: FilePath -> IOMode -> IO Handle hClose :: Handle -> IO () usesFileHandle :: Handle -> IO MyResult myFunc :: FilePath -> IO MyResult myFunc fp = do handle <- openFile fp ReadMode res <- usesFileHandle handle hClose handle return res
The code is certainly shorter, and the types are easier to read too. A few takeaways:
- We can no longer tell whether
openFileand
hClosecan fail by looking at the type signature.
- There's no need to pattern match on the result of
openFile; that's handled for us automatically.
But unfortunately, this code has a bug! Imagine if
usesFileHandle throws an exception.
hClose will never get called. Let's see if we can fix this using
try and
throwIO:
myFunc :: FilePath -> IO MyResult myFunc fp = do handle <- openFile fp ReadMode eres <- try (usesFileHandle handle) hClose handle case eres of Left e -> throwIO e Right res -> return res
And just like that, our code is exception-safe, at least in a world of only-synchronous exceptions.
Unfortunately, this isn't too terribly nice. We don't want people having to think about this each time they work with a file. So instead, we capture the pattern in a helper function:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = do handle <- openFile fp mode eres <- try (inner handle) hClose handle case eres of Left e -> throwIO e Right res -> return res myFunc :: FilePath -> IO MyResult myFunc fp = withFile fp ReadMode usesFileHandle
General principle Avoid using functions which only allocate or only clean up whenever possible. Instead, try to use helper functions which ensure both operations are performed.
But even
withFile could be generalized into something which runs both allocate and cleanup actions. We call this
bracket. And in a synchronous-only world, it might look like this:
bracket :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c bracket allocate cleanup inner = do a <- allocate ec <- try (inner a) _ignored <- cleanup a case ec of Left e -> throwIO e Right c -> return c withFile fp mode = bracket (openFile fp mode) hClose
QUESTION What happens if
cleanup throws an exceptions? What should happen if
cleanup throws an exceptions?
Extensible exceptions
The type signatures we used for
catch and
throwIO are actually a bit of a lie. We've pretended here that all exceptions are of type
IOException. In reality, however, GHC gives us the ability to create arbitrary types which can be thrown. This is in the same spirit as Java, which allows you to create hierarchies of classes.
Let's look at the relevant definitions:
data SomeException = forall e . Exception e => SomeException e class (Typeable e, Show e) => Exception e where toException :: e -> SomeException fromException :: SomeException -> Maybe e throwIO :: Exception e => e -> IO a try :: Exception e => IO a -> IO (Either e a)
The
Exception typeclass defines some way to convert a value to a
SomeException, and a way to try and convert from a
SomeException into the given type. Then
throwIO and
try are generalized to working on any types that are instances of that type class. The
Show instance helps for displaying exceptions, and
Typeable provides the ability for runtime casting.
Here's a simple example of an exception data type:
data InvalidInput = InvalidInput String deriving (Show, Typeable) instance Exception InvalidInput where toException ii = SomeException ii fromException (SomeException e) = cast e -- part of Typeable
Except that
toException and
fromException both have default implementations which match what we have above, so we could instead just write:
instance Exception InvalidInput
You can also create exception hierarchies, for example:
{-# LANGUAGE ExistentialQuantification #-} import Control.Exception import Data.Typeable data MyAppException = InvalidInput String | SomethingElse SomeException deriving (Show, Typeable) instance Exception MyAppException data SubException = NetworkFailure String deriving (Show, Typeable) instance Exception SubException where toException = toException . SomethingElse . SomeException fromException se = do SomethingElse (SomeException e) <- fromException se cast e main :: IO () main = do e <- try $ throwIO $ NetworkFailure "Hello there" print (e :: Either SomeException ())
In OO terms,
SubException is a child class of
MyAppException. You may dislike this kind of adopted OO system, but it's part of GHC Haskell's exception mechanism. It's also vitally important to how we're going to deal with asynchronous exceptions later, which is why we're discussing it now.
Alright, onward to another tangent!
Exceptions in pure code
It's funny that the function for throwing exceptions is called
throwIO, right? Why not just
throw? That's because it's unfortunately used for something else:
throw :: Exception e => e -> a
This generates an exception from within pure code. These kinds of exceptions are sometimes mistakenly called asynchronous exceptions. They are most certainly not! This section is about clearing up this misunderstanding. I'm going to term these kinds of exceptions impure exceptions, because they break pure code.
You can generate these kinds of exceptions a few different ways:
- Using the
throwfunction directly
- Using a function which calls
throw, like
error
- Using partial functions like
head
- Incomplete pattern matches (GHC automatically inserts the equivalent of a call to
throw)
- Creating infinite loops in pure code, where GHC's runtime may detect the infinite loop and throw a runtime exception
Overall, partiality and impure exceptions are frowned upon in the Haskell world, because they're essentially a lie: claiming that a value has type
MyType, when in reality it may also have an exception lurking inside of it. But this talk isn't about passing judgement, simply dealing with things.
There is no mechanism for directly catching impure exceptions. Only the
IO based functions, like
try, are able to catch them. Let's have a look at an example:
import Control.Exception import Data.Typeable data Dummy = Dummy deriving (Show, Typeable) instance Exception Dummy printer :: IO (Either Dummy ()) -> IO () printer x = x >>= print main :: IO () main = do printer $ try $ throwIO Dummy printer $ try $ throw Dummy printer $ try $ evaluate $ throw Dummy printer $ try $ return $! throw Dummy printer $ try $ return $ throw Dummy
QUESTION What do you think is the output of this program?
This exercise relies on understanding GHC's evaluation method. If you're not intimately familiar with this, the solution may be a bit surprising. If there's interest, we can host another FP Complete webinar covering evaluation in the future. Here's the output:
Left Dummy Left Dummy Left Dummy Left Dummy Right Main.hs: Dummy
The fifth example is different than the other four.
- In
throwIO Dummy, we're using proper runtime exceptions via
throwIO, and therefore
Dummyis thrown immediately as a runtime exception. Then
tryis able to catch it, and all works out well.
- In
throw Dummy, we generate a value of type
IO ()which, when evaluated, will throw a
Dummyvalue. Passing this value to
tryforces it immediately, causing the runtime exception to be thrown. The result ends up being identical to using
throwIO.
- In
evaluate $ throw Dummy,
throw Dummyhas type
(). The
evaluatefunction then forces evaluation of that value, which causes the
Dummyexception to be thrown.
return $! throw Dummyis almost identical; it uses
$!, which under the surface uses
seq, to force evaluation. We're not going to dive into the difference between
evaluateand
seqtoday.
return $ throw Dummyis the odd man out. We create a thunk with
throw Dummyof type
()which, when evaluated, will throw an exception. We then wrap that up into an
IO ()value using
return.
trythen forces evaluation of the
IO ()value, which does not force evaluation of the
()value, so no runtime exception is yet thrown. We then end up with a value of type
Either Dummy (), which is equivalent to
Right (throw Dummy).
printerthen attempts to print this value, finally forcing the
throw Dummy, causing our program to crash due to the unhandled exception.
Alright, so what's the point of all of this? Well, two things:
- Despite not passing any judgement in this talk, let's pass some judgement: impure exceptions make things really confusing. You should avoid
throwand
errorwhenever you can, as well as partial functions and incomplete pattern matches. If you're going to use exceptions, use
throwIO.
- Even though the exceptional value appears to pop up in almost “random” locations, the trigger for an impure exception crashing your program is always the same: evaluating a thunk that's hiding an exception. Therefore, impure exceptions are absolutely synchronous exceptions: the
IOaction you're performing now is causing the exception to be thrown.
For the most part, we don't think too much about impure exceptions when dealing with writing exception safe code. Look at the
withFile example again:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = do handle <- openFile fp mode eres <- try (inner handle) hClose handle case eres of Left e -> throwIO e Right res -> return res
If
inner returns an impure exception, it won't cause us any problem in
withFile, since we never force the returned value. We're going to mostly ignore impure exceptions for the rest of this talk, and focus only on synchronous versus asynchronous exceptions.
Motivating async exceptions
Let's try and understand why someone would want async exceptions in the first place. Let's start with a basic example: the
timeout function. We want a function which will run an action for a certain amount of time, and if it hasn't completed by then, kill it:
timeout :: Int -- microseconds -> IO a -> IO (Maybe a)
Let's imagine we built this into the runtime system directly, and allowed a thread to simply die immediately. Then we wrote a program like:
timeout 1000000 $ bracket (openFile "foo.txt" ReadMode) hClose somethingReallySlow
We give our
somethingReallySlow 1 second to complete. What happens if it takes more than 1 second? As described above, the thread it's running on will simply die immediately, preventing
hClose from ever running. This defeats exception safety!
Instead, let's try and create something outside of the runtime system. We'll create a mutable variable for tracking whether the timeout has expired, and an
MVar for the result of the operation. Then we'll use a helper function to check if we should exit the thread. It may look something like:
import Control.Concurrent (threadDelay, forkIO) import Control.Concurrent.MVar import Control.Exception import Control.Monad (when, forever) import Data.IORef import Data.Typeable data Timeout = Timeout deriving (Show, Typeable) instance Exception Timeout type CheckTimeout = IO () timeout :: Int -> (CheckTimeout -> IO a) -> IO (Maybe a) timeout micros inner = do retval <- newEmptyMVar expired <- newIORef False let checkTimeout = do expired' <- readIORef expired when expired' $ throwIO Timeout _ <- forkIO $ do threadDelay micros writeIORef expired True _ <- forkIO $ do eres <- try $ inner checkTimeout putMVar retval $ case eres of Left Timeout -> Nothing Right a -> Just a takeMVar retval myInner :: CheckTimeout -> IO () myInner checkTimeout = bracket_ (putStrLn "allocate") (putStrLn "cleanup") (forever $ do putStrLn "In myInner" checkTimeout threadDelay 100000) main :: IO () main = timeout 1000000 myInner >>= print
On the bright side: this implementation reuses the existing runtime exception system to ensure exception safety, yay! But let's try and analyze the downsides of this approach:
- Since
checkTimeoutruns in
IO, we can't use it in pure code. This means that long-run CPU computations cannot be interrupted.
- We need to remember to call
checkTimeoutin all relevant places. If we don't, our
timeoutwon't work properly.
BONUS The code above has a potential deadlock in it due to mishandling of synchronous exceptions. Try and find it!
While this kind of approach kind of works, it doesn't make the job pleasant. Let's finally add in asynchronous exceptions.
Asynchronous exceptions
Async exceptions are exceptions thrown from another thread. There is nothing performed in the currently running thread which causes the exception to occur. They bubble up just like synchronous exceptions. They can be caught with
try (and friends like
catch) just like synchronous exceptions. The difference is how they are thrown:
forkIO :: IO () -> IO ThreadId throwTo :: Exception e => ThreadId -> e -> IO ()
In our hand-written
timeout example above, calling
throwTo is like setting
expired to
True. The question is: when does the target thread check if
expired has been set to
True/an async exception was thrown? The answer is that the runtime system does this for us automatically. And here's the important bit: the runtime system can detect an async exceptions at any point. This includes inside pure code. This solves both of our problems with our hand-rolled timeout mentioned above, but it creates a new one.
The need for masking
Let's revisit our
withFile function:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = do handle <- openFile fp mode eres <- try (inner handle) hClose handle case eres of Left e -> throwIO e Right res -> return res
But now, let's add in the async checking actions that the runtime system is doing for us:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = do checkAsync -- 1 handle <- openFile fp mode checkAsync -- 2 eres <- try (inner handle) checkAsync -- 3 hClose handle checkAsync -- 4 case eres of Left e -> throwIO e Right res -> return res
If
checkAsync (1) or (4) throws an exception, everything's fine. But if (2) or (3) throws, we have a resource leak, and
hClose won't be called! We need some way to tell the runtime system “don't check for async exceptions right now.” We call this masking, and we'll introduce the
mask_ function to demonstrate it:
mask_ :: IO a -> IO a
This function says “run the given action, and don't allow any async exceptions to get detected while it's running.” We can use this to fix our
withFile function:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = mask_ $ do -- doesn't run, masked! -- checkAsync -- 1 handle <- openFile fp mode -- same -- checkAsync -- 2 eres <- try (inner handle) -- same -- checkAsync -- 3 hClose handle -- same -- checkAsync -- 4 case eres of Left e -> throwIO e Right res -> return res
We've fixed our resource leak, but we've introduced a new problem. Now there's no way to send an asynchronous exception to any part of our
withFile function, including
inner. If the user-supplied action takes a long time to run, we've essentially broken the
timeout function. To work with this, we need to use the
mask function, which provides a way to restore the previous masking state:
mask :: ((forall a. IO a -> IO a) -> IO b) -> IO b
ADVANCED You may wonder why this restores the previous masking state, instead of just unmasking. This has to do with nested maskings, and what is known as the “wormhole” problem. We're not going to cover that in detail.
Now we can write a much better
withFile:
withFile :: FilePath -> IOMode -> (Handle -> IO a) -> IO a withFile fp mode inner = mask $ \restore -> do handle <- openFile fp mode eres <- try (restore (inner handle)) hClose handle case eres of Left e -> throwIO e Right res -> return res
It's completely safe to restore the masking state there, because the wrapping
try will catch all asynchronous exceptions. As a result, we're guaranteed that, no matter what,
hClose will be called if
openFile succeeds.
Catch 'em all!
We need to make one further tweak to our
withFile example in order to make it type check. Let's look at a subset of the code:
eres <- try (restore (inner handle)) case eres of Left e -> throwIO e
The problem here is that both
try and
throwIO are polymorphic on the exception type (any instance of
Exception). GHC doesn't know which concrete type you want. In this case, we want to catch all exceptions. To do that, with use the
SomeException type, which in OO lingo would be the superclass of all exception classes. All we need is a type signature:
eres <- try (restore (inner handle)) case eres of Left e -> throwIO (e :: SomeException)
Recover versus cleanup
There's nothing wrong with this bit of code. But let's write something slightly different and see if there's a problem.
import Control.Concurrent import Control.Exception import Data.Time import System.Timeout main :: IO () main = do start <- getCurrentTime res <- timeout 1000000 $ do x <- try $ threadDelay 2000000 threadDelay 2000000 return x end <- getCurrentTime putStrLn $ "Duration: " ++ show (diffUTCTime end start) putStrLn $ "Res: " ++ show (res :: Maybe (Either SomeException ()))
The output from this program is:
Duration: 3.004385s Res: Just (Left <<timeout>>)
Despite the fact that the timeout was triggered:
- The duration is 3 seconds, not 1 second
- We get a
Justresult value instead of
Nothing
- Inside the
Justis an exception from the timeout
We've used our ability to catch all exceptions to catch an asynchronous exception. Previously, in our
withFile example, I said this was fine. But for some reason, I'm saying it's not OK here. The rule governing this is simple:
You cannot recover from an asynchronous exception
When people speak abstractly about proper async exception handling, this is the rule they're usually hinting at. It's a simple enough idea, and in practice not that difficult to either explain or implement. But the abstract nature of “safe async exception handling” makes it much scarier than it should be. Let's fix that.
There are two reasons you may wish to catch an exception:
- You need to perform some kind of cleanup before letting the exception bubble up. This is what we do in the case of
withFile: we catch the exception, perform our cleanup, and then rethrow the exception.
- Some action has thrown an exception, but instead of letting it bubble up and take down your entire thread, you want to recover. For example: you tried to read from a file, and it didn't exist, so you want to use some default value instead. In this case, we catch and swallow the exception without rethrowing it.
When dealing with synchronous exceptions, you're free to either perform cleanup and then rethrow the exception, or catch, swallow, and recover from the exception. It breaks no invariants of the world.
However, with asynchronous exceptions, you never want to recover. Asynchronous exceptions are messages from outside of your current execution saying “you must die as soon as possible.” If you swallow those exceptions, like we did in our
timeout example, you break the very nature of the async exception mechanism. Instead, with async exceptions, you are allowed to clean up, but never recover.
Alright, that's nice in theory. In practice, how do we make that work?
GHC's async exception flaw
When generating an exception, how do you decide whether the exception is synchronous or asynchronous? Simple: whether you ultimately use the
throwIO function (synchronous), or the
throwTo function (asynchronous). Therefore, in order to implement our logic above, we need some way to ask after using
try which function threw the exception.
Unfortunately, no such function exists. And it's not just a matter of missing a library function. The GHC runtime system itself tracks no such information about its exceptions. It is impossible to make this differentiation!
I've used two different techniques over the years for distinguishing sync and async exceptions. The older one is now captured in the
enclosed-exceptions package, based on forking threads. This one is heavier weight, and I don't recommend it anymore. These days, I recommend using a type-based approach, which is captured in both the
safe-exceptions and
unliftio packages. (More on these three packages later.)
Word of warning It is entirely possible to fool the mechanism I'm about to describe if you use
Control.Exception directly. My general recommendation is to avoid using that module directly and instead use one of the helper modules that implements the type-based logic I'm going to describe. If you intentionally fool the type based detection, you can end up breaking the invariants we're discussing. Note that, for the most part, you have to try to break this mechanism when using
Control.Exception.
Remember how we have that funny extensible exception mechanism in GHC that allows for OO-like exception hierarchies? And remember how all exceptions are ultimately children of
SomeException? Starting in GHC 7.8, there's a new “child” of
SomeException, called
SomeAsyncException, which is the “superclass” of all asynchronous exception types. You can now detect if an exception is of an asynchronous exception type with a function like:
isSyncException :: Exception e => e -> Bool isSyncException e = case fromException (toException e) of Just (SomeAsyncException _) -> False Nothing -> True isAsyncException :: Exception e => e -> Bool isAsyncException = not . isSyncException
We want to ensure that
throwIO and
throwTo only ever work on synchronous and asynchronous exceptions, respectively. We handle this with some helper, wrapper data types:
data SyncExceptionWrapper = forall e. Exception e => SyncExceptionWrapper e instance Exception SyncExceptionWrapper data AsyncExceptionWrapper = forall e. Exception e => AsyncExceptionWrapper e instance Exception AsyncExceptionWrapper where toException = toException . SomeAsyncException fromException se = do SomeAsyncException e <- fromException se cast e
Next we implement helper conversion functions:
toSyncException :: Exception e => e -> SomeException toSyncException e = case fromException se of Just (SomeAsyncException _) -> toException (SyncExceptionWrapper e) Nothing -> se where se = toException e toAsyncException :: Exception e => e -> SomeException toAsyncException e = case fromException se of Just (SomeAsyncException _) -> se Nothing -> toException (AsyncExceptionWrapper e) where se = toException e
Then we implement modified versions of
throwIO and
throwTo, as well as
impureThrow (a replacement for the
throw function):
import qualified Control.Exception as EUnsafe throwIO :: (MonadIO m, Exception e) => e -> m a throwIO = liftIO . EUnsafe.throwIO . toSyncException throwTo :: (Exception e, MonadIO m) => ThreadId -> e -> m () throwTo tid = liftIO . EUnsafe.throwTo tid . toAsyncException impureThrow :: Exception e => e -> a impureThrow = EUnsafe.throw . toSyncException
Assuming that all exceptions are generated by these three functions, we can now rely upon types to differentiate. The final step is separating out our helper functions into those that cleanup (and rethrow the exception), which can work on any exception type, and those that recover (and do not rethrow the exception). An incomplete list is:
- Recovery
catch
try
handle
- Cleanup
bracket
onException
finally
Here's a simplified version of the
catch function:
import qualified Control.Exception as EUnsafe catch :: Exception e => IO a -> (e -> IO a) -> IO a catch f g = f `EUnsafe.catch` \e -> if isSyncException e then g e -- intentionally rethrowing an async exception synchronously, -- since we want to preserve async behavior else EUnsafe.throwIO e
If you stick to this set of helper functions, you'll automatically meet the rules for safe async exception handling. You can even trivially perform a “pokemon” exception handler (catch 'em all):
tryAny :: MonadUnliftIO m => m a -> m (Either SomeException a) tryAny = try main :: IO () main = tryAny (readFile "foo.txt") >>= print
Uninterruptible masking
Before going down this rabbit hole, it's worth remembering: if you use
Control.Exception.Safe or
UnliftIO.Exception, the complexity of interruptible versus uninterruptible masking is handled for you correctly in the vast majority of cases, and you don't need to worry about it. There are extreme corner case bugs that occur, but in my experience this is very low down on the list of common bugs experienced when trying to write exception safe code.
We've described two types of exceptions: synchronous (those generated by actions in the current thread), and asynchornous (those generated by another thread and sent to our thread). And we've introduced the
mask function, which temporarily blocks all asynchronous exceptions in a thread. Right?
Not exactly. To quote GHC's documentation:
Some operations are interruptible, which means that they can receive asynchronous exceptions even in the scope of a mask. Any function which may itself block is defined as interruptible… It is useful to think of
masknot as a way to completely prevent asynchronous exceptions, but as a way to switch from asynchronous mode to polling mode.
Interruptible operations allow for a protection against deadlocks. Again borrowing from the docs, consider this example:
mask $ \restore -> do a <- takeMVar m restore (...) `catch` \e -> ...
If
takeMVar could not be interrupted, it would be possible for it to block on an
MVar which has no chance of ever being filled, leading to a deadlock. Instead, GHC's runtime system adds the concept that, within a
masked section, some actions can be considered to “poll” and check if there are async exceptions waiting.
Unfortunately, this can somewhat undo the very purpose we introduced
mask for in the first place, and allow resource cleanup to not always occur. Therefore, we have another function which blocks async exceptions, even within interruptible actions:
uninterruptibleMask. The decision on when to use each one is not always obvious, as can be seen by a relavant Github discussion. Here are some general rules:
- If you're inside a
mask, you can always “upgrade” to
uninterruptibleMaskinside. You can't upgrade from unmasked to masked in the same way, because in unmaksed code an async exception can occur anywhere, not just inside an interruptible action.
- You should, whenever possible, avoid using any version of a masking function. They are complicated and low-level functions. Instead, prefer the higher-level functions like
bracket,
finally, and so on.
uninterruptibleMaskintroduces the possibility of a complete deadlock. Interruptible
maskintroduces the possibility of a cleanup action being interrupted, or an action before a cleanup action being interrupted and the cleanup action never getting called. If you're stuck with using a masking function directly, you'll need to think carefully about what your goals are.
Deadlock detection
What's the result of running this program?
import Control.Concurrent main :: IO () main = do mvar <- newEmptyMVar takeMVar mvar
Usually, it will be:
foo.hs: thread blocked indefinitely in an MVar operation
Note that you can't actually rely on this deadlock detection. GHC does a good job of noticing that there are no other references to the
MVar in an active thread, and therefore terminates our thread with an asynchronous exception.
How about this?
import Control.Concurrent import Control.Exception main :: IO () main = do mvar <- newEmptyMVar uninterruptibleMask_ $ takeMVar mvar
This one deadlocks, since we've blocked the async exception. How about a normal mask?
import Control.Concurrent import Control.Exception main :: IO () main = do mvar <- newEmptyMVar mask_ $ takeMVar mvar
The deadlock is detected here and our program exits, since
takeMVar is an interruptible action. So far, so good.
How about this one?
import Control.Concurrent import UnliftIO.Exception main :: IO () main = do mvar <- newEmptyMVar :: IO (MVar ()) tryAny (takeMVar mvar) >>= print putStrLn "Looks like I recovered!"
tryAny will only catch synchronous exceptions (based on the exception type). This prevents us from recovering from asynchronous exceptions, which as we know is a bad idea. Therefore, you would think that
tryAny wouldn't catch the
BlockedIndefinitelyOnMVar exception, and “Looks like I recovered!” would never be printed. However, the opposite is true. Why?
Technically speaking, the
BlockedIndefinitely exceptions (both for
MVars and STM) are asynchronously sent, since they are delivered by the runtime system itself. And as such, we can block them via
uninterruptibleMask. However, unlike other async exceptions, they are triggered directly by actions in the current thread, not a signal from an external thread requesting that our thread die immediately (such as with the
timeout) function. Therefore, it is fully safe to recover from them, and therefore those exception types act like synchronous exceptions.
Helper library breakdown
Above, we mentioned three different helper libraries that are recommended for safer async exception handling. Let's break them down:
enclosed-exceptionsuses an older approach based on forked threads for identifying async exceptions. I would not recommend this for new code.
- The other two libraries both use the type based distinction we've described here today. The difference is in how they handle monad transformers:
safe-exceptionsuses type typeclasses from the
exceptionspackage, like
MonadCatchand
MonadMask
unliftiouses
MonadUnliftIO
Proper monad transformer handling is a completely different topic, which I've covered elsewhere (slides, video). I recommend using
unliftio for all new code.
Rules for async safe handling
Let's summarize the rules we've come up with so far for writing proper exception safe code in Haskell.
- If something must happen, like some kind of cleanup, you must use either
maskor
uninterruptibleMaskto temporarily turn off async exceptions
- If you ever catch an async exception, you must rethrow it (no recovery allowed)
- You should minimize the amount of time you spend in a masked state to ensure prompt response to async exceptions
- As an extension to this: you should therefore minimize the amount of time spent in cleanup code. As an example: having a complex network protocol run inside cleanup code is a bad idea.
Remember that using the correct libraries and library functions will significantly assist in doing these things correctly without breaking your brain each time.
Examples
We've now covered all of the principles of exception handling in Haskell. Let's go through a bunch of examples to demonstrate recommended best practices.
Avoid async exceptions when possible
This is a general piece of advice: don't use async exceptions if you don't have to. In particular, async exceptions are sometimes used as a form of message passing and control flow. There are almost always better ways to do this! Consider this code:
import Control.Concurrent import Control.Concurrent.Async import Control.Monad main :: IO () main = do messages <- newChan race_ (mapM_ (writeChan messages) [1..10 :: Int]) (forever $ do readChan messages >>= print -- simulate some I/O latency threadDelay 100000)
This will result in dropping messages on the floor, since the first thread will finish before the second thread can complete. Instead of using
forever and relying on async exceptions to kill the worker, build it into the channel itself:
#!/usr/bin/env stack -- stack --resolver lts-11.4 script --package unliftio --package stm-chans import UnliftIO (concurrently_, atomically, finally) import Control.Concurrent (threadDelay) import Control.Concurrent.STM.TBMQueue import Data.Function (fix) main :: IO () main = do messages <- newTBMQueueIO 5 concurrently_ (mapM_ (atomically . writeTBMQueue messages) [1..10 :: Int] `finally` atomically (closeTBMQueue messages)) (fix $ \loop -> do mmsg <- atomically $ readTBMQueue messages case mmsg of Nothing -> return () Just msg -> do print msg -- simulate some I/O latency threadDelay 100000 loop)
Lesson: async exceptions are powerful, and they make many kinds of code much easier to write correctly. But often they are neither necessary nor helpful.
Is the following an example of good or bad asynchronous exception handling?
bracket openConnection closeConnection $ \conn -> bracket (sendHello conn) (sendGoodbye conn) (startConversation conn)
Answer Bad! Using
bracket for opening and closing the connection is a good idea. However, using
bracket to ensure that a goodbye message is sent will significantly delay cleanup activities. If you have a network protocol which absolutely demands a goodbye message be sent before shutting down… well, you have a broken network protocol anyway, since there is no way to guarantee against:
- The process receiving a SIGKILL
- The machine dying
- The network disconnecting
Instead, this code is preferable:
bracket openConnection closeConnection $ \conn -> do sendHello conn res <- startConversation conn sendGoodbye conn return res
There are likely exceptions to this rule (no pun intended), but you should justify each such exception very strongly.
Is this a good implementation of
bracket?
bracket before after inner = mask $ \restore -> do resource <- before eresult <- try $ restore $ inner resource after resource case eresult of Left e -> throwIO (e :: SomeException) Right result -> return result
Firstly: it's always preferable to use the already written, already tested version of
bracket available in libraries! Now, let's go through this:
- It properly masks exceptions around the entire block. Good!
- The
beforeaction is run with exceptions still masked. Good! If we
restored around the
before, an async exception could sneak in immediately after
beforefinishing and binding the
resourcevalue.
- We
restoreinside of the
trybefore calling
inner. That's correct, and will not prevent proper exception safety in
bracketitself.
- We call
afterimmediately, ensuring cleanup. Good!
- Possibly bad:
afteris called without using uninterruptible masking, meaning that it's possible for an interruptible action inside
afterto prevent complete resource cleanup. On the other hand: if this is well documented, a user of this
bracketcould use
uninterruptibleMask_him/herself inside of
after.
- We rethrow the exception, meaning that it was safe for us to catch asynchronous exceptions. Good!
Overall: very good, but probably better to use
uninterruptibleMask_ on
after, which is what
safe-exceptions and
unliftio both do. Again, see the relavant Github discussion.
Racing reads
What is the output of this program?
import Control.Concurrent import Control.Concurrent.Async main :: IO () main = do chan <- newChan mapM_ (writeChan chan) [1..10 :: Int] race (readChan chan) (readChan chan) >>= print race (readChan chan) (readChan chan) >>= print race (readChan chan) (readChan chan) >>= print race (readChan chan) (readChan chan) >>= print race (readChan chan) (readChan chan) >>= print
Answer: on my machine, it's:
Left 1 Left 3 Left 5 Left 7 Left 9
However, it just as easily could have allowed some
Rights in there. It could have allowed evens in the
Lefts. And instead of skipping every other number, it's possible (due to thread scheduling) to not drop some of the numbers.
This may seem a bit far-fetched, so let's instead try something simpler:
timeout 1000000 $ readChan chan
It seems reasonable to want to block on reading a channel for a certain amount of time. However, depending on thread timing, the value may end up getting dropped on the floor. We can demonstrate that by simulating unusual thread scheduling with
threadDelay:
import Control.Concurrent import System.Timeout main :: IO () main = do chan <- newChan mapM_ (writeChan chan) [1..10 :: Int] mx <- timeout 1000000 $ do x <- readChan chan threadDelay 2000000 return x print mx readChan chan >>= print
This results in:
Nothing 2
If you actually want to have such a timeout behavior, you have to get a little bit more inventive, and once again avoid using async exceptions:
import Control.Applicative ((<|>)) import Control.Concurrent (threadDelay) import Control.Concurrent.STM import GHC.Conc (registerDelay, unsafeIOToSTM) main :: IO () main = do tchan <- newTChanIO atomically $ mapM_ (writeTChan tchan) [1..10 :: Int] delayDone <- registerDelay 1000000 let stm1 = do isDone <- readTVar delayDone check isDone return Nothing stm2 = do x <- readTChan tchan unsafeIOToSTM $ threadDelay 2000000 return $ Just x mx <- atomically $ stm1 <|> stm2 print mx atomically (readTChan tchan) >>= print
This results in the preferred output:
Nothing 1
Forked threads
Whenever possible, use the async library for forking threads. In particular, the
concurrently and
race functions, the
Concurrently data type, and their related helpers, are all the best thing to use. If you must have more complicated control flow, use the family of functions related to the
Async data type. Only use
forkIO as a last resort.
All that said: suppose we're going to use
forkIO. And let's write a program that is going to acquire some resource in a parent thread, and then needs to clean it up in the child thread. We'll add in a
threadDelay to simulate some long action.
import Control.Concurrent import Control.Exception main :: IO () main = do putStrLn "Acquire in main thread" tid <- forkIO $ (putStrLn "use in child thread" >> threadDelay maxBound) `finally` putStrLn "cleanup in child thread" killThread tid -- built on top of throwTo putStrLn "Exiting the program"
This looks like it should work. However, on my machine (this is timing-dependent!) the output is:
Acquire in main thread Exiting the program
This is because the forked thread doesn't get a chance to run the
finally call before the main thread sends an async exception with
killThread. We may think we can work around this with some masking:
import Control.Concurrent import Control.Exception main :: IO () main = do putStrLn "Acquire in main thread" tid <- forkIO $ uninterruptibleMask_ $ (putStrLn "use in child thread" >> threadDelay maxBound) `finally` putStrLn "cleanup in child thread" killThread tid -- built on top of throwTo putStrLn "Exiting the program"
However, we still have the same problem: we don't get to
uninterruptibleMask_ before
killThread runs. Instead, we need to perform our masking in the main thread, before forking, and let the masked state get inherited by the child thread:
import Control.Concurrent import Control.Exception main :: IO () main = do putStrLn "Acquire in main thread" tid <- uninterruptibleMask_ $ forkIO $ (putStrLn "use in child thread" >> threadDelay maxBound) `finally` putStrLn "cleanup in child thread" killThread tid -- built on top of throwTo putStrLn "Exiting the program"
Now our output is:
Acquire in main thread use in child thread
Followed by the program hanging due to the
threadDelay maxBound. Since we're still inside a masked state, we can't kill that thread. We've violated one of our async exception handling rules! One solution would be to write our code like this:
import Control.Concurrent import Control.Exception import System.IO main :: IO () main = do hSetBuffering stdout LineBuffering putStrLn "Acquire in main thread" tid <- uninterruptibleMask $ \restore -> forkIO $ restore (putStrLn "use in child thread" >> threadDelay maxBound) `finally` putStrLn "cleanup in child thread" killThread tid -- built on top of throwTo putStrLn "Exiting the program"
This gives the correct output and behavior:
Acquire in main thread cleanup in child thread Exiting the program
But it turns out that there's a subtle problem with using the
restore we captured from the parent thread's
uninterruptibleMask_ call: we're not actually guaranteed to be unmasking exceptions! Let's introduce the proper solution, and then see how it behaves differently. Instead of using
restore from
uninterruptibleMask, we can use the
forkIOWithUnmask function:
import Control.Concurrent import Control.Exception import System.IO main :: IO () main = do hSetBuffering stdout LineBuffering putStrLn "Acquire in main thread" tid <- uninterruptibleMask_ $ forkIOWithUnmask $ \unmask -> unmask (putStrLn "use in child thread" >> threadDelay maxBound) `finally` putStrLn "cleanup in child thread" killThread tid -- built on top of throwTo putStrLn "Exiting the program"
Small difference in the code. Let's look at another piece of code that demonstrates the difference:
import Control.Concurrent import Control.Exception foo :: IO () foo = mask $ \restore -> restore getMaskingState >>= print bar :: IO () bar = mask $ \restore -> do forkIO $ restore getMaskingState >>= print threadDelay 10000 baz :: IO () baz = mask_ $ do forkIOWithUnmask $ \unmask -> unmask getMaskingState >>= print threadDelay 10000 main :: IO () main = do putStrLn "foo" foo mask_ foo uninterruptibleMask_ foo putStrLn "\nbar" bar mask_ bar uninterruptibleMask_ bar putStrLn "\nbaz" baz mask_ baz uninterruptibleMask_ baz
We're using the
getMaskingState action to determine the masking state currently in place. Here's the output of the program:
foo Unmasked MaskedInterruptible MaskedUninterruptible bar Unmasked MaskedInterruptible MaskedUninterruptible baz Unmasked Unmasked Unmasked
Remember that the
restore function provided by
mask will restore the previous masking state. So for example, when calling
mask_ foo, the
restore inside
foo returns us to the
MaskedInterruptible state we had instituted by the original
mask_. The same logic applies to the calls to
bar.
However, with
baz, we use
forkIOWithUnmask. This
unmask action does not restore a previous masking state. Instead, it ensures that all masking is disabled. This is usually the behavior desired in the forked thread, since we want the forked thread to respond to async exceptions we send it, even if the parent thread is in a masked state.
forkIO and race
Let's implement our own version of the
race function from the
async package. This is going to be a really bad implementation for many reasons (everyone is encouraged to try and point out some of them!), but we'll focus on just one. We'll start with this:
import Control.Concurrent import Control.Exception
Now let's use this in a simple manner:
main :: IO () main = badRace (return ()) (threadDelay maxBound) >>= print
As expected, the result is:
Left ()
Now take a guess, what happens with this one?
main :: IO () main = mask_ $ badRace (return ()) (threadDelay maxBound) >>= print
Same thing. OK, one more try:
main :: IO () main = uninterruptibleMask_ $ badRace (return ()) (threadDelay maxBound) >>= print
This one deadlocks, since our
forkIO calls inside
badRace inherit the masking state of the parent thread, which prevents the
killThread call from working. Any guesses as to how we should fix this bug?
badRace :: IO a -> IO b -> IO (Either a b) badRace ioa iob = do mvar <- newEmptyMVar tida <- forkIOWithUnmask $ \u -> u ioa >>= putMVar mvar . Left tidb <- forkIOWithUnmask $ \u -> u iob >>= putMVar mvar . Right res <- takeMVar mvar killThread tida killThread tidb return res
BONUS What will be the result of running this?
main :: IO () main = uninterruptibleMask_ $ badRace (error "foo" :: IO ()) (threadDelay maxBound) >>= print
And here's a little hint at fixing it:
tida <- forkIOWithUnmask $ \u -> try (u ioa) >>= putMVar mvar . fmap Left
unsafePerformIO vs unsafeDupablePerformIO
I wanted to include a demonstration of
unsafeDupablePerformIO leading to cleanup actions not running. Unfortunately, I couldn't get any repro on my machine, and had to give up. Instead, I'll link to a GHC Trac ticket (c/o Chris Allen) which at least historically demonstrated the problem:
tl;dr: GHC's runtime will simply terminate threads evaluating a thunk if another thread finishes evaluating first, and not give a chance for cleanup actions to run. This is a great demonstration of why async exceptions are necessary if we want both external termination and proper resource handling.
Links
- The original Handling Asynchronous Exceptions in Haskell blog post
- General Haskell syllabus
- The
unliftiolibrary:
- Exception handling module:
- safe-exceptions documentation:
- Exceptions best practices
- Monad transformers talk | https://www.fpcomplete.com/blog/2018/04/async-exception-handling-haskell | CC-MAIN-2020-05 | refinedweb | 7,569 | 52.6 |
We did it again in 2016 and 2017, 2020! Check out the photos of the parties: 2015, 2016, 2017 and 2020!
One of the props that I picked up to decorate DNA Lounge at the first Cyberdelia in 2015 was an old payphone. It wasn't hooked up for that first party, but just in time for the second party, I modified it to run Linux.
(It's still here, by the way. Come play with it in person the next time you are at DNA Lounge!).
Phase One: The Keypad
Behind the buttons on the telephone keypad is the same sort of design as exists inside most computer keyboards. The obvious way to do this would be for there to be a switch for each key, attached to a pin on the chip to tell it when the switch was pressed. But for a keyboard, that would mean your chip would need 100+ inputs. So instead, the way they work is, the keyboard is wired up in a grid: there is a set of horizontal wires, and a set of vertical wires, and pressing a key makes an electrical contact between one horizontal and one vertical wire: so pressing 5 connects wires X and B. The chip figures out which keys are pressed by polling thousands of times a second: it squirts some electrons down W, and if they come back on A, then 1 is pressed. Then it squirts some electrons down X, and if they come back on C, then 6 is pressed, and so on.
So for a simple keypad like this, it's a false economy, since this chip needs 7 inputs instead of 12. But for a full-sized keyboard, you reduce the number of inputs from about 100 to about 20. But, it is what it is.
The logical electrical layout of the phone's keypad is exactly what you'd expect: it's the drawing above. The logical layout of the generic USB numeric keypad is a bit weirder.
It's like the Platonic ideal of a keyboard: you think you're typing on the thing on the left, but that's just a shadow on the wall. The real keyboard is the thing on the right.
The positive numbers line up properly, but there's no choice of mappings here that will make the "*0#" row work. So, I used the "T/*" row for that and I re-map the keys in software. (The blank cells don't send any keys at all).
First I needed to get electrical connections out of the phone's keypad. The key contacts are printed on one side of a two-sided circuit board. I was able to use the "via" holes (where one side of the board connects to the other) to connect in. That looked like this:
That chip on the board is both the keyboard decoder and a DTMF (touch-tone) generator: the output of this board is audio. So that wasn't going to do me any good.
(Fun fact: the noises that telephone buttons make are chords of two notes played simultaneously, and those notes are also arranged in a grid just like the above, low frequencies vertically and high frequencies horizontally. Other notes and chords did other things on the old phone networks: one of the more famous frequencies was 2600 Hz, and that's why both the Atari 2600 and the magazine are called that. Fun fact two: Wozniak and Jobs funded Apple Computer by selling blue boxes that let you make phone calls for free.)
The next step was to connect those new wires to the inputs on the USB keyboard chip. Easier said than done, because that keyboard had a flat plastic membrane as its input: its socket accepted a paper-thin piece of flexible plastic with contacts printed on it that are 0.5mm wide and 1mm apart on center. I wasn't able to find a breakout board that had a socket that small on it: I found some blanks, but they required you to surface-mount the socket to it (and I couldn't find the socket either). So I did it the hard way: I manufactured a connector that would fit into that socket.
Yeah, those are sewing needles.
First: let me draw your attention to the plastic shim holding those needles at 1mm spacing. Let me draw your attention to how hard it was to manufacture that thing. Though I have 0.5mm drill bits, I had to tape them to make them fit into the dremel at all without just vanishing. And even though I have a drill-press stand for my dremel, my first plan of drawing a pattern then lining up the drill and dropping it was totally not working at all. So in the end, I just eyeballed it: drop the drill; lift ever so slightly; give the plastic the slighest kick with my fingernail; drop the drill again. It only took me three tries to get it right!
Second: though those are the thinnest needles that I had in my house at the time, they weren't thin enough to fit into the socket. So I had to file them down. It was like I was making the world's tiniest shivs. I turned the dremel sideways, put a sanding wheel on it, and held the needles to it until they were flat and about 0.1mm thick. I was doing this while wearing a 20x magnification monocle, so all of this action was taking place with my face so close to the spinny bit that I had to exercise great care to avoid sanding off the tip of my nose.
When I described this to a friend he said, "So basically, you're looked like some guy I have to find to send me on a side-quest in Fallout." Pretty much.
Ok, remember that shim I was so proud of constructing? It didn't really work. It was still too hard to get every pin to make contact at the same time. So instead, I tore the top off of the jack and jammed the pins into the underside of the contacts in it, then used the tip of a needle to secure each with a drop of superglue.
Once everything was tested, I wrapped the whole mess of wires inside some Sugru (this amazing play doh-like stuff that dries into flexible rubber).
I did waste a few days trying to track down what I thought was series of bad connections that actually turned out to be: to make some keys work at all, you had to press really fucking hard. I solved this problem by shimming with a thin piece of cardboard between each key and the domes on the rubber membrane. I suspect this phone would also have had that problem when it was still a phone.
Phase Two: The Handset
Fortunately, telephone speaker and microphone design has not changed significantly since like, 1870 or something. Scoping out the four wires coming from the telephone's handset was pretty straightforward: one pair showed 150 Ω (that's the speaker); one pair showed 200 kΩ (that's the mic).
If you're like me, and do these sorts of projects at 4am without really planning ahead, you don't have, for example, a solderable TRRS plug when you need one. So you take what's lying around, and tear apart an old pre-made TRRS cable and solder a stacking header onto it. What, soldering onto wires that are just slightly thicker than a human hair? I'm getting used to it. Then a big old glob of epoxy to finish it off.
Best iPhone headset ever.
Update, 2020: What I described above is wiring the handset's speaker and microphone directly in to the pins on a TRRS jack and calling it a day. As it turns out, that doesn't really work: it was noisy and unreliable. The old-school telephone handset mics and modern mics (e.g., earbuds) are substantially different electrically, so this fell into the category of "lucky it worked at all, ever". The proper solution to this, which I did in my 2020 upgrade, was to open up the handset and replace the old speaker and mic with their modern equivalents. That also allowed me to include an audio amp to get the volume up to nightclub levels.
Phase Three: The Computer
I got a Raspberry Pi 2 B to drive it, which is probably a more powerful computer than your typical PC was at the time that this payphone was originally in service.
The Pi has an audio output, but it's shit, and it's not powerful enough to drive the speaker in this handset. But that doesn't matter since the Pi does not have an audio input, meaning I needed a different audio card anyway. I got a Cirrus Logic, which, conveniently enough, also has a TRRS headset input.
You know what this means, right? I voluntarily and of my own free will recompiled my kernel in order to get my audio card to work. I don't even know who I am any more. I don't even know what year this is. "And you did this on your home phone?"
Because of course Raspbian doesn't come with the modules pre-built. Of course it doesn't.
Anyway, after the usual jiggery-pokery, I had audio ouput via aplay and audio input via arecord and all was right with the world, except for the fact that I had hoped I would never hear the word "ALSA" again in my life. (It's "Advanced", you know. The "A" stands for "Advanced".)
So then I wrote some code: phonebooth.pl. It sits there reading characters from the keyboard (the phone keypad) and responding to them by playing MP3 files, or recording and saving new ones. Most of the MP3 files are pre-recorded (generated by the Makefile using the free online text-to-speech service at voicerss.org -- which doesn't do a fantastic job, but it's adequate). The infoline is downloaded from the DNA Lounge web site every morning and re-converted to an MP3 in the same way.
I launch it as a getty from /etc/inittab on console 0 so that the machine boots up into the Perl script:
(I write junk like this in Perl because I still just can't get past Python's whitespace thing. I know, you kids today all think that it's not a big deal, but I can't. I just can't.)
Phase Four: The Switch Hook
Now I just need to detect when the handset has been taken off hook and hung up. This part should be easy, right? After all, the Pi 2 has about a zillion readable pins on it, and unlike the Arduino, they even come with built-in software-configurable pull-up and pull-down resistors, which saves a lot of soldering.
Well, no. Not so much. This next part took me days to figure out.
If you were me, you might assume that an audio card wouldn't consume all forty pins on the Pi's GPIO header. Well, it does... sorta...
The manual is pretty unclear about whether any of the GPIO pins are intended to be available for non-soundcard-related use, but in the tables it does describe some of them as "EXP", which I interpreted as "Expansion", which I further interpreted as "you can use these."
Also, the board has an "expansion header" (sometimes called a "feature header") on it. It has 20 pins. The manual describes a header that has 13 pins. Further, the 13 pins described in the manual bear no relation to the first 13 pins, or any pins, on that 20 pin header. What the hell, Cirrus.
Scoping out its continuity shows that the expansion header corresponds to these pins on the Pi's header. This table is looking at the board with the ethernet jack on the bottom left. The header is not numbered and I won't hazard a guess as to which corner the designers of this board thought of as "up".
So, just pick a pin, like BCM 26 let's say, and go to town, right?
Well, no. Not so much.
It turns out that if you try to use any of these pins, they all behave as if they're "floating": you get random data, as if the software was unable to attach a pull-up or pull-down resistor to them. Or maybe it's not reading the pin at all but is reading something else entirely. Who knows! The pins I tried worked properly when the Cirrus card was not plugged in, but with it plugged in, they didn't function.
So we're dead in the water, right? Time to spin the wheel and buy a different audio card?
But I had an idea. Why is this card touching these pins at all? Does it really need to? So I got out the needle-nose pliers and very carefully bent one of the pins down, then plugged the audio card back in, denying it access to that pin.
Pin 40: Audio card doesn't function. Bend it back and try again.
Pin 38: Audio card doesn't function. Bend it back and try again.
Pin 36: Audio card doesn't function. Bend it back and try again.
Pin 32: WE HAVE A WINNER.
Yup, I did that. I'm not proud. Or maybe I am. I can't tell.
I suppose it's possible that by doing this I have broken some feature of this audio card, but since both playback and recording work, it's not a feature that I use, so, so what!
Except the story didn't play out in quite so straightforward a way as all that, because there were multiple failures happening at the same time, as is my way, because I am cursed.
It turned out that, on this Pi, with this version of Raspbian, reading a pin state with Python worked great:
import time
import sys
pin = 12
GPIO.setmode (GPIO.BCM)
GPIO.setup (pin, GPIO.IN, pull_up_down = GPIO.PUD_UP)
while True:
state = GPIO.input (pin)
if state != last:
if state == True:
print ('CLOSED')
else:
print ('OPEN')
last = state
sys.stdout.flush()
time.sleep(0.1)
But doing it from Perl didn't work at all. The pins always acted like they were floating:
use Device::BCM2835;
Device::BCM2835::init() || die;
Device::BCM2835::gpio_fsel (&Device::
&Device::
Device::
&Device::
my $last = -1;
while (1) {
my $state = Device::
if ($state != $last) {
print ($state ? "CLOSED\n" : "OPEN\n");
$state = $last;
}
sleep (0.1);
}
This was weird, because both of these libraries appear to be a thin wrapper on top of the underlying bcm2835 library, which is written in C. Ok, fine, let's do it in C:
#include <bcm2835.h>
#define PIN RPI_GPIO_P1_12
int main (int argc, char **argv) {
uint8_t last = ~0;
if (!bcm2835_init()) return 1;
bcm2835_gpio_fsel (PIN, BCM2835_GPIO_FSEL_INPT);
bcm2835_gpio_set_pud (PIN, BCM2835_GPIO_PUD_UP);
while (1) {
uint8_t value = bcm2835_gpio_lev (PIN);
if (value != last)
printf (value ? "CLOSED\n" : "OPEN\n");
last = value;
delay (100);
}
return 0;
}
Oh hey, the C version didn't work either! What the hell was Python doing under the covers that the Perl and C versions were not?
It turns out that they were using different pin numbering schemes! Python (and all the documentation of anything ever) use "BCP" numbers, but the C and Perl interfaces use Pi header pin numbers instead. So changing the "12" to a "32" in the C code made it work.
Except!
Here's the next monumentally moronic thing that I discovered: only root can read BPIO pins. And I don't mean "only users who have read permission on /dev/mem", I mean literally "only root". So either you have to run everything as root, or your Perl or Python program has to call out to a C helper program that is setuid, just to read the pins. (Said program being the one above, that simply loops, printing a line every time the pin state changes.)
(You might be thinking, "Well why not just run everything as root, then? So what?" If you are thinking that, then the Internet of Things, and by "Things" I mean "Moldovan Botnets", thanks you in advance for your complicity.)
Phase Five: Profit
I kind of wanted to make use of a few more pins, to do things like detect when a coin was inserted (at least to have it say "Thank you, and remember to tip your bartenders!") or to be able to ring the physical bell, but after my pin-bending hijinks on the audio card, I figured I had better quit while I was ahead. Maybe the next pin I bend blows up the board.
And here it is, the complete "payphone", before being re-inserted into its massive cast-iron chassis:
In conclusion, Hack the Planet.
Post Script: 2018
It has been two and a half years since I built and installed this payphone, and I've watched many people interact with it over the years. I am sad to report that it is... underappreciated. Nearly every night, I've watched the following drama play out:
They pick up the handset, place it to their ear, and mug for the camera.
The phone rings once, and the person PANICS and slams down the handset and runs away.
They don't even stay on the line long enough to hear the menu of options! Honestly, I don't get it. How can you be so incurious as to not wonder what was going to happen next? With that kind of attitude, how are you ever going to get sent on that side-quest?
People only record messages on it about once every few months, and half of those sound to be from accidental button-mashing. Even of those people who listen to the menu, I don't think most of them even listen long enough to realize that pressing 4 will lead to a deadpan robotic voice telling them a random "walks into a bar" joke.
There are easter eggs, too. Forever undiscovered. Nobody even tries to make it to a barrel roll.
Anyway, the fact that the payphone is appreciated almost exclusively as a selfie-accessory and is barely acknowledged as an electronic toy has sapped my enthusiasm for the other ideas I had for it, like rigging up the physical bell, or the coin slot, or adding interactive lights or a display of some kind.
It has amassed quite a nice collection of stickers, though!
Planet, Hacking Thereof:
Regarding Cyberdelia, our Hackers party that was the inspiration for this project: we'd love to continue throwing that party once a year or so, but we had to skip it in 2018 because we were unable to find a sponsor for it. The party has a lot of expensive overhead, and in 2017 we were able to cover costs by making it be someone's official RSA / B-Sides after party. That went great!
But in 2018, sadly, that company dropped out. Their lawyers told they that they can't sponsor anything that involves alcohol, so their after-party had to be held somewhere that you can't get a drink at all. (Yeah, I don't get it either.)
If you love Hackers as much as we do and you'd like to see Cyberdelia happen again, see if you can get your corporate overlords to kick in some sponsorship money the next time you're in town for a trade show or product release or something.
Eclectic events like this are what we specialize in at DNA Lounge, but while such things make great art, they rarely make money. If you'd like to help us keep doing this sort of thing, please join the DNA Lounge Patreon!
Post Post Script: 2020:
We were able to bring back Cyberdelia one more time, in 2020 for the
25th
anniversary of Hackers! And it was
epic.
I largely rebuilt the payphone's innards for that. The basic design
remained the same, except that I re-built the handset, and used different
audio hardware. A few details are on my
2020 blog.
Everything about this is spectacular.
Sweet hack! You mentioned that the keypad sends audio. Did you consider using audio tones as a possible interface rather than the usb? Would be neat to be able to box the phone :)
Thought about it, but getting realtime analysis of an audio stream sounded like exactly the sort of crap that ALSA would make impossible. (The "A" is for "Advanced", you know.) And if not "impossible" them certainly "exactly the kind of Linux crap I hate most."
Also I never got as far as figuring out how to actually energize and read that board, which also sounded (at the time) like more work than what I did. Which may have turned out to be wrong, given how hard what I did turned out to be.
You probably made the right call; analyzing audio is complicated even when you don't have to find a way to get ALSA to turn it from analog signal into data. I'm still trying to find a way to do it reliably so I can write real functional tests around the SAME encoder I wrote for shits and grins.
Dude! I was looking for a SAME encoder the other day! You wouldn't happen to be willing to share, would you?
Sure! It's on Github at aaron-em/same-encoder.
Do note, it's written in Javascript; that said, it works both in Node (probably io.js as well, though haven't tried it) and in the browser, so you can use it in whichever of those ways suits you best. And, of course, as I already mentioned, there are no functional tests yet, because I haven't been able to find or write a decoder which I can build them around. As far as I can tell, it produces correct output, but I have as yet no way to prove that with certainty to myself or anyone else.
With those caveats aside, of course you're more than welcome to it! If you find any bugs or see any opportunity for improvement, please oblige me by opening Github issues or pull requests. Thanks!
I sincerely hope this is a reference to:
Multimon-ng does DTMF decoding in real-time and can be piped to and from in shell.
You can use sox to pipe the input audio to multimon. It also supports other encoding schemes so you could do some cool tricks to access/input some other features (Morse replay from smart phone? ;))
"These days" Linux users are under the reign of terror of "Pulseaudio", and ALSA has been promoted in stature to fond, wistful memories of when things "just worked... slightly more."
It's Pulse on top of ALSA.
I elided that because it wasn't relevant for my point: PA on top of ALSA still makes me pine for the "just ALSA" days; forgetting what exactly they were like.
I remember trying to use Pulse on purpose one time, trying to get networked audio to function between machines. With 4front's paid-for OSS drivers, and ALSA as a[nother] middle layer.
Looking back, I may have had some masochistic tendencies in my youth.
I pine for the days before universally-integrated AC'97 CODECs, when I could just pay 4front to make things work for me, and the sound card (if I chose carefully!) would do any required mixing in hardware.
Also, more generally: I miss hardware that actually did stuff, including printers that inherently knew how to print words on paper.
Come on, though, printers have always been awful.
But they've gotten far, far worse.
(though admittedly, I've never written a VHS tape labeling system entirely in PostScript. So perhaps my hate is tempered differently.)
(I also keep 30-year-old TI thermal teletype around for posterity, though none of my machines currently have a serial port with which to operate it, and I tossed my expensive external Supra v.32bis modem over a decade ago. Not that it would have anyone to talk to, or a landline to plug into.)
(Ah, fuck. Is this what getting old is actually like? I thought it was supposed to be all titties and beer by now.)
Dude you are talking to someone who is currently sitting on the floor of a filthy office wondering whether this cheap assed POE dingus scorched my Pi when titties and beer are happening literally on the other side of the door. So I dunno.
Thank you for making me realize the error of my ways, however.
Exhibit A (titties, beer, not pictured; fully implied.)
Technically ass not titties, although they're also fully implied.
Any reason not to, instead of converting it to USB, leave the original keypad hardware intact, use the original DTMF generator, have the Pi listen for tones on the audio card’s mic in, and recognize the waveforms? Or would that be more processing than it could handle?
Doh! Ninja’d by Joshka. Damn, it took me half an hour to write that question?
One of your keypad layouts has two number 4 keys.
Congrats on your finished project, I can feel your pain from reading your writeup! I know that giving advice after the fact sounds cocky, but in case anyone wants to recreate your payphone, please....
For Audio: Don't buy any of these overpriced "audio shields", just cut off the headphone part from an old USB headset and reuse the usb soundcard molded either into the USB plug or that small blob between the USB plug and the headphone part. It will just work out of the box on your PC, Mac, Raspberry-Pi, Beagle-Bone...
Switch-Hook: This python library is a very ugly hack and started out as the only thing working when no kernel-drivers existed for the gpio yet. But people never stopped using this idiotic thing, even now the "slow but proper" way works beautifully from every programming language or shell script: Also it doesn't require root (for poking directly into the hardware, which is a pretty insane idea anyway) once you've chmod'ed the files. And it will be fast enough for your hook detection in any case.
Keypad: If you redo it, just invest the saved time from not having to deal with the crap mentioned above in doing the keyboard scanning yourself ;-) using the freed GPIOs you no longer have to waste on the audio shield.
I concur with all the above.
Can sysfs be added without recompiling the kernel?
Why would you want to add sysfs? Why would he need a pseudo file system.
I wouldn't want to do that if I didn't have to, especially if I were surrounded by grinning conssies looking for ideas to steal; or to paraphrase JWZ "tail lights to chase"
What are conssies?
I was simply trying to identify how long it took between jwz mentioning Linux and someone else suggesting kernel recompilation.
The bit about the audio card is insane. Note I understand why do many people hook an Arduino up to their RasbPi via serial if they want to do i/o.
I was wondering, can you get a USB sound card that would work with the pi?
That's how I have audio working on my Raspberry Pi. Without compiling any kernel modules.
Awesome, so awesome. Wish I could listen to the messages.
You recompiled your kernel to make your audio card work! Now that's getting into the spirit of the thing. I mean, most people will probably just dress up for the party or something.
Next Time Around (TM), you may find it easier to use a SIP adapter and VoIP software, rather than banging the audio and keyboard yourself. The adapters are nice in that they are (shitty) sound cards with DTMF decoding built in.
I do not know for sure, but I expect the phone keypad to want to sit between the mic and the phone line without much in the way of external components.
Yeah, a cheap ata with an fxo port might have done it. You could just wire a rj11 on to the line cord and run asterisk on the rpi2.
One alternative I've found to suid C wrappers around scripts is using sudo, allow passwordless root invocation of that specific script only
Suck it up and go with Python.
After reading this blog for the better part of a decade, I've concluded that JWZ is not one who sucks it up, in any manner.
Perhaps build a web version of this so other folks can hear the audio files?
This is awesome, you're my hero.
I'm horrified at how you did it, but I'm super impressed that you did it at all. My pay phone is still standard. It's fun when it rings, but...
Anyway, here's a couple things that you could have done and might consider for version 2:
A ringdown box makes the phone call a specific number when it goes off-hook. These are how you make inmate phones and security phones. They, and everything like this, cost way too much. But if you're insane, you can hook one up to an Asterisk box and get a full interactive voice response system. You can still run Asterisk on RasPi. However, you trade configuring ALSA with configuring Asterisk, and I don't think that's an improvement...
I would have used a USB audio card instead of the shield. This one in particular is really cheap and works really well. You're still having to mess with ALSA, though... and now you've got the problem of ALSA and multiple audio cards. Fun. But you've got all your GPIO back!
But the easiest option might be Adafruit's Audio FX boards. They turn a button press into a playing wav/mp3 file. That's it. They're perfect for something like this! Except now you've got to load it with new audio files once a week...
I'm puzzled why you would want a fancy ring-generator "ringdown box," when you could just use an FXS port (which is already designed to talk to a telephone) instead of an FXO and do the rest in software.
rad
Hey! This got posted to Hacker News, but following the link from there seems to intermittently trigger the hotlinking warning.
Hacker News is a steaming cesspool and I don't have time or desire to deal with their bandwidth demands, so I just block them.
What I tend to do with the RPi rather than using the built in IO ports (which are 3.3V, very limited in current, wired directly into the chip so easy to fry and as you note only accessible by root) I use an expansion board that breaks out i2c to IO ports that can be 5V, easy to replace if you fry them and i2c IO can be done as a non-root user (there are sample libraries for Python and C at least, don't know about perl). This also leaves the non-i2c pins available for whatever other cards you may be using.
I use this one from a small UK manufacturer, I'm sure there are similar ones more easily available in the US:
Or you can build one:
(one of the advantage I always found with Python over perl was other people can't screw up indentation).
Please tell me there's an easter egg if you play a 2600Hz tone into the mic.
And for the love of god get a USB sound card and free up all those pins.
I....wow.
It seems a shame not to hook up the coin detector. Couldn't you just multiplex it over the same pin you used for the hook switch? Build a little board to feed it 3 frequencies of square wave and no signal for your 4 states.
I would want it to reject the coin and play "keep the change, ya filthy animal."
I think I can explain the GPIO thing. There are two[*] GPIO numbering schemes in use on the Pi. There's the actual pin numbers on the Pi's 40-pin "P1" connector header, and there are the GPIO numbers as used by the Broadcom system-on-chip.
In the Python code, GPIO.setmode(GPIO.BCM) tells the library to use raw Broadcom numbering. So you get GPIO 12. However in the other code RPI_GPIO_P1_12 is "sugar" for pin number 12 on the "P1" connector, so you get a different GPIO (I think 18).
Refs:
According to the enum definition elsewhere on that second page, RPI_V2_GPIO_P1_32 = 12.
(Yay embedded systems?)
[*] I think one of the other Python libraries actually invented a third incompatible pin numbering scheme.
Oh holy crap, you're right! Changing it to 32 in the C code makes it work!
Good to hear!
I realised that might explain the other "EXP" pins on the audio breakout not working as well. The GPIO/pin mapping changed between RPi 1 & RPi 2, so if the numbers on the audio breakout are Broadcom GPIO numbers for RPi 1, they'll probably need translating to the equivalent GPIOs for RPi 2.
PS I noticed a typo in the update - s/BCP/BCM/. Normally I wouldn't be pedantic about typos but I know people will find this blog post in the future while having this exact problem, and will go off looking for the meaning of BCP.
Unfortunately, no, it doesn't explain that. Pins that worked while the audio card was detached failed to work while it was attached, and I determined which pins were "the same" by continuity, not by number.
> I voluntarily and of my own free will recompiled my kernel in order to get my audio card to work. I don't even know who I am any more. I don't even know what year this is.
History doesn't repeat itself, but it sure does rhyme ;)
I am happy/disappointed that you managed to use a version of Raspbian old enough to not involve systemd. I fear there might have been few survivors otherwise.
Of course, Limor did it better ages ago. Hers is red-boxable!
Maybe I should implement T9. Though without a display, that might be tricky.
Now I'm regretting that I didn't get a payphone with a rotary dial instead.
And implemented T9 on that.
You could have audio responses to the button presses. After each quick button press the letter of the alphabet would be pronounced (Pressing 222 would make the phone say "abc", and if the user waits, there would be a beep to signal that the system is ready for the next character. You'd need a backspace, and "read input until now" button though.
The Link says the 29th, not the 22nd.
I fail at promoting.
Rather than spending all the time making a connector, I would have just exopied some 0.1" header to the USB board, and then used wires to connect it up directly to the SOIC. That chip has quite a bit wider pitch than the header and would be pretty easy to solder up. Either that or desolder the connector and solder directly to the lands.
I think you're saying, "the pin spacing on the chip might have been the same as the spacing of the pins on a header"? I don't have it in front of me, but from looking at the photo, I think the chip's spacing was tighter than that. Almost half, maybe.
Trying to solder directly onto the round contacts on the board next to the socket was not possible, certainly not with the equipment I have. I would have gotten the solder on more than one contact and probably ended up delaminating the board trying to fix it.
All you had to do is use BOARD rather then BCM,
now it matches with the Pin Number as a Pin Number!
In the python code you forgot to initialize the variable 'last'
while True:
state = GPIO.input (pin)
if state != last:
if state == True:
print ('CLOSED')
else:
print ('OPEN')
last = state
In the perl code you swapped the two variables:,
$state = $last; should be $last = $state;
my $last = -1;
while (1) {
my $state = Device::BCM2835::gpio_lev (RPI_GPIO_P1_12);
if ($state != $last) {
print ($state ? "CLOSED\n" : "OPEN\n");
$state = $last;
}
Oops. Well, I'm not actually running that code anyway.
I'm totally with you.
The only thing that's tempting me is that I have to worry about my whitespace in perl anyway, just purely for aesthetic reasons. I'm always indented, reindenting, etc. So, I figure I can cope with python deriving some meaning from beautification. I haven't taken the python plunge yet but this is the strategy I'm planning to make my brain use. | https://www.jwz.org/blog/2016/01/my-payphone-runs-linux-now/ | CC-MAIN-2022-40 | refinedweb | 6,232 | 80.01 |
as to hand in the first solution they found?
BTW a couple of related posts of mine are Teaching, Learning and the Job Interview and Characteristics of a Good Programming Project
Here are a few more fizzbuzz-type questions: stackoverflow.com/…/alternate-fizzbuzz-questions
I've taken this challenge in the past.
Here's the link to my C++ version:
I've also done a C# one but it's not on codepad. (Curse them for not supporting .Net!)
So which approach did I take? I actually approached it from the maximum readability standpoint.
As such, I use (cringes are expected from hardcore programmers) booleans! And though I don't check for 15 specifically, the way I created the code makes 15s work out perfectly too.
In short, the Psuedo-language for this would be "Print fizz if multiple of 3, buzz if multiple of 5, the number if neither applies." – Noting that if 3 and 5 are both true, fizz and buzz both get printed!
I tend to avoid ternaries. They make the character count smaller true, but the readability usually suffers. They offer absolutely no compiler benefit.
I think that covers all the questions presented herein. And I will concede that there are many "right answers" – but which is "best" is subjective.
Mine is best, of course. Haha.
One more note. You might want to check out projecteuler.net if these kids of problems tickle your fancy. =)
Actually a site that is good for this sort of thing is Supports C#, Visual Basic and F#.
@Keith – if mod 15 – FizzBuzz elseif mod 5 Buzz elseif mod 3 Fizz else N – if two numbers are mutually prime the only numbers that have both of them as factors has their product as a factor.
@anon – mod is definitely the way to go… as a math guy that's the first thing I thought of.
I did a mathematical/binary/modular approach. Unfortunately the switch is unavoidable.
Full explanation there.
Posted too soon. Here is the better version. Done with java but via JSP as I primarily work in web development. Fewest lines of code I could figure to solve the problem. I added a new line character for readability.
for(int i=1;i<=100;i++){
String output = "";
if(i % 3 == 0) output = "Fizz";
if(i % 5 == 0) output += "Buzz";
if(output.equals("")) out.print(i+"<br>");
else out.print(output+"<br>");
}
Dim i As Integer = 1
Dim oWrite As System.IO.StreamWriter
oWrite = IO.File.CreateText("C:Tempfizzbuzz.txt")
While i <= 100
If (i Mod 3 = 0 And i Mod 5 = 0) Then
oWrite.WriteLine(CStr(i) + " – FizzBuzz")
ElseIf (i Mod 3 = 0) Then
oWrite.WriteLine(CStr(i) + " – Fizz")
ElseIf (i Mod 5 = 0) Then
oWrite.WriteLine(CStr(i) + " – Buzz")
Else
oWrite.WriteLine(CStr(i))
End If
i = i + 1
End While
oWrite.Close()
This is based on a drinking game called Bizz-Buzz where you drank if you didn't get the correct numbers as each in the group counted from 1 to 100. I lost a lot. But, congrats on the ones who posted the code already!
If you work where I do, you set up a web service that will return the string to print given the integer input. duh.
I am jaded.
FizzBuzz must be a hot topic for some reason, I saw it a few days ago on a web-design site. Over the years I have seen many plausible responses which actually helps me learn how to do things in other languages
The one that I had not seen (at least at the time) was doing it in SQL, so I developed this version. I did get reminded that SQL chooses the first match in a case statement and ignores the rest, so the 15 was the first check. I also do know that I could use string concatenation to build it, but it does not look as graceful.
DECLARE @Count INT
SET @Count = 1
DECLARE @FzBz VARCHAR(8)
WHILE @Count <= 100 BEGIN
SELECT @FzBz = CASE
WHEN (@Count % 15 = 0) THEN 'FizzBuzz'
WHEN (@Count % 3 = 0) THEN 'Fizz'
WHEN (@Count % 5 = 0) THEN 'Buzz'
ELSE CAST(@Count AS VARCHAR(8))
END
PRINT @FzBz
SET @Count = @Count + 1
END
fizzbuzz.pastebin.com/ytRhRLAq
I love the ternaries in scripted languages, and I don't feel readability suffers because their syntax is so unique that one should know what they're looking at if they've ever encountered them before. I still think this could probably be condensed further — just have this feeling at the back of my neck that I'm missing something — but that almost certainly would impact readability.
I'll also note that while I used "n" between integers in my sample, that was mostly because there's no practical application where this would need to be made pretty, and newline characters are two less code characters than a "<br>". 😛
(My apologies if this is a repost. I just tried to submit, and there was no acknowledgement once the page loaded, nor was my comment listed.)
C# Code
for (int i = 1; i <= 100; i++)
{
string s = "";
s = (i % 3 == 0) ? ((i%5==0)?"FizzBuzz":"Fizz") : ((i%5==0)?"Buzz":i.ToString()) ;
Console.WriteLine(s);
}
Console.ReadLine();
C# Code- iam dumb, reduced one more line 🙂
for (int i = 1; i <= 100; i++)
{
string s = (i % 3 == 0) ? ((i%5==0)?"FizzBuzz":"Fizz") : ((i%5==0)?"Buzz":i.ToString()) ;
Console.WriteLine(s);
}
Console.ReadLine();
C# Code : Man iam the dumbest c# programmer around
for (int i = 1; i <= 100; i++)
{
Console.WriteLine((i % 3 == 0) ? ((i%5==0)?"FizzBuzz":"Fizz") : ((i%5==0)?"Buzz":i.ToString()));
}
Console.ReadLine();
Explanation of my code
Since 3 comes before 5, check for 3 first. If 3 works, then check for 5 too. If its both 3 & 5 it is "FizzBuzz" else only "Fizz". If 3 does not work, check for 5, if 5 works it is "Buzz" else just return the number as a string. All the above is in one line. Both AND and OR condition are handled in this one line. Should not go wrong i guess, but i may be dumb.
@Aditya: ancient god of readability should curse you
Modulo was obviously my first thought, but – what about using the Sieve of Eratosthenes approach?
Pseudocode:
threes := bit[100]
fives := bit[100]
for (i := 3; i <= 100; i += 3)
threes[i-1] = true;
for (i := 5; i <= 100; i += 5)
fives[i-1] = true;
for (i := 1; i <= 100; ++i)
if (threes[i-1] || fives[i-1])
if (threes[i-1])
print "Fizz"
if (fives[i-1])
print "Buzz"
else
print i
Benefit: No division is necessary! Division is much more expensive for your CPU than addition. So, depending on the context that this program is needed for, this algorithm may be much more appropriate. It sacrifices space complexity for the sake of time complexity.
If you really had to, you could optimize further by replacing the "i-1" occurrences with "i" and letting the loops start and end earlier; then you would just need an "i+1" when you print out the number. But that would require commenting your code. 🙂
@anon
You said:
… if two numbers are mutually prime the only numbers that have both of them as factors has their product as a factor.
I know this. Yet the elegance of my code is that I don't have to check for that. I save a check by simply not doing the endl till after all the checks are done.
in VB.Net:
Imports System.Text
Public Class BizzBuzz
Const OUTPUT_START As Integer = 1
Const OUTPUT_END As Integer = 100
Private Sub cmdOutput_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdOutput.Click
Dim nums As New Dictionary(Of Integer, String)
Dim output As New StringBuilder
Dim line As New StringBuilder
nums.Add(3, "bizz")
nums.Add(5, "buzz")
For x = OUTPUT_START To OUTPUT_END
line.Clear()
For Each key As Integer In nums.Keys
If x Mod key = 0 Then
line.Append(nums(key))
End If
If line.Length < 1 Then
line.Append(x)
End If
line.Append(vbNewLine)
output.Append(line)
rtbOutput.Text = output.ToString()
End Sub
End Class
This (while quick and dirty and not configurable outside of code) lends itself to being "upgraded" in the future and configured to work with different numbers and different strings. Ideally it would read in from a database or an xml file, or at the very least the user would have the option of entering in the numbers and the strings to be used. It's an improvement over other solutions presented here because it requires less modification when the client changes the specifications from "multiples of 3 and 5 should be replaced with 'bizz' and 'buzz' respectively" to "multiples of 1,2,3,5,7,8, and 15 should be replaces with 'jazz', 'razz', 'bizz', 'buzz', 'fizz', 'fuzz', and 'wtf' respectively".
Hi Alfred,
I consider it is an interesting problem and "some" mathematics are necessary to investigate, maybe resolve it.
A faster solution will not use "%" or "Mod" modulo operator.
Should we proceed?
Thank you,
MirceaMirea@yahoo.com
for(int num=0;num<=100;num++)
{
if(num%3==0)
{
if(num%5==0)
{System.out.println("FizzBuzz"+num);}
else
System.out.println("Fizz"+num);
}
else if(num%5==0)
{
System.out.println("Buzz"+num);
}
else
{
System.out.println(num);
}
}
Solutions that do not use modulus would be most interesting. As would discussion about why they are faster. In fact a lot more discussion of *why* options are faster or otherwise "better" are encouraged. That is in some ways more useful than just presenting a solution.
No mods ?? Or Division!!
OK.
Faster? Sure. By less than a human would ever notice. ^_^ – a solution in C without using Mod or %
FWIW
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace FIzzBuzz
{
class Program
{
static void Main(string[] args)
{
var output = Enumerable.Range(1,100)
.Select(i => ( (i % 3 == 0 || i % 5 == 0) ?
(((i%3 == 0) ? "Fizz" : String.Empty) +
((i%5 == 0) ? "Buzz" : String.Empty))
: i.ToString()));
foreach (var s in output)
{
Console.Write(s + " ");
}
Console.ReadLine();
}
}
}
I could write this in any of the languages I know, but since I'm a VB.Net person by nature, I'll do that.
For i As Integer = 1 To 100
If i Mod 3 = 0 Then
Console.WriteLine("Fizz")
ElseIf i Mod 5 = 0 Then
Console.WriteLine("FizzBuzz")
Else
Console.WriteLine(i)
End If
Loop
Not quite David. Some cases are clearly missing.
David (and anyone else who may need an example)
Your output for the first 15 numbers should look like this:
01 1
02 2
03 Fizz
04 4
05 Buzz
06 Fizz
07 7
08 8
09 Fizz
10 Buzz
11 11
12 Fizz
13 13
14 14
15 FizzBuzz
After that, the "Fizz" and "Buzz" pattern repeats over and over just with different numbers in between.
A simple list of pre-prepared answers is by far the easiest to write, to verify, to read &c.
If this were for a safety-critical application, this would be the only approach that I would accept.
Would not the fastest solution that meets the brief (and only take a single line) be:
Print("1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz 16 etc")
The print statement method is fast. It is only as reliable as the typist though and highly prone to cut/paste errors. And then there is the matter of scale. How easy is it to modify the routine to handle the numbers between 1 and 1,000,000,000? Or to use some arbitrary starting and ending points? Good coders think about expandability and maintainability.
Some of the languages (including C) allow you to write the most unreadable code only understood by the compiler and the writer. If I saw something like this on the job (yes, production code),
string s = (i % 3 == 0) ? ((i%5==0)?"FizzBuzz":"Fizz") : ((i%5==0)?"Buzz":i.ToString()) ;
I would reject it in the code review. Yes, it is quick, but it is not teaching the student to do a professional job. Will this person be able to maintain it in six months? Will someone else understand it? How long will it take someone else to understand it. In business, time is money. If a programmer needs to take extra time to understand this type of code (assuming that the whole program is written like this), then your business is wasting valuable time and money. Something like
String output = "";
if(i % 3 == 0) output = "Fizz";
if(i % 5 == 0) output += "Buzz";
if(output.equals("")) out.print(i+"<br>");
else out.print(output+"<br>");
As a teacher preparing students for their professional life as software engineers, it is most important to instill the idea that code readability is important.
Was playing a bit with C++ syntax and came up with this…
#include <iostream>
#include <iomanip>
bool setFalse(bool& var)
{
var = false;
return true;
}
int main()
{
bool printNum;
for(int i = 1; i < 101; ++i)
{
printNum = true;
std::cout<<std::setw(3)<<std::setfill('0')<<i<<std::setw(0)<<" : ";
std::cout<<(i%3==0?(setFalse(printNum)?"Fizz":""):"")<<(i%5==0?(setFalse(printNum)?"Buzz":""):"");
printNum?std::cout<<i:std::cout;
std::cout<<std::endl;
}
return 0;
}
If any one have a problem about asp.net than told me i solve it send me your problem on katarmalvk@gmail.com
A simple solution without mod.
int main()
{
int counter3=0;
int counter5=0;
int flag=0;
int i=0;
for(i=1;i<=100;i++)
{
flag=1;
printf("%d ",i);
if(++counter3==3) {counter3=0; printf("Fizz"); flag=0;}
if(++counter5==5) {counter5=0; printf("Buzz"); flag=0;}
if(flag) printf("%d",i);
printf("n");
}
}
My Python answer
g = lambda(i):(i%3==0 and i%5==0 and 'FizzBuzz') or (i%3==0 and 'Fizz') or (i%5==0 and 'Buzz') or i
for i in range(1,101):
print g(i)
Although it's not the most efficient way (it could be done with modulo), here is my solution. I've allowed expansion to an arbitrary number of search terms, as most solutions already presented are hard-coded to 3 and 5, and would make it difficult or impractical to add, for instance, "Woo" on every instance of 2 (making 30, 60 and 90 "WooFizzBuzz").
int number, i;
int[] count;
string output;
int [] multipler = {3, 5}; // add numbers here
string[] words = {"Fizz", "Buzz"}; // corresponding words
count = new int[multipler.Length]; // use a count rather than modulo
for (i = 0; i < count.Length; i++)
count[i] = 1;
for(number = 1; number <= 100; number++)
{
output = "";
for (i = 0; i < multipler.Length; i++)
{
if (count[i] == multipler[i])
{
output += words[i];
count[i] = 0;
}
count[i]++;
}
if (output == "")
output = number.ToString();
Console.WriteLine(output);
}
Console.ReadLine();
Enumerable.Range( 1, 100 ).Select(
i =>
i % 15 == 0 ? "FizzBuzzrn" : i % 3 == 0 ? "Fizzrn" : i% 5 == 0 ? "Buzzrn" : String.Empty
).ToList().ForEach(
Console.Write
);
Console.ReadLine();
This is what happens when you don't read the instructions properly 🙂 Now prints the number if it's not FizzBuzz/Fizz/Buzz:
Enumerable.Range( 1, 100 ).Select(
i =>
i % 15 == 0 ? "FizzBuzz" : i % 3 == 0 ? "Fizz" : i % 5 == 0 ? "Buzz" : i.ToString()
).ToList().ForEach(
Console.WriteLine
);
Console.ReadLine();
…and the fixed aggregate version:
Console.Write(
Enumerable.Range( 1, 100 ).Aggregate(
String.Empty, ( current, i ) =>
current + ( i % 15 == 0 ? "FizzBuzz" : i % 3 == 0 ? "Fizz" : i % 5 == 0 ? "Buzz" : i.ToString() ) + "rn"
)
);
Console.ReadLine();
Extensible version (similar to Adam's but more LINQy):
var words = new Dictionary<int, string> {{3, "Fizz"}, {5, "Buzz"}};
Enumerable.Range( 1, 100 ).Select(
i =>
words.Any( wordPair => i % wordPair.Key == 0 ) ?
words.Where( wordPair => i % wordPair.Key == 0 ).Aggregate(
String.Empty, ( current, wordPair ) =>
current + wordPair.Value
)
:
i.ToString()
).ToList().ForEach( Console.WriteLine );
Console.ReadLine();
Would the students get any credit for pointing out that there is an infinite number of "the numbers from 1 to 100"?
A good question. The proper answer is probably to specify Integers between 1 and 100. 🙂
I would say that it goes without saying that unless specified otherwise, if you're given requirements in integers, then the expected results is also specified in integers.
The ability to understand natural language is import to being a good programmer. If for some reason there was confusion (such as a requirement that states "measure all objects between 1 foot and 5 feet in length and store them in this database" – which would cause confusion over what to do with partial measurements) then one would ask questions in order to understand the requirements more fully.
I've had the unfortunate experience of working with contractors that took the approach of pointing out inane trivialities with language – such as "but there are an infinite number of numbers between X and Y" and I will not hire them back. Such people are not only unproductive, but they excel at creating rabbit holes and then falling in them.
That's not to say if someone was legitimately confused over the requirements that I wouldn't answer clarification questions, but these people were not confused. They simply have the unfortunate habit of trying to look important or highly educated, or something. I'm never quite sure. But they come across as difficult to work with at best. Jerks at worst.
It probably goes without saying, but if I gave this question in interview and a candidate said "there are an infinite number of numbers between 1 and 100" I wouldn't hire them. That would be an immediate disqualifier. If on the other hand, they said "integers only?" then I'd confirm integers only and wouldn't think any less of them.
Ability to see what is missing in a specification can be an important quality. Interestingly the formation of this problem has a missing requirement. The first sentence should read "Write a program that prints the numbers from 1 to 100 in order". The words 'in order' are missing.
// Sans strong-typing…
for( var i = 1: i <= 100; i++){
var multipleOf3 = (i % 3) == 0;
var multipleOf5 = (i % 5) == 0;
var t = multipleOf3 ?
( multipleOf5 ? "fizzbuzz" : "fizz")
: ( multipleOf5 ? "buzz" : i);
print( t);
}
/* Advantages:
* Evaluate i % 3 and i % 5 only once for each i
* It's kind-of readable if you are ok with ternaries
* The coercion of i to a string is here left to the language, but inexpensive algorithms for this are a commodity
* To implement with stronger typing, this coercion would need to be explicitly performed or delegated
*/ | https://blogs.msdn.microsoft.com/alfredth/2011/02/24/fizzbuzza-programming-question/ | CC-MAIN-2017-22 | refinedweb | 3,107 | 65.73 |
Basic Webscraping Script in Python | Requests | BeautifulSoup | ArgParse
Sold Gig ($35)
This is the gig description I offered on my profile to get my first gig:
An email marketing company hired me to write a Python script that satisfies the following requirements.
Requirements
- What is the input? (file, file type, email, text,…) File with list of email addresses (one per line)
- What is the output? (file, file type, text, csv, …) File with all email addresses that are from a disposable email provider:
- Where does the input come from? (user input from the console, specific path,…) How should the input be processed? Where should the output go to? (console, file,…) File to File
- What should the script do if the input contains errors or is incomplete? Ignore line
Code
I recorded a video where I go over the code I developed:
Here’s the code I developed to filter email addresses from spam email providers and clean the email list from fake email addresses.
import requests import sys import argparse from bs4 import BeautifulSoup """ Input: Text file containing email addresses, one address per line Output: A file containing all email address from the input file whose domain was found in the file under the URL """ __author__ = 'lukasrieger' # constant default settings URL = '' PATH_DOMAINS_LOCAL = 'disposable_domains.txt' DEFAULT_INPUT = 'emails.txt' DEFAULT_OUTPUT = 'filtered_emails.txt' def refresh_domains_file(): """ This method gets the disposable domains list from the git repo as html and scrapes it. Finally all domains are written to a file. """ html = requests.get(URL).content soup = BeautifulSoup(html, features="html.parser") tds = soup.findAll('td', class_='js-file-line') domains = [td.text + 'n' for td in tds] with open(PATH_DOMAINS_LOCAL, 'w') as file: file.writelines(domains) print(f'Refreshed disposable domains file under path {PATH_DOMAINS_LOCAL}') def get_disposable_domains(refresh=False): """ This method loads the entries from the disposable domains file into a list and returns the list. If the parameter refresh=True, the file is refreshed with the domains given in the git repo. """ if refresh: # load data from git repo refresh_domains_file() domains = None with open(PATH_DOMAINS_LOCAL, 'r') as file: domains = file.readlines() # remove linebreaks return [domain[:-1] for domain in domains] def check_mails(in_path, out_path, refresh=False): """ Loads the list of disposable domains and checks each address from the input file for those domains. Only if the list of disposable domains contains the email's domain, the email address will be added to the outfile. """ disposable_domains = get_disposable_domains(refresh=refresh) count = 0 print(disposable_domains) with open(in_path, 'r') as in_file, open(out_path, 'w') as out_file: for email in in_file: try: prefix, suffix = email.split('@') #print(prefix, suffix, '|') except: print(f'Invalid email address: {email}') continue # remove blanks around the suffix if suffix.strip() in disposable_domains: out_file.write(email) count += 1 return count if __name__ == '__main__': print('Filtering emails...') parser = argparse.ArgumentParser(description='Filter email addresses by disposable domains.') parser.add_argument('-i', type=str, nargs='?', help='Path of input file with the email addresses.') parser.add_argument('-o', type=str, nargs='?', help='Path where the output will be put.') parser.add_argument('-r', action='store_true', help='Refresh local copy of the disposable domains file.') args = parser.parse_args() path_input = args.i if args.i else DEFAULT_INPUT path_output = args.o if args.o else DEFAULT_OUTPUT refresh = args.r try: mails_count = check_mails(path_input, path_output, refresh) print(f'Copied {mails_count} email addresses to the output file.') print('Done.') except: print(f'Sorry, an unexpected error ({sys.exc_info()[1]}) occurred!nCall filtermails.py -h for help.')
You can run the code with this simple command:
$ python filtermails.py -i emails.txt -o fakeEmails.txt -r
The code is stored in a file named
filtermails.py. The first argument
emails.txt is the file of email addresses, one email address per line. The second argument is
fakeEmail.txt which is the output file where all the fake emails are stored.! | https://www.coodingdessign.com/python/python-freelancing-my-first-fiverr-gig-and-how-i-solved-it/ | CC-MAIN-2020-50 | refinedweb | 635 | 51.24 |
all workflows in a submission done, but submission not marked done and data-model not updated
I have a workflow c94ac5a8-f333-40dd-8f06-518a1f2a1514 that finished about 49 min ago
The workflow is the only one in the submission deed0065-ab63-47ec-9398-b2f30cd32cfa
Since the workflow is done and it's the only one in the submission, should the submission be marked done? and the "Done" status reflected in the monitor tab?
This is also preventing the data-tab from being updated with the results from the workflow
Answers
I'm seeing the same issue with multiple submissions, including submissions with multiple workflows. For example, the 32 workflows in submission b2cb829d-1cdc-49fc-b9b9-23a11fee1cd0 all finished over an hour ago, but are still marked as Submitted and haven't written back to the data model.
@francois_a I started observing this issue today in the afternoon. I have about 8900 submissions. Of the 8900 submissions, based on finding workflow output files in the bucket, over 8800 of the submissions are done, but in the monitor tab, over 1000 of the submissions are marked "Submitted"
Please share your workspaces with [email protected] I'll look into this meanwhile.
Done. If you look at the last few submissions in the realign_0917 workspace, you'll see that submissions 8f5462df-41f2-46b8-8316-ee89abaf9bbd and 2668cedb-4190-4e26-a2c3-7663adea7778 have this issue; in the comparisons_0917 workspace, see b2cb829d-1cdc-49fc-b9b9-23a11fee1cd0. Please note that I'm about to patch the attributes in the latter through the API.
We bounced Rawls, so these should clear up. It looks like it was a combination of two things:
Thanks for the rapid fix!
The large majority of my submissions contained multiple workflows, and I think @esalinas's as well.
@francois_a did your data model get updated? Mine did not.
For example a submission with data entity fffb519f-6aea-42bc-be7e-b314ba9d04c7
but its row in the data model did not get updated
I only observed a single data entity get updated:
@esalinas what's the submission and workflow associated with that entity that didn't write back? Can you share the workspace with [email protected] ?
I think I am having a similar problem - successful runs are not populating the data table in one of my workspaces:
The outputs are listed under "Outputs" in the Monitor tab, however.
Kate
@hussein I gave the submission ID and workflow ID in the initial post of this thread.
The workspace name and namespace are in the screenshots.
I've just shared the workspace broad-firecloud-testing/hg38_PoN_Creation_copy with "[email protected]" as a READER
Please note that the workspace is under TCGA-dbGaP-Authorized authorization domain.
-eddie
I wanted to update this thread to let you know our developers are still working on it. Currently we need another reproduction case, so if you have any new instances of this issue, please let us know.
You can track the status of this fix here. | https://gatkforums.broadinstitute.org/firecloud/discussion/comment/42666 | CC-MAIN-2019-43 | refinedweb | 495 | 60.04 |
Lab: ADC + PWM
Objective
Improve an ADC driver, and use an existing PWM driver to design and implement an embedded application, which uses RTOS queues to communicate between tasks.
This lab will utilize:
- ADC Driver
- You will improve the driver functionality
- You will use a potentiometer that controls the analog voltage feeding into an analog pin of your microcontroller
- PWM Driver
- You will use an existing PWM Driver to control a GPIO
- An led brightness will be controlled, or you can create multiple colors using an RGB LED
- FreeRTOS Tasks
- You will use FreeRTOS queues
Assignment
Preparation:
Before you start the assignment, please read the following in your LPC User manual (UM10562.PDF)
- Chapter 7: I/O configuration
- Chapter 32: ADC
Part 0: Use PWM1 driver to control a PWM output pin
- Re-use the PWM driver
- Study the
pwm1.hand
pwm1.cfiles under
l3_driversdirectory
- Locate the pins that the PWM peripheral can control at
Table 84: FUNC values and pin functions
- These are labeled as
PWM1[x]where
PWM1is the peripheral, and
[x]is a channel
- So
PWM1[2]means PWM1, channel 2
- Now find which of these channels are available as a free pin on your SJ2 board and connect the RGB led
- Set the
FUNCof the pin to use this GPIO as a PWM output
- Initialize and use the PWM-1 driver
- Initialize the PWM1 driver at a frequency of your choice (greater than 30Hz for human eyes)
- Set the duty cycle and let the hardware do its job :)
- You are finished with Part 0 if you can demonstrate control over an LED's brightness using the HW based PWM method
#include "pwm1.h" #include "FreeRTOS.h" #include "task.h" void pwm_task(void *p) { pwm1__init_single_edge(1000); // Locate a GPIO pin that a PWM channel will control // NOTE You can use gpio__construct_with_function() API from gpio.h // TODO Write this function yourself pin_configure_pwm_channel_as_io_pin(); // We only need to set PWM configuration once, and the HW will drive // the GPIO at 1000Hz, and control set its duty cycle to 50% pwm1__set_duty_cycle(PWM1__2_0, 50); // Continue to vary the duty cycle in the loop uint8_t percent = 0; while (1) { pwm1__set_duty_cycle(PWM1__2_0, percent); if (++percent > 100) { percent = 0; } vTaskDelay(100); } } void main(void) { xTaskCreate(pwm_task, ...); vTaskStartScheduler(); }
Part 1: Alter the ADC driver to enable
Burst Mode
- Study
adc.hand
adc.cfiles in
l3_driversdirectory and correlate the code with the ADC peripheral by reading the LPC User Manual.
- Do not skim over the driver, make sure you fully understand it.
- Implement a new function called
adc__enable_burst_mode()which will set the relevant bits in Control Register (CR) to enable burst mode.
- Identify a pin on the SJ2 board that is an ADC channel going into your ADC peripheral.
- Reference the I/O pin map section in
Table 84,85,86: FUNC values and pin functions
- Connect a potentiometer to one of the ADC pins available on SJ2 board. Use the ADC driver and implement a simple task to decode the potentiometer values and print them. Values printed should range from 0-4095 for different positions of the potentiometer.
Note:
- The existing ADC driver is designed to work for non-burst mode
- You will need to write a routine that reads data while the ADC is in burst mode
- You will also create
adc__get_channel_reading_with_burst_mode()that can return an ADC channel reading
- Hint: You will need to set the right bits in CR register to enable burst-mode
#include "adc.h" #include "FreeRTOS.h" #include "task.h" void adc_task(void *p) { adc__initialize(); // TODO This is the function you need to add to adc.h // You can configure burst mode for just the channel you are using adc__enable_burst_mode(); // Configure a pin, such as P1.31 with FUNC 011 to route this pin as ADC channel 5 // You can use gpio__construct_with_function() API from gpio.h pin_configure_adc_channel_as_io_pin(); // TODO You need to write this function while (1) { // Get the ADC reading using a new routine you created to read an ADC burst reading // TODO: You need to write the implementation of this function const uint16_t adc_value = adc__get_channel_reading_with_burst_mode(ADC__CHANNEL_2); vTaskDelay(100); } } void main(void) { xTaskCreate(adc_task, ...); vTaskStartScheduler(); }
Part 2: Use FreeRTOS Queues to communicate between tasks
- Read this chapter to understand how FreeRTOS queues work
- Send data from the
adc_taskto the RTOS queue
- Receive data from the queue in the
pwm_task
#include "adc.h" #include "FreeRTOS.h" #include "task.h" #include "queue.h" // This is the queue handle we will need for the xQueue Send/Receive API static QueueHandle_t adc_to_pwm_task_queue; void adc_task(void *p) { // NOTE: Reuse the code from Part 1 int adc_reading = 0; // Note that this 'adc_reading' is not the same variable as the one from adc_task while (1) { // Implement code to send potentiometer value on the queue // a) read ADC input to 'int adc_reading' // b) Send to queue: xQueueSend(adc_to_pwm_task_queue, &adc_reading, 0); vTaskDelay(100); } } void pwm_task(void *p) { // NOTE: Reuse the code from Part 0 int adc_reading = 0; while (1) { // Implement code to receive potentiometer value from queue if (xQueueReceive(adc_to_pwm_task_queue, &adc_reading, 100)) { } // We do not need task delay because our queue API will put task to sleep when there is no data in the queue // vTaskDelay(100); } } void main(void) { // Queue will only hold 1 integer adc_to_pwm_task_queue = xQueueCreate(1, sizeof(int)); xTaskCreate(adc_task, ...); xTaskCreate(pwm_task, ...); vTaskStartScheduler(); }
Part 3: Allow the Potentiometer to control the RGB LED
At this point, you should have the following structure in place:
- ADC task is reading the potentiometer ADC channel, and sending its values over to a queue
- PWM task is reading from the queue
Your next step is:
- PWM task should read the ADC queue value, and control the an LED
Final Requirements
Minimal requirement is to use a single potentiometer, and vary the light output of an LED using a PWM. For extra credit, you may use 3 PWM pins to control an RGB led and create color combinations using a single potentiometer.
- Make sure your Part 3 requirements are completed
pwm_taskshould print the values of MR0, and the match register used to alter the PWM LEDs
- For example, MR1 may be used to control P2.0, so you will print MR0, and MR1
- Use memory mapped
LPC_PWMregisters from
lpc40xx.h
adc_taskshould convert the digital value to a voltage value (such as 1.653 volts) and print it out to the serial console
- Remember that your VREF for ADC is 3.3, and you can use ratio to find the voltage value
adc_voltage / 3.3 = adc_reading / 4095 | http://books.socialledge.com/books/embedded-drivers-real-time-operating-systems/page/lab-adc-pwm | CC-MAIN-2020-40 | refinedweb | 1,074 | 51.52 |
#include <db.h>
int DB->get(DB *db, DB_TXN *txnid, DBT *key, DBT *data, u_int32_t flags); int DB->pget(DB *db, DB_TXN *txnid, DBT *key, DBT *pkey, DBT *data, u_int32_t flags);
The DB->get function retrieves key/data pairs from the database. The addressursor->c_get for details.
When called on a database that has been made into a secondary index using the DB->associate function, the DB->get and DB->pget functions return the key from the secondary index and the data item from the primary database. In addition, the DB->pget function returns the key from the primary database. In databases that are not secondary indices, the DB->pget interface will always fail and return EINVAL.
If the operation is to be transaction-protected, the txnid parameter is a transaction handle returned from_GET_BOTH flag with the DB->get version of this interface and a secondary index handle.
The data field of the specified key must be a pointer to a logical record number (that is, a db_recno_t). This record number determines the record to be retrieved.
For_MULTIPLE flag may only be used alone, or with the DB_GET_BOTH and DB_SET_RECNO options. The DB_MULTIPLE flag may not be used when accessing databases made into secondary indices using the DB->associate function.
See DB_MULTIPLE_INIT for more information.
Because the DB->get interface will not hold locks across Berkeley DB interface calls in non-transactional environments, the DB_RMW flag to the DB->get call is meaningful only in the presence of transactions.
If the database is a Queue or Recno database and the specified key exists, but was never explicitly created by the application or was later deleted, the DB->get function returns DB_KEYEMPTY.
Otherwise, if the specified key is not in the database, the DB->get function returns DB_NOTFOUND.
Otherwise, the DB->get function returns a non-zero error value on failure and 0 on success.
The DB->get function may fail and return a non-zero error for the following conditions:
A record number of 0 was specified.
The DB_THREAD flag was specified to the DB->open function and none of the DB_DBT_MALLOC, DB_DBT_REALLOC or DB_DBT_USERMEM flags were set in the DBT.
The DB->pget interface was called with a DB handle that does not refer to a secondary index.
The DB->get function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->get function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way. | http://doc.gnu-darwin.org/api_c/db_get.html | CC-MAIN-2013-20 | refinedweb | 433 | 59.43 |
Bug #7564
r38175 introduces incompatibility
Description
r38175 introduces incompatibility with 1.9.3. Before r38175, when looking for dump, Marshal would not call method_missing. Now marshal calls methodmissing when trying to dump.
The following example exits with no error on 1.9.3, but on trunk it raises an exception (_dump() must return string (TypeError)):
class TR
def initialize calls = []
@calls = calls
end
def method_missing name, *args
@calls << [name, args]
end
end
Marshal.dump TR.new
I've attached a test case.
Related issues
Associated revisions
History
#1
Updated by Koichi Sasada over 2 years ago
- Category set to core
- Assignee set to Yusuke Endoh
- Target version set to 2.0.0
mame-san, how about this ticket?
#2
Updated by Aaron Patterson over 2 years ago
Bump. /cc nobu
#3
Updated by Ryan Davis over 2 years ago
I'm getting bit by this in my multi-version CI on Flay and any other project that uses the sexp gem and calls my deep_clone:
def deep_clone Marshal.load(Marshal.dump(self)) end
#4
Updated by Usaku NAKAMURA over 2 years ago
- Status changed from Open to Assigned
#5
Updated by Nobuyoshi Nakada over 2 years ago
- Status changed from Assigned to Rejected
If
method_missing does not deal with a method call, it should raise
NoMethodError.
def method_missing name, *args @calls << [name, args] super end
#6 Updated by Anonymous over 2 years ago
On Sun, Dec 23, 2012 at 10:01:33AM +0900, nobu (Nobuyoshi Nakada) wrote:
Issue #7564 has been updated by nobu (Nobuyoshi Nakada).
Status changed from Assigned to Rejected
If
method_missingdoes not deal with a method call, it should raise
NoMethodError.
def method_missing name, *args @calls << [name, args] super end
Before this changeset,
method_missing did deal with all method
calls. That's why this is not backwards compatible.
Aaron Patterson
#7
Updated by Ryan Davis over 2 years ago
I still think this is a bug, as shown by needing a
respond_to? that does nothing more than call super:
class Sexp < Array def inspect "s(#{map(&:inspect).join ', '})" end def respond_to? meth super end if ENV["WHY_DO_I_NEED_THIS"] def method_missing meth, delete = false raise "shouldn't be here: #{meth.inspect}" end end def s *args Sexp.new args end p Marshal.load Marshal.dump s(1, 2, 3) puts # % WHY_DO_I_NEED_THIS=1 ruby20 trunk_bug.rb && ruby20 trunk_bug.rb # s(1, 2, 3) # # trunk_bug.rb:11:in `method_missing': shouldn't be here: :marshal_dump (RuntimeError) # from trunk_bug.rb:19:in `dump' # from trunk_bug.rb:19:in `<main>'
#8
Updated by Ryan Davis over 2 years ago
- Status changed from Rejected to Open
No, really. This is a bug that needs more eyeballs.
A
respond_to? with just a super should be equivalent to no code at all.
Can we get Matz and Mame to weigh in?
#9
Updated by Nobuyoshi Nakada over 2 years ago
Anonymous wrote:
Before this changeset,
method_missing*did*deal with all method
calls.
No, previously
respond_to? was called so
method_missing did not get called.
That's why this is not backwards compatible.
It depended on the internal behavior too much.
And the bug that overriding
method_missing without
respond_to_missing? has been revealed.
Just like overriding
hash without
eql?.
#10
Updated by Ryan Davis over 2 years ago
This seems highly inconsistent to me. Specifically, MM + RT is the only working solution, but RT implementation is entirely meaningless. It doesn't make sense to me that I need a method table entry that does nothing but super. That should be the same as not existing in the method table.
Notes:
class Sexp < Array def inspect "s(#{map(&:inspect).join ', '})" end def method_missing meth, delete = false x = nil return x if x = find { |o| Sexp === o && o.first == meth } raise "shouldn't be here: #{inspect}.#{meth}" end if ENV["MM"] def respond_to? meth, private = false p :respond_to? => meth if ENV["V"] super end if ENV["RT"] def respond_to_missing? meth, private = false p :respond_to_missing? => meth if ENV["V"] super end if ENV["RTM"] alias respond_to_missing? respond_to? if ENV["ARTM"] end def s *args Sexp.new args end END { puts } p Marshal.load Marshal.dump s(1, 2, 3) p s(1, 2, s(:blah)).blah
#11
Updated by Koichi Sasada over 2 years ago
- Priority changed from Normal to 7
#12
Updated by Yukihiro Matsumoto over 2 years ago
Since this is incompatibility, I propose to get back to the old behavior, for 2.0.
We have to discuss the issue that @zenspider mentioned in separated thread.
Matz.
#13
Updated by Yusuke Endoh over 2 years ago
- Assignee changed from Yusuke Endoh to Nobuyoshi Nakada
- Status changed from Open to Assigned
I agree with Matz. Nobu, please handle this.
#14
Updated by Nobuyoshi Nakada over 2 years ago
- % Done changed from 0 to 100
- Status changed from Assigned to Closed
This issue was solved with changeset r38888.
Aaron, thank you for reporting this issue.
Your contribution to Ruby is greatly appreciated.
May Ruby be with you.
marshal.c: get back to the old behavior
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/7564 | CC-MAIN-2015-27 | refinedweb | 837 | 67.35 |
AtFS - introduction to AtFS library functions and error codes
#include <atfs.h> int af_errno
The following manual pages (named af*) describe the library functions of the Attribute File System (AtFS). AtFS is an extension to the UNIX file system interface that allows the storage of files as complex of content data and an arbitrary number of associated attributes. Attributes are either standard attributes or user defined attributes (also called application defined attributes). A data complex consisting of contents data and attributes, is called Attributed Software Object (ASO). In the following they are also referred to as object versions or simply versions. AtFS has a built-in version control system that manages object histories. A object history consists of an optional busy version and a number of saved versions. The busy version is represented by an ordinary alterable UNIX file. It can be accessed via AtFS and UNIX file system operations. Saved versions come into being by making a copy of the current state of the busy version. They can be stored as source versions or as derived versions. Source versions are typically composed manually (e.g. by use of a text editor). AtFS stores source versions as immutable objects. Once saved, they cannot be modified any longer. Saved versions are stored in archive files, residing in a subdirectory named AtFS. AtFS maintains two archive files for each history of source versions - one holding the attributes and the other holding the data. To save disk space, the versions in an archive are stored as deltas, that means, only differences between two successive versions are stored. Derived versions are typically derived automatically (e.g. by a compiler) from a source version and thus be reproducible at any time. They are kept in a derived object cache a data store of limited size that is administered in a cache fashion. When space runs short, old versions are cleaned out of the cache according to a defined policy. Check af_cache(3) for details. AtFS makes no assumptions whether a copy of a busy object shall be stored as source object or as derived object. The application has to decide that by calling the appropriate function (af_saverev - manual page af_version(3) or af_savecache - manual page af_cache(3)). The main data types for AtFS applications are: · The object key that uniquely identifies an ASO (version). The structure of this type can be different in different implementations of AtFS. Consequently, application programs should handle this type as opaque type and should not access single fields. · A set descriptor represents a set of object keys. A set descriptor contains information about the size of the set and a list of object keys in the set. · The user identification represents a user. As AtFS realizes a simple network user concept, it does not identify users by their UNIX user id, but rather by the user name and the domain where this name is valid. See af_afuser (manual page af_misc(3)) for details. · An attribute buffer is capable to hold all attributes of a software object (standard attributes and user defined attributes). Attribute buffers have two different purposes. First, they can hold a retrieve pattern, i.e. they may be (partially) filled with desired attribute values and then be passed as argument to a retrieve operation (af_find and af_cachefind - manual page af_retrieve(3)). Second, an attribute buffer is used to return all attributes of an identified ASO on demand. There are several ways for an AtFS application to get an object key pointing to a specific object version. The most important is the function af_getkey (manual page af_retrieve(3)), that returns a key for an explicitly named version. After having retrieved a key, the data for that object version remains cached in memory as long as the application does not explicitly give the key back. The function af_dropkey (manual page af_retrieve(3)) gives a key back and releases the object version. A retrieved set of keys has also to be given back by use of af_dropset (manual page af_retrieve(3)). af_dropall (manual page af_retrieve(3)) sets all reference counters for cached object versions to zero, that means, it gives all formerly retrieved keys and sets back. In the case, that any attribute, or the contents data of a cached object version is modified on disk by another application, the data in the cache is automatically updated. In that case, a warning is written to the error log file. For handling sets of keys, AtFS provides a full set algebra with functions for adding and deleting single keys, sorting, building subsets, intersections, unions and differences of sets. The built-in version control functionality features a status mode for versions. Each object version has a certain state, one of busy, saved, proposed, published, accessed and frozen. The already known busy version always has the state busy, while when previously referencing a saved versions we meant a version that can have any state from saved to frozen. AtFS does not enforce any specific semantics with the version state. It is rather a help for applications to implement a classification for versions. Another part of the version control functionality is a locking facility. Adding a new version to an existing object history always requires a lock on the most recent version or the generation to which the version shall be added. Locks can be used for implementing a synchronization mechanism for concurrent updates to one object history. A user defined attribute (or application defined attribute) has the general form name[=value [value [...]]]. It has a name and a (possibly empty) list of values. The name may consist of any characters (even non-alphanumeric) but an equal sign (=). The equal sign is the delimiter between name and value. The attribute value may not contain Ctrl-A (\001) characters. Although AtFS promises the storage of an arbitrary number of user defined attributes, the current implementation limits their number to 255. Most of the AtFS calls have one or more error returns. An error is indicated by a return value that is either -1 or a nil pointer, depending on the return type of the function. If one of the functions returns with an error, the variable af_errno is set to indicate the appropriate error number. af_errno is not cleared upon successful calls.
The following is a complete collection of the error numbers defined in atfs.h. The first list contains return values indicating common error conditions like implausible arguments passed to an AtFS function or permission problems. The error codes listed in the second list point to serious trouble, which can either be an internal error in AtFS or a corrupted archive file. The occurrence of an serious problem is recorded in an error protocol (/usr/adm/AtFSerrlog). On machines without syslog(3) switched on for AtFS, the error protocol is located in a /tmp/AtFSerrlog. AF_ESYSERR (-2) Error during execution of system library command or system call A called system library function or system call returned with an error condition. See errno for a more precise error specification. AF_EACCES (-3) permission denied An attempt was made to perform an operation (e.g. open a file) without appropriate permissions. AF_EARCHANGED (-4) archive file has changed since last read One of the archive files you are operating on has been modified by another process since your process has read it from disk. In this case, AtFS refuses to store your changes because this would destroy the modifications made by the other process. In order to make your desired changes happen, you have to rerun your application. AF_EBUSY (-7) Specified ASO must not be a busy version Some AtFS-operations cannot be performed on ASOs which have the state AF_BUSY. AF_EDERIVED (-8) Specified ASO must not be a derived object Some AtFS-operations (eg. af_lock - manual page af_lock(3), af_newgen - manual page af_version(3)) cannot be performed on ASOs stored in derived object caches. AF_EINVKEY (-9) invalid object key An invalid object key (eg. nil pointer) was passed to an AtFS function. AF_EINVSET (-10) invalid set descriptor An invalid set descriptor (eg. nil pointer) was passed to an AtFS function. AF_EINVUSER (-11) invalid user An invalid user structure or user id was passed to an AtFS operation. AF_EINVVNUM (-12) Bad version number An attempt was made to set a version number, that contradicts the AtFS version numbering philosophy, to an ASO. You cannot change the version number of an ASO into a version number that is "smaller" than the one given by the system. AF_EMISC (-15) miscellaneous errors This error code is set when an error occurs that does not fit in any of the other error categories. See your error logging file (/usr/adm/AtFSerrlog of /tmp/AtFSerrlog) for a detailed description of the error. af_perror (manual page af_error(3)) also gives a diagnostic message explaining the error. AF_EMODE (-16) invalid mode The function af_setattr (manual page af_attrs(3)) requires a change mode. This error condition complains about an invalid mode given to this function. AF_ENOATFSDIR (-17) AtFS subdirectory missing or not writable There is no place where AtFS can create it’s archive files. Either a global archive path should be defined (see af_setarchpath - manual page af_misc(3)) or a subdirectory named AtFS should be present and writable. AF_ENOKEY (-18) key does not exist in set A specified key that shall be removed from a keyset does not exist in the concerning set. AF_ENOPOS (-19) invalid position in set A specified position in a keyset (keysets are organized as arrays of object keys) lies beyond the bounds of the concerning set. AF_ENOREV (-20) specified revision does not exist A revision - uniquely identified by a set of attributes (eg. af_getkey - manual page af_retrieve(3)) - does not exist in the current search space. AF_ENOTBUSY (-21) specified ASO is not busy version Some AtFS operations (eg. af_setbusy - manual page af_version(3)) require the key of a busy ASO as input parameter. If you pass key of a non-busy ASO, the function sets this error code. AF_ENOTDERIVED (-22) specified ASO is no derived object An attempt was made to restore an object that in not stored in a derived object cache. AF_ENOTLOCKED (-23) specified ASO is not locked or locked by another user An AtFS operation cannot be performed because the specified ASO is either not locked (see af_lock(3)) or it is locked by another user. AF_ENOTREGULAR (-24) specified ASO is no regular file With this error number AtFS refuses to generate versions of non- regular UNIX files such as directories, special files and sockets. AF_ENOTVERS (-25) specified ASO has no versions Typically, this error occurs if an operation requiring at least one saved revision (eg. af_newgen - manual page af_version(3)) is applied on a versionless file. AF_ENOUDA (-26) user defined attribute does not exist A user defined attribute with the given name does not exist. AF_ESAVED (-27) saved versions cannot be modified An attempt was made to open a non-busy version with write access. AF_ESTATE (-28) invalid state transition The Attribute File System’s built in revision-states model allows only well defined revision state changes. AF_EWRONGSTATE (-31) wrong state Some AtFS operations can only be performed on ASOs with a specific version state. The error codes indicating real trouble: AF_EDELTA (-32) Error during delta operation Some error occurred during invocation of the delta processor. AF_EINCONSIST (-33) Archive file inconsistent The data in the archive file are corrupted. This may have happened by editing the archive file or by malfunction of an AtFS operation. Try atfsrepair ti fix it. AF_EINTERNAL (-34) internal error Please inform your local trouble shooting service, go to your favorite pub and order a beers. Cheers ! AF_ENOATFSFILE (-35) No AtFS file Archive file lost.
Name Appears on PageDescription af_abort af_transact.3 abort transaction af_access af_history.3 test existence of history af_afname af_misc.3 get ASO name from UNIX path af_afpath af_misc.3 get ASO syspath from UNIX path af_aftype af_misc.3 get ASO type from UNIX path af_afuser af_misc.3 get AtFS user from UNIX uid af_allattrs af_attrs.3 get all attributes of ASO af_cachefind af_retrieve.3 find derived objects by attributes af_cachenames af_history.3 return list of names in derived object cache af_cachesize af_cache.3 define object cache size policy af_chauthor af_protect.3 change author of ASO af_chmod af_protect.3 change protection of ASO af_chowner af_protect.3 change owner of ASO af_cleanup af_error.3 do cleanup after error af_close af_files.3 close ASO contents af_commit af_transact.3 commit transaction af_copyset af_sets.3 copy sets af_crkey af_files.3 create object key for UNIX-file af_diff af_sets.3 build difference between two sets of object keys af_dropall af_retrieve.3 drop all accessed object keys af_dropkey af_retrieve.3 drop object key af_dropset af_retrieve.3 drop set of object keys af_errmsg af_error.3 return AtFS- or system error message af_errno af_error.3 global error code variable af_establish af_files.3 establish ASO as UNIX file af_find af_retrieve.3 find ASOs by attributes af_freeattr af_attrs.3 free memory associated with attribute value af_freeattrbuf af_attrs.3free memory associated with attribute buffer af_getkey af_retrieve.3 get key by unique attributes af_histories af_history.3 return list of history names matching pattern af_initattrs af_retrieve.3 initialize attribute buffer af_initset af_sets.3 initialize set af_intersect af_sets.3 build intersection between two sets of object keys af_isstdval af_attrs.3 check if attribute is from a standard attribute af_lock af_lock.3 set lock on ASO af_newgen af_version.3 increase generation number of ASO af_nrofkeys af_sets.3 return number of keys in set of object keys af_open af_files.3 open ASO contents af_perror af_error.3 report AtFS- or system error af_predsucc af_version.3 get successor or predecessor of version af_restore af_files.3 restore derived ASO af_retattr af_attrs.3 return value of attribute as string af_retnumattr af_attrs.3 return value of numeric attribute af_rettimeattr af_attrs.3return value of time attribute af_retuserattr af_attrs.3return value of user attribute af_rm af_files.3 remove ASO af_rnote af_note.3 return note attribute af_savecache af_cache.3 save derived object to cache af_saverev af_version.3 save busy version of source object af_setaddkey af_sets.3 add key to set of object keys af_setarchpath af_misc.3set path for location of archives af_setattr af_attrs.3 set user defined attribute af_setbusy af_version.3 set version busy af_setgkey af_sets.3 get key from set of object keys af_setposrmkey af_sets.3remove key (id. by position) from set of object keys af_setrmkey af_sets.3 remove key from set of object keys af_snote af_note.3 modify note attribute af_sortset af_sets.3 sort set of object keys af_sstate af_version.3 set version state af_subset af_sets.3 build subset of set of object keys af_svnum af_version.3 set version number af_testlock af_lock.3 see if ASO is locked af_transaction af_transact.3begin transaction af_union af_sets.3 build union of two sets of object keys af_unlock af_lock.3 remove lock from ASO af_version af_misc.3 return identification of current AtFS version
/var/adm/AtFSerrlog, /tmp/AtFSerrlog
intro(2), intro(3) Andreas Lampen and Axel Mahler An Object Base for Attributed Software Objects Proceedings of the Autumn ’88 EUUG Conference (Cascais, October 3-7) European UNIX System User Group, London 1988 For an outlook on a totally new implementation of the AtFS idea, have a look at: Andreas Lampen Advancing Files to Attributed Software Objects Proceedings of USENIX Technical Conference (Dallas TX, January 21-25, 1991) USENIX Association, Berkeley CA, January 1991
The built in archive file locking to prevent concurrent updates is very simple and my lead to confusing situations. It may happen that changes will not be saved to disk. See the error conditions AF_EARCHANGED and AF_EARLOCKED for more details.
Andreas Lampen, Tech. Univ. Berlin (Andres.Lampen@cs.tu-berlin.de) Technische Universitaet Berlin Sekr. FR 5-6 Franklinstr. 28/29 D-10587 Berlin, Germany | http://huge-man-linux.net/man3/af_intro.html | CC-MAIN-2017-13 | refinedweb | 2,622 | 58.18 |
As shown in Tracing FreeRTOS with a Hardware Probe: I have a nice hardware probe to trace out events from my application. But what about to use the target memory as trace buffer? New devices have much more on-chip memory, so this could be an attractive option. That was on my list of future extensions, but then the news came in: Percepio announced their collaboration with FreeRTOS+Trace: exactly what I needed!
It is using the same concept as the FreeRTOS Trace Probe: the trace hooks provided by the FreeRTOS API. But instead streaming it off the target as with the FreeRTOS Trace probe, it is using a RAM buffer on the device. The real cool thing is: the Percepio trace viewer is very, very nice!
FreeRTOS+Trace from Percepio is not open source, but they offer a free of charge version. You can download the library sources after requesting it from the Percepio web site. The Trace viewer comes in three editions: Free, Standard and Professional. With the installation you get as well a 30 day Professional Evaluation license mode. The Free Edition comes with the basic functionality to view the tasks scheduling information. From the Free Edition you can switch to the Standard and Professional Edition, but you will be limited to a demo trace. Still, this gives you a good preview what you can expect in the other editions.
Now here some screenshots so you see why the Percepio FreeRTOS+Trace is really cool. The main window shows the flow of tasks with many information:
The CPU load graph gives a visual indication of the system load:
Cool is as well the communication flow view:
The application can add user events to the trace, which then can be visualized in the Signal Plot:
And if I want to know more about what the kernel blocking API calls: there is a view for that too:
But how to get it hooked up to my system? As you know from my other posts, my answer is: I use Processor Expert :-).
With the FreeRTOS Hardware Trace I have a good base. So what I need is instead tracing to an external hardware, I need to trace to RAM and using the FreeRTOS+Trace API library. To make things really easy to use, I have extended the existing FreeRTOSTrace component and created a new PercepioTrace Processor Expert component. Additionally I have changed the FreeRTOS component to use a free running counter instead of a normal timer interrupt: this allows me to gather performance data for the Percepio Trace.
Here is how this looks like:
I enable the TraceHooks property in the FreeRTOS component:
The FreeRTOSTrace component will create all the FreeRTOS hooks for my application. In the inherited FreeRTOSTrace component, I simply enable Percepio Trace and enable the hooks I want to use:
In the Percepio Trace component, I can configure all the different settings. I can configure if I want to run a console and a separate progress task, define the buffer size and configure the Timer Type: RTOS Ticks or using a Hardware Timer:
That way, the FreeRTOS component inherits the FreeRTOSTrace component, which then inherits the PercepioTrace component:
Adding trace collection to my application is easy: I need a custom debug print out function. The trace module gets initialized with vTraceInit(). And starting trace collection is done with vTraceStart():
#include "Ptrc1.h" void DBGU_Print_Ascii(char * buffer) { FSSH1_SendStr((unsigned char*)buffer, FSSH1_GetStdio()->stdOut); } void main(void) { /*** Processor Expert internal initialization. DON'T REMOVE THIS CODE!!! ***/ PE_low_level_init(); /*** End of Processor Expert internal initialization. ***/ Ptrc1_vTraceStart(); APP_Run(); /* runs the RTOS and all tasks....*/ /*** Don't write any code pass this line, or it will be deleted during code generation. ***/ /*** Processor Expert end of main routine. DON'T MODIFY THIS CODE!!! ***/ for(;;){} /*** Processor Expert end of main routine. DON'T WRITE CODE BELOW!!! ***/ } /*** End of main routine. DO NOT MODIFY THIS TEXT!!! ***/
The library comes with an option to write the trace to an file system. If I find time, I’ll configure it to use the FatFS component I already have created.
The simplest way to get the trace data is to run the application and stop it with the debugger, then dump the trace information. To save the trace to a file, I can use a target task to export and save a memory block: I specify the Recorder Data variable (&RecorderData), the file name and the size (which depends on your trace data structure):
Then I can dump anytime the data using the target task execute button:
As with the target task I need to specify the size of the memory, this sometimes is not ideal. The good thing is: the trace data has special markers in it, so you can dump as well your whole memory: the Percepio Trace Viewer will find the trace data in the dump.
Another way is to use the CodeWarrior Debugger Shell (menu Window > Show View > Debug > Debugger Shell) and to run the following script:
set start_addr [evaluate &RecorderData] set end_addr [evaluate ((char*)&RecorderData) + sizeof(RecorderData) - 1] save -b $start_addr..$end_addr c:\\tmp\\myTrace.dump -o
UPDATE: For MCU10.5, the following script needs to be used:
set start_addr [evaluate &RecorderData] set end_addr [evaluate ((int)&RecorderData) + sizeof(RecorderData) - 1] save -b $start_addr..0x[format %04X $end_addr] c:\\tmp\\myTrace.dump -oThis does the same thing as the target task, but calculates the exact size of the trace data structure:
So far, this is working great. I have it working for all the ColdFire V2 cores and all the S08 cores.
What is next?
- Port it to FreeRTOS+Trace v2.2.1 (just has been released today)
- Supporting all S12 and S12X/XGATE: should be easy 🙂
- Support for all Kinetis: a challenge with the LDD drivers, but will make it work
- Storing trace data on an SD card using FatFS component
- Using FSShell as console/output channel
- Support for Hawk/DSC. But here I need to finish and polish my FreeRTOS port first
So a lot of night and week-end work. I’ll keep things posted…..
Happy Perceptioning 🙂
Fantastic work!
Amen
The FreeRTOS+Trace component has been enhanced and published on
Pingback: FreeRTOS V7.1.1 released | MCU on Eclipse
Pingback: Percepio FreeRTOS+Trace V2.2.2 released | MCU on Eclipse
Pingback: The Making Of RTOS Processor Expert Components | MCU on Eclipse
Pingback: Breakpoints with Special Effects | MCU on Eclipse
Pingback: There is an ARM to Trace | MCU on Eclipse
Pingback: CDE Hacking: Components with Multiple Files | MCU on Eclipse
Pingback: Sumo Robots, one Week until Tournament | MCU on Eclipse
Pingback: Updated McuOnEclipse Components: USB for KL24Z, FatFs v0.10c, Shell Backspace and FreeRTOS Trace Hook Configuration | MCU on Eclipse
Pingback: FreeRTOS Continuous Trace Streaming | MCU on Eclipse | https://mcuoneclipse.com/2012/03/23/tracing-with-freertostrace-from-percepio/ | CC-MAIN-2018-51 | refinedweb | 1,125 | 61.06 |
README
¶
log15, + more
Versioning
The API of the master branch of log15 should always be considered unstable. If you want to rely on a stable API, you must vendor the library.
Importing
import log "github.com/inconshreveable/log15"
Examples
//
Breaking API Changes
The following commits broke API stability. This reference is intended to help you understand the consequences of updating to a newer version of log15.
- 57a084d014d4150152b19e4e531399a7145d1540 - Added a
Get()method to the
Loggerinterface to retrieve the current handler
- 93404652ee366648fa622b64d1e2b67d75a3094a -
Recordfield
Callchanged to
stack.Callwith switch to
github.com/go-stack/stack
- a5e7613673c73281f58e15a87d2cf0cf111e8152 - Restored
syslog.Priorityargument to the
SyslogXxxhandler constructors
The varargs style is brittle and error prone! Can I have type safety please?
Yes. Use
log.Ctx:
srvlog := log.New(log.Ctx{"module": "app/server"}) srvlog.Warn("abnormal conn rate", log.Ctx{"rate": curRate, "low": lowRate, "high": highRate})
License
Apache
Documentation
¶
Overview ¶
Package
Getting Started ¶
To get started, you'll want to import the library:
import log "github.com/inconshreveable/log15"
Now you're ready to start logging:
func main() { log.Info("Program starting", "args", os.Args()) }
Convention ¶})
Context loggers ¶
Handlers ¶
The Handler interface defines where log lines are printed to and how they are formated. Handler is a single interface that is inspired by net/http's handler interface:
type Handler interface { Log(r *Record) error }()) )
Logging File Names and Line Numbers ¶ github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available.
Custom Handlers ¶.
Logging Expensive Operations ¶.
Dynamic context values ¶})
Terminal Format ¶
If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level.
Error Handling ¶.
Library Use ¶ ¶ "github.com/inconshreveable/log15.
Must ¶)
Inspiration and Credit ¶
The Name ¶
Index ¶
- Variables
- func Crit(msg string, ctx ...interface{})
- func Debug(msg string, ctx ...interface{})
- func Error(msg string, ctx ...interface{})
- func Info(msg string, ctx ...interface{})
- func Output(msg string, lvl Lvl, calldepth int, ctx ...interface{})
- func PrintOrigins(print bool)
- func Trace(msg string, ctx ...interface{})
- func Warn(msg string, ctx ...interface{})
- type Ctx
- type Format
-
- type GlogHandler
-
-
- type Handler
- func BufferedHandler(bufSize int, h Handler) Handler
- func CallerFileHandler(h Handler) Handler
- func CallerFuncHandler(h Handler) Handler
- func CallerStackHandler(format string, h Handler) Handler
- func ChannelHandler(recs chan<- *Record) Handler
- func DiscardHandler() Handler
- func FailoverHandler(hs ...Handler) Handler
- func FileHandler(path string, fmtr Format) (Handler, error)
- func FilterHandler(fn func(r *Record) bool, h Handler) Handler
- func FuncHandler(fn func(r *Record) error) Handler
- func LazyHandler(h Handler) Handler
- func LvlFilterHandler(maxLvl Lvl, h Handler) Handler
- func MatchFilterHandler(key string, value interface{}, h Handler) Handler
- func MultiHandler(hs ...Handler) Handler
- func NetHandler(network, addr string, fmtr Format) (Handler, error)
- func StreamHandler(wr io.Writer, fmtr Format) Handler
- func SyncHandler(h Handler) Handler
- func SyslogHandler(priority syslog.Priority, tag string, fmtr Format) (Handler, error)
- func SyslogNetHandler(net, addr string, priority syslog.Priority, tag string, fmtr Format) (Handler, error)
- type Lazy
- type Logger
-
- type Lvl
-
-
- type Record
- type RecordKeyNames
- type TerminalStringer
Constants ¶
This section is empty.
Variables ¶
var ( StdoutHandler = StreamHandler(os.Stdout, LogfmtFormat()) StderrHandler = StreamHandler(os.Stderr, LogfmtFormat()) )
var Must muster
Must provides the following Handler creation functions which instead of returning an error parameter only return a Handler and panic on failure: FileHandler, NetHandler, SyslogHandler, SyslogNetHandler
Functions ¶
func Debug ¶
Debug is a convenient alias for Root().Debug
func Error ¶
Error is a convenient alias for Root().Error
func Output ¶
Output is a convenient alias for write, allowing for the modification of the calldepth (number of stack frames to skip). calldepth influences the reported line number of the log message. A calldepth of zero reports the immediate caller of Output. Non-zero calldepth skips as many stack frames.
func PrintOrigins ¶
PrintOrigins sets or unsets log location (file:line) printing for terminal format output.
func Trace ¶
Trace is a convenient alias for Root().Trace
Types ¶
type Ctx ¶
Ctx is a map of key/value pairs to pass as context to a log function Use this only if you really need greater safety around the arguments you pass to the logging functions.
type Format ¶
func FormatFunc ¶
FormatFunc returns a new Format object which uses the given function to perform record formatting.
func JSONFormat ¶
JSONFormat formats log records as JSON objects separated by newlines. It is the equivalent of JSONFormatEx(false, true).
func JSONFormatEx ¶
JSONFormatEx formats log records as JSON objects. If pretty is true, records will be pretty-printed. If lineSeparated is true, records will be logged with a new line between each record.
func LogfmtFormat ¶
LogfmtFormat prints records in logfmt format, an easy machine-parseable but human-readable format for key/value pairs.
For more details see:
func TerminalFormat ¶
TerminalFormat formats log records optimized for human readability on a terminal with color-coded level output and terser human friendly timestamp. This format should only be used for interactive programs or while developing.
[TIME] [LEVEL] MESAGE key=value key=value ...
Example:
[May 16 20:58:45] [DBUG] remove route ns=haproxy addr=127.0.0.1:50002
type GlogHandler ¶
type GlogHandler struct { // contains filtered or unexported fields }
GlogHandler is a log handler that mimics the filtering features of Google's glog logger: setting global log levels; overriding with callsite pattern matches; and requesting backtraces at certain positions.
func NewGlogHandler ¶
func NewGlogHandler(h Handler) *GlogHandler
NewGlogHandler creates a new log handler with filtering functionality similar to Google's glog logger. The returned handler implements Handler.
func (*GlogHandler) BacktraceAt ¶
func (h *GlogHandler) BacktraceAt(location string) error
BacktraceAt sets the glog backtrace location. When set to a file and line number holding a logging statement, a stack trace will be written to the Info log whenever execution hits that statement.
Unlike with Vmodule, the ".go" must be present.
func (*GlogHandler) Log ¶
func (h *GlogHandler) Log(r *Record) error
Log implements Handler.Log, filtering a log record through the global, local and backtrace filters, finally emitting it if either allow it through.
func (*GlogHandler) Verbosity ¶
func (h *GlogHandler) Verbosity(level Lvl)
Verbosity sets the glog verbosity ceiling. The verbosity of individual packages and source files can be raised using Vmodule.
func (*GlogHandler) Vmodule ¶
func (h *GlogHandler) Vmodule(ruleset string) error
Vmodule sets the glog verbosity pattern.
The syntax of the argument is a comma-separated list of pattern=N, where the pattern is a literal file name or "glob" pattern matching and N is a V level.
For instance:
pattern="gopher.go=3" sets the V level to 3 in all Go files named "gopher.go" pattern="foo=3" sets V to 3 in all files of any packages whose import path ends in "foo" pattern="foo/*=3" sets V to 3 in all files of any packages whose import path contains "foo"
type Handler ¶
Handler defines where and how log records are written. A Logger prints its log records by writing to a Handler. Handlers are composable, providing you great flexibility in combining them to achieve the logging structure that suits your applications.
func BufferedHandler ¶
BufferedHandler writes all records to a buffered channel of the given size which flushes into the wrapped handler whenever it is available for writing. Since these writes happen asynchronously, all writes to a BufferedHandler never return an error and any errors from the wrapped handler are ignored.
func CallerFileHandler ¶
CallerFileHandler returns a Handler that adds the line number and file of the calling function to the context with key "caller".
func CallerFuncHandler ¶
CallerFuncHandler returns a Handler that adds the calling function name to the context with key "fn".
func CallerStackHandler ¶
CallerStackHandler returns a Handler that adds a stack trace to the context with key "stack". The stack trace is formated as a space separated list of call sites inside matching []'s. The most recent call site is listed first. Each call site is formatted according to format. See the documentation of package github.com/go-stack/stack for the list of supported formats.
func ChannelHandler ¶
ChannelHandler writes all records to the given channel. It blocks if the channel is full. Useful for async processing of log messages, it's used by BufferedHandler.
func DiscardHandler ¶
DiscardHandler reports success for all writes but does nothing. It is useful for dynamically disabling logging at runtime via a Logger's SetHandler method.
func FailoverHandler ¶
FailoverHandler writes all log records to the first handler specified, but will failover and write to the second handler if the first handler has failed, and so on for all handlers specified. For example you might want to log to a network socket, but failover to writing to a file if the network fails, and then to standard out if the file write fails:
log.FailoverHandler( log.Must.NetHandler("tcp", ":9090", log.JSONFormat()), log.Must.FileHandler("/var/log/app.log", log.LogfmtFormat()), log.StdoutHandler)
All writes that do not go to the first handler will add context with keys of the form "failover_err_{idx}" which explain the error encountered while trying to write to the handlers before them in the list.
func FileHandler ¶
FileHandler returns a handler which writes log records to the give file using the given format. If the path already exists, FileHandler will append to the given file. If it does not, FileHandler will create the file with mode 0644.
func FilterHandler ¶
FilterHandler returns a Handler that only writes records to the wrapped Handler if the given function evaluates true. For example, to only log records where the 'err' key is not nil:
logger.SetHandler(FilterHandler(func(r *Record) bool { for i := 0; i < len(r.Ctx); i += 2 { if r.Ctx[i] == "err" { return r.Ctx[i+1] != nil } } return false }, h))
func FuncHandler ¶
FuncHandler returns a Handler that logs records with the given function.
func LazyHandler ¶
LazyHandler writes all values to the wrapped handler after evaluating any lazy functions in the record's context. It is already wrapped around StreamHandler and SyslogHandler in this library, you'll only need it if you write your own Handler.
func LvlFilterHandler ¶
LvlFilterHandler returns a Handler that only writes records which are less than the given verbosity level to the wrapped Handler. For example, to only log Error/Crit records:
log.LvlFilterHandler(log.LvlError, log.StdoutHandler)
func MatchFilterHandler ¶
MatchFilterHandler returns a Handler that only writes records to the wrapped Handler if the given key in the logged context matches the value. For example, to only log records from your ui package:
log.MatchFilterHandler("pkg", "app/ui", log.StdoutHandler)
func MultiHandler ¶
MultiHandler dispatches any write to each of its handlers. This is useful for writing different types of log information to different locations. For example, to log to a file and standard error:
log.MultiHandler( log.Must.FileHandler("/var/log/app.log", log.LogfmtFormat()), log.StderrHandler)
func NetHandler ¶
NetHandler opens a socket to the given address and writes records over the connection.
func StreamHandler ¶
StreamHandler writes log records to an io.Writer with the given format. StreamHandler can be used to easily begin writing log records to other outputs.
StreamHandler wraps itself with LazyHandler and SyncHandler to evaluate Lazy objects and perform safe concurrent writes.
func SyncHandler ¶
SyncHandler can be wrapped around a handler to guarantee that only a single Log operation can proceed at a time. It's necessary for thread-safe concurrent writes.
func SyslogHandler ¶
SyslogHandler opens a connection to the system syslog daemon by calling syslog.New and writes all records to it.
type Lazy ¶
type Lazy struct { Fn interface{} }
Lazy allows you to defer calculation of a logged value that is expensive to compute until it is certain that it must be evaluated with the given filters.
Lazy may also be used in conjunction with a Logger's New() function to generate a child logger which always reports the current value of changing state.
You may wrap any function which takes no arguments to Lazy. It may return any number of values of any type.
type Logger ¶
type Logger interface { // New returns a new Logger that has this logger's context plus the given context New(ctx ...interface{}) Logger // GetHandler gets the handler associated with the logger. GetHandler() Handler // SetHandler updates the logger to write records to the specified handler. SetHandler(h Handler) // Log a message at the given level with context key/value pairs Trace(msg string, ctx ...interface{}) Debug(msg string, ctx ...interface{}) Info(msg string, ctx ...interface{}) Warn(msg string, ctx ...interface{}) Error(msg string, ctx ...interface{}) Crit(msg string, ctx ...interface{}) }
A Logger writes key/value pairs to a Handler
func New ¶
New returns a new logger with the given context. New is a convenient alias for Root().New
type Lvl ¶
func LvlFromString ¶
LvlFromString returns the appropriate Lvl from a string name. Useful for parsing command line args and configuration files.
func (Lvl) AlignedString ¶
AlignedString returns a 5-character string containing the name of a Lvl.
type Record ¶
type Record struct { Time time.Time Lvl Lvl Msg string Ctx []interface{} Call stack.Call KeyNames RecordKeyNames }
A Record is what a Logger asks its handler to write
type RecordKeyNames ¶
RecordKeyNames gets stored in a Record when the write function is executed.
type TerminalStringer ¶
TerminalStringer is an analogous interface to the stdlib stringer, allowing own types to have custom shortened serialization formats when printed to the screen. | https://pkg.go.dev/github.com/PulsarTeam/pulsar/log | CC-MAIN-2021-39 | refinedweb | 2,204 | 57.47 |
moebius alternatives and similar packages
Based on the "ORM and Datamapping" category.
Alternatively, view moebius alternatives based on common mentions on social networks and blogs.
ecto10.0 9.2 moebius VS ectoA toolkit for data mapping and language integrated query.
eredis9.7 0.0 moebius VS eredisErlang Redis client
postgrex9.6 6.8 moebius VS postgrexPostgreSQL driver for Elixir
redix9.5 6.2 moebius VS redixFast, pipelined, resilient Redis driver for Elixir. 🛍
eventstore9.4 7.6 moebius VS eventstoreEvent store using PostgreSQL for persistence
ecto_enum9.2 0.1 moebius VS ecto_enumEcto extension to support enums in models
mongodb9.2 6.7 moebius VS mongodbMongoDB driver for Elixir
amnesia9.2 0.0 moebius VS amnesiaMnesia wrapper for Elixir.
memento9.1 4.1 moebius VS mementoSimple + Powerful interface to the Mnesia Distributed Database 💾
mysql9.0 3.5 moebius VS mysqlMySQL/OTP – MySQL and MariaDB client for Erlang/OTP
mongodb_ecto9.0 0.0 moebius VS mongodb_ectoMongoDB adapter for Ecto
rethinkdb9.0 0.0 moebius VS rethinkdbRethinkdb client in pure elixir (JSON protocol)
paper_trail8.9 7.3 moebius VS paper_trailTrack and record all the changes in your database with Ecto. Revert back to anytime in history.
arc_ecto8.8 0.0 moebius VS arc_ectoAn integration with Arc and Ecto.
exredis8.7 0.0 moebius VS exredisRedis commands for Elixir
mariaex8.6 0.0 moebius VS mariaexPure Elixir database driver for MariaDB / MySQL
triplex8.4 2.7 moebius VS triplexDatabase multitenancy for Elixir applications!
ExAudit8.4 4.1 moebius VS ExAuditEcto auditing library that transparently tracks changes and can revert them.
ecto_mnesia8.3 0.0 moebius VS ecto_mnesiaEcto adapter for Mnesia Erlang term database.
shards8.2 1.3 moebius VS shardsPartitioned ETS tables for Erlang and Elixir
xandra8.2 0.0 moebius VS xandraFast, simple, and robust Cassandra driver for Elixir.
riak8.2 0.0 moebius VS riakA Riak client written in Elixir.
Bolt.Sips8.1 1.3 moebius VS Bolt.SipsNeo4j driver for Elixir
timex_ecto7.9 0.0 moebius VS timex_ectoAn adapter for using Timex DateTimes with Ecto
atlas7.8 0.0 moebius VS atlasObject Relational Mapper for Elixir
kst7.8 8.0 moebius VS kst💿 KVS: Abstract Chain Database
instream7.7 8.6 moebius VS instreamInfluxDB driver for Elixir
tds7.7 1.7 moebius VS tdsTDS Driver for Elixir
esqlite7.6 1.2 L3 moebius VS esqliteErlang NIF for sqlite
ecto_psql_extras7.6 4.8 moebius VS ecto_psql_extrasEcto PostgreSQL database performance insights. Locks, index usage, buffer cache hit ratios, vacuum stats and more.
arbor7.6 0.0 moebius VS arborEcto elixir adjacency list and tree traversal. Supports Ecto versions 2 and 3.
inquisitor7.5 0.0 moebius VS inquisitorComposable query builder for Ecto
extreme7.5 0.0 moebius VS extremeElixir Adapter for EventStore
ecto_fixtures7.4 0.0 moebius VS ecto_fixturesFixtures for Elixir apps
sqlitex7.3 3.4 moebius VS sqlitexAn Elixir wrapper around esqlite. Allows access to sqlite3 databases.
kalecto7.3 0.0 moebius VS kalectoAdapter for the Calendar library in Ecto
mongo7.2 0.0 moebius VS mongoMongoDB driver for Elixir
mongodb_driver7.1 5.9 moebius VS mongodb_driverMongoDB driver for Elixir
boltun7.0 0.0 moebius VS boltunTransforms notifications from the Postgres LISTEN/NOTIFY mechanism into callback execution
redo7.0 0.0 moebius VS redopipelined erlang redis client
tds_ecto7.0 0.0 moebius VS tds_ectoTDS Adapter for Ecto
gremlex6.9 0.0 moebius VS gremlexElixir Client for Gremlin (Apache TinkerPop™)
sqlite_ecto6.9 0.0 moebius VS sqlite_ectoSQLite3 adapter for Ecto
couchdb_connector6.9 0.0 moebius VS couchdb_connectorA couchdb connector for Elixir
neo4j_sips6.8 0.0 moebius VS neo4j_sipsElixir driver for the Neo4j graph database server
sql_dust6.8 2.3 moebius VS sql_dustEasy. Simple. Powerful. Generate (complex) SQL queries using magical Elixir SQL dust.
craterl6.8 0.0 moebius VS craterlErlang client for crate.
github_ecto6.7 0.0 moebius VS github_ectoEcto adapter for GitHub API
ecto_cassandra6.6 0.0 moebius VS ecto_cassandraCassandra Ecto Adapter
triton6.5 0.0 moebius moebius or a related project?
Popular Comparisons
README
Moebius 3.0: A functional query tool for Elixir and PostgreSQL.
Note: this is version 2.0 and there are significant changes from version 1.0. If you need version 1.x, you can find the last release here
Our goal with creating Moebius is to try and keep as close as possible to the functional nature of Elixir and, at the same time, the goodness that is PostgreSQL. We think working with a database should feel like a natural extension of the language, with as little abstraction wonkery as possible.
Moebius is not an ORM. There are no mappings, no schemas, no migrations; only queries and data. We embrace PostgreSQL as much as possible, surfacing the goodness so you be a hero.
Difference from version 2.0
- Fixed a number of issues surrounding dates etc
- Moved to a more Elixiry way of returning results, using
{:ok, result}and
{:error, error}. We were always doing the latter, but decided to move to the former to keep in step with other libraries.
- Moved to Elixir 1.4, along with Dates etc.
- Removed multiple dependencies, including Timex and Poolboy (which is built into the driver, Postgrex)
Documentation
API documentation is available at
Building docs from source
$ MIX_ENV=dev mix docs
Installation
Installing Moebius involves a few small steps:
- Add moebius to your list of dependencies in
mix.exs:
def deps do [{:moebius, "~> 3.0.1"}] end
- Add the db child process to your
Applicationmodule:
children = [ Moebius.Db ]
Run
mix deps.get and you'll be good to go.
Connecting to PostgreSQL
There are various ways to connect to a database with Moebius. You can used a formal, supervised definition or just roll with our default. Either way, you start off by adding connection info in your
config.exs:
config :moebius, connection: [ hostname: "localhost", username: "username", password: "password", database: "my_db" ], scripts: "test/db"
You can also use a URL if you like:
config :moebius, connection: [ url: "postgresql://user:[email protected]/database" ], scripts: "test/db"
If you want to use environment variables, just set things using
System.env.
Under the hood, Moebius uses the Postgrex driver to manage connections and connection pooling. Connections are supervised, so if there's an error any transaction pending will be rolled back effectively (more on that later). The settings you provide in
:connection will be passed directly to Postgrex (aside from
:url, which we parse).
You might be wondering what the
scripts entry is? Moebius can execute SQL files directly for you - we'll get to that in a bit.
Supervision and Databases
Moebius formalizes the concept of a database connection, so you can supervise each independently, or not at all. This allows for a lot of flexibility. You don't have to do it this way, but it really helps.
You don't need to do any of this - we have a default DB setup for you. However, if you want a formalized, supervised module for your database, here's how you do it.
First, create a module for your database:
defmodule MyApp.Db do use Moebius.Database # helper/repo methods go here end
Next, in your
Application file, add this new module to your supervision tree:
def start(_type, _args) do start_db #... end def start_db do #create a child process children = [ {MyApp.Db, [Moebius.get_connection]} ] Supervisor.start_link children, strategy: :one_for_one end
That's it. Now, when your app starts you'll have a supervised database you can use as needed. The function
Moebius.get_connection/0 will look for a key called
:connection in your
config.exs. If you want to connect to multiple databases, name these connections something meaningful, then pass that to
Moebius.get_connection/1.
For instance, you might have a sales database and an accounting one; or you might have a read-only connection and a write-only one to spread the load. For this, just specify each as needed:
config :moebius, read_only: [ url: "postgresql://user:[email protected]/database" ], write_only: [ url: "postgresql://user:[email protected]/database" ], scripts: "test/db"
You can now use these in your database module:
def start(_type, _args) do start_db #... end def start_db do #create a worker read_only_db_worker = worker(MyApp.Db, [Moebius.get_connection(:read_only)]) write_only_db_worker = worker(MyApp.Db, [Moebius.get_connection(:write_only)]) Supervisor.start_link [read_only_db_worker, write_only_db_worker], strategy: :one_for_one end
It bears repeating: you don't need to do any of this, we have a default database setup for you. However supporting multiple connections was very high on our list so this is how we chose to do it (with many thanks to Peter Hamilton for the idea).
The rest of the examples you see below use our default database.
The Basic Query Flow
When querying the database (read or write), you construct the query and then pass it to the database you want:
{:ok, result} = Moebius.Query.db(:users) |> Moebius.Db.first
In this example,
db(:users) initiates the
QueryCommand, we can filter it, sort it, do all kinds of things. To run it, however, we need to pass it to the database we want to execute against.
The default database is
Moebius.Db, but you can make your own with a dedicated connection as needed (see above).
Let's see some more examples.
Simple Examples
The API is built around the concept of transforming raw data from your database into something you need, and we try to make it feel as functional as possible. We lean on Elixir's
|> operator for this, and it's the core of the API.
This returns a user with the id of 1.
{:ok, result} = db(:users) |> filter(name: "Steve") |> sort(:city, :desc) |> limit(10) |> offset(2) |> Moebius.Db.run
Hopefully it's fairly straightforward what this query returns. All users named Steve sorted by city... skipping the first two, returning the next 10.
An
IN query happens when you pass an array:
{:ok, result} = db(:users) |> filter(:name, ["mark", "biff", "skip"]) |> Moebius.Db.run #or, if you want to be more precise {:ok, result} = db(:users) |> filter(:name, in: ["mark", "biff", "skip"]) |> Moebius.Db.run
A NOT IN query happens when you specify the
not_in key:
{:ok, result} = db(:users) |> filter(:name, not_in: ["mark", "biff", "skip"]) |> Moebius.Db.run
If you don't want to deal with my abstractions, just use SQL:
{:ok, result} = "select * from users where id=1 limit 1 offset 1;" |> Moebius.Db.run
Full Text indexing
One of the great features of PostgreSQL is the ability to do intelligent full text searches. We support this functionality directly:
{:ok, result} = db(:users) |> search(for: "Mike", in: [:first, :last, :email]) |> Moebius.Db.run
The
search function builds a
tsvector search on the fly for you and executes it over the columns you send in. The results are ordered in descending order using
ts_rank.
JSONB Support
Moebius supports using PostgreSQL as a document store in its entirety. Get your project off the ground and don't worry about migrations - just store documents, and you can normalize if you need to later on.
Start by importing
Moebius.DocumentQuery and saving a document:
import Moebius.DocumentQuery {:ok, new_user} = db(:friends) |> Moebius.Db.save(email: "[email protected]", name: "Moe Test")
Two things happened for us here. The first is that
friends did not exist as a document table in our database, but
save/2 did that for us. This is the table that was created on the fly:
create table NAME( id serial primary key not null, body jsonb not null, search tsvector, created_at timestamptz not null default now(), updated_at timestamptz not null default now() ); -- index the search and jsonb fields create index idx_NAME_search on NAME using GIN(search); create index idx_NAME on NAME using GIN(body jsonb_path_ops);
The entire
DocumentQuery module works off the premise that this is how you will store your JSONB docs. Note the
tsvector field? That's PostgreSQL's built in full text indexing. We can use that if we want during by adding
searchable/1 to the pipe:
import Moebius.DocumentQuery {:ok, new_user} = db(:friends) |> searchable([:name]) |> Moebius.Db.save(email: "[email protected]", name: "Moe Test")
By specifying the searchable fields, the
search field will be updated with the values of the name field.
Now, we can query our document using full text indexing which is optimized to use the GIN index created above:
{:ok, user} = db(:friends) |> search("test.com") |> Moebius.Db.run
Or we can do a simple filter:
{:ok, user} = db(:friends) |> contains(email: "[email protected]") |> Moebius.Db.run
This query is optimized to use the
@ (or "contains" operator), using the other GIN index specified above. There's more we can do...
{:ok, users} = db(:friends) |> filter(:money_spent, ">", 100) |> Moebius.Db.run
This runs a full table scan so is not terribly optimal, but it does work if you need it once in a while. You can also use the existence (
?) operator, which is very handy for querying arrays. In the library, it is implemented as
exists:
{:ok, buddies} = db(:friends) |> exists(:tags, "best") |> Moebius.Db.run
This will allow you to query embeded documents and arrays rather easily, but again doesn't use the JSONB-optimized GIN index. You can index for using existence, have a look at the PostgreSQL docs.
Using Structs
If you're a big fan of structs, you can use them directly on
save and we'll send that same struct back to you, complete with an
id:
defmodule Candy do defstruct [ id: nil, sticky: true, chocolate: "gooey" ] end yummy = %Candy{} {:ok, res} = db(:monkies) |> Moebius.Db.save(yummy) #res = %Candy{id: 1, sticky: true, chocolate: "gooey"}
I've been using this functionality constantly with another project I'm working on and it's helped me tremendously.
SQL Files
I built this for MassiveJS and I liked the idea, which is this: some people love SQL. I'm one of those people. I'd much rather work with a SQL file than muscle through some weird abstraction.
With this library you can do that. Just create a scripts directory and specify it in the config (see above), then execute your file without an extension. Pass in whatever parameters you need:
{:ok, result} = sql_file(:my_groovy_query, "a param") |> Moebius.Db.run
I highly recommend this approach if you have some difficult SQL you want to write (like a windowing query or CTE). We use this approach to build our test database - have a look at our tests and see.
Adding, Updating, Deleting (Non-Documents)
Inserting is pretty straightforward:
{:ok, result} = db(:users) |> insert(email: "[email protected]", first: "Test", last: "User") |> Moebius.Db.run
Updating can work over multiple rows, or just one, depending on the filter you use:
{:ok, result} = db(:users) |> filter(id: 1) |> update(email: "[email protected]") |> Moebius.Db.run
The filter can be a single record, or affect multiple records:
{:ok, result} = db(:users) |> filter("id > 100") |> update(email: "[email protected]") |> Moebius.Db.run {:ok, result} = db(:users) |> filter("email LIKE $2", "%test") |> update(email: "[email protected]") |> Moebius.Db.run
Deleting works exactly the same way as
update, but returns the count of deleted items in the result:
{:ok, result} = db(:users) |> filter("email LIKE $2", "%test") |> delete |> Moebius.Db.run #result.deleted = 10, for instance
Bulk Inserts
Moebius supports bulk insert operations transactionally. We've fine-tuned this capability quite a lot (thanks to Jon Atten) and, on our local machines, have achieved ~60,000 writes per second. This, of course, will vary by machine, configuration, and use.
But that's still a pretty good number don't you think?
A bulk insert works by invoking one directly:
data = [#let's say 10,000 records or so] {:ok, result} = db(:people) |> bulk_insert(data) |> Moebius.Db.transact_batch
If everything works, you'll get back a result indicating the number of records inserted.
Table Joins
Table joins can be applied for a single join or piped to create multiple joins. The table names can be either atoms or binary strings. There are a number of options to customize your joins:
:join # set the type of join. LEFT, RIGHT, FULL, etc. defaults to INNER :on # specify the table to join on :foreign_key # specify the tables foreign key column :primary_key # specify the joining tables primary key column :using # used to specify a USING queries list of columns to join on
The simplest example is a basic join:
{:ok, result} = db(:customers) |> join(:orders) |> select |> Moebius.Db.run
For multiple table joins you can specify the table that you want to join on:
{:ok, result} = db(:customers) |> join(:orders, on: :customers) |> join(:items, on: :orders) |> select |> Moebius.Db.run
Transactions
Transactions are facilitated by using a callback that has a
pid on it, which you'll need to pass along to each query you want to be part of the transaction. The last execution will be returned. If there's an error, an
{:error, message} will be returned instead and a
ROLLBACK fired on the transaction. No need to
COMMIT, it happens automatically:
{:ok, result} = transaction fn(pid) -> new_user = db(:users) |> insert(pid, email: "[email protected]") |> Moebius.Db.run(pid) with(:logs) |> insert(pid, user_id: new_user.id, log: "Hi Frodo") |> Moebius.Db.run(pid) new_user end
If you're having any kind of trouble with transactions, I highly recommend you move to a SQL file or a function, which we also support. Abstractions are here to help you, but if we're in your way, by all means shove us (gently) aside.
Aggregates
Aggregates are built with a functional approach in mind. This might seem a bit odd, but when working with any relational database, it's a good idea to think about gathering your data, grouping it, and reducing it. That's what you're doing whenever you run aggregation queries.
So, to that end, we have:
{:ok, sum} = db(:products) |> map("id > 1") |> group(:sku) |> reduce(:sum, :id) |> Moebius.Db.run
This might be a bit verbose, but it's also very very clear to whomever is reading it after you move on. You can work with any aggregate function in PostgreSQL this way (AVG, MIN, MAX, etc).
The interface is designed with routine aggregation in mind - meaning that there are some pretty complex things you can do with PostgreSQL queries. If you like doing that, I fully suggest you flex our SQL File functionality and write it out there - or create yourself a cool function and call it with our Function interface.
Functions
PostgreSQL allows you to do so much, especially with functions. If you want to encapsulate a good time, you can execute it with Moebius:
{:ok, party} = function(:good_time, [me, you]) |> Moebius.Db.run
You get the idea. If your function only returns one thing, you can specify you don't want an array back:
{:ok, no_party} = function(:bad_time, :single [me]) |> Moebius.Db.run
I would love to have your help! I do ask that if you do find a bug, please add a test to your PR that shows the bug and how it was fixed.
Thanks! | https://elixir.libhunt.com/moebius-alternatives | CC-MAIN-2021-43 | refinedweb | 3,141 | 57.37 |
table of contents
NAME¶
SETR - Establishes certain constants so that SRFACE produces a picture whose size changes with respect to the viewer's distance from the object. It can also be used when making a movie of an object evolving in time to keep it positioned properly on the screen, saving computer time in the bargin. Call it with r0 negative to turn off this feature.
SYNOPSIS¶
SUBROUTINE SETR (XMIN,XMAX,YMIN,YMAX,ZMIN,ZMAX,R0)
C-BINDING SYNOPSIS¶
#include <ncarg/ncargC.h>
void c_setr (float xmin, float xmax, float ymin, float ymax,
float zmin, float zmax, float r0)
DESCRIPTION¶
- XMIN,XMAX
- Specifies the range of X array that will be passed to SRFACE.
- YMIN,YMAX
- Specifies the range of Y array that will be passed to SRFACE.
- ZMIN,ZMAX
- Specifies the range of Z array that will be passed to SRFACE. If a movie is being made of an evolving Z array, ZMIN and ZMAX should contain range of the union of all the Z arrays. They need not be exact.
- R0
- Distance between observer and point looked at when the picture is to fill the screen when viewed from the direction which makes the picture biggest. If R0 is not positive, then the relative size feature is turned off, and subsequent pictures will fill the screen.
C-BINDING DESCRIPTION¶
The C-binding argument descriptions are the same as the FORTRAN argument descriptions.
ACCESS¶
To use SETR or c_setr, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
SEE ALSO¶
Online: surface, surface_params, ezsrfc, pwrzs, srface. ncarg_cbind.
Hardcopy: NCAR Graphics Fundamentals, UNIX Version
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement. | https://manpages.debian.org/unstable/libncarg-dev/setr.3NCARG.en.html | CC-MAIN-2022-40 | refinedweb | 284 | 61.77 |
I. I found that using Arduino C++ was a bit hard to deal with that. I liked the idea of using Python in ESP8266 so decided to install MicroPython firmware on ESP8266 E-12.
I am using Python 3.7 but Python 2.7 should also work (according to documentation in MicroPython but I haven't tested it)
For the Hello World application we will not to build any electronic circuit as we're using built-in LED. But I assume your ESP-12E is soldered and connected to an FTDI interface.
Installing steps
Install Python if not already installed
Install pip3 (or pip if you're on Python 2.7)
Install esptool
pip3 install esptool
Download MicroPython firmware
Erase flash in your ESP8266 (Update your PORT accordingly)
esptool.py --port PORT3 erase_flash
Flash new firmware
esptool.py --port PORT3 --baud 115200 write_flash -fm dio 0x00000 esp8266-20180511-v1.9.4.bin
If step above doesn't work try changing baud rate (to 57600 or something else)
Install ampy (Check their website)
pip3 install adafruit-ampy
Upload the blink example using ampy
Name below file as "main.py"
import machine import time led = machine.Pin(2, machine.Pin.OUT) led.off() time.sleep(1) led.on() time.sleep(1) led.off() time.sleep(1) led.on() time.sleep(1)
and upload using ampy:
ampy -d 1 --port /dev/cu.wchusbserial1420 -b 115200 ls
Above code will use the built-in LED on the ESP-12 so we don't need to connect any resistors or LEDs.
- If you can step all above steps without any problem you're lucky but I had some issues. Here is how I solved them:
Troubleshooting
I had below error when I tried ampy to list or upload files
ampy --port /dev/cu.wchusbserial1420 ls
raise PyboardError('could not enter raw repl')
ampy.pyboard.PyboardError: could not enter raw repl
I have done some research and found that the culprit was "pyboard.py" file from ampy.
find the location where this file exists
sudo find / -name pyboard.py
for me it was located here:
/usr/local/lib/python3.7/site-packages/ampy/pyboard.py
open this file with nano, vi or any text editor and locate the function name
enter_raw_repl. Add the line
this.sleep(2) where indicated as
### ADD THIS LINE
def enter_raw_repl(self): # Brief delay before sending RAW MODE char if requests if _rawdelay > 0: time.sleep(_rawdelay) self.serial.write(b'\r\x03\x03') # ctrl-C twice: interrupt any running program # flush input (without relying on serial.flushInput()) n = self.serial.inWaiting() while n > 0: self.serial.read(n) n = self.serial.inWaiting() ### ADD THIS LINE ### ADD THIS LINE time.sleep(2) ### ADD THIS LINE ### ADD THIS LINE self.serial.write(b'\r\x01') # ctrl-A: enter raw REPL data = self.read_until(1, b'raw REPL; CTRL-B to exit\r\n>') if not data.endswith(b'raw REPL; CTRL-B to exit\r\n>'): print(data) raise PyboardError('could not enter raw repl') self.serial.write(b'\x04') # ctrl-D: soft reset data = self.read_until(1, b'soft reboot\r\n') if not data.endswith(b'soft reboot\r\n'): print(data) raise PyboardError('could not enter raw repl') # By splitting this into 2 reads, it allows boot.py to print stuff, # which will show up after the soft reboot and before the raw REPL. # Modification from original pyboard.py below: # Add a small delay and send Ctrl-C twice after soft reboot to ensure # any main program loop in main.py is interrupted. time.sleep(0.5) self.serial.write(b'\x03') time.sleep(0.1) # (slight delay before second interrupt self.serial.write(b'\x03') # End modification above. data = self.read_until(1, b'raw REPL; CTRL-B to exit\r\n') if not data.endswith(b'raw REPL; CTRL-B to exit\r\n'): print(data) raise PyboardError('could not enter raw repl')
After this change I was able to run the command:
ampy --port /dev/cu.wchusbserial1420 ls
Now I can upload the file that I have written in step 9. Let's name this file as "main.py" and upload to ESP8266
ampy --port /dev/cu.wchusbserial1420 put main.py
If you want to execute Python commands while connected to ESP8266 as if you were typing commands in IDLE, you can easily setup a serial connection with ESP8266.
screen /dev/cu.wchusbserial1420 115200
Now that we managed to install our first ever Python application for ESP8266 we're ready to build much more complicated application with the power of Python. | https://cuneyt.aliustaoglu.biz/en/installing-micropython-for-esp8266/ | CC-MAIN-2019-09 | refinedweb | 763 | 59.19 |
Question:
I am stucked at the part where I have to bind a collection to a dynamic usercontrol. Scenario is something like this. I have a dynamic control, having a expander , datagrid, combobox and textbox, where combox and textbox are inside datagrid. There are already two collections with them. One is binded with combobox and another is binded with datagrid. When the item is changes in combox its respective value is set to its respective textbox, and so on. and this pair of value is then set to the collection binded with datagrid. A user can add multiple items.
Now the main problem is that all these things are happening inside a user control which is added dynamically, that is on button click event. A user can add desired numbers of user controls to the form. problem is coming in this situtaion. Say I have added 3 controls. Now in 1st one if i add a code to the collection then it gets reflected in the next two controls too, as they are binded with same collection. So, I want to know is there anyway to regenrate/rename the same collection so that the above condition should not arise.
Solution:1
It's hard to answer your question without seeing the bigger picture, however I have a feeling you are going about this the wrong way. It appears that you are adding instances of your user control directly from code. Instead of doing that, you should create some kind of
ItemsControl in your XAML, and in its
ItemTemplate have your user control. Bind that
ItemsControl to a collection in your view model, and only manipulate that collection.
You should not be referring to visual controls in your view model or code behind. Whenever you find yourself referencing visual elements directly from code, it should raise a warning flag in your mind "Hey! There's a better way than that!"...
Example:
The view model:
public class ViewModel { public ObservableCollection<MyDataObject> MyDataObjects { get; set; } public ViewModel() { MyDataObjects = new ObservableCollection<MyDataObject> { new MyDataObject { Name="Name1", Value="Value1" }, new MyDataObject { Name="Name2", Value="Value2" } }; } } public class MyDataObject { public string Name { get; set; } public string Value { get; set; } }
The window XAML fragment containing the list box and the data template:
<Window.Resources> ... <DataTemplate x: <local:MyUserControl/> </DataTemplate> </Window.Resources> ... <ListBox ItemsSource="{Binding MyDataObjects}" ItemTemplate="{StaticResource MyDataTemplate}" HorizontalContentAlignment="Stretch"/>
The user control:
<UniformGrid Rows="1"> <TextBlock Text="{Binding Name}"/> <TextBlock Text="{Binding Value}" HorizontalAlignment="Right"/> </UniformGrid>
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2019/01/tutorial-how-to-bind-observable.html | CC-MAIN-2019-26 | refinedweb | 430 | 54.22 |
You can subscribe to this list here.
Showing
7
results of 7
Hello.
I had a layout similar to
class Actor(SQLObject):
actorType = IntCol()
class Store(Actor):
name = StringCol(length = 64)
When I insert new rows using Store.new(actorType = 1, name = "Test") it
doesn't actualy insert a row in the actor table and the actorType column is
also found in the store table.
Even though this might seem like a smart optimization to make, it's not
since I'm trying to implement a test slice from a larger model. Also, I
sometimes need to fetch an Actor object to find out what type of actor I
should continue fetching, but since there are no rows in the actor table I
get an error telling that the object was not found.
Further more, my model supports multiple inheritence of this nature:
class Store(Actor, Organisation):
....
It seems the second superclass is ignored completely, is this not support in
SQLObject?
As a final question I'd like to ask how where to put the transaction object
when you are creating new rows. The transaction object can be passed as the
second argument when getting objects, but not when creating new.
Best regards,
Peter Gebauer
At 05:18 PM 21/06/2004 +0200, Ivo van der Wijk wrote:
>Hi,
>
>I regularly run into weird problems because I'm trying to pass the wrong
>type of object to, for example, a foreign key.
>
>How hard would it be to make SQLObject warn in such cases?
[snip]
Hi Ivo,
I just read about PyProtocols
<> in the FormEncode
<> documentation. It is supposed to help with this
issue, amongst others.
Regards,
Clifford Ilkay
Dinamis Corporation
3266 Yonge Street, Suite 1419
Toronto, Ontario
Canada M4N 3P6
Tel: 416-410-3326
Hi everybody!
I've got a problem with creating new database entries when i'm using
transactions.
Here's my code:
import SQLObject
class DeputatsDB:
def __init__(self):
"""Initiate a database connection"""
self._conn = SQLObject.PostgresConnection('user=akki
dbname=deputat')
def newTransaction(self):
"""Open a new transaction"""
return self._conn.transaction()
def getDozenten(self, trans):
"""Return an item from table Dozent"""
return Dozenten(trans)
class Table:
def __init__(self, trans):
"""Initiate a transaction"""
self._trans = trans
class Dozent(SQLObject.SQLObject):
"""Class dozent properties of dozent"""
dozent_uid = SQLObject.StringCol()
vorname = SQLObject.StringCol()
name = SQLObject.StringCol()
kuerzel = SQLObject.StringCol(length=5, default = None)
anrede = SQLObject.StringCol()
class Dozenten(Table):
"""Class for manipulation of dozenten"""
def addDozent(self, uid, vorname, name, kuerzel, anrede):
"""Add a new dozent to database"""
return Dozent.new(connection=self._trans,
dozent_uid=uid,
name=name, vorname=vorname,
kuerzel=kuerzel, anrede=anrede)
After creation of instances of DeputatsDB, DeputatsDB.Transaction and
Table i'm trying to add a new Dozent to the Database:
from deputatsabrechnung import *
debDB = DeputatsDB()
trans = debDB.newTransaction()
table = Table(trans)
doz=Dozenten(table)
doz.addDozent('324534645zgfb', 'James', 'Bond', 'bon', 'Mister')
Now i'm getting this error message:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File
"/zope/mniportal/Products/MNIVeranstaltungen/deputatsabrechnung/deputatsabrechnung.py",
line 30, in addDozent
kuerzel=kuerzel, anrede=anrede)
File "/usr/local/lib/python2.3/site-packages/SQLObject/SQLObject.py",
line 907, in new
inst._SO_finishCreate(id, connection=connection)
File "/usr/local/lib/python2.3/site-packages/SQLObject/SQLObject.py",
line 928, in _SO_finishCreate
id = connection.queryInsertID(self._table, self._idName,
AttributeError: Table instance has no attribute 'queryInsertID'
Well, after searching the mail list i haven't found any hint what courses
this error and i haven't any idea what my fault is.
Can anybody help me?
Thank you very much!
Akki Nitsch
#############################################################
The freedom of meaning one thing and saying
something different is not permitted.
E.W. Dijkstra
On Tue, Jun 22, 2004 at 05:26:43AM -0600, William Volkman wrote:
> I've got insertion and update of records working pretty well, however
> record deletion seems broken (as well as conspicuously missing from
> the docs on the web site ;-)
>
> >>> foo = XImport.get(5)
> >>> foo.delete()
foo.destroySelf()
Oleg.
--
Oleg Broytmann phd@...
Programmers don't die, they just GOSUB without RETURN.
On Tue, Jun 22, 2004 at 05:26:43AM -0600, William Volkman wrote:
> I've got insertion and update of records working pretty well, however
> record deletion seems broken (as well as conspicuously missing from
> the docs on the web site ;-)
>
Try foo.destroySelf()
--
Philippe
I've got insertion and update of records working pretty well, however
record deletion seems broken (as well as conspicuously missing from
the docs on the web site ;-)
>>> foo = XImport.get(5)
>>> foo.delete()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: delete() takes exactly 2 arguments (1 given)
>>> foo.delete(5)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.2/site-packages/sqlobject/main.py", line 1028,
in delete
TypeError: __init__() takes exactly 1 argument (2 given)
>>> XImport.delete(5)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.2/site-packages/sqlobject/main.py", line 1028,
in delete
TypeError: __init__() takes exactly 1 argument (2 given)
My copy of SQLObject is about a week old, trying to update it results
in:
svn update
svn: Berkeley DB error while opening 'nodes' table for filesystem
/var/lib/subversion/repository/db:
Cannot allocate memory
Which I googled for and found that a svnadmin recover is necessary
with possibly an increase in the number of locks in DB_CONFIG
On Tue, 2004. | http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?viewmonth=200406&viewday=22 | CC-MAIN-2014-23 | refinedweb | 903 | 55.64 |
Ants first evolved around 120 million years ago, take article details how ACO can be used to dynamically route traffic efficiently. An efficient routing algorithm will minimise the number of nodes that a call will need to connect to in order to be completed thus; minimising network load and increasing reliability. An implementation of ANTNet based on Marco Dorigo & Thomas st�tzle has been designed and through this a number of visually aided test were produced to compare the genetic algorithm to a non-generic algorithm. The report will final conclude with a summary of how the algorithm perform and how it could be further optimised.
Electronic communication networks can be categorised as either circuit-switched or packet-switched. Circuit-switched networks rely on a dedicated connection from source to destination which is made once at start-up and remains constant until the tear-down of the connection. An example of a circuit switched network would be British Telecoms telephone network. Packet-switched networks work quite differently however and all data to be transmitted is divided into segments and sent as data-packet. Data-packets can arrive out of order in a packets-switched network with a variety of paths taken through different nodes in order to get to their destination. The internet and office local area networks are both good examples of packet-switched networks.
A number of techniques can be employed to optimise the flow of traffic around a network. Such techniques include flow and congestion control where nodes action packet acknowledgements from destination nodes to either ramp-up or decrease packet transmission speed. The area of interest in this report concentrates on the idea of network routing and routing tables. These tables hold information used by a routing algorithm to make a local forwarding decision for the packet on the next node it will visit in order to reach its final destination.
One of the issues with network routing (especially in very large networks such as the internet) is adaptability. Not only can traffic be unpredictably high but the structure of a network can change as old nodes are removed and new nodes added. This perhaps makes it almost impossible to find a combination of constant parameters to route a network optimally.
Packet-switched networks dynamically guide packets to their destination via routing tables stored in a link state and are selected via a link state algorithm.
The Link state algorithm works by giving every node in the network a connectivity graph of the network. This graph depicts which nodes are directly connected. Values are stored for connected nodes in map which represent the shortest path to other nodes. One such link state algorithm used in network routing is Dijkstras algorithm. When a path between two nodes is found, its weight is updated in the table. Should a shorter path be found the new optimal weight will be updated to the table replacing the old value.
The algorithm allows traffic to be routed around the network whilst connecting to the least number nodes as possible. The system works but doesn't take into account influx of traffic and load balancing.
By replacing Dijkstras algorithm with a generic algorithm, paths taken by calls could be scored by how short of a path they took, that way if they were queued on a busy network they would perform badly. Consequently other paths would score relative and be chosen. This would work in real time and allow the routing system to adapt as packets are transmitted ANTNet uses virtual pheromone tables much like when an ant follows a path dropping pheromones to re-enforce it. The quicker the ants move down a path the more throughput of ants thus; a greater concentration of pheromones. In the same way pheromone tables in ANTNet allow fast routes to score a higher chance of being selected whilst the less optimal route scores a low chance of being selected.
The idea behind ANTNet is when a call is placed an Ant will traverse across the network using a link-state deterministic algorithm. Every node holds a pheromone table for all other nodes of the network. Each pheromone table holds a list of table entries containing all the connected nodes of the current node.
To begin with, each possible path has an even likelihood of being chosen. An ant is placed on a network of 4 nodes with the source node of 1 and destination node 2. A chance mechanism is invoked and a path is chosen.
In this case node 2 has been selected [figure 3.2] and the ant arrives at its source destination.
The ant then moves and updates the pheromone tables for the visited nodes with higher (and more mathematically biased) value. This would be calculated for figure 3.2 and table 3.2 in the following way:
The system isn't 100% accurate as the total will never add up to exactly 100% but it will be close enough to allow accuracy within the level required.
The following diagram depicts how the path and pheromone table after the update has taken place.
For the purpose of this program a bi-directional, un-weighted topological network consisting of 30 nodes has been created and closely resembles the British Synchronous Digital Hierarchy (SDH) network. After a basic number of parameters have been set the simulation is run. Firstly all pheromone tables are defaulted to equal weights and then calls are generated and placed on the network. Initially the routes chosen are random. If a call cannot connect to a node it is forced to wait and the wait counter is enumerated to reflect the quantum (in timer ticks). Once a node has reached its destination node it will work its way backwards altering the local nodes pheromone table as it traverses. The shorter the route taken the greater increase in probability given to its table entry in the pheromone table. This happens repeatedly until the weight of the fastest node is shifted such that slower routes have a very low probability of being chosen.
NOTE: in order to compile and run the program you will need to download dotnetcharting from dotnetcharting.com
The network contains 30 Nodes and each Node contains an array of PheromoneTable objects, one for every other Node in the network (29). Every PheromoneTable contains an array of TableEntries, one for each Node connected to the current Node.
The following diagram represents the relationships between classes in the program.
Update the pheromone table
// returns the next Node of the path public int ProbablePath(ArrayList VisitedNodes) { // create a random generator Random r = new Random(Global.Seed); double val=0; double count = 0; double Lastcount = -1; ArrayList tempTEVector=new ArrayList(); // loop through all the connected nodes for(int i=0;i<tableEntry.Length;i++) { // has the node been visitied? bool v=false; //loop through all the visited nodes for(int j=0;j<VisitedNodes.Count;j++) { // if the ID's match then this node has alrady been visited if(tableEntry[i].NodeID==(int)VisitedNodes[j]) v=true; } // If v is false then the node hasnt been visited.. so Add if(!v) { // get the node Node n = Global.Nodes[tableEntry[i].NodeID]; // if the node is accepting connections if(!n.FullCapacity) { // add the node as a possible candidate tempTEVector.Add(tableEntry[i]); } } } // if all connections have been visited if(tempTEVector.Count==0) { // loop through all the connected nodes for(int i=0;i<tableEntry.Length;i++) tempTEVector.Add(tableEntry[i]); } // get the ceiling amount for probabilities for(int i=0;i<tempTEVector.Count;i++) val+= ((TableEntry)tempTEVector[i]).Probablilty; //create randon value val = r.NextDouble()*val; // loop through the temp Table Entryies for(int i=0;i<tempTEVector.Count;i++) { // increment the count on each loop count += ((TableEntry)tempTEVector[i]).Probablilty; // if the random value falls into delegated range then select that path as the next node if(val>Lastcount && val < count) return ((TableEntry)tempTEVector[i]).NodeID; // get the value of the last count Lastcount=count; } // method should never return here return -1; }Return the next node via the pheromone table
// updates the probabilities of the pheromone table by multiplying the selected // nodes probability by a radio of newVal public void UpdateProbabilities(double newVal, int EntryTableNodeID) { TableEntry t; double total=0; // loop through all the table entries // get the total enumeration of probabilities and add the new value. // Since this total will be more than 100 a ratio multiplication is // applied. Although these values will not equate to exactly 100% floating // point calculations will be accurate enough at least 99.99999% which is satisfactory for(int j=0;j<tableEntry.Length;j++) { t = tableEntry[j]; // enumerate the total probablility total += t.Probablilty; // if the table entry matches the id of the chosen node path if(EntryTableNodeID==t.NodeID) { // add the new value to the total total += newVal; t = tableEntry[j]; // add the new value the current value of the selected path t.Probablilty += newVal; } } // calculate the ratio for the multiplcation double ratio = 100/total; // loop through each table entry and multiple the current probability // by the new ratio for(int j=0;j<tableEntry.Length;j++) { tableEntry[j].Probablilty *= ratio; } // this will enumerate all the values to 99.99999% } // Constructor takes a node to represent and a list of all connected nodes off the // calling node public PheromoneTable(Node n, int[] conns) { this.NodeID = n.ID; // create a tableEntry array the same length as the number of connections this.tableEntry = new TableEntry[conns.Length]; // create a new tableEntry for each connection for(int i=0;i<conns.Length;i++) tableEntry[i] = new TableEntry(conns[i]); // set default equal values for(int i=0;i<conns.Length;i++) tableEntry[i].Probablilty = (100 / (double)conns.Length); } }
The following tests will illustrated how the ANTNET algorithm effects the routing of traffic. These tests will show effectiveness of the algorithm against the system running without ANTNET. Since it is possible to switch nodes on and off, a number of test comparisons will be done to show how ANTNET can improve the routing of a network when paths are no longer valid and new routes have to be chosen.
These tests have been run with the following parameters
5.1 ANTNet VS non- ANTNet
The first test contains two simulations.
From this simulation it is clear that even by the first 500 calls completed, ANTNet has reduced the average number of hops by approximately 1.5 nodes. This is made more apparent by the end of the simulation where the best paths are made more biased as a choice and are re-enforced as the optimal route, resulting in ANTnet improving network performance by almost 3.5 hops
Figure 5.1 � Non-adaptive algorithm ( orange) VS ANTNet algorithm ( blue)
To view the algorithm from a different perspective the following graph depicts the system running with the ANTNet algorithm off and then activated on the 2,000 th call. This can be identified by a label and follows with a decline of average hops by almost 2.
Figure 5.2 � ANTNet activated after the 2000th call
5.2 Loop elimination
Before an Ant returns back to its source node, an optimisation technique of loop elimination can be invoked. The problem with loops is that they can receive several times the amount of pheromone than they should, leading to the problem of self re-enforcing loops.
Figure 5.3 � Loop removal ( Blue) VS non-loop removal ( Orange )
Figure 5.3 shows two simulations:
From this test, loop elimination has reduced the average number of hops by 1 node with a much more stable adaptation. This would mean that when alternative paths must be chosen, the loop elimination algorithm responds much faster that the regular implementation.
Note: Both lines show the actual number of nodes traversed and not the number after loop removal.
5.3 Adaptivity
It is important to simulate how the network adapts when nodes are removed from the network. Static routing tables may hold the shortest path but they don't necessarily take into account network traffic and nodes that are offline. Three simulations have been run on the program to display how the system adapts compared to a non adaptive algorithm.
Figure 5.4 � Adaptive vs non-adaptive algorithm
Simulation 1 ( orange)
This is a normal run of the simulator to create optimised Pheromone tables for the next two simulations.
Simulation 2 ( blue)
Adaptive algorithm switched OFF.
Nodes 14,15 and 17 are switched off as these are the main Northern access hubs into London so traffic needs to be diverted to the west of England . Since the network is non adaptive, the pheromone tables are biased towards nodes that are have been taken offline and subsequently being continuously redirected and taking longer journeys every time. This increase is displayed in figure 6.4 by the blue line.
Simulation 3 ( green)
Adaptive algorithm switched ON.
Nodes 14,15 and 17 are still switched off but since the network is now set to adaptive, the pheromone tables are readjusted and the system learns alternative routes. This can be seen in figure 6.4 by the green line.
If anyone has any questions, bugs or suggestions then please make a comment.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/recipes/Ant_Colony_Optimisation.aspx | crawl-002 | refinedweb | 2,210 | 54.32 |
24 September 2010 07:43 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
“On a year-to-date basis, output of the chemicals cluster was 14.9% higher than the same period in the previous year,” the EDB said in a statement.
Meanwhile, the country’s biomedical manufacturing cluster’s output declined 29% year on year in August, mainly due to a 30.8% drop in output in the pharmaceuticals segment, it said.
The drop was “as a result of a different mix of active pharmaceutical ingredients produced,” the statement added.
Output from the electronics cluster - an important downstream industry of petrochemicals - rose 32.8% year on year in August, according to EDB.
The electronics sector’s computer peripherals segment fell 20.8% in August in comparison to the year-ago period when production levels were high due to re-stocking activities, ED | http://www.icis.com/Articles/2010/09/24/9395872/singapore-chemicals-output-grows-11.9-yoy-in-august.html | CC-MAIN-2014-52 | refinedweb | 142 | 50.02 |
Threads are a useful, but difficult programming concept. While the concept of multi-threaded programming is easy to grasp, writing correct multi-threaded programs is hard. It has been argued that threads are evil and should be banned [Ousterhout]. Rather than taking such an extreme viewpoint, I believe that threads are useful for some tasks, such as parallel asynchronous I/O, where the alternative is worse.
Consequently, Python supports some very simple optional operations for threading. Suppport for these operations must be requested when Python is built for a particular installation; it is available for most platforms except for Macintosh. In particular, threading is supported for Windows NT and 95, for Unix systems with a POSIX thread implementation (this includes Linux), and for native threads on Sun Solaris and SGI IRIX. Thread semantics and performance differ somewhat between platforms, but for correctly written programs these differences won't matter.
I believe that Python is an excellent language to learn how to write multi-threaded programs. In this paper I give a quick introduction to threads in general, show how to use threads in Python, and discuss various synchronization techniques and how they can be implemented in Python using the basic synchronization object available. I also show how the most useful two-third of the Java thread API can be implemented as a small collection of Python classes.
You are probably familiar with multi-processing (also known as multi-programming or multi-tasking): a single computer running several programs (seemingly) simultaneously. This is done by clever and frequent switching between the execution state of each program. For example, the print spooler on your PC may be printing pages from one document while you are editing another document in your word processor.
Multi-threading is a finer-grained version of the same idea: multiple things going on simultaneously within the same program. For example, most web browsers allow you to continue to browse while a file download is in progress.
Each "thing" going on independently in this case is called a thread, short for thread of control. You can think of a thread as a mini-program executing (mostly) independently inside the whole program.
When talking about multi-threading, we generally refer to the whole program as a process. This is the term used in operating systems for a running program, and has more precise definition than "program" (which is often casually used to refer to what is actually a group of closely cooperating processes).
The difference between multi-threading and multi-processing is the amount of sharing and the granularity of the communication that goes on between the participants. Separate processes on the same computer each have their own, private portion of the computer's primary memory. They can communicate only via the file system, and perhaps via pipes and events (and on some systems via specially allocated segments of shared memory). Multiple threads, on the other hand, share the entire memory address space of the process to which they belong.
The sharing of memory between threads allows faster and more fine-grained communication between threads: threads can communicate by exchanging a pointer to a data structure in memory, while separate processes need to serialize data to a file or buffer in order to pass it to each other.
Another reason for using threads is that the creation of a new thread is much faster than the creation of a new process. This means that when you have a relatively small amount of work that you would like to handle separately, it is more attractive to use a separate thread instead of a separate process. The difference in cost between thread and process creation depends on which operating system you are using. On Unix, creating a new process is relatively quick, while on Windows NT or 95, process creation is truly expensive -- thread creation on the other hand is cheap on each system. Given the trend towards Windows, we can expect a growing popularity of threads.
Unfortunately, like so many things in life, there's a dark side to threads. It may be because most programmers are initially trained to write sequential (i.e., single-threaded) programs only, or perhaps are brains aren't capable of reasoning about multiple interacting tasks going on in parallel. In any case, experience shows that writing multi-threaded programs is harder than writing single-threaded programs. While Python removes some of the worst nightmares of multi-threaded programming, it can't avoid all problems, some of which are inherent to the usefulness of threads.
The fundamental problem with multi-threaded programs is that we don't have control over the interleaving of the execution of threads. The operating system generally guarantees that each thread will make progress. It does so by periodically switching its attention from one thread to the next. Unfortunately the operating system is oblivious to the task that a thread is trying to accomplish, and may switch to another thread at an arbitrary point in the middle of that task. On high-end computers the problem can become worse: a system may have multiple CPUs, and execute two or more threads truly simultaneously on different CPUs. Clearly this has the effect of interleaving the computations of the threads at the instruction level.
Because of subtle timing differences, the interleaving of threads can be different each time a program is run, even when it is given exactly the same input each time, and certainly wih (even ever so slightly) different input. This makes multi-threaded programs especially hard to debug: if it works okay in one test run, that doesn't mean that it will run okay in the next.
For example, consider the common situation of two threads that both need to increment a counter. The counter could be used to count a number of files downloaded, and some other thread could be waiting for the counter to reach a particular value (e.g. the total number of files to download). Consider the following pseudo-code:
while "more files to download": "download the next file" nfiles = nfiles + 1
Focusing our attention on the last statement (nfiles = nfiles + 1) for a moment, this seems harmless enough. Translated to machine code (or to Python byte code), this statement looks roughly as follows:
Now imagine two threads executing this same code in parallel. The interleaving between threads can happen between any two items. For example, the following interleaving could happen:
Clearly, if the initial value of nfiles is 10, both threads load 10 into their register, both increment their register to 11, and both store 11 into nfiles. However, if they would have been executed without interleaving, nfiles would have been incremented twice, to 12, as expected.
A situation like this is called a race condition. Race conditions are almost always hard to find bugs, and are characterized by unexpected outcomes under certain interleaving conditions.
On the other hand, the individual instructions ("load", "add", "store") are not further divisible, and have only two possible interleavings: one instruction is executed entirely before the other, or the other way around. Such instructions are called atomic. Strictly speaking, atomicity of an operation is always with respect to another operation, but often, when we think of a particular level of abstraction, we can simply speak of atomic operations and non-atomic operations.
The general concept of keeping operations "out of each other's hair" is called mutual exclusion. In order to implement mutual exclusion, we need a way to force arbitrary sequences of operations to be atomic. The usual approach is to use a mechanism called a critical section. (Some programming languages call it a monitor.) It is a region of the program which can be entered by only one thread at a time. Sometimes (e.g. in Java, with the "synchronized" keyword), critical sections are a language feature. Other times (e.g. when using POSIX threads in C or C++) they must be implemented using a more primitive library feature called a lock.
Python, surprisingly enough, does not have critical sections as a language feature. This is because historically, threads are an optional feature of Python. They are still not supported on all platforms, e.g. Python on the Macintosh has no threads.
Fortunately, implementing critical sections using locks is easy enough. I'll first explain how locks work. (A basic lock is also known as a binary semaphore, due to its similarity with a signpost as used in railroads guarding a particular stretch of railroad tracks.)
A lock is an object with two states: locked and unlocked. Initially, it is in the unlocked state. It has two methods: acquire() and release(). The acquire() method changes the lock from the unlocked into the locked state. This is like setting a semaphore to unsafe and entering the stretch of railroad tracks guarded by the semaphore. The release() method changes the lock from unlocked to locked; it is like leaving the stretch and resetting the semaphore to safe. When acquire() is called on a lock that is already locked, the operation blocks until another thread invokes the release() method. This is like a train waiting to enter the stretch until the semaphore signals safe. It is illegal to call release() when the lock is unlocked. This would be like a train leaving the guarded stretch without having set the semaphore to unsafe on entering -- a crash may occur!
Locks are generally implemented by the operating system, and the acquire() and release() methods are executed atomically. The operating system guarantees that at most one thread at a time can change the lock from unlocked to locked, and that all other threads that attempt to acquire() the same lock are blocked. When the lock is released, exactly one of the blocked threads waiting for the lock (if any) is unblocked.
A lock can be used to insure that "increment counter" operations in different threads are atomic. For example:
# Somewhere in the main program: lock = thread.allocate_lock() # Create a lock object counter = 0 . . . # In each thread: lock.acquire() # Begin critical section counter = counter + 1 lock.release() # End critical section
This program has one critical section, the code between the acquire() and release() calls. It is entered by different threads at different times, but because the acquire() method only allows one thread to continue at a time, there will never be more than one thread executing in the critical section. Thus, effectively the counter increment is executed atomically. Note that the program structure guarantees that release() is always called after acquire(), so there is no danger of a release() with the lock in the unlocked state.
Let's look at a useful example program that uses threads. We'll study a webcrawler -- a program that fetches web pages, analyzes them looking for links, and then uses those links to fetch more web pages -- ad infinitum (or until we stop it :-). This can be used to create your own web indexer, or to check a website for bad links, or for many other household uses. A single-threaded crawler named webchecker is part of the Python distribution (in the Tools subdirectory). It is often very slow, because it often has to wait a long time before a web page is transferred. Especially when loading web pages from many hosts, some of which may be down or very slow, this means that it can take a long time to check a reasonable number of pages. A multi-threaded crawler can continue to work on other pages in other threads while one thread is blocked waiting for a slow host.
We'll start with a really simple version: a program that simply downloads a number of pages given by URL on the command line. It forks a separate thread for each URL. For every loaded page, it prints a byte count.
import sys, thread, time, urllib, httplib, re def main(): for url in sys.argv[1:]: thread.start_new(loadurl, (url,)) time.sleep(1000000) def loadurl(url): f = urllib.urlopen(url) text = f.read() f.close() print len(text), url main()
The main thread (this is the default thread that starts the program) executes the main() function. This function uses the thread.start_new() function to create a new thread for each argument in sys.argv[1:]. Each thread runs the loadurl() function with a different URL as argument. The main function ends by sleeping a million seconds. (This is just to simplify the code of this first example, since the code needed to wait for completion of all threads is slightly complicated.)
Remember that all threads execute in parallel -- the thread.start_new() call does not wait until the loadurl() function is done, but rather returns immediately after the new thread has been scheduled for execution.
You must run this script with a number of URLs as arguments. For example:
python crawl1.py
This might print
10313 2739
Prehaps it would print the two lines the other way around. After that, it will just hang there -- until a million seconds have passed. You may want to hit Control-C at this point. | http://www.python.org/doc/essays/threads.html | crawl-002 | refinedweb | 2,183 | 61.97 |
Java 8 idioms
Java knows your type
Learn how to use type inference in lambda expressions, and get tips for improving parameter naming
Content series:
This content is part # of # in the series: Java 8 idioms
This content is part of the series:Java 8 idioms
Stay tuned for additional content in this series.
Java™ 8 is the first version of Java to support type inference, and it does so only for lambda expressions. Using type inference in lambdas is powerful, and it will set you up for future versions of Java, where type inference will be added for variables and possibly more. The trick is to name your parameters well, and trust the Java compiler to infer the rest.
Most of the time, the compiler is more than capable of inferring type. And when it can't, it will complain.
Learn how type inference works in lambda expressions, and see at least one example where it fails. Even then, there is a workaround.
Explicit types and redundancy
Suppose you ask someone, "What's your name?" and they say "My name is John." This happens often enough, but it would be more efficient to simply say, "John." All you need is a name, so the rest of that sentence is noise.
Unfortunately, we do this sort of thing all the time in code. Here's how a Java developer might use
forEach to iterate and print double of each value in a range:
IntStream.rangeClosed(1, 5) .forEach((int number) -> System.out.println(number * 2));
The
rangeClosed method produces a stream of
int values from 1 to 5. The lambda expression, in its full glory, receives an
int parameter named
number and uses the
println method of
PrintStream to print out double of that value. Syntactically there's nothing wrong with the lambda expression, but the type detail is redundant.
Type inference in Java 8
When you extract a value from a range of numbers, the compiler knows that value's type is
int. There is no need to state the value explicitly in your code, although that has been the convention until now.
In Java 8, we can drop the type from a lambda expression, as shown here:
IntStream.rangeClosed(1, 5) .forEach((number) -> System.out.println(number * 2));
Being statically typed, Java needs to know the types of all objects and variables at compile time. Omitting the type in the parameter list of a lambda expression does not bring Java closer to being a dynamically typed language. Adding a healthy dose of type inference does, however, bring Java closer to other statically typed languages, like Scala or Haskell.
Trust the compiler
If you omit the type in a parameter to a lambda expression, Java will require contextual details to infer that type.
Going back to the previous example, when we call
forEach on
IntStream, the compiler looks up that method to determine the parameter(s) it takes. The
forEach method of
IntStream expects the functional interface
IntConsumer, whose abstract method
accept takes a parameter of type
int and returns
void.
If you specify the type in the parameter list, the compiler will confirm that the type is as expected.
If you omit the type, then the compiler will infer the expected type—
int in this case.
Whether you provide the type or the compiler infers it, Java knows the type of its lambda expression parameters at compile time. You can test this out by making an error within a lambda expression, while omitting the type for the parameter:
IntStream.rangeClosed(1, 5) .forEach((number) -> System.out.println(number.length() * 2));
When you compile this code, the Java compiler will return the following error:
Sample.java:7: error: int cannot be dereferenced .forEach((number) -> System.out.println(number.length() * 2)); ^ 1 error
The compiler knows the type of the parameter called
number. It complained because it isn't possible to de-reference a variable of type
int using the dot operator. You can do that for objects, but not for
int variables.
Benefits of type inference
There are two major benefits to omitting type in lambda expressions:
- Less typing. There is no need to key in the type information given the compiler can easily determine that for itself.
- Less code noise—
(number)is much simpler than
(int number).
Furthermore, as a rule, if we have only one parameter, omitting type means we can also leave out the
(), as shown:
IntStream.rangeClosed(1, 5) .forEach(number -> System.out.println(number * 2));
Note that you will need to add the parenthesis for lambda expressions taking more than one parameter.
Type inference and readability
Type inference in lambda expressions is a departure from the normal practice in Java, where we specify the type of every variable and parameter. While some developers argue that Java's convention of specifying type makes it more readable and easier to understand, I believe that preference may reflect habit more than necessity.
Take an example of a function pipeline with a series of transformations:
List<String> result = cars.stream() .map((Car c) -> c.getRegistration()) .map((String s) -> DMVRecords.getOwner(s)) .map((Person o) -> o.getName()) .map((String s) -> s.toUpperCase()) .collect(toList());
Here we start with a collection of
Car instances and associated registration information. We obtain the owner of each car and the owner's name, which we convert to uppercase. Finally, we put the results into a list.
Every lambda expression in this code has a type specified for its parameter, but we've used single-letter variable names for the parameters. This is very common in Java. It is also unfortunate, because it leaves out domain-specific context.
We can do better than this. Let's see what happens when we rewrite the code with stronger parameter names:
List<String> result = cars.stream() .map((Car car) -> car.getRegistration()) .map((String registration) -> DMVRecords.getOwner(registration)) .map((Person owner) -> owner.getName()) .map((String name) -> name.toUpperCase()) .collect(toList());
These parameter names carry domain-specific information. Rather than using
s to represent a
String, we've specified domain-specific details like
registration and
name. Likewise, instead of
p or
o, we use
owner to show that
Person is not just a person but is the owner of the car.
Each lambda expression in this example is a notch better than the one it replaces. When we read the lambda—for example,
(Person owner) -> owner.getName()—we know that we're getting the name of the owner and not just some arbitrary person.
Naming parameters
Some languages like Scala and TypeScript place more importance on parameter names than their types. In Scala, we define parameters before type, for instance by writing:
def getOwner(registration: String)
instead of:
def getOwner(String registration)
Both type and parameter names are useful, but in Scala parameter names are more important. We can also take this idea to heart when writing lambda expressions in Java. Note what happens when we drop the type details and parenthesis from our car registration example in Java:
List<String> result = cars.stream() .map(car -> car.getRegistration()) .map(registration -> DMVRecords.getOwner(registration)) .map(owner -> owner.getName()) .map(name -> name.toUpperCase()) .collect(toList());
Because we added descriptive parameter names, we did not lose much context, and the explicit type, now redundant, has quietly disappeared. The result is cleaner, quieter code.
Limits of type inference
While using type inference has benefits for efficiency and readability, it is not a technique for all occasions. In some cases, it is simply not possible to use type inference. Fortunately, you can count on the Java compiler to let you know when that happens.
We'll first look at an example where the compiler is tested but succeeds, then one where it fails. What's most important is that in both cases, you can count on the compiler to work as it is supposed to.
Expanding type inference
For our first example, suppose we want to create a
Comparator to compare
Car instances. The first thing we need is a
Car class:
class Car { public String getRegistration() { return null; } }
Next, we create a
Comparator that compares
Car instances based on their registration information:
public static Comparator<Car> createComparator() { return comparing((Car car) -> car.getRegistration()); }
The lambda expression used as argument to the
comparing method carries type information in its parameter list. We know the Java compiler is pretty shrewd about type inference, so let's see what happens if we omit the type of the parameter, like so:
public static Comparator<Car> createComparator() { return comparing(car -> car.getRegistration()); }
The
comparing method takes one argument. It expects
Function<? super T, ? extends U> and returns
Comparator<T>. Since
comparing is a static method on
Comparator<T>, the compiler so far has no clue to what
T or
U might be.
To resolve this, it expands its inference a little further, beyond the argument passed to the
comparing method. It looks for what we're doing with the result of the call to
comparing. From this, the compiler determines that we merely return the result. Next, it sees that the
Comparator<T, which is returned by
comparing, is further returned as
Comparator<Car> by
createComparator.
Ta daaa! Now the compiler is catching on to us: it infers that
T should be bound to
Car. From this, it knows that the type of the parameter
car in the lambda expression should be
Car.
The compiler had to do some extra work to infer the type in this case, but it succeeded. Next let's see what happens when we level up the challenge, and reach the limits of what the compiler can do.
Limits of inference
To start, we'll add a new call following the previous one to
comparing. In this case, we also reintroduce the explicit type for the lambda expression's parameter:
public static Comparator<Car> createComparator() { return comparing((Car car) -> car.getRegistration()).reversed(); }
This code has no problem compiling with the explicit type, but now let's leave out the type information and see what happens:
public static Comparator<Car> createComparator() { return comparing(car -> car.getRegistration()).reversed(); }
As you can see below, it doesn't go well. The Java compiler complains with errors:
Sample.java:21: error: cannot find symbol return comparing(car -> car.getRegistration()).reversed(); ^ symbol: method getRegistration() location: variable car of type Object Sample.java:21: error: incompatible types: Comparator<Object> cannot be converted to Comparator<Car> return comparing(car -> car.getRegistration()).reversed(); ^ 2 errors
Just like the previous scenario, before we included
.reversed(), the compiler asks what we're doing with the result of the call to
comparing(car -> car.getRegistration()). In the previous case, we returned the result as
Comparable<Car>, and so the compiler was able to infer that
T's type would be
Car.
In this modified version, however, we've passed the result of
comparable as a target to call
reversed(). The
comparable returns
Comparable<T>, and
reversed() doesn't reveal anything more about what
T might be. From this, the compiler infers that
T's type must be
Object. Sadly, that's not sufficient for this code, because
Object lacks the
getRegistration() method that we're calling within our lambda expression.
Type inference fails at this point. In this case, the compiler actually needed information. Type inference looks at the arguments and the return or assignment elements to determine type, but if the context offers insufficient details, the compiler reaches its limits.
Method references to the rescue?
Before we give up on this particular situation, let's try one more thing: instead of a lambda expression, let's try using a method reference:
public static Comparator<Car> createComparator() { return comparing(Car::getRegistration).reversed(); }
The compiler is quite happy with this solution. It uses the
Car:: in the method reference to infer the type.
Conclusion
Java 8 introduced limited type inference for parameters to lambda expressions, and type inference will be expanded to local variables in a future version of Java. Learning to omit type details and trust the compiler now will help set you up for what's coming ahead.
Relying on type inference and well-named parameters will help you write code that is concise, more expressive, and less noisy. Use type inference whenever you believe the compiler will be able to infer type on its own. Provide type details only in situations where you are certain the compiler truly needs your help.
Downloadable resources
Related topics
- Java programming with lambda expressions
- Java 8 language changes
- Functional Programming in Java: The Pragmatic Bookshelf, 2014 | https://www.ibm.com/developerworks/library/j-java8idioms8/index.html | CC-MAIN-2018-26 | refinedweb | 2,089 | 55.24 |
C++ is one of the oldest programming languages. Bjarne Stroustrup invented it in 1979 in New Jersey at Bell laboratories. Earlier, C++ was known as C with classes, because they designed it as an extension of the C language. In 1982 the creators renamed it C++, and they added some new features like operator overloading, virtual functions, comments, and so on. In 1985 C++ was released for commercial implementation, and in the year 1989, its second edition was released.
C++ is one of the most popular programming languages; it is an object-oriented, pre-compiled, and intermediate-level language. C++ has a wide variety of applications, and you use it for making games, developing software applications, operating systems, and whatnot. This tutorial on C++ Basics will help you understand all the basic concepts of C++.
The First Program in C++
The first program for a beginner is the Hello World program; it is like a tradition in computer science to start with the Hello World program. In C++, you need to follow some set of rules if you want to write a program. This set of rules are called syntax, so you will understand this syntax along with the Hello World program.
In line1, #include<iostream> is a header file responsible for adding features in the program. There are predefined functions in these header files that provide you with the features you need while writing a program. <iostream> header file contains definitions of cin, cout, etc, which helps you take inputs from the user and display the output. #include is a preprocessor, which is used while adding the header file in the program.
In line2, using namespace std is a standard namespace which means you use the object and variable names from the standard library.
In line3, int main(), known as the main function, is an essential part of a C++ program. The execution of the program always begins with the main function.
In line7, cout is an object used to print the output in the program. For example, in this line, you will print Hello World!.
In line8, return 0 means nothing will return in this program.
Now, you will understand data types and variables in this C++ basics tutorial.
Data Types and Variables
Data types: They are used along with the variables; they instruct the variables on what kind of data they can store.
Data types in C++ are of three types:
- Primitive data types
- Derived data types
- User-defined data types
Primitive data types: These data types are built-in and are used to declare variables. For example, boolean, integer, character, float, etc.
Derived data types: These are called derived data types because it derives them from primitive data types. It includes function, array, pointer, etc.
User-defined data type: These are those data types that the user defines.
Variables: Variables are used to store values. To declare a variable, you must write the variable name with its data type. The syntax for a variable is:
Syntax:
Example:
Now that you have understood data types and variables, move ahead and learn about arrays in this C++ Basics tutorial.
Arrays
Arrays are one of the most widely used concepts in C++. It is used to store homogeneous data in contiguous memory locations; the array elements have index values starting from 0. For example, you can declare 10 values of float type as an array, rather than declare 10 different variables.
There are elements inside each memory block, and for each element, there is an index number.
Syntax:
Example:
Now, move on to the next topic of the C++ Basics, i.e., Strings in C++.
Strings
Strings are objects in C++, which represent text in the program, like displaying sentences in the programs. There are various operations that you can implement on strings.
In C++ there are two ways to create strings:
- C style strings
- Creating string object
C Style Strings:
In this type, a collection of characters is stored like arrays. This type of string is used in the C language, so it is called C style strings.
Example:
Here, string ch is holding six characters, and the one extra character is the null character \0, which is automatically added at the end of the string.
Creating String Object:
Strings in C++ is an object that is a part of the standard library of C++. To use string class in the program, you must include <string> header in the program. It is more convenient to use C++ strings than C style strings.
Example:
Operators in C++
In C++, operators are symbols that are used to perform operations on data. These operations can be mathematical, or they can be logical.
Different types of operators are:
Arithmetic Operator:
These operators are used for mathematical operations like addition, subtraction, multiplication, division, and so on.
Example:
Comparison Operator:
This operator is used for the comparison of two operands.
Example:
Assignment Operator:
As the name suggests, it is used for assigning values to the variables.
Example:
Logical Operator:
These operators are used to connect two or more expressions to get a resultant value.
Example:
Now, you will learn about conditional statements.
Conditional Statements
These are those statements that allow you to regulate whether or not the block of code should execute.
If-else:
These statements are used when you want to run a code based on some conditions. The statements inside the if block execute if the statement's condition is true, otherwise the else block will execute.
else if:
This statement is used if you want to check another condition after it does not meet the first condition.
switch:
The switch statement is used to check on the condition against a list of values. Each value is called a case, and whichever case meets the condition, the code inside that case executes.
Now, you will learn about another C++ Basics topic, i.e., Loops in C++.
Loops in C++
Loops in C++ are used to execute a code block multiple times, and it helps to reduce the length of the code by executing the same code multiple times.
There are two types of loops:
For Loop:
This loop is used to repeat the code block for some exact number of times. The For loop contains three parts: initialization, condition, and updation.
Syntax:
Example:
Here is the example, you are using a for loop with arrays along with if-else to find the number of even and odd elements.
In this example, you have declared an array arr having some elements inside it, along with the variables even and odd, which you initialized from 0.
To run the for loop, you need the length of the array, so this example has divided the size of all the array elements with the size of one element of the array, which would give you the length of the array.
Inside the for loop, you initialized i from 0 to the length of the array so that every element of the array goes through the loop. Inside the if statement, there is a condition that will check for the even elements. If that condition satisfies, the if block executes and the even variable increments by one. If the condition is not satisfied then the else block will execute.
While Loop:
This loop is used when you don’t know the exact number of times the loop should repeat. The execution of this loop ends based on the test condition.
Example:
In the above example, the test condition of the while loop is i<10, which means the loop will keep on repeating till i becomes equal to 9, i.e. i<10. The message Hello there! will display on the screen ten times, i.e. from 0 to 9, i++ will increment the loop after each iteration.
Functions
You can define functions as a block of code or a group of statements that are designed to perform a specific task. It can easily invoke them from the main function at any point by using function_name(). You can also pass arguments to the function.
In the above example, you will call the function printFunction(), and inside this function, you will print the numbers from 1 to 20 with the help of a while loop.
Advance your career as a MEAN stack developer with the Full Stack Web Developer - MEAN Stack Master's Program. Enroll now!
Conclusion
After reading this article on C++ Basics, you would have understood all the basic concepts of various topics in C++ including arrays, strings, loops, functions. You also learned about the operators and conditional statements in C++ C++ Basics? If you do, then please put them in the comments section. We’ll help you solve your queries. To learn more about C++ Basics, click on the following link: C++ Basics | https://www.simplilearn.com/tutorials/cpp-tutorial/cpp-basics | CC-MAIN-2021-49 | refinedweb | 1,468 | 62.88 |
While observing different teams and individual developers failing to establish a test-driven development process, I follow a TDD recipe that works fine for me since a couple of years. In this article, I outline possible reasons why TDD doesn't work (when it doesn't) and suggest a step-by-step algorithm that has led me to using TDD as a natural software development approach.
Left-to-right, top-to-bottom. To read this article with less effort, Section Test-driven implementation of a feature can be skipped. It develops a detailed process of handling a change request or a late improvement in a TDD-style. This might be useful for some readers, but skipping it doesn’t destroy the consistency of the remaining content. From my standpoint at least.
As I heard once from an Austrian SCRUM guru, Andreas Wintersteiger, "all useful productive code you write in a week, you can write on Friday afternoon". Maybe he even didn't say "productive", I don't remember.
What is difficult to argue is that, if you keep only the useful code, you will find that typing it down didn't cost much in comparison with thinking about what this code should do (that is the behaviour), how do you implement it (e.g., how do you compose the LINQ expressions), and why doesn't it work as you expect it to (debugging). It can't be a two-digit factor between the typing effort and the rest.
It isn't so much that we think more than we type. The point is that we think on very different things simultaneously: the code behaviour, the implementation details, side effects... That's why it takes longer. At that, the output isn't that good: we will very likely miss something. In this way, we produce more bugs, we have more to refactor, and this again brings us back to the same circle, with less time to deadline left and thus less time to stop and to improve the process.
So, if we start coding without a detailed design, our thinking is inefficient because of too frequent context switches. On the other hand, we won't put every public method into a sequence diagram, will we?
What makes the thing even worse is that, as the behaviour complexity grows, our thinking tends to be chaotic. We hectically jump between different parts of the productive code and different behaviour cases, every time losing our efficiency and producing new potential design and code issues. It looks like it has a cumulative effect. It has.
But how can we control our thoughts? How can I forbid myself to think about "how?" when I'm thinking about "what?"
Well, this is a skill, it can be and ought to be trained.
But there is a recipe, too. I can put the "how?" things into another room, even at another floor. I can separate the thinking about my code behaviour and the implementation work so far from each other in time and in space, that it won't be even possible to mix them up.
Here is how it works for me.
Read the user story and describe the unit and integration tests in the form of test method names (yes, just as they say in the books), like:
public void Ctor_Initializes_EmployeeName_WithPassedParameter()
{
Assert.Inconclusive();
}
Write down all the test cases you can figure out at the user story basis. Just keep writing them down, one day, two days, even more. Stick with it. Write no test code, even less productive code, so long as there is at least one uncovered behaviour case you can think of within the story scope.
Yes, you will be staying at multiple daily SCRUM meetings in a row and saying "Yesterday I wrote test definitions. Will proceed with it today. Have no impediments." Have fun, you're welcome.
Yes, it needs some confidence, indeed.
What is your gain, apart from the respect of your teammates?
The gain is the effectiveness of your thinking on the code behaviour. You cannot disperse your mind over different things, because you simply aren't working on them. You stay concentrated (narrow-focused) on the behaviour only, thus giving yourself the best chance not to forget anything, which is so difficult to implement at a later point, when you think (and report!) you are almost done.
Well, you can pack it nicely. I mean for the daily stand-ups.
This is your design phase.
On the one hand, this is your design. On the other, you are writing placeholders for the future automated tests. You will then be forced either to implement and to green all of them, or to remove some, thus explicitly cancelling the related behaviour cases. So, at the end of thorough analysis and design, you will have a complete test suite that automatically... Well, there is no need to discuss, how good it is to have a complete automated test suite.
The only thing you cannot verify objectively, is the very completeness of your test suite. Your future happiness of having this story done without issues and known bugs, is resting on a shaky foundation of how accurate the behaviour description is.
The good news is that you won't need so much concentration after that. After having given your best in the test definition/design phase, you can code almost without thinking. Less thinking means less chaos.
Note that the design in its usual form, e.g., with UML, let's call it old-fashion design, doesn't make the same. Instead of a test skeleton, a future test suit, which completely defines your next steps, the old-fashion design yields some UML sheets that you hopefully won't put in the code documentation, as your code will probably significantly differ from what you've scribbled in Visio. The old-fashion design isn't that agile. (Yesss, I knew I could put it somewhere!)
Let's take a look at how it works in a real user story.
Consider you have a team of employees that you use to send to business trips for installing local networks, maintenance of security hardware, wine tasting, saving the world, whatever. Within this user story, they travel with passenger cars.
For the purposes of correct costs accounting and remuneration, the user as a head of department would like to have a function in the accounting software, where they can specify the vehicles the team drove with, associate the team members with the related vehicles and specify their roles, i.e., driver or passenger.
There are a couple of reasons why it is important for the user to avoid possible errors, like having the same driver associated with multiple vehicles, more drivers than vehicles, same passengers in different vehicles and alike. Indeed, it would be great to know, who of the team members will pay the speed tickets.
The application's graphical layout consists of a two-pane control, where the left-hand part is a so-called hamburger menu that switches the content of the right-hand pane. The user story specifies that there should be an own button added to the hamburger menu for switching to the vehicle/team management function.
The PO didn't specify more details to this story, because they like making unadvertised changes to the productive code and the database schema, so they haven't got so much time for writing detailed acceptance criteria. This is the reality you work in. The user story is now yours.
The user story specifications on one hand, and the existing framework of the app on the other, imply that the new function's view model should be added to the main view model's list of subordinate view models. This automatically leads to appearance of a new menu option in the left-hand pane. So, the first test is as follows:
[TestClass]
public class MainViewModelsTests
{
[TestMethod]
public void Ctor_Adds_ManageVehiclesViewModel_To_SubPages()
{
Assert.Inconclusive();//don't forget to implement me
}
/*
some other tests from previous user stories
*/
}
If the new view model is in the list, and its view is specified as a related data template in the main window’s XAML, the (tested) functionality of our application’s frameworks assures that the user has access to the new functionality. XAML content isn’t something that we unit-test, though.
It looks like we have forgotten that we would need the list of the team members to assign to the vehicles in the new view model. Yes, we have. Let's go ahead.
So, ManageVehiclesViewModel initially (at least in this use case) has an empty list of vehicles, offers a possibility to add and remove vehicles, lets the rest of the world know that it happens(*), and has a validation possibility which has an impact on save. Ah, there is a save command too!
ManageVehiclesViewModel
(*) The related property can by of type IEnumerable<Vehicle>. If its field behind is an ObservableCollection or a BindingList, WPF will check it. Not sure about Xamarin.Forms. If the field behind is a List or an array, it should change the reference and raise the property change event, otherwise the binding won't work (only property change event isn't sufficient). The latter option seems to be the most universal, i.e., it will certainly work for both WPF and Xamarin.Forms. For the sake of brevity, we will use BindingList.
IEnumerable<Vehicle
ObservableCollection
BindingList
So, the user should be able to add or remove vehicles. For this, we need a command and an observable collection in ManageVehiclesViewModel, the command should add a vehicle when it makes sense and be disabled when it does not. The added vehicle view model should have a remove-me command, and there should be a way to communicate this desire to ManageVehiclesViewModel (I always use a command-event pair in such cases in favour of isolated unit tests.) We add a dozen tests just for “...where they can specify the vehicles the team drove with...”. It seems we have enough work to do without blaming the PO for an under-defined user story:
ManageVehiclesViewModel
public void Ctor_Initializes_Vehicles_With_EmptyBindingList()...
public void Ctor_Initializes_AddVehicleCommand_With_CanExecute_True()...
public void AddVehicleCommand_Adds_VehicleViewModel_ToVehicles()...
public void On_VehicleViewModel_RemoveMeEvent_RemovesSender_FromVehicles()...
Rather soon, we will see that we are missing the teammates collection. For instance, when we will realize that we cannot add an unlimited number of vehicles, anyway not more than not yet assigned teammates.
What?! Yes, another collection, that of the unassigned teammates which is initialized in constructor from the passed list of teammates, changes when you add some of them to a vehicle as a driver or as a passenger or remove an entire vehicle with some passengers, and this, in its turn, changes the can-execute state of the add-vehicle command, and that of the save command too, and on changing the can-execute state, the command raises the can-execute-changed event...
It sounds simple: just read the user story and write down in the form of empty tests everything that comes to mind. It isn’t a big deal if you must rework it then, since there is no implementation effort behind them yet.
There will be tests and tests, new behaviour cases, new tests for them, where your find further new behaviour cases and so one and it seems to have no end...
Well, in most cases it does have the end. It’s great fun to reach it, because it happens suddenly. Suddenly, you figure out that you have nothing to add, simply nothing, while all the tests you've written so far are green. Then you are done with that story.
If it doesn't, it has nothing to do with TDD. Your analysis of the behaviour details - this is what you were doing all the time - has led you to a conclusion that the user story has no consistent solution, at least not in your understanding. It is good to know about it so early, before having written a line of productive code. It's time to have a brainstorming and to talk with the PO.
Our user story does have a consistent solution. It is implemented in project Sample2.
Sample2
It turns out, however, that consequent adding/removing passengers or drivers leads to changing their order in the initial list.
style="width: 640px; height: 322px" data-src="/KB/work/5257738/initial_view.png" class="lazyload" data-sizes="auto" data->
Clicking “Georgy Zhukov” in the collection of the first vehicle’s available drivers assigns him as a driver. The same is for “Dwight Eisenhower” if we select him as the second vehicle's driver. In both cases, these team members are removed from “Available Passengers” and “Available Drivers” of both vehicles.
style="width: 640px; height: 322px" data-src="/KB/work/5257738/view_after_adding_drivers.png" class="lazyload" data-sizes="auto" data->
If we click the driver button with an assigned driver, the latter is de-assigned and returns to “Available Passengers” and “Available Drivers” of both vehicles. However, the order of the unassigned team members is now different:
style="width: 640px; height: 322px" data-src="/KB/work/5257738/view_after_removing_drivers.png" class="lazyload" data-sizes="auto" data->
The functionality can be used as specified in the user story, but the PO doesn’t find it nice and it’s difficult to argue. Indeed, we are supposed to bring the vehicle-driver-passenger association to its initial state, and we do so, but the user expects to see the entire view in its initial state.
Let’s look at test-driven implementation of this improvement in detail.
First, we describe a couple of tests for this.
Ah, no! First, we decide where to place these tests.
If you examine project Sample2, you will see that we have tested that:
The first position could have seemed excessive at the beginning. Should we really test such things? Well, in the original customer project, where this story occurred, we didn’t, what made it necessary to test synchronization between the vehicles. For this article, I implemented it from scratch and differently, so I could spare nearly a dozen of integration-level tests without even adding more unit-level tests.
With this observation, it seems to suffice if we test the new feature within one vehicle view model, too. Let’s define such tests:
[TestFixture]
public class VehicleVieweModelTests
{
[Test] public void
Setting_Removing_Driver_Preserves_OriginalOrder_OfUnassignedEmployees()...
[Test] public void
Adding_Removing_Passengers_Preserves_OriginalOrder_OfUnassignedEmployees()...
}
As I’m not a LINQ guru, I have no idea how I would implement it, and I prefer not to think about it at this point. It fits well to the scheme.
The two tests are not quite similar:
[Test]
[TestCase(0)]
[TestCase(1)]
[TestCase(2)]
public void Setting_Removing_Driver_Preserves_OriginalOrder_OfUnassignedEmployees(int expected)
{
// arrange
var target = new VehicleViewModel(this.unassingedEmployees, this.unassingedEmployees.ToList());
var labRat = this.unassingedEmployees[expected];
// act
target.AvailableDrivers.Single(el => el.Person == labRat).SelectCommand.Execute(null);
target.Driver.SelectCommand.Execute(null);
// assert
var actual = this.unassingedEmployees.IndexOf(labRat);
Assert.AreEqual(expected, actual);
}
In this test, we see that the collection of employees is passed to the constructor of VehicleViewModel twice, but as two different instances. You find the related discussion below in this section. The test verifies exactly what we have observed and depicted in the screenshots above. But something tells us that the things will be the same if we try it with the passenger. Maybe even more complicated, as we can add and remove multiple passengers in an arbitrary order.
VehicleViewModel
[Test]
[TestCase(new[] { 0 }, new[] { 0 })]
[TestCase(new[] { 1 }, new[] { 1 })]
[TestCase(new[] { 2 }, new[] { 2 })]
/*lots of test cases ...*/
[TestCase(new[] { 0, 1, 2 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 1, 0, 2 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 0, 1 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 1, 0 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 1, 0 }, new[] { 2, 0, 1 })]
public void Adding_Removing_Passengers_Preserves_OriginalOrder_OfUnassignedEmployees
(int[] toAdd, int[] toRemove)
{
// arrange
var target = new VehicleViewModel(this.unassingedEmployees, this.unassingedEmployees.ToList());
var labRats = this.unassingedEmployees.ToArray();
foreach (var i in toAdd)
{
target.AvailablePassengers.Single(el => el.Person == labRats[i])
.SelectCommand.Execute(null);
}
// act
foreach (var i in toRemove)
{
target.Passengers.Single(el => el.Person == labRats[i])
.SelectCommand.Execute(null);
}
// assert
foreach (var expected in toAdd)
{
var actual = this.unassingedEmployees.IndexOf(labRats[expected]);
Assert.AreEqual(expected, actual);
}
}
Yet, it didn’t (and still doesn’t) seem evident to me that the original order can be restored correctly after all available teammates are selected to passengers and then unselected in a different order. Instead of gazing down at the productive code and trying to figure out how it would work in more complicated cases, or messing with mathematical induction or something, I just add the test cases and consider it is good enough if they pass while the algorithm isn’t quite clear to me.
There are lots of such situations, for instance in numerical methods, where understanding every algorithm detail in connection to any thinkable application case is simply not affordable.
For making it sure that this implementation has an expected effect with regard to “Available Passengers” and “Available Drivers”, recall that we have already tested that these collections are synchronized with this.unassingedEmployees.
this.unassingedEmployees
There remains a nuance that no test covers yet, namely that the ManageVehiclesViewModel creates a new vehicle view model with two different collections, namely this.unassignedEmployees and this.originalEmployees.
this.unassignedEmployees
this.originalEmployees
var newVehicleVm = new VehicleViewModel(this.unassignedEmployees, this.originalEmployees);
The vehicle view models share the former collection’s reference, so its content changes in time. Can we really use it to keep the order template?
It is quite annoying to test such a small thing, especially when we cannot figure out right away, how we can do it in an elegant manner. It would however be even more annoying, if it wouldn’t work because of a stupid copy-paste error.
I did my best trying to keep it as simple as possible:
[Test]
[TestCase(0, 1)]
[TestCase(1, 2)]
public void
Adding_Removing_Passengers_ForTwoVehicles_Preserves_OriginalOrder_OfUnassignedEmployees
(int toAddRemove1, int toAddRemove2)
{
// arrange
var target = new ManageVehiclesViewModel(this.employees, this.containerMock.Object);
target.AddVehicleCommand.Execute(null);
var vehicle1 = target.Vehicles.Last();
vehicle1.AvailablePassengers.Single(el => el.Person ==
this.employees[toAddRemove1]).SelectCommand.Execute(null);
target.AddVehicleCommand.Execute(null);
var vehicle2 = target.Vehicles.Last();
vehicle2.AvailablePassengers.Single(el => el.Person ==
this.employees[toAddRemove2]).SelectCommand.Execute(null);
// act
vehicle1.Passengers.Single(el => el.Person ==
this.employees[toAddRemove1]).SelectCommand.Execute(null);
vehicle2.Passengers.Single(el => el.Person ==
this.employees[toAddRemove2]).SelectCommand.Execute(null);
// assert
CollectionAssert.AreEqual(this.employees, target.UnassignedEmployees.ToList());
}
This is an integration test. So as to reduce its overlapping with the unit tests in VehicleViewModelTests, I retained here only those tests cases that fail if we pass this.unassignedEmployees as the second parameter like below:
VehicleViewModelTests
var newVehicleVm = new VehicleViewModel(this.unassignedEmployees, this.unassignedEmployees);
It didn’t take much to think on its implementation, as it resembles the analogous unit tests in VehicleViewModelTests. Nevertheless, what is its value? I mean, besides covering this ridiculous copy-paste opportunity. Well, it verifies that passing two different collections is necessary indeed, so we haven't added any technical debt here and don’t need to think about eventual simplification. At some point, I had a doubt.
When messing around with the above integration test, I found another behaviour case, namely that of deleting an entire vehicle with passengers or drivers, where correct order recovery is to be verified too. So, further integration tests are added. This time, these are undoubtedly integration cases:
public void On_Removing_VehicleViewModel_Adds_VehiclesAssingedPassengers_
ToUnassgignedEmployees_InOriginalOrder()...
public void On_Removing_VehicleViewModel_Adds_VehiclesAssingedDriver_
ToUnassgignedEmployees_InOriginalOrder()...
These new cases would require reuse of the insertion-to-original-order algorithm, which was initially implemented as a method of VehicleViewModel. Now we move it to an own utility class OriginalOrderTemplate and we should test it, shouldn’t we? And what is then with the already written tests in VehicleViewModelTests? Will they be duplicated?
OriginalOrderTemplate
No, not really. The initially written tests verify only the cases that can occur in this user story. But the new class is a utility. So, its usage should either be limited to the cases of our user story, which would require defining and testing its reaction on the cases beyond the required scope, like throwing argument exceptions. Or we extend its scope and implement some limit case tests outside the user story scope. In this specific situation, I found it more pragmatic to add limit test cases and thus a) add more value to this utility, b) make it less fragile, and c) avoid changes to the productive code logic, which I would have tested too. There is something to test in OriginalOrderTemplateTests, anyway.
OriginalOrderTemplateTests
The above-mentioned tests and the related productive code changes are found in project Sample3.
Sample3
It is important to indicate that there is no need to verify this improvement in an automated UI test. Indeed, we have not only tested the synchronization of “Available Passengers” and “Available Drivers”, but also that they are of an observable type (IBindingList in our case). So, the only thing that still could go wrong is the related XAML binding expression, which we as developers do not cover with automated tests. If QA would like to, they can, but they certainly don’t need to do this additionally for this specific improvement.
IBindingList
As you can see, late definition of behaviour cases and late test implementation involves that chaotic jumping between behaviour analysis, tests definitions and implementations details that I have talked about at the beginning, the old, good and expensive thinking-n-coding.
In the rule, you realize some new behaviour cases when you are already working on the implementation details of the productive code.
The algorithm is simple: whatever you are doing, stop it if you find a new behaviour case, and write an empty test for it. It secures you from forgetting that new case (you will forget it if you find a second one or a third). Besides, if you have any inconsistency in your design or in the user story definition, you have a chance to figure it out and take measures as early as possible, thus reducing the risk of excessive costs. In any way, you stay in the test definition phase so long as you can add or change anything in the test skeleton.
It might help if you define the priorities of your work as follows:
At least you can feel so, even if you have forgotten the XAML part. It systematically happens to me to forget about the view. Anyway, you will laugh on it with your teammates on the stand-up and then will be refining and tuning the UI part with an easy heart, because the thing already works.
Phases 2 and 3 can be merged. It depends upon your personal preferences and how confident or unconfident you are about implementation of the productive part in this specific user story. I usually merge.
What is the meaning of this work from the productive code perspective?
What concerns the class interaction, it is pretty much like CRC design, except that you define it in within the integration tests. Besides, you define the class behaviour much more detailed as when you do it in a usual CRC or UML design.
Anyway, at every step, you are doing the productive work. Having a complete test suit at the end is a bonus.
The software developer work is creative. That’s why we like it. What if the suggested recipe removes the thinking work from where it is the most creative, from the productive code implementation?
Well, partly it does.
There is however a phenomenon that spoils a pleasure of a super-creative work, namely a bitter experience of never-ending stories. The chart below displays functionality F versus costs C in a project or user story with elevated technical debt (poor design and test coverage are parts thereof):
style="height: 394px; width: 640px" data-src="/KB/work/5257738/Functionality-Costs-High-Tech-Debt.png" class="lazyload" data-sizes="auto" data->
The fun of a super enthusiastic and creative feature-driven work at the beginning turns into frustration at the end. At this point, you just wouldn’t like to think about what would happen if a change request comes. You can experience it once, twice, a couple of times more...
The next chart shows another case, namely how the costs-versus-functionality curve looks like in a TDD-style project. You get done suddenly. And certainly.
style="height: 395px; width: 640px" data-src="/KB/work/5257738/Functionality-Costs-Low-Tech-Debt.png" class="lazyload" data-sizes="auto" data->
Here, you might feel uncomfortable at the beginning, as you’re working hard but produce no functional increments. Then, as you start greening the tests, you might be thinking “no, it cannot be that simple!”. Si, it can! It feels like you are working almost without thinking. This is because your thinking is more efficient. You are concentrated on the implementation details only, that’s why it doesn’t take so much effort.
These diagrams are valid for projects with higher and lower technical debt in general. Test-driven development helps you to reduce the part of technical debt that is linked to poor design and tests coverage.
It is like in many other life situations: your either invest up-front and then enjoy or you have fun from the beginning, but not for long:
style="height: 394px; width: 640px" data-src="/KB/work/5257738/Functionality-Costs-Comparison.png" class="lazyload" data-sizes="auto" data->
How can you write empty unit tests (for days!), if you are not sure that you can implement the productive part?
You cannot. If you are not sure about the productive part, this is the case for a prototype.
As books say, a prototype is something that you throw away after. The further you go with your prototyping, the more difficult is to stop it and start a clean productive solution. The further you continue, the lower are the technical risks, but the higher are the risks to continue the production with a poorly designed prototype code.
Prototype code isn't something you’re used to cover with unit tests, simply because you may need to refactor it too often and too deeply, so that refactoring the related unit tests may turn out to be too expensive.
You think you realize the software component's behaviour and the interaction between the productive classes good enough to skip this borrowing work. I often have such temptation. It might lead me to having a leaky tests coverage. What consoles me is a hope that the not-covered classes would never be changed in the future nor impacted by any changes in other parts of my software component.
Even if at the beginning you were sure enough about the productive part, you encounter a necessity of significant refactoring at a later point, where you already have tons of unit tests which should be refactored too. So, while without the unit tests the refactoring costs would be quite moderate, with them it turns out to be very expensive.
It is the matter of design to organize the things in your productive code and the tests so, that such deep refactoring is sufficiently unlikely. Is there any recipe for a good design? Yes, there is. Read about design anti-patters and avoid them. Even if you, for whatever the reason, don't like using design patterns, simply avoiding anti-patterns will make you code good enough.
In a real project, where this sample user story occurred, vehicles were initially represented by strings (car plate numbers). It’s the same what happens if you pass the event data as is, without encapsulating it into an EventArgs-descendant. I haven’t found any specific anti-pattern for this, so I have christened it as under-typing. When it was time to add car mileage, we already had a lot of unit tests, where we've got to change the test data types.
EventArgs
We had once a great experience of defining all thinkable tests cases exactly as I put it here. But we did it in the form of SCRUM Planning II, that is seating altogether, the entire team or almost, and writing that stuff on a board. We did only one user story in this way and all of us highly estimated it at the consequent retrospective meeting. But we have never done it again.
In my actual understanding, TDD should be cosy. I would even say, this is the objective.
That means, you feel it brings less than it costs. This is the killer of any undertaking.
Yet, we should never forget about the technical debt effect. It is always delayed and always inevitable. Figures 5 and 7 display how it works. When the costs explode, the learned lesson will probably convince you to start documenting your code, refactor, increase the test coverage, etc. But you cannot recover the costs of what has already happened. The regret to not having done it earlier will remain.
Nevertheless, it would be nice to know...
Every automated test has a value and its cost. The objective is to always have the former possibly high and the latter possibly low. In case where we cannot estimate the value of the cost of each individual test in advance, a couple of rules can be used to make the value expectation higher and the cost expectation lower.
As Martin Fowler depicted in his "Test Pyramid" article, the lower is the test in the pyramid, the cheaper it is:
style="height: 352px; width: 640px" data-src="/KB/work/5257738/test-pyramid.png" class="lazyload" data-sizes="auto" data->
It is in general true for pure unit tests and for integration tests, unless unit-testable isolation of productive classes requires too much effort. We have integration tests in the sample user story in this article. Basically, if I have a choice, I test any behaviour as low in the test pyramid as possible.
If you tested the thing at a lower level, avoid retesting it elsewhere. In other words, if I have already tested some behaviour in lower-level (lower-integration-degree) tests, I rely entirely on that in the higher-level tests.
And do not copy-paste test-data-creating code. Use auxiliary methods.
And…
There are many recommendations and unit-testing best-practices. Let’s have them outside this article’s scope.
In any case where you don't have any idea about the software component's behaviour cases and detailed design. Ok, ok... You simply don't feel excessive confidence, alright? In other words, if you know that it isn't rocket science, but you don't know what to start with, it wouldn't be wrong if you start with empty test cases, top-bottom, exactly as in the example above.
If you are tired and have concentration problems, it could help if you focus on simple and small yet useful things, like empty test cases.
If your team is new to TDD (otherwise, they would tell you what to start with), you should first agree with your teammates on doing a user story or two in the TDD-style. If your team practices pet projects, this could be a good place to try something without an obligation to succeed right away.
If your project guidelines imply high test coverage, it's more reasonable to write the tests first. It is simply less time consuming because of the reasons I tried to explain in this article. Besides, with tests-first, you will test the behaviour only, your tests will be shorter (not so much to refactor in case of), and your productive code will forcefully be test-cooperative.
Whenever you feel it's too difficult, just recall how it was when you were learning to ride a bike.
The more people have reviewed your empty tests, the less will be your remaining effort.
Pair programming is much easier in the test definition phase than in the implementation, because the only thing you need to agree on is the behaviour. Besides, pair programming in the test definition phase is especially valuable.
If you even don't know, what empty tests to start with, as it happened to me in this user story, add functionality regions to individual test files, like "Constructor", "Adding/removing vehicles", "Validation and saving", etc. Remember that thinking takes time, typing does not.
Consider adding Assert.Inconclusive() to the empty of copy-pasted tests. Keeping the empty tests in mind so as not to leave passing test placeholders is tiresome. Typing is... you already now. Why Assert.Inconclusive() and not throwing a not-implemented exception? Sounds logical, but then you probably won't be able to check it in and share the implementation work.
Assert.Inconclusive()
The source code is a VS2019 solution. It contains three productive executable projects, namely Sample1, Sample2 and Sample3. The former is a boiler plate WPF project, the second one implements the sample user story as it is defined, the latter adds an improvement discussed in Test-driven implementation of a feature.
Sample1
,
Sample2
Sample
3
The productive projects have their test counterparts, namely Sample1.Tests, Sample2.Tests, and Sample3.Tests.
Sample1.Tests
2
.Tests
The former contains only empty tests that define the behaviour cases I thought about before I've set up with the implementation.
The second test project contains the test suit of Sample2.
The test suits of the both sample projects are different. Some tests of the first project are removed in the second. This is normal. Indeed, in the suit of empty tests, it deals about the behaviour definition. Later, it can turn out that some behaviour cases are not important, should be different, or even cancelled. Sometimes you cannot know it in advance.
The significantly increased number of tests, as displayed in Test Explore, namely 83 in Sample2.Tests versus 43 in Sample1.Tests is because I have changed from MSUnit to NUnit and implemented some tests with multiple data-driven tests cases. NUnit accounts data test cases as individual tests. Sample3.Tests displays 133 tests, although we have added only 6 test methods.
1
MSUnit
NUnit
NUnit
There is also a utility project with an auxiliary stuff and unit tests. | https://codeproject.freetls.fastly.net/Articles/5257738/How-to-Code-Without-Thinking?fid=1957336&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True | CC-MAIN-2021-39 | refinedweb | 5,716 | 54.93 |
User Tag List
Results 1 to 2 of 2
Thread: Primary Key Type other than INT?
- Join Date
- Apr 2005
- 3
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
Primary Key Type other than INT?
Hello,
I started to use Rails recently and at the moment I am focusing Mgrations.
Is it possible to change the type of the primary key? Changing the name is fairly easy. I have data that are referencend by a char field that has unique content.
That's what my first migration looks like.
Code:
def self.up create_table(:pages, :primary_key => :id) do |t| t.column :id, :string t.column :name, :string t.column :content, :string end
Last edited by Josti; Jun 12, 2006 at 04:40.
- Join Date
- Aug 2005
- 986
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
I don't think that Rails supports this. But why do you want to use a string? Maybe your problem can be solved in another way.
Bookmarks | http://www.sitepoint.com/forums/showthread.php?391655-Primary-Key-Type-other-than-INT&p=2822630&viewfull=1 | CC-MAIN-2014-15 | refinedweb | 162 | 86.71 |
Perl & LWP 121
The good:The book has a nice style and good coverage of the subject, includes introduction to all the modules used, reference material and includes good, well-developed examples. I really liked the way the authors describe the basic methodology to develop screen-scraping code, from analyzing an HTML page to extracting and displaying only what you are interested in.
The bad:Not much is bad, really. Some chapters are a little dry, though, and sometimes the reference material could be better separated from the rest of the text. The book covers only simple access to web sites; I would have liked to see an example where the application engages in more dialogue with the server. In addition, achieve appendices and index, this one is part of that line of compact O'Reilly books which covers only a narrow topic in each volume but which covers those topics well. Just like Perl & XML , its target audience is Perl programmers who need to tackle a new domain. It gives them a toolbox and basic techniques that to provide URLs. It continues with five chapters on how to process the HTML you get, using regular expressions, an HTML tokenizer and HTML::TreeBuilder, a powerful module that builds a tree from the HTML. It goes on with an explanation programmer new to the field would do: start from the docs for a module, play with it, write snippets of code that use the various functions of the module, then go on to coding real-life examples. I particularly liked the fact that the author often explains the whys, and not only the hows, of the various pieces of code he shows us.
It is interesting to note that going from regular expressions to ever more powerful modules is a path followed also by most Perl programmers, and even by the language itself: when Perl starts being applied to a new domain first there are no modules, then low-level ones start appearing, then, as the understanding of the problem grows, easier-to-use modules are written.
Finally I would like to thank the author for following his own advice by including interesting examples and above all for not including anything about retrieving stock-quotes.
Another recommended book on the subject is Network Programming with Perl by Lincoln D. Stein, which covers a wider subject but devotes 50 pages to this topic and is also very good.
Breakdown by chapter:
-.
Web Basics (16p): describes how to use LWP::Simple, an easy way to do some simple processing.
The LWP Class Model (17p): a slightly steeper read, closer to a reference than to a real introduction that lays out the ground work for the good stuff ahead. mentioned in the introduction roadmap..
HTML processing with Tokens (19p): using a real HTML parser is a better (safer) way to process HTML than regexps. This chapter uses HTML::TokeParser. It starts with a short, reference-type intro, then a detailed example. Another reference section describes the methods an alternate way of using the module, with short examples. This is the kind of reference I find the most useful, it is the simplest way to understand how to use a module..
HTML processing with Trees (16p): even more powerful than an HTML tokenizer: HTML::TreeBuilder (written by the author of the book) builds a tree from the HTML. This chapter starts with a short reference section, then revisits 2 previous examples of extracting information from HTML using HTML::TreeBuilder.
Modifying HTML with Trees (17p): More on the power of HTML::TreeBuilder: a reference/howto on the modification functions of HTML::TreeBuilder, with snippets of code for each function I really like HTML::TreeBuilder BTW, it is simple yet powerful..
Spiders (20p): a long example describing how to build a link-checking spider. It uses most of the techniques previously described in the book, plus some additional ones to deal with redirection and robots.txt files.
Appendices
I think the Appendices programmers are not very familiar with OO, and in truth they don't need to focused enough that I have never needed to use the index.
You can purchase Perl & LWP from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Re:spammers thank you. (Score:3, Insightful)
I'd combat it by making the infomation you DO want available very easy to get to, and the everything else hard.
That's why it's easy to take a penny from the penny jar and hard to get to the safe at a store.
Re:spammers thank you. (Score:2)
Re:perldoc LWP (Score:5, Insightful)
Books like these, that focus very narrowly but try to cover the topic well, is what ORA is well known for and why they are still the major distributor of books related to OSS development and usage. Other large publishers would seem to balk at these types of books and instead opt for the 1000+ pg books that try to cover everything, typically failing to cover topics adequetely or making mistakes, since the size of a book can be an influencing factor to some book purchasers. In fact, one could argue that a lot of what ORA offers is simply rehashs of free documentation, but if that were the case, I'd have expected to see ORA out of business years ago. Therefor, there is a demand for ORA's quality retakes of the manpages and free documentation, and books like these continue to extend their catalog in good ways.
Re:perldoc LWP (Score:3, Funny)
That's fairly apparent. Especially as his name is Sean.
:)
Screen scraping cold war (Score:2, Insightful)
In turn, screen scrapers will have to counter with further intelligence and the information cold war begins!
Just a thort.
Re:Screen scraping cold war (Score:1)
Re:Screen scraping cold war (Score:4, Informative)
Another thing that sites do is encode certain bits of text as images. Paypal, for example, does this. And they muck with the font to make it hard for OCR software to read it -- obviously they've had problems with people creating accounts programatically. (why people would, I don't know, but when there's money involved, people will certainly go to great lengths to break the system, and the system will have to go to great lengths to stop it -- or they'll lose money.)
It's nice that there's a book on this now
... but people have been doing this for a long time. For as long as there has been information on web sites, people have been downloading them and parsing the good parts out.
Re:Screen scraping cold war (Score:1)
Re:Screen scraping cold war (Score:1)
Spam (Score:1, Offtopic)
Re:Quick n' Dirty Method (Score:2)
Whoah, perl needs a whole book for this? (Score:1, Informative)
import urllib
obj = urllib.urlopen('')
text = obj.read()
Doesn't seem to discuss the legalities (Score:3, Interesting)
Re: Doesn't seem to discuss the legalities (Score:3, Insightful)
How can this be less legal than surfing the pages with a browser regularly?
Additional question for 5 bonus points: Who the hack can sue me if I program my own browser and call it "Perl" or "LWP" and let it pre-fetch some news sites every morning at 8am?
VCRs can be programmed to record my favorite daily soap 5 days a week at 4pm as long as I'm on vacation. Some TV stations here in Europe even use VPS so my VCR starts and stops recording exactly when the show begins and ends, so I don't get commercials before/after. Illegal to automate this?
Disclaimer: I don't watch soaps.
:)
Ticketmaster Example (Score:5, Informative)
Quote from their TOC's... Ticketmaster Ticketmaster
This I think would be something that a lot of sites would want to do (Not that I agree)
Re:Ticketmaster Example (Score:2)
It's the repurposing that concerned me (Score:1)
Re: Doesn't seem to discuss the legalities (Score:4, Insightful)
Many sites (Yes, our beloved Slashdot included) use detection methods. If the detector thinks you are using a script, BANG!, your IP is in the deny list until you can explain your actions. A nice profile that says "for the last 18 days, x.x.x.x IP address logged in each day at exactly 7:53 am and did blah..." will get you slapped from MSNBC pretty fast. I would advise you to get some type of permission from the owner of the site before running around with scripts to grab stuff all over the web. Someone might mistake you for a script kiddie.
Re: Doesn't seem to discuss the legalities (Score:1)
I meant: What is the difference between fetching a site every morning in a browser and - for example - have it pre-fetch with a script so the info is already there when you enter your office?
Asking for permission is never a bad idea, though.
Re:Doesn't seem to discuss the legalities (Score:1, Informative)
Re:Doesn't seem to discuss the legalities (Score:1)
I needed several good sources of news stories for a live product demo and I did not have too much trouble getting permission from site owners to automatically summarize and link to their material.
I worked for a company that got in mild trouble for not getting permission a few years ago, so it is important to read the terms of services for web sites and respect the rights of others.
That said, it is probably OK to scrape data for your own use if you do not permanently archive it. I am not a lawyer, but that sounds like fair use to me.
A little off topic: the web, at its best, is non-commercial - a place (organized by content rather than location) for sharing information and forming groups interested in the same stuff. However, I would like to see more support for very low cost web services and high quality web content. A good example is Salon: for a low yearly fee, I find the writing excellent. I also really like the SOAP APIs on Google and Amazon - I hope that more companies make web services available - the Google model is especially good: you get 1000 uses a day for free, and hopefully they will also sell uses for a reasonable fee.
-Mark
Yea! (Score:4, Funny)
Oh, wait...
Re:Yea! (Score:3, Insightful)
First, perl has native threads in the current perl 5.8.0.
Second, if you are interested in threads (or more generally multiple concurrent processing), check out POE from CPAN. POE *is* the best thing to happen to perl since LWP. It is an event driven application framework, which allows cooperatively multi-tasking sessions to do work in parallel. It is the bees knees, and the cat's meow.
Re:Yea! (Score:1)
For that matter, anyone who uses that phrase should be drawn and quartered and then fed their own intestines, but I digress.
Re:Yea! (Score:1)
LWP is not new but (Score:1)
perldoc LWP first followed by finding any examples
I can using the module, the book is always a last resort. I can learn a lot more by just playing
with the module.
LWP is great! (Score:4, Interesting)
In one evening, I wrote a quick Perl routine to perform the login and navigation to the appropriate page by LWP, download the needed page, and use REs to extract the appropriate information (yes, traditional screen scrape)
The beauty was that it was easy. I don't usually do Perl, but in this case it proved to be a wonderful tool creation tool
Re:LWP is great! (Score:1)
lynx -source '' | awk '{ some code to parse }'
I don't understand why anyone would want to use a library or something like Perl for that. Makes it more complicated than it should be. I especially don't understand why you would want to buy a book about this when you can do: man awk
Re:LWP is great! (Score:1)
Primarily because it was a cross-platform solution, and (at the time) I didn't know how to do it in Java (I do now, but can't bother to rewrite something that works).
The script originally ran on a Windows box, but it has since moved to a SCO Unix box.
Finally, remember that a form-based login and some navigation was required (and saving of cookies in the process). This makes lynx and such more or less useless when trying to automate this. The Perl script then can proceed to dump the data directly into the database (or output as CSV, as mentioned earlier) with just a few more lines of code.
Re:LWP is great! (Score:1)
awk != perl. awk (very much less than) perl (Score:1)
OTOH, Perl *is* a superset of awk. Any awk program can be converted to perl with the utility a2p (which comes with the Perl source distribution), although probably not optimally.
Re:LWP is great! (Score:2)
Re:LWP is great! (Score:1)
Unfortunately that wouldn't be feasible because this feed comes in daily, and the idea was to reduce manual work. Much easier to just schedule it with 'crontab' or 'at'.
Re:LWP is great! (Score:1)
Thank you for a perfect example of false laziness.
Screenscraping is hardly best practices. (Score:1, Offtopic)
Re:Screenscraping is hardly best practices. (Score:2)
or get information from dozens of (often academic) "content providers", with a page or two of info each; updated maybe once or twice a month... yes they would definitely want everyone who uses the information they publish to "work with them officially" - good use of everyone's time.
Screen Scraping? (Score:1, Redundant)
This book fills a niche (Score:4, Informative)
It's actually not that often that I want to grep web pages with Perl, the slightly-more difficult stuff is when you want to pass cookies, etc, and that's where I always find the docs to be wanting. Yes, the docs tell you how, but to get the whole picture I remember having to flip back-and-forth between several module's docs.
Re:This book fills a niche (Score:3, Informative)
I've always found the libwww-perl cookbook [perldoc.com] to be an invaluable reference. It covers cookies and https connections. Of course, it doesn't go into too much detail, but it provides you with good working examples.
Re:This book fills a niche (Score:2)
Anyway, "Web Client Programming" was a nice slim volume that did a good job of introducing the LWP module, but had an unfortunately narrow focus on writing crawlers. If you needed to do something like do a POST of form values to enter some information there wasn't any clear example in the text. (The perldoc/man page for HTML::LWP on the other hand had a great, very prominent example. Though shalt not neglect on-line docs.) I flipped through this new edition at LinuxWorld, and it looks like it's fixed these kind of omissions it's a much beefier book.
BUT... even at a 20% discount it wasn't worth it to me to shell out my own money for it. If you don't know your way around the LWP module, this is probably a great deal, if you do it's a little harder to say.
Re:This book fills a niche (Score:2)
Re:This book fills a niche (Score:2)
2 Free Orielly online books with related topics (Score:2, Informative)
You can read online their book Web Client Programming With Perl [oreilly.com] which has a chapter or two on LWP, which I've found very useful.
And on a related note, you can also read CGI Programming on the World Wide Web [oreilly.com] which covers the CGI side.
I may take a look at this LWP book, or I may juststick with what the first book I mentioned has. It's worked for me so far.
This is not screen scraping (Score:3, Informative)
Using the source of a webpage is just interpreting HTML. It's not like the application is selecting the contents of a browser window, issuing a copy function, and then sucking the contents of the clipboard into a variable or array and munging it. THIS is what screen-scrapers do.
Re:This is not screen scraping (Score:2)
The process of ripping data from HTML is very commonly called screen scraping.
Re:This is not screen scraping (Score:2)
Worth learning LWP instead of doing it manually? (Score:4, Interesting)
I've done a whoooole lot of screen-scraping working for a company that shall remain nameless?
Re:Worth learning LWP instead of doing it manually (Score:2, Informative)
Re:Worth learning LWP instead of doing it manually (Score:2)
Actually, I do normally use Perl. I just dump the source to a string and then regexp to my heart's content.
Hmm.. guess I should take a closer look at it =)
Re:Worth learning LWP instead of doing it manually (Score:3, Informative)?
The benefits come from when you're trying to crawl websites that require some really advanced stuff. I use it to crawl websites where they add cookies via javascript and do different types of redirects to send you all over the place. One of my least favorite ones used six different frames to finally feed you the information, and their stupid software was requiring my session to download, or at least open, three or four of those pages in the frames before it would spit out the page with all the information in it. IMHO, LWP with PERL makes it way simple to handle this sort of stuff.
Re:Worth learning LWP instead of doing it manually (Score:2)
I too have done this for a long time, but not for any company. Let's just say it is useful for increasing the size of my collection of, oh, shall we say widgets.
:-)
Really, it may be a little time consuming to do it manually, but it is also fun. If I find a nice site with a large collection of widgets, it is fun to figure out how to get them all in one shot with just a little shell scripting. A few minutes of "lynx -source" or "lynx -dump", cutting, grepping, and wget, and I have a nice little addition to my collection.
Re:Worth learning LWP instead of doing it manually (Score:4, Funny)
First, I don't know what "the right way" means. Whatever works for your situation works and is just as "right" as any other solution. Second, I don't know excatly what you're using in comparision. I can think of a dozen ways to grab text from a web page/ftp site, create a web robot, etc. The LWP modules do a good job of pulling lots of functionality into one package, though, so if you expect to expand your current process's capabilities at any point, I'd maybe recommend it over something like a set of shell scripts.
Having said all that, I can say that yes, in general, it's worth it to learn the modules if you know you're going to be doing a lot of network stuff along with other programmatic stuff. It provides all the reliability, ease of use/deployment, and other general spiffiness you get with Perl. If you have a grudge against Perl, then it probably won't do anything for you; learning LWP won't make you like Perl if you already hate it. But if you have other means to gather similar data and you think might like to take advantage of Perl's other strengths (database access, text parsing/generation, etc) then you'd do well to use something "internal" to Perl rather than 3 or 4 disparate sets of tools glued together (version changes, patches, etc can make keeping everything together hard sometimes). Of course, you can also use Perl to glue these programs together and then integrate LWP code bit-by-bit in order to evaluate the modules' strengths and weaknesses.
Does the LWP stuff replace things like wget for quick one-liners? No. Does it make life a little easier if you have to do something else, or a whole bunch of something elses, after you do your network-related stuff? Yes.
Or is it just a bloated Perl module that slaps a layer of indirection onto what is sometimes a very simple task?
Ah, I have been trolled. Pardon me.
-B
Re:Worth learning LWP instead of doing it manually (Score:2)
Unless you go to the link to my homepage and read the first paragraph?
Re:Worth learning LWP instead of doing it manually (Score:2)
Shush =P
And just to repeat, in case people didn't see my follow-up post, I'm already using Perl to handle my screen-scraping. My question was if I should take the time to learn to get/parse the resulting HTML using LWP instead of using Lynx and regexp-ing the resulting source to death.
when it's worth using LWP and HTML parsers (Score:2)
Yes, it's worth it to learn this, even if you still end up using the quick-and-dirty approach most of the time. The abstraction and indirection is pretty much like *any* abstraction and indirection -- it's more work for small, one-off tasks, but it pays off in cases where reusability, volume, robustness, and similar factors are important. If you end up having to parse pages where the HTML is nasty, or really large volumes of pages where quality control by inspection is impractical, or more session-oriented sites, the LWP-plus-HTML-parser-solution can be really valuable.
Frankly, if you're familiar with the principles of screen scraping (and you obviously are), learning the LWP-plus-parser solution is pretty simple (and I suspect you know a big chunk of what this book would try to tell you anyway). You can just about cut and paste from the POD for the modules and have a basic working solution to play with in a few minutes, then adapt or extend that in cases where you really need it.
Re:when it's worth using LWP and HTML parsers (Score:2)
Regulare Expressions for HTML? (Score:1, Interesting)
This is so wrong in so many ways.
For starters, you cannot parse dyck-languages with regular expressions.
You *have* to use a proper HTML-parser (that is tolerant to some extent), otherwise your program is simply wrong and I can always construct a proper HTML page that will break your regexp parser.
For those who are really hot on doing information extraction on web pages:
In my diploma thesis I found some methods to extract data from web pages that is resistant to a lot of changes.
I.e. if the structure of a web page changes, you can still extract the right information.
So you can do "screen-scraping" if you really want to, but it should be easier to contact the information provider directly.
Re:Regulare Expressions for HTML? (Score:3, Informative)
The problem is that regular expressions are often faster at processing than using an HTML parser. One example that I wrote used the HTML::TreeBuilder module to parse the pages. The problem is that we were parsing 100's of MB's worth of pages, and the structure of these pages made it very simple for me to write a few regexp's to get the necessary data out. The regexp version of the script took much less time to run than the TreeBuilder version did.
This is not to say that TreeBuilder doesn't have it's place. There's a lot of stuff that I use TreeBuilder for just because sometimes it's easier and produces cleaner code.
Re:Regulare Expressions for HTML? (Score:2)
Um, OK, whatever. If I have an HTML parser and your HTML page changes, my program is broken. Whereas if I'm looking for say, the Amazon sales rank for a certain book, and the format of amazon's page changes, but I can still grep for Amazon Sales Rank: xxx, I still have a working program.
What diploma thesis? Where's the link? Parent post should be considered a troll until further explanation is given.
Besides, this book in fact covers HTML parsers in addition to other useful techniques, like regular expressions. And since when is HTML a dyck language?
Re:Regulare Expressions for HTML? (Score:1)
Sure. But you're missing the point if you think it's about using regexes to process a whole HTML file.
The idea isn't to parse an entire HTML document, but to look for markers which signal the beginning and end of certain blocks of relevant content.
What's the url to your thesis?
Practical but... (Score:1)
Tone
Re:Practical but... - One Solution (more complex) (Score:1)
So, my company developed software that uses AI-like techniques to avoid this problem - not a trivial problem to solve, but valueable when you do.
What we've done (using PHP not Perl but the techniques and languages are very similar for this piece) is do a series of extraction steps - some structural and others data related - the structural steps employ AI-like techniques to detect the structure of the page and then use it to pass the "right" sections on to the data extraction portions.
This employs some modified versions of HTML parsers, but not a full object/tree representation (too expensive from a memory and performance standpoint for our purposes) - rather we normalize the page (to reduce variability) and then build up a data structure that represents the tree structure, but does not fully contain it.
In simpler terms - this stuff can be very complex, but if you need to there are companies (such as mine) who can offer solutions that are resistant to changing content sources and/or are able to rapidly handle new sources (in near realtime).
If you are interested feel free to contact me off Slashdot for more information and/or a product demo. [jigzaw.com]
Slash-scraping with LWP (Score:2, Informative)
Anyone who has used AvantGo to create a Slashdot channel understands the importance of reparsing the content. AvantSlash [fourteenminutes.com] uses LWP to such down pages and do reparsing. Hell, for years (prior to losing my iPaq), this was how I got my daily fix of Slashdot.
I just read it during regular work hours like everyone else.
:>
Too little too late (Score:1, Informative)
I can't believe they have devoted a book to this subject! And why would they wait so long...? If you are into Perl enough to even know what LWP is, you probably don't need this book.
Once you build and execute the request, it is just like any other file read.
For you PHP'ers, the PHP interface for the Curl library does the same crap. Libcurl is very cool stuff indeed.
l8,
AC
"If you have to ask, you'll never know".
Red Hot Chili Peppers
Sir Psycho Sexy
Another resource (Score:5, Interesting)
Thanks merlyn! (Score:1)
Your Perl advocacy over the years has been very helpful to my perl mast^h^h^h^hhackery, and I have borrowed from your LWP columns in writting LWP servers and user agents.
At my last gig, I had to write an automation to post to LiveLink, a 'Doze based document repository tool. The thing used password logins, cookies, and redirect trickery.
Using LWP (and your sample source code), I wrote a proxy: a server which I pointed my browser to, and a client which pointed to LiveLink. I was then able to observe the detailed shenanigans occuring between my browser and the LiveLink server, which I then simulated with a dedicated client.
I can't imagine any other way to have accomplished it as simply as with LWP, and with your sample code to study. Thanks to you and Gisle Aas (and Larry) for such wonderful tools!
Parsing HTML in Perl (Score:3, Interesting)
Re:Parsing HTML in Perl (Score:3, Interesting)
Re:Parsing HTML in Perl (Score:1)
SSL coverage omitted (Score:2, Insightful)
Big Time Scraping (Score:3, Interesting)
I work for a telecom company. You wouldnt believe the scope of devices which require screen scraping to work with. The biggest one that comes to mind that _can_ require it is the Lucent 5ESS telecom switch. While the 5ESS has an optional X.25 interface (for tens of thousands $), our company uses the human-ish text based interface.
Lets say a user (on PC) wants to look up a customers phone line. They pull up IE, go to a web page, make the request into an ASP page, it gets stored in SQL Server.
Meanwhile, a perl program retrieves a different URL, which gives all pending requests. It keeps an open session onto the 5ESS, like a human would. It then does the human tasks, retrieves the typically 1 to 10 page report and starts parsing goodies out of it for return.
More than just 5ESS switches -- DSC telecom switches, some echo cancellers, satellite modems, lots of other devices require scraping to work with.
Don't be unfair to the author...... (Score:4, Insightful)
There. Now shut the fsck up about the issue.
I manage a few government web sites and this book has been tremendous help in writing the spiders that I use to crawl the sites and record HTTP responses that then generate reports about out of date pages, 404s and so on. That alone has made it worth the money.
Sean did a great job on this. His book doesn't deserve to be slammed for what the technology MAY be used for.
I deserve to be spammed? (Score:1)
So trying to provide the audience of my web-pages with some comfort (a single click and a decent (configurable) mail editor appears, which allows for the address to be bookmarked, the letter to be saved as a draft for later completion, sending a carbon copy to a friend etc.) should be re-payed with punishment like being spammed?
Are you one of those persons who also claim that if you leave your stuff unprotected then you deserve to have it stolen? what a sad society we live in...
LWP (Score:2, Interesting)
Re:Wow, just like Evolution! (Score:1)
Re:Wow, just like Evolution! (Score:1)
Great Book, Cool author (Score:3, Interesting)
In fact I wanted to write a review for this book, but obviously got beaten to the punch. My only wish(2nd edition perhaps) for this book is that it spent a little more time dealing with things like logging into sites, handling redirection, multi-page forms, dealing with stupid HTML tricks that try to throw off bots, etc. But for a first edition this is a great book.
Why HTML::TokeParser? (Score:1, Interesting)
I never liked that module. The tokens it returns are array references that don't even bother to keep similar elements in similar positions, thus forcing you to memorize the location of each element in each token type or repeatedly consult the docs. If you refuse to do event driven parsing, at least use something like HTML::TokeParser::Simple [cpan.org] which is pretty cool as it's a factory method letting you call accessors on the returned tokens. You just memorize the method names and forget about trying to memorize the token structures.
Or, you could save the money and look at OpenBooks (Score:3, Informative)
Along with the other comments listing many references for Perl & LWP, I don't think I'll be rushing out to spend the money quick-like...
LWP rocks (Score:2)
Back in the days when IPOs were hot (anyone remember them?), we wrote a client to place IPO orders on WitCapital's site automatically (when they had first-come, first-served allotments). In those days, it didn't really matter what IPO you got. All you had to do was get it and flip the same day, making a tidy sum of ca$h.
Later, we automated ordering on E*Trade's site. We wrote an application that would check their site for IPOs, fill-in the series of forms and submit the orders. Got many an IPO that way, and it was fun too.
Of course, who hasn't written an EBay sniper using a few lines of LWP?
Perl and LWP (Score:1)
Your money is likely better spent buying Friedl's Mastering Regular Expressions Second Edition for example, which just came out, and then being able to apply that knowledge to many situations. Screen-scraping sounds indeed like parsing HTML, in which case it should be a breeze to use regexes and CPAN modules dedicated to HTML and even modified XML parsers to do the job...after all the power of XML is user-defined tags, there's nothing stopping the user from specifying html tag events... | https://books.slashdot.org/story/02/08/19/134241/perl-lwp | CC-MAIN-2018-13 | refinedweb | 5,575 | 68.5 |
![if !(IE 9)]> <![endif]>
The article discusses the new capabilities of C++ language described in the standard C++0x and supported in Visual Studio 2010. By the example of PVS-Studio we will see how the changes in the language influence static code analysis tools.
The new C++ language standard is about to come into our life. They are still calling it C++0x, although its final name seems to be C++11. The new standard is partially supported by modern C++ compilers, for example, Intel C++ and Visual C++. This support is far from being full-fledged and it is quite clear why. First, the standard has not been accepted yet, and second, it will take some time to introduce its specifics into compilers even when it is accepted.:
namespace std { typedef decltype(__nullptr) nullptr_t; }:
for (vector<int>::iterator itr = myvec.begin(); itr != myvec.end(); ++itr):
for (auto itr = myvec.begin(); itr != myvec.end(); ++itr)
Besides mere convenience of writing the code and its simplification, the key word auto makes the code safer. Let us consider an example where auto will be used to make the code safe from the viewpoint of 64-bit software development:
bool Find_Incorrect(const string *arrStr, size_t n) { for (size_t i = 0; i != n; ++i) { unsigned n = arrStr[i].find("ABC"); if (n != string::npos) return true; } return false; };:
warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data
This is how Viva64 does it:
V103: Implicit type conversion from memsize to 32-bit type.:
auto n = arrStr[i].find("ABC"); if (n != string::npos) return true;:
void *AllocArray3D(int x, int y, int z, size_t objectSize) { int size = x * y * z * objectSize; return malloc(size); }:
warning C4267: 'initializing' : conversion from 'size_t' to 'int', possible loss of data
Relying on auto, an inaccurate programmer may modify the code in the following way:
void *AllocArray3D(int x, int y, int z, size_t objectSize) { auto size = x * y * z * objectSize; return (double *)malloc(size); }:
V104: Implicit type conversion to memsize type in an arithmetic expression.:
void Foo(int X, int Y) { auto AA = X * Y; size_t BB = AA; //V101 }():
decltype(Calc()) value; try { value = Calc(); } catch(...) { throw; }
You may use decltype to define the type:
void f(const vector<int>& a, vector<float>& b) { typedef decltype(a[0]*b[0]) Tmp; for (int i=0; i<b.size(); ++i) { Tmp* p = new Tmp(a[i]*b[i]); // ... } }
Keep in mind that the type defined with decltype may differ from that defined with auto.
const std::vector<int> v(1); auto a = v[0]; decltype(v[0]) b = 1; // type a - int // type b - const int& (returned value // std::vector<int>::operator[](size_type) const)
Let us look at another sample where decltype can be useful from the viewpoint of 64 bits. The function IsPresent searches for an element in a sequence and returns true if it is found:
bool IsPresent(char *array, size_t arraySize, char key) { for (unsigned i = 0; i < arraySize; i++) if (array[i] == key) return true; return false; }:
for (auto i = 0; i < arraySize; i++) if (array[i] == key) return true;
The variable "i" will have the type int because 0 has int type. The appropriate correction of the error lies in using decltype:
for (decltype(arraySize) i = 0; i < arraySize; i++) if (array[i] == key) return true;.
In the standard C++98, temporary objects can be passed into functions but only as a constant reference (const &). Therefore, a function cannot determine if it is a temporary object or a common one which is also passed as const &.
In C++0x, a new type of references is added - R-value reference. It is defined in the following way: "TYPE_NAME &&". It may be used as a non-constant, legally modified object. This innovation lets you take account of temporary objects and implement the move semantics. For example, if std::vector is created as a temporary object or returned from a function, you may simply move all the internal data from the reference of the new type when creating a new object. The move constructor std::vector simply copies through the reference to a temporary object it has received the pointer of the array situated in the reference which is emptied when the copying is over.
The move constructor or move operator may be defined in the following way:
template<class T> class vector { // ... vector(const vector&); // copy constructor vector(vector&&); // move constructor vector& operator=(const vector&); // copy assignment vector& operator=(vector&&); // move assignment };
From the viewpoint of analyzing 64-bit errors in code, it does not matter if '&' or '&&' is processed when defining the type. Therefore, the support of this innovation in VivaCore is very simple. Only the function optPtrOperator of Parser class underwent some modifications: we consider '&' and '&&' equally there.
From the viewpoint of C++98 standard, the following construct has a syntactical error:
list<vector<string>> lvs;
To avoid it, we should input a space between the two right angle brackets:
list<vector<string> > lvs;
The standard C++0x makes it legal to use double closing brackets when defining template types without adding a space between them. As a result, it enables us to write a bit more elegant code.
It is important to implement support for this innovation in static analyzers because developers will be very glad to avoid adding a lot of unnecessary spaces.
At the moment, parsing of definitions of template types with ">>" is implemented in VivaCore not very well. In some cases, the analyzer makes mistakes and it seems that we will significantly modify some analyzer's parts responsible for template parsing in time. Until it is done, you will meet the following ugly functions which use heuristic methods to determine if we deal with the shift operator ">>" or part of the definition of the template type "A<B<C>> D": IsTemplateAngleBrackets, isTemplateArgs. We recommend those, who want to know how to correctly solve this task, to see this document: "Right Angle Brackets (N1757)". In time, we will make processing of right angle brackets in VivaCore better.
Lambda-expressions in C++ are a brief way of writing anonymous functors (objects that can be used as functions). Let us touch upon some history. In C, pointers to a function are used to create functors:
/* callback-function */ int compare_function(int A, int B) { return A < B; } /* definition of sorting function */ void mysort(int* begin_items, int num_items, int (*cmpfunc)(int, int)); int main(void) { int items[] = {4, 3, 1, 2}; mysort(items, sizeof(items)/sizeof(int), compare_function); return 0; }
Earlier, the functor in C++ was created with the help of a class with an overloaded operator():
class compare_class { public: bool operator()(int A, int B) { return (A < B); } }; // definition of sorting function template <class ComparisonFunctor> void mysort (int* begin_items, int num_items, ComparisonFunctor c); int main() { int items[] = {4, 3, 1, 2}; compare_class functor; mysort(items, sizeof(items)/sizeof(int), functor); }
In C++0x, we are enabled to define the functor even more elegantly:
auto compare_function = [](char a, char b) { return a < b; }; char Str[] = "cwgaopzq"; std::sort(Str, Str + strlen(Str), compare_function); cout << Str << endl;
We create a variable compare_function which is a functor and whose type is determined by the compiler automatically. Then we may pass this variable to std::sort. We may also reduce the code a bit more:
char Str[] = "cwgaopzq"; std::sort( Str, Str + strlen(Str), [](char a, char b) {return a < b;} ); cout << Str << endl;
Here "[](char a, char b) {return a < b;}" is that very lambda-function.
A lambda-expression always begins with brackets [] in which you may specify the capture list. Then there is an optional parameter list and optional type of the returned value. The definition is finished with the function's body itself. On the whole, the format of writing lambda-functions is as follows:
'[' [<capture_list>] ']' [ '(' <parameter_list> ')' ['mutable' ] ] [ 'throw' '(' [<exception_types>] ')' ] [ '->' <returned_value_type> ] '{' [<function_body>] '}'
Note. Specification of exceptions in common and lambda-functions is considered obsolete nowadays. There is a new key word noexcept introduced but this innovation has not been supported in Visual C++ yet.
The capture list specifies what objects from the exterior scope a lambda-function is allowed to access:
Unfortunately, it is impossible to cover lambda-functions very thoroughly within the scope of this article. You may read about them in detail in the sources given in the references at the end of this article. To demonstrate using lambda-functions, let us look at the code of a program that prints the strings in increasing order of their lengths.
The program creates an array of strings and an array of indexes. Then the program sorts the strings' indexes so that the strings are arranged according to growth of their lengths:
int _tmain(int, _TCHAR*[]) { vector<string> strings; strings.push_back("lambdas"); strings.push_back("decltype"); strings.push_back("auto"); strings.push_back("static_assert"); strings.push_back("nullptr"); vector<size_t> indices; size_t k = 0; generate_n(back_inserter(indices), strings.size(), [&k]() { return k++; }); sort(indices.begin(), indices.end(), [&](ptrdiff_t i1, ptrdiff_t i2) { return strings[i1].length() < strings[i2].length(); }); for_each(indices.begin(), indices.end(), [&strings](const size_t i) { cout << strings[i] << endl; }); return 0; }
Note. According to C++0x, you may initialize arrays std::vector in the following way:
vector<size_t> indices = {0,1,2,3,4};
But Visual Studio 2010 has no support for such constructs yet.
The quality of analysis of lambda-functions in static analyzers must correspond to the quality of analysis of common functions. On the whole, analysis of lambda-functions resembles that of common functions with the exception that lambda-functions have a different scope.
In PVS-Studio, we implemented the complete diagnosis of errors in lambda-functions. Let us consider an example of code containing a 64-bit error:
int a = -1; unsigned b = 0; const char str[] = "Viva64"; const char *p = str + 1; auto lambdaFoo = [&]() -> char { return p[a+b]; }; cout << lambdaFoo() << endl;
This code works when compiling the program in the Win32 mode and displays the letter 'V'. In the Win64 mode, the program crashes because of an attempt to access the item with the number 0xFFFFFFFF. To learn more about this kind of errors, see the lessons on development of 64-bit C/C++ applications - "Lesson 13. Pattern 5. Address arithmetic".
When checking the code shown above, PVS-Studio generates the diagnostic message:
error V108: Incorrect index type: p[not a memsize-type]. Use memsize type instead.
Correspondingly, the analyzer must have parsed the lambda-function and make out the scope of variables to do this. It is a difficult yet necessary functionality.
The most significant modifications in VivaCore are related to lambda-function support. It is a new function rLamdas that participates in the process of building the parse tree. The function is situated in the class Parser and called from such functions as rInitializeExpr, rFunctionArguments, rCommaExpression. The function rLambdas parses lambda-functions and adds a new type of an object into the tree - PtreeLambda. The class PtreeLambda is defined and implemented in the files PtreeLambda.h and PtreeLambda.
Processing of PtreeLambda in the built tree is performed by TranslateLambda function. The whole logic of working with lambda-functions is concentrated in VivaCore. Inside TranslateLambda, you can see the call of the function GetReturnLambdaFunctionTypeForReturn implemented in PVS-Studio's code. But this function serves for internal purposes of PVS-Studio and an empty function-stub GetReturnLambdaFunctionTypeForReturn does not impact code parsing in VivaCore at all.
There are cases when it is difficult to determine the type returned by a function. Let us consider an example of a template function that multiplies two values by each other:
template<class T, class U> ??? mul(T x, U y) { return x*y; }
The returned type must be the type of the expression "x*y". But it is not clear what to write instead of "???". The first idea is to use decltype:
template<class T, class U> decltype(x*y) mul(T x, U y) //Scope problem! { return x*y; }
The variables "x" and "y" are defined after "decltype(x*y)" and this code, unfortunately, cannot be compiled.
To solve this issue, we should use a new syntax of returned values:
template<class T, class U> [] mul(T x, U y) -> decltype(x*y) { return x*y; }
Using the brackets [], we spawn a lambda-function here and say that "the returned type will be determined or defined later". Unfortunately, this sample cannot be compiled in Visual C++ by the moment of writing this article although it is correct. But we go an alternative way (where we also use Suffix return type syntax):
template<class T, class U> auto mul(T x, U y) -> decltype(x*y) { return x*y; }
This code will be successfully built by Visual C++ and we will get the needed result.
The version PVS-Studio 3.50 supports the new function format only partially. Constructs are fully parsed by VivaCore library but PVS-Studio does not take into consideration the data types returned by these functions in the analysis. To learn about support of an alternative record of functions in VivaCore library, see the function Parser::rIntegralDeclaration.
The standard C++0x has a new key word static_assert. Its syntax is:
static_assert(expression, "error message");
If the expression is false, the mentioned error message is displayed and compilation aborts. Let us consider an example of using static_assert:
template <unsigned n> struct MyStruct { static_assert(n > 5, "N must be more 5"); }; MyStruct<3> obj;
When compiling this code, Visual C++ compiler will display the message:
error C2338: N must be more 5 xx.cpp(33) : see reference to class template instantiation 'MyStruct<n>' being compiled with [ n=3 ]
From the viewpoint of code analysis performed by PVS-Studio, the construct static_assert is not very interesting and therefore is ignored. In VivaCore, a new lexeme tkSTATIC_ASSERT is added. On meeting this lexeme, the lexer ignores it and all the parameters referring to the construct static_assert (implemented in the function Lex::ReadToken).
There has been no key word to denote a null pointer before the standard C++0x in C++. To denote it, the number 0 was used. But a good style is to use the macro NULL. When opening the macro NULL, it turns into 0 and there is no actual difference between them. This is how the macro NULL is defined in Visual Studio:
#define NULL 0
In some cases, absence of a special key word to define a null pointer was inconvenient and even led to errors. Consider an example:
void Foo(int a) { cout << "Foo(int a)" << endl; } void Foo(char *a) { cout << "Foo(char *a)" << endl; } int _tmain(int, _TCHAR*[]) { Foo(0); Foo(NULL); return 0; }
Although the programmer expects that different Foo functions will be called in this code, it is wrong. It is 0 that will be put instead of NULL and that will have the type int. When launching the program you will see on the screen:
Foo(int a) Foo(int a)
To eliminate such situations, the key word nullptr was introduced into C++0x. The constant nullptr has the type nullptr_t and is implicitly converted to any pointer type or a pointer to class members. The constant nullptr cannot be implicitly converted to integer data types except for bool type.
Let us return to our example and add the call of the function Foo with the argument nullptr:
void Foo(int a) { cout << "Foo(int a)" << endl; } void Foo(char *a) { cout << "Foo(char *a)" << endl; } int _tmain(int, _TCHAR*[]) { Foo(0); Foo(NULL); Foo(nullptr); return 0; }
Now you will see:
Foo(int a) Foo(int a) Foo(char *a)
Although the key word nullptr is not relevant from the viewpoint of searching for 64-bit error, it must be supported when parsing the code. For this purpose, a new lexeme tkNULLPTR was added in VivaCore as well as the class LeafNULLPTR. Objects of LeafNULLPTR type are created in the function rPrimaryExpr. When calling the function LeafNULLPTR::Typeof, the type "nullptr" is coded as "Pv", i.e. "void *". From the viewpoint of existing tasks of code analysis in PVS-Studio, it is quite enough.
The standard C++0x introduces new standard classes referring to namespace std. Some of these classes are already supported in Visaul Studio 2010, for example:
Since these entities are usual template classes, they do not demand any modification of PVS-Studio or VivaCore library.
At the end of our article, I would like to mention one interesting thing related to using C++0x standard. On the one hand, the new features of the language make code safer and more effective by eliminating old drawbacks, but on the other hand, they create new unknown traps the programmer might fall into. However, I cannot tell you anything about them yet.
But one might fall into already known traps as well because their diagnosis in the new C++0x constructs is implemented much worse or not implemented at all. Consider a small sample showing the use of an uninitialized variable:
{ int x; std::vector<int> A(10); A[0] = x; // Warning C4700 } { int x; std::vector<int> A(10); std::for_each(A.begin(), A.end(), [x](int &y) { y = x; } // No Warning ); }
The programmer might hope to get a warning from the compiler in both cases. But in the example with the lambda-function, there will be no diagnostic message (it was tried on Visual Studio 2010 RC, /W4) - like there have not been many other warnings about various dangerous situations before. It needs some time to implement such diagnosis.
We may expect a new round in development of static analyzers concerning the topic of searching for potentially dangerous constructs that occur when using C++0x constructs. We position our product PVS-Studio as a tool to test contemporary programs. At the moment, we understand 64-bit and parallel technologies by this term. In the future, we plan to carry out an investigation into the question what potential issues one may expect using C++0x. If there are a lot of traps, perhaps we will start developing a new tool to diagnose them.
We think that C++0x brings many good features. Obsolete code does not demand an immediate upgrading, although it may be modified during refactoring in time. What the new code is concerned, we may write it already with the new constructs. So, it seems reasonable to start employing C++0x right now. ... | https://www.viva64.com/en/a/0061/ | CC-MAIN-2019-35 | refinedweb | 3,068 | 50.77 |
Jeff wrote: > On Mon, Aug 9, 2010 at 11:25 AM, Dino Viehland <dinov at microsoft.com> wrote: > > clr.LoadClrExtensions(namespace) > > I'd like to paint the bikeshed 'clr.ImportExtensions()', if possible, > and have it take either a namespace or a static class - i.e. > System.Linq or System.Linq.Enumerable. > > import clr, System > clr.ImportExtensions(System.Linq) > > Looks fine to me. > > For a shorter syntax, I would prefer > > from System.Linq.Enumerable import * > > and have it pull in any extension methods in that class. My only > concern is that it's a little too magical, and doesn't match the usual > behaviour of an import statement. +1 on allowing the types, that's a good idea. My biggest concern w/ using import semantics for this is Brett's goal (which I am +100 on) to move to a Python implementation of import. Once that happens it'll be either very difficult or impossible to make this syntax work. Even import clr will need to change to be something we recognize at compile time like from __future__. So I'm inclined not to do that - plus it may be boarding too close on embracing and extending :) | https://mail.python.org/pipermail/ironpython-users/2010-August/013472.html | CC-MAIN-2017-30 | refinedweb | 196 | 68.06 |
Want more? Here are some additional resources on this topic:.
An enumerator may not contain white space in its name.};
static void Main()
{.
using System;
For more information, see the following sections in the C# Language Specification:
1.10 Enums
6.2.2 Explicit Enumeration Conversions
14 Enums
Please allow implicit enum conversion in C#. Forcing explicit conversion does not help code readability, in fact it does exactly the opposite.
what's the sense in:switch((MyType)SomeInt){case MyType.E0:SomeOtherInt = (int)MyType.E1;break;}wouldn't it be nicer this way:switch(SomeInt){case MyType.E0:SomeOtherInt = MyType.E1;break;}?
??
I was wondering if enums in C# could possibly be used to do the following:BookType {HARDCOPY = "expensive",EBOOK = "cheap"}String bookTypePriceIdea = (String) BookType.HARDCOPY;
Also, enum in C#, unlike their counterparts in other languages DO NOT enforce the accepted values, if you declare an enum with values ranging from 1 to 7, and I pass a "(<enumName>)10 " to your function as a parameter, it will be accepted. C/C++ overlap the values, so if I passed 10, the compiler would switch it to 3 during compiling time. C# does not do this.
To finish, enums are ONLY int constants, the purpose of their name is simply to make it easier for you to remember the correct value to pass (easier to remember 'ReturnValue.Success' than '5'). Any other language with much tigher enum specification allow free conversion from the enum to its underlying type. C/C++ even allow you do to aritmetic operations straight with the enum values (and overlap the result accordingly). VB.NET allows free conversion to the underlying type. There is no reason to not to allow it as well in C#.
public static BookType
{
public const string HARDCOPY = "expensive";
public const string EBOOK = "cheap";
}
public struct BookType
Perhaps Leahn simply left out the keyword class in her code.public static class BookType{public const string HARDCOPY = "expensive";public const string EBOOK = "cheap";public const string kindle = "other";}certainly would work.However I agree with Kozmoknot - if this is the extent of the members there is no reason to use a class. One would be better off using a struct. In the CLR, a struct is a value type, is lighter weight and allocated directly from the stack. | http://msdn.microsoft.com/en-us/library/sbbt4032(VS.80).aspx | crawl-002 | refinedweb | 385 | 56.76 |
Opened 10 years ago
Closed 8 years ago
#9847 closed New feature (fixed)
Improved handling for HTTP 403 Errors
Description
in core/handlers/base.py is hardcoded an h1:
<h1>Permission denied</h1>
so this string not transable,
regards
drakkan
Attachments (5)
Change History (37)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
Changed 10 years ago by
comment:3 Changed 10 years ago by
comment:4 Changed 10 years ago by
comment:5 Changed 10 years ago by
comment:6 Changed 10 years ago by
comment:7 Changed 10 years ago by
Changed 9 years ago by
core/base.diff
comment:8 Changed 9 years ago by
There is no need to have a message at all. 'Permission Denied' is presumed with the 403 HttpResponseForbidden error. If a message must be delivered, it should be added to django/http/init.py for all HttpResponseForbidden errors.
comment:9 Changed 9 years ago by
I think there should be content in the response, since otherwise you'd show a completely blank page to an end user, requiring them to know about HTTP status codes and how to find out what status code we sent back.
I can see a couple options for doing this:
- Don't bother trying to translate the response content, and just use the language in the HTTP spec (so just "403 Forbidden").
- Set a default
contentvalue for
HttpResponseForbidden, and mark it for translation. I think we can do that safely since it's over in
django.httpand settings will already be set up by the time it gets imported.
comment:10 Changed 9 years ago by
Also, this is a minor enough issue that it really ought to happen on 1.3 or a 1.2.X bugfix release.
comment:11 Changed 9 years ago by
I take back what I said about not having a message at all.
- It shouldn't be hardcoded in base.py, maybe it should be in source:django/trunk/django/http/__init__.py ?
- All the non-200 HTTP errors should derive from a master error class. Right now it's kind of a hodge-podge it seems.
I agree that milestone:1.3 is best.
comment:12 Changed 9 years ago by
Why not create an exception subclass which would be raised in base.py and caught by a middleware to display a custom 403 page, like with Http404 ?
comment:13 Changed 9 years ago by
OK, I see middleware is not involved in this process, but it's possible to add a method to the resolver, resolver.resolve403() just like resolve404 and resolve500 which could render 403.html template and if that fails just return 'Permission Denied'
Changed 9 years ago by
Changed 9 years ago by
Sorry base.3.diff was wrong
comment:14 Changed 9 years ago by
So here's the patch, I think that this way it would be more flexible so that you can customize the content of 403 as much as you wish. Besides it makes exception handling more consistent, you just create 403.html in the root of your templates dir just like for 404 and 500. What do you think?
comment:15 Changed 9 years ago by
comment:16 Changed 9 years ago by
comment:17 Changed 9 years ago by
comment:18 Changed 9 years ago by
comment:19 Changed 9 years ago by
I'm going to try and get this ready for checkin today at the djangocon sprint.
comment:20 Changed 9 years ago by
Changed 9 years ago by
Added tests and docs... anyone care to test my tests?
comment:21 Changed 9 years ago by
comment:22 Changed 9 years ago by
Need some more feedback on this one, so we're going to hold off... in any case, the tests will be useful even if we do something other than a bunch of resolve4xx stuff. Slippery slope... we don't really want 17 different resolve401, resolve402 etc.
comment:23 Changed 9 years ago by
Tests and docs are fine. As noted, I'd like a way to avoid requiring the module-level
resolver403 function. Those functions are really annoying (they require
import * from the default url stuff to be used, for example).
comment:24 Changed 8 years ago by
Just for the record, after [13590] the
handlerXXX
functions don't require an "import *". This still calls for a more general solution then just putting another function. Also, the current patch required a
"403.html"
template, but doesn't provide it. This gets catch by
"except:"``, but that's at least bad style to me. If I make a syntax error in the ``"403.html"
I would expect a 500 and a trace, so that my tests fail when checking the returned status code.
comment:25 Changed 8 years ago by
1.3 is feature-frozen now.
comment:26 Changed 8 years ago by
Bump. Is this in the running for 1.3.x?
comment:27 Changed 8 years ago by
comment:28 Changed 8 years ago by
It can't go in 1.3.X, as we don't backport features. It could get into 1.4. It currently has 'Patch needs improvement' for some reason. If that is resolved, and it can promoted to 'Ready for checkin', it stands a good chance for 1.4. The way to get things included is to follow our contributing howto to make sure the ticket moves along nicely.
(In [10346]) Fixed #9847: mark the permission denied message for translation. | https://code.djangoproject.com/ticket/9847 | CC-MAIN-2019-13 | refinedweb | 927 | 73.37 |
smbus2 is a drop-in replacement for smbus-cffi/smbus-python in pure Python
Project description
Introduction
smbus2 is (yet another) pure Python implementation of the python-smbus package.
It was designed from the ground up with two goals in mind:
- It should be a drop-in replacement of smbus. The syntax shall be the same.
- Use the inherent i2c structs and unions to a greater extend than other pure Python implementations like pysmbus does. By doing so, it will be more feature complete and easier to extend.
Currently supported features are:
- Get i2c capabilities (I2C_FUNCS)
- read_byte_data
- write_byte_data
- read_word_data
- write_word_data
- read_i2c_block_data
- write_i2c_block_data
It is developed on Python 2.7 but works without any modifications in Python 3.X too.
Code examples
smbus2 installs next to smbus as the package, so it’s not really a 100% replacement. You must change the module name.
Example 1a: Read a byte
from smbus2 import SMBus # Open i2c bus 1 and read one byte from address 80, offset 0 bus = SMBus(1) b = bus.read_byte_data(80, 0) print(b) bus.close()
Example 1b: Read a byte using ‘with’
This is the very same example but safer to use since the smbus will be closed automatically when exiting the with block.
from smbus2 import SMBusWrapper with SMBusWrapper(1) as bus: b = bus.read_byte_data(80, 0) print(b)
Example 2: Read a block of data
You can read up to 32 bytes at once.
from smbus2 import SMBusWrapper with SMBusWrapper(1) as bus: # Read a block of 16 bytes from address 80, offset 0 block = bus.read_i2c_block_data(80, 0, 16) # Returned value is a list of 16 bytes print(block)
Example 3: Write a byte
from smbus2 import SMBusWrapper with SMBusWrapper(1) as bus: # Write a byte to address 80, offset 0 data = 45 bus.write_byte_data(80, 0, data)
Example 4: Write a block of data
It is possible to write 32 bytes at the time, but I have found that error-prone. Write less and add a delay in between if you run into trouble.
from smbus2 import SMBusWrapper with SMBusWrapper(1) as bus: # Write a block of 8 bytes to address 80 from offset 0 data = [1, 2, 3, 4, 5, 6, 7, 8] bus.write_i2c_block_data(80, 0, data)
Installation instructions
smbus2 is pure Python code and requires no compilation. Installation is easy:
python setup.py install
Or just use pip
pip install smbus2
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/smbus2/0.1.4/ | CC-MAIN-2019-30 | refinedweb | 423 | 62.68 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to add 42 days to an date field?
How can I add 42 days to an date-field?
I need help with that, I want to add 42days to the invouce_payed field and put it on a new date.field like:
pay_date_two
@api.onchange('invoice_payed')
def _check_change(self):
self.pay_date_two
# = self.invoice_payed
self.pay_date_two = self.pay_date_two + datetime.timedelta(days=42)
Dear Wizardz,
try this code
wow amazing. what was the problem with that bool? it's because invoice_payed comes as a bool back if it's emptry?
Yes, I think that..
Hi Wizardz,
if your date filed is empty (meaning you have not stored any value in it yet), the ORM will return False.
That's why, you must use the if self.invoice_payed to make sure the field contains a valid datetime value.
@api.onchange('invoice_payed')
def _check_change(self):
if self.invoice_payed:
date_1 = datetime.datetime.strptime(self.invoice_payed, "%Y-%m-%d")
self.pay_date = date_1 + datetime.timedelta(days=42)
Best regards
Yvan
I have fix it with that:
@api.onchange('invoice_payed')
def _check_change(self):
date_1 = datetime.datetime.strptime(self.invoice_payed, "%Y-%m-%d")
self.pay_date = date_1 + datetime.timedelta(days=42)
Now when I create a new record it gives me this error:
date_1 = datetime.datetime.strptime(self.invoice_payed, "%Y-%m-%d")
TypeError: must be string, not bool
self.invoice_payed , why is this a bool when I have a date field ?
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-add-42-days-to-an-date-field-108946 | CC-MAIN-2017-43 | refinedweb | 286 | 62.85 |
)
Programming
January 2006 Entries
Enterprise Libray for .NET 2.0 is out!
The waiting is over. Microsofts Patterns & Practices Group has just released the new Enterprise Library for .NET 2.0 to the public. It is an easy to use and graphically configurable collection of application blocks that help you to build Enterprise Scale managed applications. This is release is for: Anybody who is interested in best programming practices for .NET 2.0. If you need a good Caching solution which survives an application crash. You want to log messages to the Event Log, Email them ......
Share This Post:
Short Url:
Posted On
Friday, January 20, 2006 7:06 PM
|
Why Unit Tests and TDD are good
With the advent of unit test frameworks like JUnit, NUnit, MBUnit, ... and the new Visual Studio Team System a plethora of articles and books spread the news that unit tests are a nice thing in every programmers toolbox. We all know that solid software engineering requires a certain amount of testing. Everybody knows it and only few actually do it. This might have to do with pressing time schedules and human laziness which most programmers follow: Achieve good results with the least amount of work. ......
Share This Post:
Short Url:
Posted On
Tuesday, January 10, 2006 9:45 PM
|
Read/Write App.config with .NET 2.0/Enterprise Library
This time I would like to show you the most important changes in the System.Configuration namespace with .NET 2.0. I have looked at my blog referrer statistics and saw about 20 hits/day by google. Most of them were searching infos how to configure the new Enterprise Library but also a significant number of people which seem to seek guidance to the following questions: How to read/write to App.Config? How to store a list of objects in a config file via the System.Configuration mechanism? Reason enough ......
Share This Post:
Short Url:
Posted On
Wednesday, January 4, 2006 11:00 PM
|
Skin design by
Mark Wagner
, Adapted by
David Vidmar | http://geekswithblogs.net/akraus1/archive/2006/01.aspx | CC-MAIN-2016-07 | refinedweb | 340 | 64.3 |
Type: Posts; User: armen_shlang
I dont know why, when I query for items using EF 4.0 all the children are loading by default. I never specified this why is this happening?
I'm receiving this weird exception when calling a WCF function that returns a list of items. apparently when the size of list gets bigger this exception is thrown.
"An error occurred while...
No I'm not trying to write to it, only change the value before it's used by the WCF client call.
thanks for the reply
I used Service Reference to create a WCF client class (much like the svcutil.exe tool). It generated this section in the app.config automatically:
<system.serviceModel>
<bindings>
...
I figured it out.
I have to call refresh.(refreshmode.ClientWin, objects). and that works!
EF 3.5 and SSCE 3.5 suck bad.
I wish Microsoft sticks with ONE version of a framework, and fixes it's...
hey Arjay,
I created by project and you were right, using WCF requires a bit more work, but easier to maintain and adds the advantage for my GUI to work from a remote location.
HOWEVER, I'm...
Hello,
I have an EF model which includes a list of planes, and each plane has a list of passengers. I would like to write a linq query that from the resulting list of planes, I can do this:
...
I'm sold, thank you very much for sharing your experience that was very helpful. btw, How long did it take you to finish that project? And if you don't mind me asking, how much would a project like...
Thx for the reply.
Good points. I'm not too familiar with security concepts, but I understand the abstraction idea.
The only reason why I'm hesitant to do so are two reason:
- introduction to...
Thanks for the reply,
What's the benefit of choosing opt 2? It would slow down transactions, and introduce complexity (if all ports are closed on the client machine WCF wouldn't work).
Hello,
I have a software and I cannot decide which approach would be the proper method. I blame on lack of experience!
So I have a Windows Service running in the background, and it's populating a...
I cant find a good tutorial or a resource on how to create a custom tab controller in blend.
I messed around with the controller feature but since there are no good guide or a help file I'm stuck....
I have something like this:
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;
struct boy
I had to check since we are doing the same project at ucla and it's due tommorow. I have a battleship program fresh from the oven. and it's in english. but I didn;t look at your requirements.
I'm guessing your NOT going to UCLA. and NOT taking the cs32 class who the professor is NOT SmallBerg.
Lol, a bet with a friend!
I have a function like such
#include <cstdlib>
inline int randInt(int limit)
{
return limit <= 1 ? 0 : (rand() % limit);
}
That was my mistake wen i posted it. The problem like i said is that in the arena constructor, m_block() can't take more than one argument.
However, m_block's constructor does take 2 argumets....
I will
sorry for the trouble you went through
I have a class like this:
#include "History.h"
class arena
{
public:
arena(some arguments)
: m_history(int num1, int num2) {};
typedef hash_map<int, list<string> > EXCEL_ROW;
EXCEL_ROW row;
why is typedef used here?
couldn't i just say...
hash_map<int, list<string> > row;
Does it have any additional...
Well if your specifically want to delete the last element yes, but if it's a general loop that deletes lets say blocks every time a ball hits them, then u need to use erase.
Thx all.
I see --- so "--iter" would fix the problem!!!
also I was reading this book and I noticed this example which I've never seen it before... what does it signify?
list<char> list1 = make<...
small question about LIST stl...
My code looks something like this...
#include <list>
#include <iostream>
using namespace std;
Here is an example by author:
---------------------------------------------
/* Filename: diction.cpp
Author: Br. David Carlson
Date: July 14, 1999
Last Revised: December 23, 2001
...
Thank you,
I didn't know How to have the m_firstJob() to take only one argument also I didn't know about the constructor and how to write the member initialization list.
One last thing...
How... | http://forums.codeguru.com/search.php?s=4d881700dbc49c67212d2a6cb7fdccc4&searchid=3732019 | CC-MAIN-2014-23 | refinedweb | 758 | 76.11 |
Type.GetMethods Method (BindingFlags)
When overridden in a derived class, searches for the methods defined.MethodInfo[]
An array of MethodInfo objects representing all methods defined for the current Type that match the specified binding constraints.
-or-
An empty array of type MethodInfo, if no methods are defined for the current Type, or if none of the defined methods match the binding constraints.
The GetMethods method does not return methods in a particular order, such as alphabetical or declaration order. Your code must not depend on the order in which methods are returned, because that order varies.-public methods (that is, private, internal, and protected methods) in the search. Only protected and internal methods on base classes are returned; private methods methods declared on the Type, not methods that were simply inherited.
See System.Reflection.BindingFlags for more information. creates a class with two public methods and one protected method, creates a Type object corresponding to MyTypeClass, gets all public and non-public methods, and displays their names.
using System; using System.Reflection; using System.Reflection.Emit; // Create a class having two public methods and one protected method. public class MyTypeClass { public void MyMethods() { } public int MyMethods1() { return 3; } protected String MyMethods2() { return "hello"; } } public class.Name); } } }
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0 | https://msdn.microsoft.com/en-US/library/4d848zkb(d=printer).aspx | CC-MAIN-2016-07 | refinedweb | 230 | 50.02 |
public class Main {
public static void main(String[] args) throws FileNotFoundException {
double score;
String line;
String fname;
String lname;
String filename = JOptionPane.showInputDialog("Filename: ");
FileInputStream input_file = new FileInputStream(filename);
Scanner input = new Scanner(input_file);
while (input.hasNextLine()) {
line = input.nextLine();
System.out.println(line);
}
}
}
Scanner is reading a text file that is formatted like this:
Bob Smith 99.0
John Doe 89.5
Douglas Adams 42.0
The program as is works when I enter the name of a txt file that's in the root directory.
However, I'm supposed to use printf(); to get the first/last names and scores to line up.
In order to do this I'm guessing (probably wrong) that I need to create 2 string variables for first and last names and 1 double variable for the scores.
I can't figure out how to do this.
Is there a way I can use scanner.next.. to separate each line in the text file into individual strings/doubles?
Thanks in advance. | http://www.javaprogrammingforums.com/file-i-o-other-i-o-streams/1046-creating-variables-text-file.html | CC-MAIN-2013-48 | refinedweb | 168 | 68.36 |
Handling Attributes and Namespaces
XML namespaces and attributes are similar in some respects but different in others. Attributes are name-value pairs that are typically used to hold metadata related to the element. Namespaces are considerably more complex in purpose. They serve to distinguish elements and attributes from different sources that have identical names but different meanings. They are also used to group related attributes and elements so that a processing application can easily recognize them. Namespaces are declared in a manner similar to the way attributes are, but with an
xmlns:prefix name assigned a value that is a URI, for example:
This construction maps the prefix to a unique URI identifier; thereafter the prefix can be used to identify the namespace of an attribute or element (for example, “<emp:title></emp:title>”). Default, prefix-less namespaces can also be declared that affect all current and descendent elements and attributes past the point of declaration, unless they are overridden by another namespace.
Although the purposes of attributes and namespaces are different, they are conceptually similar. They cannot be children of an element node, but they are always closely associated with one. Indeed the associated element is their parent. Even though they are NSXML nodes—NSXMLNode objects of kinds
NSXMLAttributeKind and
NSXMLNamespaceKind—they cannot have children and cannot be children of any node. (Namespaces, however, can qualify the attributes of an element as well as the element itself.) Namespace and attribute nodes are not encountered during document traversal.
The programmatic interface of NSXMLElement reflects this architectural affinity. It offers similar sets of methods for namespace nodes and attribute nodes. This articles explains how to use these methods and then discusses a unique feature of the namespace API: resolving namespaces.
Methods for Manipulating Attributes and Namespaces
The NSXMLElement class defines methods for manipulating and accessing an element’s attributes that are nearly identical in form to another set of methods for manipulating and accessing namespace nodes. Table 1 lists these complementary sets of methods.
The names of these methods clearly indicate what you use them for, but some comments about each category of method are warranted:
The
add...and
set...methods usually require you to create NSXMLNode objects of the appropriate kind (
NSXMLAttributeKindor
NSXMXLNamespaceKind) before adding or setting the object. To create these node objects, you can use the NSXMLNode class factory methods
namespaceWithName:stringValue:,
attributeWithName:stringValue:, and
attributeWithLocalName:URI:stringValue:. The last of these methods creates an attribute that is bound to a namespace identified by the URI parameter.
The
setAttributesAsDictionary:method lets you set an element’s attributes without having to create NSXMLNode objects first. The keys in the dictionary are the names of the attributes and the values are the string values of the attributes.
All of the methods that set attributes or namespaces of an element remove all existing attributes or namespaces.
The methods that remove an attribute or namespace from an element, or that access a particular attribute or namespace, require you to know the name of the attribute or the prefix of the namespace. For an attribute node, you can simply ask the node for its name using the NSXMLNode
namemethod. For a namespace node, however, the
namemethod returns a qualified name (that is, the prefix plus the local name, separated by a colon). You can obtain the prefix by invoking the NSXMLNode class method
prefixForName:, passing in the qualified name.
The
attributeForLocalName:URI:requires you to supply the local (non-qualified) name of an attribute as well as the namespace URI it’s bound to. If you can access the associated namespace node, you can obtain the URI by sending the node a
stringValuemessage. You can get the local name from the qualified name by using the NSXMLNode class method
localNameForName:.
If you want to access or remove an existing namespace or attribute node, you can obtain a reference to its element by sending the namespace or attribute node a
parentmessage.
Once you have accessed a specific namespace or attribute node, you can get or set its string or object value (see Changing the Values of Nodes for details). Bear in mind that the value of a namespace node is the URI of the namespace; if you want to set the URI as an object value, make it an NSURL object.
Resolving Namespaces
If your application knows (or suspects) that it might be dealing with XML from different sources or authored in different XML vocabularies, such as XSLT and RDF, it has to deal with namespaces. At any point of processing it might have to know the namespace to which an element or attribute is bound in order to handle that element or attribute appropriately. A case in point is the
set element, which is defined by both SVG and MathML for different purposes. Before you can determine the meaning of a
set element in a document containing both SVG and MathGL markup, you have to find out which namespace it belongs to. To find out the namespace affiliation of an element you must resolve it.
For namespace resolution, NSXMLElement declares two methods beyond the ones discussed in Methods for Manipulating Attributes and Namespaces.
The first method takes the qualified or local name of an element and returns the namespace node representing the namespace to which the element belongs. Your application can get the URI from the node (its string value) and compare it to a list of known or expected URIs to determine namespace affiliation. If there is no associated namespace, the
resolveNamespaceForName: method returns
nil.
The second namespace-resolution method,
resolvePrefixForNamespaceURI:, works in the opposite direction. You pass in a URI and get back the prefix bound to that URI. You can use this method, for example, when you are adding elements to a document and need to know the prefixes of their qualified names.
Copyright © 2004, 2013 Apple Inc. All Rights Reserved. Terms of Use | Privacy Policy | Updated: 2013-09-18 | https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/NSXML_Concepts/Articles/AttributesNamespaces.html | CC-MAIN-2015-14 | refinedweb | 990 | 51.89 |
DIRECTORY(3) BSD Programmer's Manual DIRECTORY(3)
NAME
opendir, readdir, telldir, seekdir, rewinddir, closedir, dirfd - directo-
ry operations
SYNOPSIS
#include <<sys/types.h>>
#include <<dirent.h>>
DIR *
opendir(const char *filename);
struct dirent *
readdir(DIR *dirp);
long
telldir(const DIR *dirp);
void
seekdir(DIR *dirp, long loc);
void
rewinddir(DIR *dirp);
int
closedir(DIR *dirp);
int
dirfd(DIR *dirp);
DESCRIPTION
The opendir() function opens the directory named by filename, associates
a directory stream with it and returns a pointer to be used to identify
the directory stream in subsequent operations. The pointer NULL is re-
turned if filename cannot be accessed, or if it cannot malloc(3) enough
memory to hold the whole thing.
The readdir() function returns a pointer to the next directory entry. It
returns NULL upon reaching the end of the directory or detecting an in-
valid seekdir() operation. unde-
tected directory compaction. It is safe to use a previous telldir() val-
ue immediately after a call to opendir() and before any calls to
readdir().).
Sample code which searchs a directory for entry ``name'' is:
len = strlen(name);
dirp = opendir(".");
while ((dp = readdir(dirp)) != NULL)
if (dp->d_namlen == len && !strcmp(dp->d_name, name)) {
(void)closedir(dirp);
return FOUND;
}
(void)closedir(dirp);
return NOT_FOUND;
SEE ALSO
open(2), close(2), read(2), lseek(2), dir(5)
HISTORY
The opendir(), readdir(), telldir(), seekdir(), rewinddir(), closedir(),
and dirfd() functions appeared in 4.2BSD.
4.2 Berkeley Distribution June 4, 1993 2 | http://modman.unixdev.net/?sektion=3&page=telldir&manpath=4.4BSD-Lite2 | CC-MAIN-2017-39 | refinedweb | 242 | 54.93 |
I have a listbox and what I basically want to achieve is as I click a value within the listbox I want the value clicked to be passed to a function.
I could easily do this using a button and listbox.GetValue() but I believe the quality of my finished code would benefit by having the listbox do this for me.
The listbox obviously knows its being clicked as it highlights the value in blue. I would like it to send this value to a function, or just call a function similar to the one below.
import wx class Trial(wx.Frame): def __init__(self, parent, id): wx.Frame.__init__(self, parent, id, 'Trial', size = (300, 300)) self.panel = wx.Panel(self) maincats = wx.StaticText(self.panel, label = "Main Catagory") self.mainselect = wx.ListBox(self.panel, style = wx.LB_SINGLE) y = (1,2,3,4,5,6,7,8) for x in y: self.mainselect.Insert(str(x), 0) self.grid = wx.GridSizer(2,2,10,5) self.grid.Add(maincats) self.grid.Add(self.mainselect) self.panel.SetSizer(self.grid) def OnListBoxClick(self, event): value = self.mainselect.GetValue() print value app = wx.App(redirect = False) frame = Trial(parent = None, id = -1) frame.Show() app.MainLoop()
I could easily call this function with a button, but I believe having the listbox call this function when a value is selected would be a better option.
Can anybody point me in the right direction here?
The parameters for the insert method are as follows.
Insert(self, item, pos, clientData=None)
"Insert an item into the control before the item at the pos index, optionally associating some data object with the item."
Can a data object be a function?
Thanks in advance.
Edited by Archenemie: n/a | https://www.daniweb.com/programming/software-development/threads/348735/wx-listbox-selection-to-call-function | CC-MAIN-2017-13 | refinedweb | 293 | 52.56 |
Details
- Type:
Improvement
- Status:
Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 1.1-rc-2
-
- Component/s: groovy-runtime
- Labels:None
- Number of attachments :
Description
As discussed with Graeme, take a Grails plugin for example that might supply a proxy "render" method for controllers, that checks for certain parameters and if present does something, else defers to the pre-existing render() method. There are several default render(...) variants overloaded, but this plugin replaces only a specific one:
def oldRender = controllerClass.metaClass.render controllerClass.metaClass.render = { Map m, Closure c -> if (!something) oldRender(m, c) }
This may work sometimes if the render method retrieved from the EMC is the one that takes Map, Closure. Sometimes it may not be though, and then you get method invocation errors when it tries to pass in the Map and Closure.
Currently the workaround is to use metaClass.getMetaMethod( name, argTypes) but this gives you a different invocation paradigm - i.e. you must call invoke() on the MetaMethod.
A couple of features then offer themselves as part or whole solutions:
- Make MetaMethod support call()/doCall() as well as invoke()
- Make EMC's methodMissing return a new MultiMetaMethodProxy object that, when call() is invoked, automatically determines which overloaded method to invoke. This could be dangerous, needs further assessment. This is the most elegant approach in my opinion, but I don't believe anywhere else in groovy treats all overloaded forms of a method as a "single" method and as such the terminology may not work. The concept is great, but maybe Groovy needs a new term for this, such as "Message" as in other OO languages. i.e. a MetaMessage is some invokation you can perform on an object using MetaMessage.call(args) but the exact method that will be called is dependent on the args, i.e. it knows about all the possible MetaMethods.
- Add a new findMethod(name, argTypes) that returns a method reference instead of MetaMethod. | http://jira.codehaus.org/browse/GROOVY-2310 | CC-MAIN-2014-15 | refinedweb | 324 | 54.52 |
kevin hudsonPro Student 9,652 Points
How to do this in repl.it please? Or maybe Visual Studio?
I'm not going to use csharp console for this course. I've been using repl.it to follow along and I need help understanding how to make the query work in repl.it
My code :
Birds.cs
//Objec initialization public class Bird { public string Name{get; set;} public string Color{get; set;} public int Sightings{get; set;} }
Main.cs
using System; using System.Linq; using System.Collections.Generic; class MainClass { public static void Main (string[] args) { List<Bird> birds = new List<Bird> { new Bird { Name = "Cardinal", Color = "Red", Sightings = 3 }, new Bird { Name = "Dove", Color = "White", Sightings = 2 } }; var canary = new Bird { Name = "Canary", Color = "yellow", Sightings = 0 }; birds.Add(canary); foreach(Bird bird in birds){ Console.WriteLine(bird.Name + " " + bird.Color + " " + bird.Sightings); } query = from b in birds where b.Color == "Red" select b; Console.WriteLine(query); } }
I'm having an issue with trying to display the LINQ query after the C# foreach loop data
query = from b in birds where b.Color == "Red" select b; Console.WriteLine(query);
repl.it link :
Someone else's repl example of LINQ but not a list:
I just can't follow along with the console. Does anyone know if I can use Visual Studio? Which project should I select? Do I need to install a LINQ package in VS?
1 Answer
Steven Parker194,132 Points
Two things to keep in mind about using the workspaces:
- It's easier to follow along using the same tools as the instructor. You need to be very familiar with any external tool to know when (and how) to alter the steps the instructor is using.
- With a workspace, it's easy to create a "snapshot" and share a link to it so other students can replicate and analyze your issue if you need help.
But with that said, you can use an external build environment. And if you were missing the LINQ package, you should get a compiler error from the "
using System.Linq;" directive. | https://teamtreehouse.com/community/how-to-do-this-in-replit-please-or-maybe-visual-studio | CC-MAIN-2020-24 | refinedweb | 348 | 75.3 |
How to reverse the order of words in a text string is a frequently asked interview question. The application use of this problem is that if there are two words separated by a space, a typical example is "LastName FirstName". Then reversing the order of words will arrange them like "FirstName LastName".
Reversing the order of words in a string is two step solution:
The following piece of code reverses the order of words in given string. For demonstration purpose, the string is hard-coded in the code itself; while in real world input will be taken either from keyboard or file.
#include <stdio.h> #include <string.h> #define MAXLEN 100 void reverseString(char*, int, int); void reverseWords(char*, int); int main() { char string[MAXLEN] = "Kumar Krishan"; printf("Input String: %s\n", string); reverseString(string, 0, strlen(string) - 1); reverseWords(string, strlen(string)); printf("Output String: %s\n", string); } void reverseString(char* buffer, int startIndex, int endIndex) { int ch, i, j; for (i = startIndex, j = endIndex; i < j; i++, j--) { ch = buffer[i]; buffer[i] = buffer[j]; buffer[j] = ch; } } void reverseWords(char* buffer, int strLength) { if (*buffer == '\0') // check for null return; // extract a word from the buffer and reverse it // to get back the original word. int startWord = 0, endWord = 0; while (endWord < strLength) { While (buffer[endWord] != ' ' && buffer[endWord] != '\0') { endWord++; } reverseString(buffer, startWord, endWord-1); startWord = ++endWord; } } OUTPUT ====== [krishan@localhost ~/cprogs]$ gcc rev_order.c -o rev_order [krishan@localhost ~/cprogs]$ ./rev_order Input String: Kumar Krishan Output String: Krishan Kumar
Above C program has two functions viz
reverseString and
reverseWords. Both are quite straight forward. Function
reverseString reverses the string; whereas
reverseWords reverses individual words in a string. This program is just for demonstrating how to reverse the order of words in a string and tackles only single space to delimit words, new-line and tabs are not handled here.
Hope you have enjoyed reading this answer for reversing the order of words in a string. | http://cs-fundamentals.com/tech-interview/dsa/reverse-order-of-words-in-string.php | CC-MAIN-2017-17 | refinedweb | 327 | 51.28 |
The
Searchable concept represents structures that can be searched.
Intuitively, a
Searchable is any structure, finite or infinite, containing elements that can be searched using a predicate. Sometimes,
Searchables will associate keys to values; one can search for a key with a predicate, and the value associated to it is returned. This gives rise to map-like data structures. Other times, the elements of the structure that are searched (i.e. those to which the predicate is applied) are the same that are returned, which gives rise to set-like data structures. In general, we will refer to the keys of a
Searchable structure as those elements that are used for searching, and to the values of a
Searchable as those elements that are returned when a search is successful. As was explained, there is no requirement that both notions differ, and it is often useful to have keys and values coincide (think about
std::set).
Some methods like
any_of,
all_of and
none_of allow simple queries to be performed on the keys of the structure, while other methods like
find and
find_if make it possible to find the value associated to a key. The most specific method should always be used if one cares about performance, because it is usually the case that heavy optimizations can be performed in more specific methods. For example, an associative data structure implemented as a hash table will be much faster to access using
find than
find_if, because in the second case it will have to do a linear search through all the entries. Similarly, using
contains will likely be much faster than
any_of with an equivalent predicate.
Insight
In a lazy evaluation context, any
Foldablecan also become a model of
Searchablebecause we can search lazily through the structure with
fold_right. However, in the context of C++, some
Searchables can not be folded; think for example of an infinite set.
find_if and
any_of
When
find_if and
any_of are provided, the other functions are implemented according to the laws explained below.
any_of(xs, pred)by checking whether
find_if(xs, pred)is an empty
optionalor not, and then reduce the minimal complete definition to
find_if. However, this is not done because that implementation requires the predicate of
any_ofto return a compile-time
Logical, which is more restrictive than what we have right now.
In order for the semantics of the methods to be consistent, some properties must be satisfied by any model of the
Searchable concept. Rigorously, for any
Searchables
xs and
ys and any predicate
p, the following laws should be satisfied:
Additionally, if all the keys of the
Searchable are
Logicals, the following laws should be satisfied:
hana::map,
hana::optional,
hana::range,
hana::set,
hana::string,
hana::tuple
Builtin arrays whose size is known can be searched as-if they were homogeneous tuples. However, since arrays can only hold objects of a single type and the predicate to
find_if must return a compile-time
Logical, the
find_if method is fairly useless. For similar reasons, the
find method is also fairly useless. This model is provided mainly because of the
any_of method & friends, which are both useful and compile-time efficient.
Given two
Searchables
S1 and
S2, a function \( f : S_1(X) \to S_2(X) \) is said to preserve the
Searchable structure if for all
xs of data type
S1(X) and predicates \( \mathtt{pred} : X \to Bool \) (for a
Logical
Bool),
This is really just a generalization of the following, more intuitive requirements. For all
xs of data type
S1(X) and
x of data type
X,
These requirements can be understood as saying that
f does not change the content of
xs, although it may reorder elements. As usual, such a structure-preserving transformation is said to be an embedding if it is also injective, i.e. if it is a lossless transformation.
Initial value:
#include <boost/hana/fwd/all.hpp>
Returns whether all the keys of the structure are true-valued.The keys of the structure must be
Logicals. If the structure is not finite, a false-valued key must appear at a finite "index" in order for this method to finish.
Initial value:
#include <boost/hana/fwd/all_of.hpp>
Returns whether all the keys of the structure satisfy the
predicate.If the structure is not finite,
predicate has to return a false- valued
Logical after looking at a finite number of keys for this method to finish.
Initial value:
#include <boost/hana/fwd/any.hpp>
Returns whether any key of the structure is true-valued.The keys of the structure must be
Logicals. If the structure is not finite, a true-valued key must appear at a finite "index" in order for this method to finish.
Initial value:
#include <boost/hana/fwd/any_of.hpp>
Returns whether any key of the structure satisfies the
predicate.If the structure is not finite,
predicate has to be satisfied after looking at a finite number of keys for this method to finish.
Initial value:
#include <boost/hana/fwd/at_key.hpp>
Returns the value associated to the given key in a structure, or fail.Given a
key and a
Searchable structure,
at_key returns the first value whose key is equal to the given
key, and fails at compile-time if no such key exists. This requires the
key to be compile-time
Comparable, exactly like for
find.
at_key satisfies the following:
If the
Searchable actually stores the elements it contains,
at_key is required to return a lvalue reference, a lvalue reference to const or a rvalue reference to the found value, where the type of reference must match that of the structure passed to
at_key. If the
Searchable does not store the elements it contains (i.e. it generates them on demand), this requirement is dropped.
Initial value:
#include <boost/hana/fwd/contains.hpp>
Returns whether the key occurs in the structure.Given a
Searchable structure
xs and a
key,
contains returns whether any of the keys of the structure is equal to the given
key. If the structure is not finite, an equal key has to appear at a finite position in the structure for this method to finish. For convenience,
contains can also be applied in infix notation.
#include <boost/hana/fwd/contains.hpp>
Return whether the key occurs in the structure.Specifically, this is equivalent to
contains, except
in takes its arguments in reverse order. Like
contains,
in can also be applied in infix notation for increased expressiveness. This function is not a method that can be overriden; it is just a convenience function provided with the concept.
Initial value:
#include <boost/hana/fwd/find.hpp>
Finds the value associated to the given key in a structure.Given a
key and a
Searchable structure,
find returns the
just the first value whose key is equal to the given
key, or
nothing if there is no such key. Comparison is done with
equal.
find satisfies the following:
Referenced by boost::hana::literals::operator""_s().
Initial value:
#include <boost/hana/fwd/find_if.hpp>
Finds the value associated to the first key satisfying a predicate.Given a
Searchable structure
xs and a predicate
pred,
find_if(xs, pred) returns
just the first element whose key satisfies the predicate, or
nothing if there is no such element.
Initial value:
#include <boost/hana/fwd/is_disjoint.hpp>
Returns whether two
Searchables are disjoint.Given two
Searchables
xs and
ys,
is_disjoint returns a
Logical representing whether the keys in
xs are disjoint from the keys in
ys, i.e. whether both structures have no keys in common.
Initial value:
#include <boost/hana/fwd/is_subset.hpp>
Returns whether a structure contains a subset of the keys of another structure.Given two
Searchables
xs and
ys,
is_subset returns a
Logical representing whether
xs is a subset of
ys. In other words, it returns whether all the keys of
xs are also present in
ys. This method does not return whether
xs is a strict subset of
ys; if
xs and
ys are equal, all the keys of
xs are also present in
ys, and
is_subset returns true.
is_subsetcan also be applied in infix notation.
This method is tag-dispatched using the tags of both arguments. It can be called with any two
Searchables sharing a common
Searchable embedding, as defined in the main documentation of the
Searchable concept. When
Searchables with two different tags but sharing a common embedding are sent to
is_subset, they are first converted to this common
Searchable and the
is_subset method of the common embedding is then used. Of course, the method can be overriden for custom
Searchables for efficieny.
is_subsetis supported, it is not currently used by the library because there are no models of
Searchablewith a common embedding.
Initial value:
#include <boost/hana/fwd/none.hpp>
Returns whether all of the keys of the structure are false-valued.The keys of the structure must be
Logicals. If the structure is not finite, a true-valued key must appear at a finite "index" in order for this method to finish.
Initial value:
#include <boost/hana/fwd/none_of.hpp>
Returns whether none of the keys of the structure satisfy the
predicate.If the structure is not finite,
predicate has to return a true- valued
Logical after looking at a finite number of keys for this method to finish. | http://www.boost.org/doc/libs/1_65_0/libs/hana/doc/html/group__group-Searchable.html | CC-MAIN-2017-47 | refinedweb | 1,550 | 54.42 |
ReSharper’s code generation actions can create a lot of code for us. Why manually write a constructor that initializes type members? Why manually implement interface methods and properties? ReSharper can do those things for us!
Some say developers who use tools to generate code are lazy. Personally, I like to call that efficient. Instead of writing code that is, in essence, programming language ceremony, we can let ReSharper generate it for us and focus on implementing business value and solving our customer’s problem instead. In this post, we’ll go over some of the newly introduced and updated code generation actions in ReSharper 2016.3.
Generate relational members/comparer (implement IComparable<T>/IComparer<T>)
When we want to be able to compare or sort a class we’re implementing, we have to implement the IComparable interface on it. It features one method,
CompareTo(other), which returns 0 when objects are equal, < 0 when the current object comes before the other, and > 0 when the current object comes after the other.
Here’s a ShoeSize class we want to be able to compare. After pressing Alt+Insert we can generate Relational members. As with any ReSharper generate action, we can then select the fields we want to use, optionally change their order and then let ReSharper generate the code for us – in this case the comparer.
The IComparable interface is great to have for sorting collections and all that, but personally I prefer comparing objects in my code using operators. In other words, I prefer writing
shoeSizeA >= shoeSizeB instead of
shoeSizeA.CompareTo(shoeSizeB) >= 0. Again using Generate relational members we can enable overload relational operators and have ReSharper generate operator overloads for our class.
A cool thing is ReSharper also detects field types. For example if we have a class which contains string fields, ReSharper will let us pick a string comparison option, generate nullability checks and all that.
Generate dispose pattern (implement IDisposable)
When writing code that makes use of external resources, like a file stream, system handle or a database connection, it’s best to always release these resources when we no longer need them. The .NET runtime helps us with this by finalizing such objects, but that’s not ideal as it takes two rounds of garbage collection to remove the objects from memory – essentially keeping the object around in memory for too long.
The dispose pattern helps us release these external resources so that the garbage collector can do its job faster. And ReSharper helps us do our job faster – we can use the quick-fix by pressing Alt+Enter and let ReSharper generate and implement
IDisposable for us.
The dispose pattern code generation lets us select several options where we can help code generation, for example by specifying fields can be null. We also can specify whether we plan to inherit from this class, or if the class owns unmanaged resources. These options are used to implement variations of dispose logic:
- When the class only owns managed resources, a simple
Disposemethod will be generated which in turn calls the
Disposemethod on those resources.
- When the class owns only unmanaged resources, ReSharper will generate a
Disposemethod, a destructor, and a method called
ReleaseUnmanagedResources. We can add our own clean-up logic in this method.
- When the class owns both managed and unmanaged resources, either directly or via inheritance, a
Dispose(bool)method will be generated which can be overridden by inheritors.
Generate constructor
Have a class and added some fields and properties to it? Using the Alt+Insert keyboard shortcut, we can generate a constructor for our class. We can select fields, properties and optionally base constructors to call and then let ReSharper generate initialization code. The generated code will assign constructor parameter values to fields and properties. New in ReSharper 2016.3 is that we can (optionally) generate null checks, throwing ArgumentNullException when a parameter is null.
Generate missing members/overrides that are async
In any codebase, we can start off by writing an interface or a base class and then let ReSharper generate a class that implements or inherits these. With ReSharper 2016.3 we’ve added the option to make generated methods async when returning a Task. This speeds up coding since we no longer have to add the async keyword manually and can just start implementing the method bodies.
Several other improvements have been made, such as additional quick-fixes for C# and VB.NET that can invoke these code generation actions.
Awesome as usual.
It’d be nice if the image examples didn’t restart so fast, though. It is hard to read the final screen sometimes.
Updated the animations.
I was about to ask if we could have the “string comparison” option in the equality member generation dialog, but I see you’ve already done that. Nice!
Finally overriding async methods gets easier. Only thing missing is fixing default body:
public override Task MyMethodAsync()
{
// current generated code:
return base.MyMethodAsync();
// desired version:
await base.MyMethodAsync();
}
Thanks for the feedback!
We’ll surely fix that one in the next release.
Until that moment you can press Alt-Enter on the code with red squiggles and execute one of the following quick-fixes
In case of non-generic Task:
* To void return (preserve value) — removes ‘return’ keyword
* To void return (remove value) — removes the whole statement
In case of generic Task:
* Await expression — adds missing ‘await’ after ‘return’
The proposed code change is undesirable. See and the many related posts/answers. | https://blog.jetbrains.com/dotnet/2017/01/30/code-generation-improvements-in-resharper-2016-3/ | CC-MAIN-2018-05 | refinedweb | 920 | 54.22 |
README
Etcd3 ModuleEtcd3 Module
Etcd3 module for the Hapiness framework.
Table of contentsTable of contents
- Using your module inside Hapiness application
Etcd3Serviceapi
Using your module inside Hapiness applicationUsing your module inside Hapiness application
or yarn
npm it in your
package.json
$ npm install --save @hapiness/etcd3 @hapiness/core rxjs or $ yarn add @hapiness/etcd3 @hapiness/core rxjs
"dependencies": { "@hapiness/core": "^1.3.0", "@hapiness/etcd3": "^1.0.3", "rxjs", "^5.5.5" //... } //...
Importing
Etcd3Module from the library
This module provide an Hapiness extension for Etcd3.
To use it, simply register it during the
bootstrap step of your project and provide the
Etcd3Ext with its config
@HapinessModule({ version: '1.0.0', providers: [], declarations: [], imports: [Etcd3Module] }) class MyApp implements OnStart { constructor() {} onStart() {} } Hapiness .bootstrap( MyApp, [ /* ... */ Etcd3Ext.setConfig( { basePath: '/project/root'; client: <IOptions> { /* options comes here */}; } ) ] ) .catch(err => { /* ... */ });
The
basePath key is optional and represents the prefix of all future keys. The default value is
/.
The
IOptions interface let you provide config to connect etcd. Allowed fields are:
/** * Optional client cert credentials for talking to etcd. Describe more * {@link here}, * passed into the createSsl function in GRPC * {@link here}. */ credentials?: { rootCertificate: Buffer; privateKey?: Buffer; certChain?: Buffer; }, /** * Internal options to configure the GRPC client. These are channel options * as enumerated in their [C++ documentation](). */ grpcOptions?: ChannelOptions, /** * Etcd password auth, if using. */ auth?: { username: string; password: string; }, /** * A list of hosts to connect to. Hosts should include the `https?://` prefix. */ hosts: string[] | string, /** * Duration in milliseconds to wait while connecting before timing out. * Defaults to 30 seconds. */ dialTimeout?: number, /** * Backoff strategy to use for connecting to hosts. Defaults to an * exponential strategy, starting at a 500 millisecond * retry with a 30 second max. */ backoffStrategy?: IBackoffStrategy, /** * Whether, if a query fails as a result of a primitive GRPC error, to retry * it on a different server (provided one is available). This can make * service disruptions less-severe but can cause a domino effect if a * particular operation causes a failure that grpc reports as some sort of * internal or network error. * * Defaults to false. */ retry?: boolean
Using
Etcd3 inside your application
To use the
etcd3 module, you need to inject inside your providers the
Etcd3Service.
class FooProvider { constructor(private _etcd3: Etcd3Service) {} getValueForKey(key: string): Observable<string | object | Buffer | null | Error> { return this._etcd3.get(key); } }
api Etcd3Service
/** * * @returns {string} The value of the base path * */ public get basePath(): string; /** * * Retrieve the client without basePath consideration * * @returns {Namespace} the client for the namespace * */ public get client(): Namespace; /** * * Retrieve the client without basePath consideration * * @returns {Etcd3} the normal client (without namespace consideration) * */ public etcd3Client(): Etcd3; /** * * Get the value stored at path `key`. * * @param {string} key The key you want to retrieve the value * @param {ResponseFormat} format The format you want for the result (default is string) * * @returns {string | object | number | Buffer | null | Error} The value of the object stored * */ public get(key: string, format: ResponseFormat = ResponseFormat.String): Observable<string | object | Buffer | null | Error>; /** * * Get all keys and values stored under the given `prefix`. * * @param {string} prefix The prefix under which you want to start looking * * @returns { { [key: string]: string } } An object having all path as keys and all values stored under them * */ public getWithPrefix(_prefix: string): Observable<{ [key: string]: string }>; /** * * Append the value `value` at path `key`. * * @param {string} key The key you want to retrieve the value * @param {string | Buffer | number} value The format you want for the result (default is string) * * @returns {IPutResponse} The result of the operation * */ public put(key: string, value: string | number | Object | Buffer): Observable<IPutResponse>; /** * * Delete the key `key`. * * @param {string} key The key you want to delete * * @returns {IDeleteRangeResponse} The result of the operation * */ public delete(key: string): Observable<IDeleteRangeResponse>; /** * * Delete all registered keys for the etcd3 client. * * @returns {IDeleteRangeResponse} The result of the operation * */ public deleteAll(): Observable<IDeleteRangeResponse>; /** * * Create a watcher for a specific key. * * @param {string} key The key you want to watch * @param {string} prefix The prefix you want to watch * * @returns {Watcher} The watcher instance created * */ public createWatcher(key: string, prefix?: boolean = false): Observable<Watcher>; /** * * Create and acquire a lock for a key `key` specifying a ttl. * It will automatically contact etcd to keep the connection live. * When the connection is broken (end of process or lock released), * the TTL is the time after when the lock will be released. * * @param {string} key The key * @param {number} ttl The TTL value in seconds. Default value is 1 * * @returns {Lock} The lock instance created * */ public acquireLock(key: string, ttl: number = 1): Observable<Lock>; /****************************************************************************************** * * Lease Operations * ******************************************************************************************/ /** * * Create a lease object with a ttl. * The lease is automatically keeping alive until it is close. * * @param {number} ttl The TTL value in seconds. Default value is 1 * * @returns {Lease} The lease instance created * */ public createLease(ttl: number = 1): Observable<Lease>; /** * * Create a lease object with a ttl and attach directly a key-value to it. * The lease is automatically keeping alive until it is close. * * NOTE: Once the lease is closed, the key-value will be destroyed by etcd. * * @param {string} key The key where to store the value * @param {string | Buffer | number} value The value that will be stored at `key` path * @param {number} ttl The TTL value in seconds. Default value is 1 * * @returns {Lease} The lease instance created * */ public createLeaseWithValue(key: string, value: string | Buffer, ttl: number = 1): Observable<Lease>;
MaintainersMaintainers
LicenseLicense | https://www.skypack.dev/view/@hapiness/etcd3 | CC-MAIN-2022-40 | refinedweb | 884 | 54.12 |
Months ago, I was forced to get a new laptop (thanksalot, kitty!) which uses an Intel HD Graphics 520 graphics card. When I got around to working on druid4arduino I hit a mystery bug with the display, as described previously. In short, a nice black window that wouldn’t display anything but the void. Development was thus stalled until I could actually see what I was doing again.
Since I couldn’t even run a “hello world” cordova-ubuntu program (which also resulted in a black window), and my bug report was apparently ignored for 5 months, I’d moved my focus to other things and everything with SerialUI/Druid4Arduino stalled. Thankfully, I’ve found a workaround and I’ll describe the symptoms and fix here, in case anyone else is banging their head against this. I’ll also describe the process and various error messages encountered, for reference and to help anyone struggling with the same issue to actually find this information.
If you’re only interested in the solution, head straight to the work-around.
The Bug
After a short detour during which I believed the underlying cause was the Oxide webviewer (which is the basis of the cordova-ubuntu display), I finally determined that the problem was somewhere below that in the QT layer.
To demonstrate this, a dead simple QtQuick application window was used:
import QtQuick 2.1 import QtQuick.Controls 1.0 import QtQuick.Window 2.0 ApplicationWindow { title: qsTr("Hello World") width: 640 height: 480 menuBar: MenuBar { Menu { title: qsTr("File") MenuItem { text: qsTr("Exit") onTriggered: Qt.quit(); } } } Button { text: qsTr("Hello World") anchors.horizontalCenter: parent.horizontalCenter anchors.verticalCenter: parent.verticalCenter } }
and displayed using
qmlscene mytest.qml
This resulted in a window with black rectangles wherever controls/widgets were supposed to be
and an error message related to the texture atlas:
QSGTextureAtlas: texture atlas allocation failed, code=501
Sleuthing
I found I could use apitrace (a useful tool) to dig into the error further
apitrace trace qmlscene mytest.qml
and then
apitrace dump qmlscene.trace
to find errors like:
message: major api error 17: GL_INVALID_OPERATION in glTexSubImage2D(invalid texture image)
related to calls like
glTexImage2D(target = GL_TEXTURE_2D, level = 0, internalformat = GL_BGRA, width = 1024,height = 512, border = 0, format = GL_BGRA, - type = GL_UNSIGNED_BYTE, pixels = NULL)
and
glTexSubImage2D(target = GL_TEXTURE_2D, level = 0, xoffset = 137, yoffset = 0, width = 101, height = 1, format = GL_BGRA, type = GL_UNSIGNED_BYTE, pixels = blob(404))
Since this was OpenGL texture related, I spent an inordinate amount of time trying to disable OpenGL. Since the dev machine is using the intel i915 driver, I created an /usr/share/X11/xorg.conf.d/20-intel.conf device driver Xorg config file, tried everything I could think of and wound up with an config file that retains some traces of my past attempts:
Section "Device" Identifier "Intel Graphics" Driver "intel" #Option "NoAccel" "true" #Option "AccelMethod" "blt" #Option "AccelMethod" "uxa" #Option "AccelMethod" "none" #Option "DRI" "1" #Option "TearFree" "true" #Option "XvPreferOverlay" "true" #Option "FallbackDebug" "true" EndSection
Nothing worked and it was all pretty frustrating.
Discovery and Work-around
After finding that this was likely due to an invalid request by Qt to use the BGRA format for the texture, I reported the bug to the Qt guys (who were waaaay more responsive than the cordova-ubuntu maintainers) and kept looking. I finally found this bit of code in the QtQuick scenegraph which was specifying different internal and external formats for the textures, e.g.
m_internalFormat = GL_RGBA; m_externalFormat = GL_BGRA;
Though this may be the source of my issue, I really wanted to avoid messing around with Qt by trying to compile the whole thing and install it on my system.
Thankfully, qsgatlastexture.cpp provided a final clue… since it was configuring the formats based on the presence of various GL extensions:
if (strstr(ext, "GL_EXT_bgra") || strstr(ext, "GL_EXT_texture_format_BGRA8888") || strstr(ext, "GL_IMG_texture_format_BGRA8888")) { // ... }
maybe disabling these extensions might work..? Using glxinfo, I could see that that I did have GL_EXT_bgra and GL_EXT_texture_format_BGRA8888 enabled. I found a page which lists Mesa debug environment settings. Of specific interest: MESA_EXTENSION_OVERRIDE, which can be used to enable/disable extensions.
The magic command was then just:
MESA_EXTENSION_OVERRIDE="-GL_EXT_bgra -GL_EXT_texture_format_BGRA8888" qmlscene mytest.qml
and voila, hello world!
Thus, leaving everything else as-is but disabling the extensions with that flag was all it took. Setting it in the environment
export MESA_EXTENSION_OVERRIDE="-GL_EXT_bgra -GL_EXT_texture_format_BGRA8888"
prior to running Cordova is all it takes. aaaaaaaagh! So: not resolved but reported to Qt and druid development can finally proceed. Fi.Nal.Ly!
Context
Some context to describe the affected system and graphics card/driver, for reference.
This is an Ubuntu system, which started at 16.04 but was upgraded twice since and is now at 17.something in the (vain) hopes of getting a fix from fresh packages. The machine is an Asus laptop, with Intel Core i7-6500U CPU @ 2.50GHz × 4 and a Intel HD Graphics 520 (Skylake GT2) card, using the intel i915 driver.
GLX info:
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics 520 (Skylake GT2) (0x1916)
Version: 13.0.3
Accelerated: yes
Video memory: 3072 520 (Skylake GT2)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 13.0.3
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 3.0 Mesa 13.0.3
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 13.0.3
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
Using Kernel Mode Setting driver: i915, version 1.6.0 20160919
with /sys/kernel/debug/dri/0/i915_dmc_info:
fw loaded: yes
path: i915/skl_dmc_ver1_26.bin
version: 1.26
DC3 -> DC5 count: 8706
DC5 -> DC6 count: 0
program base: 0x09004040
ssp base: 0x00002fc0
htp: 0x00b40068 | https://flyingcarsandstuff.com/2017/02/mystery-black-window-workaround-druid-back-in-development/ | CC-MAIN-2017-51 | refinedweb | 989 | 54.52 |
!
275 responses to “Flex AutoComplete: Version 0.98.2”
Trackbacks / Pingbacks
- July 21, 2009-
- October 30, 2011-
Continuation of my previous post:
What would be nice:
1) The component handles the delay for autocomplete (execute step 2 below after N ms)
2) The possibility to mount the item’s array through a function (AutoComplete.ObtainArrayFunction = something), so we can choose from where we get our items
A fixed array is nice, but, in some cases, it is impossible… sometimes we have so many items in a database… =\
JCK,
The next version of the AutoComplete has support for using XMLListCollection as the dataProvider. You can check out the latest SWC from the google code site. Please give this a try and see if it helps with your issue.
For point 1, why not introduce your own delay. You can check out the DynamicData.mxml file in the examples folder to see an example.
Let me know how it goes…
How do I change the heigth of AdvancedAutoComplete? the heigth is locked want the component to 150, heigth .. I have to extend it?
Thanks…
Luan,
You should be able to set the minHeight property.
Let me know if you have any issues.
Thank you for helping me.
How do I reduce the heigth only of the MultiSelect component?
i need reduce the heigth of the List and the button Remove.
Thanks!
Hillel,
Thank you very much for your answer. I have another question, Can I always image of actionsMenuDataProvider all the time?
Praem,
I also prefer that the icon is always visible, I’ve changed it in the next release. You can download the latest SWC from the google code site.
Hi Hillel,
I am facing a urgent problem regarding this component.The problem is like that if the input search criteria does not match with the arraycollection the input text will be removed automatically from the input text box.Please help me with some hints so that I can solve this problem ASAP.
Regards
Asfahan
Asfahan,
I’m sorry, I’m not following your issue. Could you please try describing it another way.
Thanks
Hi Hillel,
Thanks for the reply.Actually I am facing the same problem what tahir suggested on May 24.What I actually want to remove the text if it doesn’t exist in dataProvider and it must be happend either in focus out event or key press event.
For an example My data provider contain two records “Cat”,”Animal”.Now as soon as user type any other character except “t” after “ca” the text automatically removed from the input text.It means if user type “cad” the records will be removed from the input text.
Hope I am able to explain you.
Could you please guide me to resolve this issue.
Regards,
Asfahan.
Asfahan,
Thanks, I follow you now. It looks like Tahir posted a solution to the problem, is his change not working for you?
Hi Hillel,
Thanks for the reply.Actually I already go through the solution suggested by tahir.But I want the same things in some diffrent way for an example if user type “d” after “ca” and my data provider does not contain “cad” value and if user did not press “Tab” or “Enter” key event & he just focus out from the text box.Means user click outside “autocomplete” component.Now as my data provider does not contain “cad” value.So this value must be removed from the search text box.
Is there any event or function available in this component to solve this issue.Please give some hint or tell me the function name where I have to modify the code to solve this problem.
Regards,
Asfahan
Asfahan,
Although the focusOut event doesn’t appear in the MXML you can use it in ActionScript. ie,
autoComplete.addEventListener( FocusEvent.FOCUS_OUT, handleFocusOut );
Actually, you can also use it in MXML. Flex Builder’s text prompting feature will not suggest it, but if you write it anyway, it works.
Hi Hillel,
Recently my team member asked me to modify this component with multiple column search facility.
For example: if my data provider contain three fields like name, code and email id.
As data provider can contain >100000 records so user can search the records in format of name+code or code+name or name+email.
it means If the data contain like below format:
Name code email
abdul ab ab@cp.com
Nancy nan na@mn.com
So user can search the records “abdul+ab” or
“ab+ab@cp” like that.
Do you have any idea whether is it possible to implement this feature on this component?Either in dropdown or in browse option.If it is not possible could you please tell me why it is not possible so that I can explain this.If you don’t have any idea could you please send some link so that I can start to implement this.
Regards,
Manna
Manna,
That should be pretty straightforward to implement, you’d just need to set a custom filter function which checks against all of the fields. I actually wrote a post a little while back which discusses this topic (and a way to improve the performance).
Hi Hillel,
Is it possible display the remove icon before the selected item or before the , seperator.It means if u select a record the result will be displayed like
Ex:”remove Icon” abc,
or abc “remove icon”, like that
Let me know whether u understand my point or not.
Regards,
Sourav
Sourav,
You can accomplish this by using the source code and changing the setting for labelPlacement in the IconButton.mxml to right.
Hi Hillel,
Presently what its happen after selecting multiple records the selected records are become greyed out.Now my question is is it possible after single click on greyed out item ,its allow user to remove the selction of the entry and the “grey out” effect is then no longer applied.
For an example:You select the records “Asfahan”.Now from the browse option if you click “Asfahan” again then “Asfahan” records will be remove from the selection and “Greyed out” no longer applied.
As my team member asked me to enhance this component like the above.Could you please give us some hints so that we can modify the component to achieve the above functionality.
Anticipating to get your feedback soon.
Regards,
Asfahan
Asfahan,
I’m sorry, I’m not following what you’re trying to accomplish. Could you please try describing it another way.
Hi Hillel,
I have one simple question.It would be great if u answer that one.The question is :Suppose my data provider contain three fields id,name,sal.
In the dropdown it displays only two field name & sal.
Now user select 2 name records.
Now my question is after selection how can I retrieve the id of each name.Because name & sal can be duplicate.
Ex: id name sal
1 abc $100
2 abc $200
3 def $500
Now user select abc & def.Now after selection I want to retrieve this id.Could you please help on this.
Regards,
Manna
I resolved bind problem:
override public function set selectedItem( value:Object ):void {
super._selectedItems.addItem( value );
dispatchEvent( new Event( Event.CHANGE ) );
}
Mauro,
Thanks for letting me know you were able to resolve the issue.
Let me know if you run into any other problems.
Hi Hillel,
I got my answer I think we can get the id information using autocomplete.selectedItems object.
Regards,
Manna
Manna,
Awesome, happy to hear you were able to figure it out.
Hi Hillel,
thank you for the great AdvacedAutoComplete.
unfortunately i habe a few issues:
First, all the labels are fixed in english. I edit the component for my self for german text an prompts.
Second, can you please rebuid the sources. I’ve some problems to activate the actionsMenu.
Third, there are strange tooltips in all styles. So i get the htmlText-Tooltip for an entry in the listBox. Can yu give me some hints, how to deactivate these. The tolltips come up sproradic while the first typing in some text in the searchBox.
Thanks
Frank
Frank,
1. There should be properties for all the labels (ie, browseLabel, removeLabel, etc). Let me know if there are any missing.
2. Can you please explain the issue your having. I haven’t heard other complaints so I don’t think it’s likely that rebuilding the SWC will solve your problem.
3. I’m not sure about this one, I don’t see any tooltips. Are these visible in the demo?
Hi Hillel,
1. i miss the properties in the ListBuilder (OK, Cancel, the prompts in the left and right inputfield and the tooltips for the imageButtons.
Also in the Browser the select and ok and the PanelTitle “Browse”. Yes some customers are requesting for 🙂
2. i trying to implement the same menu as in your demo. But i don’t understand how my object must be formatted and which events you are yousing for these two options.
3. Unfortuantely not. I have an little example with fix names. And if into the searchfield verry quickly and type a name, i can see a tooltip like this: First Name. its sporadical but reproduceable.
Thank you for your quick reply!
Frank,
1. Hmmm… yeah we’re missing a bunch (I’m surprised nobody noticed this sooner). I don’t love the idea of adding properties for all of these fields. I’ll consider working in resource bundles (or some other solution) in to the next release. For now I’d suggest using the source and manually making the changes.
2. The AdvancedDemo.mxml file should be pretty self explanatory. We’re using a regular Flex Menu component so you can check here for more info on how to format the XML data.
3. Would it be possible for you to send me an example application which demonstrates your problem.
If anyone is still looking for the solution to the tooltip problem (problem: when you type quickly with your mouse over a selection in the drop down sometimes a tool tip would pop up, which is useless to the user and there would be html bold and underline tags since tool tip doesn’t support html, and the tool tip would sometimes just hang there if the drop down selections change –ie the mouseOff event wouldn’t fire), all you need to do is disable tool tips with the flex tool tip manager class in AutoComplete.mxml. Just add this line to the init() function:
ToolTipManager.enabled = false;
Be sure to include the tool tip class above in the import section:
import mx.managers.ToolTipManager;
Thanks Hillel, awesome component!
Rick,
Thanks for the tip, I’ve actually run into that issue myself. I believe this disables all tool tips in the app (otherwise I’d add this to the next version of the component).
Hi Hillel,
Thanks for the reply.For my question please follow the below example:
For example:
My data provider contains two fields name & code.
name code
Brien BR
Hillel HC
Ronaldo RC
Now from the dropdown option you select the records “Brien”.So now you click the browse option from the action menu.
You will see the information belongs to “Brien” is “Greyed out” means its showing in different color and you can not reselect this records as it was already selected.
Now what I want if you click “Brien” records again from the browse option it will remove the records from the selected items list and the information belongs to “Brien” will be displayed in normal color.
Hope I am able to explain you.If you need more explanation please let me know.
Any help will be appriciated.
Regards,
Asfahan
Asfahan,
Thanks for the explanation. I now understand what you want to accomplish (but I have to admit, I don’t personally think it will be clear to your users). Your best bet is probably to set allowDuplicates to true and then watch for the case where an item is selected a second time.
Let me know if you have any trouble getting this working
Hi Hillel,
I want to add little more information which will be helpful for enhance this component.I think this
is a good feature.
For example:
I have selected 50 records from my dataprovider(Means autocomplete.selectedItems contain 50 records).
Now I want to remove one records.So it is very difficult to traverse each records in the flowbox.
So what I did I provide an option “Display Selected Records” in the Action menu.
I have implemented this using autocomplete.showBrowser() option where dataprovider= autocomplete.selectedItems;
So now user can see the the selected records from the “Display Selected Records” option.
Now I want one more additional feature so that user can remove a selected records from
“Display Selected Records” pop up.
For this the bussiness rule is like that when user click the “greyed out” entry(presently displaying in disable color) then the “greyed out” entry become “normal color” from “disabled color” and the records will remove from the autocomplete.selecteditems list.
Could you please give some idea to achieve this functionality.
Regards
Asfahan
Asfahan,
A simpler solution may be to use the list builder (set the useListBuilder property to true). If you want to go with your current approach I think it’d be easier to create your own browser which use the selectedItems arrayCollection as the dataProvider. You could then simply remove whichever items the user selects.
Hi Hillel,
Today I face a strange problem.Please help me regarding this.The problem is like below:
Presently I am displaying “Avaiable for single record selection” as prompt on input textbox.After
click on this texbox I am able to view the “Action menu” icon & it works fine.
But if I am displaying a small text as prompt for example “Avaiable for single select” on input text box.Then after click on this input text box I am not able to view the “Action menu” icon.
But if I fire a keyboard event on input text box(i.e if type something on the input text box)
then I am able to view the “Action Menu”.
Could you please tell us how to solve this strange issue.
Regards,
Sourav
Sourav,
That’s definitely an interesting one, I’ve never seen that before. If you can create a simple test application which demonstrates the problem I’d be happy to help you debug it.
Hillel,
Thanks so much for your time in this project. It’s weird, because I’m having problem that I thought would be more prevalent because it’s so simple. I am using the AutoComplete component and passing it a
When I type in ANYTHING that’s NOT part of the name of one of my objects, I get an exception:
ReferenceError: Error #1069: Property data not found on com.myComp.xxx.CustomObj and there is no default value.
at mx.controls::ComboBase/get value()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBase.as:880]
at mx.controls::ComboBase/setSelectedItem()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBase.as:717]
at mx.controls::ComboBase/set selectedItem()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBase.as:706]
at mx.controls::ComboBox/set selectedItem()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBox.as:846]
at mx.controls::ComboBase/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBase.as:1211]
at mx.controls::ComboBox/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\controls\ComboBox.as:1212]]
So, all I do is set my dataprovider to a list of objects, that all have a “name” that all start with a letter. When I type in a “3”, I see that exception. I want to allow the user to enter numbers, even if it’s not in the dataprovider. I’ll handle them differently. Any ideas?
David,
Are you sure you’re using my AutoComplete (there are a number of ones out there for Flex). I ask b/c the error being thrown is from the ComboBase class which most other AutoCompletes are based on while mine isn’t.
Peculiar! here’s my code:
(The only include in this mxml file)
xmlns:autoText=”com.hillelcoren.components.*”
David,
We’ll… that’s definitely me 😉
If you can you put together a simple test application which demonstrates your problem I’d be happy to help you debug it.
I’m having a hard time isolating the problem. Your examples work when I’m directly copying them into my code, but mine still doesn’t work. It has something to do with the fact that I’m passing in an external array-collection (i.e. the datasource is passed in from another class). I don’t declare the array at the top of my MXML file like you do in your examples. Based on the error, I’ve added the variable of name “data” to my class and just set it to the empty string. Seems like a non-optimal solution though. Any thoughts?
David,
I’m sorry, I don’t have too much to go on. I’d suggest trying to prove that it’s the passed in dataSource which is causing the problem. You could try changing your app so it uses an arrayCollection defined in the class and see if it resolves your problem.
Hillel,
I have one hindrance while integrating the Autocomplete component. I do not want to use comma as the item/element delimiter, want it to be as part of the search string.Cases like “65 main st,irvine,ca”. Can the source be modified to handle this or simply not worthy? Directions/Thoughts on this is appreciated?
Vinoth,
The modifications to the source to make it work the way you’d like aren’t too bad. You’ll need to make the following two changes in the AutoComplete.mxml file.
1. In the handleKeyDown function change:
else if (event.keyCode == Keyboard.ENTER || event.keyCode == Keyboard.TAB || event.charCode == 44)
to
else if (event.keyCode == Keyboard.ENTER || event.keyCode == Keyboard.TAB)
This will prevent the component from using the comma key as an item selector.
2. In the handleChange function change:
var parts:Array = searchText.split( “,” );
to
var parts:Array = searchText.split( “###” );
This will disable the code which splits up the string into it’s parts (assuming none of your values contain the string ‘###’).
That should do it, let me know if you run into any other issues,
Hillel
Hillel,
Many Thanks. It works omitting the comma.
Vinoth
Hi Hillel,
I just wanted to thank you for all your hard work and allowing everyone to benefit from it. Your code is really quite nicely done. Congrats.
I used several other auto-completion type components and have found yours to be the most robust and well thought out.
Mike,
Thanks, that’s really nice of you to say.
Hillel,
I do not want the button which inserts itself in the text box after selecting a dropdown element;
How can we have text instead of the button after a selection is made; Just needs to look like a simple TextBox with the text selected.
Vinoth,
Thanks for letting me know the comma fix worked.
You can set the selectedItemStyleName property to “none” or “underline”.
Hillel,
I am using the Email Demo to check my modifications and Button is still visible instead of the text inspite of setting selectedItemStyleName=”none”
Vinoth,
I’m sorry, I’m not sure what’s going on here. I just tried modifying the Email Demo (I changed the style from macMail to none) and it worked fine for me. Can you try doing a “clean build” and checking out the latest version from the google code site.
Hillel,
The selectedItemStyleName=”none” still does not work ; also tried it in another m/c. The source which I use is from Googlecode and I have made some changes to the source but i dont think it is related to the selectedItemStyleName.
Below is what how I use:
Vinoth,
It looks like wordpress stripped out the code from your last comment. Could you please either email it to me, or replace the less thans/greater thans with ampersand lt;/ampersand gt;
Thanks,
Hillel
Hi Hillel,
I need some urgent input regarding this component.
Suppose using this component I have selected 3 records & stored into the database.Now I want to display this 3 records in to this component
after fetching the data from database.
Could you please tell me how to implement this?
[Note: I tried to assigned the data using
autocomplete.selectedItems={items retrived from the database}
using debug I checked & found that the data is coming on {items retrived from the database}
but is not diplaying on the textbox. ]
Please help.
Regards,
Sourav
Sourav,
I’m not sure what the issue is, setting the selectedItems property is correct. If you could create a test application which demonstrates the problem (ie, use dummy data instead of the result from the database) I’d be happy to help you debug it.
Hello,
First, thanks for making this great component, and especially thanks for being so responsive and helpful. I was hoping you could give me a tip. I am currently using it to filter an ArrayCollection that is populating a DataGrid. That is working fine, but I also wanted to use it as an ItemRenderer in one of the cells of the DataGrid.
The data in this instance is Accounts and Shippers, where there is one Account to many Shippers. So in the Shippers DataGrid, I wanted to use your fine component as an ItemEditor. I have the value object for the Shippers, which includes two properties, account_id and account_name. The AutoComplete MXML component that I’m using as the ItemEditor is already pulling in its DataProvider of accounts, so what do you think the best way to pass the account_id and account_name in to the AutoComplete in order to set it as the SelectedItem?
Thanks for your help
Philip,
To set the selectedItem you’re going to want to set the dataField property in the DataGridColumn. I’d suggest taking a look at the DataGridDemo.mxml file in the examples folder. Let me know if you need more help getting this working.
Ok, thanks for the tip. So if I understand correctly, I would add an Object to the ArrayCollection with the label and the data I want for the AutoComplete along with another property of the ArrayCollection that contains the data I want displayed when the field is not being edited. Then, I would
1. set the DataGrid editorDataField to the Object in the ArrayCollection
2. set the DataGrid dataField to the data property of the ArrayCollection (e.g. account_id)
3. set the AutoComplete labelField and keyField to the properties of the Object in the ArrayCollection.
Does that sound like I’m on the right track?
Philip,
You’re going to want to set the editorDataField to “selectedItems” (at that’s the property of the AutoComplete which will contain the value once the user finishes editing the cell). The keyField is only needed if you’ll be matching based on the key. In the DataGridDemo I didn’t need to use it as the Object being passed to it is the exact same object. If however, I’d passed it an object which was equivalent (but not the same) then I would have to use it (hope that makes sense).
Hi Hillel,
I want to add little bit more info to my previous mail.
I tried to bind the data which fetched from the database to this autocomplete component
using the following way:
Step1:
[Bindable]
private var SavedObj:ArrayCollection
=new ArrayCollection
([
{ “name”:”Andy”,”age”:”2″ },
{ “name”:”Pal”, “age”:”2″ }]); //This info will come from the database
Step2:
Pass that data into the component
//This component I have created using your component
//BindValue indicate which info I have to display into
//the textinput
//Here I did n’t assigned the dataprovider.The dataprovider will be assigned on AdvancedDemo.mxml file when user start typing
//or when user click the browse option.
Step 3:
//now to bind the data on textinput I have written the code on AdvancedDemo.mxml file like below
[Bindable]
private var _bindValue :ArrayCollection;
private var _fieldWatcher : ChangeWatcher;
[Bindable]
public function get bindValue() : ArrayCollection {
return _bindValue;
}
public function set bindValue(value :ArrayCollection) : void {
_bindValue = value;
if (_bindValue != null)
{
_bindValue=value;
}
_fieldWatcher = null;
setBinding();
}
private function setBinding():void{
if (_fieldWatcher == null && _bindValue != null )
_fieldWatcher = BindingUtils.bindProperty(autocomplete, “selectedItems”,_bindValue,null);
}
But this does not work.
I also tried using the below way:
public function set bindValue(value :ArrayCollection) : void {
_bindValue = value;
if (_bindValue != null)
{
_bindValue=value;
autocomplete.selectedItems=_bindValue;
}
}
This also not works.
Could you please tell me where I was wrong or please tell me the wright way to imlement this feature.
Please help.
Regards,
Sourav
Sourav,
As per my last comment, is it possible for you to put together a test application which demonstrates your problem?
Hallo!
Problem with flex 4 style?
I am using your component in flex 4 project (formerly in flex 3 did not find any serious problem) and the rollover styles (both of combo and browser-datagrid) is not visible (the selected item can not be seen although the selection are being occured and the item.selectedValue is correct) I have tried to play with different style .. but any success
Any idee?
Thanks
Laszlo,
I’ve heard reports from other people that there are issues with the component in Flex 4. I’m currently working on releasing a 1.0 version of the component which I believe works reliably on Flex 3. After that my plan is to start working on supporting Flex 4 (and improving the documentation). I’m sorry, but I think it may be a little while (a month or two) until the component works reliably in Flex 4. Truth is, I’m surprised so many people are already using Flex 4 to build production applications.
Thanks, I am waiting for the new version!
Hi,
I just tried to implement this lib into my project and I have an issue where I cannot change the “selectedIndex”. I presume this lib extends ComboBox? Is there anyway I can change the SelectedIndex of the component?
Jeffery,
Although it’s similar in certain ways to the ComboBox component, it doesn’t extend it. One of the main features which differentiates this AutoComplete from the rest out there is that it supports multiple selection (which would have been very difficult if I had extended the ComboBox). To select an item you need to use either the selectedItem, selectedItems or selectedItemsIds properties. I’ve never had a need to use the selectedIndex, but I’ll definitely consider adding it in the future.
Let me know if using the selectedItem/selectedItems/selectedItemIds doesn’t work for the way you’re using it
PLEASE…
I need to pass values from a Textinput to the AutoComplete by clicking a button. The problem is: when I’m passing the values, the component is not applying any style. How can I pass values to the component to preserve the component’s style?
Thanx!
Erik,
You have two options:
If you know that the text entered is in the dataProvider (or you’re allowing new values to be entered) you can use
autoComplete.selectedItem = textInput.text;
Otherwise, you can use the following code
autoComplete.text = textInput.text;
autoComplete.search();
The second approach will cause the dropDown to be displayed (if the text isn’t a perfect match). If you want it to automatically select it if there’s only one match you could set a custom value for the autoSelectFunction property.
Hope this helps,
Hillel
Hi Hillel,
I got some issue when I am retrieving multiple records
from database & displaying them to autocomplete component.
It would be great if you can help on this.
/*I have created a custom component ModifiedAutcomplet) using your component with addition of below code*/
[Bindable]
public function get bindValue() : Object {
return _bindValue;
}
/*This properties is used for bind the records with autcomplete component which are saved in database*/
public function set bindValue(value : Object) : void
{
//Intially data come with null value
_bindValue = value ;
_fieldWatcher = null;
setInternalBinding();
}
private function setInternalBinding():void
{
if (_fieldWatcher == null && _bindValue != null)
{
_fieldWatcher = BindingUtils.bindProperty(this, “Selected”,_bindValue,”Name”);//My object contain “Name” field only
}
}
public function set Selected(value:Object):void
{
if(value!=null)
{
var obj:Object=_bindValue;
/*again I am reasign this _bindValue because it contain my records with “Name” field*/
autocomplete.selectedItem=obj;
}
}
I am calling the component like below way
//NameList will be loaded asynchronously
< cmk:ModifiedAutcomplete id=”id1″ width=”380″ bindValue=”{dataSource.NameList}”/>
In the above way I am able to fetch single record.But not multiple record because I don’t know how to apply
changewatcher for collection object.
Could you please provide some hints or links how to implement this?
Regards,
Sourav
Thanks for the great component. I have already successfully integrated it.
My Model returns ICollectionViews. Having to provide an ArrayCollection to the dataProvider is limiting. Have you thought of accepting ICollectionView to the dataProvider? It would allow you to accept any collection that implements ICollectionView (which is probably ALL of them!)
Niel,
In the latest version in the google code site I’ve changed the dataProvider to be of type ListCollectionView (which enables you to use either an ArrayCollection or an XMLListCollection). I think that’s about as generic as I can make it (w/o requiring a large amount of changes) as I rely heavily on the functions in the ListCollectionView class (ie, getItemIndex, removeItemAt, etc…)
Thanks for the suggestion though
Hi
This component is awesome.
I looked to your Email demo. But i noticed that search is not made inside the email id. If i give search text as “gm”. Then i expect that all email id should be shown in drop down but none is shown up.
Thanks
ilikelfex
ilikeflex,
Which part of the text to search is determined by the
matchTypeproperty. In the email demo it’s set to “word”, you could change it to “anyPart” so it matches “gm”.
Hi Hillel,
I have tried to use AutoComplete text input connecting Java using JSP. When i run that application it doesn’t showed any errors.From JSP the data is reterived to Flex. But the dropdown is not working. I have used HTTP Service as RPC.
Kindly help me to solve this issue.
Regards,
HaliflifeBaby.
Halflife Baby,
I’m sorry, based on the info you’ve provided it’s very difficult for me to help you. If you could create a sample application which demonstrates your problem I’d be happy to take a look at it.
Hi Hillel,
Could you just reply for my question here itself?
Regards,
HalflifeBaby.
Halflife Baby,
Your question is essentially “it’s not working please tell me why”, I’m sure you could understand why that would be hard to answer. Are there any more details you can provide. The best approach to solving these types of problems is to eliminate variables. Does the component work if you hard code the data (as opposed to loading it remotely). Also, have you checked out the DynamicData.mxml file in the examples folder (it gives a basic sample for using the component with remote data).
Nice component but I’m finding that, once a field is selected:
(1) It appears in white text, always. This doesn’t seem to be changeable
(2) There appears to be a tiny field to the left of the displayed value, which is editable and about 4 pixels wide
(3) There’s another field shown to the right of the displayed value, also containing the displayed value. I think this shows the content of ‘searchText’.
The end result of all this is that the field, once a value has been selected, looks extremely ugly and in fact unusable.
Is this something anyone else has problems with? it’s weird no one else has mentioned it…
Regards,
Jamie.
Jamie,
Which version of Flex are you using. There are currently some style issue in Flex 4, I hope to release a new version soon.
i was looking for the autocomplete.
thanks for the information.
regards
Hi Hillel,
First up, I’d like to offer my sincere thanks for making this component available. It’s proving to be a tremendous resource for us. If and when we can contribute to it’s further development, we certainly will.
Next up, I’m looking for some help on styling the actionsMenu button. In the online demo for the AdvancedAutoComplete, the inline button and associated menu have some custom styling (no border on the button, a rounded header for the menu, etc).
Can you give me some guidance on how to affect this styling? The default implementation I see is to render a standard button. And the menu appears several pixels below the control, without the nicely rounded header piece.
Cheers!
-Matt Towers
Matt,
The style is defined in style.css in the actionsMenuButton tag. It’s applied to the button in ActionsMenu.mxml on line 70.
Let me know if you need any more help applying the style.
I see the code you mentioned but I’m not sure I follow how to make this work. By default, it appears that the actionsMenuButton CSS class refers to com.hillelcoren.assets.skins.BlankSkin, which is a trivial extension of ProgrammaticSkin.
My assumption is that there is a different bit of styling that you are using for the Advanced AutoComplete tab on the demo page ()
BTW, would it be possible to view the source of the demo page? Presumably I can find the answer in there? However, the View Source menu item is enabled but the source page is 404.
Matt,
That’s actually all the styling for the demo, the source for the demo is in the examples folder of the zip file.
I found it. The problem was that default.css was not included in the build path. The fix is to go to
Project -> Properties -> Flex Library Build Path
In the Assets tab, make sure default.css is included as a “non embedded file to include in the library”
Thanks large for your help, and again for your hard work on this project. Us == Big fans
I’m having the same problem as Matt, but I don’t see any assets tab in Flex 4.
Hi ,
I am using an XML file which has nearly 26000 records fo Companies. Each Company Node has 25 child elements.
I can successfully implement the autocompletecomponent but I am looking for some help on performance issue.
It takes nearly 1 second delay while I am typing in the Autocomplete input box. My autocomplete is generic to all fileds in the XML file i.e. it searches for word match in all fields e.g. 26000 x 25 = 650000 words to match with.
Can you show me a way to improve performance?
is there a way to improve this?
Gaurav,
As luck would have it, I’ve written a post on exactly this topic
Hope it helps,
Hillel
Your RSS feed is only working every now and then, i’m on a macbook running shrook if that helps
Gary,
Sorry to hear you’re having issues with the RSS feed. My site is hosted by wordpress.com so I’m not sure if there’s anything I can do to fix it.
Beautiful. Saved me a lot of time. I was using Adobe’s AutoComplete component in their extras package, which stopped working in sdk 3.5. This one is a lot better.
Nick,
Awesome, happy to hear that the component saved you time.
Let me know if you run into any issues.
Best,
Hillel
Hillel,
This is a great component, thanks!
I just had one question. Would it be possible to have the selected item text displayed just as text? I know that we can set the selectedItemStyleName to none, which makes it look like just text. However, there ends up being spacing to the left and to the right of the text in the textbox. As I’ve placed the autocomplete in a form, I would need it to be consistent with the other textinput boxes. Let me know if there would be a way to fix this!
Thanks,
Laura
Laura,
The spacing to the left is b/c there’s a very small text input there to allow the user to type text to select a new value. You could use the source code and modify it to not include it. You’d want to add something like ‘visible=”false” includeInLayout=”false”‘ to the ShorterTextInput tag at the end of SelectedItem.mxml.
Thanks a lot Hillel!
FYI, sourceview for the demo is broken…
404
Fantastic blog.
Hi Hillel,
You’ve created an great component, but I have one question. Is there anyway to pre-populate the field (onload). I’m using it for a messenger and there are times when the user either replys or clicks a name which will be populated in the TO field (which obviously uses your component) however I haven’t spotted a method to pre-populate the field on load.
Hey Hillel,
No need to reply, I figured it out. It appears that someone else was having the same problem and although I was using the same line of code:
autocomplete.selectedItems={items retrived from the database}
I failed to properly name my OBJECT to what it should be. All is working now. Thanks again for one of the best AutoComplete components for Flex out there.
I am facing the same issue that Vinoth facing.
setting selectedItemStyleName=”none” doesnt change the style.
Man,
Are you using the source or the SWC (if you’r using the source please try using the SWC)
Yes I am using swf file. its in the libs folder.
sorry typo, its SWC file.
Man,
Just to be clear, what are you seeing when you set it to “none”?
Hello Hillel,
Thanks for your work, this is a great component !
Unfortunately i can’t use it beacause of a tiny skin’s bug.
Please, Let’s try this simple demo :
Just enter a new entry and click the test button without press enter when your component has the focus. The button skin seems to stay in a down state.
I can’t find the issue.
Please could you help me to resolve this bug.
thanks for advance
Mat
Matboul,
I’m sorry, but your demo didn’t come through. Maybe try emailing it to me.
Hi Hillel,
Sorry to have troubled you !
It was a emphasized’s style problem with halo theme in a spark app.
The clicked button came (i don’t know why…) to emphasized state and not down state.
I bypass this pb with a s|Button.emphasized css selector with the same skin-class than default one.
Everything is ok with your nice component.
thanks | https://hillelcoren.com/2009/05/11/flex-autocomplete-version-0-98-2/ | CC-MAIN-2017-13 | refinedweb | 6,548 | 65.22 |
SJ2 Board
SJ2 board has lots of in-built sensors and a 128*64 OLED. It has 96kb of RAM and 120MHZ CPU.
Board Layout
Board Reset and Boot System
Normally, the NMI pin is not asserted, and when the board is powered on, it will boot straight to your application space which is where you flashed your program.
When the NMI pin is asserted (through the RTS signal of the USB to the serial port), and Reset is toggled, then the board will boot to a separate 8KB flash memory where NXP wrote their own bootloader. This program communicates with
flash.py script to load your program to the application memory space.
SJ2 Board Pins
1. UART Pin Mapping for SJ-2 Board
2. SSP/SPI Pin Mapping for SJ-2 Board
3. I2C Pin Mapping for SJ-2 Board
4. CAN Pin Mapping for SJ-2 Board
Pin functionality Selection
A pin's functionality may be selected based on your system design. Here are a few examples:
Select UART3 on
P4.28 and
P4.29:
#include "gpio.h" void select_uart3_on_port4(void) { // Reference "Table 84" at "LPC408x_7x User Manual.pdf" gpio__construct_with_function(GPIO__PORT_4, 28, GPIO__FUNCTION_2); // P4.28 as TXD3 gpio__construct_with_function(GPIO__PORT_4, 29, GPIO__FUNCTION_2); // P4.29 as RXD3 }
A pin function should be set based on one of the 8 possibilities. Here is an example again that sets
P0.0 and
P0.1 to UART3 (note that the
010 corresponds to
GPIO__FUNCTION_2). Of course you can also configure
P0.0 and
P0.1 as UART0 pins by using
GPIO__FUNCTION_4
#include "gpio.h" void select_uart3_on_port0(void) { gpio__construct_with_function(GPIO__PORT_0, 0, GPIO__FUNCTION_2); // P0.0 as TXD3 gpio__construct_with_function(GPIO__PORT_0, 1, GPIO__FUNCTION_2); // P0.1 as RXD3 }
Software Reference
This section focuses on the C software framework, and not the C++ sample project.
CLI Commands
CLI stands for Command Line Interface. The SJ2 C framework includes a way to interact with the board through a CLI command utilizing a CLI task. You can and should add more commands as needed to provide debugging and interaction capability with your board.
You can add your own CLI command by following the steps below:
Step 1: Declare your CLI handler function, the parameters of this function are:
app_cli__argument_t: This is not utilized in the SJ2 project, and will be
NULL
sl_string_t: There is a powerful string library type. The string is set to parameters of a CLI command, so if the command name is
taskcontroland user inputs
taskcontrol suspend led, then the string value will be set to
suspend ledwith the command name removed, see
sl_string.hfor more information
cli_output: This is a function pointer that you should use to output the data back to the CLI
// TODO: Add your CLI handler function declaration to 'cli_handlers.h' app_cli_status_e cli__your_handler(app_cli__argument_t argument, sl_string_t user_input_minus_command_name, app_cli__print_string_function cli_output);
Step 2: Add your CLI handler
// TODO: Declare your CLI handler struct, and add it at 'sj2_cli.c' inside the sj2_cli__init() function void sj2_cli__init(void) { // ... static app_cli__command_s your_cli_struct = {.command_name = "taskcontrol", .help_message_for_command = "help message", .app_cli_handler = cli__your_handler}; // TODO: Add the CLI handler: app_cli__add_command_handler(&sj2_cli_struct, &your_cli_struct); }
Step 3: Handle your CLI command
// TODO: Add your CLI handler function definition to 'handlers_general.c' (You can also create a new *.c file) app_cli_status_e cli__your_handler(app_cli__argument_t argument, sl_string_t user_input_minus_command_name, app_cli__print_string_function cli_output) { // sl_string is a powerful string library, and you can utilize the sl_string.h API to parse parameters of a command // Sample code to output data back to the CLI sl_string_t s = user_input_minus_command_name; // Re-use a string to save memory sl_string__printf(s, "Hello back to the CLI\n"); cli_output(NULL, s); return APP_CLI_STATUS__SUCCESS; } // TODO: Now, when you flash your board, you will see your 'taskcontrol' as a CLI command
Platform Glue
TODO
Newlib and floating point
printf and
scanf
At the env_arm file, there are a couple of lines you can comment out to save about 18K of flash space. This space is not significant enough when you realize the fact that the LPC controller has 512K of flash ROM space, but it increases a few seconds of programming time each and every time you program.
LINKFLAGS=[ # Use hash sign to comment out the line # This will disable ability to do printf and scanf of %f (float) # "-u", "_printf_float", # "-u", "_scanf_float",
Layout a plan or design of something that is laid out More (Definitions, Synonyms, Translation) | http://books.socialledge.com/books/embedded-drivers-real-time-operating-systems/page/sj2-board | CC-MAIN-2021-17 | refinedweb | 715 | 55.54 |
08 August 2012 17:17 [Source: ICIS news]
LONDON (ICIS)--European T2 fuel ethanol prices are at their highest levels since records began in December 2005, according to ICIS data on Wednesday.
European T2 fuel ethanol prices are assessed at €730-740/cubic metre ($901-914/cbm) FOB (free on board) ?xml:namespace>
Ethanol can be made from agricultural products such as sugar, corn and wheat.
The drought across the
Prices were further supported by tight supply caused by low operating rates and a decrease in imports.
However, according to sources, T2 fuel ethanol prices are backwardated, with September values quoted at €708-725/cbm FOB
It is thought the backwardation is because the new wheat crop is likely to enter the market in September, relieving pressure on feedstock prices, and expectations that supply will | http://www.icis.com/Articles/2012/08/08/9585261/europe-fuel-ethanol-values-hit-record-highs-on-us-drought.html | CC-MAIN-2014-41 | refinedweb | 135 | 54.36 |
Details
Description
Since people are starting talking about adding macros etc to Groovy. I have quickly packed a poor man's experimental pre-processor here and would like to show that a simple and easy pre-processor can be done and achieve some nice functionality.
With groovy-pp, now I can do things like:
1. sqlG
fn
ln
id = 1
#sql [sql] {
select firstname, lastname
into :fn, :ln
from mytable
where emp_id = :id
}
#sql [sql] {
update thetable set firstname = :fn where emp_id = :id
}
":" is used to identify host variables. It's inspired by sqlJ.
More embedded SQL has yet to be implemented.
2. Make powerful JavaBeans :
class MyBean {
#boundsupport
#bound int balance = 0
#readonly name
#writeonly message
}
And then I can use Closure to listen to the propertyChangeEvent:
bean = new MyBena()
bean.addPropertyChangeListener {
print "$
, ${it.oldValue}
, ${it.newValue}
"
}
3. A beeper
Actually it's for quick dumping messages to stdout with the information of the current method, class, and line number in the source file:
#beepon
a = 1
1.times {
#beep "a has a value of $
"
#beep a has a value of :a
}
-O< PPBeepTest$1.doCall(PPBeepTest.groovy:7): a has a value of 1
-O< PPBeepTest$1.doCall(PPBeepTest.groovy:7): a has a value of 1
The beeper can be turned off quickly by
#beepoff
There is also a macro that saves me from typing excessive "\" in a regular expressions.
All the above is sampled by a file called
PPTest.groovy
in:
bran.groovy.preprocessing
Where all the source code can be found, including a configuration file that maps macro directives to processors. All these are hard-coded right now.
Basically I have added a filter to the InputStreamCharStream, which provides source stream to the lexer. All the macro expanding happens before the lexer.
I have used # as the indicator of possible pre-processing and it has to be the first none-space character on a line to take effect. I believe some people prefer other indicators such as @. We can easily change that later.
This stuff is very experimental and probably only good for demo purpose. But I like the light-weight nature that allows everyone to quickly hack Groovy. It's quite possible that the whole thing will be tossed out once we have a formal interface for language extension.
Personally I have found the JavaBeans macros are quite powerful together with use of Closures as listeners. Note that that indexed properties are not implemented yet.
I have made minor changes to InputStreamCharStream to open up the hook. One or two new methods have been created in the DefaultGroovyMethod, which are none-essential and nice to have. A little change is also made to the Sql class, but i don't think it's required to run the test script.
Gone for sleep
Activity
One of the best things about Java is that it has no preprocessor. Please do not add this. If you want macros, make lisp-macros not textual replacement.
People's view differs of course. There are people missing pre-processing. How about SqlJ? It's preprocessing Java code and some people use it for good reasons. And it seems not bothering people who don't like it. Personallly I hate and love pre-processing. Why don't keep the option open?
My general philosophy about Groovy is: Allow optionals. I don't disable an option because I don't like it. I simply don't use it.
In the meantime I'm thinking the only thing that we need to change about the core is a few lines of looking-up in the system property and see if the user has configured a source code stream provider filter, so to separate the concerns. Nothing will happen unless the user specifies a specific system property. This way there is no danger of stepping on other's toes. I also take steps to ensure that a specific processor can only see the portion of code that it's responsible for.
SQLJ can be implemented with the sublanguage feature we are looking at without resorting to a textual preprocessor. A preprocessor is a bad idea for a number of reasons:
1) it obfuscates the code such that you cannot find the source of an error without debugging the generated code.
2) it acts at a text level, not the AST level so there is the possibility of generating illegal code or code that behaves in an unexpected way. for example, look at all the stuff that you have to escape/parenthesise in C to ensure that things like double increments don't happen.
3) maintainable code is best done without optionals. witness the maintainability of perl if you want an example of a failed experiment with lots of optional syntax.
4) adding a preprocessor ties your program to a particular build environment. look at makefiles if you want an example of how difficult it is to build source that depends on certain command line settings for the compiler.
5) it is very easy to make an external tool to do any preprocessing you think is necessary. i think that is exactly where this functionality should remain.
This is just my opinion, but now that groovy is a JSR we need to be very careful about which features are put in because we will be stuck with them. Nothing will put people off of Groovy more than it looking like a giant kitchen sink of features with no thought to cleanliness.
Mike Spille has already noted a lot of the untidyness of the current optional features of Groovy and my feeling is that will be the opinion of those on the expert group:
added Lips style macro support, so all Groovy power is available when scripting the macro. ongoing work. somewhat limited. Can pass simple use cases (see MacroTest.groovy)
Thanks Sam for pointing me to Lisp for macro idea. Never used it before. But the idea applies nicely to Groovy. The way I write it now is to run a scipt twice, the first time for the macros and expansion computation, the second time for the final script. Of course it happens behind the scene. Users will notice a slight hesitation whe running a script that defines and uses macros.
Multiple macros are collected together to form a expansion computation script that gets evaluated once. Thus it scales.
Nested macro expansion is not there.
All paramaters are passed the macros as literal strings. Writers can convert them to whatever types they desire.
example scrit:
------8< -----
package bran.groovy.preprocessing
#defmacro swap(l, r) {
tmp1 = l; // use temps to
tmp2 = r; // prevent multiple evaluations
return "__tmp = $
= ${tmp2};${tmp2}
= __tmp"
}
a = 1
b = 10
swap(a, b)
assert a == 10
assert b == 1
#echo a b
c = 'hello'
d = 'world'
swap(c, d)
assert c == 'world'
assert d == 'hello'
#echo c d
#defmacro add(l , r){ li = new Integer(l); ri = new Integer(r); return (li + ri) }
ad = add (1, 2)
assert ad == 3 // ad is 3 already at compile time
#echo ad
-------
>8----
where #echo is a "system macro" that do pretty printing of vriable names and values. Can be replaced by println.
We'd like to support some form of AST transforms in the future which may be close to what this issue was about.
The system won't allow me to attach the source code jar. What can I do? | http://jira.codehaus.org/browse/GROOVY-370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel | CC-MAIN-2015-22 | refinedweb | 1,237 | 64.41 |
Python wrapper for Mailchimp API
Project description
PostMonkey 1.0b
PostMonkey is a simple Python (2.6+) wrapper for MailChimp’s API version 1.3.
Features
Basic Usage
Once you create a PostMonkey instance with your MailChimp API key, you can use it to call MailChimp’s API methods directly:
from postmonkey import PostMonkey pm = PostMonkey('your_api_key') pm.ping() # returns u"Everything's Chimpy!"
If the MailChimp method call accepts parameters, you can supply them in the form of keyword arguments. See Examples for common use cases, and refer to the MailChimp API v1.3 official documentation for a complete list of method calls, parameters, and response objects.
MailChimp has established guidelines/limits for API usage, so please refer to their FAQ for information.
Note: it is the caller’s responsibility to supply valid method names and any required parameters. If MailChimp receives an invalid request, PostMonkey will raise a postmonkey.exceptions.MailChimpException containing the error code and message. See MailChimp API v1.3 - Exceptions for additional details.
Examples
Create a new PostMonkey instance with a 10 second timeout for requests:
from postmonkey import PostMonkey pm = PostMonkey('your_api_key', timeout=10)
Get the IDs for your campaign lists:
lists = pm.lists() # print the ID and name of each list for list in lists['data']: print list['id'], list['name']
Subscribe “emailaddress” to list ID 5:
pm.listSubscribe(id=5, email_address="emailaddress")
Catch an exception returned by MailChimp (invalid list ID):
from postmonkey import MailChimpException try: pm.listSubscribe(id=42, email_address="emailaddress") except MailChimpException, e: print e.code # 200 print e.error # u'Invalid MailChimp List ID: 42'
Get campaign data for all “sent” campaigns:
campaigns = pm.campaigns(filters=[{'status': 'sent'}]) # print the name and count of emails sent for each campaign for c in campaigns['data']: print c['title'], c['emails_sent']
Changelog
-Initial Release
-2012-10-11: Quote JSON string before sending POST request
-2013-07-03: Documentation updates and version bump (no code changes)
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/postmonkey/ | CC-MAIN-2022-21 | refinedweb | 353 | 57.16 |
Hello all,
could anyone please tel me how to control the GPS receiver integrated in the Nokia N82? I just want to read the position info and display it. Thanks!
Robin24
Hello all,
could anyone please tel me how to control the GPS receiver integrated in the Nokia N82? I just want to read the position info and display it. Thanks!
Robin24
Hi robin24 and welcome to the Python Discussion Board.
A simple example of what you want would be like this:
Note that you have to have a Script Shell signed with the Location capability.Note that you have to have a Script Shell signed with the Location capability.Code:import positioning print positioning.position()
Pankaj Nathani
Hi all,
first, thanks for the nice welcome Croozeus! :-)
Well, I tried the positioning.modules() command, however I get a permission denied error :-( So where can I get a shell with GPS capabilities? Thanks!
Robin24
1)Download "PythonScriptShell_1_4_3_3rdEd_unsigned_testrange.sis" from here
2)Go to, select the "Open Signed Online" method
3)Enter your email address, the phone's IMEI (*#06# in standby) and the application you wish to submit.
4)Select all the capabilities from the list there (even if you don't need them all, it never hurts to have them)
5)Enter the security code and agree with the "Legal agreement" and submit
6)You will receive a confirmation email and then a download link
Hi,
thanks for the info. Unfortunately, there's no way I can do the signing process since I'm blind (using Talks as the screen reader on my phone). Well, would you mind doing the signing process for me? I'd really apreciate it. My Mail address is webmaster[at]robin-kipp.de, the IMEI number of my phone is 358082012741753 Thanks a lot!
Robin24
OK, I've sent the signed Script Shell to you via email.
Thanks!!! That's really nice, now it works! Great! Thanks again!
Robin24
Hi again,
just wondering, why doesn't the following code work when I run it as a script under Python Shell -> run script but works when I type it in the interactive shell?
Code:
import positioning
mod = positioning.default_module()
print "GPS default module:"
print mod
print "More info about default module:"
print positioning.module_info(mod)
I get the "GPS default module" message and the ID, but then it stops. I don't even get the "info" message. What's wrong here? Thanks!
Robin | http://developer.nokia.com/community/discussion/showthread.php/133679-Internal-GPS-receiver-in-Nokia-N82?p=415955&viewfull=1 | CC-MAIN-2014-10 | refinedweb | 407 | 74.19 |
On Sat, Oct 28, 2017 at 11:24 PM, Soni L. <fakedme+py at gmail.com> wrote: > > On 2017-10-28 02:51 PM, Steven D'Aprano wrote: > >> >> You ignored my question: Is that the sort of thing you mean by >> composition? If not, then what do you mean by it? This is not a >> rhetorical question: I'm having difficulty understanding your proposal. >> It is too vague, and you are using terminology in ways I don't >> understand. >> >> Maybe that's my ignorance, or maybe you're using non-standard >> terminology. Either way, if I'm having trouble, probably others are too. >> Help us understand your proposal. >> > > I have to say I'm almost impressed by the constructiveness of the discussion, even though I still don't understand the point of all the square brackets in the proposal. > With composition, you can have car.key.turn() call car.ignition.start(), > without having to add car to key or ignition to key. You just have to put > both in a car and they can then see eachother! > Here it's a bit confusing that the key is thought of as part of the car. It's easy to imagine that an owner of two cars would want the same key to work for both of them. Or that one car could have multiple users with non-identical keys. I'm not sure if these things already exist in real life, but if not, it's probably just a matter of time. But let's ignore this confusion for a moment, and imagine that the example makes perfect sense. Now, it sounds like you want something like namespacing for methods and attributes within a complicated class. Maybe you could implement it using nested classes and decorators to make sure 'self' is passed to to the methods in the way you want. The usage might look roughly like: @namespacedclass class Car: @classnamespace class ignition: def start(self): ... @classnamespace class key: def turn(self): self.ignition.start() Another concern regarding the example, however, is that this seems to make it unclear what the public API of the car is. It looks like you can just as easily drive the car without having the key: just call car.ignition.start(). -- Koos -- + Koos Zevenhoven + + -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/python-ideas/2017-October/047500.html | CC-MAIN-2022-33 | refinedweb | 387 | 74.49 |
Content uploaded by William Beaver
Author content
All content in this area was uploaded by William Beaver on Oct 27, 2018
Content may be subject to copyright.
Journal of Contemporary Athletics ISSN: 1554-9933
Volume 12, Number 2 © 2018 Nova Science Publishers, Inc.
CONCUSSIONS IN THE NHL: A CASE STUDY
William Beaver
Robert Morris University, PA, USA
ABSTRACT
Concussions have become a major concern in professional sports particularly in the
National Football League and the National Hockey League with most of the attention
being directed at the NFL. With these thoughts in mind, this chapter will trace the actions
taken by the NHL beginning in 1997 to the present. In some ways, the league was
proactive in addressing concussion with the implementation of baseline neurological
testing but was slow to implement other changes such as a formal concussion protocol.
Comparisons are then be made with the NFL and conclude that the NHL did not engage
in fraud and deception in terms of concussion research, but, on the other hand, conducted
very little research on the subject. Finally, the article analyzes the league’s concussions
policies in terms of the NHL’s culture and the structure of the league, which accounts for
the conservative approach to reform.
INTRODUCTION
Concussions have become a major topic of interest in the sport’s world particularly with
the growing realization that concussions can have serious health implications, which can
impact players long after they have left the game. Much of the attention has focused on the
National Football League (NFL) where a best-selling book, League of Denial, and a major
motion picture, Concussion, have directly confronted the issues surrounding head injuries in
the league. More important, however, was the $900 million settlement between the NFL and
former players, which provides compensation for those who suffered long-term harm from
concussions. Much less attention has been given to concussions in the National Hockey
League (NHL), despite the fact that concussions are commonplace. Although there are more
concussions in the NFL, the number in the NHL are such that the league has attempted to
address the issue for nearly two decades.
The article will attempt to provide a clearer understanding of how the NHL approached
concussions and is divided into three parts. The first part will chronicle how the NHL has
attempted to deal with the issue beginning in 1997 and will trace the development of league
policies. (See Table I). The second part will compare the policies of the NFL with the NHL.
Finally, I will discuss the league’s actions in terms of culture and structure.
William Beaver
124
Table I. Concussions in the NHL: Chronology of Events
1997
● Concussion Study Group formed and baseline neurological testing is implemented.
2000
● Injury Analysis Panel formed.
● Eric Lindros sits-out 10 weeks to recover from a concussion.
2001
● Injury Analysis Panel makes first report and recommends covering exposed hard-shell
plastic padding, the wearing of helmet visors, and the installation of more forgiving glass.
2003
● Independent study reports concussions are up in the NHL because more players are
reporting them.
2004
● Injury Analysis Panel disbanded
2004-2005
● The league begins to crack down on vicious illegal hits imposing more suspensions and
fines
2007
● Gary Bettman proposes penalizing all hits to the head if the initial contact point is the head
but general managers reject the proposal.
2009
● Players’ Association proposes penalties for recklessly or intentionally targeting the head.
● Concussions Working Group proposes using helmet sensors to study concussions and
investigate the long-term implications of concussions but is rejected
2010
● A formal protocol for concussions is adopted along with the approval of Rule 48.
2011
● Sidney Crosby suffers concussion and the concussion protocol is altered.
● Three former NHL enforcers who had suffered concussions die.
2012
● Insurance industry expresses concern about growing number of concussions.
2013
● Former player’s lawsuit is filed in U.S. District court.
● Independent study finds Rule 48 has had little impact on the number of concussions.
● The league institutes hybrid icing, the wearing of helmet visors, and penalizes players who
remove helmets during fights.
2014
● NHL reports that 17 percent of concussions from illegal hits, 26 percent from accidental
hits, and 44 percent from legal checks and a double-digit drop in concussions.
● NHL reports the number of physical penalties has declined
2015
● Concussions spotters system implemented.
2016
● League reports that fighting down substantially over a five-year period.
● U.S. District Court rejects NHL’s motion to dismiss player’s lawsuit and emails between
NHL officials are released.
Concussions in the NHL
125
INITIAL ACTIONS
The simplest definition of a concussion is a blow to the head that alters brain functions,
and hockey players have always known about them. For years they were often described as
dings or “getting your bell rung,” but such hits to the head were usually not taken that
seriously. Perhaps Brain Burke, now president of hockey operations for the Calgary Flames,
best described how players handled concussions in the old days. “You went to the bench,
threw-up, and as soon as you got your vision back you played” (Williams, 1996). Coaches
favored players who could “suck it up” and still give it their best to help the team win. Indeed,
players who exhibited such toughness were role models to be emulated. These values
dominated the NHL’s culture from its inception but began to change during the 1990s. The
NHL started to crack down on hits from behind but more telling was the 1995-96 season,
where 52 head injuries were reported (a large number for that era), and two-thirds of these
coming to the New York Islanders forcing the retirement of Brett Lindros who was only
twenty at the time. Besides Lindros, star players from other teams, like Dave Taylor and
Michel Goulet, were also forced into early retirement due to concussions (Elliott, 1996). In
response, the NHL and the NHL Players’ Association formed a Concussion Study Group
made up of team doctors, coaches, and trainers to improve understanding of the phenomena.
As Chip Burke then Pittsburgh Penguins team physician put it, “We thought we should look
at the problem and see what we could find out.” (Biggane, 1997) After reviewing the data and
talking to experts for six months, Burke concluded that physicians could identify and treat the
disorder, but other than that there was little scientific research available (Biggane, 1997).
The most important initiative taken by the NHL also occurred in 1997, when players
were given baseline neurological tests during the pre-season so that if a head injury occurred
a player could be tested again and not allowed to play until they returned to the baseline level.
Although the new policy was a major advancement and unique to the NHL, no formal
protocols were enacted. As Larry Pleau then the general manager of the St. Louis Blues put it,
“There is no league formula to follow” (Nelson, 2000). Hence, teams treated concussions
differently. For instance, the Blues required that a concussed player be free of headaches (the
most common symptom) to begin light workouts on a stationary bike, followed by skating,
and finally, a return to contact. Apparently, the Blues policy was driven by the fact that
Pleau’s son had his hockey career cut short by concussions. On the other hand, Dallas Stars
defenseman Richard Matvichuk was permitted to play in the 2000 playoffs with what was
described as a “partial concussion” even though Matvichuk complained of headaches (Nelson
2000). Over the next several years, the league would require concussed players to be
symptom- free as indicated by the Impact-2 test (developed by the NFL) and then have a team
physician clear them for play (Gillogly 2007), but there was no formal protocol to cover the
entire process, which was left up to each team.
The NHL also assembled a 20-person Injury Analysis Panel in 2000 consisting of
coaches, team physicians, trainers, along with referees, and chaired by Dave Dryden, a former
NHL goalie. The purpose of the panel was to gather objective data to prevent injuries. The
panel took a cautious approach toward concussions. Dryden noted his group was “struggling”
with the issue and stated “the worse thing to do with injuries is to move before you know all
the details” (Todd, 2001). Similar comments would be echoed by league officials over the
next decade. In 2001, the panel issued its first report. Among other things, they recommended
William Beaver
126
a reduction of deliberate blows to the head, the wearing of a new helmet every year with
visors, covering exposed hard-shell plastic padding, and the installation of more forgiving
glass to soften impacts from checks (NHL industry analysis panel, 2001). Only this last
recommendation was acted on with any sense of urgency. New glass was installed behind the
goal lines beginning in 2002 (Heike 2002). Some of the other recommendations would take
some years to come to fruition. For instance, hard shell padding was required on elbow pads
in 2003 but not until 2010 for shoulder pads, while helmet visors were not mandated until
2013. The panel also distributed a fact sheet and video to players about head injuries and
established a hot-line for trainers to report concussions (Dryden 2001). The Injury Analysis
Panel was disbanded in 2004 reportedly because the player’s association feared information
concerning a player’s injuries would become known to management and influence contract
negotiations or personnel decisions (Vogel 2016).
Although no new major policies regarding concussions would be forthcoming for some
years, certain incidents did raise important issues. Eric Lindros, considered one of the
league’s best players had suffered a number of concussions in his career and was widely
criticized for sitting-out 10 weeks to recover from one of them. Some implied that Lindros
was a malingerer who violated the norms of the game, feeling he should return to the ice
particularly during the 2000 playoffs, which he eventually did only to be concussed again
(Leonard, 2016). Nonetheless, his behavior of sitting-out until recovered would become the
standard in future years.
Another issue where the rules themselves, which seemed to precipitate head injuries. For
instance, in 2002 Jeremy Roenick, then playing for the Philadelphia Flyers, blindsided Mike
Madano of the Dallas Stars resulting in a concussion – the second such incident between the
two players. Yet Roenick, although eventually suspended for the hit, did not receive a penalty
for his actions. When the referee involved was later asked why, he responded that blood
would have had to be drawn for a penalty to be called. For his part, Madano called for an end
to all hits to the head, just as the NFL had outlawed spearing and clothesline tackles (Heika,
2002). Although not banning all targeted hits to the head, the league did begin to increase
suspensions and fines for vicious illegal contact, particularly after the career ending hit by
Todd Bertuzzi on Steve Moore in 2004 (Campbell 2005). In 2007, NHL commissioner Gary
Bettman proposed penalizing any hit to the head if the initial contact point was the head at a
general manager’s meeting. Only one general manager voted in favor of the proposal,
although league officials would continue to promote the idea (Agenda: NHL general
managers meeting).
Perhaps one of the reasons for a lack of new reforms was a study published in the
Canadian Journal of Neurological Sciences in 2003. The study found that concussions had
tripled between the 1986-87 and 2001-2002 seasons with a particularly sharp increase after
the 1997-98 season. Some hypothesized that players had gotten bigger, hence more violent
and damaging hits. However, the study concluded that more awareness and reporting of
concussions was the major cause, not larger players. “It’s becoming much less of a macho
thing” stated Dr. Richard Wennberg one of the authors of the study in referring to the slowly
changing league norms (Wennberg & Tator, 2003). The findings could also be interpreted as
evidence that the league’s policies requiring baseline testing were working and further
changes were unnecessary.
One of the more curious aspects of the concussion issue was the study group formed in
1997 produced so little research, although apparently the data existed to do so. Emails
Concussions in the NHL
127
between the NHL’s deputy council Julie Grand and Gary Bettman indicate the lack of
research was troubling. Grand felt the primary reason was the group was not paid, along with
a lack of leadership. In 2007, a new entity the “Concussions Working Group” was formed
(Westhead 2016). In 2011, an article was finally published which focused on the 1997-2004
regular seasons. Among other things, the article reported that headaches were the most
common symptom of concussions and games missed due to them had increased (Benson
et al., 2011). In 2009, the group (currently called the Concussion Subcommittee) wanted to
begin a pilot program where selected teams would wear helmet sensors to measure the
magnitude of blows to the head. If a player received a hit beyond a certain threshold, they
would be removed from the ice and medically evaluated. The group also wanted to conduct a
study of retired players who had suffered repeated concussions to determine if there were any
long-term health implications. In an email, Julie Grand wrote that helmet sensors would be
“too expensive”. As for the study of retired players, Grand wrote, “We don’t think anything
can be gained that would benefit our game” (Grand, 2009).
A FORMAL PROTOCOL AND RULE 48
In March 2009, the NHL Players’ Association head Paul Kelly, during the league’s
general managers meeting, announced that referees should have the option of calling penalties
on players who intentionally or recklessly target the head of an opponent. Kelly stated that
perhaps three-quarters of the players favored the rule change. However, accidental or
inadvertent hits should not be penalized since they “don’t cause a great deal of injury”
according to Kelly —a puzzling statement given the fact any hit to the head, accidental or
otherwise, can result in an injury (NHL Players’ Association urges GMs to consider
penalizing hits to head, 2009)
The union’s support of new rules to protect players marked something of a turning
point in that another important NHL constituency, along with the league’s brass favored
change. At the same time, external forces were also at work. League documents indicate the
NHL felt it was being targeted and blamed for concussions triggered by the fact that more
players were sitting out for longer periods, which many viewed as a positive development. In
addition, the NFL’s concussion issues had generated a great deal of publicity and focused
unwanted attention on the NHL (Executive Summary, 2010). The end result being the
league’s general managers finally acquiesced, which brought about two important changes. In
January 2010, the NHL announced the implementation of a formal concussion protocol
developed by the Concussion Working Group. The protocol required that if there was an
incident on the ice, a trainer on the bench would make an initial evaluation. If the trainer
deemed there was a problem, the player would be taken to an isolated area where a team
physician would administer the Sport Concussion Assessment Tool or Scat-2. During the test,
players are asked to perform simple motor skills like standing on one leg and then asked a
number of questions such as, who hit you, and then repeat four numbers spoken by the
physician in reverse order, something a non-concussed player should be able to do. Following
the exam, which takes 10 to 15 minutes, the physician would make an initial diagnosis.
If a concussion was diagnosed, the player would sit out until they were symptom-free, and a
team doctor cleared their return (NHL protocol for concussion evaluation and management,
2010).
William Beaver
128
The other major change involved rule 48 and was based on the league’s Concussion
Video Analysis Project. The new rule stated that lateral or blindside hits where the head is the
primary target are illegal and would result in a major penalty. Apparently, there was some
confusion about what constituted a blindside or lateral hit. As a result, the following year the
rule was modified. The words lateral and blindside were removed so that any hit to the head,
where the head was the primary target would be penalized. Brandon Shanahan, then senior
vice-president of the NHL, also announced that players would also be subject to
“supplemental discipline” (usually a suspension and or a fine) even if a penalty was not called
during the game. (Rosen, 2011). This would be accomplished through the Department of
Player Safety, who would video monitor all games for violations. (Department of player
safety, 2011) In 2013, the rule was changed once again. The new wording stated that an
illegal hit occurred when the head was the main point of contact and such contact was
avoidable. It should be emphasized that not all hits to the head were illegal. For instance, no
penalty would be called if contact was deemed to be accidental or where contact had been
initiated squarely through the body and the head not targeted. In addition, if a player put
themselves in a vulnerable position or changed positions just prior to contact, a penalty would
not be assessed (Brigidi, 2013).
An independent analysis of rule 48 revealed it had little impact on the number of
concussions. The study, after examining three seasons (2010 through 2012), found that 28
percent of the “interactions” that generated a concussion resulted in a penalty, but blindsiding
occurred in only 4 percent of the cases, which rule 48 explicitly addressed. The authors of the
study also felt that allowing referees to make judgement calls regarding players putting
themselves in vulnerable position, should be reevaluated (Donaldson et al., 2013). While not
directly addressing this issue, Gary Bettman pointed out that many concussions occurred
because of accidental or inadvertent contact, where teammates collide or when a player is
checked legally but then hits his head on the ice or boards and not because players were
intentionally targeting the head. The commissioner went on to add “It’s easy to say the league
needs to do X, Y and Z on concussions. It’s not that simple.” Bettman also implied that it was
important to take a cautious approach. As he put it, “Changing a rule which doesn’t address
what’s actually causing the concussions may not be the right thing to do. Changing equipment
may not necessarily be the right thing to do” (Frequently asked questions about concussions,
2011).
THE CROSBY CONCUSSION
On January 1, 2011 the “Winter Classic” was held outdoors at Pittsburgh’s Heinz Field
between the Washington Capitals and the Penguins. During the game, Sydney Crosby,
considered the league’s best player, was hit in the head from behind. The hit was ruled
accidental and no penalty was called, and Crosby, although dazed, played the rest of the
game. After the game, Crosby spoke with the media and seemed to be alright. Four days later
Crosby played in the team’s next game and was checked heavily into the boards. He left the
game and was subsequently diagnosed with a concussion. Crosby would take the next 15
months to recover. Pat Lafontaine, a Hall of Fame player, who was driven from the game by
concussions gave his take on the Crosby situation when he told The USA Today, “He wasn’t
healed from the first one and that’s why the damage was greater from the second one. When
Concussions in the NHL
129
you get hit again after a short period of time after you’ve been concussed, that’s where the
exponential damage comes in” (Alan & Brady, 2011). Shortly after the Crosby incident, the
Concussions Working Group modified the concussion protocol, which was quickly
implemented in March 2011. There would no longer be a bench evaluation if a concussion
was suspected. Instead, the player would be immediately be taken to a “quiet room” for
evaluation by a team doctor. Although the league denied Crosby’s concussions had anything
to do with the change, the timing suggests otherwise. Had Crosby been immediately removed
from the ice and evaluated after the first incident, his recovery might have been more rapid, as
recent research indicates (Peachman, 2016). To some, including Lafontaine, former
Philadelphia Flyer Keith Primeau (an often-concussed player) and then Capital’s coach Bruce
Bourdeau the change was not enough. As Bourdeau put it, “You’re never going to take the
hitting out of hockey. But at some point the hitting to the head has got to stop” (Alan &Brady,
2011). One could also add Bourdeau’s comments were particularly appropriate in regard to
star players whose extended absence could harm the entire league.
FIGHTING AND ECONOMIC ISSUES
Besides the Crosby incident, one other event brought concussions to the forefront in
2011. Three former league enforcers (fighters), Derek Boogaard, Rick Rypien, and Wade
Belek all died either by an accidental overdose of drugs or suicide and all had suffered
concussions (Branch, 2016). The NHL is the only major professional sport where trading
bare-knuckle punches is viewed as a normal part of the game and a behavior for which
players are not ejected. The tragic deaths of the three players would raise new questions. Are
players who regularly fight more prone to concussions? Were the deaths of these players
somehow linked to concussions and are there long-term health implications? Questions the
league would be forced to grapple with in the years to come. A recent analysis did shed
some light on the subject finding that 8 percent of concussions were the result of fighting
and 75 percent of these were caused by the player’s head hitting the ice (Kuhn &Solomon,
2015).
The official position of the league was that the effects of fighting and the long-term
health implications of concussions were unclear. However, in March 2016 the U.S. District
Court in Minneapolis ordered the release of emails between league officials, which appeared
to contradict the official position. For instance, deputy commissioner Bill Daly wrote,
“Fighting raises the incidence of head injuries/concussions, which raises the incidence of
depression onset, which raises the risk of personal tragedies.” There were also emails between
Daly, Bettman, and Shanahan, which discussed outlawing fighting. However, Bettman felt
that the NHL Players’ Association would block any moves to ban fighting since it would
eliminate jobs for enforcers (Branch, 2016). NHL officials maintained that part of the
problem with implementing new rules was the players’ association. According to the
collective bargaining agreement between the league and the union, proposed rule changes are
made by the NHL’s Competition Committee and then approved by the league’s Board of
Governors. Five members of the Competition Committee are current players chosen by the
players’ association, the other five are selected by the league. Seven votes are required for a
rule change to be recommended, which obviously gives the union a major voice in any rule
changes (Competition Committee, 2015).
William Beaver
130
There were also economic pressures to confront the concussion issue. Air Canada
reportedly informed the NHL that it was considering withdrawing its financial support if
more was not done to combat concussions (Concussions: New rules for treating NHL players,
2011). The league also requires teams to take out insurance policies which covers 80 percent
of a team’s the top five player salaries if they are injured for extended periods. With the
growing number of reported concussions and players sitting out longer, the risk to insurance
companies increased. As one insurance executive put it in 2012, “Right now you’ve got 10
percent of the league affected by concussions. While I don’t know where the breaking point
is, at some point, if it keeps trending this way, companies are not going to be able to insure
the NHL players for concussions.” The executive went on to say, “The entire framework of
the NHL is in jeopardy. There is a risk of bringing down the house” (Westhead, 2012). In
addition, a study released in 2014 found injuries cost the league $200 million a year, with
head and neck injuries costing $60 million a year and accounted for missed games than any
other type of injury (Brean, 2014). The study highlights the on-going dilemma the NHL
faced--injures were expensive but would making the game less physical be ultimately more
expensive due to reduced fan interest and revenues?
That said, the league would continue to make minor changes. For instance, more
padding would be added along the boards, while other improvements were made to the glass
surrounding the ice. In 2013, players were finally required to wear helmet visors, while
players who removed their helmets during fights would be penalized. Hybrid icing was
also introduced, which meant icing would automatically be called rather than have players
race back for the puck, where violent contact often occurred (Rosen, 2013). In addition,
both the players’ association and the league ramped-up efforts to educate coaches, players,
and staff about head injuries (Memorandum: Concussions program update, 2012).
Gary Bettman stressed that players had to take responsibility and follow the concussion
protocol for the system to work. In this regard, various news accounts indicated that
players were able to skirt the system. In 2014, James Wisniewski, a defenseman with
the Columbus Blue Jackets, told reporters that he avoided the protocol after he went head
first into the boards. In another incident, Montreal forward Mike Weise was blindsided
during a 2014 playoff game. He went to the dressing room but returned to the ice. The
following day, the Montreal’s general manager Marc Bergevin said that the protocol had
been followed, and the team did not know Weise had a concussion. These two incidents
are indicative of an on-going problem. First, marginal or journeyman players are more
likely to attempt to avoid the system for fear that if diagnosed with a concussion and forced
to sit-out, someone will replace them. Secondly, the urgency of the situation can play a
role. Specifically, if it’s a crucial regular season game or the playoffs, with a lot at stake,
the system is more likely to be compromised by players or team officials (NHL still grappling
with concussions, 2014). In an attempt to mitigate such occurrences, the NHL introduced
a “spotter system,” borrowed from the NFL in 2015. During each game, an NHL official
with access to live streaming and video replay watches for visible signs of a concussion
and then notifies the appropriate bench. The player in question is then taken off the ice
and formally evaluated. Of course, the judgement of the spotter is paramount and
spotters have been maligned for missing what some thought were obvious visible signs
(Enger, 2015). Perhaps this is why the NHL will now have 4 concussion spotters at a
remote location as well as those on-site, and all will have authority to remove players for
the game (Clinton, 2016).
Concussions in the NHL
131
IMPACTS
What has been the impacts of the league’s various initiatives to reduce concussions? A
review of the literature by Kuhn and Solomon (2015) discovered that between 1986 and 2012
the rate of concussions increased. The lowest number occurred in 1986-87 season (.417
concussions/100 games) and the highest number in the 2011-1212 season (4.8 concussions
/100 games), while 88 percent of the concussions were related to a violent act. However, the
reasons for the increases were not more violent or larger players, but, once again, increased
awareness and reporting. In 2014, the NHL stated only 17 percent of concussions were from
illegal hits, 26 percent from accidental ones, and 44 percent were from legal checks (Stimson,
2015). In a related issue, the number of fights declined from 645 in the 2010-11 season to 317
as of March 2016, while the number of so-called “physical penalties” assessed per game
declined from 1.34 to 1.03 over a five-year period (Goss, 2016). There was also a double-
digit drop in concussions to 53 for the 2013-14 season, compared to the 2011-12 (the last full
season before the players were locked out), where the number was 78 (NHL still grappling
with concussions, 2014). Critics questioned the double-digit decline, suggesting there may
have been less reporting by players or teams in any given season, which may be directly tied
to the fact that the NHL does not require teams to disclose the specific nature of an injury.
Hence, concussions can be reported as an “upper body injury” making accurate assessments
difficult (Stinson, 2015).
THE PLAYERS’ LAWSUIT
Much of the discussion about concussions was heightened by a lawsuit filed in November
2013 by 10 former NHL players in U.S. District Court. In a nutshell, the suit alleges the
league knew or should have known about the dangers concussions posed, and as result, failed
to do enough to reduce the risks involved and educate players about them. The suit seeks
medical care for those requiring it and “the full measure of damages” allowed under the law.
The suit did not specify any dollar amounts, nor was there any mention of the players’
association. Since the initial filing, more than 100 players have joined the class action lawsuit,
which is open to all 4300 players who retired before February 2013 and suffered brain trauma
and or injuries that were caused by concussions or sub-concussive contact (Mcindoe, 2013).
The NHL filed a motion to dismiss the suit, arguing that any claims by former players are
preempted by labor law, in this case, contractual agreements reached between the league and
the players’ association. However, in May 2016 a U.S. District Judge in Minneapolis ruled
against the motion allowing the player’s lawyers to proceed with the discovery phase of the
suit, which precipitated the release of 298 emails between league officials. The league
responded by declaring it would defend the suit “vigorously” (Heitner, 2016).
COMPARISONS WITH THE NFL
Any discussion of the NHL’s concussion policy will inevitably lead to comparisons with
the NFL. In this regard, both leagues formed committees to study concussions. In 1994, the
NFL announced the formation of the “Mild Traumatic Brain Injury Committee.” (MTBI) This
William Beaver
132
committee, much like the NHL’s concussion study group, was largely made up of individuals
with ties to each league, making them vulnerable to the charge that independent voices were
lacking, but that is where similarities end. The most obvious difference was in the amount of
research published. The NHL group published very little, while the opposite was true of the
MTBI. Beginning in 2003, the committee would publish 13 peer-reviewed research reports
appearing in the journal Neurosurgery. The reports were based on data gathered from team
doctors between 1996 and 2001. In general, the research supported the NFL’s contention that
concussions are not associated with long-term harm. A 2005 report claimed, “Return to play
does not involve a significant risk of a second injury either in the game or during the season”
(Petchesky, 2013) In the meantime, independent research increasingly contradicted the
MTBI. For example, a survey of NFL retirees with a history of concussions found they were 5
times more likely to suffer cognitive impairment, while an earlier study discovered that if
players had multiple concussions the chance of developing depression later in life doubled
(Faibaru-Wada & Fainar 2013). A more recent investigation by the New York Times revealed
100 diagnosed concussions were not reported to the MTBI since teams were not required to
participate in the study. The result being that concussions appeared to be less frequent than
they actually were. In addition, some of the reviewers for Neurosurgery had strong
reservations about the validity of the studies but had little impact on the decision to publish
(Schwarz et al., 2016). By 2010, the MTBI was discredited to the point that its two co-chairs
resigned, and the committee was disbanded and was replaced by the Head, Neck, and Spine
Committee. In speaking about the MTBI one of the co-chairs of the new committee stated,
“We all had issues with some of the methodologies… the inherent conflict of interest …that
was not acceptable by any modern standards or not acceptable to us” (Petchesky, 2013)
Others likened the NFL to the tobacco industry where for years bogus studies were used to
cover-up the harmful effects of smoking (Schwarz et al., 2016).
Certainly, publishing very little is better than publishing research that would later be
discredited and not many have equated the NHL with big tobacco. Nonetheless, there are
questions that arise. First, did the NHL know more about the potentially harmful effects of
concussions than were revealed? At this point, this does not appear to be the case. Although
emails indicate that league officials felt concussions could produce long-term harm, the NHL
did not conduct any studies in this regard, and when the Concussions Working Group
proposed doing so, they were turned down, which suggests the league’s position was “what
we don’t know can’t hurt us.” One can also wonder to what degree was the NHL influenced
by the NFL’s MTBI reports, which downplayed the seriousness of concussions?
An area where the NHL was proactive was the implementation of baseline neurological
testing, beginning in 1997. In contrast, the NFL did not mandate baseline testing until 2008.
Interestingly, when the NHL implemented its program, Elliot Pellman, then team physician
for the New York Jets, and a much-maligned co-chair of the MTBI, stated he tried to initiate a
similar program early on, but only 6 to 10 teams showed any interest (Biggane 1997). As
discussed, the NHL implemented its formal protocol in 2010 and the NFL in 2011, although
beginning in 2009 the NFL placed posters in locker rooms warning players about the dangers
of concussions (Coates, 2013), and the majority of teams were voluntarily following a
protocol. The formal protocols of each league are similar: a player is removed from the ice or
field and taken to the dressing room for evaluation. One important difference is before a
player returns from a concussion in the NFL an independent neurologist, and not a team
Concussions in the NHL
133
physician must approve a player’s return, removing any pressure on team doctors to clear a
player before they are recovered (NFL’s 2013 protocol for players with concussions,2013).
In 2009, the NFL publicly admitted for the first time a connection existed between head
trauma and lasting injury when league spokesperson Greg Aiello stated, “It’s quite obvious
from the medical research that’s been done that concussions can lead to long-term problems”
(Coates, 2013). For its part, the NHL has never made any such public statements, maintaining
there is no clear link between the two. Moreover, in March 2016 an NFL official admitted a
connection between developing chronic traumatic encephalopathy (CTE), the degenerative
brain disease and concussions. Gary Bettman then attempted to distance the NHL from the
NFL. He told reporters, “It’s fairly clear that playing hockey isn’t the same as playing
football”, then added, “and as we’ve said all along, we’re not going to get into a public debate
about this.” (Kilgore 2016). Bettman’s statement is, to say the least, curious. There are
obvious differences between the two sports, but the NHL is the only other professional league
which comes close to the NFL in terms of physical contact and head injuries, and although
there is no conclusive link between playing hockey and developing CTE, experts on the
subject believe that once more brains of deceased NHL players are examined a link will be
established (Kilgore, 2016). Perhaps the commissioner’s comments should be viewed in the
context of the player’s lawsuit where any public admission would be viewed as detrimental to
the league’s financial interest while the suit proceeds.
DISCUSSION
Beginning in the 1990s, the NHL began to take concussions more seriously as its culture
began to slowly change. The initial actions, like baseline neurological testing, appeared to be
prompted not by external forces, which would come later, but with the realization that star
players, the league’s most valuable commodity, could have their careers shortened and would
be an important and on-going contingency. Over the years, a policy evolved that aimed to
protect players without changing the fundamental nature of the game, which proved to be a
difficult balancing act. As discussed, by 2007 league officials wanted to penalize all direct
hits to the head but the teams, general managers nixed the idea. Although one can only
speculate as to why, since the minutes of the general managers meeting in the released emails
have been redacted, two possible reasons come to mind. First, further limitations on the more
physical play would hurt certain teams who specialize in being aggressive. Second, hitting
was an essential component of the NHL’s culture and limiting it might change the game, to
the point that it would diminish the sport’s popularity and ultimately revenues. These values
were expressed by Mike Murphy, former player and NHL executive in an email to Gary
Bettman when he wrote in regard to a less physical play, “We run the risk of having a totally
different game than we have now… Less physical, less combative etc. One we probably
wouldn’t like” (Murphy 2008).
The same values were also embraced by the players’ association. Why it took until 2009
for the union to recommend that intentional hits to the head should be penalized is puzzling.
Certainly, protecting players from career ending injuries and lasting harm should be of
paramount concern for a union. Did the players also fear that new rules would change the
nature of the sport and negatively impact their careers? There are any number of players who
stay in the league because of their physical play, perhaps that’s why surveys over the years
William Beaver
134
have found that players did not want fighting eliminated (Wyshynski, 2012). As mentioned,
NHL officials indicated in their emails that the players’ association would oppose the
elimination of fighting because it would eliminate jobs for enforcers. Apparently, the union
felt a sense of loyalty to players who had been groomed in junior hockey and the minor
leagues to fight and who otherwise did not possess the skills needed to remain in the league.
In a 2013 interview, Gary Bettman stated the NHL had undergone a cultural change. He
noted the implementation and modification of rule 48, equipment changes, the formal
concussion protocol, along with more players willing to report suspected concussions
(Duhatschek, 2013). One could also add the decline of fighting and the diminished role of
enforcers to the point that many teams do not employ one. The decline in fighting is
intriguing since the NHL has never banned it. Perhaps the best explanation is the tragic deaths
of Boogaard, Rypien, and Belek and negative publicity generated had as much to do with it as
anything (Kelly, 2016). The fact that fighting is still a part of the NHL’s game points to
another important variable in understanding the concussion issue –the structure of the league.
The NHL has three prominent stakeholder groups, league officials, the teams (typically
represented by the general managers), and the players’ association. For any meaningful
change to occur, there must be some consensus, and in terms of fighting, none of the three
groups have favored banning it, despite the fact that it might improve the league’s image. On
the other hand, the addition of rule 48 illustrates how consensus is needed to effectuate
change. League officials first proposed changes in 2007, followed by the players’ association
in 2009, and finally the general managers in 2010. All nudged along by growing social
pressures to do something about concussions, spurred in part by the NFL’s difficulties.
The interplay of culture and structure in the NHL lends itself to what some have called a
glacial approach to reform. That said, one wonders if other reforms will eventually be
approved. Besides eliminating fighting, the league could penalize any hit to the head,
accidental or otherwise. Indeed, Montreal Canadians owner Geoff Molson, fearing lawsuits,
proposed such a rule change in 2011 (Vogel, 2016). Other organizations, including the
International Ice Hockey Federation, the Ontario Hockey (OHL), and NCAA Hockey, have
all done so. Perhaps at some point, the NHL will follow suit, but as yet there appears to be no
consensus on the issue, even among the players. Skilled players tend to favor the change,
while the more physical players are opposed. Not surprisingly, they argue that penalizing all
head contact would take away the hitting and physicality that epitomizes NHL (Klien, 2011).
Another element that should be pursued is safer equipment, particularly helmets. The NFL’s
experience with developing a safer helmet has been frustrating but research needs to continue.
In this regard, the NHL does have a protective equipment subcommittee, which should
investigate the helmet issue more vigorously. Finally, two minor changes could add some
credibility. First, the league needs to be more transparent and require teams to report
concussions, rather than “upper body injuries,” and second, an independent physician (not
team doctors) should clear a player’s return to the ice like the NFL.
CONCLUSION
Gary Bettman has stated that concussions are a complex issue, and there is no “magic
bullet” for reducing them, which the research supports—witness rule 48. The NHL was not
unmindful of the problems associated with concussions, and made reforms, and to this point,
Concussions in the NHL
135
has also avoided the scandal experienced by the NFL. On the other hand, additional changes
could have been put into place like penalizing all blows to the head and eliminating fighting,
which could further protect players but no consensus exists to do so. In addition, the fact that
NHL officials have never made any public statements concerning the dangers of concussions
is troubling to say nothing of the paucity of research produced by its own committees. The
most important element, however, in the league’s conservative approach, appears to be the
fear that if the culture of the NHL was changed too much, its popularity and revenues would
decline. There is no doubt that fans are drawn to the violence of hockey. Yet, fighting is down
significantly in the past few years, and the role of enforcers diminished, neither of which have
appeared to have any impact on the NHL’s popularity or revenues and suggests that other
reforms will not lead to hockey’s demise. In this regard, it will be interesting to see if more
changes are eventually implemented in future years and how the outcome of the player’s
lawsuit impacts the league.
REFERENCES
Allen, A. and E. Brady (2011, March 8). NHL concussions touch off debate. USA Today, p.
01c. Retrieved from: usatoday30.usatoday.com/sports/hockey/nhl/2011/2011-03-07-sid
eny-crosby-nhl-conncussion_N.htm
Benson, B. W. Meuwisse, J. Rizos, J. Kang, and C. Burke. (2011). A prospective study of
concussions among National Hockey League players during regular season games: The
NHL-NHLPA Concussion Program. Canadian Medical Association Journal, 183(4), 905-
911.
Biggane, B. (1997, October 12). The lost consciousness NHL has become a leader in research
on concussions. The Palm Beach Post, p.12c. Retrieved from:?
login?url=.
Branch, J. (2016, March 28).In emails NHL officials conceded concussion risks in fights. The
New York Times, p. B11. Retrieved from:
nhl-emials-links-concussions-fighting-bettman.html.
Brean, J. (2014, January 19). NHL’s violent ways costing the league $200-million a year,
study says. Retrieved from:’s/-violent-costing-
the-league-200-million-a –year-study-says.
Brigidi, M. (2013, September 9). NHL rule changes: League rewrites illegal checks to
head rule. SB Nation. Retrieved from:
rule-changes-rule-48-illegal-head-hit.
Campbell. C. (2005, September 12). Memorandum: Player Disciplinary Procedures.
Retrieved from:.
Clinton, J. (2016, September 9). NHL concussion spotter will have power to remove players.
The Hockey News. Retrieved from:-
spotter—will-have-power-to-remove=players.
Coates, T, (2013, January, 25). The NFL’s response to brain trauma; A brief history. The
Atlantic. Retrieved from: theatlantic.com/entertainment/archive/2013/01/the-nfls-respons
e-to–traumatic-brain-trauma-a brief-history/27250.
William Beaver
136
Donaldson, L. Askbridge, M., and M. Cusimano. (2013) Bodychecking rules and concussion
in elite hockey. PLOS/ONE, 8(7). Retrieved from: Journals.plos.org/plsone/artice?id=10.
137/journal.pone.0066122.
Dryden. D. (2001, September 17). Concussion Hot Line. Retrieved from:
mentcloud.org/documents/2779211-NHL0579440.html.
Duhatschek, E. (2013, November 15, 2013). Gary Bettman: King of the castle. The Globe and
Mail. Retrieved from:
42.html.
Elliiott, H. (1996 May 10). Head cases: With more players missing time than ever before,
concussions are part of the game that has NHL walking on eggshells. The Los Angeles
Times, p. 1.Retrieved from:
docview/293299071?accountid=28365.
Enger, Eric. (2015, December 2). NHL concussions protocol under question after Beaulieu
incident. Sportsnet Canada. Retrieved from:
ion-protocol-under-question-after-beaulieu-incident.
ESPN. NHL Industry Analysis Panel Recommendation. (2001, October 8). Retrieved from:
a.espncdn.com/nhl/2001/1008/126115.html.
ESPN. NHL Still Grappling with Concussions. June 8, 2004. Retrieved from: espn.go.com/
nhl/playoffs/2014/story/_id/11115889/nhl-say-concussions-decreased-protoc0l-remains-
imperfect.
Fainaru-Wada, M and S Fainaru. (2013). League of Denial. Three Rivers Press: New York,
114-131.
Heike, M. (2002, December 22). Concussions a real problem in the NHL. The Gazette, P. B6.
Retrieved from:
3886783?accountid=28356.
Heitner, D. (2016, May 16). NHL loses motion to dismiss concussion cases. Forbes.
Retrieved from:-
cases/#45 1c598b.
Gillogly, S. D. (2007, December 11). Memo: All NHL Team Physicians and Head Athletic
Trainers. Retrieved from:
9341.html.
Goss, N. (2016, March 11). Gary Bettman: Decline in Fighting is Part of How Game
Evolving. Retrieved from: nesn.com/2016/03gary-bettman-nhl-evolving-among-reasons -
for-decline-in-fighting/.
Grand, J. (2009, December 11). Re: Concussion Working Group. Retrieved from: https://.
Kelly, C. (2016, March 29). Unsealed e-mails show NHL in midst of existential crisis. The
Globe and Mail. Retrieved from:-
Kilgore, A. (2016, May 25). Former players are suing the NHL over concussions, but remain
loyal to hockey. Washington Post. Retrieved from:
captials/former-players-are-suing-the-league-over-concussions-but-remian-loyal-to-
hockey.
Klien, J. (2011, September 19). With stricter rule on hits to the head, some N.H.L stars are
split on a full ban. The New York Times, B13.
Concussions in the NHL
137
Kuhn, A.W., and G.S. Solomon. (2015). Concussion in the National Hockey League: A
systemic review of the literature. Future Medicine, 1(1). Retrieved from:
medicine.com/doi/full/10.2217/cnc.15.1.
Larose, J. Executive Summary: Hits to the head. (2010). Retrieved from:
mentcloud.org/documents/277859-NHL0141261.html.
Leonard, P. (2016, February 29). CTE worries Lindros after concussions shorten NHL career.
The New York Daily News. Retrieved from:-
lindros-leading activist-cte-research-artice-1.254787.
Mcindoe, S. (2013, March 27). Everything you wanted to about the NHL’S concussion
lawsuit. Grantland. Retrieved from: grantland.com/features/the-nhl-concussion-lawsuit/.
Nelson, K. (2000, June 2). NHL should take more charge in heading off concussions. The St.
Louis Post-Dispatch, p. D1. Retrieved from:.
proquest.com/docveiw/404050606?accountid=28365.
NFL. NFL’s Protocol for Players with Concussions. (2013, October 1). Retrieved from:
cussions.
NHL. Agenda: NHL General Manager’s Meeting. (2007, June 7). Retrieved from: https://
NHL. Concussions: New Rules for Treating NHL Players. March 14, 2011. Retrieved from:.
NHL. Frequently Asked Question about Concussions. February 7, 2011.Retrieved from:
NHL. Memorandum: Concussions Program Update. (2012, August 27). Retrieved from:
https://.
NHL. NHL Protocol for Concussion Evaluation and Management, January 8, 2010. Retrieved
from: sportsdocumetns.com/nhl/protocol-for-concussions-evaluation-and–management/2/
NHLPA.com. Competition Committee. (2015). Retrieved from:
/organization/competition-committee.
Peachman, R. (2016, August 29). Playing with concussions doubles recovery time, The New
York Times. Retrieved from:-
concussion-doubles-recovery-time.html?emc=etal.
Petchesky, B. (2013, August 30). A timeline of concussion science and NFL denial.
Deadspin. Retrieved from: deadspin.com/a-timeline-of-concussion-scienceand-nfl-denm
ail-1222395754.
Rosen, D. (2011, June 21). Board of Governors Approves Changes to Two Rules. Retrieved
from:
__________ (2013, September, 30). Hybrid Icing Tops List of Rule Changes for 2013-14.
Retrieved from:
-14/c68494.
Schwarz, A., Bogdanich, W., and Williams, J. (2016, March 24). NFL’s flawed concussion
research and ties to tobacco industry. The New York Times, p. A 1. Retrieved from: http://
nyt.ms/1q3DF5Z.
Stinson, S. (2015, August 6). Grey Areas: The NHL says concussions are down. Denying
their occurrence would do that. National. Post, B1. Retrieved from:
.edu/login?url=.
William Beaver
138
Todd, K. (2001, January 25). Concussions NHL: Panel won’t jump the gun on solutions to
head injury problem. Calgary Herald, p. D1. Retrieved from:
login?url=.
Vogel, J. (2016, June 23). No easy answers in concussion suit. The Buffalo News. Retrieved
from: Sabers.buffalonews.com/2016/06/23/no-easy-answers-in-oncussion-suit.
Westhead, R. (2012, January 31). Concussions cause NHL New Headaches. The Toronto
Star, p.s1. Retrieved from:
docview/918779583?accountid=28365.
__________ (2016, March 29). NHL concussions group folded without any conclusions after
leadership worries, PA roadblocks. TSN. Retrieved from:-
study-group-folded-wihtout-any-conclusions-after-leadership-worries-pa-roadblocks-1.46
1185.
Wennberg, R.A. and C.H. Tator. (2003). National Hockey reported concussions, 186/87-
2001-02. Canadian Journal of Neurological Sciences, 30(3), 206-209. Retrieved from:.
Wheeler, Paul. (2016, March 23). The Unsealed NHL Emails Reveal a League with a
Horrifying Attitude to Player Safety. Retrieved from:
1016/3/31/11334066/unsealed-nhl-emails.
Williams, J. (1996, July 9). Special report: Concussions, sports, crosschecking-NHL exams
rash of head injuries. Newsday, A52. Retrieved from:
=.
Wyshynski, G. (2012, February 20). Once again, NHL Players Voice Overwhelming
Opposition to Fighting Ban. Retrieved from:
dy/once-again-nhlplayers-voive-overwhelming-oppossition-to-fighting-175 57533.html.
___________ (2016, March 30). NHL Email Dump 12 Scandalous Insights. Retrieved from:
sports.yahoo.com/blogs/nhl-puck-daddy/nhl-emial-dum-12-scansalous-insightful-things-
we-learned-15303481.html. | https://www.researchgate.net/publication/328560562_Concussion_in_the_NHL_A_Case_Study | CC-MAIN-2022-27 | refinedweb | 8,458 | 54.22 |
The ASCII Cam 122."
Re:Encryption! (Score:1)
He cheats! (Score:1)
:)
Or, to quote NTK.NET... (Score:1)
OK, But it's the only way to watch sports. (Score:1)
not new (Score:1)
Re:Mmmmm..... (Score:1)
Don't say I never gave you anything
IRNI
Re:someone at SGI developed this 5 years ago (Score:1)
re: New pgp sig (Score:1)
To send me encrypted mail:
Run the end credits of Debbie Does Dallas backwards through the Elmer Fudd biff filter and use results as pgp key data.
Oh yeah
Grell
"May you live In Fortean Times"
Re:been done before (Score:1)
been done before (Score:1)
Text-mode atrocities unlimited (Score:1)
People are starving to death in the world, and you had time for this [mr.net]?!!
Re:Encryption! (Score:1)
Neat, but not really new. (Score:1)
Re:Nothing new (Score:1)
(Even just a man page reference would work...)
I gotta try it on my Indy.
--K
i know (Score:1)
New meaning.... (Score:1)
ok ya. (Score:1)
i'm too late all of the time, and i tend to repeat someone else's idea =)
Huh? No it's not. (Score:1)
This ain't news ! (Score:1)
acquire_image | anytopnm | pnmscale
ppmquant -fs -2 | pnmtopgm | pgmtopbm | \
pbmtoascii -2x4 >foo.txt
(where acquire_image is a suitable program, which spits an image in any format out of the desired source)
Complete (?) list of google screen shot caches (Score:1)
Re:Encryption! (Score:1)
tagline
Re:Cipher (Score:1)
it's actually a clever way to get a reasonable level of randomness (cheaper than renting atmospheric noise collectors, another method I've heard of).
sean
Re:Looks a little fishy... (Score:1)
Re:It's About Time... (Score:1)
Re:It's About Time... (Score:1)
Arguably, they would be better off with a 2x2 pixel x 2 bit grey decomposition, from a pure fidelity point of view, but using ascii is really cool. I was actually a bit dissapointed there was grey-scale at all. Just using different chars for that (".+*" for light through dark grey, "|-\/" for edge features, f.ex) would have been cooler, IMANHO.
Re:SSH and Telnet (Score:1)
slashdotted already (Score:1)
What really drives opensource? (Score:1)
Maybe that what got John Carmack started too?
Reminds me of ASCII Doom ... (Score:1)
-Moondog
Re:Use of Greyscale (Score:1)
If this ascii webcam thing is just for fun, though, then hey, nice job. It's probably pretty educational to write something like that.
slashcode web cacheing project (Score:1)
Obviously this is something that is desperatly needed.
Re:MosASCII (Score:1)
echo $email | sed s/[A-Z]//g | rot13
Re:Aim camera at senction of transparent sewer lin (Score:1)
Re:Nice (Score:1)
Uhm... I think that's a guy.
Re:nifty... (Score:1)
Re:Cipher (Score:1)
never thought about using hasciicam for chyphering
if you do so be sure to check out the flag into hasciicam.c that says:
ascii_parms.randomval = 0;
further info on aalib documentation (info aalib)
Re:False Advertising (Score:1)
to see asciicam on lynx you need to customize size of rendered html
Re:Looks a little fishy... (Score:1)
i said CONSOLE LIVE MODE screenshots
how can i show you that without images?
png are done with gimp without editing
tell me more about your problems...
Just in time for Supper Bowl XXXV (Score:1)
Where's the hack? (Score:1)
(OT)JPEG to ASCII conversion and Goatse.cx (Score:1)
How long before that goatse guy gets a hold of this?
I used the NetPBM [sourceforge.net] suite to convert goatse.cx [goatse.cx]'s "The Receiver" image to ASCII [slashdot.org]. It didn't look very good.
Oh, and IANTGCG (I am not the Goatse.cx guy).
Like Tetris? Like drugs? Ever try combining them? [pineight.com]
The Matrix, resolved! (Score:1)
Incidentally, does anyone know where to get a green filter for my upcoming ascii-cam?
--
(For those who played Acrophobia a long time ago, I'm already crawling toward the couch.)
SGI did it all (Score:1)
but since reality.sgi.com is down more often than not, you ought to hit the google cache at:
after it fails to load directly.
cheers
Random noise? Maybe not. (Score:1)
Re: Video takes up too much bandwidth????? (Score:1)
I started using the net in 1992. I was a sophomore in high school, and I worked over the summer at the Naval Research Lab in Virginia studying solar flares. They had mosaic. But at that time, I believe they had just added support for gifs, and *everyone* was complaining about how certain sites took up too much bandwidth, and about how Netscape was adding all this "proprietary" tag support, for jpegs and other stuff...
I ask you, what happened??? All those people who complained didn't have a clue... The web has evolved for the better, just like everything else. Those people who don't accept the change are not just conservative, they're wrong.
not new (Score:1)
Nothing new (Score:1)
take another look (Score:1)
Random number generator... (Score:1)
bzzzzzzztttt (Score:1)
(==HELLo=Hihi=)
GUIGNOLISNOTDEAD(YET)!!
YUPIYUPIBORNTOBEALIVEANYWAY
+iHOXXHTIii==ii)IIII)i)))THXXHOL;
=iTXXHTi===;;;:::::::::;;;+=iILXXO+
+ii)LiiIii)TI))T)+;TI)iT)TIi===I)ii
=iiii++;+;;;=)i+;=i=;;;;;;++=i=+
))i)==++;++=)i++ii=++;;+===i=;
))I))i==iiii=++++i==)iII=,
=)ITIL)iiLHIi+==i)H==++=i)I)TII;
feww.. managed to post this as no junk
Photo - Ascii (Score:1)
Re:Cipher (lavarand actually) (Score:1)
this project was called lavarand. i spoke to the guy who started this about 3 years ago, smart guy.
heres the url for more info: [sgi.com]
.brad
Drink more tea
organicgreenteas.com [organicgreenteas.com]
Re: Video takes up too much bandwidth????? (Score:1)
What people object to is high bandwidth crap. Useless text at least loads quickly. Waiting forever for a page to load only to find it's useless is a pain.
New troll tool (Score:2)
Re:Already being used to generate random numbers (Score:2)
For those that avoided this movie (good choice!), our hero as played by Keanu "Whoa" Reeves, encodes a large block of data in his mind by using 3 random images from television. The recieving end would have gotten the images by one means while Keanu's character would have travelled a different route, as such to protect the data. Of course, that's not at all how smoothly it works out...
Neat, but hardly new... (Score:2)
WOW - pictures from beyond the grave!! (Score:2)
Re:255 color gray scale? (Score:2)
I had to chuckle over the ``color gray scale'' phrase.
Also, the human eye can not detect much difference in gray levels once you get to about 64 levels; any more than that and your gilding lilies. But then, I supposed it doesn't hurt to go ahead and use the whole byte.
:-)
All in all, I've got to say: ``Kudos to the author!'' This is one of the coolest things I've seen in quite a while. Not everyone's got a T1 into their home and this package could make crude but servicable video conferencing available for people on a budget... or can't afford a fat pipe... or live too far away from the CO for ADSL. Now I'm wondering how cheaply I can get a camera for the PCs at home...
\begin{aside}
Back in my grad school days I was doing programming involving image compression techniques (Hadamard, Haar, DCT, etc.), Viterbi encoding, etc. for use over noisy channels and the only output device I had easy access to that could produce a viewable image was the monster IBM band printer. We had some programs to produce output like the hasciicam only it used overstrikes to create much of the gray scale levels. (Other folks eventually got used to seeing before-and-after images hanging from the walls like wallpaper.) We eventually got (for another study) a thermal printer but was a pain to use (serial input from the mainframe, required expensive paper that turned yellow green after a while, etc.) This brings back some memories.
\end{aside}
--
OK, I shoulda been more explicit (Score:2)
what is he holding? (Score:2)
Encryption! (Score:2)
IMHO, it would be an extremly bad source of random noise. Large chunks of the image would be the static bits of the image, and the rest of it would be fairly repetitive - the face of whoever sat infront of the computer, or whatever.
Re:You could use it as a CCTV camera.... (Score:2)
Only if they commited a Capital crime.
Re:Encryption! (Score:2)
Could it be used... (Score:2)
Re:Encryption! (Score:2)
Re:slashdoted - google cache link for screenshot (Score:2)
Maybe they should film the Matrix sequels... (Score:2)
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Wierd (Score:2)
Star Wars in ASCII (Score:2)
Anyone remember where it is? I lost the link
Here's the link [asciimation.co.nz]
You could have just searched google [google.com].
Already being used to generate random numbers (Score:2)
Train a camera on some lava lamps. Take a picture. Process bit stream. Random numbers.
The processing used to generate the ascii art here would probably reduce the randomness. Sorry. Try again.
--
Re:SSH and Telnet (Score:2)
earliest Video to ASCII tool? (Score:2)
Nice to see a full open source release though rather than just an IRIX binary.
--LP
aatv (Score:2)
I dont have much else to say about it
(pls dont mod me down, I'm not logged in and dont have the oportunity to dselect the 'add 1 pt karma thing
Re:SSH and Telnet (Score:2)
I feel Robbed! (Score:2)
O
-+-
|
^
It's About Time... (Score:2)
-Karl
excerpt from early Matrix script draft (Score:2)
An ASCII web-cam? Yeah.
The monitors are packed with bizarre characters and fuzzy shapes.
CYPHER
I've been watching it so long, I can make everything out. It used to be blonde, brunette, redhead... but now it's nothing more than a bunch of geeks with too much time on their hands.
Neo nods.
CYPHER
You want a drink?
He pours Neo a drink from a large plastic jug.
Neo takes a sip and it almost kills him. Cypher pounds on his back.
CYPHER
Good shit, huh? Dozer makes it... calls it Jolt. It's good for two things: degreasing engines and keeping coders awake.
Red-faced, Neo finally stops coughing.
Looks a little fishy... (Score:2)
Still just making a picture look like that deserves some credit
Never knock on Death's door:
Re:someone at SGI developed this 5 years ago (Score:2)
i started coding hasciicam on the code from xawtv's webcam by gerd knorr
the idea to make it html is the new thing
i'm not the first putting ascii into video !
Re:False Advertising (Score:2)
Re:Encryption! (Score:2)
you want random? (Score:2)
Same can be said for random irrelevant slashdot taglines...
Why? (Score:2)
Just because you can doesn't mean you should.
John
Images to ASCII?? (Score:2)
How long before that goatse guy gets a hold of this?
--
Info on Lavarand Patent (Score:2)
Being one of the Inventors of this (Beer Inspired) technique I have have a a lot of intrest in it.
Also, There will be a new website comming up in the near future (no link since it is not on the air yet) with new an improved access to a lavarand system.
Note the the intrested, the patent only covers using the data to seed a pseudo-random number generator
...
Cool, but not really ASCII (Score:2)
I think they should have used telnet for this stream.
Re:False Advertising (Score:2)] | https://slashdot.org/story/01/01/26/1617217/the-ascii-cam | CC-MAIN-2017-51 | refinedweb | 1,994 | 76.01 |
And I'm finally getting around to ripping all of my CDs (again for a lot of them). Decided that 's idea of putting the discid as a comment tag on the ogg was a good one to make for easier retagging/renaming at a later date was a good idea.
I just wish I knew where I stashed away the scripts that I used to use for this. grip seems to be behaving a little bit better now than it used to, though, at least, and it was easier to set up quickly than rewriting my scripts.
3 thoughts on “140278”
#!/usr/bin/python
import fcntl, struct, os, string
def get_toc(dev):
CDROMREADTOCHDR=0x5305
CDROMREADTOCENTRY=0x5306
CDROM_MSF=0x02
ioctlarg=’ ‘*1024
val = fcntl.ioctl(dev,CDROMREADTOCHDR,ioctlarg)
hdr = struct.unpack(‘BB’,val[:2])
tracks = []
for i in range(hdr[0],hdr[1]+1):
tracks.append(i)
tracks.append(0xAA)
toc = []
for track in tracks:
ioctlarg = struct.pack(‘BBBBBBBBiB’,track, 0, CDROM_MSF,0,0,0,0,0,0,0)
val = fcntl.ioctl(dev,CDROMREADTOCENTRY,ioctlarg)
tocentry = struct.unpack(‘BBBBBBBBiB’, val)
# print tocentry
msf = {
‘track’ : tocentry[0],
‘minute’ : tocentry[4],
‘second’ : tocentry[5],# -2, # -2 to kludge for between-track gap
‘frame’ : tocentry[6]
}
toc.append(msf)
return toc
def cddb_sum(n):
ret = 0
while n > 0:
ret = ret + (n % 10)
n = n / 10
return ret
def seconds(msf):
ret = (msf[‘minute’]*60) + msf[‘second’]
return ret
def frames(msf):
return (msf[‘minute’]*60*75) +
(msf[‘second’]*75) +
(msf[‘frame’])
def get_discinfo(toc):
csum = 0
t = 0
length = 0
trackinfo = []
for entry in toc:
if entry[‘track’] == 0xAA:
t = seconds(entry) – seconds(toc[0])
length = seconds(entry)
else:
trackinfo.append(frames(entry))
csum = csum + cddb_sum(seconds(entry))
discid = ((csum % 0xFF) << 24 | t << 8 | len(toc)-1)
discinfo = {
'discid' : discid,
'tracks' : len(toc)-1,
'trackinfo' : trackinfo,
'length' : length,
}
return discinfo
fd = os.open('/dev/scd0',os.O_RDONLY|os.O_NONBLOCK)
toc = get_toc(fd)
discinfo = get_discinfo(toc)
tracklist = string.join(map(lambda a: "%d" % a, discinfo['trackinfo']),' ')
track_info = "%(discid)08x %(tracks)d " % discinfo +
tracklist +
" %(length)s" % discinfo
print track_info
So, having been drinking, the above was clearly eligable to be a comment 🙂
you can do “vorbiscomment -a DISCID=`./discid.py` in.ogg out.ogg” with that.
Have I mentioned how much I hate python’s ioctl() routines, and how the output formats work? For fuck’s sakes, even perl does it better.
Woo hoo ogg’s for everyone! | https://velohacker.com/2002/09/25/140278/ | CC-MAIN-2018-26 | refinedweb | 403 | 55.95 |
Um, mobile-responsive, schedule view. We’ll start in the Umbraco back office and build a “document type” to hold our data. Next, we’ll learn how to add the UI for ASP.NET MVC to an Umbraco site. Last, we’ll create a simple class library for our view model and wire it up using Razor templates in Umbraco.
The Scheduler control is a highly customizable component used to create displays for calendars, appointments, agendas and timelines. With built-in adaptive rendering, the Scheduler will work across all screens and can handle touch events as if triggered by a keyboard or a mouse.
While the Scheduler component itself is very robust, it is also very easy to setup. With just a few lines of fluent API code the Scheduler is ready to be used. In the following examples we’ll learn how to create a Scheduler, add event items using the
ISchedulerEvent interface, and configure view options.
The Scheduler will bind to a collection of items that must implement the
ISchedulerEvent interface. The minimal properties necessary to bind event items to the Scheduler include:
Title,
StartDate,
EndDate, and
Description.
Let’s create a new class library project in Visual Studio called
EventSchedule. We’ll need to add a reference to our project so we can access the interface
ISchedulerEvent which is located in the Kendo.Mvc.UI library. To do this open the Reference Manager then browse and add Kendo.Mvc.dll from the UI for ASP.NET MVC install directory.
This is usually root:\Program Files (x86)\Telerik\UI for ASP.NET [version]\wrappers\aspnetmvc\Binaries\Mvc4
To work with Umbraco content we’ll need to access the
IPublishedContent interface from the
Umbraco.Cms.Core namespace. Using NuGet, add the Umbraco.Cms.Core package, this package contains everything needed to work directly with Umbraco.
Now that we have the necessary references in place, we can begin writing a class to hold our event schedule data. We’ll start by creating a new class named
MyEvent and inherit from
ISchedulerEvent, this is common practice for working with the Scheduler.
MyEvent should implement all of the properties of the
ISchedulerEvent interface.
public class MyEvent : ISchedulerEvent { public MyEvent(IPublishedContent node) { }; } }
Next we’ll begin to work directly with data from Umbraco, let’s add a constructor to the class that accepts
IPublishedContent as an argument. In the constructor we’ll map the data from Umbraco to the properties we implemented from
ISchedulerEvent. For brevity we’ll only be using the following properties:
Title,
IsAllDay,
StartDate,
EndDate and
Description.
The
Name property from
IPublishedContent will map directly to
Title. For the remaining properties, we’ll need use the
GetPropertyValue<T> helper method to retrieve strongly typed values from the Umbraco content node.
namespace EventSchedule { public class MyEvent : ISchedulerEvent { public MyEvent(IPublishedContent node) { this.Title = node.Name; this.IsAllDay = node.GetPropertyValue<DateTime>("endDate").Equals(DateTime.MinValue); this.Start = node.GetPropertyValue<DateTime>("startDate"); this.End = this.IsAllDay ? this.Start : node.GetPropertyValue<DateTime>("endDate"); this.Description = node.GetPropertyValue<String>("eventDescription"); }; } } }
The
MyEvent class is now complete, we can compile the library and deploy it to the bin directory of our Umbraco site.
Adding the UI for ASP.NET MVC components to an Umbraco site requires just a few simple steps. First, we’ll need to add the Kendo.Mvc.UI library to the bin directory of our Umbraco site. This will give us access to the fluent API needed to build the Scheduler component and satisfy our dependency for
ISchedulerEvent. Copy the UI for ASP.NET MVC binaries from installation directory on your drive, to the bin folder of your Umbraco site.
This is usually root:\Program Files (x86)\Telerik\UI for ASP.NET [version]\wrappers\aspnetmvc\Binaries\Mvc4
In the
head section of the site’s HTML we’ll add references to the Telerik CDN, these references will bring in the required CSS and JavaScript necessary to render the Scheduler component. In this example we will be using the new Material Design theme. The theme selection is determined by which CSS file references are used.
<!-- Use the Material Design theme --> <link rel="stylesheet" href="" /> <link rel="stylesheet" href="" />
The Telerik UI Components are now enabled on our Umbraco site. Next, we’ll create Umbraco document types to store event item data and wire up the view templates.
Two document types will be needed to create a complete schedule, a schedule and event item. The schedule document type will have a single property used to set what date to begin displaying the schedule from. The event item document type will hold the values that correspond to the
MyEvent view model.
We’ll create a Schedule document type and associated template using the standard Umbraco back-office features. In the Schedule document type add a property named Display Date and give it a type of Date Picker with time.
Next we’ll add an Event Item document type, this time without an associated template. Since the schedule will display all of the event information, a template is not needed. For the Event document type, add the following properties:
Start Date (Date Picker with time),
End Date (Date Picker with time), and
Description (textbox multiple).
Next, we need to allow the Schedule to contain Event Items, we’ll need to add the Event Item to the “Allowed child node types” of the Schedule document type. This will give our Schedule a parent/child relationship with Event Items.
Since Umbraco templates are MVC views we can build our template just as we would in an MVC application. Open the template we created while setting up our Schedule document type. The template should be pre-configured to inherit from
Umbraco.Web.Mvc.UmbracoTemplatePage, this will give us access to the Umbraco content values passed in through the
Model object.
We’ll need to reference some dependencies to build our Schedule view, add a reference to
Kendo.Mvc.UI and
EventSchedule above the inheritance declaration.
@using Kendo.Mvc.UI; @using EventSchedule; @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
We’ll add a variable
displayDate and get the value from the Schedule using the Umbraco strongly typed property value method
.GetPropertyValue<T>(propertyName). Next we’ll get all of the events belonging to the Schedule. Since the Event Items are all children of the Schedule they will be available in the
Children property of the Schedule as an
IEnumerable<IPublishedContent>.
Because our
EventItem class has a constructor of
IPublishedContent we can simply map the children to our custom view model using
.Select(child => new MyEvent(child)). We’ll use a variable called
events and map the events.
@using Kendo.Mvc.UI; @using EventSchedule; @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @{ Layout = "_Layout.cshtml"; var scheduleDate = Model.Content.GetPropertyValue<DateTime>("displayDate"); var events = Model.Content.Children.Select(child => new MyEvent(child)); }
To complete the view we’ll add our UI for ASP.NET MVC Scheduler control. Start by initializing the Scheduler control using the MyEvent type
@(Html.Kendo().Scheduler<MyEvent>(), this will begin the fluent API chain. The methods that come next set up the configuration of the Scheduler.
@(Html.Kendo().Scheduler<MyEvent>() )
First, give the Scheduler a name using, this will become the Scheduler element’s ID, we’ll use
uScheduler.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") )
Next, set editable to false, so the schedulers editing features are disabled.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") .Editable(false) )
Set the schedulers starting date to the
displayDate of the document.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") .Editable(false) .Date(scheduleDate) )
Using the time zone setting, tell the scheduler what time zone the schedule data belongs to.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") .Editable(false) .Date(scheduleDate) .Timezone("Etc/UTC") )
We’ll setup two views for the scheduler: a weekly calendar style view, and an agenda.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") .Editable(false) .Date(scheduleDate) .Timezone("Etc/UTC") .Views(views => { views.AgendaView(agenda => agenda.Selected(true)); views.WeekView(wv => wv.ShowWorkHours(true)); }) )
To wrap it all up, bind the scheduler to the data.
@(Html.Kendo().Scheduler<MyEvent>() .Name("uSchedule") .Editable(false) .Date(scheduleDate) .Timezone("Etc/UTC") .Views(views => { views.AgendaView(agenda => agenda.Selected(true)); views.WeekView(wv => wv.ShowWorkHours(true)); }) .BindTo(@events) )
Now the schedule is ready to be managed from the Umbraco back-office. Schedule items can be added and configured and the items will populate the front end UI.
With just a few integration steps a feature-full, mobile-responsive, schedule UI can be added to your Umbraco site. Use this guide as a starting point and add more Scheduler features like: time line views, resource scheduling and others. | http://developer.telerik.com/featured/building-a-schedule-with-umbraco/ | CC-MAIN-2017-17 | refinedweb | 1,434 | 56.76 |
The Aggregate Function
import findspark findspark.init() import pyspark sc = pyspark.SparkContext()
aggregate
Let’s assume an arbirtrary sequence of integers.
import numpy as np vals = [np.random.randint(0, 10) for _ in range(20)] vals
[5, 8, 9, 3, 0, 6, 3, 9, 8, 3, 4, 9, 5, 0, 8, 4, 2, 3, 2, 8]
rdd = sc.parallelize(vals)
Finding the
mean
Assume further that we can’t just call the handy
mean method attached to our
rdd object.
rdd.mean()
4.95
We’d create the mean by getting a sum of all values and a total count of numbers.
sum(vals) / len(vals)
4.95
In Spark, we recreate this logic using a two-fold reduce via a Sequence Operation and a Combination Operation.
The
seqOp is a reduce step that happens per-partition. Whereas the
combOp is how we take the reduced values and bring them together.
total, counts = rdd.aggregate(zeroValue=(0, 0), seqOp=(lambda x, y: (x[0] + y, x[1] + 1)), combOp=(lambda x, y: (x[0] + y[0], x[1] + y[1]))) total / counts
4.95
Under the Hood
For purposes of demonstration, let’s look at something a bit easier.
Starting at 0. Simple sum. Then take the max.
rdd.aggregate(zeroValue=0, seqOp=lambda x, y: x + y, combOp=lambda x, y: max(x, y))
29
Why did we get this value? Peeking inside the partitions of
rdd we can see the distinct groups.
brokenOut = rdd.glom().collect() brokenOut
[[5, 8, 9, 3, 0], [6, 3, 9, 8, 3], [4, 9, 5, 0, 8], [4, 2, 3, 2, 8]]
The sum inside each partition looks like
[sum(x) for x in brokenOut]
[25, 29, 26, 19]
Thus, taking the
max of each of these intermediate calculations looks like
max([sum(x) for x in brokenOut])
29
Thus, we must be careful in writing our
seqOp and
combOp functions, as their results depend on how the data is partitioned. | https://napsterinblue.github.io/notes/spark/basics/aggregate_fn/ | CC-MAIN-2021-04 | refinedweb | 329 | 64.61 |
Hi
Here is the code:
import java.io.*;
import java.util.*;
public class test{
public static void main(String args[]) throws IOException
{
BufferedReader stdin = new BufferedReader( new InputStreamReader ( System.in ));
String mystr = new String("1");
string(mystr);
System.out.println(mystr);
}
public static void string(String str){
str = new String("2");
}
}
I just wonder why mystring value is not "2"??
There might be somehow the two variables mystring and str has different memory address, but I don't know why? How can I make "mystring" changed to "2"??
Thanks
you cant. string is immutable.
i'll explain what goes on, line by line. it will help to think of java objects as some object, like a ball. it will help to think of the named variable you give as being like a piece of paper tied to the object with string.. a tag, that you have a hold of
now, yu make a new string, and somewhere in memory comes the word "1", tied to it is a label tag, mystr:
mystr-------("1")
you call a method with it, passing in the label. now, your method has the parameter named as String str.. and just like String myStr, this also attaches a label to our object in memory.. note it attaches to the object, NOT your tag:
mystr-------("1")--------str
not:
str------mystr------("1")
not that it would matter but.. you need to be clear that one object now has 2 labels (see above above)
then you make a new object and reattach the str tag to the new:
Code:
mystr--------("1") str------("2")
then the method ends, str goes out of scope and the label is destroyed:
Code:
mystr--------("1") ("2")
the new string "2" is now orphaned and will be garbage collected; it's dead
and mystr is still attached to "1"
this cannot be violated.. string cannot be changed once created..
only a method that works on the object will show a change. as soon as you use the "new" keyword, you are not changing the object; youre making another
mystr--------("1") str------("2")
mystr--------("1") ( | http://forums.devx.com/showthread.php?140499-sticky-recursion-problem&goto=nextoldest | CC-MAIN-2017-13 | refinedweb | 349 | 79.5 |
Muenchian technique of grouping is de-facto standard way of grouping in XSLT 1.0. It uses keys and is usually very fast, efficient and scalable. There used to be some problems with using Muenchian grouping in .NET though, in particular the speed was in question. To put it another way - .NET implementation of keys and generate-id() function is slow. Reportedly, as per KB article 324478, keys performance has been fixed, though I have no idea if the fix is within .NET 1.1 SP1 (.NET version 1.1.4322.2032). Anyway, writing the article on XML indexing I did some perf testing for XSLT keys and got interesting results I want to share.
Muenchian grouping includes a step of selecting unique nodes - first node for each group. Usually this is done using generate-id() or count() functions. There is another way to select nodes with unique value though - EXSLT's set:distinct() function, supported by EXSLT.NET. So I measured performance and scalability of all three methods.
The source XML is XML dump of the Orders database from the Northwind sample database, includingitten" ShipAddress="Luisenstr. 48" ShipCity="Munster"
ShipPostalCode="44087" ShipCountry="Germany" />
<!-- 414 more orders -->
</root>
)[1])]">
>
<xsl:for-each select="
orders[count(.| key('countryKey', @ShipCountry)[1]) = 1]">
<xsl:for-each
The graph view works better:
As can be seen, in .NET 1.1, Muenchian grouping using generate-id() is not only the slowest, but shows the worst scalability. Probably the reason is poor generate-id() function implementation. count() function performs much better, but still shows some scalability issues. And finally Muenchian grouping using set:distinct() function is the winner here - both in speed and good scalability. Sublinear running time, amazing. Kudos to Dimitre Novatchev for optimizing set:distinct() function implmentation in EXSLT.NET.
The bottom line - if you are looking for ways to speed up grouping in XSLT under .NET 1.X, use Muenchian grouping with set:distinct() function from EXSLT.NET to get the best perf and scalability. Otherwise use Muenchian grouping with count() function, which sucks less in .NET than generate-id() function does.
I wonder what would be results in .NET 2.0? Stay tuned guys.
TrackBack URL:
eliasen, take a look at. In short - there is no difference in .NET 2.0. All three work the same.
Have you done your performance tests on .NET 2.0 yet? Which is to be preferred now?
Thanks in advance!
--
eliasen
I cannot expect all people to move to Net 2. immediately, so 1.1 XSLT will be used for long time. Namespace manager is fast and efficiant, I created some tests. If you use namespaces with single char it is much faster, so considering it uses NameTable it is strage. Then see memory usage. Even not used namespaces (just declared) add much memory. The dump managed memory statistics. What should people do with this? I beleive that compiled styleseets will not have this problem, but there must be some way to avoid this. If you take your 5Mbs document and add 5 long namespaces to your stylesheet at top level, probably your app will go out of memory. This is not an not efficient implementation, it looks like a mistake in implementation. I icannot imagine how unused declared namespaces can impact memory usage so much. I had to add some normalization to generated XSLTs and I could reduce working set very significantly. In real world all groupping mehtods in .Negt may become inefficient, because of other problems.
Feb TCP which includes only XslTransform class has the same problem.
Yeah, I've noticed it too - each additional namespace declaration slows down transformation. That's probably because of inefficient implementation of the XmlNamespaceManager class.
AFAIK it's fixed in .NET 2.0 already.
Oleh, I am still trying to show people another problem, so just add several not used namespace to prefix binding on stylesheet element and compare times. I think you should notice a difference ;)
This page contains a single entry by Oleg Tkachenko published on March 10, 2005 2:25 PM.
Microsoft Certification Second Shot Offer - if you fail, try second time for free was the previous entry in this blog.
VB6 is dead? Amen is the next entry in this blog.
Find recent content on the main index or look in the archives to find all content. | http://www.tkachenko.com/blog/archives/000401.html | crawl-002 | refinedweb | 723 | 68.77 |
How to import d3 plugins with Webpack
Last week I started working on a new visualization and I wanted to include a couple of d3 plugins: d3-legend by Susie Lu and d3-line-chunked by Peter Beshai.
I spent some time figuring out how to include these plugins with webpack, so I’m writing this small reminder here.
As far as I know, there are at least 3 ways to import d3 and some plugins.
The wrong way to do itThe wrong way to do it
import * as d3 from 'd3'
import * as d3Legend from 'd3-svg-legend'
import * as d3LineChunked from 'd3-line-chunked'
This is bad for 2 reasons:
- we are importing the entire d3 library
- the plugins are not attached to
d3, so in order to use them we have to do something like this:
const colorLegend = d3Legend.legendColor().
The lazy way to do itThe lazy way to do it
import * as d3Base from 'd3'
import { legendColor } from 'd3-svg-legend'
import { lineChunked } from 'd3-line-chunked'
// attach all d3 plugins to the d3 library
const d3 = Object.assign(d3Base, { legendColor, lineChunked })
Here we are still importing the entire d3 library, but now the plugins are attached to the
d3 object. This means that we can use them like this:
const colorLegend = d3.legendColor(). I have to say that I start writing a visualization with this setup. It’s vey compact and pretty convenient, because we don’t have to worry about which d3 functions/submodules we need.
The efficient way to do itThe efficient way to do it
import { select, selectAll } from 'd3-selection'
import { min, extent, range, descending } from 'd3-array'
import { format } from 'd3-format'
import { scaleLinear } from 'd3-scale'
import * as request from 'd3-request' // d3 submodule (contains d3.csv, d3.json, etc)
import { legendColor } from 'd3-svg-legend' // d3 plugin
import { lineChunked } from 'd3-line-chunked' // d3 plugin
// create a Object with only the subset of functions/submodules/plugins that we need
const d3 = Object.assign(
{},
{
selectAll,
min,
extent,
range,
descending,
format,
scaleLinear,
legendColor,
lineChunked,
},
request
)
D3 version 4 is not a monolithic library like D3 version 3, but a collection of small modules. This is perfect for a module bundler like Webpack, because it means that we can include in your bundles only the functions that we actually need. As you can see, this is a bit tedious though, that’s why I start with the “lazy way to do it” and change to the “efficient way to do it” only when my visualization is basically finished. | https://www.giacomodebidda.com/posts/how-to-import-d3-plugins-with-webpack/ | CC-MAIN-2021-21 | refinedweb | 427 | 51.41 |
Consul Agent in Docker
This project is a Docker container for Consul. It's a slightly opinionated, pre-configured Consul Agent made specifically to work in the Docker ecosystem.
Getting the container
The container is very small (50MB virtual, based on Busybox) and available on the Docker Index:
$ docker pull progrium/consul
Using the container
Just trying out Consul
If you just want to run a single instance of Consul Agent to try out its functionality:
$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap
The Web UI can be enabled by adding the
-ui-dir flag:
$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui
We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of
node1. Setting the container hostname is the intended way to name the Consul Agent node.
Our recommended interface is HTTP using curl:
$ curl localhost:8500/v1/catalog/nodes
We can also use dig to interact with the DNS interface:
$ dig @0.0.0.0 -p 8600 node1.node.consul
However, if you install Consul on your host, you can use the CLI interact with the containerized Consul Agent:
$ consul members
Testing a Consul cluster on a single host
If you want to start a Consul cluster on a single host to experiment with clustering dynamics (replication, leader election), here is the recommended way to start a 3 node cluster.
Here we start the first node not with
-bootstrap, but with
-bootstrap-expect 3, which will wait until there are 3 peers connected before self-bootstrapping and becoming a working cluster.
$ docker run -d --name node1 -h node1 progrium/consul -server -bootstrap-expect 3
We can get the container's internal IP by inspecting the container. We'll put it in the env var
JOIN_IP.
$ JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)"
Then we'll start
node2 and tell it to join
node1 using
$JOIN_IP:
$ docker run -d --name node2 -h node2 progrium/consul -server -join $JOIN_IP
Now we can start
node3 the same way:
$ docker run -d --name node3 -h node3 progrium/consul -server -join $JOIN_IP
We now have a real three node cluster running on a single host. Notice we've also named the containers after their internal hostnames / node names.
We haven't published any ports to access the cluster, but we can use that as an excuse to run a fourth agent node in "client" mode (dropping the
-server). This means it doesn't participate in the consensus quorum, but can still be used to interact with the cluster. It also means it doesn't need disk persistence.
$ docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node4 -h node4 progrium/consul -join $JOIN_IP
Now we can interact with the cluster on those published ports and, if you want, play with killing, adding, and restarting nodes to see how the cluster handles it.
Running a real Consul cluster in a production environment
Setting up a real cluster on separate hosts is very similar to our single host cluster setup process, but with a few differences:
- We assume there is a private network between hosts. Each host should have an IP on this private network
- We're going to pass this private IP to Consul via the
-advertiseflag
- We're going to publish all ports, including internal Consul ports (8300, 8301, 8302), on this IP
- We set up a volume at
/datafor persistence. As an example, we'll bind mount
/mntfrom the host
Assuming we're on a host with a private IP of 10.0.1.1 and the IP of docker bridge docker0 is 172.17.42.1 we can start the first host agent:
$ docker run -d -h node1 -v /mnt:/data \ \ progrium/consul -server -advertise 10.0.1.1 -bootstrap-expect 3
On the second host, we'd run the same thing, but passing a
-join to the first node's IP. Let's say the private IP for this host is 10.0.1.2:
$ docker run -d -h node2 -v /mnt:/data \ -p 10.0.1.2:8300:8300 \ -p 10.0.1.2:8301:8301 \ -p 10.0.1.2:8301:8301/udp \ -p 10.0.1.2:8302:8302 \ -p 10.0.1.2:8302:8302/udp \ -p 10.0.1.2:8400:8400 \ -p 10.0.1.2:8500:8500 \ -p 172.17.42.1:53:53/udp \ progrium/consul -server -advertise 10.0.1.2 -join 10.0.1.1
And the third host with an IP of 10.0.1.3:
$ docker run -d -h node3 -v /mnt:/data \ -p 10.0.1.3:8300:8300 \ -p 10.0.1.3:8301:8301 \ -p 10.0.1.3:8301:8301/udp \ -p 10.0.1.3:8302:8302 \ -p 10.0.1.3:8302:8302/udp \ -p 10.0.1.3:8400:8400 \ -p 10.0.1.3:8500:8500 \ -p 172.17.42.1:53:53/udp \ progrium/consul -server -advertise 10.0.1.3 -join 10.0.1.1
That's it! Once this last node connects, it will bootstrap into a cluster. You now have a working cluster running in production on a private network.
Special Features
Runner command
Since the
docker run command to start in production is so long, a command is available to generate this for you. Running with
cmd:run <advertise-ip>[::<join-ip>[::client]] [docker-run-args...] will output an opinionated, but customizable
docker run command you can run in a subshell. For example:
$ docker run --rm progrium/consul cmd:run 10.0.1.1 -d \ progrium/consul -server -advertise 10.0.1.1 -bootstrap-expect 3
By design, it will set the hostname of the container to your host hostname, it will name the container
consul (though this can be overridden), it will bind port 53 to the Docker bridge, and the rest of the ports on the advertise IP. If no join IP is provided, it runs in
-bootstrap-expect mode with a default of 3 expected peers. Here is another example, specifying a join IP and setting more docker run arguments:
$ docker run --rm progrium/consul cmd:run 10.0.1.1::10.0.1.2 -d -v /mnt:/data -v /mnt:/data \ progrium/consul -server -advertise 10.0.1.1 -join 10.0.1.2
You may notice it lets you only run with bootstrap-expect or join, not both. Using
cmd:run assumes you will be bootstrapping with the first node and expecting 3 nodes. You can change the expected peers before bootstrap by setting the
EXPECT environment variable.
To use this convenience, you simply wrap the
cmd:run output in a subshell. Run this to see it work:
$ $(docker run --rm progrium/consul cmd:run 127.0.0.1 -it)
Client flag
Client nodes allow you to keep growing your cluster without impacting the performance of the underlying gossip protocol (they proxy requests to one of the server nodes and so are stateless).
To boot a client node using the runner command, append the string
::client onto the
<advertise-ip>::<join-ip> argument. For example:
$ docker run --rm progrium/consul cmd:run 10.0.1.4::10.0.1.2::client -d
Would create the same output as above but without the
-server consul argument.
Health checking with Docker
Consul lets you specify a shell script to run for health checks, similar to Nagios. As a container, those scripts run inside this container environment which is a minimal Busybox environment with bash and curl. For some, this is fairly limiting, so I've added some built-in convenience scripts to properly do health checking in a Docker system.
These all require you to mount the host's Docker socket to
/var/run/docker.sock when you run the Consul container.
Using check-http
check-http <container-id> <port> <path> [curl-args...]
This utility performs
curl based HTTP health checking given a container ID or name, an internal port (what the service is actually listening on inside the container) and a path. You can optionally provide extra arguments to
curl.
The HTTP request is done in a separate ephemeral container that is attached to the target container's network namespace. The utility automatically determines the internal Docker IP to run the request against. A successful request will output the response headers into Consul. An unsuccessful request will output the reason the request failed and set the check to critical. By default,
curl runs with
--retry 2 to cover local transient errors.
Using check-cmd
check-cmd <container-id> <port> <command...>
This utility performs the specified command in a separate ephemeral container based on the target container's image that is attached to that container's network namespace. Very often, this is expected to be a health check script, but can be anything that can be run as a command on this container image. For convenience, an environment variable
SERVICE_ADDR is set with the internal Docker IP and port specified here.
Using docker
The above health check utilities require the Docker binary, so it's already built-in to the container. If neither of the above fit your needs, and the container environment is too limiting, you can perform Docker operations directly to perform any containerized health check.
DNS
This container was designed assuming you'll be using it for DNS on your other containers. So it listens on port 53 inside the container to be more compatible and accessible via linking. It also has DNS recursive queries enabled, using the Google 8.8.8.8 nameserver.
When running with
cmd:run, it publishes the DNS port on the Docker bridge. You can use this with the
--dns flag in
docker run, or better yet, use it with the Docker daemon options. Here is a command you can run on Ubuntu systems that will tell Docker to use the bridge IP for DNS, otherwise use Google DNS, and use
service.consul as the search domain.
$ echo "DOCKER_OPTS='--dns 172.17.42.1 --dns 8.8.8.8 --dns-search service.consul'" >> /etc/default/docker
If you're using boot2docker on OS/X, rather than an Ubuntu host, it has a Tiny Core Linux VM running the docker containers. Use this command to set the extra Docker daemon options (as of boot2docker v1.3.1), which also uses the first DNS name server that your OS/X machine uses for name resolution outside of the boot2docker world.
$ boot2docker ssh sudo "ash -c \"echo EXTRA_ARGS=\'--dns 172.17.42.1 --dns $(scutil --dns | awk -F ': ' '/nameserver/{print $2}' | head -1) --dns-search service.consul\' > /var/lib/boot2docker/profile\""
With those extra options in place, within a Docker container, you have the appropriate entries automatically set in the
/etc/resolv.conf file. To test it out, start a Docker container that has the
dig utility installed (this example uses aanand/docker-dnsutils which is the Ubuntu image with dnsutils installed).
$ docker run --rm aanand/docker-dnsutils dig -t SRV consul +search
Runtime Configuration
Although you can extend this image to add configuration files to define services and checks, this container was designed for environments where services and checks can be configured at runtime via the HTTP API.
It's recommended you keep your check logic simple, such as using inline
curl or
ping commands. Otherwise, keep in mind the default shell is Bash, but you're running in Busybox.
If you absolutely need to customize startup configuration, you can extend this image by making a new Dockerfile based on this one and having a
config directory containing config JSON files. They will be added to the image you build via ONBUILD hooks. You can also add packages with
opkg. See docs on the Busybox image for more info.
Quickly restarting a node using the same IP issue
When testing a cluster scenario, you may kill a container and restart it again on the same host and see that it has trouble re-joining the cluster.
There is an issue when you restart a node as a new container with the same published ports that will cause heartbeats to fail and the node will flap. This is an ARP table caching problem. If you wait about 3 minutes before starting again, it should work fine. You can also manually reset the cache.
Sponsor
This project was made possible thanks to DigitalOcean.
License
BSD | https://hub.docker.com/r/noumansaleem/docker-consul/ | CC-MAIN-2017-47 | refinedweb | 2,099 | 64.2 |
so to declare a long int in Borland C++ 5.5 I use the syntax 'long int <name>'? and is a short 'short int <name>'?
so to declare a long int in Borland C++ 5.5 I use the syntax 'long int <name>'? and is a short 'short int <name>'?
>A long int can't hold all values.
Yeah, that's true. If you enter a 20 digit integer, it's not going to work. A universal solution would probably require reading it as a string, then using a series of if statements.
>so to declare a long int in Borland C++ 5.5 I use the syntax 'long int <name>'? and is a short 'short int <name>'?
I believe it would be __int64 (two underscores).
It doesn't seem to be so with cin. If cin >> num fails, it seems to leave num as it is.It doesn't seem to be so with cin. If cin >> num fails, it seems to leave num as it is.See, when you read in a value beyond the max value it wraps around.
Anyway, here's what I would try:Anyway, here's what I would try:Code:int num = 42; cout << "Please don't enter a valid integer: "; cin >> num; cout << num; //prints 42
Code:#include <iostream> #include <string> #include <sstream> enum {NOT_INTEGER = 0, OUT_OF_RANGE, OK}; int CheckRange(const std::string & a) { //If anybody knows a simpler way, please tell so! //test 1: is it fully convertible int num; std::stringstream test(a); if (test >> num && test.eof()) return OK; /*test 2a: minus sign should be at pos 0 if anywhere*/ size_t pos = a.rfind('-'); if (pos && pos != std::string::npos) return NOT_INTEGER; /*test 2b: if minus sign is at pos 0, string must be longer than 1 character*/ if (pos == 0 && a.size() == 1) return NOT_INTEGER; /*test 3: find other non-digits*/ if (a.find_first_not_of("-1234567890") == std::string::npos) return OUT_OF_RANGE; else return NOT_INTEGER; } int main() { std::string s; while (1) { std::cout << "Enter an integer: "; std::cin >> s; switch (CheckRange(s)){ case OK: std::cout << "That's a valid integer." << std::endl; break; case OUT_OF_RANGE: std::cout << "Numeric but out of range." << std::endl; break; case NOT_INTEGER: std::cout << "That's not an integer." << std::endl; break; } } }
Here's a crude method. It doesn't check for nonnumeric input though.
Code:#include <iostream> #include <limits> #include <cstring> #include <sstream> using namespace std; int main() { string s_num; cout << "Please enter a integer between " << std::numeric_limits<int>::min() << " and " << std::numeric_limits<int>::max() << ": "; cin >> s_num; ostringstream oss; oss << std::numeric_limits<int>::max(); string int_max = oss.str(); if (s_num[0] == '-') { s_num = s_num.substr(1); } if (s_num.length() == 10) { if (s_num > int_max) cout << "Outside range." << endl; else if (s_num <= int_max) cout << "Valid integer." << endl; } else if (s_num.length() < 10) cout << "Valid integer." << endl; else cout << "Outside range." << endl; return 0; }
Okay great, I get it. I'll just stick with using a long int for the input and checking it against numeric_limits<int>. The program doesn't need to be totally foolproof and I've only been learning c++ for two days
Maybe I'll make it foolproof after I've worked on arrays and functions, thanks for the help!
If by a long int you mean long, be warned: very often it would be just as large as a regular int, and you'd gain nothing.If by a long int you mean long, be warned: very often it would be just as large as a regular int, and you'd gain nothing.
Originally Posted by rickyoswaldiowOriginally Posted by rickyoswaldiow
If you mean __int64 and your compiler accepts it: why bother to check if the input is in range of int? I mean, you'd still have to validate the input otherwise (the user might enter non-numbers anyway). But why would you be interested to restrict your user to 32 bits, if you can use and are using 64-bit integers?
You could just as easily demonstrate your understanding of checking ranges with other examples (e.g precentages as suggested above). It might also be more practical to learn what to do if user enters "fourty-two" where an int is expected.;} | https://cboard.cprogramming.com/cplusplus-programming/84005-stupid-syntax-error-2.html | CC-MAIN-2017-22 | refinedweb | 702 | 75.4 |
problems regrading .properties files
problems regrading .properties files According to my struts application my i ve to register particular data into the DB..It will succefully... of formbean class. else it will throw one error msg form .properties file
See What?s On with a Television Listing Mobile Application
See What’s On with a Television Listing Mobile Application
Anyone who... night should check one’s television listings. However, not all people... can easily find details on what is on in one’s area. A good application
Hibernate Polymorphic Queries
class, it
returns the instances of subclasses also i.e a query will return all... the HQL polymorphic
query. In this example you will see a child class extends the parent class by
which all the features of parent class will be inherited
It?s Easy to See Why You Should Learn PHP
. This includes ensuring that all of the individual modules can work together... should learn how to use PHP.
There is a very good reason why it is important... for it to be coded. All that is needed to edit PHP is a text editor. Many programs can also
Must to See in Agra India
see of the Taj today is the effort of more than 20,000 workers employed from all....
Visit to Agra Fort of Agra:
The Taj may take time to simply get a good over all...Must See Sites in Agra
Agra's rich cultural past along with strong Mughal
s:textfield - Struts
s:textfield I am using the s:textfield tag in an appication and need to display % after the field on the same line. How would I go about doing this? Hi friend,
Textfield Tag Example
Which is the good website for struts 2 tutorials?
Which is the good website for struts 2 tutorials? Hi,
After... for learning Struts 2.
Suggest met the struts 2 tutorials good websites.
Thanks
Hi,
Rose India website is the good
JPA Native Queries, JPA Native Queries Tutorials
all persistent values to specified entity
class.
For jpa native query...
JPA Native Queries
In this section, you will know about the jpa native
queries and how
SQL QUERIES GUIDANCE
SQL QUERIES GUIDANCE I have a query which returns 10 rows and 10 columns. Person_Id is primary key.
The result set may be like
Person_id Name, City... there are 15 organizations.
1 O1
1 O2
1 O3
I want all the organizations
Struts 2.1.8 - Struts 2.1.8 Tutorial
, download and
install and then see the examples shipped with Struts distribution...
result types. Let's get started with the Struts 2.1.8 framework.
Struts...
Struts 2.1.8 - Struts 2.1.8 Tutorial
Hi good afternoon
Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty
GPS and it?s Competitors
GPS and it’s Competitors
... completely operated by the United State’s Department of Defense. US possess... the development of this technology, its been widely used in all over the world
queries
queries class one having the static variable and method.......
we extend class one to class two .....
In clas two if we create a object for class one then we can access the static variable that present in class one
queries
Must to See Agra Fort
Agra Fort: Must to See
The Agra Fort, must to see place, is undoubtedly... of these
which exist even today is the 'Bengali Mahal'- a good example of Akbari... and palaces inside it. All of these
make for interesting study of art
Struts Tutorials
issues with Struts Action classes. Ok, let?s get started.
StrutsTestCase... Struts application, specifically how you test the Action class.
The Action class... into a Struts enabled project.
5. Struts Action Class Wizard - Generates Java
Struts 1 Tutorial and example programs
to the Struts Action Class
This lesson is an introduction to Action Class... struts ActionFrom class and jsp page.
Using... Struts Dispatch Action
that will help you grasping the concept
Can you suggest any good book to learn struts
Can you suggest any good book to learn struts Can you suggest any good book to learn struts
hi all - Java Beginners
/Good_java_j2ee_interview_questions.html?s=1
Regards,
Prasanth HI
Struts Articles
and Arabic. You will see how to set the user locale in Struts Action classes....
4. The UI controller, defined by Struts' action class/form bean... these gaps that involves an extension to the Struts framework. All Web applications
Testing Struts Application
in the action class code.Intentionally so! The step by step progress gets.... After compiling, copy all the class files in
this folder to:
g:\tomcat41....
In the forthcoming installments, we will see advanced Struts topics like
ALL command - SQL
Specify where to place generated class files
-s... and javac generate
d class files
-s Specify where to place...ALL Command in Java & SQL Nee all commands in Java. Dear
JPA Sub-Queries
JPA Sub-Queries
In this section, you will see how to use sub-queries
in your JPA application. Sub-queries allows you for querying the vast number of
data
Struts Book - Popular Struts Books
the development of a non-trivial sample application - covering all the Struts components in a "how to use them" approach. You'll also see the Struts Tag Library in action... Software Foundation. Struts in Action is a comprehensive introduction to the Struts
Struts
Struts Tell me good struts manual
JSP SQL Tag
sql tag in
jsp.
JSTL?s database library supports database queries...;
%>
You can see in the given example that in order to connect the jsp page
<s:radio> in struts 2 is not setting the correct value. - Struts
in struts 2 is not setting the correct value. hello,
i m using the following code in my jsp:
where pa is HashMap object defined in action class with getter/setter as
pa = new HashMap();
pa.put("1","Yes");
pa.put("2
Struts Project Planning - Struts
Struts Project Planning Hi all,
I am creating a struts application.
Please suggest me following queries i have.
how do i decide how many... should i create those classes which are as table in database??
and how i do all
which data structure is good in java..? - Java Beginners
which data structure is good in java..? Hi frends,
Actually i... and vector ...etc........ i wanted to know, which technique is good to store and retrieve the data among all these techniques of data structures.........can
Google Ranking Update for Spammy Queries
ranking update today for some spammy queries". This update was worldwide and many... in their ranking and disappeared from search results.
This update targets spam queries... percent of English queries. It is a work in progress and will be completely
History of Struts
the programmer's headache. The Struts was very
promising and programmer'...History of Struts
In this section we will disuses about the history of web application and the
history of Struts. Why Struts was developed and the problems
login controller.servlet file.. (good coding stuff for reference)
login controller.servlet file.. (good coding stuff for reference) ...;
/**
* Servlet implementation class LoginController
*/
public class LoginController...;
/**
* @see HttpServlet#HttpServlet()
*/
public LoginController() {
super
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... and append it in the database.
Ultimately all the file content should be stored
struts - Struts
Struts s property tag Struts s property tag
struts hibernate integration application
Customer Information</h2>
<s:form...;roseindia"
extends="struts-default">
<action
name="...</result>
</action>
<action
name="addInfo"
class
STRUTS INTERNATIONALIZATION
introduction we shall see how to implement
i18n in a Simple JSP file of Struts.
g... in the
struts-config.xml file for all the properties files. The entry and its...
STRUTS INTERNATIONALIZATION
--------------------------------
by Farihah
C++ Starter needs help :S
) the most used lenguages & they can do all the things that i would love to do... to program. im looking for someone that could show me and answer all my
Ask Applet Questions Online
solutions to our customer’s queries, we have decided to begin this service... can solve all your queries in a very short period.
Our revolutionary... queries within sort span of time. Now, using ‘ask online questions
Struts Books
see the Struts Tag Library in action - use tags for HTML, javabeans, logical... application
Struts Action Invocation Framework (SAIF) - Adds features like Action interceptors and Inversion of Control (IoC) to Struts.
Struts - Framework
using the View component. ActionServlet, Action, ActionForm and struts-config.xml...Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary
Struts Alternative
properties of the respective Action class. Finally, the same Action instance...
Struts Alternative
Struts is very robust and widely used framework, but there exists the alternative to the struts framework
spring controller V/S stuts Action - Spring
spring controller V/S stuts Action we are going to use spring framework so what is better spring controller or struts action
struts - Struts
struts shud i write all the beans in the tag of struts-config.xml
STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS
action. In this article we will see how to achieve this. Struts provides four...STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS... of Action classes for your project. The latest version of struts provides classes
Textarea - Struts
; <action name="characterLimit1" class="...="ajax" /></head><body> <s:form action="... characters.Can any one? Given examples of struts 2 will show how to validate
What is Struts?
are now preferring Struts based applications.
Struts is good as it provides.... Since Struts is based on MVC framework, it provides all the features of
MVC...What is a Struts? Understand the Struts framework
This article tells you "
Places to See in Jaipur India
the
local cultures in its vibrant hues.
Places to See
Jaipur Zoo: Steal some time... Maharaja?s collection of Persian
and Indian miniatures, and other things related....
Must to see
Chand Pol, Ajmeri Gate, Sanganeri Gate of Jaipur City
struts
*;
import org.apache.struts.action.*;
public class LoginAction extends Action...struts <p>hi here is my code can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">
Integrate Struts, Hibernate and Spring
Integrate Struts, Hibernate and Spring
... are using one of the
best technologies (Struts, Hibernate and Spring). This tutorial is very good if
you want to learn the process of integrating these technologies
struts
}//execute
}//class
struts-config.xml
<struts...struts <p>hi here is my code in struts i want to validate my...;gt;
<html:form
<pre>
Top 10 Tips for Good Website Design
Designing a good website as to come up with all round appreciation, traffic... presence. Good website design tips reflect on the art of mastering the traffic... to content structure to device friendliness and all of these factors addressed
Struts Validator Framework - lab oriented lesson
entered by us in the struts-config.xml file .The
Action class does not do...;
}
}
===========================================
The modelAction class is derived from Action class of Struts...
of the preceding technologies, a good knowledge of Struts is equivalent to
competence
Struts - Struts
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good...://
Thanks
Struts2 - Struts
work,
see code below:-
in the Class...Struts2 S:select tag is being used in my jsp, to create a drop down list.
The drop down works very well in Mozilla, but in IE7 it behaves very
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ?
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ? try
{
Connection conn=Create...");
HttpSession s=request.getSession();
s.setAttribute("view1",l
JSP PDF books
and
JavaServer pages
Servlets are Java technology?s answer to Common Gateway... want to really write a few servlets. Good. This chapter shows you how, outlining the structure that almost all servlets follow, walking you through the steps
Java Queries
javax.swing.*;
import java.awt.*;
import java.awt.event.*;
class LoginDemo extends
datatypes queries
we prefered double datatype for this?
You can use Scanner class... java.util.*;
class ScannerExample
{
public static void main(String[] args
StringBuilder v/s StringBuffer - Java Beginners
StringBuilder v/s StringBuffer Hi All, I want to know the difference... javax.swing.*;public class ReverseExample { public static void main(String[] args...:\n" + reversed); } }}StringBuffer classesimport java.io.*;public class
StringBuilder v/s StringBuffer - Java Beginners
StringBuilder v/s StringBuffer Hi, Thank you for your prompt... in all respects to StringBuffer except that it is not synchronized, which means...*;import java.io.IOException;public class StringBufferExample { public static void
Struts MVC
request passes through the controller. In Struts all the user request passes...
Struts MVC
Struts is open source MVC framework in Java. The Struts framework is
developed and maintained by the Apache Foundation. The Struts framework Advance.. Yes we can have more than one struts config files..
Here we use SwitchAction. So better study to use switchaction class
Struts 2 Tutorial
!!!
Struts 2 Features
It is very extensible as each class of the framework... 2
The new version Struts 2.0 is a combination of the Sturts action framework...;
Struts 2
Actions Introduction
When a client request matches the action's
Hibernate Criteria load all objects from table
Hibernate Criteria load all objects from table - Learn how to load all the
data from a table (not good choice if large amount of data is present in table).
This is the first example of Hibernate Criteria which loads all the data from
Why the Google Penguin update is good for SEO
update is good for SEO, well specially for white-hat SEO and not black-hat... techniques, duplicate content all are being downgraded by search engines. Many websites....
This update is checking all the back links for relevancy and if they are just
all comment in jsp
all comment in jsp Defined all comment in jsp ?
jsp... compiled as we all know that they are not executed and they simply serve... and hence it doesn't remove them. All HTML comments are maintained in the response Projects
solutions:
EJBCommand
StrutsEJB offers a generic Struts Action class... by
combining all the three mentioned frameworks e.g. Struts, Hibernate and
Spring...
Registration Action Class and DAO code
In this section we will explain
Interceptors in Struts 2
action of interceptor in Struts 2 application like validation, exception handling...
an important work of seting all parameters from the action mapping on the
value... to input validation.
workflow: Invokes the validate method
in struts?
please it,s urgent........... session tracking? you mean session management?
we can maintain using class HttpSession.
the code follows... for later use in in any other jsp or servlet(action class) until session exist
Show all the entries of zip file.
Show all the entries of zip file.
In this tutorial, We will discuss how to show all the entries of a zip file. The ZipFile
class is used... of ZipFile class class returns name
of available entry. The hasMoreElements
struts - Struts
knows about all the data that need to be displayed. It is model who is aware about all the operations that can be applied to transform that object. It only....
-----------------------------------------------
Read for more information.
Retrieving All Rows from a Database Table
APIs and methods. See brief descriptions for retrieving all rows
from a database...
Retrieving All Rows from a Database Table
Here, you will learn how to retrieve all rows
Java Print the all components
Java Print the all components This is my code. Please tell me... java.awt.geom.*;
import java.util.Calendar;
public class Salary_report extends...;
ImageIcon images;
Container c;
Employee_report f1;
JButton print;
String s,days
SQL All Column Names
SQL All Column Names
SQL All Column Names is useful when you want to see the Field, Type, Null... on 'SQL All Column Names'. To understand and grasp
the example we create a
Stay Safe Through a Home Security Mobile Application
a home security system to work with protecting one’s home. A good... be useful. A good application can work on a good type of mobile device without... a person’s ability to send information out to a security system. This can
Hibernate named queries
Hibernate named queries Hi,
What is named queries in Hibernate?
Thanks
Struts 2 : Http Status Error 404 - Struts
Struts 2 : Http Status Error 404 Hi All,
I'm facing the below... error as shown below.
see below error for the details
code... will be highly appreciated.
Note:: All The required JAR's have been
Developing Struts Application
it to a specified instance of Action
class.(as specified in struts-config.xml... outline of Struts, we can
enumerate the following points.
All requests... of the servlets or Struts Actions...
All data submitted by user are sent
What are criteria Queries in Hibernate?
What are criteria Queries in Hibernate? Just explain me what is Criteria Query in Hibernate with examples?
Thanks
Criteria queries are very helpful in hibernate for queries based on certain criteria.
Check
java and xml problem. plz see this 1 first - XML
java and xml problem. plz see this 1 first hi, i need to write...
]]>
s
i have witten a program in java...*;
import org.w3c.dom.*;
public class CreatXMLFile {
public static void main(String
Java replace all non alphanumeric
Java replace all non alphanumeric Hi,
In Java how to replace all... all non alphanumeric.
Thanks
Hi,
You can use the replaceAll() function of String class in Java.
We will use the regular expression "[^a-zA-Z0-9
Struts2.2.1 token tag example.
;body><h1>Struts_Token_Example</h1><hr/>
<s:form
action...Struts2.2.1 token tag example.
In this example, you will see the use of taken tag of struts2.2.1. It helps
double click problem. The s:token tag merely
Struts 2.2.1 - An introduction to Struts 2.2.1 Features
all the libraries
need to setup and run Struts 2.2.1 based applications... these files on Tomcat or any other server and see the
capability of Struts...Struts 2.2.1 - An introduction to Struts 2.2.1 Features
The Struts 2.2.1 | http://roseindia.net/tutorialhelp/comment/3755 | CC-MAIN-2014-42 | refinedweb | 3,054 | 68.16 |
An article by Oliver and Soundararajan that has got a lot of attention in the last few days notes that there is a curious pattern in the distribution of the final digits of consecutive prime numbers. Prime numbers larger than 5 must end in 1, 3, 7 or 9 (else they would be divisible by either 2 or 5), and it seems that a prime ending in any of these is less likely to be followed by another prime ending in the same digit.
For example, in the first billion primes the authors find that a prime with final digit 1 is followed by one with final digit 1 about 18% of the time, but followed by a prime ending in 3, 7 and 9 with probabilities 30%, 30% and 22%.
Here I will verify this for the first 10 million primes (I don't have the patience to calculate a billion or the bandwidth to download them from some online service). The code below uses a segmented sieve of Eratosthenes to build a dictionary of dictionaries, mapping the last digit of each prime to the last digit of the following prime, to a count of the number of times that pair of last digits is encountered. Thus,
digit_count[1][3] is the number of times a prime number ending in 1 is followed by one ending in 3.
The segmented sieve works by first generating a list of prime numbers up to the square root of a number estimated to be close to but guaranteed to be larger than the 10,000,000th prime number. The 10,000,000th prime is 179,424,673, so we only need to store the square root of this number of potential prime factors, or 13,395 of them, in a fairly modestly-sized list. This list is then used to sieve all numbers starting at 7 (the first one of interest to us for this exercise) until 10,000,000 prime numbers have been found.
The code for this project is also available on my github page.
import time import math def approx_nth_prime(n): """Return an upper bound for the value of the nth prime""" return n * (math.log(n) + math.log(math.log(n))) nmax = 10000000 pmax = approx_nth_prime(nmax) print('The {:d}th prime is approximately {:d}'.format(nmax,int(pmax))) N = int(math.sqrt(pmax)) + 1 print('Our sieve will therefore contain primes up to', N) def primes_up_to(N): """A generator yielding all primes less than N.""" yield 2 # Only consider odd numbers up to N, starting at 3 bsieve = [True] * ((N-1)//2) for i,bp in enumerate(bsieve): p = 2*i + 3 if bp: yield p # Mark off all multiples of p as composite for m in range(i, (N-1)//2, p): bsieve[m] = False gen_primes = primes_up_to(N) sieve = list(gen_primes) def is_prime(n, imax): """Return True if n is prime, else return False. imax is the maximum index in the sieve of potential prime factors that needs to be considered; this should be the index of the first prime number larger than the square root of n. """ return not any(n % p == 0 for p in sieve[:imax]) digit_count = {1: {1: 0, 3: 0, 7: 0, 9: 0}, 3: {1: 0, 3: 0, 7: 0, 9: 0}, 7: {1: 0, 3: 0, 7: 0, 9: 0}, 9: {1: 0, 3: 0, 7: 0, 9: 0}} # nprimes is the number of prime numbers encountered nprimes = 0 # the most recent prime number considered (we start with the first prime number # which ends with 1,3,7 or 9 and is followed by a number ending with one of # these digits, 7 since 2, 3 and 5 are somewhat special cases. last_prime = 7 # The current prime number to consider, initially the one after 7 which is 11 n = 11 # The index of the maximum prime in our sieve we need to consider when testing # for primality: initially 2, since sieve[2] = 5 is the nearest prime larger # than sqrt(11). plim is this largest prime from the sieve. imax = 2 plim = sieve[imax] start_time = time.time() while nprimes <= nmax: # Output a progress indicator if not nprimes % 1000: print(nprimes) if is_prime(n, imax): # n is prime: update the dictionary of last digits digit_count[last_prime % 10][n % 10] += 1 last_prime = n nprimes += 1 # Move on to the next candidate (skip even numbers) n += 2 # Update imax and plim if necessary if math.sqrt(n) >= plim: imax += 1 plim = sieve[imax] end_time = time.time() print(digit_count) print('Time taken: {:.2f} s'.format(end_time - start_time))
The returned dictionary (formatted for clarity) is
{1: {1: 446808, 3: 756072, 7: 769924, 9: 526953}, 3: {1: 593196, 3: 422302, 7: 714795, 9: 769915}, 7: {1: 639384, 3: 681759, 7: 422289, 9: 756852}, 9: {1: 820369, 3: 640076, 7: 593275, 9: 446032}}
Plotting these data on a bar chart:
import numpy as np import matplotlib.pyplot as plt #}} fig, ax = plt.subplots(nrows=2, ncols=2, facecolor='#dddddd') xticks = [0,1,2,3] last_digits = [1,3,7,9] for i, d in enumerate(last_digits): ir, ic = i // 2, i % 2 this_ax = ax[ir,ic] this_ax.patch.set_alpha(1) count = np.array([digit_count[d][j] for j in last_digits]) total = sum(count) prob = count / total * 100 this_ax.bar(xticks, prob, align='center', color='maroon', ec='maroon', alpha=0.7) this_ax.set_title('Last digit of prime: {:d}'.format(d), fontsize=14) this_ax.set_xticklabels(['{:d}'.format(j) for j in last_digits]) this_ax.set_xticks(xticks) this_ax.set_yticks([0,10,20,30,40]) this_ax.set_ylim(0,35) this_ax.set_yticks([]) for j, pr in enumerate(prob): this_ax.annotate('{:.1f}%'.format(pr), xy=(j, pr), ha='center', va='top', color='w', fontsize=12) this_ax.set_xlabel('Next prime ends in') this_ax.set_frame_on(False) this_ax.tick_params(axis='x', length=0) this_ax.tick_params(axis='y', length=0) fig.subplots_adjust(wspace=0.2, hspace=0.7, left=0, bottom=0.1, right=1, top=0.95) plt.show()
The output seems to confirm the headline finding of Oliver and Soundararajan's article with approximately the same probabilities reported in the press for their larger prime number collection.
The following code visualizes these data as a heatmap of probabilities.
import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter #}} last_digits = [1,3,7,9] hmap = np.empty((4,4)) for i, d1 in enumerate(last_digits): total = sum(digit_count[d1].values()) for j, d2 in enumerate(last_digits): hmap[i,j] = digit_count[d1][d2] / total * 100 fig = plt.figure() ax = fig.add_axes([0.1,0.3,0.8,0.6]) im = ax.imshow(hmap, interpolation='nearest', cmap=plt.cm.YlOrRd, origin='lower') tick_labels = [str(d) for d in last_digits] ax.set_xticks(range(4)) ax.set_xticklabels(tick_labels) ax.set_xlabel('Last digit of second prime') ax.set_yticks(range(4)) ax.set_yticklabels(tick_labels) ax.set_ylabel('Last digit of first prime') cbar_axes = fig.add_axes([0.1,0.1,0.8,0.05]) cbar = plt.colorbar(im, orientation='horizontal', cax=cbar_axes) cbar.ax.set_xlabel('Probability /%') plt.savefig('prime_digits_hmap.png') plt.show()
There is a pleasing symmetry in this plot, particularly about the "anti-diagonal" which is mentioned but not fully explained in Oliver and Soundararajan's paper.
Share on Twitter
Share on Facebook
Comments are pre-moderated. Please be patient and your comment will appear soon.
Richard Caine 2 years, 3 months ago
Are consecutive primes ending in the sequence 1 3 7 9 infinite? Such as 11 13 17 19: 101 103 107 109: 191 193 197 199: 821 823 827 829: 3461 3463 3467 3469Link | Reply
Thanks for neat article
christian 2 years, 3 months ago
Thank you. I don't know the answer to your question for sure, but I strongly suspect that there are an infinite number of prime sequences ending in 1,3,7,9. You might start here: and ask a related question on math.stackexchange: they're generally friendly folk.Link | Reply
New Comment | https://scipython.com/blog/do-consecutive-primes-avoid-sharing-the-same-last-digit/ | CC-MAIN-2020-29 | refinedweb | 1,320 | 62.38 |
How do we make a menu item "checkable," as in, selected, and keep the state of that selection?
You'll need 2185 or later for this to work, although it's not out yet. When it is, you'll need to:
Oh Jon, you're such a tease.
hi,
how exactly do I need to implement is_checked? I've looked around the docs, but it appears nowhere.
Thanks, mates
class MyCheckableCommand(sublime_plugin.WindowCommand): def run(self): pass # do something
def is_checked(self):
return True # or False
I couldn't figure out how to get the current value from the menu entry! Really, really thank you! My problem was I tried that entry without a command, and it was behaving like a checkbox, and I expected I should grab the value, not override it with is_checked!
Thank you again =) | https://forum.sublimetext.com/t/checking-selected-menu-items/4855/1 | CC-MAIN-2016-44 | refinedweb | 138 | 75.4 |
It's official! Smart Device Framework 2.1 is out in the wild. This is more of a maintenance release, but it still has some nice new additions (amongst others: LargeIntervalTimer, ACMStream, RiffStream). The ConnectionManager has had an overhaul and we've split out the OpenNETCF.WindowsCE.Messaging namespace into a separate assembly to help componentize the framework.
Specific to the Extensions, we've made it a little easier for you to add new items to your projects by adding some new options to the code window context dialog. Now just a right-click away, you can add new classes, forms, components as well as add assembly references and web references. Personally, I've been using this for a week or so now, I find it invaluable. No more dragging the cursor all the way over the side of the screen (my display is running at 1920x1200), I just right-click and away I go. Here's a screenshot of the new context menu options:
We'll be contacting existing customers who purchased SDF 2.0 in the next few days with details of the upgrade path. | http://blog.opennetcf.com/ncowburn/CommentView,guid,cbc9cadb-6535-4f07-8e74-46932eff127e.aspx | crawl-002 | refinedweb | 187 | 72.26 |
First Steps with Server-Side UI for Blazor
This article explains how to get the Telerik UI for Blazor components in your Server-side Blazor project and start using them quickly. The process consists of the following steps:
- Set Up a Blazor Project
- Add the Telerik Blazor Components to an Existing Project
- Add a Telerik Component to a View
If you are familiar with the Telerik NuGet Feed and Blazor in general, you may want to follow the shorter, more technical getting started article: What You Need. The current article is designed as a step-by-step tutorial for new users and starts from the basics.
Step 0 - Download the Components
To follow the steps in this tutorial, you need access to the Telerik UI for Blazor components. The recommended download methods differ depending on your Telerik UI for Blazor license - trial or commercial.
Download the Free Trial Version
If you want to try Telerik UI for Blazor, you can download a free, fully functional trial. The trial offers the same functionality as the commercially licensed version.
Download the Commercial Version
The easiest way to get the commercially licensed Telerik UI for Blazor components to your development machine is to use the Progress Control Panel or to download the automated installer from your telerik.com account.
Alternatively, you can also access the
.nupkg files from our private NuGet feed or by creating a local feed from your installation.
Step 1 - Set Up a Blazor Project
Make sure that you have installed the following:
- .NET - .NET Core 3.1.x, .NET 5 or .NET 6.
- Visual Studio - Visual Studio 2019 (for .NET 3.x and .NET 5) or Visual Studio 2022 (for .NET 6).
The latest version of Telerik UI for Blazor is
3.4.0, and it supports
.NET Core 3.1.11 and .NET 6.
To create a server-side Blazor app, use a Blazor Server App project:
If you already have one, go to the Add the Telerik Blazor Components to an Existing Project section below.
To create a new project, choose one of the following methods:
Create a Project with the Telerik VS Extensions
You can use the Telerik Visual Studio Extensions that will create and automatically configure the project so that you can start using the components immediately.
If you prefer VS Code, you can use the VS Code Extension to create a Telerik-enabled project.
If you choose to create a project with the Telerik VS Extensions, you can jump directly to Step 3 - Add a Telerik Component to a View. The following sections in this article explain the manual steps to configure the project so that you can better understand the underlying process.
Create a Project with Visual Studio
To create a project manually, without using the Telerik VS Extensions, follow these steps:
Open Visual Studio (2019 for .NET 3.x and .NET 5; 2022 for .NET 6).
Create a New Project.
Choose Blazor App and click Next. Then, choose a name and location for the project and click Create.
Choose the Blazor Server App project type and click Create.
Create a Project with the CLI
If you are not running Visual Studio, you can create the Blazor project from the command prompt. See the
dotnet new command and the arguments for Blazor apps.
Step 2 - Add the Telerik Blazor Components to an Existing Project
Add the Telerik NuGet Feed to Visual Studio
The recommended distribution method for the Telerik UI for Blazor packages is the private Telerik NuGet feed.
Telerik now offers NuGet v3 API for our feed which is faster, lighter, and reduces the number of requests from NuGet clients. It is available at. We recommend switching to the v3 API since the old server will be deprecated.
If you don't have an active license, start a free trial - this will let you download the installation file, install the components, and use the Telerik NuGet feed. During the installation, select the Set up Telerik NuGet package source checkbox and the installer will configure the Telerik online NuGet feed automatically:
If you prefer to configure the NuGet package source manually, follow the steps in the Setup the Telerik Private NuGet Feed article.
Once you have added the Telerik NuGet feed, continue by enabling the components in the project.
Enable the Components in the Project
To prepare the project for the Telerik UI for Blazor components, install the
Telerik.UI.for.Blazor NuGet package, and then configure the project. For detailed instructions, see the video tutorial below or follow the instructions after the video.
1. Manage NuGet Packages
Right-click the project in the solution and select
Manage NuGet Packages:
2. Install the Telerik Package
Choose the
telerik.com feed, find the
Telerik.UI.for.Blazor package and click
Install (make sure to use the latest version). If you don't have a commercial license, you will only see
Telerik.UI.for.Blazor.Trial. Use that instead.
3. Add the JavaScript File
Add the
telerik-blazor.js file to your main index file:
~/Pages/_Host.cshtmlfor .NET 3.x and .NET 5
~/Pages/_Layout.cshtmlfor .NET 6
HTML
<head> . . . <script src="_content/Telerik.UI.for.Blazor/js/telerik-blazor.js" defer></script> <!-- For Trial licenses use <script src="_content/Telerik.UI.for.Blazor.Trial/js/telerik-blazor.js" defer></script> --> </head>
To enable the use of static assets in your project, add the following line to the startup file of your Server project:
Startup.csfor .NET 3.x and .NET 5
Program.csfor .NET 6
C#
namespace MyBlazorAppName { public class Startup { public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { //more code may be present here //make sure this is present to enable static files from a package app.UseStaticFiles(); //more code may be present here } } }
//more code may be present here //make sure this is present to enable static files from a package app.UseStaticFiles(); //more code may be present here
4. Add the Stylesheet
Register the Theme stylesheet in your main index file:
~/Pages/_Host.cshtmlfor .NET 3.x and .NET 5
~/Pages/_Layout.cshtmlfor .NET 6
<head> . . . <link rel="stylesheet" href="_content/Telerik.UI.for.Blazor/css/kendo-theme-default/all.css" /> <!-- For Trial licenses use <link rel="stylesheet" href="_content/Telerik.UI.for.Blazor.Trial/css/kendo-theme-default/all.css" /> --> </head>
5. Register the Telerik Blazor Service
Open the startup file of your Server project and register the Telerik Blazor service:
Startup.csfor .NET 3.x and .NET 5
Program.csfor .NET 6
C#
namespace MyBlazorAppName { public class Startup { public void ConfigureServices(IServiceCollection services) { //more code may be present here services.AddTelerikBlazor(); } //more code may be present here } }
//more code may be present here builder.Services.AddTelerikBlazor(); //more code may be present here
6. Add Usings
Add the following to your
~/_Imports.razor file so the project recognizes our components in all files:
_Imports.razor
@using Telerik.Blazor @using Telerik.Blazor.Components
7. Add the Telerik Layout
Next to your main layout file (by default, the
~/Shared/MainLayout.razor file in the Blazor project), add a Razor component called
TelerikLayout.razor with the following content:
@inherits LayoutComponentBase <TelerikRootComponent> @Body </TelerikRootComponent>
8. Configure the Main Layout
Open the main layout file (by default, the
~/Shared/MainLayout.razor file in the Blazor project), and add
@layout TelerikLayout as the first line in the file. This will ensure that the
TelerikRootComponent wraps all the content in
MainLayout. Alternatively, the
TelerikRootComponent can reside directly in
MainLayout, but we place it in another file for better separation of concerns.
@layout TelerikLayout @inherits LayoutComponentBase @* @Body and other code will be present here depending on your project *@
Now your project can use the Telerik UI for Blazor components.
Step 3 - Add a Telerik Component to a View
The final step is to use a component in a view and run it in the browser. For example:
Add a Button component to the
~/Components/Pages/Index.razorview:
RAZOR
<TelerikButton>Say Hello</TelerikButton>
Optionally, hook up a click handler that will show a message. The resulting view should look like this:
RAZOR
@page "/" <TelerikButton OnClick="@SayHelloHandler" ThemeColor="primary">Say Hello</TelerikButton> <br /> @helloString @code { MarkupString helloString; void SayHelloHandler() { string msg = string.Format("Hello from <strong>Telerik Blazor</strong> at {0}.<br /> Now you can use C# to write front-end!", DateTime.Now); helloString = new MarkupString(msg); } }
Run the app in the browser by pressing
F5. You should see something like this:
Now you have the Telerik components running in your Blazor app.
Next Steps
Next, you can explore the live demos and the rest of the documentation. You can also find the entire demos project in the
demos folder of your local installation. The project is runnable and you can inspect, modify, and copy its code. The project targets the latest official .NET version, and its readme file provides more details on running the project and using older versions of the framework.
Many applications have a data grid component, and you can get started with the full-featured Telerik Grid by visiting the Grid Overview article.
We recommend to get familiar with the fundametals of data binding our components. This information is applicable for all databound components.
Finally, you can explore the List of Components and pick the ones you are interested in. | https://docs.telerik.com/blazor-ui/getting-started/server-blazor | CC-MAIN-2022-27 | refinedweb | 1,544 | 65.12 |
Wikiversity:Colloquium
Contents
- 1 UploadWizard
- 2 Non-existent page text needs change from admin
- 3 Alternative paid contribution disclosure policy
- 4 This Month in Education: July 2014
- 5 learn software of computer
- 6 Sciences category
- 7 Fiction, popular culture
- 8 Tech writing community still active?
- 9 Promote wikiversity
- 10 Vision on wikiversity
- 11 Companies and markets
- 12 Suggestion: Plasmons and polaritons
- 13 Question
- 14 Technical writing Robs Screengrab
- 15 Request for importation
- 16 A lot of pages are removed from the Dutch wikiversity
- 17 This Month in Education: August 2014
- 18 Letter petitioning WMF to reverse recent decisions
- 19 Process ideas for software development
- 20 Wiki ViewStats
- 21 i enjoy too much wikiversity it helps us the student to improve our knowledge in our study
- 22 Interactive labs
- 23 Rotlink Bot and Archive.is
- 23.1 Allow Rotlink to operate?
- 23.2 Allow links to archive.is?
- 23.3 Allow Rotlink to add links to archive.is or archive.today?
- 24 Tagging a wiki book as as a research project
- 25 Nomination of Dave Braunschweig for Permanent Custodianship
- 26 Automatic transformation of XML namespaces (new research project)
- 27 Grants to improve your project
- 28 Echo and watchlist
- 29 Change in renaming process
UploadWizard[edit]
UploadWizard is a MediaWiki extension that greatly simplifies the process for uploading files to a MediaWiki wiki. To see the UploadWizard in operation, visit Commons:Special:UploadWizard. In order to activate the extension here at Wikiversity, we need community consensus to do so. Please discuss as needed and then reply to this thread and indicate
Support or
Oppose the addition of this extension. -- Dave Braunschweig (discuss • contribs) 01:28, 7 July 2014 (UTC)
Support - I've used upload wizard on commons and found it easy and efficient. As I recall it also allows us to specify fair use images and provide appropriate licensing information before the upload can occur. This may help us to obtain proper licensing per image at the time of upload or prevent upload without licensing info. --Marshallsumter (discuss • contribs) 02:13, 7 July 2014 (UTC)
Oppose Some administrators should be responsible for the local importations in fair use as it needs some licensing skills (did you know that we can publish the Eiffel Tower photos unless it has been taken during the night because of the light system copyright?) and Commons:Special:UploadWizard is forbidding the fair use.
Apart from that every other media should be hosted on Commons to allow our courses to be translated and benefit to the Commons licenses templates, huge categories and licensing specialists. Moreover when we copy some images from one wiki to another the image can change behind our back, as its local name can be identical as the Commons one.
That's why I propose my bot to migrate at least our 3,773 Category:Public domain images as I did for the French Wikiversity (please see Commons:Commons:Bots/Requests/JackBot).
Sorry for my late answer Dave Braunschweig and Marshallsumter. JackPotte (discuss • contribs) 08:42, 16 August 2014 (UTC)
- If JackPotte is correct and the Upload Wizard cannot be locally modified to allow Fair Use images then I agree; however, migrating our Public Domain images to Commons may not be a good idea. Deletionists on commons may cause the loss of some of our images which creates more work here to re-upload them as Fair Use. --Marshallsumter (discuss • contribs) 12:04, 16 August 2014 (UTC)
- To me they are separate issues. The current process and the UploadWizard are two different approaches to *how* content is uploaded. Neither one controls *what* is uploaded. Either one can ultimately be controlled by filters and/or a bot to manage the content.
- Regarding the proposal that we should be Fair Use only, in theory I agree. In practice, however, it presents problems. Content is often deleted from Commons without adequate notice here. It's happened to me, and it frequently happens to Marshall.
- Dave Braunschweig (discuss • contribs) 13:08, 16 August 2014 (UTC)
Support As long as non-free use files are not locked out. As to migrating all our free use files to Commons, this is an idea with no benefit and much harm, as we have seen. Commons files have been used here, they stand for years, and then they are deleted on some obscure technicality, which may have been necessary or not (Commons uses the Precautionary principle, but may take years to apply it, and then it's applied quirkily, and who has time to engage in these discussions?), and our pages are then damaged, even if we could claim non-free use with a rationale; to obtain the file is extra work, i.e. we'd need to get a Commons admin to undelete it and supply it. There is no benefit to Commons; if any user thinks a file should be on Commons, they may easily copy it there. However, our files should not be deleted just because they have been copied to Commons. A bot, however, could clean up names. I'd think that ideally, our public domain files should have the same name as an identical Commons file; and be linked that way. In that way, if the Commons file is deleted, there is no work to do here. --Abd (discuss • contribs) 13:46, 16 August 2014 (UTC)
- As far as I know Commons, if an image is considered of poor quality it's not deleted if it's used in a course. And letting a duplicate here will make the image on Commons unused in the paragraph at the bottom. JackPotte (discuss • contribs) 15:19, 16 August 2014 (UTC)
- That is correct, usually. However, if an image is of general utility, it should remain on Commons even if it is not used here, but that's not really our problem. (If a file is hosted in both places, that it does not display as used on Commons is a bug, my opinion. However, I can see that there could be a version problem.)
- Commons is a repository of free content. We are for the creation of educational resources, and we can use images that are not free, with a non-free rationale. An image may be used here with the belief that it is properly licensed. If that changes, it is then possible to still use it with a fair use claim.
- Our policy on non-free usage may change over time, it's within our prerogative per WMF policy. The problems that have arisen have been over a decision on Commons that a license, for content standing for years, was somehow defective. The *main* thing that we need to do is to make sure that any non-free content is machine-readably tagged. That creates a warning for any content re-user, the concern of the WMF. Jack, the content is protected if it is here, and we are responsible for determining what we protect. Commons is fantastic. And not our mission. Basically, leaving images here is the most efficient practice for our mission.
- Jack, your opposition is over an Upload Wizard that allows users to upload content here. Do you really intend to make it more difficult than it need be for users to upload images for usage here?
- By the way, we specifically do not want to confine fair use claims to custodians. That's backwards. Fair use, under WMF policy, requires a content judgment. It does not require usage of custodial tools (unless a deletion is required, but, even then, deletion decisions are still a matter of community consensus, outside of uncontroversial deletions.) --Abd (discuss • contribs) 18:06, 16 August 2014 (UTC)
- In spite of your pleading I choose to maintain my vote because we can already provide the UploadWizard for all by redirecting toward Commons. When you say that fulling it is not our mission I understand, but it seems to don't take into account that every Wikiversity course can easily be translated in several languages sooner or later. JackPotte (discuss • contribs) 18:25, 16 August 2014 (UTC)
Oppose Strong oppose - Simply because I cannot use commons. --Goldenburg111 21:15, 14 September 2014 (UTC)
- What does not using Commons have to do with whether or not we use an updated user interface here? -- Dave Braunschweig (discuss • contribs) 00:26, 15 September 2014 (UTC)
- Goldenburg, I think you have misunderstood. JackPotte seems to be arguing that we only allow uploads to Commons (which conflicts with our allowance of Fair Use), that is not the proposal here. The proposal here is to enable the Upload Wizard for Wikiversity as part of our MediaWiki setup. --Abd (discuss • contribs) 01:21, 15 September 2014 (UTC)
My apologies for being an idiot. I
Support this discussion. Thanks. --Goldenburg111 01:48, 15 September 2014 (UTC)
Non-existent page text needs change from admin[edit]
Wikivoyage If you check for a page that does not exist (e.g. asdlfkjo348fj0349jf), there are suggestions to search other WMF projects but Wikivoyage is absent. Can someone please add this? Thanks. —Justin (koavf)❤T☮C☺M☯ 01:05, 10 July 2014 (UTC)
Done - If anyone needs to find this in the future, it's in MediaWiki:Newarticletext. -- Dave Braunschweig (discuss • contribs) 01:52, 10 July 2014 (UTC)
Alternative paid contribution disclosure policy[edit]
I believe that the paid contributions disclosure policy effected by the Foundation is broad enough to potentially affect anyone who happens to use Wikiversity for off-Wikiversity education, including, for instance, participating in a Wikiversity collaborative project with the intent of getting a course credit at the institution, or making course materials available to the students as part of one’s job as an instructor. (
As part of these obligations, you must disclose your employer, client, and affiliation with respect to any contribution for which you receive, or expect to receive, compensation.)
The policy, however, allows any individual Wikimedia wiki to adopt its own, alternative policy, by the means of the community consensus. One such policy has recently been implemented at the Wikimedia Commons, and reads:
The Wikimedia Commons community does not require any disclosure of paid contributions from its contributor.
I hereby propose that a similarly relaxed, or perhaps identical, alternative paid contributions policy is adopted for the English Wikiversity just as well.
So far, Commons seem to be the only project to adopt an alternative paid contribution disclosure policy.
— Ivan Shmakov (d ▞ c) 07:42, 15 July 2014 (UTC)
- Over at en.wn, we've talked about adopting an alternative policy; our concern is that accusations of paid editing may be a weapon of choice for those seeking to expel someone from the wikimedian community, and en.wn as the recipient of much flak from some unscrupulous quarters should protect itself against specious attacks. Since I gather en.wv also takes a lot of flak, I'd encourage you to adopt an alternative policy. --Pi zero (discuss • contribs) 11:55, 15 July 2014 (UTC)
- See Wikiversity:Research guidelines#Disclosures, which has existed long before the global Wikimedia community decided to do something. -- darklama 12:47, 15 July 2014 (UTC)
- Did you note that the WMF policy now in effect covers each and every contribution, – not just ones related to research?
- For instance, I’ve just started writing (rather, mostly translating) the AVR programming introduction course here. If I’ve done that as part of my job (as in: part of my job is to make my course’s materials available to my students), while not disclosing it (and surely I didn’t) – I’ve just breached the new policy, and thus ToU, and may be subject to a legal action!
- Think of Comparative law and justice, for instance, which is
a student-written collaborative resource comparing the law and justice systems of countries around the world.Correct me if I’m wrong, but from the prior discussions I’ve got that the students have participated in this project with the intent of getting a course credit. Under the new policy, if such a student has somehow failed to disclose its affiliation (school), – it will be a violation of the new ToU.
- To me, it’s quite a harsh treatment of what’s otherwise a harmless activity.
- — Ivan Shmakov (d ▞ c) 21:25, 15 July 2014 (UTC)
… Well, I don’t seem to see much answers to the questions I’ve raised in the above back some two weeks ago, so I guess these were not the right questions to ask in the first place. So, I’ll try to put it another way.
First of all, there was some confusion over the policy, but that’s the way I understand it:
- no, this amendment does not prohibit “compensated” edits;
- neither does it encourage them;
- neither does this amendment require that one’s biases be declared, – only the fact of receiving “compensation” and the “payer”; (one’s biases may – and often do – stem from things other than “receiving payment”);
- it’s not all that hard to violate this policy and such violations do not necessarily constitute harm to either a specific project, its community, or the Foundation; (see commons:Commons talk:Requests for comment/Alternative paid contribution disclosure policy#Test case for an example of such violation.)
Now, given the now-effective requirement to
disclose [one’s] employer, client, and affiliation with respect to any contribution for which [one] receive[s], or expect[s] to receive, compensation, how exactly the Wikiversity community – and the custodians – will use the knowledge of one’s affiliation, etc. when judging one’s contributions?
Consider, for instance, the following edits.
- Special:Diff/1105485. Suppose it somehow transpires that this edit was “paid” by some J. Smith. Technically, the failure of the user to disclose his or her client is not a violation of the policy, as the policy was not effected until some seven months after this edit took place. Will, however, this edit become any worse (or better) in the eyes of the community because of the newly-found information of the party it was made on behalf of? Will this edit be reverted, and (or) the user blocked because of that?
- Special:Diff/1208608. Somehow, I came to think that the party behind the SusannaBasser account may be compensated for this edit, even though not disclosing that, and thus violating the ToU. Is, however, there any reason for this particular edit to be deemed more (or less) appropriate should the party’s
employer, client, and affiliationbe publicly disclosed in full accordance with the policy?
Are there other specific examples when the contribution’s value is to be decided (in whole or in part) based on the contributor’s affiliation, and (or) the fact it was publicly disclosed, as required by the newly-effected policy?
Thanks in advance.
— Ivan Shmakov (d ▞ c) 20:11, 1 August.
14:07, 15 July 2014 (UTC)
learn software of computer[edit]
--117.239.47.178 (discuss) 09:06, 17 July 2014 (UTC) how to learn computer software?
- It depends on whether by computer software you mean to learn computer applications or to learn computer programming. If you are looking for computer applications, try Computer Skills followed by Key Applications. If you are looking for computer programming, see Programming. -- Dave Braunschweig (discuss • contribs) 13:55, 17 July 2014 (UTC)
Sciences category[edit]
Just today a bot, specifically JackBot, changed 19 resources from being in the Category:Sciences to Category:Science. I've used both categories but do not consider them equivalent. It's a bit like the differences between languages and linguistics. Perhaps I have not been clearly differentiating between the two, but why has a bot decided to eliminate the former in preference for the latter? --Marshallsumter (discuss • contribs) 02:53, 21 July 2014 (UTC)
- Comment left for User:JackBot to reach consensus before continuing. According to bot rules, the bot must stop with a comment on its talk page. If it continues, post at Wikiversity:RCA so someone can block the bot until consensus is reached. I'll be offline most of the day today. -- Dave Braunschweig (discuss • contribs) 12:39, 21 July 2014 (UTC)
- I was treating Special:UncategorizedPages when I noticed these two categories, not linked between themselves (the former one was linked to nothing actually). If it had been Category:Sciences names I wouldn't have touched it, but in this case it seemed too much confusing and stubby to me. Moreover I based my judgment on the interwiki architecture and the former category content (eg: Relational biology in Category:Sciences vs Biology in Category:Science), which you can see here. JackPotte (discuss • contribs) 12:44, 21 July 2014 (UTC)
- I have looked at the Category:Science and noticed it has been placed immediately below the top Wikiversity category of Category:Contents. This top category contains only Category:Constructs, Category:Humanities, Category:Science, and Category:Engineering. The Category:Sciences would be better in this top category than Category:Science. The latter category also contains entities, sources, and objects of interest to science, as well as the sciences. May I suggest that the category within the top category be changed to Category:Sciences rather than Category:Science. I also have a resource Sciences which may be helpful in distinguishing between science and sciences. At present there is no science resource but I would like to create one to example the entities and things that are a focus for science as well as the scientific method. There is the resource What is science? that science redirects to. Engineering is often considered its own plural. --Marshallsumter (discuss • contribs) 16:45, 22 July 2014 (UTC)
There is some duplicate effort going on here. See Category:Categories which is described as the root category for Wikiversity, Category:Schools which is like a main category for all topics that are further organized into Category:Departments which are further organized into <subject> department categories, and there are also some School of <subject> categories as well. I would favor some simplification there, but am also inclined to create a Resources by topic category to go in Category:Resources to act as the main category for all resources organized by topic as well. -- darklama 13:18, 25 July 2014 (UTC)
- There doesn't seem to be a one or two step process to go from Category:Categories to Category:Contents. From a student's, teacher's, contributor's point of view what would be the best way to resolve this? Or, would a more diverse or all-encompassing structure that touches each be more helpful to newcomers and current contributors and participants alike? For example, Category:Categories could be in Category:Contents and vice versa, or would this create some kind of boom-loop? --Marshallsumter (discuss • contribs) 20:22, 25 July 2014 (UTC)
- Participants can click Wikiversity:Browse from the sidebar to start browsing Wikiversity. Category:Resources has been the main category for all main namespace contents up to this point, and its subcategories fill some of the lists for Wikiversity:Browse. I think Category:Contents should be renamed to describe its intended use more clearly, like resources by topic. I think topics could be connected through Category:Categories → Category:Resources → Category:Resources by topic → Category:<Topic>. Wikiversity:Browse could then list all resource topics as well while using a consistent category structure. -- darklama 22:46, 25 July 2014 (UTC)
- Perhaps Category:Contents could be renamed (moved) to Category:Resources by contents under Category:Resources. Category:Resources by topic may connote Category:Resources by department (or topic), which is also a good idea. --Marshallsumter (discuss • contribs) 01:37, 27 July 2014 (UTC)
{{BookCat}}?[edit]
BTW, speaking of uncategorized pages, the edits such as 1187784 should use {{BookCat}} (or
{{BookCat|filing = deep}}, if necessary) instead of hard-coding the category name, if only to facilitate possible future renaming.
Similarly, I’d ask that the explicit categories of the Lua course subpages be replaced with the {{BookCat}} template invocations. (FWIW, I’d volunteer to perform this task myself, via my ISbot robot; see luxo:ISbot, for instance.) For one thing, this will make the members listed on the eponymous category page dispersed across different letters (‘B’ for Background, ‘S’ for Scribunto/Lua, etc.), instead of all being grouped together under ‘L’.
— Ivan Shmakov (d ▞ c) 13:13, 21 July 2014 (UTC)
- I like using this template on Wikibooks, but here I would rather deploy {{CourseCat}} instead. JackPotte (discuss • contribs) 16:01, 21 July 2014 (UTC)
- Well, I’ve checked {{CourseCat}}, and it’s still a redirect to {{BookCat}}, – just as it was when I’ve created it a week ago. Personally, I have no strong preference for either name. — Ivan Shmakov (d ▞ c) 17:55, 21 July 2014 (UTC)
- Lua is done. -- Dave Braunschweig (discuss • contribs) 00:35, 22 July 2014 (UTC)
- All right, that's much better like that. I'll adopt it for the remaining Special:UncategorizedPages. JackPotte (discuss • contribs) 11:31, 22 July 2014 (UTC)
Fiction, popular culture[edit]
Are there any classes about fiction here? I mean fiction is not typical for a classroom, but Wikipedia outlandishly has articles about popular culture, so why don't we have lessons about Caillou or Teletubbies or Duckman or Mona the Vampire or etc.?
I mean it would be quite silly to see:
"Are you ready for the test on Teletubbies?"
But since Wikipedia has articles on everything don't you think we should have lessons on everything as well?LalalalaSta (discuss • contribs) 04:29, 21 July 2014 (UTC)
- Notability Wikipedia operates on guidelines of notability so not everything is supposed to have an article. Have you checked Wikiversity:FAQ? —Justin (koavf)❤T☮C☺M☯ 04:43, 21 July 2014 (UTC)
- I know about Wikipedia's notability. What I'm talking about here is Wikiversity's inclusion of popular culture. Why do we not have articles on many important subjects of popular culture, such as Teletubbies? LalalalaSta (discuss • contribs) 04:57, 21 July 2014 (UTC)
- I'm not sure if this is a real request or just a troll. If it's a real request, be bold! Start a course on television and film studies or children's television. But be careful to contextualize your lessons on the educational aspects of the topic. Previous efforts on this type of content have not had educational objectives and were either speedy deleted or proposed for slow deletion. -- Dave Braunschweig (discuss • contribs) 13:01, 21 July 2014 (UTC)
Tech writing community still active?[edit]
Hello everybody,
My name is Andrew Pfeiffer and I'm still one of those "emerging academics". I have many passions and I am very impressed by all the good work going on here, but I'm not quite sure how to get started. . . My immediate interest is technical writing (also, basic computer science and English writing). Your tech writing course looks serious and helpful, and also decently well-viewed, but it seems that no one has modified any pages in quite some time. If there are still people involved in teaching or taking that course, where would I find them and get in touch with them?
Thank you for your suggestions! —Andrew Pfeiffer (discuss • contribs) 16:12, 22 July 2014 (UTC)
- Welcome Andrew! You can try posting something on the talk page of pages you are interested in, or here in the Colloquium for a wider call. But you are correct that the technical writing course does not appear to have anyone currently maintaining it. That gives you the opportunity to be bold and contribute wherever you'd like. My personal recommendation would be to find a page or course that you know a little bit about, and when you look at the content already here you say to yourself, 'Someone should fix that.' You're that someone. Jump in and make it better. And ask questions whenever you have them. -- Dave Braunschweig (discuss • contribs) 01:43, 23 July 2014 (UTC)
- Many thanks, Dave! Looking forward to talking with you all soon. Andrew Pfeiffer (discuss • contribs) 21:58, 23 July 2014 (UTC)
Promote wikiversity[edit]
I would like to get some feedback on an experiment we started on the Dutch wikiversity. The Dutch wikiversity is still in beta. To promote the wikiversity we contact researchers and other people with the question if they want to help writing an article about their research, etc. on the wikiversity. On the Dutch wikiversity we get very good response on this approach. A lot of people we contact only know wikipedia and they are very happy when somebody is interested in their work. We now have a list of people that we can contact every 2 (?) years and ask them what's keeping them busy. See the following link (Dutch) for more info:. Is it possible to start a similar initiative on the English wikiversitity? Did somebody already try something like this. I would like to hear their experiences. Regards, Tim Ruijters, Timboliu (discuss • contribs) 23:35, 30 July 2014 (UTC)
Vision on wikiversity[edit]
On the Dutch wikiversity a lot of content will be deleted. Some few months ago some wikipedians joined the wikiversity and started to 'clean up' the pages. On the one hand I'm happy that after all those years (I started with the Dutch wikiversity in 2011) people are going to help me improve the wikiversity. On the other hand I'm afraid that with the clean up they will destroy a lot of content. In my vision the wikiversity should be a place with less rules. The learning groups should have a lot of freedom to decide how they want to learn. But in the current situation the custodian disagrees with me. What can I do? I contacted Wikimedia Nederland to help me with this and today I also contacted ArbCom. Can I also get some international support? Who should I contact? Timboliu (discuss • contribs) 09:31, 1 August 2014 (UTC)
- Did you make this already public in the specific wiki itself? (E.g. at your colloquium), ----Erkan Yilmaz 03:42, 2 August 2014 (UTC)
- Yes I did. I also tried to asked the Dutch Arbcom for help. They didn't acccept my request because they only arbitrate about issues concerning Wikipedia. I now contacted the Wikipedia helpdesk. Timboliu (discuss • contribs) 08:33, 3 August 2014 (UTC)
- This wouldn't be the first time a mob from a Wikipedia has decided to gang up on a smaller sister project and trash it. Afaik it hasn't happened on the English projects yet. (Fortunately en.wn established very early that outsiders can express their opinions non-disruptively but don't get a "vote", to the extent there is such a thing as voting, on matters of project policy; and we've long been the only non-Wikipedian sister project that has its own ArbCom.) --Pi zero (discuss • contribs) 17:06, 3 August 2014 (UTC)
- Wikipedia is irrelevant to Wikiversity, you will not be likely to find help there. (But maybe!) However, en.wikiversity has developed traditions that only rarely result in deletion.
- First of all, site mission is not just "educational resources," which then raises issues about neutrality, etc., but also about "learning by doing." So there is a place for what might be called "student exercises." We had a 7-year old editing for a time. He was Learning to write, and to write wikitext, learning to cooperate with a larger community. I moved his work (the writing of a 7-year old! actually precocious) into his user space, and that is a generic solution that will almost always avoid deletion. Someone has some fringe point of view, say. There is almost zero problem with expressing that in their user space.
- In mainspace, subpages are allowed, so there can be a top-level resource that is *rigorously neutral,* it should be possible to find complete consensus regarding it, and under that there can be subpages with essays, original research, and it is possible for there to be, as with a brick-and-mortar university, "sections," i.e., classes on the same topic, taught by different individuals, who may have their own points of view.
- The key is to find ways to avoid conflict, while still maintaining neutrality and organization.
- This is very different from the Wikipedias, which have one page per topic, and where editors with different points of view may end up battling for domination of the single page. Wikiversity is neutral through inclusion, whereas the Wikipedias tend to seek neutrality through exclusion.
- Often, here on en.wv, scholars have been undisciplined as to how resources are presented. There are pages that are strangely titled, disconnected with other resources, etc.
- Wikipedians often take a look at Wikiversity and think it is a mess. I prefer to say that Wikiversity is an opportunity to organize study, discussion, and knowledge. In that process, all points of view are welcome, as long as readers are not misled, i.e., a controversial idea should not be presented as if it were mainstream. Sometimes if I find a top-level page that is not neutral, I will move it to subspace, creating a neutral top-level page that links to it as an attributed essay or "editorial," i.e., opinion piece. I can't recall when this ever created conflict.
- Discussion of topics is discouraged on the Wikipedias. It can be encouraged here, and sister wiki templates can be placed on Wikipedia articles or article talk pages specifically to invite participation in learning about the subject. I have only seen this opposed when there was a dominant faction on Wikipedia that didn't want anyone learning about other points of view. Ultimately, though, these sister wiki links are encouraged by guidelines and the exclusion will not prevail if users stand for it.
- If a page has been deleted without due process, if there is any doubt about it, you should be able to request undeletion for review, and then move the page to user space, either the user space of the author, or your own -- unless, of course, when seeing it, you recognize it as completely useless, with an author who hasn't edited for years, etc. --Abd (discuss • contribs) 18:35, 3 August 2014 (UTC)
- Abd, thank you very much for this information. In made a link to this discussion on our forum. I will also send a mail to 'Wikimedia Nederland' if they can facilitate a discussion about the next steps for the Dutch wikiversity. I think we need something like a ArbCom or commission to re-evaluate the current guidelines. Do you agree this is a good idea? Do you have other suggestions? Timboliu (discuss • contribs) 19:17, 4 August 2014 (UTC)
- You forgot to tell that your field of work on Wikiversity is nearly identical to your field of work described on your Linkedin-page. In fact, you use Wikiversity for selfpromo, networking and notebook. For a long time the Dutch Wikiversity was identical to Timboliu. The Banner (discuss • contribs) 21:17, 4 August 2014 (UTC)
- This "you forgot to tell" is uncivil, accusatory. Yes, Timoliu has been active on Beta Wikiversity. There is no Dutch Wikiversity, as such, yet. Wikiversity is not Wikipedia. If Timboliu has a conflict of interest, or potential conflict, he should, of course, disclose it. We do not, however, on en.wikiversity, reject expert content, for example, because a person is employed in a field. Pointing to a Linked-in page could be a violation of privacy policy. "Self-promo" is not prohibited, per se, here, we do want COI disclosed, it's a WMF-wide policy that we have not opted out of. I'm seeing a high level of activity as being claimed to be some sort of problem.
- Bottom line, Timboliu is welcome here, as are all. We will watch his work; the Companies project he set up could have led to certain problems, but we have addressed those, at least largely, and, in particular, the individual company files, the few created, have been moved to subpages of a learning project; our goal is to welcome participation, to find ways that users can do what they want to do, within what works for Wikiversity overall.
- It is a pity Timboliu here tells only half the story, the part that fits him. He is playing Calimero, but doesn't tell what problems he causes. The "content" he mostly added has nothing to do with Wikiversity, but is in some way nothing more than a personal sandbox on a large scale with notes and other personal stuff. Wikiversity needs a very liberal attitude towards contributors, I fully agree with the things Abd above here says. But in the past years Timboliu has had multiple comments from experienced users that how he is acting is not the way what Wikiversity is for, but he choose, multiple times, to ignore that and keeps on adding crap to the wiki. It doesn't even look like the English Wikiversity, or any Wikversity, at all. There are hundreds of pages without any content, without any goal/meaning. There are a lot of pages with copyright issues as texts have been copied from other websites. There are a lot of pages which try imitate Wikipedia articles, Wikisource pages, recipes, news articles, using pages to blame innocent people from crimes, a list of films he thinks are good, pages without context in the middle of nowhere of what the title says such as "Plant on the corner of the streets Fonteinlaan, Helenalaan" with only a picture on the page, pages that describe how businesses can become a sponsor of a page on Wikiversity, advertisements, pages with "Wikiversity can do paid activities for you", this goes on and on and on.
- The most basic guidelines we have now are set up together with him!
- In the mean while we (Wikimedia Netherlands) are trying to set up an education programme. With the current pages, it is in no way possible to set up a trustful Wikiversity in Dutch with the current situation. Also pages about people who do not want to have a page about them have been created by Timboliu. And that is why currently users are working on solving the problems, as complaints from external people have been filed, copyright issues have been reported to us, etc. All these things he doesn't tell you, why he doesn't? Romaine (discuss • contribs) 21:59, 4 August 2014 (UTC)
- Romaine, anyone who is active will run into certain problems. However, what I'm seeing claimed above amounts to undisciplined page creation. My guess is that, as with en.wikiversity, there have been few guidelines as to how to create pages and where to create them. Certain material is only appropriate for user space in a Wikiversity project. Many other pages are created as mainspace pages that really are clutter, created in that way, but as subpages, can be fine. Stubs can be useful in some contexts, but, at the same time, there is little work invested in them.
- Wikiversity is conceptually very different from Wikipedia. We have no notability requirements of universal application. We are moving toward a concept that to be a mainspace top-level page, the topic must be at least somewhat notable, but that is not rigid. We are moving toward classification of pages by topic, pages that used to exist as free-standing mainspace pages are now subpages of an overall learning resource.
- We do not want, on en.wikiversity, the severe problems that can be associated with biographies of living persons. There may be restricted exceptions. Basically, to become a Dutch Wikiversity, and not just a language section on the Beta incubator, you will need to develop some guidelines and policies that will encourage Wikiversity growth. Wikiversity can be, in general, a place where people discuss topics that are covered on Wikipedia. There are many possible problems to be resolved.
- Again, the accusatory tone is offensive. He asked about deletion, and it was explained to him how we manage to mostly avoid deletion here, except for blatant spam and vandalism and obviously problematic pages, which can usually be speedy-deleted. If Timboliu wants to keep a page that someone else wants deleted, why not move it to his user space, until and unless there is more support for it in mainspace? If there are pages being created about living individuals, that are being objected to by the individual, that's definitely a problem, wherever they are, and, absent community-approved guidelines or some necessity, should stop. For behavior like that, Timboliu should be politely warned. However, Beta has rather weak governance. I'm going to encourage Timboliu to cooperate, it will be better for everyone. So, toward that end, please stop accusing him of bad behavior, and start inviting him to collaborate toward creating a Wikiversity that all can be proud of. --Abd (discuss • contribs) 02:40, 5 August 2014 (UTC)
- Romaine and Abd, thank you for your reactions. The concerns of Romaine are partly true. In 2011 I joined the Dutch wikiversity project. For three years the contributors could be counted on one hand. Occasionally I received some feedback but after a discussion no consequences were taken. I thought they agreed on my opinion or accepted my way of working. What I also like about the wikiversity is the lack of rules. The reason I stopped contributing to Wikipedia is because every contribution I made was deleted. On the wikiversity I had a lot of freedom. I understand that in this process I made some pages that are lacking content or don't meet the high quality standards of Wikipedia. Regarding the pages of people. In my opinion in a learning process it can be very important to know what the background is of the people you are learning with. Not all people have a user page, so with the approval of the person involved I have created a page of that person. In 90% of the cases people like it when somebody is interested in what they are doing. Anyway... I understand that wikimedia is a community project and that I don't have the freedom anymore that I had the last three years. I hope the Dutch community grows so that we hear more opinions when making decisions. I also hope the Dutch wikiversity can get some help with setting up some guidelines and best practices. Timboliu (discuss • contribs) 05:48, 5 August 2014 (UTC)
Companies and markets[edit]
On the Dutch wikiversity we're discussing about the question whether learning project companies and markets has educational value. As a business consultant I see the value, but are there any guidelines regarding educational value on the English wikiversity? Timboliu (discuss • contribs) 16:41, 3 August 2014 (UTC)
- Almost any topic can have educational value. There can be problems with "promotion," but those are soluble if there are users willing to cooperate. It's difficult to discuss this in the abstract. Any more specific examples? --Abd (discuss • contribs) 18:40, 3 August 2014 (UTC)
- A specific example is Business/Companies/Agora. I think this is a very interesting company because the crowd decides where the company is heading. On the Dutch wikiversity I created a similar page. This page is nominated for deletion because it has no educational value. I tried to explain that on the wikiversity you should not evaluate one page but the learning project, but this argument didn't convince the person who nominated the page. We have a custodian who will decide which pages will actually be deleted, but maybe someone of the English wikiversity can help me with good arguments? Timboliu (discuss • contribs) 19:02, 4 August 2014 (UTC)
- We do on discuss wether or not educational material related to companies has value, but the subject what is discussed is how Timboliu is doing that. Romaine (discuss • contribs) 22:03, 4 August 2014 (UTC)
- We changed the setup he had created here to avoid problems. He seems happy with that. What he had created here was like what is on Beta. And that's not going to fly. I could comment on Beta, but prefer to stay out of conflicts involving languages I don't know, and a local culture I don't know. To Timboliu, I recommend he keep personal copies, off-wiki, of anything important to him, and do accept some level of cleanup. Stubs that are nothing more than a link to a Wikipedia page can be deleted with little cost, they can always be quickly recreated, it is not worth arguing over! Timboliu, please learn to work with the community. At least some of the concerns they are raising are legitimate.
- So as to one issue mentioned, a company page is sitting in mainspace, with a little text, and an unclear purpose. Sitting isolated, it is not obvious what the purpose of the page is. Even if it is linked to or categorized with a learning project. If it is a subpage, it's obvious, it is part and parcel of what is above it in the structure. It is like Wikibooks, where book chapters are subpages. Rarely would someone nominate a chapter out of a book for deletion, if it was consistent with the rest of the book. If it was vandalism or offensive, sure! My opinion, though, is that it is better to leave undeveloped pages as redlinks in a supervisory page, rather than turning the link blue with a stub. The red calls attention to an opportunity to improve. The blue makes it appear that there is valuable content there, which is not the case with a stub. --Abd (discuss • contribs) 02:50, 5 August 2014 (UTC)
- Abd, thanks for the tip. I understand that working with the community is the key to success. I also believe that on the short term I have to accept some level of cleanup. For the long term I would like to initiate an local discussion about the next steps for the wikiversities. In my opinion in most countries wikipedia is growing up. It doesn't need that much effort to stay up-to-date. I think that we, as a community, could involve more people if we focus on some of the sisterprojects. I think the wikiversity can be a platform that can attract many new people to the active wikimedia community. I think we (especially) in the Netherlands need a clear vision and examples, like you give above. One of the members of Wikimedia Nederland asked me what should be talked about in London. I think a discussion about the vision (of sisterprojects) can be helpful. Abd, do you also think a discussion, with help from more experienced wikiversities, is needed? Do you think it's possible? And can you/ the English wikiversity help me to start an initiative? Timboliu (discuss • contribs) 05:48, 5 August 2014 (UTC)
- Develop a vision of Wikiversity that will work for all (or nearly all) users. We tend to look only at our own goals. Broaden your perspective.
- Then work to create the vision as a shared goal, shared by many users.
- Do not merely "accept" cleanup, create it. Start by cleaning up your own contributions. Move pages to better locations, if you want to keep them, and drop a speedy deletion template on the redirects, if they won't be needed. Request speedy deletion of your own pages if there isn't content there worth keeping in other's faces. Move anything likely to be controversial, but that is educational *for you,* into your user space, or create neutral structure in mainspace to contain it. We do all of this on en.wikiversity.
- Having cleaned up your own act, assist others in the same way.
- Listen to and respect warnings. Understand how they may be "right." Then seek consensus before proceeding contrary to a warning. That can be difficult on a small wiki, but you can clearly stand for it.
- You experienced a freedom on Beta Wikiversity that was missing from Wikipedia. That's a good thing, in itself, but when freedom becomes license, it can go too far. A wiki is the Commons, and if we create a mess there, it affects everyone. Hence the focus I'm suggesting on organization of content. A goal: when someone goes to Random Page, they will see, at the top level in mainspace, a recognizable learning resource, or at least the core of one and an invitation to participate, neutrally presented, not some fringe idea or promotion.
- That's a goal. It may never be perfectly realized, but it is possible to approach it.
- Here is an example: See Wikiversity:Organization/Examples/Arduino, I just created.
- If you look around any Wikiversity, you will find many opportunities to organize. First of all, seek to develop some consensus about how to organize content. You can Be Bold and go ahead, but page moves, the basic organizational tool, can create some level of mess for a custodian to clean up, so do take some care in advance. Announce what you plan to do on a resource talk page or on the user talk page of an author, until you are sure that your general plan enjoys reasonable consensus. You may, however, always organize your own contributed content better! --Abd (discuss • contribs) 14:59, 5 August 2014 (UTC)
Suggestion: Plasmons and polaritons[edit]
In discussions I've been having at the Science Refdesk about the quantum vacuum fluctuation drive, and the energy and momentum of refracted light, and which metals have a silvery color; also in matters of reflection and photonic computing and I see here even cold fusion ... over and over the topic of plasmons and polaritons is central to an understanding of physics. I feel like it should have been something taught as a basic concept even in high school, but really, I had no introduction to it even in college. So I think it would be a really good thing if people working on physics here could work out a curriculum that recommends the best order and scope of material to learn to understand these things well, and ideally, full course materials on the topic with concomitant improvement of the Wikipedia articles. Wnt (discuss • contribs) 16:52, 7 August 2014 (UTC)
- As with anyone, you are welcome to create a learning resource or project here. Yes, we have material on many topics, "even Cold fusion" -- which is a legitimate topic in scientific journals now, as it has always been, just not all journals! Plasmons are an important topic in contemporary physics, and Wikipedia is not good at covering what is relatively new, or even what is older but still controversial, such as cold fusion. However, Wikiversity allows original research, and has no notability requirement. There is still a neutrality policy, but that can be handled here through inclusion rather than exclusion. There are not many "people working on physics here," but some with some interest in and knowledge of physics, and we can and will cooperate with new projects, especially supporting them in remaining neutral as WMF policy requires, without deleting them! --Abd (discuss • contribs) 20:50, 7 August 2014 (UTC)
- I am willing to prepare at least a lecture/article on either plasmons or polaritons, perhaps both, but Wnt may not wish such in my usual style. Let me know. --Marshallsumter (discuss • contribs) 20:45, 20 August 2014 (UTC)
Question[edit]
Is it possible to import a deleted Wikipedia article? Does it need to be undeleted over there first? The article that I have in mind is w:The Benefits and Detriments of an Australian Bill of Rights, which was said at AfD to be original research and an essay, something that we may accept. James500 (discuss • contribs) 19:12, 7 August 2014 (UTC)
- see Wikiversity:Wikimedia Garbage Detail
- as I see, the article is deleted, so you'd need to ask a WP admin to undelete it (or perhaps you are lucky and a cached version exists in a search engine already), ----Erkan Yilmaz 19:40, 7 August 2014 (UTC)
- It is best if the article is undeleted; anyone may then download an export, which may then be imported here. If there is only one author, this isn't necessary, the wikitext can be copied with attribution, but if there are multiple authors, import will preserve the edit history here. Import is a custodian right here. We may decide to change that, to allow any autoconfirmed editor to import, but that hasn't been done yet. (There is a separated user group, Importers, but it is empty and I don't see the right to create membership as existing here for any user group. It might take a steward.)
- There are Wikipedia administrators who will generally undelete and userfy an article on request. In some cases, if an article was speedy deleted, undeletion may be routine, but undeletion into a Wikipedia user space is not controversial, normally. If an article was deleted from a deletion discussion (AfD), then ordinary undeletion is not advisable without a request at w:WP:DRV, and if you can't read the article, even knowing if this is a good idea can be difficult! So request userfication, get the article here, and you and anyone else may work on it. Once here, the Wikipedia copy can be speedy deleted with reference to the import here. Wikipedia articles are not necessarily designed for WV mainspace, so having an article imported to user space here is a simple way to start. It can then be moved to mainspace when ready. --Abd (discuss • contribs) 21:12, 7 August 2014 (UTC)
- See w:Category:Wikipedia administrators willing to provide copies of deleted articles and w:Category:Copy to Wikiversity The undeletion policy, as is common with many Wikipedia policies, does not list the exceptions, but undeletion to user space for some legitimate purpose is the norm, not the exception. In addition, an admin may agree to email the exported file, if there is no reason to not do this (such as copyright violation, perhaps.) Looking at the AfD, this is a prime candidate to be transwikied to WV. Wikipedians, in general, simply are unaware of the Wikiversity possibility. --Abd (discuss • contribs) 21:30, 7 August 2014 (UTC)
- AIUI, both plain and transwiki imports open the possibility of accident “history merges”, – which seems like a thing that’s very hard to get undone. That’s the very reason these tools are only available to a limited set of users. — Ivan Shmakov (d ▞ c) 19:05, 8 August 2014 (UTC)
- Of course. This would be the most common problem: A page is imported that already exists, i.e., same page name on the source and target wiki. The import command is not designed for safety; properly, it would warn that a page merge is about to be done, but, in fact, it goes ahead and imports with no warning, and this cannot be easily undone, it takes some possibly tedious sysop work to fix a problem, if the result is a problem. (Usually it would not be a major problem, it's only difficult to fix if the source and target pages both have many revisions.)
- This is easily avoided if the importer always imports to an empty subspace instead of attempting to import to the ultimate target space, which may be complex (like mainspace). The import command allows specifying a root target space (the default is no root space, so the pages will simply be named on the target wiki as they were on the source). So, yes, a rogue importer could do a lot of damage, like any rogue sysop can do with move/delete vandalism (really the same problem.) But if an importer has clear guidelines to follow, and follows them, not a big risk. --Abd (discuss • contribs) 19:55, 8 August 2014 (UTC)
- A variant on the problem (that may be more difficult to address) may arise if templates are included in the export. Those templates, on import, might be history-merged, and, again, the source information is lost. It's really a major MediaWiki bug, lack of full revision tracking. *There is no record of source for the revisions in the database.* It is effectively assumed that all edits were made to the page they are assigned to. There is a Bugzilla report on this, I couldn't find it right now, but there is very little interest, so far, in fixing the problem. --Abd (discuss • contribs) 20:10, 8 August 2014 (UTC)
- There was mediazilla:57490. This wouldn't be all that hard to implement, I think. The exported XML files already include siteinfo, just not on a per-revision basis. I'm thinking, since most revisions in the database aren't imported, it might be more efficient to store that data in, say, log_search than to add another field to revision. It might be possible to get support for merging the patch by saying it will help wikis comply with licenses. I know that sometimes I've just imported the most recent revision of templates to my wikis, which is probably a license violation. Leucosticte (discuss • contribs) 21:13, 8 August 2014 (UTC)
Technical writing Robs Screengrab[edit]
I came across the resource page Technical writing Robs Screengrab while having some fun fixing up resources using Random. It only contains a file which has been deleted per the resource's creator's request and two categories. Unless someone knows what this file is about perhaps it should be put up for deletion, or speedy deleted. The creator was last active in 2012 and contributed copiously to technical writing. Suggestions? --Marshallsumter (discuss • contribs) 20:54, 8 August 2014 (UTC)
- Okay, educational opportunity. This is an obviously useless page. It has no incoming links, and no history of any content other than being a link to a deleted photo. The photo was deleted on author request, but then undeleted immediately because of a link or links to it. That undeletion was an error, my opinion, but that was five years ago. Basically, this page has no raison d'etre for Wikiversity. Nobody will ever miss it. Hence the obvious thing to do: pop a speedy deletion template on it.
{{delete|orphaned page, photo deleted, no apparent purpose}}
- Folks, when you see a page that is clearly useless, don't be shy to request speedy deletion. This is how we all participate in cleaning up Wikiversity. If there is some speculative purpose, you might try Template:Proposed deletion which provides more time. Don't worry, if you request deletion improperly, it takes custodian confirmation, and if it ever turns out that a page is improperly deleted, that, too, can be fixed. We don't want people to return after being away and finding that something important to them is mysteriously missing, but that's not going to happen here, at all.
- I checked and a deleting custodian will check to see that there are no incoming links. In this case, there is one. This report! The custodian will look at page history, as I did, to verify that there wasn't something useful in history.
- We all can do this work, and we should not just leave it to custodians. This is our wiki. Thanks, Marshall. --Abd (discuss • contribs) 22:32, 8 August 2014 (UTC)
Request for importation[edit]
Further to the discussion above, could a custodian please import the page now at w:User:James500/The Benefits and Detriments of an Australian Bill of Rights. James500 (discuss • contribs) 01:53, 9 August 2014 (UTC)
Done - Imported to Australian Bill of Rights. In the future, you can post import requests at Wikiversity:Import. -- Dave Braunschweig (discuss • contribs) 12:40, 9 August 2014 (UTC)
A lot of pages are removed from the Dutch wikiversity[edit]
For me today was a sad day. A lot of pages, I made, were removed from the Dutch wikiversity. The only comment was 'no educational value'. I think this practice is not in line with the vision of wikiversity. Can I do something to get the content back? Or do I have to accept the removal? I asked the custodian if it is possible to move the content to my name space. The custodion is considering this request. Timboliu (discuss • contribs) 18:35, 14 August 2014 (UTC)
- First of all, be clear that this is not from the "Dutch Wikiversity." There is no Dutch Wikiversity. There is a Dutch project on Beta.wikiversity, Dutch main page.
- You knew that some pages were proposed for deletion, because you asked about it here. I did discuss the general situation with deletion, but this discussion would not necessarily have been seen on Beta. Did these pages have speedy deletion tags on them? If not, standard practice was violated. So, looking at beta.wikiversity.
- I see that Romaine did, indeed, delete a boatload of pages, 1551 of them. That started on July 30, with author request pages. It got intense yesterday and today, with "no educational content" deletions, or, as an example,
- 00:35, 13 August 2014 Romaine deleted page Wat is Agile? (not suitable, too little educational content)
- Without seeing the pages, it's difficult to judge. That much deletion, was there discussion? For there to be so many pages to be legitimately speedy deleted at one time would be very unusual. I see some discussion on User talk:Romaine.
- I see that you requested content be moved to your own user space. However, Timboliu, you could have done that yourself. It was easier for that sysop to delete. The principle that Romaine enunciated, in my opinion, was improper: he made himself the judge of whether or not content "complied with the principles and guidelines of Wikiversity." It's not terribly surprising, custodians sometimes do that.
- If you wanted to move content to your own web site, you could have done that at any time while the pages were visible, using Special:Export, without any controversy and, in fact, nobody would have known you were doing it. You can do long lists of pages at once with that command. Now, at this point, to grab the pages requires a custodian undelete a huge number of pages, a lot of work. And what I saw of many of your pages was that they were hardly more than stubs.
- So, your question: can the pages be recovered? Yes. But what pages? Any custodian can undelete pages, and they could be moved to your user space, but it is now much more work, because you didn't handle it yourself when you could. You will have to convince a custodian to do that work. Do you have any idea how much work it is to undelete 1500 pages?
- So who is Romaine? I was not familiar with the name. So, I found [1]. Romaine is a Wikipedian. I see who voted for the user. Aside from you, Timboliu, Romaine had votes from prominent Dutch Wikipedians. Wikipedians, as a general rule, have little understanding of Wikiversity. Romaine wanted the tools to clean up Beta. This was trouble, coming, it was totally visible.
- I am *not* saying that Romaine was wrong. However, I regret that I did not advise you to export those files. I had no idea that the scale was that large. Nowhere did I see any discussion that specified how much was involved.
- I see that Romaine created a deletion process that has the deletion decision be made by a custodian. Romaine. The user bypassed and did not use the standard Beta speedy deletion template, which may be removed by any user, and then specific discussion is required. I see there was discussion of deletion at the Forum page -- that Romaine created last month, bypassing the Babel Beta community discussion page -- and you clearly realized deletion was about to occur. You were advised to move materials to your own computer. Did you do that? If not, why not? Did you ask anyone for help rescuing the content? Given a list of pages, anyone could have exported the lot in a few minutes. Yes, over a thousand pages in a few minutes.
- Procedurally, Romaine's process is defective in that it gives custodians superior powers of assessment, very much not what wikis normally do. Speedy deletion is designed for uncontested deletions. There are procedures for mass deletion of content, where the community discusses it. There was, indeed, some discussion of deletion on the [2], a page also started by Romaine. I would not call it a formal deletion discussion, there was no specification of pages to be deleted.
- If there were a conflict like this here, with substantial numbers of Wikipedians coming here to influence and control Wikiversity policy, we'd have some difficulties! I think existing custodians would hold the line, but it could be very tough.
- Romaine had some semblance of consensus to do what was done. You did not actually object. Instead you asked for Romaine to do what you could have done yourself.
- However, Romaine does not understand the complete mission of Wikiversity, and has a narrow view, similar to that of many Wikipedians with little experience of Wikiversity. Romaine thinks of educational materials as "books, readers, and the like" used in schools, given to students. That is part of what can be here (though books belong on Wikibooks). Romaine has missed "learning by doing," which was part of the original mission, and has missed what happens in university seminars and the like, where discussion takes place, and has missed original research, which was explicitly allowed here, from the beginning.
- What you created, Timboliu, however, pushed way beyond some reasonable compromise. Had you done this in your user space, it probably would not have been a problem. You could have moved it there immediately, once you realized there was an issue. Wikipedians, however, think very differently about user space. Everything on Wikipedia is intended for ultimate usage in the encyclopedia, or as an essay about the encyclopedia. User space material is deleted all the time on Wikipedia, especially if the material is considered not useful for the encyclopedic project.
- We allow broader usage of user space here. We allow user space to be used for almost anything with some educational purpose, even as writing practice for a student. So this was a setup for what happened. Wikipedians plus a user who was undisciplined about where and what he put in mainspace.
- I've been pro-active here. When I see users place possibly inappropriate content in mainspace, I move it to user space immediately. (If it's spam or other clearly inappropriate material, I tag it for speedy deletion.) Rarely does a user get upset.
- I see that a Dutch user looked at en.wikiversity and did not understand how it was organized. That's because it is, as it is, the product of a whole series of opinions about how it can be organized, plus many individual actions that have never been reviewed. We are gradually finding consensus on organization, and as we do this and document it, Wikiversity will become, quite naturally, more organized. It's improving, and, in the meantime, it's very usable. There are now some topics where Wikiversity shows up at the top of google searches. I have one in mind, and it's a very controversial topic, and controversy did show up here, and we handled it. Nobody was blocked or banned, there was no revert warring, and the result has been deeper content. I'm proud of that.
- So I'm sorry about what happened on Beta. But it was coming. I am a Beta user, but Romaine bypassed the normal Beta central pages, such as Babel. I got no watchlist notifications, as a result.
- Here, if something like that happened, you could go to WV:RFD and request undeletion. Normally, speedy-deleted pages are undeleted on request unless there is solid reason otherwise; they may then be discussed.
- But deletion on this level without a clear deletion discussion, we would very much avoid! Romaine is certainly not an experienced Wikiversity custodian! I have seen inexperienced custodians here start to delete material on their own initiative. It's strongly discouraged. Custodians follow the same process as anyone else. They may place a speedy deletion template, and wait for *another custodian* to delete. Sometimes custodians here place a proposed deletion template, wait for so many months, and then delete. If anyone removes the templates, our policy requires a deletion discussion before a page may be deleted. For efficiency, a user may place a deletion template and then a custodian may delete if deletion is considered uncontroversial. Custodians also delete spam and vandalism without any fuss. We had a case recently where alleged spam was deleted, and undeletion was requested and it was promptly granted for discussion.
- So content decisions are not made by custodians, but by the community.
- On the other hand, Dutch users are now organizing their own project there, and it's quite possible that the end result will be an improvement, and a move to an nl.wikiversity. Other-language wikiversities do not necessarily follow our model here. But we are a demonstration of what a highly inclusive project can be, and we are proud of it. --Abd (discuss • contribs) 23:39, 14 August 2014 (UTC)
- Abd, thanks for the info. It is a lot :-) I have made an export of the pages before the deletion. Maybe it is possible to import these pages in my namespace? I will read through the rest of the info at some later point. What I would like to do is to start a discussion about copy some of the best practices of the English wikiversity to the Dutch wikiversity (beta). Is it possible to decide on some international best practices? What could be my next steps? 80.65.122.82 (discuss) 07:16, 18 August 2014 (UTC)
- My opinion is that you could import those pages to your user space. Special:Import allows you to specify the base page. Before importing 1500 pages, though, obtain community consensus or at least consent, (i.e., absence of consensus against). I would suggest you create a user space resource with the pages being underneath it. That is, all those pages, that were strewn about Beta mainspace, would be subpages of some organizing project page, a little like what we have done here on en.wv. My question, though, is: "Why do you need all those pages as separate pages?" That makes it all cumbersome to maintain. If I'm correct, most of those pages were simply stubs. Stubs frequently add little more than eliminating a redlink somewhere, and redlinks are not harmful. In fact, eliminating the redlink can be harmful, people will assume there is useful content there, and waste time looking for it.
- Back up. Before acting, think about what it is you want to do, and describe it, and seek comment and consent before going ahead. That is not a requirement for creating resources, but you already have seen what lack of caution about community impact can do. Creating one possibly problematic resource, no big deal. Creating over a thousand of them, very big deal.
- Romaine has now deleted over 5000 pages.[3]. Romaine did create a deletion process page, I don't know how many of the deletions were handled through that page. On the face, it looks like Romaine has consensus for what is being done. This was the basic problem on Dutch Beta: nobody minding the store. It's a general problem on Beta. So Timboliu was very active and was not guided and restrained. Since Wikiversity is for learning by doing, as part of the mission, Timboliu is not to be blamed. The neglect was a community shortcoming. The Dutch community is now deliberately engaged, and creating a Dutch Wikiversity may now proceed. I'll see if I can lend a hand, I already see something to do, just a detail, wikignoming.
- Total contributions globally (includes deleted) for Timboliu would be 34,075. beta wikiversity contributions in X!s tool shows 32,449 deleted edits, 1,227 remaining live. I can easily see why Timboliu would be upset!
- Looking at current contributions display for Timboliu, I see this page: [4]. It is nominated for deletion. If it were me, I'd move the page to my user space and remove the deletion tag. This page should never have been in mainspace, as-is. This is Beta, and this is a specific plan for a specific project. I consider Romaine's process improper, for reasons given above (basically, it requires a custodian determination, very much un-wiki), but something close to it would be proper. My guess is that there are a lot of pages like this.
- This is what happens when we don't have a clear structure and guidelines for users. A huge amount of work can be wasted. That is common on wikis, but is it necessary? What actually happens is that we wait for User:Somebody else to create guidelines. Or Somebody else tries, gets it "wrong," and there is nothing but complaint, no consensus, and so nothing happens. Or Somebody else tries to register, and the SUL process locks her out. Too similar to someone else. --Abd (discuss • contribs) 14:29, 18 August 2014 (UTC)
- "Now, at this point, to grab the pages requires a custodian undelete a huge number of pages, a lot of work."
- Not necessarily, see "Restore pages:" here. I have the flags at beta.WV and can anytime restore. ----Erkan Yilmaz 16:34, 18 August 2014 (UTC)
Beta issues[edit]
The massive deletions went well over 5000. The deletion process that was set up was improper, as I stated above. But it was also not implemented neutrally. A special Dutch deletion page was set up, in Dutch. Not a bad idea, but .... it meant that only the very new custodian, Romaine, was aware of the proposed deletions. Many pages were deleted without that discussion (using speedy deletion tags and unknown standards; Romaine's discussion of content standards was far from what we'd expect for a Wikiversity, he did not understand how it was possible to have "original research" in "educational materials." For example. He may have something in mind like handouts in grade-school classes, and even there, students write essays, etc, may state their opinions.
As to the discussion, many pages were summarily nominated, a user would make a list of pages, all in one paragraph, no way to even comment on them individually. Most deletions consisted of a single nomination and no response, not even covering all the nominations of that user. At first, the user Timboliu questioned the nominations. It never made any difference at all, and he gave up. Some of his questions were legitimate. In one case I recall, there was actually consensus for a merge, not a delete. The custodian paid no attention to that. I think he just looked at what pages had the template on them and deleted them. No exceptions, so far. (There are now only a handful of pages left in the "Dutch Wikiversity.") There were other pages, recently deleted after I asked the custodian to recuse, which were clearly allowable content: a set of recipes, for example, that could have become subpages of a Cooking resource. It was actually proposed to transwiki them to Wikibooks. That indicated recognition of value! Transwiki can't be done by an ordinary user without the file being visible! They were deleted *early*, probably because I'd suggested to the user that he work with them to make them acceptable, or move them to his user space.
The custodian reached outside the "Dutch Wikiversity," threatened me with a block, because I'd described what happened. Civilly, I hope! He revert warred with me on my Talk page. He revert warred with me in my Talk archive, and threatened to protect it, and when I reverted him, he reverted and protected, classic "preferred version" protection. He not only deleted the only page that I'd moved out of the "Dutch" category, but also deleted a copy I'd made in my own user space, claiming that I'd hijacked the content and was frustrating the Dutch Wikiversity consensus.
And he filled page after page of repetitive screeds, evidence-free, claiming personal attack, trolling, etc. An ru.wikibooks sysop (who may have become aware of the situation from following up on this page) was attacked as a "hand puppet" of mine, the disruption is spinning out to the meta proposal to close beta, etc.
So ... if you care about the Wikiversity concept, if you understand how it differs from Wikipedia concept, how content policies need to be different on a Wikiversity than on a Wikipedia, or, in fact, if you differ with me on this, please take a look at what is going on on Beta. The problems arose on beta with some poor content because nobody was paying attention and guiding a user, *for three years*. And then Wikipedians -- that's what the global contributions histories show, generally *no Wikiversity experience* -- drop in with a meat cleaver.
Please start watching Beta. If you love Wikiversity, please make it available for other languages, and Beta is how that is done. Please participate in the development of Beta policy; we have already seen how the management of another wikiversity by a "non-Wikiversitan," causes severe conflict; and one of those moving to close beta, on meta, is an admin on another wikiversity where there has been some disruption, involved block, etc.
What do I mean by a "Wikiversitan"? I mean someone who understands and supports the concept of "learning-by-doing," coupled with academic freedom. Freedom is not license, there are limits, but Wikiversities set those limits in a very different place than encyclopedia or other purely document-oriented projects. Those projects delete or hide content, we organize it.
Some links:
- Babel, several sections starting with the linked on, have material on this. (Romaine revert warred with me there and I stopped, so some of my response has been, for now deleted.)
- Request Custodian Action
--Abd (discuss • contribs) 19:02, 25 August 2014 (UTC)
- JFTR, – from what I saw, I believe that Abd was “rough” at times in that discussion himself, and slipped a few unwarranted comments just as well. (I’ve got a personal communication from him, and hope to clarify my points privately somewhat later.) However, the claims of his opponent that what I see as merely an attempt to investigate the situation are, e. g. (and sorry for a bit out-of-context) “solely to disturb the wiki”, and that my own comments are “similar[ly] blockable behaviour” are disturbing by themselves. Disturbing enough to ask for someone of the Dutch MediaWiki community to check if such behavior is something that does indeed happen at the Dutch WMF projects.
- Unfortunately, I could hardly contribute substantially to Beta myself, as the only two (human) languages I’m more or less fluent in already have Wikiversities of their own.
- — Ivan Shmakov (d ▞ c) 19:26, 28 August 2014 (UTC)
- I have a private communication with an official of WM Nederland who was "not interested." It is likely that the "New Dutch Wikiversity Project" was started with discussions involving WM Nederland, that's been hinted at. I've seen some pretty iffy behavior from Dutch Wikimedians, globally, but I definitely don't want to stereotype the Dutch. However, a wiki culture can develop. I've discussed the matter of my "roughness" with Ivan, a bit, and there can be language and cultural issues over how "criticism" is handled. Where I am discussing semi-privately, i.e., on a user page, I may be more free with comments, but I was still pretty careful. What can easily happen is that intention is read into what I write that is not there. One of the Dutch users is recently blocked on nl.wikipedia and also recently came off a block on en.wikipedia. That's a fact I mentioned. I did not make it mean that this was a bad user, only a user who might get into trouble. The user was horribly offended, and this is the user that filed the Request for Custodian Action. But it was just the truth. And I might get into trouble, also, but the events on en.wikipedia with me -- which those users had already brought up, with detailed quotes, was about four years ago or so.
- That kind of projection of meaning, obviously, what happened on Beta, in fact. It should also be realized that the problematic Dutch user behavior on Beta was really only from three users, all of whom have voted to close Beta in the meta RfC over Beta. They have no Wikiversity experience and no familiarity with the Wikiversity concept, but are dead set on creating their own idea, by destroying everything else. Above, I wrote "a handful of pages." The Dutch Wikiversity organizing category now links to 10 pages. Seven of them are in the deletion category. One is the main page, one is the Forum just created for discussion in Dutch, and one is a page that I moved back into mainspace. There are a couple of pages being worked on in user space by the new users, and at least one page in mainspace without the Dutch category.
- I have created a resource in my user space from a set of pages that are facing deletion, it is at [5]. The top level page (linked to) wasn't ever finished up, but there is very substantial content underneath it. Some pieces of this resource were deleted. The guideline they are running with is that mainspace resources should be "finished." That, then, requires approval process, etc., or a big, controversial mess. They have no idea what they are getting into.
- Before working more on Dutch issues on Beta, I'm waiting for the smoke to clear. I have XML for much of the deleted work, and intend to study it. If there is much like the single page that I've rescued, this was truly a travesty. However, more likely, most of the content was of low quality. The problem then is simply how that user was treated. Wikiversities are intended for education, so, hey, take the student out back and whip him. Bad student! Instead of taking responsibility for failure to supervise, for failure to engage in the Dutch Wikiversity years ago. No problem with an intention to clean it up, but how that is done is crucial to the mission! --Abd (discuss • contribs) 20:31, 28 August 2014 (UTC)
- Well, did you notice that this
by destroying everything elsepart in the above almost literally doubles the complaints made against you there at Beta? My best guess is still that this somewhat poor choice of words – of all the parties involved – is one of the things which have spun the issue out of control.
- Otherwise, I do appreciate your intent to help with organizing their work. However, my feeling is that it’s first and foremost between them and the Foundation on what would be allowed – and what not – on the Dutch Wikiversity. I could very well accept that they may specify that “unapproved” works are indeed only allowed in specific namespaces, although to me, that would seem somewhat of a “backwards move.” (But so does a certain amendment to the Terms of Use made this June.)
- — Ivan Shmakov (d ▞ c) 22:59, 28 August 2014 (UTC)
- Thanks, Ivan; however, unless I count you as a 'complainer against me' -- which I don't -- there are three users who complained, the same ones who have voted to close Beta, and I don't know how to ascribe their complaints to that specific comment. At the time that I started to review the situation, I did not know the extent of the deletion. Nor did anyone, apparently, they didn't say what they were about. (And maybe they didn't know.) They did destroy (delete) so close to "everything else" that we might as well say "everything."
- Yes, they could decide to only allow finished resources in mainspace. They could decide any foolish thing -- or wise thing -- they choose. It's Beta, though, where global cooperation is encouraged, and expected, in fact. (They did not need a Dutch custodian for legitimate deletions, that was a fantasy. They thought the custodian's job was to judge content, which would indeed require Dutch for facility, but that's, properly, not the job on a WMF wiki. The job is to assess community consensus and serve it. Crochet.david regularly runs a bot on Beta, and could easily have run it to delete even 5000 pages, providing consensus were shown by the maintenance of a category.) Part of Beta's purpose is to consider global Wikiversity policy or guidelines; it's simply not been used for that, for years.
- (This could easily be done by creating a Draft namespace. For now, we use User space for drafts, but a Draft workspace would be like mainspace, i.e., the inclusion requirements, as to topic, and shared nature of the pages would be the same, but stubs would be allowed. I think it's unnecessarily complex, though, there are better organizational tools available, and it's quite useful to have brief stubs as subpages -- or sometimes as superpages --, which would break with a Draft workspace.)
- The issue is not out of control. Some Dutch users went ballistic, from much milder comment than that, and I'm not seeing that any suggestions for restraint from less reactive users appeared within the Dutch community. It certainly appeared from the global community, such as from you, and you know how they responded to that!
- Wikiversity sysops responded sanely. So those users voted to shut down Wikiversity. We see this kind of complaint all the time on meta: restart project because Bad Sysops. Sysops are Bad because they did not agree with us. But nobody was stopping the Dutch users from doing whatever they chose. All that happened was that there was some comment. There was, for example, no disruptive editing (in my opinion, the Dutch sysop, alone, disagreed about one page). A community should be so lucky as to see one "bad edit," if it was bad.
- What I'm seeing so far is that their approach is generating content, if any, at a glacial pace, as would be expected. Instead of taking a measured approach, as experienced Wikiversitans would follow, they went for the meat-axe first. Actually burning down the building entirely to start it from the ground up would be a better analogy. I've seen one Dutch page created, so far, not categorized in the Dutch Wikiversity, but categorized within a Dutch category. Brief, incomplete. And that is how Wikiversity resources get started. Sometimes someone just wants to learn about something and starts a resource and requests participation. That's often what Timboliu did. He described this as creating "learning circles." Or trying to! Very Wikiversitan, in fact, just with low skill, and part of our purpose is to train users in developing skills.
- So ultimately, my plan is to develop better guidelines for how to create a Wikiversity, on Beta, out of this experience. We have How to be a Wikimedia sysop/Wikiversity which is vague as to whether it's for here only, or is intended as global ("Wikimedia") . This should be developed on Beta. The "Dutch misunderstanding" about Wikiversity is widespread, not only "Dutch."
- Someone looks at Special:RandomPage here and freaks out! Yet if that's a piece of junk, that freaked-out user could easily tag the page for speedy deletion. It takes seconds, and then we would promptly review the page. The cleanup is ongoing, and we have some people who know what they are doing, but the task is enormous, because there were years of undisciplined pag allowed almost anything. We have the opportunity now to stand for the academic and educational freedom that they wanted, and stand for clear and useful organization as well. We can have both. It does take a community which actually cares about the overall educational goals of Wikiversity.
- As long as people only look at their own projects, it doesn't happen. That's just a fact, not a judgment of those people; the people working on their own projects create our content, for the most part. But we also need, collectively, something else.
- From the random page link above, I found UTPA STEM/CBI Courses/Intermediate Algebra/Application of Quadratic Equations. It's a stub, an outline of a course, created in 2010. See UTPA STEM. So we have a large family of pages, organized under a specific school project instead of by topic. Not good, long-term, my opinion, but this is what we have! There is no way that this community would support the deletion of the page, as radically incomplete as it is. Rather, someone will eventually organize this material, moving it where it is more useful for an educational goal, and inviting participants to fill out or even radically revise the content -- or to request deletion.
- At this point, there is a possible problem. There may be a blue link for that page, presumably, when it really is useless for reading. So at the next level up in the structure, the page should be listed in a section as proposed, undeveloped content, inviting participation. Otherwise it will waste user (student) time.
- So I fixed the problem with the page supra: UTPA STEM/CBI Courses/Intermediate Algebra.
- Once again, the development task is not generally furthered through deletion, per se, except where a resource serves no educational function at all (including the function of inviting participation). It is furthered through organization. We have broadly settled on this as a defacto community guideline. As an additional benefit, it avoids a major cause of wiki conflict: arguments over deletion. Then there are arguments over content, which we also address by neutral forking. --Abd (discuss • contribs) 20:03, 29 August 2014 (UTC) (discuss • contribs) 21:39, 19!
Wiki ViewStats[edit]
I was checking statistics on Wikipedia today and came across Wiki ViewStats. For those who are looking to improve the quality of Wikiversity resources, I recommend starting with the most popular articles. To see the list of popular articles, visit Wiki ViewStats, select Wikiversity at the top, and then select Overall on the left. Select different timeframes to see current and recently popular articles lists. -- Dave Braunschweig (discuss • contribs) 04:22, 24 August 2014 (UTC)
- This is amazing! We could recommend it in a special page (in addition to Wikiversity:Statistics), not MediaWiki:Sidebar, but something like the obsolete history banner, which I couldn't find here, or in Translatewiki.
- Crochet.david do you copy? JackPotte (discuss • contribs) 11:49, 24 August 2014 (UTC)
- MediaWiki:Histlegend, perhaps? — Ivan Shmakov (d ▞ c) 17:03, 24 August 2014 (UTC)
- It also provides a hit counter for specific Wikiversity pages or resources. Thanks, Dave --Marshallsumter (discuss • contribs) 02:33, 27 August 2014 (UTC)
I found the API at wikipedia:de:Wikipedia:Wiki ViewStats/API. I added ViewStats to the History page. I added it rather than putting it in place of Readers. Let me know if you'd rather have it replace Readers or be named differently. -- Dave Braunschweig (discuss • contribs) 03:20, 27 August 2014 (UTC)
i enjoy too much wikiversity it helps us the student to improve our knowledge in our study[edit]
--41.73.114.229 (discuss) 00:07, 27 August 2014 (UTC)
Very true. I always use Wikiversity for my studies, and I am successful in school. I hope you have a great time here! --Goldenburg111 21:31, 14 September 2014 (UTC)
Interactive labs[edit]
The resource Interactive labs has been untouched since 22 November 2007. I was about to put a deletion request on this resource but thought I might start here first. I've created some sixteen "labs" for the course principles of radiation astronomy that ask the student to obtain their own learning example from the web. Using an example, I then ask them to scientifically analyze it, record their results, and then compose a report. I use the resource Astronomy/Laboratories as a guide to constructing these "labs". But, the Interactive labs resource intends "to use Java applets as a framework." I probably or currently lack sufficient programming background to apply this. Comments, criticisms, suggestions, or questions welcome. Otherwise, I guess I'll put a deletion tag on the resource and see what happens. --Marshallsumter (discuss • contribs) 18:11, 27 August 2014 (UTC)
Rotlink Bot and Archive.is[edit]
Recently I have become concerned by the edits of User:Rotlink, which appears to be primarily bot-driven. Short explanation is that Rotlink searches pages, seemingly at random, and replaces dead links with links to various Internet archives. However, the user behind the Rotlink account runs archive.is, which now redirects to archive.today. Wikipedia has discussed this issue extensively at Wikipedia:Wikipedia:Archive.is_RFC. User:Rotlink is now blocked at Wikipedia for bot use, archive.is links are blocked from addition to Wikipedia, and all links to archive.is are being rolled back or replaced with links to other archives.
How does Wikiversity want to address this situation? -- Dave Braunschweig (discuss • contribs) 13:06, 28 August 2014 (UTC)
- The Wikipedia response was particular to en.Wikipedia. The Wikipedia RfC was run and closed in a way that many considered a problem. There was no actual consensus. What is clear, in fact, is that Rotlink is doing good work, and if you review the Wikipedia RfC, it was not shown that Rotlink was damaging content. I have suggested to Rotlink, on the Talk page here, that COI be formally disclosed, but it is well-known that Rotlink is affiliated with archive.is.
- Is Rotlink a bot? Meta policy allows unapproved bots if they do not edit at a rate higher than 1 edit per minute. However, humans can easily edit at 6 edits per minute, I've demonstrated 10, even for extended periods. Rotlink has 171,352 edits globally, which is huge, but there are ordinary editors, not bots, with more than 500,000 edits.
- Looking at recent global contributions, I see 50 edits from 12:32 to 13:14. That's 42 minutes. Very slightly over 1 per minute. There is no way for us to know if Rotlink is a bot or a human, perhaps using an automated editor. (The difference is that an automated editor presents a human with an Accept button, one approval per button push, a bot just goes ahead.) Bots, though, usually hit much higher edit rates unless throttled back.
- How many of the Rotlink edits created an actual content problem? Rotlink is now almost entirely adding links to archive.org. Archive.is or archive.today links are relatively rare, hard to find.
- In none of the extensive discussion of Rotlink has a bad edit been shown. Negative opinion about archive.is was often of the nature of "they might do something bad in the future."
- There is also RotlinkBot which has bot status on 6 wikis, is blocked on more than that.
- Rotlink demonstrated that you can follow w:WP:IAR Wikipedia Rule Number One, causing no harm at all, and be blocked or banned. That RfC failed to implement what it demanded: a blacklisting of archive.is, and a global blacklisting request also failed. Essentially, the active core wants one thing ("obey our authority"), the real community actually wants better content.
- AFAIK, Rotlink has violated no Wikversity policy. If we think that Rotlink is harming Wikiversity, policy or not, we may request Rotlink stop, and warn if it continues, and block if the warning is ignored. But it would be entirely contrary to our traditions to request that beneficial behavior stop.
- We don't want unapproved bots running, for good reason. However, this user, bot or not, has, for a long time, only done good work. If the account is a bot or we think it is, we might consider approving it. Is.wikipedia apparently did. Pl.wiki actually gave it the flood right.
- The issue with unapproved bots is flooding, overwhelming the capacity of the community to review edits. Rotlink is not doing that here. However, after checking a lot of Rotlink edits, I'm not doing it any more. Rotlink has been assigned Trusted status on some wikis, and it's no wonder.
- I welcomed Rotlink's work on the user talk page. I suggested specifying possible conflict of interest, per WMF policy. However, there is no COI with the vast majority of edits, in recent edits, some time ago, I think I found only one archive.is link. This is trivial and actually harmless. Anyone may communicate with Rotlink, and the user has email enabled. Rotlink was not notified of this question, and I'm not doing it, because I consider it a waste of time. A formal warning would be necessary before taking action. --Abd (discuss • contribs) 14:49,)
- Dave, first of all, policy exists to serve the project and users, not the other way around.
- We have the same policy here as meta, look at the fine print, my emphasis:
- Bots running without a bot flag should edit at intervals of over 1 minute. Once they have been authorised and appropriately flagged, they should operate at an absolute minimum interval of 5 seconds.
- Our policy governs local editing. The 1 edit per minute approximate rate of Rotlink, when active, is global, not local. Rotlink is not close to the margin here, but far from it.
- This discussion is not a warning of the user. This is a request for Wikiversity comment. My opinion is that no user should ever be blocked if their editing is not harming the project, and particularly if it is benefiting the project. This is an application of a rule that we have not formally implemented here, but it was made Rule Number One on Wikipedia, and that, in fact, was merely common law.
- I did do more research on this, including looking at links to archive.is, but I did not report it, since my response was already long. Since Dave brought it up, here is what I found:
- Rotlink, from CA, has 293 edits to this wiki.
- There are 37 links to archive.is on en.wikiversity.[6] One was added by Marshallsumter.
- 22:23, 26 August 2014 added three links to archive.is
- 14:54, 4 January 2014 added one link to archive.is.
- 17:18, 22 August 2014 added one link to archive.is
- that leaves 31 links, and unless I missed something, all were added in October, 2013.
- Archive.today has 4 links on en.wikiversity, none were added by Rotlink.
- There is no problem. It appears that almost all of the activity of Rotlink is adding archive.org links. Dave, I would guess that you saw two edits, listed above.
- What I have not researched: were there archive.org links available for those four pages linked to archive.is? My guess is not, but that's just a guess. And is it worth the effort to find out? How will this benefit Wikiversity? I've put two hours into this because Rotlink is benefiting this project and I don't want it to stop. I don't really care if links are to archive.is. There are still 16,292 (just now) links on en.wikipedia to archive.is, almost a year after the RfC was closed. The RfC had it that there were 10,000 links when it was filed. Now, I know that the number of links went far higher than that, it went to over 30,000. Links stopped being added because an Edit Filter was written to prevent ordinary editors from adding links. That was not consensus, it appears it was unilateral.
- So the question for the community. Rotlink may not speak English, and the user may not have time to bother with separate wikis, he's working on hundreds of them. Do we want to put the user through some bureaucratic process to get an approval that isn't needed by our policy, for the edit rate here? I've looked at a lot of Rotlink edits, maybe hundreds of them, and every one has been good. That was the strong report on the en.wiki RfC. Where Rotlink is blocked, it is not for bad edits!
- There is no harm in allowing Rotlink do what Rotlink does, as long as the edit rate here is below 1 per minute, which, the way that Rotlink operates, it will always be.
- If we don't want Rotlink to edit, we can block the user. Here is what happens when Rotlink is questioned. Nothing. The question raised was an esoteric one. Rotlink fixes dead links. Wiktionary quotes sources, that may include dead links. The user complaining thinks they should be left as-is, but when those materials were created, the links were live. Insisting on maintaining the literal link when it is dead is ... strange, very literalist, it will waste user time. However, a compromise would be to add a note, using the Rotlink link in addition to the original. We are very, very unlikely to see a problem like this. I think I know what happened with. That was an archive.org page, and it was taken down after the link was added. That's all. The restored link is behind a pay wall (if it still exists). Any link can break.
- We have lots of users who do not respond to warnings and requests. We do not therefore block them. We make an assessment, overall, if the harm outweighs the benefit of allowing the user to edit, and we don't punish for "failure to respond," as some wikis do. --Abd (discuss • contribs) 19:16, 28 August 2014 (UTC)
- We're obviously looking at different data, because I find 19 archive.is links from August 2014 alone. And unfortunately, I am unable to vouch for the safety of this resource. I can only tell you that if I wanted to create a botnet, I would do it by having unsuspecting users click on links that would bring them to my server before directing them to their requested content. If the information at Wikipedia:Wikipedia:Archive.is_RFC is correct, such a botnet already exists, and the links here would serve to expand that network. Because there is this potential for abuse, already identified on a sister project, I bring it to the community's attention. If the community finds archive.is links to be a valuable service, so be it. -- Dave Braunschweig (discuss • contribs) 01:47, 29 August 2014 (UTC)
- At what data are you looking, Dave? I linked what I used. I did not, however, compile an analysis, imagining that this would be unnecessary and wasted work. Since this has been questioned, I now have, User:Abd/Archive.is. That page shows the four additions in August that I mentioned above, no more. Is there a bug in the special page for external links? Or have I made some other error? Diff or diffs, please! --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC) Daven's comment below split my comment, so I'm adding this sig to it, copied from below. --Abd (discuss • contribs) 20:08, 29 August 2014 (UTC)
- As indicated initially, I am doing a full text search using "archive.is". I can't give a link, because the link doesn't interpret correctly with quotes in the URL. It also doesn't interpret correctly using ASCII values for the quotes. That's why I instead noted 'You can do a full text search for "archive.is" (quotes required) to verify.' -- Dave Braunschweig (discuss • contribs) 19:35, 29 August 2014 (UTC)
- Awesome. Thanks, Dave. Now, why are some of these links not showing up on the normal special page used to study external links? For example, Molecular Biology contains a link to archive.is. The search shows a date of 03:20, 22 July 2014, which was the date of last edit (by Dave). That is not the date the text was added, . I am, by the way, fixing the reference to show what was archived. This was not properly created in the first place.
- When I edited the link, which hid the "archive.is" text, it disappeared from the search. The search does not show hidden text (this is a bug in the search engine, in my opinion.) But we also are seeing a problem with Special:External links.
- If we want to know Rotlink activity in August, neither of these approaches is clearly accurate at this point. Both are tedious and may be incomplete. Instead, this is Rotlink contributions for August (to 21:09, 29 August 2014), examining which, while tedious, will also give us other useful data. I'm compiling this on User:Abd/Archive.is and will come back with a report. --Abd (discuss • contribs) 21:52, 29 August 2014 (UTC)
- See [[7]] for an analysis of Rotlink August contributions. Summary: Rotlink made 162 edits in the study period. About 184 links were added to archive.org and about 43 links to archive.is. Peak day had 25 edits, peak hour had four edits. In only one case did two edits show up with the same minute. --Abd (discuss • contribs) 01:04, 30 August 2014 (UTC)
- There are two issues here, Rotlink, a user, probably an unauthorized bot, but operating below non-authorized limits, and links to archive.is. The large majority of Rotlink edits add links to archive.org (the ancient Wayback Machine, or Internet Archive). It appears that where an archive.org page is not available, Rotlink uses archive.is, and Rotlink is apparently being run by the same people who run archive.is.
- When the archive.is flap arose on enwiki, Rotlink did not respond as "the community" expected. Rotlink's attitude seems to be, this is purely helpful, so what's the problem, I don't want to hassle this, so if you try to stop me, I will follow w:WP:IAR and do it anyway. And so Rotlink edited anonymously. We do not know to what extent the ensuing IP editing was affiliated with Rotlink or was some general support, but that activity is not known to have continued. Rotlink is not editing anonymously here, but openly. So, rather than run a formal Community Review, which would be needed to ban Rotlink, some questions for the community follow below. --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
The historic alternative before any automatic URL modification, was {{Dead link}}. On the French Wikipedia this template provides in addition four archive site URLs, including archive.is. So this solution is quite better than Rotlink, and that's why it engendered a development request of a similar bot in 2010, which I took, and modified thousands of French pages since then. Fortunately now my bot is open source and anyone can submit a modification I could run after. So I propose to:
- Modify {{Dead link}} in order to offer a few URLs to get the initial sources.
- Create Category:All articles with dead external links as a HIDDENCAT.
- See if we replace all the Rotlink URLs by the new template version.
- Vote if we periodically launch a bot like that, eventually in parallel of Rotlink.
JackPotte (discuss • contribs) 20:52, 29 August 2014 (UTC)
- Great. First of all, if Rotlink can operate an unapproved bot making good edits automatically at low rate, so can anyone, without approval, and we don't need to vote on whether or not dead links should be fixed. I.e. we want them fixed, we don't just want to notify users that they are dead. If anyone thinks that having a dead link template (even with suggestions) on a link is superior to having an linked archive of the page, the user is welcome to revert the archive.org or archive.is link and place the template, Rotlink having served to automatically identify the dead link. And then someone else could review the situation, etc.
- Or one can get bot approval. It's obvious why Rotlink hasn't done this. The process is a pain in the rump; he's running Rotlink globally, and there are something on the order of 600 wikis. So he has apparently elected to go for low rate, tolerating a few wikis blocking him. If they don't want the fixes, that's up to them!
- So there is already another bot, great! Demonstrate it, please! We will not block you for operating an unapproved bot without warning you first! Just keep the rate as low as Rotlink's and you will be fine. If you want approval, again, go for it! Then you can easily make Rotlink editing here unnecessary, because you can run the bot once a day and stay way ahead of the game, with up to about 10 edits per minute, I think. --Abd (discuss • contribs) 01:31, 30 August 2014 (UTC)
- JackBot is already approved. -- Dave Braunschweig (discuss • contribs) 02:31, 30 August 2014 (UTC)
- Great. However, I just reviewed JackBot operation with broken links, below. I'd not be happy to see that here. However, if the rate were low, we could supervise it. This is, in no way, a substitute for what Rotlink does. Take a look at my experience below. --Abd (discuss • contribs) 02:43, 30 August 2014 (UTC)
Allow Rotlink to operate?[edit]
Rotlink is running an unauthorized bot. However, the rate is low, and authorization is required for bots out of fear that they will flood a wiki. If a flood of edits appear from Rotlink, I would see no problem with any custodian temporarily blocking Rotlink pending investigation. Given this, the bot is performing two valuable services: identifying dead links and fixing them with links to archives (mostly archive.org). Some wikis have authorized Rotlink and/or Rotlinkbot to operate as a bot. We could do that as well. --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
- While Wikiversity:Bots says that bots must be approved, there is then a section, added much later after discussion at [8] that has:
- Bots running without a bot flag should edit at intervals of over 1 minute. Once they have been authorised and appropriately flagged, they should operate at an absolute minimum interval of 5 seconds (12 edits per minute).
- This clearly allows bots to operate at low rate without approval. (The intention may be just for testing, though). That was taken from meta policy. Rotlink operates globally, and global edit rate is below one edit per minute. (i.e, you can occasionally find minutes with 2 edits, but that does not mean that the bot actually operated at an interval of less than a minute, in terms of submitting edit requests, because there can be delay in those.) And that is why, in spite of all the flap over Rotlink, there has been no global lock request. --Abd (discuss • contribs) 16:49, 30 August 2014 (UTC)
Rotlink operation discussion[edit]
From a custodial point of view I would prefer that bots be declared and approved. It ensures that the community is aware of the actions planned, it gives everyone an opportunity to inquire and verify what the bot will do (with documentation), and it allows the bot edits to be tagged as bot, removing them from the default recent changes list. Unlike Abd, I interpret the requirement at Wikiversity:Bots literally, that 'The operation of a bot requires approval.' I have my own bot, and I followed these guidelines and waited for approval before using it. We also have a page for bot operation discussion at Wikiversity:Bots/Status. -- Dave Braunschweig (discuss • contribs) 19:35, 29 August 2014 (UTC
- I read the policy in both ways: literally and as to intention.
- As to literal reading, one needs to read the whole policy. It is obvious that "bot" is referring to high-rate automated editing, that is what is being regulated. This is clear from the exception noted in our policy, that a bot may operate below 1 edit per minute without approval. (Personally, I think that's too fast. I would not want to look at recent changes and see 1200 edits for the last day. But the origins of these policies were in high-watchfulness environments. Rotlink is a global editor, and global sysops watch for stuff like that!
- As to intention, bot policy has a purpose. Rotlink is not violating the purpose, AFAIK.
- Dave, you have a bot for doing high-volume editing here. That's a very different situation from that of Rotlink. Rotlink has already demonstrated high reliability, with over 173,000 edits globally. If this were a real problem, Rotlink would be globally locked. It's not. However, if I saw Rotlink flooding this or any wiki, I'd be at m:SRG in a flash with a global lock request, and it would be granted pronto.
- Bots are written and tested at low rate, already, before approval.
- As I mentioned, one of our options is to approve the bot. Would that make you happy, Dave? --Abd (discuss • contribs) 01:31, 30 August 2014 (UTC)
- But there's more to it than that. This discussion, and the alternatives being discussed are why there should be approval before a bot makes 173,000 edits, or even just 309. As you point out, a 'low volume' bot could still have a very high impact in a relatively short amount of time. If this functionality is something we want, then yes, one approach would be to approve Rotlink as a bot. Although, that, too, would violate policy, because 'Bot operators must: create a separate account for bot operation'.
- Based on the discussion so far, I would much rather have the functionality suggested by JackPotte than the functionality currently provided by Rotlink. In addition, if this is something we want, then it should be done at a higher volume so that it has an effective impact. JackPotte has offered to set up his bot to do this. I would be happy to do the same with mine. But first we should all agree on (or at least come to consensus on) what it is the bots should do. -- Dave Braunschweig (discuss • contribs) 02:28, 30 August 2014 (UTC)
- That is really a separate discussion from this. Rotlink is operating, now, doing useful work. We don't have an alternative, yet. But as to what the bots should do, how about starting with exactly what Rotlink does? After all, maybe Rotlink will stop operating. That's a problem with depending on any private bot. Our page archive bot, here, stopped working eventually when the user apparently stopped running it. If we have something that does the same job, why should we prevent Rotlink from running? Only if we have something better and we want to stop the Rotlink changes, then, it's simple. One button push. When needed. Not now.
- Dave, are you really proposing that we block Rotlink because, someday, Rotlink might start eating our pages? Running amok? But any user could run amok, any time, and any user could set up a bot at any time. Remember, as well, in the User:Abd/Augusto De Luca study, I tested manual editing at in excess of 6 edits per minute, sustained. Simple to automate that, if I wanted to run amok and get myself blocked and banned. Rotlink is not going to do this.
- We don't issue preventative blocks absent harmful behavior. Some wikis apparently have decided to do that. I hope we don't start to imitate them. --Abd (discuss • contribs) 02:55, 30 August 2014 (UTC)
- I haven't proposed blocking Rotlink. I have asked the community what it wants to do. My personal concerns are already described here. I believe it is a bot functioning without community notice or approval, and it directs users to a (potentially self-serving) third-party website not related to the link the user is seeking, and without any visual cue or warning. From a computer security perspective, it is something I would instruct my students never to do. This approach is how most current malware is distributed. If a better solution can be found, I would be in favor of that better solution. JackPotte's proposal would provide a clear indication to the user what site they are actually visiting. Then, anyone who trusts archive.is is free to click on the link, knowing what they are requesting, and from whom. From a design perspective, it also centralizes the management of archive options in the dead link template, allowing for easy additions and removals as archive sites come and go. If I'm the only one who has concerns, then it's my problem. If others are also concerned, then we can implement a better solution. -- Dave Braunschweig (discuss • contribs) 03:36, 30 August 2014 (UTC)
- Is there an action you do propose, Dave? This is an example where general concerns, which you have well expressed, may not apply to the particular situation in front of us.
- Less than a quarter of the links are to archive.is. If we want to stop Rotlink from adding links to archive.is, that is an action that could be taken, except there seems to be no necessity for it, and the one opinion that is global consensus (with a few exceptions) is that archive.is links are okay, so there is only the technical concern.
- Rotlink is not going to distribute malware. He'd be shooting himself in the foot. We'd lock the account, take all the links down, lickety-split, I don't care how many there are, there are many bots that could go into gear quickly, and we'd blacklist the site. It would be an emergency and would be handled as such.
- I'm not concerned about archive.is. I would be concerned about a bot operating like Rotlink operates, if it had not demonstrated a track record of good edits. This is the point: RandomEditorDoingYouAFavor placing links to some unknown site, absolutely, stop it now. But Rotlink automated something that was already happening, with many editors adding links. One of the problems in the Wikipedia RfC and in the meta blacklist discussion is that no analysis was done of how many edits were added by regular editors, first, before Rotlink pushed the issue.
- Meta blacklisters care much more about "conflict of interest editing," per se, regardless of whether or not an editor is doing good work, I studied this extensively with an editor adding links to lyrikline.org, which was blacklisted for years for no good reason, all the links were good. The editor was blocked to boot, and antispam volunteers vandalized many projects, it's reasonable to call it that. (One nearly was blocked on de.wikipedia.) If he could have gotten away with it, there is a meta admin who was itching to blacklist archive.is. But there was no community support. Basically, to blacklist, you have to remove those links first. There was talk of using a bot to remove them on en.wiki, that went nowhere. Then an admin used the edit filter to disallow any new archive.is links, and few editors understand the edit filter, much less know how to challenge an action like that. I was one. That's why I'm no longer editing Wikipedia! (Successful challenges of administrators don't make one popular with the administrative community.)
- In any case, thanks, Dave, for being concerned about security. This is not, however, a security issue, just because in some alternate universe it might be. I think we should edit the bot policy to more accurately reflect what we need. Rotlink isn't a problem. An unapproved bot editing at 1 edit per minute for more than short bursts, could be a problem. Actually, though, any busy editor can make a huge mess if nobody is paying attention. An eye-opener for me was looking at all those Rotlink edits. We have a lot of work to do to organize Wikiversity! The time we spend hassling dead links is time we don't spend on the organizational task. --Abd (discuss • contribs) 04:27, 30 August 2014 (UTC)
- I'm in the universe where Russian hackers recently amassed 1.2 billion usernames and passwords using a botnet. One way botnets are built is using a combination of browser vulnerabilities and URL redirection. Through Rotlink's activities, we're providing the URL redirection. I would propose a 'truth in advertising' type of approach where the redirection is clearly labeled. The link could be displayed as 'archive.is: title'. I already do the same thing in other external links I create, so that users know the site they are visiting before they click. JackPotte's proposal would also work, as users would see the name of the archive they are visiting before they click. I particularly prefer JackPotte's suggestion that it be done through a template, so that any future updates can be performed on a single template rather than what will become thousands of pages and external links. -- Dave Braunschweig (discuss • contribs) 14:03, 30 August 2014 (UTC)
- That's an alternate universe, i.e., part of the complete universe that does not apply here. The Evil Plan described requires this: archive.is is a massive undertaking to provide archiving-on-demand-or-need of many, many web pages. That undertaking operates for a long time (it's been at least a year) and many incoming links are created. Then archive.is switches to a malware site. How the Russian hackers collected 1.2 billion usernames and passwords using a botnet has anything to do with Rotlink, I don't know, there appears to be no "botnet" involved. How a malware site that is loaded by following an archive.is link here ends up collecting usernames and passwords is entirely unclear. Perhaps they pretend to be Wikiversity requesting log-in? How one would fool a billion users this way is, again, obscure.
- So there is a huge investment of time and energy, as represented by archive.is, all to create something that would be shut down within hours of starting to implement the Evil Plan. It makes no sense at all.
- As to improved process, yes, I agree that links to archive sites should be identified. Many of the archive links do identify, because the original link was bare or visible, so the replaced link is. Hidden links, though, are not identified, but, then again, they were not identified before, necessarily. We have no policy that requires external links to be identified. It also looks like we may have a bug in Special:ExternalLinks, which is a serious problem.
- We do not create content with policy. One of the major wiki problems is policy formation or central decision-making that is an "unfunded mandate." That's what the archive.is flap on en.wiki showed.
- Bots can be used for maintenance, if there is something very clear that a bot can do, and if it is possible to review bot operation and stop a malfunctioning bot. Bots can be low-rate, allowing this review. (There is no particular need for a bot to be fast, unless there is a huge and very clear task to be undertaken.) In any case: until we have practical alternatives, Rotlink is performing a needed service. "Unauthorized Bot" is a red herring here. The real issue is content, how we want links and broken links to be handled, and, then, review of Recent Changes by those of us who watch it. I don't mind Rotlink because the rate is low enough here (averages on peak days, which are rare, one edit per hour) to allow monitoring of Rotlink activity and, in fact, to look at those links and pages and make links work better, I've already done some of that.
- As I've said, if I see some drastic alteration in Rotlink behavior, I expect to look at it immediately, review global behavior, and possibly be at meta requesting global lock within minutes.
- Of course, we could not stop a mal-intentioned bot in the hands of a sophisticated user, because the user would simply register new accounts. Most locked spambots are not sophisticated. They might as well wave a big red flag, "spambot." In this case, the behavior would not be distinguishable to ordinary users as spam or malicious linking. We'd whack archive.is quickly, though, with the global blacklist, if we found malware hosted there (i.e., directly by the site itself, there could be something problematic on an archived page, as there could with archive.org).
- See [9] where the page was a mess. I left the links as raw, which is common in print publications (where the whole link must be visible!), and we'd want that for a book compilation. Ugly, but useful and clear. Instead of focusing on Rotlink, who is helping, we may much more profitably focus on what we actually want for Wikiversity, and how to get from here to there. --Abd (discuss • contribs) 16:21, 30 August 2014 (UTC)
- I've considered requesting the bot flag for Rotlink, but given that Rotlink is not particularly communicative, I'd rather not. I'd rather have those edits be reviewed, even though they are boringly good. It creates a kind of "random page" review, so there are other benefits as well. --Abd (discuss • contribs) 16:24, 30 August 2014 (UTC)
- I have created a notice on User:Rotlink that Rotlink should be treated as a bot, and immediately blocked if operation moves out of established limits. No warning is needed (nor would warning be useful). I report there that consensus is, at present, to allow Rotlink to operate, based on the usefulness of Rotlink's work. Users should feel free to revert any Rotlink edits if they are harmful, and to Request custodian action if the bot begins to operate outside of safe limits. --Abd (discuss • contribs) 14:19, 6 September 2014 (UTC)
Rotlink operation conclusions[edit]
Support as proposer. --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
Support I've checked the last ten link repairs by Rotlink, nine are between 8/25-8/29/2014 and the tenth is from 9/9/13. Each link was dead, either from the web accessible literature (8) or from Wikipedia (2), each was replaced with a link to archive.org, and each repair was correct. If Rotlink begins to operate outside this norm creating vandalism or unusable links then I believe a custodian should halt the bot or human pending investigation. --Marshallsumter (discuss • contribs) 00:30, 31 August 2014 (UTC)
Allow links to archive.is?[edit]
The general consensus on this has been yes, everywhere. Archive.is and archive.today are not globally blacklisted, are not blacklisted on en.wiki (where the close of the RfC mentioned above indicated a "weak support" for blacklisting), and AFAIK, are not blacklisted anywhere. Shall we allow links to archive.is and archive.today? (The alternative is blacklisting.)
Archive.is discussion[edit]
Archive.is conclusions[edit]
Support as proposer. --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
Support it might help the readers. JackPotte (discuss • contribs) 21:01, 29 August 2014 (UTC)
Support if the only links I can find for a dead link happen to be on archive.is or archive.today, I will use them. If Rotlink systematically shifts from archive.org to archive.is and this archive starts charging for downloads, then I believe solicitation has occurred and the bot or human should be halted, pending investigation. --Marshallsumter (discuss • contribs) 00:36, 31 August 2014 (UTC)
Allow Rotlink to add links to archive.is or archive.today?[edit]
Rotlink apparently has a Conflict of Interest with respect to those two domains. We have no policy prohibiting COI users from linking to their domain, if the domain is useful and otherwise allowable. Given that the Rotlink edits only flag dead links and provide a suggested solution, which any user may improve or revert, should we allow Rotlink to add such links? --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
COI discussion[edit]
Noting that User:JackPotte's proposal above would resolve the conflict of interest issue. Users would be free to choose which archive they would like to view, and the visual clues make it clear that they are not visiting the original source. -- Dave Braunschweig (discuss • contribs) 21:25, 29 August 2014 (UTC)
- JackPotte is free to implement his proposal, our approval is not needed, it would only be needed if that bot operates at high rate. I
am notwas not judging which is better, not having seen any example of what it does. Taking a hint from myself, I look at the French template and what links there and find permanent link, the template is used in Note 4. While that looks decent, the archive.org page that opens up is the URL page, not a specific link, and both snapshots are broken. The only link that seems to work at all is [10] from wikiwix.
- I don't see this as superior to what Rotlink is doing, but Rotlink probably doesn't check wikiwix. The problem here is that work is being set up for editors to do, and what happens on Wikiversity is that, too often, nobody gets around to it. Most readers would see that link and not know how to fix it. So, to see what is involved, I look at the wikitext. To start out, I click on the references section and, of course, don't see the code. I have to click on the ref number to find where the code is. I know already where this is heading! So I click on the number and I don't find the ref. That's because it's hidden by the donation request. (These ref jumps put the ref at the very top of the page, where they will be nicely hidden by the site message.)
- Plus I need to know what to look for. It's a tiny highlighted ref number in the template on the top right. How the hell do I edit that? Okay, I have to edit the whole page. I was going to need to do that anyway, because it's the only way to see a prevue of references, and I don't want to blow my one chance to make a good edit to fr.wikipedia. Of course, the note numbers don't show in the wikitext, so, having been through this crap before, I first copy the date referenced, then edit the page, and search for it with my browser Find function, which didn't work. It's a template, so Find doesn't see the date, it's spread out in fields. Still, it wasn't a lot of text to search through.
- About to substitute the wikiwix link for the broken link template, I realize I haven't checked to see if it supports the information. It does not. That link was to a googlebooks page that allowed searching a book by page, and the page was given. No wonder archive.is didn't have it! Total waste of time.
- Summary: not so good. Better than a broken link and "dead link." Maybe. Not better than what Rotlink does, but maybe this was unfair as a test. (Rotlink would not have edited this.) Perhaps someone else can find a better example, I just looked at the first I found.
- And then I found the original edit.[11] No wonder nobody had fixed it since JackBot placed it January 1, 2013. Yes, JackBot=JackPotte. The bot appears to have mangled the note information, losing the actual book citation and I had to look way back to find it. I haven't seen Rotlink do anything like that, but nobody's perfect. I fixed it.[12] --Abd (discuss • contribs) 02:29, 30 August 2014 (UTC)
- And then, armed with the full book information from the original, and even though the page is now missing from the Googlebooks preview, I found a way to display the needed information, and added a link.[13]. Old trick from an old dog. I did not put this link into the citation template, because, then, a bot might break the thing again if Google changes something, which they often do. Rotlink looks for dead links and replaces them, without changing anything else. In fact, I found one place where this wasn't optimal, but it was harmless, just didn't look good, the real problem was the original was an external link with the entire URL then placed again for display, so Rotlink replaced both. Rotlink really should not replace display-only text. But this is so rare that it's probably not worth pinging him. --Abd (discuss • contribs) 03:44, 30 August 2014 (UTC)
COI conclusions[edit]
Support as proposer. --Abd (discuss • contribs) 18:45, 29 August 2014 (UTC)
Support see my comments in the above support vote. --Marshallsumter (discuss • contribs) 00:39, 31 August 2014 (UTC)
Tagging a wiki book as as a research project[edit]
Should I have {{Research}} or {{original research}} or both on the titlepage of a research project? Should I also have these templates at subpages of the project?
--VictorPorton (discuss • contribs) 18:47, 28 August 2014 (UTC)
- My opinion is that it is only needed at the top level, subpages are automatically considered a part of the same project. They can be like chapters or pages in a book, the general disclaimers don't have to be on every page. Thanks for asking. --Abd (discuss • contribs) 20:33, 28 August 2014 (UTC)
- By the way, it is possible that a resource doesn't have the research project at the top level. A lot of work I've done only gets into what can be called original research in subpages. So a disclosure would be at the highest level of the actual research project. --Abd (discuss • contribs) 20:35, 28 August 2014 (UTC)
Nomination of Dave Braunschweig for Permanent Custodianship[edit]
Wikiversity:Candidates for Custodianship/Dave Braunschweig. All users are invited to comment there.
In the past, we have often site-messaged PC nominations. Dave, as a probationary custodian, should not edit the site message to show that the process is happening, my opinion, because of conflict of interest. However, another custodian, seeing this, may decide to site-message it, or, if there is community approval here for a site message, Dave could then go ahead and action it. So if you approve or disapprove of this being site-messaged, please comment! --Abd (discuss • contribs) 16:16, 29 August 2014 (UTC)
Shall the approval process be site-messaged?[edit]
It is commonly done. --Abd (discuss • contribs) 18:50, 29 August 2014 (UTC)
Site-message discussion[edit]
Site-message conclusions[edit]
Support as proposer, --Abd (discuss • contribs) 16:16, 29 August 2014 (UTC)
Support, this is usually done, but does this require a permanent custodian or bureaucrat? --Marshallsumter (discuss • contribs) 03:02, 30 August 2014 (UTC)
- No, any custodian may edit the site-message. Ideally, the custodian is neutral. However, with a consensus here, or at least this proposal and no reasonable opposition in a decent period (I'd say three days), Dave could also site-message this. That message would be neutral and could actually attract negative votes, though I doubt that will happen. I'm just suggesting caution about recusal. In my view, it's better if this is in the site-message, but I don't want to start pinging all the custodians. Some might be on IRC. I don't do IRC, I'm allergic to it, I break out in hives. --Abd (discuss • contribs) 03:07, 30 August 2014 (UTC)
Support--guyvan52 (discuss • contribs) 02:56, 31 August 2014 (UTC)
Done - Thanks, Abd for getting this happening and discussing. -- Jtneill - Talk - c 02:47, 5 September 2014 (UTC)
Automatic transformation of XML namespaces (new research project)[edit]
I've started a new research project Automatic transformation of XML namespaces (mainly about automatic transformation between XML namespaces, based on RDF resources which may be located at namespace URLs).
Everyone with good knowledge of XML is welcome to contribute. Any comments are welcome.
--VictorPorton (discuss • contribs) 17:52, 29 August 2014 (UTC)
Grants to improve your project[edit]) (discuss • contribs) 16:09, 3 September 2014 (UTC)
Echo and watchlist[edit]; [14]:23, 9 September 2014 (UTC)
- By experience I'm sure that even if you post it on MW:Talk:Echo (Notifications) it will be complicated, unless you develop it yourself. JackPotte (discuss • contribs) 11:17, 9 September 2014 (UTC)) | http://en.wikiversity.org/wiki/Wikiversity:Colloquium | CC-MAIN-2014-41 | refinedweb | 22,268 | 63.8 |
[dash] wrong search result of Unity in Chinese
Bug Description
The search result in Chinese is not correct. Please see the attached example.
Unity-2D version: 3.8.1
Related branches
- Neil J. Patel (community): Approve on 2011-08-25
- Diff: 13 lines (+3/-0) 1 file modified
This bug most likely occurs in unity 3d. It's probably not being reported because there is currently no way to enter Chinese characters in unity 3d (see bug https:/
Supplement information for Ubuntu Software center:
1. The category have be localized in to Chinese, but
2. The package name has NOT been localized.
Please see the attached screen shot.
Thanks for your bugreport.
I looked into this and was able to reproduce it if I only set my session to zh_CN but not the system wide language.
With a systemwide zh_CN setting (and a subsequent update-
I was not able to text the localized search (becauseI don't know what to input) but I assume it works as well as its using
the same data as the display. Attached are two screenshots from my box.
To make this work for non-systemwide setups software-center would have to build a index for each installed language.
We can add this feature if needed, it just means that the diskspace will increase and app-install-data package install
speed will decrease.
I think I need the hand holding of someone versed in Chinese in order to fix this, otherwise it's like shooting blindly. Are you the man for this Kevin? Please try and catch me, kamstrup, on IRC if so.
I've been looking into CJK indexing in Xapian and the prospects are slightly dire... See http://
The upshot is that this will require some work. There are libs we can pull in for this (like http://
Anyone with a simpler solution are more than welcome to chime in :-)
Looks like http://
We may be able to replace the libunicode bits with libicu bits (since libicu44 is installed by default as a dep of webkit)
Looking deeper into the linked Xapian bug it seems we may be able to shoplift some code from the Pinot engine that is based on cjk-tokenizer but ported to glib2 instead of libunicode. As described in the Xapian bug it does depend on the Dijon namespace though so as they've done for the Xapian patch based on the Pinot code we must remove the Dijon usage.
As olly describes in the Xapian bug this is slightly dangerous though and may have unpredictable consequences if we ever see Unicode version mismatches between glib2 and Xapian or if they differ in their error handling (which they almost certainly do).
All of this still leaves the question open for how to handle this in S-C with Python as it's crucial that S-C and u-p-a use the *exact* same method for tokenization. If there's a mismatch between the query parser in u-p-a and how the indexed terms are generated in the S-C index we'll see no-, weird-, or random results.
Xapian should have all the Unicode support you need for this built in, so you shouldn't need to add a dependency on libunicode, icu, or glib.
Does SC use Xapian::QueryParser and u-p-a use Xapian:
Also, Xapian is taking part in GSoC this year, and "CJK support" is one of the potential projects. We've had promising interest in it, though it's too soon to know if that'll happen, and it wouldn't be done until August anyway. It might also be just Chinese support or just Japanese (or possibly students working on each separately). So a patch with a more generic approach may still be useful (probably would be for Korean at least).
I've got some code lying around which is a hacked version of cjk-tokenizer which uses xapian's unicode routines; it wasn't hard to make. I'll shove a copy of it up on github in a moment. It still requires linking into an indexing and query parser, though.
@Olly, @Richard: Thanks for chiming in! Afaik we use use Xapian:
The complicating factor here is that the Software Center index is created from a Python program (and also consumed by that program), but also consumed from a C program (unity-
Hmm, it does indeed seem awfully late in the release process for some fairly major distro-specific patching of xapian-core. It's quite likely there will be a better solution before 11.10, and if not we can probably get the cjk-tokeniser approach in cleanly upstream by then.
My thought would be to package the cjk-tokeniser code in its own little C++ library (which can link to libxapian for the Unicode stuff since that's a public API), and then knock up a simple Python wrapper around it (with SWIG or similar or even by hand). Then you can use this for CJK locales, and Xapian's code for others, which means that any breakage won't affect other users of Xapian, and can only break for S-C in CJK locales, where the search doesn't really work currently anyway.
Attached a branch with my WIP to add support for CJK handling in Xapian. Development details will be in the Xapian bug tracker. Once there is something to ship/test in Ubuntu I'll put a note on this bug.
For reference, here is the summary of IRC discussions on this topic, including support from Platform with regards to a release plan for the fix.
We quickly considered the 2 alternatives today. The alternatives being:
1. workaround in apps, no change in the library
vs
2. patch in the library and regression testing in the apps
The library approach (2.) was chosen, as it's easier to implement and is
closer to a long term solution.
The plan of action considered now looks as follows:
1. kamstrup to start integrating the tokenizer + library patch into libxapian
2. seb128 to integrate the lib and impacted packages into a PPA for testing
3. mvo to check potential regressions with the SC test suite (western languages mostly)
4. At that point, we'd like the test teams in OEM, and CJK users in particular, to control the resulting packages.
"1. kamstrup to start integrating the tokenizer + library patch into libxapian"
With my Xapian upstream hat on, that really doesn't seem a good plan to me. It would mean that databases built by anything using Ubuntu's packages of Xapian risks being incompatible with those built on other platforms or with other builds of Xapian. We take a lot of care to avoid introducing any such incompatibilities within a release series, and the feedback I've had suggests users appreciate that.
@Olly: Agreed - that's not a situation we want to get into. That's also why we'll kick this off from a PPA so one has to specifically opt in to this and we can put a big fat Caveat Emptor sticker on it.
Then if we can guarantee database- and result set compatibility with vanilla libxapian for non-CJK corpuses we can *consider* it for update in main. And if you object *if* we get to that point I am pretty sure the platform team will listen to you - I don't expect that they enjoy maintaining a broken platform :-)
@Mikkel: thanks, that's reassuring.
There's been a relevant development too - Xapian has a student working on adding support for a Chinese segmentation algorithm as part of Google's Summer of Code this year. Assuming that project goes well and we can get it merged in, this ticket should be addressed for Chinese.
That still leaves Japanese and Korean, which aren't explicitly mentioned in this ticket so far that I saw, but suffer from the same issues.
Here is a patch for Xapian, it edits the Term Generator and Query Parser used in unity-places-
This is the branch with the patch applied:
https:/
Which allows for the searching CJK text in the Dash.
New patch, hopefully will be merged with Xapian.
Hopefully this will be merged with Xapian.
David & Brandon,
Since there is a fix for this bug, is there any way for our colleagues in Asia to help test it and moving this bug forward?
Thank you.
Kent
Kent Lin: It would be very useful for people who can read these languages to try out Brandon's latest patch, and report if it works well, or if there are any issues. Some more test cases would be good too - I've not had a chance to check the test coverage for yet, but it would be good to have most of the new code covered.
Not sure if the latest patch here is that same as the one in xapian's trac or not, but the latter is what I'll be looking at, and at least for me it's simpler if discussion about the patch itself happens there rather than being split:
http://
With xapian 1.2.5-2 on Ubuntu 11.10, I see using the same Chinese character for search in the Unity dash returns relevant applications including Empathy. By relevant I mean the Chinese character is in the application names on Unity dash.
See attached screenshot, the translated Chinese name of the first and second applications has the Chinese character, but not the third (Sudoku).
Ubuntu 11.10 Alpha i386 (20110803.1)
Unity-2d: 3.8.14.1-0ubuntu1
xapian-tools: 1.2.5-2~ppa1
LANGUAGE=
language-
language-
Another example that it returns relevant but also irrelevant applications.
Here is a bug that user can not find the correct application if he type full Chinese name of the application in unity search entry,
the attachment is a screentshot of this bug.
Test Case
=======
User want to find a application named as "terminal" (終端機 in Chinese), this Chinese word "終端機" has 3 characters , first character is "終", second is "端", the third is "機".
Step To Reproduce:
1. type "終端" unity2d search entry
Excepted Result:
2. the application "終端機" shows in result (User can find what application he wants)
Actually Result:
No result
Other Related Info:
* type "terminal" in unity2d search entry can find the application "終端機" (User can find what application he wants)
* type "終" unity2d search entry can find the application "終端機" (User can find what application he wants)
* type "端機" unity2d search entry can find the application "終端機", and "端機" is not actually a Chinese word, it does not mean anything here. (Users usually do not try "端機" keyword for finding "終端機" application)
Env
====
Ubuntu 11.10 Alpha i386 (201108010.1)
Unity-2d: 3.8.14.1-0ubuntu1
libxapian22: 1.2.5-2~ppa1
xapian-tools: 1.2.5-2~ppa1
LANGUAGE=
The Xapian part of the bug is now fixed upstream and released in Ubuntu Oneiric, with a distro-patch. See also https:/
The "Applications" lens has been fixed to trigger support for the new Xapian CJK tokenizer
The FTS extension for Zeitgeist that serves for indexing keywords for Files & Folders has been fixed to support the new CJK tokenizer in Xapian.
Unity-2D is not the source of the issue.
All the foundations aspect of the bug have now been taken care of. See previous comments.
I will merge the fix tomorrow :)
On Thu, Aug 25, 2011 at 5:36 PM, David Barth <email address hidden>wrote:
> All the foundations aspect of the bug have now been taken care of. See
> previous comments.
>
> ** Changed in: unity-foundations
> Status: Triaged => Fix Committed
>
> --
> You received this bug notification because you are subscribed to The
> Zeitgeist Project.
> https:/
>
> Title:
> [dash] wrong search result of Unity in Chinese
>
> Status in OEM Priority Project:
> In Progress
> Status in Ubuntu Translations:
> Triaged
> Status in Unity:
> Fix Committed
> Status in Unity 2D:
> Invalid
> Status in Unity Foundations:
> Fix Committed
> Status in Unity Applications Lens:
> Fix Committed
> Status in Xapian Search Engine Library:
> Confirmed
> Status in Zeitgeist Extensions:
> Fix Committed
> Status in “software-center” package in Ubuntu:
> Triaged
> Status in “unity” package in Ubuntu:
> Fix Committed
> Status in “unity-2d” package in Ubuntu:
> Invalid
>
> Bug description:
> The search result in Chinese is not correct. Please see the attached
> example.
>
> Unity-2D version: 3.8.1
>
> To manage notifications about this bug go to:
> https:/
>
This bug was fixed in the package software-center - 4.1.21
---------------
software-center (4.1.21) oneiric; urgency=low
[ Kiwinote ]
* AUTHORS:
- add credits for the new icon (LP: #834882)
* a stash of unicode fixes to make s-c-gtk3 usable around the world
(LP: #831865, LP: #834409, LP: #834312)
* softwarecenter/
- fix reinstall previous purchases (LP: #834984)
* softwarecenter/
- set title for 'previous purchases' list view (LP: #833960)
* softwarecenter/
- fix None.copy() such that switching panes works again (LP: #834196)
* softwarecenter/
- escape application name in tiles (LP: #835876)
[ Jacob Johan Edwards ]
* softwarecenter/
- fix the spinner display when loading slow views (LP: #830682)
[ Gabor Kelemen ]
* po/POTFILES.in,
po/
- update per latest configuration, add new gtk3 files
[ Matthew McGowan ]
* softwarecenter/
- resize fix for Top Rated and What's New tiles (LP: #833697)
* softwarecenter/
softwarecen
- disable the rendering of the checkboard pattern in the
grid views (at request of mpt)
* lp:~mmcg069/software-center/description-tweaks:
- fix badly rendered package descriptions, other tweaks
(LP: #833954)
* lp:~mmcg069/software-center/globalpane-themeability:
- various theming fixes (LP: #828092, LP: #830681,
LP: #830738 and LP: #838382)
[ Gary Lasker ]
* software-center,
software-
softwarecen
- enable CJK support in Xapian (LP: #745243)
* po/software-
- refresh .pot file
* softwarecenter/
- fix missing icon in theme to let non-gtk3 version
launch again, also fixes all gtk unit tests
* test/test_
- update unit test
[ Didier Roche ]
* softwarecenter/
softwarecen
softwarecen
softwarecen
softwarecen
softwarecen
data/
- brings back OneConf to software center gtk3 with a fresh new design
(LP: #838623)
* debian/control:
- depends on latest oneconf
-- Gary Lasker <email address hidden> Thu, 01 Sep 2011 11:55:14 -0400
Hi,
I'm sorry but no matter what Chinese character I input from Unity 2D dash, it returns nothing.
System is updated.
apt-xapian-index 0.44ubuntu2
libxapian-dev 1.2.5-1ubuntu1
libxapian22 1.2.5-1ubuntu1
python-xapian 1.2.5-2ubuntu1
python2.6-xapian
python2.7-xapian
xapian-doc
xapian-examples 1.2.5-1ubuntu1
xapian-tools 1.2.5-1ubuntu1
I saw the same issue as Ray said in #37.
libxapian22 1.2.5-1ubuntu1.
It is a fresh install of the 0906 Oneiric build.
Software Center 4.1.21
Application name and description are translated after 'update-
Search by Chinese character doesn't show all expected applications.
See attached screen shot.
The first window shows 3 applications that have '文' in either the name or the description or both.
The second window shows search by '文' returns only 2 of those 3 applications.
This problem was fixed and is in the current daily build of unity-2d. It was a regression in unity-2d not libxapian.
See the 2 screenshots:
Software Center Version: 4.1.21
Different search result on the same program
Step:
1) Input '播' to search program, it can search 'VLC媒體播放器'.
2) Input '播放' to search program, 'VLC媒體播放器' won't in the search result.
3) Input '播放器' to search program, 'VLC媒體播放器' won't in the search result.
Marking the oem-priority task fix released because the main problem was addressed in 11.10. Some additional improvements are desirable; they will be addressed in separate bugs.
El 25/10/11 17:35, Steve Magoun escribió:
> Marking the oem-priority task fix released because the main problem was
> addressed in 11.10. Some additional improvements are desirable; they
> will be addressed in separate bugs.
>
> ** Changed in: oem-priority
> Status: In Progress => Fix Released
>
Thanks you for your answer. My problems whit ubuntu 11.10 are already
resolved.
My old hard disk was in no good conditions.
I have one new and the installation is ok.
Thanks newly for you and for all the people that make posible the
project of free sofware.
Attendly, Víctor.
To clarify:
This is running unity with Simplified Chinese seledcted and iBus for keyboard input method (see steps below).
Searching using Chinese characters does not yeild correct search results even though the Chinese characters appear in the displayed name. Search results are only correct if English characters are entered and this is wrong.
To enable iBus and Chinese:
In Language Support:
* Tab 1: Install Chinese/Simplified translations, input methods, and fonts
* Tab 1: Apply System-Wide button
* Tab 1: Select iBus in Keyboard Input Method System combo.
* Tab2 (named "Regional Formats" in en _US"): Select Chinese and Apply.
* Reboot/Relogin
* Ctrl + Space enables /disables iBus
* enter Chinese character for Empathy for example and it won't be found. enter English characters for empathy and it will be found in applications place. | https://bugs.launchpad.net/unity-foundations/+bug/745243 | CC-MAIN-2015-22 | refinedweb | 2,821 | 61.06 |
I don't think it's the good idea... Look at the comment : // This does not work with DMALLOC, since the internal data structures // differ. And after we have : #ifdef HAVE_DMALLOC we should have #if _NOT_ def HAVE_DMALLOC things which works with malloc and doesn't with dmalloc. #endif 2006/5/24, strk <address@hidden>:
On Wed, May 24, 2006 at 02:06:58PM +0200, Frédéric Point wrote: > Hi, > > When I compile gnash with dmalloc enable I have this error : > > i686-pc-linux-gnu-g++ -DHAVE_CONFIG_H -I. -I. -I.. -I.. -I. -I.. > -I../server -I > /usr/include -I/usr/include/libxml2 -I/usr/include/SDL -I/usr/include/SDL > -DQT_T > HREAD_SUPPORT -D_REENTRANT -I.. -I. -I.. -I../server -I/usr/include > -I/usr/inclu > de/libxml2 -I/usr/include/SDL -I/usr/include/SDL -O2 -mcpu=i686 -pipe -ansi > -Wal > l -MT utility.lo -MD -MP -MF .deps/utility.Tpo -c utility.cpp -fPIC -DPIC > -o .l > ibs/utility.o > utility.cpp: In function `void dump_memory_stats(const char*, int, const > char*) > ': > utility.cpp:81: error: aggregate `mallinfo mi' has incomplete type and > cannot > be defined Try including "malloc.h", struct mallinfo is defined there --strk; _______________________________________________ Gnash mailing list address@hidden | http://lists.gnu.org/archive/html/gnash/2006-05/msg00127.html | CC-MAIN-2016-07 | refinedweb | 200 | 58.58 |
The Little Program
That Could Raises its Own Bar
by Jim Bray
For fifteen years,
Broderbund's "The Print Shop" (about $49.95US) has been empowering home
and home office users who need a quick, easy, and affordable way to do
basic publishing chores.
It was a testament
to Broderbund having gotten the program right the first time that, other
than making technological upgrades like the jump from 16 to 32 bit Windows
features and the addition of the requisite clip art libraries, "Print
Shop" had hardly changed from its initial incarnation to the first 32
bit version (available as Print Shop Ensemble III).
We last reviewed Version
6, which was a virtually complete redesign that added a lot of power to
the package. This "new tradition of excellence" has been carried on in
the jump to Version 10, a 9 CD-ROM tour de force if I've ever seen one.
So the new version
lets you do all the nifty stuff of versions past, like designing or adapting
from a template your own greeting cards, signs, banners, calendars, etc.
You can save your designs as HTML files, too, so you can throw 'em up
onto the world wide web - which fits in nicely with the package's Web
site creation module.
The latter is definitely
no "Dreamweaver" or "Hot Metal Pro," but they'll indeed let you
create a basic web site without studying HTML.
You can also take
advantage of some 8000 pre-designed layouts that add a lot of flexibility
to your projects as well as adding from the 160,000 pieces of clip art
(a lot of which is clip paintings" and "clip photos" rather than your
run of the mill line drawing clip art, though there's lots of that,too).
I like beginning with
a pre-designed project, and adapting it for my own use; it's a nice and
easy way to get started, yet the finished product usually bears now make multi-page
calendars and photo albums, borders, and even do some pretty good photo
editing see below). And that isn't all. Though I don't really know why
it would be included in what's basically a publishing program, Print Shop
Deluxe 10 also includes "The Ultimate Mail Manager," which includes address
book, mail merge function, and can be used to send out automatic mailings.
A nice touch, I suppose,
and an unexpected bonus - but I'm sure no one would have known if it hadn't
been included.
Back of the graphics
side of things, Print Shop 10 also lets you create logos and seals and
import images from many different file formats, including jpg, wmf, tif,
bmp, pcd, png and more.
Its photo editing
capabilities will let you add drop shadows, transparent effects, and stuff
like that. You can also create digital photo albums and share them over
the Internet.
The package also includes
Serif DrawPlus 3, a "dumbed down" CorelDraw/Adobe Illustrator type graphics
creation program that's incredibly easy to use, yet surprisingly flexible.
And "3D Greetings" is a cute app that lets you create animated cards you
can send to your friends via e-mail.
Plus, the interface
has been redesigned (there was nothing wrong with the old one, but I must
admit the new one is even smoother) to feature cascading menus like you
see on many Web sites. This makes navigating the plethora of choices much
easier..
In short, Print Shop
Deluxe 10 is a graphic design, layout, photo editing and cataloguing,
Web creation tool and more - all with a no brainer interface and enough
built in stuff that the average person can be up and running on it with
virtually no learning curve.
Professionals may
sneer at this Print Shop, but they're doing it a disservice - and it isn't
meant for them anyway.
If you're someone
who wants something quick, easy, and incredibly flexible, that unleashes
your creativity without sending you back to school, check out this product!
Tell us at TechnoFile what YOU think
Audio/Video
Automotive
Computers
DVD's
Forum
Gadgets
Games
Guides
Letters
Links
Miscellaneous
Product
News
Search
Subscribe | http://www.technofile.com/articles/printshop.html | crawl-001 | refinedweb | 694 | 60.18 |
Hey there,
We are publishing CLion 2016.1.3 bug-fix update (build 145.1617) today.
A few notable fixes:
- CPP-6013 Wrong “Binary operator ‘<<‘ can’t be applied” error message.
- CPP-6254 Output for CLion IDE sometimes cuts off when executing a program.
- IDEA-156750 Slow IDE launch on cellular network.
We are also bringing a new feature in this update:
Alphabetical sorting and type grouping in the Structure View
To open structure view for a file just press
Alt+7 on Linux/Windows,
⌘7 on OS X. By default the order corresponds to the order in the original file. With grouping by type elements are shown in the following groups: namespaces, structures and classes, constructors/destructors, functions, fields, etc.
No sorting:
Grouping by type:
Grouping by type and alphabetical sorting:
Full release notes can be found here.
The update is available for download from the site or as a patch via Check for Updates in the IDE.
Your CLion Team
JetBrains
The Drive to Develop
There is no way to patch in the IDE for now. Only a button that links to the page for the whole package.
Could you please try again now? Just call Check for Updates from the menu.
No. Only four buttons: Download…, Release Notes, Ignore This Update & Remind Me Later.
By the way, which version do you have currently?
2016.1.2
Thanks for reporting, we added patched from earlier versions. Should be available in 15 minutes or so
Works fine now. Thanks!
Yay!!! This is a simple but awesome feature which makes my life easier.
Out of curiosity, whatever happened the possibility of adding Valgrind support? It was a “maybe” on the last roadmap but never made it and it’s not on your current roadmap. While getting Python support was way more useful in 2016.1, seeing the possible Valgrind support got me pretty excited as well. Have you guys abandoned it for the time being?
We’ll consider it later. We’ve decided to proceed with other features for 2016.2, for example Doxygen.
Thanks Anastasia! Doxygen is a more useful feature anyways. I’m already using it in the EAP and it is making my life a lot easier. I’m looking forward to the full rollout of 2016.2.
Keep up the great work.
The download link didn’t work. Don’t know if it is caused by the browser settings. I tried to download CLion by Chrome 43 with Javascript enabled.
Strange. Could you please try the direct links:
?
Working for me now in Chrome.
The link in your reply works. Thanks.
The following link, however, still didn’t work, and neither did the direct link in this page. | https://blog.jetbrains.com/clion/2016/06/clion-2016-1-3-bug-fix-update/ | CC-MAIN-2019-22 | refinedweb | 452 | 76.93 |
Creating a Web Component in Stencil
Stencil is a new lightweight framework for creating web components from the authors of Ionic. It's going to form the basis of components in Ionic 4, which will make them framework agnostic and hence allow writing Ionic applications in frameworks other than Angular. Although Stencil is still in early stages, I decided to give it a try and see what the experience is like.
There's no CLI yet, so the best way to create a new project is by cloning the starter project.
git clone my-component cd .\my-component\ git remote rm origin
We can now run the project to see the sample component it contains:
npm install npm start
This will start a development server on with file watcher enabled to trigger rebuild as we edit the files. Although Stencil components will run in all current browsers, development build is ECMAScript 6 only by default, so you'll need to use a compatible browser, e.g. Chrome.
My component will use the GitHub API to retrieve some users. I chose that to try out as many Stencil concepts as possible, including HTTP calls to external APIs.
For starters, I took care of rendering the list. I replaced the component source code with the following:
@Component({ tag: 'my-component', styleUrl: 'my-component.scss', shadow: true }) export class MyComponent { @State() users: Array<any>; render() { return ( <ul> {this.users.map(user => <li><a href={user.html_url}>{user.login}</a></li> )} </ul> ); } }
As you can see, the code looks like a mixture of Angular and React. The class structure and the decorators look very Angular-like, while the render function uses the React XML-like syntax - TSX. Even if you're familiar with both, you might wonder what the
@State attribute does. Well, it indicates that the property is a part of the component state. Because of that,
render() will be invoked whenever its value changes and re-render the component.
To call the GitHub API, I will use the new Fetch API. Stencil will polyfill it in browsers that don't support it yet:
componentWillLoad() { let url = ''; fetch(url).then(response => { response.json().then(json => { this.users = json; }); }); }
The
componentWillLoad() function is a part of Stencil's component life-cycle. It will be run only once while the component is initializing.
As the last feature of my component, I'll implement the option to customize the number of users to retrieve.
@Prop() count = 20; componentWillLoad() { let url = `{this.count}`; fetch(url).then(response => { response.json().then(json => { this.users = json; }); }); }
The
@Prop decorator exposes the class property as an element attribute so that it can be set when the element is used in a page. In the project, this page is
index.html and that's where we can set a different value:
<body> <my-component</my-component> </body>
The change in
index.html wasn't automatically picked up by the browser even with file watch enabled. I had to restart the development server and refresh the browser for the change to take effect.
Of course, not everything works perfectly yet. For example, the starter project includes a setup for unit tests which fail for me even in a freshly cloned starter project. Still, the framework looks promising and could become a great tool for developing web components. | http://www.damirscorner.com/blog/posts/20180223-CreatingAWebComponentInStencil.html | CC-MAIN-2019-30 | refinedweb | 555 | 63.9 |
goes something like this:
Hi Adrian, I’m working on a project where I need to stream frames from a client camera to a server for processing using OpenCV. Should I use an IP camera? Would a Raspberry Pi work? What about RTSP streaming? Have you tried using FFMPEG or GStreamer? How do you suggest I approach the problem?
It’s a great question — and if you’ve ever attempted live video streaming with OpenCV then you know there are a ton of different options.
You could go with the IP camera route. But IP cameras can be a pain to work with. Some IP cameras don’t even allow you to access the RTSP (Real-time Streaming Protocol) stream. Other IP cameras simply don’t work with OpenCV’s cv2.VideoCapture function. An IP camera may be too expensive for your budget as well.
In those cases, you are left with using a standard webcam — the question then becomes, how do you stream the frames from that webcam using OpenCV?
Using FFMPEG or GStreamer is definitely an option. But both of those can be a royal pain to work with.
Today I am going to show you my preferred solution using message passing libraries, specifically ZMQ and ImageZMQ, the latter of which was developed by PyImageConf 2018 speaker, Jeff Bass. Jeff has put a ton of work into ImageZMQ and his efforts really shows.
As you’ll see, this method of OpenCV video streaming is not only reliable but incredibly easy to use, requiring only a few lines of code.
To learn how to perform live network video streaming with OpenCV, just keep reading!
Looking for the source code to this post?
Jump right to the downloads section.
Live video streaming over network with OpenCV and ImageZMQ
In the first part of this tutorial, we’ll discuss why, and under which situations, we may choose to stream video with OpenCV over a network.
From there we’ll briefly discuss message passing along with ZMQ, a library for high performance asynchronous messaging for distributed systems.
We’ll then implement two Python scripts:
- A client that will capture frames from a simple webcam
- And a server that will take the input frames and run object detection on them
Will be using Raspberry Pis as our clients to demonstrate how cheaper hardware can be used to build a distributed network of cameras capable of piping frames to a more powerful machine for additional processing.
By the end of this tutorial, you’ll be able to apply live video streaming with OpenCV to your own applications!
Why stream videos/frames over a network?
Figure 1: A great application of video streaming with OpenCV is a security camera system. You could use Raspberry Pis and a library called ImageZMQ to stream from the Pi (client) to the server.
There are a number of reasons why you may want to stream frames from a video stream over a network with OpenCV.
To start, you could be building a security application that requires all frames to be sent to a central hub for additional processing and logging.
Or, your client machine may be highly resource constrained (such as a Raspberry Pi) and lack the necessary computational horsepower required to run computationally expensive algorithms (such as deep neural networks, for example).
In these cases, you need a method to take input frames captured from a webcam with OpenCV and then pipe them over the network to another system.
There are a variety of methods to accomplish this task (discussed in the introduction of the post), but today we are going to specifically focus on message passing.
What is message passing?
Figure 2: The concept of sending a message from a process, through a message broker, to other processes. With this method/concept, we can stream video over a network using OpenCV and ZMQ with a library called ImageZMQ.
Message passing is a programming paradigm/concept typically used in multiprocessing, distributed, and/or concurrent applications.
Using message passing, one process can communicate with one or more other processes, typically using a message broker.
Whenever a process wants to communicate with another process, including all other processes, it must first send its request to the message broker.
The message broker receives the request and then handles sending the message to the other process(es).
If necessary, the message broker also sends a response to the originating process.
As an example of message passing let’s consider a tremendous life event, such as a mother giving birth to a newborn child (process communication depicted in Figure 2 above). Process A, the mother, wants to announce to all other processes (i.e., the family), that she had a baby. To do so, Process A constructs the message and sends it to the message broker.
The message broker then takes that message and broadcasts it to all processes.
All other processes then receive the message from the message broker.
These processes want to show their support and happiness to Process A, so they construct a message saying their congratulations:
Figure 3: Each process sends an acknowledgment (ACK) message back through the message broker to notify Process A that the message is received. The ImageZMQ video streaming project by Jeff Bass uses this approach.
These responses are sent to the message broker which in turn sends them back to Process A (Figure 3).
This example is a dramatic simplification of message passing and message broker systems but should help you understand the general algorithm and the type of communication the processes are performing.
You can very easily get into the weeds studying these topics, including various distributed programming paradigms and types of messages/communication (1:1 communication, 1:many, broadcasts, centralized, distributed, broker-less etc.).
As long as you understand the basic concept that message passing allows processes to communicate (including processes on different machines) then you will be able to follow along with the rest of this post.
What is ZMQ?
Figure 4: The ZMQ library serves as the backbone for message passing in the ImageZMQ library. ImageZMQ is used for video streaming with OpenCV. Jeff Bass designed it for his Raspberry Pi network at his farm.
ZeroMQ, or simply ZMQ for short, is a high-performance asynchronous message passing library used in distributed systems.
Both RabbitMQ and ZeroMQ are some of the most highly used message passing systems.
However, ZeroMQ specifically focuses on high throughput and low latency applications — which is exactly how you can frame live video streaming.
When building a system to stream live videos over a network using OpenCV, you would want a system that focuses on:
- High throughput: There will be new frames from the video stream coming in quickly.
- Low latency: As we’ll want the frames distributed to all nodes on the system as soon as they are captured from the camera.
ZeroMQ also has the benefit of being extremely easy to both install and use.
Jeff Bass, the creator of ImageZMQ (which builds on ZMQ), chose to use ZMQ as the message passing library for these reasons — and I couldn’t agree with him more.
The ImageZMQ library
Figure 5: The ImageZMQ library is designed for streaming video efficiently over a network. It is a Python package and integrates with OpenCV.
Jeff Bass is the owner of Yin Yang Ranch, a permaculture farm in Southern California. He was one of the first people to join PyImageSearch Gurus, my flagship computer vision course. In the course and community he has been an active participant in many discussions around the Raspberry Pi.
Jeff has found that Raspberry Pis are perfect for computer vision and other tasks on his farm. They are inexpensive, readily available, and astoundingly resilient/reliable.
At PyImageConf 2018 Jeff spoke about his farm and more specifically about how he used Raspberry Pis and a central computer to manage data collection and analysis.
The heart of his project is a library that he put together called ImageZMQ.
ImageZMQ solves the problem of real-time streaming from the Raspberry Pis on his farm. It is based on ZMQ and works really well with OpenCV.
Plain and simple, it just works. And it works really reliably.
I’ve found it to be more reliable than alternatives such as GStreamer or FFMPEG streams. I’ve also had better luck with it than using RTSP streams.
You can learn the details of ImageZMQ by studying Jeff’s code on GitHub.
Jeff’s slides from PyImageConf 2018 are also available here.
In a few days, I’ll be posting my interview with Jeff Bass on the blog as well.
Let’s configure our clients and server with ImageZMQ and put it them to work!
Configuring your system and installing required packages
Installing ImageZMQ is quite easy.
First, let’s pip install a few packages into your Python virtual environment (assuming you’re using one):
From there, clone the imagezmq repo:
You may then (1) copy or (2) sym-link the source directory into your virtual environment site-packages.
Let’s go with the sym-link option:
Note: Be sure to use tab completion to ensure that paths are correctly entered.
As a third alternative to the two options discussed, you may place imagezmq into each project folder in which you plan to use it.
Preparing clients for ImageZMQ
ImageZMQ must be installed on each client and the central server.
In this section, we’ll cover one important difference for clients.
Our code is going to use the hostname of the client to identify it. You could use the IP address in a string for identification, but setting a client’s hostname allows you to more easily identify the purpose of the client.
In this example, we’ll assume you are using a Raspberry Pi running Raspbian. Of course, your client could run Windows Embedded, Ubuntu, macOS, etc., but since our demo uses Raspberry Pis, let’s learn how to change the hostname on the RPi.
To change the hostname on your Raspberry Pi, fire up a terminal (this could be over an SSH connection if you’d like).
Then run the raspi-config command:
You’ll be presented with this terminal screen:
Figure 7: Configuring a Raspberry Pi hostname with
raspi-config. Shown is the
raspi-config home screen.
Navigate to “2 Network Options” and press enter.
Then choose the option “N1 Hostname”.
Figure 9: Setting the Raspberry Pi hostname to something easily identifiable/memorable. Our video streaming with OpenCV and ImageZMQ script will use the hostname to identify Raspberry Pi clients.
You can now change your hostname and select “<Ok>”.
You will be prompted to reboot — a reboot is required.
I recommend naming your Raspberry Pis like this: pi-location . Here are a few examples:
- pi-garage
- pi-frontporch
- pi-livingroom
- pi-driveway
- …you get the idea.
This way when you pull up your router page on your network, you’ll know what the Pi is for and its corresponding IP address. On some networks, you could even connect via SSH without providing the IP address like this:
As you can see, it will likely save some time later.
Defining the client and server relationship
Before we actually implement network video streaming with OpenCV, let’s first define the client/server relationship to ensure we’re on the same page and using the same terms:
- Client: Responsible for capturing frames from a webcam using OpenCV and then sending the frames to the server.
- Server: Accepts frames from all input clients.
You could argue back and forth as to which system is the client and which is the server.
For example, a system that is capturing frames via a webcam and then sending them elsewhere could be considered a server — the system is undoubtedly serving up frames.
Similarly, a system that accepts incoming data could very well be the client.
However, we are assuming:
- There is at least one (and likely many more) system responsible for capturing frames.
- There is only a single system used for actually receiving and processing those frames.
For these reasons, I prefer to think of the system sending the frames as the client and the system receiving/processing the frames as the server.
You may disagree with me, but that is the client-server terminology we’ll be using throughout the remainder of this tutorial.
Project structure
Be sure to grab the “Downloads” for today’s project.
From there, unzip the files and navigate into the project directory.
You may use the tree command to inspect the structure of the project:
Note: If you’re going with the third alternative discussed above, then you would need to place the imagezmq source directory in the project as well.
The first two files listed in the project are the pre-trained Caffe MobileNet SSD object detection files. The server ( server.py ) will take advantage of these Caffe files using OpenCV’s DNN module to perform object detection.
The client.py script will reside on each device which is sending a stream to the server. Later on, we’ll upload client.py onto each of the Pis (or another machine) on your network so they can send video frames to the central location.
Implementing the client OpenCV video streamer (i.e., video sender)
Let’s start by implementing the client which will be responsible for:
- Capturing frames from the camera (either USB or the RPi camera module)
- Sending the frames over the network via ImageZMQ
Open up the client.py file and insert the following code:
We start off by importing packages and modules on Lines 2-6:
- Pay close attention here to see that we’re importing imagezmq in our client-side script.
- VideoStream will be used to grab frames from our camera.
- Our argparse import will be used to process a command line argument containing the server’s IP address ( --server-ip is parsed on Lines 9-12).
- The socket module of Python is simply used to grab the hostname of the Raspberry Pi.
- Finally, time will be used to allow our camera to warm up prior to sending frames.
Lines 16 and 17 simply create the imagezmq sender object and specify the IP address and port of the server. The IP address will come from the command line argument that we already established. I’ve found that port 5555 doesn’t usually have conflicts, so it is hardcoded. You could easily turn it into a command line argument if you need to as well.
Let’s initialize our video stream and start sending frames to the server:
Now, we’ll grab the hostname, storing the value as rpiName (Line 21). Refer to “Preparing clients for ImageZMQ” above to set your hostname on a Raspberry Pi.
From there, our VideoStream object is created to connect grab frames from our PiCamera. Alternatively, you can use any USB camera connected to the Pi by commenting Line 22 and uncommenting Line 23.
This is the point where you should also set your camera resolution. We are just going to use the maximum resolution so the argument is not provided. But if you find that there is a lag, you are likely sending too many pixels. If that is the case, you may reduce your resolution quite easily. Just pick from one of the resolutions available for the PiCamera V2 here: PiCamera ReadTheDocs. The second table is for V2.
Once you’ve chosen the resolution, edit Line 22 like this:
Note: The resolution argument won’t make a difference for USB cameras since they are all implemented differently. As an alternative, you can insert a frame = imutils.resize(frame, width=320) between Lines 28 and 29 to resize the frame manually.
From there, a warmup sleep time of 2.0 seconds is set (Line 24).
Finally, our while loop on Lines 26-29 grabs and sends the frames.
As you can see, the client is quite simple and straightforward!
Let’s move on to the actual server.
Implementing the OpenCV video server (i.e., video receiver)
The live video server will be responsible for:
- Accepting incoming frames from multiple clients.
- Applying object detection to each of the incoming frames.
- Maintaining an “object count” for each of the frames (i.e., count the number of objects).
Let’s go ahead and implement the server — open up the server.py file and insert the following code:
On Lines 2-8 we import packages and libraries. In this script, most notably we’ll be using:
- build_montages : To build a montage of all incoming frames.
- imagezmq : For streaming video from clients. In our case, each client is a Raspberry Pi.
- imutils : My package of OpenCV and other image processing convenience functions available on GitHub and PyPi.
- cv2 : OpenCV’s DNN module will be used for deep learning object detection inference.
Are you wondering where imutils.video.VideoStream is? We usually use my VideoStream class to read frames from a webcam. However, don’t forget that we’re using imagezmq for streaming frames from clients. The server doesn’t have a camera directly wired to it.
Let’s process five command line arguments with argparse:
- --prototxt : The path to our Caffe deep learning prototxt file.
- --model : The path to our pre-trained Caffe deep learning model. I’ve provided MobileNet SSD in the “Downloads” but with some minor changes, you could elect to use an alternative model.
- --confidence : Our confidence threshold to filter weak detections.
- --montageW : This is not width in pixels. Rather this is the number of columns for our montage. We’re going to stream from four raspberry Pis today, so you could do 2×2, 4×1, or 1×4. You could also do, for example, 3×3 for nine clients, but 5 of the boxes would be empty.
- --montageH : The number of rows for your montage. See the --montageW explanation.
Let’s initialize our ImageHub object along with our deep learning object detector:
Our server needs an ImageHub to accept connections from each of the Raspberry Pis. It essentially uses sockets and ZMQ for receiving frames across the network (and sending back acknowledgments).
Our MobileNet SSD object CLASSES are specified on Lines 29-32. If you aren’t familiar with the MobileNet Single Shot Detector, please refer to this blog post or Deep Learning for Computer Vision with Python.
From there we’ll instantiate our Caffe object detector on Line 36.
Initializations come next:
In today’s example, I’m only going to CONSIDER three types of objects from the MobileNet SSD list of CLASSES . We’re considering (1) dogs, (2) persons, and (3) cars on Line 40.
We’ll soon use this CONSIDER set to filter out other classes that we don’t care about such as chairs, plants, monitors, or sofas which don’t typically move and aren’t interesting for this security type project.
Line 41 initializes a dictionary for our object counts to be tracked in each video feed. Each count is initialized to zero.
A separate dictionary, frameDict is initialized on Line 42. The frameDict dictionary will contain the hostname key and the associated latest frame value.
Lines 47 and 48 are variables which help us determine when a Pi last sent a frame to the server. If it has been a while (i.e. there is a problem), we can get rid of the static, out of date image in our montage. The lastActive dictionary will have hostname keys and timestamps for values.
Lines 53-55 are constants which help us to calculate whether a Pi is active. Line 55 itself calculates that our check for activity will be 40 seconds. You can reduce this period of time by adjusting ESTIMATED_NUM_PIS and ACTIVE_CHECK_PERIOD on Lines 53 and 54.
Our mW and mH variables on Lines 59 and 60 represent the width and height (columns and rows) for our montage. These values are pulled directly from the command line args dictionary.
Let’s loop over incoming streams from our clients and processing the data!
We begin looping on Line 65.
Lines 68 and 69 grab an image from the imageHub and send an ACK message. The result of imageHub.recv_image is rpiName , in our case the hostname, and the video frame itself.
It is really as simple as that to receive frames from an ImageZMQ video stream!
Lines 73-78 perform housekeeping duties to determine when a Raspberry Pi was lastActive .
Let’s perform inference on a given incoming frame :
Lines 82-90 perform object detection on the frame :
- The frame dimensions are computed.
- A blob is created from the image (see this post for more details about how OpenCV’s blobFromImage function works).
- The blob is passed through the neural net.
From there, on Line 93 we reset the object counts to zero (we will be populating the dictionary with fresh count values shortly).
Let’s loop over the detections with the goal of (1) counting, and (2) drawing boxes around objects that we are considering:
On Line 96 we begin looping over each of the detections . Inside the loop, we proceed to:
- Extract the object confidence and filter out weak detections (Lines 99-103).
- Grab the label idx (Line 106) and ensure that the label is in the CONSIDER set (Line 110). For each detection that has passed the two checks ( confidence threshold and in CONSIDER ), we will:
- Increment the objCount for the respective object (Line 113).
- Draw a rectangle around the object (Lines 117-123).
Next, let’s annotate each frame with the hostname and object counts. We’ll also build a montage to display them in:
On Lines 126-133 we make two calls to cv2.putText to draw the Raspberry Pi hostname and object counts.
From there we update our frameDict with the frame corresponding to the RPi hostname.
Lines 139-144 create and display a montage of our client frames. The montage will be mW frames wide and mH frames tall.
Keypresses are captured via Line 147.
The last block is responsible for checking our lastActive timestamps for each client feed and removing frames from the montage that have stalled. Let’s see how it works:
There’s a lot going on in Lines 151-162. Let’s break it down:
- We only perform a check if at least ACTIVE_CHECK_SECONDS have passed (Line 151).
- We loop over each key-value pair in lastActive (Line 153):
- If the device hasn’t been active recently (Line 156) we need to remove data (Lines 158 and 159). First we remove ( pop ) the rpiName and timestamp from lastActive . Then the rpiName and frame are removed from the frameDict .
- The lastActiveCheck is updated to the current time on Line 162.
Effectively this will help us get rid of expired frames (i.e. frames that are no longer real-time). This is really important if you are using the ImageHub server for a security application. Perhaps you are saving key motion events like a Digital Video Recorder (DVR). The worst thing that could happen if you don’t get rid of expired frames is that an intruder kills power to a client and you don’t realize the frame isn’t updating. Think James Bond or Jason Bourne sort of spy techniques.
Last in the loop is a check to see if the "q" key has been pressed — if so we break from the loop and destroy all active montage windows (Lines 165-169).
Streaming video over network with OpenCV
Now that we’ve implemented both the client and the server, let’s put them to the test.
Make sure you use the “Downloads” section of this post to download the source code.
From there, upload the client to each of your Pis using SCP:
In this example, I’m using four Raspberry Pis, but four aren’t required — you can use more or less. Be sure to use applicable IP addresses for your network.
You also need to follow the installation instructions to install ImageZMQ on each Raspberry Pi. See the “Configuring your system and installing required packages” section in this blog post.
Before we start the clients, we must start the server. Let’s fire it up with the following command:
Once your server is running, go ahead and start each client pointing to the server. Here is what you need to do on each client, step-by-step:
- Open an SSH connection to the client: ssh pi@192.168.1.10
- Start screen on the client: screen
- Source your profile: source ~/.profile
- Activate your environment: workon py3cv4
- Install ImageZMQ using instructions in “Configuring your system and installing required packages”.
- Run the client: python client.py --server-ip 192.168.1.5
As an alternative to these steps, you may start the client script on reboot.
Automagically, your server will start bringing in frames from each of your Pis. Each frame that comes in is passed through the MobileNet SSD. Here’s a quick demo of the result:
A full video demo can be seen below:
What’s next?
Is your brain spinning with new Raspberry Pi project ideas right now?
The Raspberry Pi is my favorite community driven product for Computer Vision, IoT, and Edge Computing.
The possibilities with the Raspberry Pi are truly endless:
- Maybe you have a video streaming idea based on this post.
- Or perhaps you want to learn about deep learning with the Raspberry Pi.
- Interested in robotics? Why not build a small computer vision-enabled robot or self-driving RC car?
- Face recognition, classroom attendance, and security? All possible.
I’ve been so excited about the Raspberry Pi that I decided to write a book with over 40 practical, hands-on chapters that you’ll be able to learn from and hack with.
Inside the book, I’ll be sharing my personal tips and tricks for working with the Raspberry Pi (you can apply them to other resource-constrained devices too). You can view the full Raspberry Pi for Computer Vision table of contents here.
The book is currently in development. That said, you can reserve your copy by pre-ordering now and get a great deal on my other books/courses.
The pre-order sale ends on Friday, May 10th, 2019 at 10:00AM EDT. Don’t miss out on these huge savings!
Summary
In this tutorial, you learned how to stream video over a network using OpenCV and the ImageZMQ library.
Instead of relying on IP cameras or FFMPEG/GStreamer, we used a simple webcam and a Raspberry Pi to capture input frames and then stream them to a more powerful machine for additional processing using a distributed system concept called message passing.
Thanks to Jeff Bass’ hard work (the creator of ImageZMQ) our implementation required only a few lines of code.
If you are ever in a situation where you need to stream live video over a network, definitely give ImageZMQ a try — I think you’ll find it super intuitive and easy to use.
I’ll be back in a few days with an interview with Jeff Bass as well!
To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
How does this compare with MQTT, performance wise?
I use paho-mqtt MQTT in Python and mosquitto MQTT broker running locally on the Pi to pass images among separate processes or to processes running on different machines.
MQTT has the “virtue” that you can buy “cloud broker” services from IBM, HiveMQ, etc. and get remote access without needing to run your own servers exposed to the internet
In any event thanks for posting this, looking at the imagezmq git hub, it looks to be potentially veryuseful. I’d encourage Jeff to get it pip install-able for wider usage.
If I can free up some time, I might try to take this sample and modify it to use MQTT for comparison.
ImageZMQ was designed with ZMQ in mind. It would definitely require some updating to get it to run on MQTT and gather comparisons..
Hi Wally,
I did some performance tests this morning comparing MQTT (that I use for the same purpose as you, sending images across my network) with imageZMQ, in this case using send_jpg instead of send_image since I needed to send the image as a buffer
In the tests I sent the same 95k jpg image ten times as fast as possible from one Pi to another over my wireless lan connections for both
The results shows that there was no signinficant difference at all, it took around 3 seconds for both technologies
mqtt DONE! 2.3724958896636963
mqtt DONE! 3.370811939239502
mqtt DONE! 4.014646530151367
mqtt DONE! 2.674704074859619
mqtt DONE! 3.1588287353515625
zmq DONE! 2.7648768424987793
zmq DONE! 5.127021312713623
zmq DONE! 3.3753623962402344
zmq DONE! 2.6726326942443848
zmq DONE! 3.2702481746673584
Thank you for sharing, Walter!
same concepts. mqtt can transport whatever you want. the core part is the broker (there also a lot of mqtt borkers out there). you could use an mqtt broker instead of zmq. apache has many adapters to support any kind of protocol for the broker.
Thanks for your great work!
Awesome content Adrian. What machine did you use for processing on the server side? Was it GPU enabled because running object detection on frames from 4 cameras can be quite computationally expensive.
It was actually a MacBook Pro running YOLO via the CPU!
Great post, Adrian,
I loved the Jeff Bass’ presentation at PyImageConf.
One of these days I am going to finally get furiously angry with opencv’s USB Camera interface and start using libuvc. The Pi Camera interface by comparison is very clean and straightforward and consistent.
Thanks David. I’ll have an interview publishing with Jeff on Wednesday as well 🙂
Is it possible to send it via http to see it on a web page?
I’ll be covering that in my upcoming Raspberry Pi for Computer Vision book, stay tuned!
Hi i want to ask you a question say i implemented this project now what i want to do is store the output to a database or simply say i am running open cv face recognition on client and when a person is recognized i want a json of his name in real time streaming. So what should i do?
You could simply encode the name of the person as JSON string. That’s not really a computer vision question though. I would instead suggest you read up on Python programming basics, including JSON serialization and database fundamentals. That will better enable you to complete your project.
Hi,
Very nice project. but I have an error message : module ‘imagezmq’ has no attribute ‘ImageHub’ when I launch the server.
Have you an idea.
Tahnks
It sounds like you have not properly installed the “imagezmq” library. Double-check your sym-links or simply put the “imagezmq” Python module in the working directory of your project.
I was having the same issue running the server. Putting the imagezmq folder that contains __init__.py, __version__.py and imagezmq.py in the working folder solved the issue.
I had the same problem, I solved it in the following way.
1.- copy the imagezmq folder inside the project directory
2.- in the server.py file change:
import imagezmq
by
from imagezmq import imagezmq
I hope to be helpful
regards
Alba – did you resolve this ? I have the same issue.
Its ok — Hans M’s solution worked for me.
Awesome, glad to hear it Geoff!
Awesome Adrian. It’s great
I have two questions about.
1. It’s possible to share multiple ip camera by one Ras PI? my memory is only 1GB.
2. Can I put a password for my app and share them on the Internet?
1. You mean have two cameras on a single Pi? Yes, absolutely. You would just have two client scripts running on the Pi, each accessing its respective camera.
2. Yes, but you would need to code any authentication yourself.
I can confirm this works, I am streaming from a picam and a USB camera.
I should add that I changed the client code very slightly to:
while True:
# read the frame from the camera and send it to the server
frame = vs.read()
if frame is not None:
sender.send_image(rpiName, frame)
else:
print(“no frame to send {}”.format(time.time()))
time.sleep(1.0)
I added a 2nd USB camera along with the picamera, 3 cameras in total which was mostly fine but run into what was either specific to the USB camera I was using, or USB bus limits. I suspect it was the former. By confirming there is a frame, the client doesn’t crash when there is no frame ready.
I also added time to the frame overlay on the server so it is more obvious when new frames come in on static scenes.
I suspect if hosting multiple cameras, it’d be better to read and send them consecutively from a single process rather than have multiple instances running. However, this is a great tutorial to build from.
Hi Adrian and thanks for your Awesome website, I have a question:
Is it possible to reduce the bandwidth of streaming video by doing some compression algorithms on it? I mean choosing for example MJPEG or MPEG4 or some other formats for our streaming video.
thanks a lot
Yes, the smaller your frames are (in terms of spatial dimensions) combined with your compression will reduce network load and therefore reduce latency. You just need to be careful that the compression can still happen in real-time on the client..
Thanks for the great article! I’m looking at using WebRTC and the open source Kurento server to stream content from a laptop camera, apply an OpenCV filter on the server side, then stream the results out to a browser endpoint. You mentioned you had some trouble with RTSP? Was this just due to some camera’s not being able to publish on that protocol? Are there other hurdles with RTSP / RTC that are important to consider?
Hey Brett — see my reply to Heughens Jean.
Any suggestions on making the pi weatherproof for outdoor applications?
Jeff discusses his weather/waterproofing in this interview.
I tried to install imagezmq on your colleague’s site at using the git command.
and got the following error
Any idea?
Thank you,
Anthony of Sydney
Are you using Windows? If so, you can find the solution here.
This is very useful tool!
One question. If the processing speed on server is slow, will it take the latest frame from the queue on the second loop? or the second one?
Sorry, I’m not sure what you mean? Could you elaborate?
I do agree with you after my work of ffmpeg and cgstreamer.
The transmission on internet of opencv frame is very pretty work.It will great decrease hardware cost.
Thank you so much.Great Adrian
You are welcome, I’m glad you enjoyed the tutorial!
Dear Adrian,
This works basically as you have presented it…but the client.py/server.py is really loading the Pi heavily, the CPU load goes up in the roof
And still, OpenCV seems not able to keep the frame rate at the same level as the other video software (Motion) I’m currently using in my solution. With Motion the CPU load in the Pi is just around 15% with a frame rate of 10 fps including motion detection enabled
Motion -> MQTT -> my python dnn analyzer server
To send images via MQTT, I just send them as byte arrays and it seems to be very fast
I’m wondering if there is a performance advantage in using imageZMQ instead of pure MQTT. I noticed that zmq uses Cython that I believe should be great for performace reasons, not sure if MQTT does the same
Anyway, my best regards & thanks for all great writing & sharing
Walter
Hey Walter, I see you already replied to the thread with Wally regarding MQTT. Thanks for doing that.
As far as CPU usage, are you sure it’s just not the threading of the VideoStream? You can reduce load by using “cv2.VideoCapture” and only polling frames when you want them.
Why not use IP cameras? They are cheap, compact and don’t require RasPI on far end
I’ve just checked my IP-cam module (ONVIF) with your code from face-detection post.
It works like a sharm.
Thank you.
Take a look at the intro of the post. If you can use an IP camera, great, but sometimes it’s not possible.
Hey Adrian,
What were some of the issues you were seeing when you tried streaming with rtsp? I’m assuming this should also work on a tx2 or nano.
Yes, this will also work on the TX2, Nano, Coral, etc.
As for RTSP, gstreamer, etc., you end up going down a rabbit hole of trying to get the correct parameters to work, having them work one day, and fail the next. I hated debugging it. ImageZMQ makes it far easier and reliable.
Hi Adrian, thanks for this post. But how about to forward montage or cv2.imshow output include detected rectangle from server to website (like custom html with dashboard and video player) ?
I’m covering that exact topic inside my Raspberry Pi for Computer Vision book. If you’re interested, you can use the Kickstarter page to pre-order at reduced prices.
Inspiring post.
I want to get images from camera located at a remote site over a GSM enabled router. Will it work? What is needed?
Kindly shed more light on the configuration of the server side.
Which python night vision library can you advice?will you be covering in “wildlife monitoring ” section of the book?
1. Yes, I will be covering wildlife detection inside the book, including wildlife at night. If you haven’t pre-ordered your copy yet, you can use this page to do so.
2. As for the GSM router, as long as you have a publicly access IP address it will work.
Hey Adrian! Thank you for continuing to write articles like this!
Do you think it would be possible for a Raspberry Pi with the Movidius to have enough power to process the inputs from the streaming cameras?
In your article on OpenVino, it seems you are processing a similar payload as would be received in this article.
That really depends on:
1. How computationally expensive the model is
2. The # of incoming streams
If you’re going to use the Raspberry Pi + NCS I guess my question would be why not just run inference there on the Pi instead of sending it over the network?
any idea how to stop the server hanging at (rpiName, frame) = hub.recv_image()
Double-check that your server is running ZMQ. Secondly, make sure your IP address and port numbers are correct.
Hello, I’ve been reading your tutorials an they really helped me learning to use opencv. Could you suggest a way of streaming the captured video to a webpage? Thanks in advance.
I’m actually covering that exact topic in my Raspberry Pi for Computer Vision book. You can find the Kickstarter page for the book here if you would like to pre-order a copy.
Thanks for this. It is interesting to me not just for the video streaming part, but I had never heard of zeromq and libraries like it before.
Whenever I wanted to do something like this, my preferred solution was to use Redis, which is also very simple and reliable. Not sure if it is as fast.
Also, the smoothest, high fps video for opencv setup I have ever tried was with Webrtc.
However to take advantage of hardware acceleration, etc, it was a very ugly, complicated set up using JavaScript in the browser, on both ends a node server for Webrtc, a node server to receive frames on the server side out of the browser on the server end, and a redis server. Very ugly and brittle, but the video was super smooth.
I am reading up as much as possible on zeromq and nanomsg, thanks to you, because tcp can be a pain to work with directly.
Redis is amazing, I’m a HUGE fan of Redis. You can even design entire message passing/broker libraries around Redis as well.
Hello Adrian,
Thanks for the post, very helpful.
Is it possible to send video from laptop camera to AWS EC2 for processing and show the processed video back on the same laptop.
Thanks.
Yes, you can use this exact method, actually. Just supply the IP address of your AWS server and ensure the ZMQ server is running on it.
Awesome work Adrian! My project is the same but I’m using an IP camera? May I use the same code of Raspberry Pi and if yes what changes must be done
How do you typically access your IP camera? Do you/have you used OpenCV to access it before?
I failed at the sym-link option
cd ~/.virtualenvs/lib/python3.5/site-packages
No such file or directory
My mistake,
I needed to add the “cv” (my name for my virtual environment) in the path
/home/pi/.virtualenvs/cv/lib/python3.5/site-packages
…still learning…
Congrats on resolving the issue, Kurt!
Please explain in detail as I am stuck with the same error.
this is really cool. thanks for the article. i am also wondering what is the best way to push from zmq to web browsers? any recommendations? thank you!
I’m actually covering that exact project in my Raspberry Pi for Computer Vision book. You can find the Kickstarter page for the book here.
hi Adrian. Thx for the tutorial. I’m planning to do something similar instead stream iPhone video at very high fps to my laptop so I can faster prototype algorithms. Do you have any experience or comparison with LCM (). It seems it targets realtime scenarios and using UDP multicast so theoretically should have lower latency.
Sorry, I do not have any experience with that library so I can’t really comment there.
Thanks for your great work!
I tried this work and the result was great
But implemented only in one device in the sense implemented by the client and the server in one computer. The process did not work with me in the raspberry pi problem. I experimented even with ubuntu computer and windows computer did not fix.
But if you do them together they work both in ubuntu or in the windows
I have a problem with communication.
Hi Adrian,
I’ve managed to get the RPi client and Ubuntu 18.04 server communicating properly, but my Pi camera is mounted upside down and I’m not sure if I need to import the PiCamera module and use the –vflip switch. Would this be a way to flip the video image? If so, where in the script should I place the code? Thanks.
You could control it directly via the “vflip” switch OR you could use the “cv2.flip” function. Either will work.
where do i find the imutils.video stuff?
You install it via pip:
$ pip install imutils
Hi Adrian,
Thanks for your tutorial! I made it success in my Env.(Raspberry Pi 3 B+ and up-board square), I would like to ask you several questions.
1. what is the bottle neck of the low frame rate?
2. if I use an AI accelerator(like NCS or Edge TPU) for inference, can I get more frame rate?
3. Is Jetson nano a good platform for the host?
Thanks and regards,
Rean
The bottleneck here is running the object detector on the CPU. If your network is poor then latency could become the bottleneck. If you’re streaming to a central device you wouldn’t use a NCS, Edge TPU, or Nano. You would just perform inference on the device itself.
Great tutorial
but I have a question
what is the video format (codec) used by ?
Hey Adrian, as always, a great tutorial!
I am working on a project in which I need to live stream feed from a rtsp ip cam to the browser. I am also performing some recognition and detection on the feed through opencv on the backend. The problem is I have to put this code on a VM and show these feeds to client on their browser. I tried different players but none is able to recognize the format of the stream. The only option I found was to convert the stream using ffmpeg and then display it on the browser. But doing this process simultaneously is something I am stuck at. Do you have any suggestions?
Hey Sam — I’m covering how to stream from a webcam to the browser inside my book, Raspberry Pi for Computer Vision.
Your tutorial is awesome. Thanks Adrian, you are a prince. This one helped me tremendously in getting my automated aircraft annotater up and working in only a week.. probably would still be figuring out step 1 inf it wasn’t for your help.
Holy cow. This is one of the neatest projects I have seen. Congratulations on a successful project!
Hi Adrian,
Having some trouble running the code. I first run the server.py as instructed followed by running the client script on the Pi. I am using a Windows 10 machine as the server. The problem is that after running the client script, nothing is being returned on the server side.
The client.py script appears to hang at the command: “sender.send_image(rpiName, frame)”
I have checked that the server IP is entered correctly. Any suggestions on how to debug?
Thanks in advance!
Hi. Just wondering if there is a way to add basic error handling to these scripts. I’ve noticed that if the client is started before the server it will simply block on send_image() until the server is up and will work fine.
However if the server is killed (for whatever reason) and then restarted it no longer works without killing all the clients and restarting them as well.
I guess I can have a script running on the server – monitoring the status of the script from this blog post and if something goes wrong ssh into each client and kill it and restart.
It seems to be an issue in imageHub.rev_image() that stops the server working after a restart.?!? Any ideas on how to make this a robust solution?
Hey Jason — we’re actually including a few updates to the ImageZMQ library inside Raspberry Pi for Computer Vision to make it a bit more robust.
Hi ! Awesome tutorial!
What is the difference between this and using the Motion project ? does that still use message passing?
Which motion project are you referring to?
Hi Adrian,
Is it possible to use Raspberry Pi Zero W as the CAM?
or, what is the minimum demand for a video streamer in Pi series?
Thanks and Regards,
Rean
You can technically use a Pi Zero W but streaming would be a bit slower. I recommend a 3B+.
understood, thanks!
Thanks for that great work, could I use that code inside commercial product ?
Yes, but make sure you read the ImageZMQ license first.
I have been having this issue no matter what I try to do with sending video from one machine to another machine, and I have not seen this asked in the comments section. Whenever I run the client and server on the same machine I have no problems the image frame opens up and displays the video. However every time I try to host the server on one machine and run the client on another machine the image frame never shows up, and in this case the server doesn’t even see the client connecting.
I am entering the correct IP of the server in the arguments, what am I doing wrong?
Thank you in advance.
Are both machines on the same network? And have you launched ImageZMQ on the machines?
Hello, Adrian. It’s OK? I really liked this project. One question, if I wanted to use an IP camera, how would I configure this client code? Another question, is it possible to use the people counting system in this example too? Thank you.
I don’t have any examples of IP cameras yet. But you’ll be able to use people counting inside my book, Raspberry Pi for Computer Vision.
How do I do not let zero the counter of people and objects?
I personally do not like the low framerate associated with ZeroMQ.
In my own work, I’ve found that ROS+raspicam_node has been a very frictionless system that offers pretty low latency, and will support native 30fps framerates. The only headache I guess will be using ROS if you’re not already familiar.
ROS can be quite a pain to work with. Do you happen to know how ROS is working with the RPi camera? Is it a native integration or something involving a service like gstreamer?
Great proyect my friend, this help me out of my issue with recognizing faces faster with the raspberry pi 3+b thank you! My best wishes!
Thanks Jahir, I’m glad it helped!
Hi Adrian, thanks for the tutorial, I have a problem sending the frames from the pi to my server, when i first run my server displays in the terminal “starting video stream…”, then i run the client.py on the pi but my server never says “receiving video from…”. it works on windows, but not in Ubuntu 18.04. D:
Try checking the IP address of the server as it sounds like you’re using two different systems.
Hello, thank you for a great solution. I wish to askm how can one stream these videos to an online web page, shared hosting server or something where someone can login to see live video feeds of some remote place by opening a web page displaying the video feeds.
Any ideas please?
I am covering that exact equation inside Raspberry Pi for Computer Vision — I suggest you start there.
hello adrian thank you for your tutorial.i have a face recognition project and i want to use 15 cameras for this purpose.my system info is: gpu 1080ti, cpu core i7, 16gb memory.
My goal is to have at least 10 cameras for real time face recognition, and the speed of recognize in this project is very important.
Is this sufficient for this project? What is the minimum system I need?
Is zmq and 10 raspberry pi’s able to give me a good performance?
Im using ip cameras.
Hey Jenny — I’d be happy to help and provide suggestions for this project, but I would definitely suggest you pickup a copy of Raspberry Pi for Computer Vision first. That book will teach you how to build such a project and it will enable me to provide you with targeted help on your project. Thank you!
Hi Adrian,
I’ve found your post and what you’ve done is amazing! I’m researching in the field of computer Vision and looking for some way to use OpenPose for tracking body movements. I have facing some problems regarding processing the images using this library, as it takes 20 sec to process each of the frames. I would like to set up a GPU in the cloud and then be able to run it real time. Is that possible using imagezmq?
Cheers Fabio
Yes, basically you would use ImageZMQ to stream frames from your webcam to a system running a GPU. The GPU machine will then run the OpenPose models.
hey, Adrian is it possible to sent result back from the server to a client (suppose I want to send a number of persons available in the room back towards the client) how can I do that?
i have a question. i am trying to integrate this within the Kivy toolset, i know how to get a screen to display an openCV video stream however when trying to work with this library i get a bunch of errors. is there any advice you might be able to offer with this?
Sorry, I don’t do much work with Kivy. Good luck resolving the issue!
Dear Adrian,
Thanks a lot for making this tutorial. I am able to use this program without any problem. I have a few questions to ask.
i) How can I set up the server and client if they are not connected to the same wifi network? For example, my client is RPi3 and server is AWS EC2. Which IP of my AWS instance will go into –server-ip.
ii) Do I need to open port 5555?
Thanks.
1. You’ll want to check your EC2 dashboard to find the IP address of your EC2 instance.
2. Make sure the port is available for use. Check your security configurations and make sure port 5555 is accessible.
Dear Adrian,
I have another question.
Suppose for some reason, I stop my server in between but my client is running. When I restart the server, it cannot receive frames from the client. Do you have any idea about that? If I restart the server, I also have to restart the client.
How to deal with this problem?
Thank you in advance. | https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/ | CC-MAIN-2019-35 | refinedweb | 8,936 | 73.47 |
(One of my summaries of a talk at the 2018 european djangocon.)
Markus started created websites at the end of the 1990s. He calls himself an open source software developers. Open source is how he learned programming, so he likes giving back.
How can packaging make deployments easier, faster and more reliable?
He showed a django project structure, modified for packaging it with
python. Extra files are
MANIFEST.in,
setup.cfg,
setup.py. And the
actual project code is in a subdirectory, as that’s what setup.py likes. If
you have apps, he suggests to put them in an
yourproject/apps/
subdirectory.
Many people use
requirements.txt files to list their dependencies. They
only use
setup.py for libraries or apps that they publish. But why not use
the
setup.py also for projects? There’s an
install_requires key you
can use for mentioning your dependencies.
Note that if your setuptools is new enough, you can put everything into
setup.cfg instead of having it all in a weird function in
setup.py. This includes the dependencies. Your
setup.py will now look
like this:
from setuptools import setup setup()
As we’re deploying a project (instead of a library), we can even pin the requirements to specific versions.
He mentioned bumpversion to update your version number.
Regarding the
MANIFEST.in: it lists the files you want to include in the
package. You can install the
check-manifest package to test your manifest.
Building the package:
python setup.py bdist_wheel. You get a python wheel
out of this.
What he uses for deployment is pip-tools. There are more modern alternatives,
but he thinks they’re not ready yet. With pip-tools he generates a
constraints.txt. Basically a
requirements.txt, but then for the
dependencies of the packages in your
requirements.txt. You can pass the
constraints file to pip install with
-c constraints.txt.
How to serve the package? You can use “devpi” to host your own private python package index.
How to change settings? Use environment variables. Except for the secrets. Use a vault for this if possible. There are some python packages that can help you with environment variables:
Summary:
If you package your django project as a python package, you’re hosting provider independent.
And you use tools you already know: it prevents the not-invented-here syndrome.
It improves deployment to many servers.
The same release is used everywhere: dev, CI, staging, production.
And: rollback is easy.
Nice: a built distribution requires no build steps.
His slides are at
Some notes/comments from myself:
I’m starting to like pipenv, which does the requirements/constraints handling automatically. And in a more elegant way. The big advantage is that you don’t have to remember to pass the correct arguments all the time. Much safer that way (in my opinion).
He mentioned “bumpversion”. I myself wrote and use zest.releaser, which also updates your changelog, has a plugin mechanism, etc. It is widely used.
Photo explanation: when illuminated, this is what you see of the): | https://reinout.vanrees.org/weblog/2018/05/23/05-packaging.html | CC-MAIN-2021-21 | refinedweb | 511 | 70.8 |
If.
But before that, we need to make sure we understand the question.
Quotas, Requests, Limits
Quotas, requests, and limits all play a role in the way OpenShift allocates resources and it can be easy to get them confused. They are actually taken in consideration by OpenShift at different times and for different purposes. The following table summarizes these concepts:
Quotas
Quotas are attached to a project and have little to do with the real capacity of the cluster. But if a project reaches its quota, the cluster will not accept any additional requests from it, behaving as if it were full (notice that quotas can be attached to multiple projects with multi-project quotas.
Best Practice: create T-shirt sized projects. By that, I mean cluster administrators will define, via templates, a few standard sizes for projects where the size is determined by the associated quotas.
Requests
Requests are an estimate of what a container will consume when it runs. OpenShift uses this information to create a map of all the nodes and of how many resources they have available based on the containers that have been already allocated. Here is a simple example:
In this example, a new pod which is requesting 2GB of memory needs to be allocated to nodes with already committed memory. OpenShift can place this pod only on node1 because node2 does not have enough available resources.
OpenShift can deny scheduling of pods if there is no node that can satisfy the request constraint.
We can get an idea of how OpenShift estimates a node being utilized with the following command:
oc describe node <name of a node>
This will produce an output of the form (only the relevant fragments are reported):
Allocatable: alpha.kubernetes.io/nvidia-gpu: 0 cpu: 2 memory: 12304368Ki pods: 20 ... Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted. CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 300m (15%) 400m (20%) 2660Mi (22%) 2248Mi (18%)
In this particular example, we have a node with 12GB of memory, of which 2.6GB are booked by requests coming from the 20 pods that are deployed.
Notice that this figure (22%) has nothing to do with the actual memory used on the node. This value represents the memory that has been reserved. The same applies to the CPU.
Also, if you don’t specify what amount of resources you container needs, OpenShift assumes zero. This implies that if you don’t specify the requests for your containers, you can easily run out of resources while OpenShift will still think that all your nodes have capacity available.
Best Practice: specify the request for all of your containers. You can mandate that requests for all your containers be specified by setting a min request in the limit range object associated to your projects, here is an example:
apiVersion: "v1" kind: "LimitRange" metadata: name: "core-resource-limits" spec: limits: - type: "Container" min: cpu: "1m" memory: "1Mi"
In the above example, very small minimums are specified that will not affect the value of the requests for containers but will make it mandatory to specify it.
It is also worth noting that in version 3.5, a generalized framework for tracking additional node resources has been introduced. These resources are called opaque integer resources and the main use case is to be able to track GPUs’ resources.
Every time OpenShift places a pod, it is solving an instance of a multidimensional knapsack problem. The knapsack problem is a classical problem in algorithm theory, where you have to place N stones of different sizes in M backpacks of different capacity. The point is to find optimal allocation. In the case of OpenShift, we have a multidimensional knapsack problem because there are more that one characteristic to consider: CPU, memory and as we have seen opaque integer resources.
The knapsack problem is a np-complete problem, which means that the algorithm that solves it scales exponentially with n (pods) and m (nodes). For this reason, when n and m are big enough, a human cannot do a better job than a machine at solving this problem.
Best Practice: refrain from pinning pods to nodes using node selectors, because this interferes with the ability of OpenShift to optimize the allocation of pods and increase density.
Limits
Limits determine the maximum amount of resources (CPU and/or memory) that can be used by a container at runtime. Setting a limit corresponds to passing to the docker run command the --memory parameter for memory limits and the --cpu-quota parameter for CPU limits.
This influences the cgroup that is created around the given container, limiting the resources it can use.
For memory, docker assumes that if you ask for a given amount of RAM, you should also get twice as much swap size. This implies that if you have swap active, your container will actually see thrice as much memory as you have assigned to it, and it could potentially swap.
Best Practice: disable swap for nodes (note that beginning with OpenShift 3.6, the node service will not start if swap is enabled).
Again if you don’t specify limits for your containers, then OpenShift assumes that they are unbounded and that the container can consume all the resources available on the node.
To get an idea of the actual resources available on a node you can run (this command requires cluster metrics to be installed)
oc adm top node --heapster-namespace=openshift-infra --heapster-scheme=https <node-name>
Which will output something as follows:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 192.168.99.100 1007m 50% 5896Mi 49%
This is the same node of the above example. Notice that the actual consumption of resources is much higher than what OpenShift estimated. The difference is due to the fact that in this node there are several pods that do not declare requests and limits.
In summary, in order to fully being able to describe the available capacity of a cluster we should be able to answer two questions (and we need the answer them for at least memory and CPU):
- How much capacity OpenShift estimates is available based on the declared requests by the pods.
- How much capacity is really available based on current usage.
How well the actual resource availability tracks the OpenShift estimated availability will depend on how well the pods have been sized and on the current load.
Cluster administrators should watch the ratio between estimated resources and actual resources. They should also put in place policies to make sure that the two metrics stay as much as close as they can. This allows OpenShift to optimize allocation by increasing density, but at the same time guarantee the requested SLAs.
Monitoring Available Capacity of the Cluster
Implementing a mature, enterprise-grade monitoring tool for OpenShift can take some time. I wanted to provide something that would allow you to answer the resource availability question from day one.
One way is to script the above commands (
oc describe node and
oc adm top) and come up with some calculation to get to the answer. Another way is to use Kube-ops-view. A ported version to support OpenShift is available here.
Kube-ops-view features a dashboard that allows you to get information on the capacity of your cluster among other things. Here is an example of the dashboard:
The nice thing of kube-ops-view is that you don’t have to install anything on the nodes and you can run it all on your laptop. You can also install in your cluster.
Kube-ops-view requires metrics to be installed and running correctly.
Running kube-ops-view on you laptop
For a local installation, you need to be logged in as a cluster administrator and then run the following:
oc proxy & docker run -it --net=host raffaelespazzoli/ocp-ops-view
And then point your browser to
Running Kube-ops-view in your cluster
An in-cluster installation allows you to make the console available to users who are not cluster-admin.
You can install kube-ops-view in this mode by running the following:
oc new-project ocp-ops-view oc create sa kube-ops-view oc adm policy add-scc-to-user anyuid -z kube-ops-view oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:ocp-ops-view:kube-ops-view oc apply -f oc expose svc kube-ops-view oc get route | grep kube-ops-view | awk '{print $2}'
2 Responses to “How Full is My Cluster? Capacity Management and Monitoring on OpenShift”
Carlos Santiago Moreno
Hi, thanks for all the info, great post.
I’ve just a comment, regarding the last example. The “oc adm policy” command, sets the cluster-reader role to a SA from project “kube-ops-view”, but the project created with the first command (and in which all the elements are created) is “ocp-ops-view”, so, to make it work properly, I had to set the permissions to SA “system:serviceaccount:ocp-ops-view:kube-ops-view”.
Sean
Hi, thanks for this post, really useful
I’ve managed to spin up the app in OpenShift, but I just see the top banner of the page and no cluster info. I tried setting the CLUSTERS env variable, but no joy
Any pointers would be appreciated and we have multiple clusters that I would like to look at
Cheers | https://blog.openshift.com/full-cluster-capacity-management-monitoring-openshift/?share=google-plus-1 | CC-MAIN-2019-13 | refinedweb | 1,576 | 57.81 |
.
At the end of the sales process, the employee closes the case either marked as “successful” or “unsuccessful”. In a successful, case a new case for the internal division which will work on the project will be created. All that data is stored inside a Data Warehouse and can be used for reports and further analysis. To train and test the machine learning algorithms the closed sales cases were used.
To find out whether a sales case will be closed as successful or not, the features of the cases need to be analysed. Therefore, the following features were exported for every sales case:
- Lifetime: The time between the creation date of the case and the date when it was closed or the current date, if the case is still open
Activity: This simply counts the amount of communication with the customer of any kind, like phone calls, mails and meetings.
Activity per time: While activity counts the communication over the whole lifetime of a case, the activity per time indicates how many communications there have been within a week.
Customer status: Whether the potential customer is a new customer or an existing one.
Division: The division inside the company, which would work on the project.
Origin: Gives information where the opportunity is coming from, for example an already known customer, an event or a request for proposals.
Sales volume
sales volume for licenses
proportional marketing costs
All these features were exported from the Data Warehouse as csv and then processed with Python. Therefore, a Jupyter Notebook was used. These notebooks are very flexible since they can run on your local machine or they can even connect to a Hadoop cluster, using PySpark, where they can be used by several users via the WebUI.
A first step is to search for clusters within the given data, in this example the already closed sales cases. Using K-Means, it is possible to find out if a certain combination of features more likely leads to a successful closing of the sales case.
For that approach, the K – Means algorithm is a good option. It is an unsupervised machine learning algorithm that needs no labels. It finds k clusters within the data, where k is a number of your choice. The algorithm starts with k randomly picked data points and chooses them as the centers for the k clusters. Then, all other data points are assigned to the cluster where the center is nearest to them. In the next step for the now existing clusters the centers are newly calculated. Then again, every data point is assigned to the cluster with the nearest center. This procedure goes on until the centers don’t really change their positions anymore.
The K - Means algorithm is already included in the python package “sklearn”. Further information can be found at the scikit-learn documentation.
But it is also no big deal to implement that algorithm yourself. If your data is stored in a Hadoop cluster, you can connect the Jupyter Notebook to Spark and use the following PySpark code to run the algorithm on the cluster:
import numpy as np def closestPoint(p, centers): bestIndex = 0 closest = float("+inf") for i in range(len(centers)): Dist = np.sum((p - centers[i]) ** 2) if Dist < closest: closest = Dist bestIndex = i return bestIndex data = ... K = 3 convergeDist = 0.1 kPoints = data.takeSample(False, K, 1) Dist = 1.0 while Dist > convergeDist: closest = data.map(lambda p: (closestPoint(p, kPoints), (p, 1))) pointStats = closest.reduceByKey(lambda p1, p2: (p1[0] + p2[0], p1[1] + p2[1])) newPoints = pointStats.map(lambda st: (st[0], st[1][0] / st[1][1])).collect() Dist = sum(np.sum((kPoints[i] - p) ** 2) for (i, p) in newPoints) for (i, p) in newPoints: kPoints[i] = p
This snippet calculates the final coordinates for the centers for each cluster. In the end each data point has to be assigned to its closest center which can easily be done with the function “closestPoint”.
So far, no prediction for the open sales cases has been generated. For that purpose, a supervised machine learning algorithm is used. A more simple one is the decision tree algorithm. The decision tree algorithm splits the data into groups at every branch until the final decision is made. The paths from root to leaf represent classification rules. This tree-like structure can be plotted as seen in the example picture below.
Supervised algorithms generate knowledge based on training data, here the already closed sales cases were used for that purpose. After the training phase the model needs to be tested. Therefore 20 percent of the closed sales cases were used, while the other 80 percent were training data. A good choice is to split the data with a random component.
df['is_train'] = np.random.uniform(0, 1, len(df)) <= .8 train, test = df[df['is_train']==True], df[df['is_train']==False]
For the decision tree it is possible to visualize the decisions as seen above. This is a good way to get a first impression of the data. In the next step, the random forest algorithm was used because it is more reliable. Random forest consists of several uncorrelated decision trees, where every tree had grown with a random component during its learning process. The features are always randomly permuted at each split. In default, random forest grows 10 trees. This can be changed with the variable n_estimators. For the final classification of the data every tree has a vote. The class with most votes will be the one assigned to the data. This algorithm has many advantages:
In Python it’s quite simple to use that algorithm.
import sklearn from sklearn.ensemble import RandomForestClassifier data = ... features = df.columns[:9] clf = RandomForestClassifier(n_jobs=2, random_state=0) clf.fit(train[features], train['success']) clf.predict(test[features])[0:10]
To evaluate the prediction on the test data you can create a confusion matrix.
M=pd.crosstab(test['success'], clf.predict(test[features]), rownames=['Actual success'], colnames=['Predicted success'])
The random forest algorithm gives you also the ability to view a list of the features and their importance scores.
sorted(list(zip(train[features], clf.feature_importances_)),key=itemgetter(1), reverse=True)
Finally, the algorithm was used on the open sales cases to predict their success.
It turns out that multiple features are important to predict the success of an open sales case. And of course the right balance between these features makes a sales opportunity successful.
The most important features are:
The results can be integrated in the existing reporting by pushing them back as a table into the Data Warehouse.
Analysing data from ConSol CM with Machine Learning algorithms can also be used to optimise complaint management, for fraud detection, to cluster customers into groups and many more. | https://labs.consol.de/de/consol-cm/big-data/2018/04/20/machine-learning-and-consol-cm.html | CC-MAIN-2020-50 | refinedweb | 1,132 | 56.25 |
Set the effective user ID
#include <unistd.h> int seteuid( uid_t uid );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The seteuid() function lets the calling process set the effective user ID, based on the following:
The real and saved user IDs aren't changed.
/* * This process sets its effective userid to 0 (root). */ #include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <stdlib.h> int main( void ) { uid_t oeuid; oeuid = geteuid(); if( seteuid( 0 ) == -1 ) { perror( "seteuid" ); return EXIT_FAILURE; } printf( "effective userid now 0, was %d\n", oeuid ); return EXIT_SUCCESS; } | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/s/seteuid.html | CC-MAIN-2018-22 | refinedweb | 104 | 69.07 |
cubicweb-rqlcontroller 0.4.0
restfull rql edition capabilities
Summary
Controller that gives users rql read/ write capabilities.
Sample usage
Users of this service must perform a HTTP POST request to its endpoint, that is the base url of the CubicWeb application instance appended with the “rqlio/1.0” url path.
The posted data must use the application/json MIME type, and contain a list of pairs of the form (rql_string, rql_args), where:
- rql_string is any valid RQL query that may contain mapping keys with their usual form
- rql_args is a dictionary, whose keys are the mapping keys from rql_string, and the values can be:
- actual values
- string references to a previous RQL query’s result, with the assumption that the referenced RQL query returns a single line and single column rset; under such conditions, a string reference must be “__rXXX” where XXX is the (0-based) index of the RQL query in the json-encoded list of queries.
The HTTP request’s response (in case where there is no error), is a json-encoded list. Its length is the number of RQL queries in the request, and each element contains the json-encoded result set rows from the corresponding query.
In case of an error, a json object with a reason key will explain the problem.
Python client example using python-requests:
import requests import json args = [('INSERT CWUser U: U login %(l)s, U upassword %(p)s', {'l': 'Babar', 'p': 'cubicweb rulez & 42'}), ('INSERT CWGroup G: G name "pachyderms"', {}), ('SET U in_group G WHERE U eid %(u)s, G eid %(g)s', {'u': '__r0', 'g': '__r1'}) ] resp = requests.post(''), data=json.dumps(args), headers={'Content-Type': 'application/json'}) assert resp.status_code == 200
- Author: LOGILAB S.A. (Paris, FRANCE)
- License: LGPL
- Categories
- Package Index Owner: logilab
- DOAP record: cubicweb-rqlcontroller-0.4.0.xml | https://pypi.python.org/pypi/cubicweb-rqlcontroller | CC-MAIN-2018-13 | refinedweb | 305 | 54.66 |
list, songs, which I want to divide into two groups.
Essentially, I want:
new_songs = [s for s in songs if s.is_new()]
old_songs = [s for s in songs if not s.is_new()]
but I don't want to make two passes over the list. I could do:
new_songs = []
old_songs = []
for s in songs:
if s.is_new():
new_songs.append(s)
else:
old_songs.append(s)
Which works, but is klunky compared to the two-liner above. This seems like a common enough thing that I was expecting to find something in itertools which did this. I'm thinking something along the lines of:
matches, non_matches = isplit(lambda s: s.is_new, songs)
Does such a thing exist?
You could do something like:
new_songs, old_songs = [], []
[(new_songs if s.is_new() else old_songs).append(s) for s in songs]
But I'm not sure that that's any better than the long version.
itertools.groupby() is kinda similar, but unfortunately doesn't fit the bill due to its sorting requirement.
There is regrettably no itertools.partition(). And given how dead-set Raymond seems to be against adding things to the itertools module, there will likely never be.
Maybe more-itertools ( )
would accept a patch?
Hi,
I have a list of arbitrary length, and I need to split it up into equal size chunks.
This should work:
l = range(1, 1000)
print chunks(l, 10) -> [ [ 1..10 ], [ 11..20 ], .., [ 991..999 ] ]
I was looking for something useful in itertools but I couldn't find anything obviously useful.
Appretiate your help.
How can I use the '.split()' method (am I right in calling it a method?) without instead of writing each comma between words in the pie list in the following code? Also, is there a way to use .split instead of typing the apostrophes?
import random
pie=['keylime', 'peach', 'apple', 'cherry', 'pecan']
print(random.choice(pie))
Input:
[1 7 15 29 11 9]
Output:
[9 15] [1 7 11 29]
Average of first part: (15+9)/2 = 12,
Average of second part: (1 + 7 + 11 + 29) / 4 = 12.
Forgot Your Password?
2018 © Queryhome | https://tech.queryhome.com/3382/split-a-list-into-two-parts-based-on-a-filter-in-python | CC-MAIN-2018-22 | refinedweb | 346 | 77.43 |
I am working on a code that is supposed to take in a users input, then convert it to binary with base 2 through 9. (Sorry if these are the wrong terms, completely new to the idea of binary.) I have the code done, but there is something missing. This is what it is supposed to out put when the user types in "245"
converted to base 2 = 11110101
converted to base 3 = 100002
converted to base 4 = 3311
converted to base 5 = 1440
converted to base 6 = 1045
converted to base 7 = 500
converted to base 8 = 365
converted to base 9 = 302
converted to base 2 = 1111010
converted to base 3 = 10000
converted to base 4 = 331
converted to base 5 = 144
converted to base 6 = 104
converted to base 7 = 50
converted to base 8 = 36
converted to base 9 = 30
import java.util.*;
public class Tester {
public static void main(String args[]) {
//ask user for number
Scanner k = new Scanner(System.in);
System.out.println("Please enter a positive integer.");
int input = k.nextInt();
System.out.println();
//this loop converts the number into each base from 2 to 9
//for each base the loop calls the convertNumber() method to do the conversion
for(int i=2; i<=9; i++) {
System.out.print("converted to base " + i + " = ");
convertNumber(input, i);
System.out.println();}
}
/*
* Recursive method that prints the given number in the given base
* example: if n = 13 and base = 2 (binary) then 1101 should be displayed
*/
private static void convertNumber(int n, int base) {
if (n >= base) {
n = n/base;
convertNumber(n, base);
int r = (n % base);
System.out.print(r);
}
} //end convertNumber
}//ends Tester
I see a bug in the convert number routine, if n is not GE than the base, you need an else, where you print out whatever n is. This is why all of your responses are missing the last digit.
Best of luck. | https://codedump.io/share/If6z2uADpfKw/1/binary-conversion-issues | CC-MAIN-2018-26 | refinedweb | 322 | 65.25 |
Talk:Proposed features/demolished
Contents
date questions
I like this key. A few questions:
what is the format for the date? the example provided leaves this ambiguous.[Update: found this in the sidebar and corrected in article] it seems pretty important that this is consistently applied, else it can't be accurately interpreted. FWIW I've used dd-mm-yyyy (as OSM's native language is en-GB AFAIK) but would prefer yyyy-mm-dd for computational efficiency and sorting.
- can we accept partial dates (e.g. if only the month or year is known)? this would work better with the yyyy-mm-dd format of dates BTW
- can we populate with the value "yes" if the date is unknown?
--Hubne 06:19, 15 April 2011 (BST)
I favour specifying ISO8601 date format. Standard, documented and unambiguous. -- User:EliotB
Please use standard formats like ISO8601. Making month and day optional, and allowing "yes" would be beneficial too. I'm a Brit myself, and I don't recognise dd-mm-yyyy until I twig you probably meant dd/mm/yyyy (and why aren't you using normal, sortable, unambiguous ISO-format dates?) --achadwick 22:42, 5 June 2011 (BST)
This tag suffers from the same data consistency problems as disused=* and abandoned=*: see the pages linked for details. To quote the page for disused=yes:
-.
For both of the above tags, I've suggested using the corresponding namespace as a suitable "tweak". Almost as a way of storing the tags which are no longer relevant as a result of the disuse or the abandonment. It's backwards-compatible, at least. The main page for this tag should be updated with some similar wording. It's particularly important to make the recommendations backwards-compatible with software without special handling for demolished=* because routing software may route users to (or along) rubble in this case. Demolished objects will also render as normal objects.
--achadwick 22:37, 5 June 2011 (BST)
Please move to Key:demolished
... since the wiki is case-sensitive ☺ --achadwick 22:39, 5 June 2011 (BST)
"Map what's on the ground"
- -1 on this tag. The general rule and consensus in OSM is to map what's standing and relevant on the ground right now, not historical data, I can't agree with this tag. Demolished objects are fairly obviously no longer still standing or relevant - unless they're a building site now. We have a separate tag for that. Even so, if you choose to go ahead with it, I hope you'll take on board my suggestions above. --achadwick 22:50, 5 June 2011 (BST)
- Yes, this is a terrible idea. Unverifiable, makes using data even more complicated etc. Any demolished object should be removed from OSM Mateusz Konieczny (talk) 07:31, 17 July 2014 (UTC)
- No reason to not have a scheme to tag demolished objects.
- sometimes demollished objects are still visible or even important obstacles
- several initiatives want to map historical objects in separate databases. The tags - if possible - should be still documented and compatible with main OSM
- RicoZ (talk) 22:10, 19 December 2014 (UTC)
This is very relevant considering many residential areas recently devastated by tsunami, earthquakes, wars, etc. Remains still visible on the ground in recent images. It should be actualized buildings that eventually have been reconstructed or replaced ancient ones. I think the perimeter of demolished buildings should be rendered like a line (like for wall) in "demolished=building".
Move to rehabilitate (use ISO dates, recast as a namespace, strongly discourage use)
Per [1], I move that we "rehabilitate" (basically, do away with) this broken tag. I propose
- Modification of this page to avoid bad data creeping into the database, by:
- Rewriting it as a namespace in a similar manner to abandoned and disused;
- Altering the date specification to be Right, namely the ISO 8601 date formats.
- Strongly discouraging its use altogether.
See the linked mailing list thread, or the discussion topics above, for further details.
--achadwick (talk) 17:27, 27 June 2013 (UTC)
redundancy to end_date
If this is used the same way as end_date=*, as is currently proposed, it is quite redundant. Perhaps it can be a yes/no value, to indicate if the tagged object was destroyed (since end_date can mean when that usage of the object stopped). Abbafei (talk) 06:36, 17 July 2014 (UTC)
Or perhaps it can be a namespace tag, like disused=*; for example demolished:building:yes or demolished:highway:tertiary. This can even apply for anything which no longer exists, even things to which demolition does not apply (like a leisure=sport). Abbafei (talk) 06:41, 17 July 2014 (UTC)
Currently proposed for namespace usage @ Comparison of life cycle concepts. Abbafei (talk) 06:08, 25 January 2015 (UTC)
- end_date is dead and especially not usable to mark demolished objects as they would still appear on the map and confuse existing applications. I do not think it is proposed in Comparison of life cycle concepts but should be merely listed there for completeness (and as an example how not to do it). If you see any places where end_date is endorsed in this way please add a big fat warning there.
- I believe this proposal is dead for the same reason and indeed demolished: used as namespace prefix is used instead of it. I will edit the proposal to reflect this. It is a damn serious deficit of the current wiki templates and taginfo that it is impossible to create functional key:namespace entries in the wiki.
- RicoZ (talk) 12:33, 25 January 2015 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Key:demolished: | CC-MAIN-2019-43 | refinedweb | 930 | 62.58 |
Introduction.
Everything the application needs to run is included. The Docker image contains the code, runtime, system libraries and anything else you would install on a server to make it run if you weren’t using Docker.
What Makes Docker Different from a Virtual Machine
You may have used Vagrant, VirtualBox, or VMWare to run a virtual machine. They allow you to isolate services, but there are a few major differences which make virtual machines much less efficient.
For starters, you need to have an entire guest operating system for each application you want to isolate. It also takes many seconds to boot-up a virtual machine, and each VM can potentially be gigabytes in size.
Docker containers share your host’s kernel, and isolation is done using cgroups and other linux kernel libraries. Docker is very lightweight — it typically takes 50 milliseconds for a container to start, and running a container doesn’t use much disk space at all.
What’s the Bottom Line?
What if you could develop your Rails application in isolation on your work station without using RVM or chruby, and changing Ruby versions were super easy?
What if as a consultant or freelancer with 10 Rails projects, you had everything you needed isolated for each project without needing to waste precious SSD disk space?
What if you could spin up your Rails, PostgreSQL, Redis, and Sidekiq stack in about 3 seconds?
What if you wanted to share your project on GitHub and other developers only had to run a single command to get everything running in minutes?
All of this and much more is possible thanks to Docker.
The Benefits of Using Docker
If you’re constantly looking for ways to improve your productivity and make the overall software development experience better, you’ll appreciate the following 5 key benefits Docker offers:
1. Cross Environment Consistency
Docker allows you to encapsulate your application in such a way that you can easily move it between environments. It will work properly in all environments and on all machines capable of running Docker.
2..
3..
4..
5..
Prerequisites
You will need to install Docker. Docker can be run on most major Linux distributions, and there are tools to let you run it on OSX and Windows too.
This tutorial focuses on Linux users, but it will include comments when things need to be adjusted for OSX or Windows.
Installing Docker
Follow one of the installation guides below for your operating system:
- [Linux]
- [Mac]
- [Windows]
Before proceeding, you should have Docker installed and you need to have completed at least the hello world example included in one of the installation guides above.
This guide expects you to have Docker 1.9.x installed, as it uses features introduced in Docker 1.9.
The Rails Application
The application we’re going to build will be for the latest version of Rails 4 which happens to be
4.2.5 at the time of writing.
However, all of the concepts described below can be used for Rails 5 when it comes out.
Generating a New Rails Application
We’re going to generate a new Rails project without even needing Ruby installed on our work station. We can do this by using the official Rails Docker image.
Creating a Dummy Project
First, let’s create a dummy project:
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)" docker run -it --rm --user "$(id -u):$(id -g)" \ -v "$PWD":/usr/src/app -w /usr/src/app rails:4 rails new --skip-bundle dummy
Running it for the first time will take a while because Docker needs to pull the image. The command above will create the application on our work station.
The
-v "$PWD":/usr/src/app -w /usr/src/app segment connects our local working directory with the
/usr/src/app path within the Docker image. This is what allows the container to write the Rails scaffolding to our work station.
The
--user flag ensures that you own the files instead of root.
The
rails new --skip-bundle dummy bit should look familiar if you’re a Rails developer. That’s the command we’re passing to the Rails image.
You can learn more about the official Rails image located on the Docker HUB.
Deleting the Project
We created the application in the home directory:
nick@isengard:~ $ ls -la drwxr-xr-x 12 nick nick 4096 Dec 11 09:48 dummy
Delete it like using the following command:
rm -rf dummy
Creating the Real Project
We’ll run the same command as last time, but we will change the name of the project. Note how fast the project gets generated this time around:
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)" docker run -it --rm --user "$(id -u):$(id -g)" \ -v "$PWD":/usr/src/app -w /usr/src/app rails:4 rails new --skip-bundle drkiq
It’s basically the same as creating a new Rails project without using Docker.
Setting Up a Strong Base
Before we start adding Docker-specific files to the project, let’s add a few gems to our
Gemfile and make a few adjustments to our application to make it production ready.
Modifying the Gemfile
Add the following lines to the bottom of your
Gemfile:
gem 'unicorn', '~> 4.9' gem 'pg', '~> 0.18.3' gem 'sidekiq', '~> 4.0.1' gem 'redis-rails', '~> 4.0.0'
Also, make sure to remove the
sqlite gem near the top.
DRYing Out the Database Configuration
Change your
config/database.yml to look like this:
--- development: url: <%= ENV['DATABASE_URL'].gsub('?', '_development?') %> test: url: <%= ENV['DATABASE_URL'].gsub('?', '_test?') %> staging: url: <%= ENV['DATABASE_URL'].gsub('?', '_staging?') %> production: url: <%= ENV['DATABASE_URL'].gsub('?', '_production?') %>
We will be using environment variables to configure our application. The above file allows us to use the
DATABASE_URL, while also allowing us to name our databases based on the environment in which they are being run.
DRYing Out the Secrets File
Change your
config/secrets.yml to look like this:
--- development: &default secret_key_base: <%= ENV['SECRET_TOKEN'] %> test: <<: *default staging: <<: *default production: <<: *default
YAML is a markup language. If you’ve never seen this syntax before, it involves setting each environment to use the same
SECRET_TOKEN environment variable.
This is fine, since the value will be different in each environment.
Editing the Application Configuration
Add the following lines to your
config/application.rb:
# ... module Drkiq class Application < Rails::Application # We want to set up a custom logger which logs to STDOUT. # Docker expects your application to log to STDOUT/STDERR and to be ran # in the foreground. config.log_level = :debug config.log_tags = [:subdomain, :uuid] config.logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT)) # Since we're using Redis for Sidekiq, we might as well use Redis to back # our cache store. This keeps our application stateless as well. config.cache_store = :redis_store, ENV['CACHE_URL'], { namespace: 'drkiq::cache' } # If you've never dealt with background workers before, this is the Rails # way to use them through Active Job. We just need to tell it to use Sidekiq. config.active_job.queue_adapter = :sidekiq # ... end end
Creating the Unicorn Config
Next, create the
config/unicorn.rb file and add the following content to it:
# Heavily inspired by GitLab: # # Go with at least 1 per CPU core, a higher amount will usually help for fast # responses such as reading from a cache. worker_processes ENV['WORKER_PROCESSES'].to_i # Listen on a tcp port or unix socket. listen ENV['LISTEN_ON'] # Use a shorter timeout instead of the 60s default. If you are handling large # uploads you may want to increase this. timeout 30 # Combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings: # preload_app true GC.respond_to?(:copy_on_write_friendly=) && before_fork do |server, worker| # Don't bother having the master process hang onto older connections. defined?(ActiveRecord::Base) &&, etc.. # addr = "127.0.0.1:#{9293 + worker.nr}" # server.listen(addr, tries: -1, delay: 5, tcp_nopush: true) defined?(ActiveRecord::Base) &&
Creating the Sidekiq Initialize Config
Now you can also create the
config/initializers/sidekiq.rb file and add the following code to it:
sidekiq_config = { url: ENV['JOB_WORKER_URL'] } Sidekiq.configure_server do |config| config.redis = sidekiq_config end Sidekiq.configure_client do |config| config.redis = sidekiq_config end
Creating the Environment Variable File
Last but not least, you need to create the
.drkiq.env file and add the following code to it:
# You would typically use rake secret to generate a secure token. It is # critical that you keep this value private in production. SECRET_TOKEN=asecuretokenwouldnormallygohere # Unicorn is more than capable of spawning multiple workers, and in production # you would want to increase this value but in development you should keep it # set to 1. # # It becomes difficult to properly debug code if there's multiple copies of # your application running via workers and/or threads. WORKER_PROCESSES=1 # This will be the address and port that Unicorn binds to. The only real # reason you would ever change this is if you have another service running # that must be on port 8000. LISTEN_ON=0.0.0.0:8000 # This is how we'll connect to PostgreSQL. It's good practice to keep the # username lined up with your application's name but it's not necessary. # # Since we're dealing with development mode, it's ok to have a weak password # such as yourpassword but in production you'll definitely want a better one. # # Eventually we'll be running everything in Docker containers, and you can set # the host to be equal to postgres thanks to how Docker allows you to link # containers. # # Everything else is standard Rails configuration for a PostgreSQL database. DATABASE_URL=postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000 # Both of these values are using the same Redis address but in a real # production environment you may want to separate Sidekiq to its own instance, # which is why they are separated here. # # We'll be using the same Docker link trick for Redis which is how we can # reference the Redis hostname as redis. CACHE_URL=redis://redis:6379/0 JOB_WORKER_URL=redis://redis:6379/0
The above file allows us to configure the application without having to dive into the application code. This is a very important step to making your application production ready.
This file would also hold information like mail login credentials or API keys. You should also add this file to your
.gitignore, so go ahead and do that now.
Dockerizing Your Rails Application
We’ll need to add 3 files to the project, but only the first one is mandatory.
Creating the Dockerfile
Create the
Dockerfile file and add the following content to it:
# Use the barebones version of Ruby 2.2.3. FROM ruby:2.2.3-slim # Optionally set a maintainer name to let people know who made this image. MAINTAINER Nick Janetakis <nick.janetakis@gmail.com> # Install dependencies: # - build-essential: To ensure certain gems can be compiled # - nodejs: Compile assets # - libpq-dev: Communicate with postgres through the postgres gem # - postgresql-client-9.4: In case you want to talk directly to postgres RUN apt-get update && apt-get install -qq -y build-essential nodejs libpq-dev postgresql-client-9.4 --fix-missing --no-install-recommends # Set an environment variable to store where the app is installed to inside # of the Docker image. ENV INSTALL_PATH /drkiq RUN mkdir -p $INSTALL_PATH # This sets the context of where commands will be ran in and is documented # on Docker's website extensively. WORKDIR $INSTALL_PATH # Ensure gems are cached and only get updated when they change. This will # drastically increase build times when your gems do not change. COPY Gemfile Gemfile RUN bundle install # Copy in the application code from your work station at the current directory # over to the working directory. COPY . . # Provide dummy data to Rails so it can pre-compile assets. RUN bundle exec rake RAILS_ENV=production DATABASE_URL=postgresql://user:pass@127.0.0.1/dbname SECRET_TOKEN=pickasecuretoken assets:precompile # Expose a volume so that nginx will be able to read in assets in production. VOLUME ["$INSTALL_PATH/public"] # The default command that gets ran will be to start the Unicorn server. CMD bundle exec unicorn -c config/unicorn.rb
The above file creates the Docker image. It can be used in development as well as production or any other environment you want.
Creating a dockerignore File
Next, create the
.dockerignore file and add the following content to it:
.git .dockerignore Gemfile.lock
This file is similar to
.gitgnore. It will exclude matching files and folders from being built into your Docker image.
Creating the Docker Compose Configuration File
Next, we will create the
docker-compose.yml file and copy the following content into it:
postgres: image: postgres:9.4.5 environment: POSTGRES_USER: drkiq POSTGRES_PASSWORD: yourpassword ports: - '5432:5432' volumes: - drkiq-postgres:/var/lib/postgresql/data redis: image: redis:3.0.5 ports: - '6379:6379' volumes: - drkiq-redis:/var/lib/redis/data drkiq: build: . links: - postgres - redis volumes: - .:/drkiq ports: - '8000:8000' env_file: - .drkiq.env sidekiq: build: . command: bundle exec sidekiq -C config/sidekiq.yml links: - postgres - redis volumes: - .:/drkiq env_file: - .drkiq.env
If you’re using Linux, you will need to download Docker Compose. You can grab the latest 1.5.x release from the docker/compose GitHub repo.
If you’re using OSX or Windows and are using the Docker Toolbox, then you should already have this tool.
What is Docker Compose?
Docker Compose allows you to run 1 or more Docker containers easily. You can define everything in YAML and commit this file so that other developers can simply run
docker-compose up and have everything running quickly.
Additional Information
Everything in the above file is documented on Docker Compose‘s website. The short version is:
- Postgres and Redis use Docker volumes to manage persistence
- Postgres, Redis and Drkiq all expose a port
- Drkiq and Sidekiq both use volumes to mount in app code for live editing
- Drkiq and Sidekiq both have links to Postgres and Redis
- Drkiq and Sidekiq both read in environment variables from
.drkiq.env
- Sidekiq overwrites the default
CMDto run Sidekiq instead of Unicorn.
Creating the Volumes
In the
docker-compose.yml file, we’re referencing volumes that do not exist. We can create them by running:
docker volume create --name drkiq-postgres docker volume create --name drkiq-redis
When data is saved in PostgreSQL or Redis, it is saved to these volumes on your work station. This way, you won’t lose your data when you restart the service because Docker containers are stateless.
Running Everything
Now it’s time to put everything together and start up our stack by running the following:
docker-compose up
The first time this command runs it will take quite a while because it needs to pull down all of the Docker images that our application requires.
This operation is mostly bound by network speed, so your times may vary.
At some point, it’s going to begin building the Rails application. You will eventually see the terminal output, including lines similar to these:
postgres_1 | ... redis_1 | ... drkiq_1 | ... sidekiq_1 | ...
You will notice that the
drkiq_1 container threw an error saying the database doesn’t exist. This is a completely normal error to expect when running a Rails application because we haven’t initialized the database yet.
Initialize the Database
Hit
CTRL+C in the terminal to stop everything. If you see any errors, you can safely ignore them.
Run the following commands to initialize the database:
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)" docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:reset docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:migrate
The first command should warn you that
db/schema.rb doesn’t exist yet, which is normal. Run the second command to remedy that. It should run successfully.
If you head over to the
db folder in your project, you should notice that there is a
schema.rb file and that it’s owned by your user.
You may also have noticed that running either of the commands above also started Redis and PostgreSQL automatically. This is because we have them defined as links.
docker-compose is smart enough to start dependencies.
Running Everything, Round 2
Now that our database is initialized, try running the following:
docker-compose up
On a quad core i5 with an SSD everything loaded in about 3 seconds.
Testing It Out
Head over to. If you’re using Docker Toolbox, then you should go to the IP address that was given to you by the Docker terminal.
You should be greeted with the typical Rails introduction page.
Working with the Rails Application
Now that we’ve Dockerized our application, let’s start adding features to it to exercise the commands you’ll need to run to interact with your Rails application.
Right now the source code is on your work station, and that source code is being mounted into the Docker container in real time through a volume.
This means that if you were to edit a file, the changes would take effect instantly, but right now we have no routes or any CSS defined to test this.
Generating a Controller
Run the following command to generate a
Pages controller with a
home action:
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)" docker-compose run --user "$(id -u):$(id -g)" drkiq rails g controller Pages home
In a second or two, it should provide everything you would expect when generating a new controller.
This type of command is how you’ll run future Rails commands. If you wanted to generate a model or run a migration, you would run them in the same way.
Modify the Routes File
Remove the
get 'pages/home' line near the top and replace it with the following:
root 'pages#home'
If you go back to your browser, you should see the new home page we have set up.
Adding a New Job
Use the following to add a new job:
# OSX/Windows users will want to remove --user "$(id -u):$(id -g)" docker-compose run --user "$(id -u):$(id -g)" drkiq rails g job counter
Modifying the Counter Job
Next, replace the
perform function to look like this:
def perform(*args) 21 + 21 end
Modifying the Pages Controller
Replace the
home action to look like this:
def home # We are executing the job on the spot rather than in the background to # exercise using Sidekiq in a trivial example. # # Consult with the Rails documentation to learn more about Active Job: # @meaning_of_life = CounterJob.perform_now end
Modifying the Home View
The next step is to replace the
app/views/pages/home.html.erb file to look as follows:
<h1>The meaning of life is <%= @meaning_of_life %></h1>
Restart the Rails Application
You need to restart the Rails server to pickup new jobs, so hit
CTRL+C to stop everything, and then run
docker-compose up again.
If you reload the website you should see the changes we made.
Experimenting on Your Own
Here are three things you should do to familiarize yourself with your new application:
- Changing the
h1color to something other than black
- Generating a model and then running a migration
- Adding a new action and route to the application
All of these things can be done without having to restart anything, so feel free to check out the changes after you have performed each one.
Continuous Integration for Docker projects on Semaphore
You can easily set up continuous integration for your Docker projects on Semaphore.
First thing you’ll need to do is sign up for a free Semaphore account. Then, you should add your Docker project repository. If your project has a
Dockerfile or
docker-compose.yml, Semaphore will automatically recommend the platform with Docker support.
Now you can run your images just as you would on your local machine, for example:
docker build <your-project> docker run <your-project>
With your Docker container up and running, you can now run all kinds of tests against it. To learn more about using Docker on Semaphore, you can check out Semaphore’s Docker documentation pages.
Where to Go Next?
Congratulations! You’ve finished dockerizing your Ruby on Rails application.
If you would like to learn more about Docker and how to deploy a Ruby on Rails application to production in an automated way, you can follow the link below to get a 20% discount on Nick’s new course Docker for DevOps: From development to production.
P.S. Would you like to learn how to build sustainable Rails apps and ship more often? We’ve recently published an ebook covering just that — “Rails Testing Handbook”. Learn more and download a free copy. | https://semaphoreci.com/community/tutorials/dockerizing-a-ruby-on-rails-application | CC-MAIN-2019-47 | refinedweb | 3,458 | 54.52 |
Don't understand fundamental concept of Pymakr and modules
- Patrick Keogh last edited by Patrick Keogh
I wasn't sure whether to put this in Pymakr or just "Getting Started"...
I know Python well, but despite reading lots of introductory documentation there is something fundamental that I don't understand. I have Atom and Pymakr running fine. I can load a Python module as main.py and it executes just fine. I can type Python statements to the ">>>" and they work fine too.
My program will use MQTT, and so I found the module and I have it as umqtt.py. I copied it to .atom/packages/umqtt/umqtt.py
I have a project that has three files:
boot.py (empty) main.py (38 lines of code) lib/umtqq.py (205 lines of code)
The first few lines of main.py are
print("Init of main") import pycom pycom.heartbeat(False) pycom.rgbled(0xFF0000) # Green from umqtt import MQTTClient from network import WLAN import machine import time
So I use Pymakr to
Format flash storage
With the project selected, Upload Project to Device
>>> Reading file status Failed to read project status, uploading all files Creating dir Garage Creating dir Garage/lib [1/3] Writing file Garage/boot.py (0kb) [2/3] Writing file Garage/lib/umqtt.py (6kb) [3/3] Writing file Garage/main.py (0kb) Upload done, resetting board... OK60'''
That all looks OK to me, but then if I select main.py and click Run I get
>>> >>> Init of main Traceback (most recent call last): File "<stdin>", line 7, in <module> ImportError: no module named 'umqtt' > Pycom MicroPython 1.20.2.r1 [v1.11-a5aa0b8] on 2020-09-09; WiPy with ESP32 Pybytes Version: 1.6.0 Type "help()" for more information. >>> >>>
So main.py runs and the LED is indeed red, but the import fails. So what am I not understanding about importing modules?
This is a classic.
You have to change your import from
from umqtt import MQTTClient
to
from lib.umqtt import MQTTClient
Thit will solve the problem.
However, I would recommend creating an empty file called __init__.py inside your lib folder. This way you just need to make a reference to the package where your libraries are, like
from directory.filename import Myclass
This might be helpful if you need to import something from a directory one level up.
- Manuel Ricardo Alfonso Sanchez last edited by
@Patrick-Keogh said in Don't understand fundamental concept of Pymakr and modules:
Garage
Hi. I suggest you to upload files directly in flash. As I seen in your upload sketch it appears a Garage folder and inside the lib folder and so on. | https://forum.pycom.io/topic/6613/don-t-understand-fundamental-concept-of-pymakr-and-modules | CC-MAIN-2021-31 | refinedweb | 447 | 76.01 |
We hope to offer a small but comprehensible tutorial here. At the end of this tutorial, you should be able to write all sorts of HTML templates using BlazeHtml.
Please note that you should at least know some basic Haskell before starting this tutorial. Real World Haskell and Learn You a Haskell are two good starting points.
Installation
The installation of BlazeHtml should be painless using
cabal install:
[jasper@alice ~]$ cabal install blaze-html
Overloaded strings
The
OverloadedStrings is not necessarily needed to work with BlazeHtml, but it is highly recommended, as it allows you to insert string literals in your HTML templates without having boilerplate function calls.
> {-# LANGUAGE OverloadedStrings #-}
Modules
This tutorial is a literate haskell file, thus we should begin by importing the modules we are going to use. To avoid name clashes, we just import everything while renaming the module namespace so that we can use unqualified names most of the time and revert to nice and short qualified names when there is a clash.
> import Control.Monad (forM_)
> import Text.Blaze.Html5 as H > import Text.Blaze.Html5.Attributes as A
As you can see, we imported the
Html5 modules. Alternatively, you can choose to import
Text.Blaze.Html4.Strict. More HTML versions are likely to be added in the future.
A first simple page
The main representation type in BlazeHtml is
Html. Therefore, your “templates” will usually have the type signature
ArgumentType1 → ArgumentType2 → Html. We will now write a small page that just contains a list of natural numbers up to a given
n.
> numbers :: Int -> Html
Note how these templates are pure. It is therefore not recommended to mix them with
IO code, or complicated control paths, generally – you should separate your “view” code from your “logic” code – but you already knew that, right?
> numbers n = docTypeHtml $ do > H.head $ do > H.title "Natural numbers" > body $ do > p "A list of natural numbers:" > ul $ forM_ [1 .. n] (li . toHtml)
We use the
docTypeHtml combinator which is basically the doctype followed by the
<html> tag.
Attributes
We also provide combinators to set attributes on elements. Attribute setting is done using the
! operator.
> simpleImage :: Html > simpleImage = img ! src "foo.png"
Oh, wait! Shouldn’t images have an alternate text attribute as well, according to the recommendations?
> image :: Html > image = img ! src "foo.png" ! alt "A foo image."
As you can see, you can chain multiple arguments using the
! operator. Setting an attribute on an element with context also uses the
! operator:
> parentAttributes :: Html > parentAttributes = p ! class_ "styled" $ em "Context here."
As expected, the attribute will only be added to the
<p> tag, and not to the
<em> tag. This is an alternative definition, equivalent to
parentAttributes, but arguably less readable:
> altParentAttributes :: Html > altParentAttributes = (p $ em "Context here.") ! class_ "styled"
Nesting & composing
It is very common to nest, compose and combine multiple templates, snippets or partials – use whatever terminology you prefer here. Again, a small example. Say we have a simple datastructure:
> data User = User > { getUserName :: String > , getPoints :: Int > }
If the user is logged in, we want to have a snippet that displays the user’s current status.
> userInfo :: Maybe User -> Html > userInfo u = H.div ! A.id "user-info" $ case u of > Nothing -> > a ! href "/login" $ "Please login." > Just user -> do > "Logged in as " > toHtml $ getUserName user > ". Your points: " > toHtml $ getPoints user
Once we have this, we can easily embed it somewhere else.
> somePage :: Maybe User -> Html > somePage u = html $ do > H.head $ do > H.title "Some page." > body $ do > userInfo u > "The rest of the page."
In the previous example, the user would probably be pulled out of a reader monad instead of given as an argument in a realistic application.
Getting the goods
Now that we have constructed a value of of the type
Html, we need to do something with it, right? You can extract your data using the
renderHtml function which has the type signature
Html → L.ByteString. This function can be found in the
Text.Blaze.Renderer.Utf8 module.
A lazy
ByteString is basically a list of byte chunks. The list of byte chunks the
renderHtml is your HTML page, encoded in UTF-8. Furthermore, all chunks will be nicely-sized, so the overhead is minimal.
There are other renderers as well – for example there is a prettifying renderer called
Text.Blaze.Renderer.Pretty, and if you just want a
String, use
Text.Blaze.Renderer.String.
The blaze-from-html tool
There is also a tool called
blaze-from-html which is used to convert HTML pages to Haskell code using the BlazeHtml library. It needs to be installed seperately using cabal:
[jasper@alice ~]$ cabal install blaze-from-html
The reason that we use a seperate package is because we want to keep the set of
blaze-html dependencies small.
blaze-from-html usage is pretty straightforward. An example:
[jasper@alice ~]$ curl -S | blaze-from-html
will output the Haskell code that would be needed to produce this page. By default,
blaze-from-html will use HTML5. You can use other variants as well:
[jasper@alice ~]$ blaze-from-html -v html4-transitional index.html
To include the imports as well, use the
-s flag – this will give you a piece of standalone code that can be compiled directly. The
-e flag causes
blaze-from-html to ignore a lot of errors, which might come in handy if the page you are trying to convert has some faults in it.
Further examples
This tutorial should have given you a good idea of how BlazeHtml works. We have also provided some more real-world examples, you can find them all in this directory on github.
Go forth, and generate some HTML! | https://jaspervdj.be/blaze/tutorial.html | CC-MAIN-2019-13 | refinedweb | 948 | 66.23 |
ADO.NET classes assist in retrieving data from numerous different data sources. ADO.NET classes form the bridge between your .NET programs and your database engine.
Data Providers and .NET Database Classes
Prior to creating a data-oriented application you should choose the data provider you'll use to access the database. Data providers are used to execute commands against data sources. The Microsoft .NET platform supports SQL and OLE DB data providers.
The SQL data provider is specially designed to work against Microsoft SQL Server and is highly optimized for such data access. The SQL classes reside in the System.Data.SqlClient namespace. OLE DB is not limited to a specific data engine and may be used to access any database, for which there exists an OLE DB driver. The corresponding namespace for an OLE DB data provider is System.Data.OleDb. Additionally, Microsoft is working with vendors to develop other data providers. An open database connectivity .NET provider (ODBC.NET) has been already released.
You work with ADO.NET classes in the same manner as you work with any other .NET classes (constructing, invoking methods etc.). In the next sections, we'll show you how to use the classes listed above in different database access scenarios (connected and disconnected).
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/codemag/Article/10306 | CC-MAIN-2016-22 | refinedweb | 241 | 61.33 |
In this post, we are going to discuss about Scala Primary Constructor in depth with real-time scenario examples.
Table of Contents
Post Brief TOC
- Introduction
- Primary Constructor in Scala
- Scala val and var in-brief
- Scala Primary Constructor With val and var
- Scala Primary Constructor in-brief
Introduction
As we know, Constructor is used to create instances of a class. Scala supports constructors in different way than in Java.
In Scala Language, a class can have two types of constructors:
- Primary Constructor
- Auxiliary Constructor
A Scala class can contain only Primary Constructor or both Primary Constructor and Auxiliary Constructors. A Scala class can contain one and only one Primary constructor, but can contain any number of Auxiliary constructors. We will discuss Primary Constructor in-detail in this post and Auxiliary Constructor in-detail in my coming post.
Before going to next sections, we need to understand Class Definition and Class Body as shown in the diagram below:
Class Body is defined with two curly braces “{ }”. First line is Class Definition.
Primary Constructor in Scala
In Scala, a Primary Constructor is a Constructor which starts at Class definition and spans complete Class body.
We can define Primary Constructor with zero or one or more parameters. Now, we will discuss them one by one with some simple examples.
Example-1:-Create a Person class with default Primary Constructor.
class Person{ // Class body goes here }
Here we have defined No-Argument or Zero-Arguments constructor like “Person()”. It is also known as “Default Constructor” in Java.
We can create an instance of Person class as shown below:
val p1 = new Person() var p2 = new Person
Both are valid in Scala. We can use No-Arguments constructor without parenthesis.
Example-2:-
Primary Constructor’s parameters are declared after the class name as shown below:
class Person(firstName:String, middleName:String, lastName:String){ // Class body goes here }
Here Person class’s Primary Constructor has three parameters: firstName, middleName and lastName.
Now, We can create an instance of Person class as shown below:
val p1 = new Person("First","","Last")
If we observe this example, some People may have Middle Name or not but still they have to provide all 3 parameters. It is not efficient way to deal with constructors in Scala. We can use Auxiliary Constructors to solve this problem (Please go through my next post).
Example-3:-
Anything we place within the Class Body other than Method definitions, is a part of the Primary Constructor.
class Person(firstName:String, middleName:String, lastName:String){ println("Statement 1") def fullName() = firstName + middleName + lastName println("Statement 2") }
When we execute above program in Scala REPL, we can get the following output:
scala> var p1 = new Person("Ram","","Posa") Statement 1 Statement 2 p1: Person = Person@3eb81efb
If we observe that output, as both println statements are defined in Class Body, they become the part of Primary Constructor.
Example-4:-:-
Any Statements or Loops (like If..else, While,For etc) defined in the Class Body also become part of the Primary Constructor.
class Person(firstName:String, middleName:String, lastName:String){ def fullName() = firstName + middleName + lastName if (middleName.trim.length ==0) println("Middle Name is empty.") }
Output:-
scala> var p1 = new Person("Ram","","Posa") Middle Name is empty. p1: Person = Person@64a40280
Example-5:-:-
Not only Statements or Expressions, any method calls defined in the Class Body also become part of the Primary Constructor.
class Person(firstName:String, middleName:String, lastName:String){ def fullName() = firstName + middleName + lastName fullName // A No-argument Method Call }
Output:-
scala> var p1 = new Person("Ram","-","Posa") Ram-Posa p1: Person = Person@64a40280
Scala val and var in-brief
Before discussing about Scala Primary Constructor, we need to revisit about Scala Field definitions concept.
In Scala, “val” and “var” are used to define class fields, constructor parameters, function parameters etc.
- “val” means value that is constant. “val” is used to define Immutable Fields or variables or attributes.
- Immutable fields means once we create we cannot modify them.
- “var” means variable that is NOT constant. “var” is used to define Mutable Fields or variables or attributes.
- Mutable fields means once we create, we can modify them.
Scala Primary Constructor With val and var
In Scala, we can use val and var to define Primary Constructor parameters. We will discuss each and every scenario with simple examples and also observe some Scala Internals.
We have defined three different Scala sources files as shown below:
Example-1:-
In Scala, if we use “var” to define Primary Constructor’s parameters, then Scala compiler will generate setter and getter methods for them.
Person1.scala
class Person1(var firstName:String, var middleName:String, var lastName:String)
Open command prompt at Source Files available folder and compile “Person1.scala” as shown below:
This step creates “Person1.class” file at same folder. “javap” command is the Java Class File Disassembler. Use this command to disassemble “Person1.class” to view its content as shown below:
As per this output, we can say that “var” is used to generate setter and getter for constructor parameters.
As per Scala Notation, setter and getter methods for firstName Parameter:
Getter Method
public java.lang.String firstName();
Setter Method
public void firstName_$eq(java.lang.String);
This “firstName_$eq” method name is equal to “firstName_=”. When we use “=” in Identifiers Definition(Class Name, Parameter Name, Method names etc.), it will automatically convert into “$eq” Identifier by Scala Compiler.
NOTE:-
Scala does not follow the JavaBeans naming convention for accessor and mutator methods.
Example-2:-
In Scala, if we use “val” to define Primary Constructor’s parameters, then Scala compiler will generate only getter methods for them.
Person2.scala
class Person1(val firstName:String, val middleName:String, val lastName:String)
Open command prompt, compile and use “javap” to disassemble to see the generated Java Code as shown below:
If we observe this output, we can say that “val” is used to generate only getter for constructor parameters.
As per Scala Notation, getter methods for firstName, middleName and lastName Parameters:
Getter Methods
public java.lang.String firstName(); public java.lang.String middleName(); public java.lang.String lastName();
Example-3:-
In Scala, if we don’t use “var” and “val” to define Primary Constructor’s parameters, then Scala compiler does NOT generate setter and getter methods for them.
Person3.scala
class Person1(firstName:String, middleName:String, lastName:String)
Open command prompt, compile and use “javap” to disassemble to see the generated Java Code as shown below:
If we observe this output, we can say that no setter and getter methods are generated for firstName, middleName and lastName Constructor Parameters.
Scala Primary Constructor in-brief
The Scala Primary Constructor consist of the following things:
- The constructor parameters declare at Class Definition
- All Statements and Expressions which are executed in the Class Body
- Methods which are called in the Class Body
- Fields which are called in the Class Body
In Simple words, anything defined within the Class Body other than Method Declarations is a part of the Scala Primary Constructor.
That’s it all about Scala Primary Constructor. We will discuss Scala Auxiliary Constructors in-depth in coming posts.
Please drop me a comment if you like my post or have any issues/suggestions.
Simply The Best Keep it up Sir ..
This s excellent, everything about constructors on one page..thanks
Am searching Google for the constructors in Scala and i read so many blogs but this is the best information with simple examples Thanks you
Example 5 is not working….
class Person(firstName:String, middleName:String, lastName:String){
def fullName() = firstName + middleName + lastName
fullName // A No-argument Method Call
}
OUTPUT ::
scala> new Person(“H”,”E”,”MAN”)
res0: Person = Person@4e302a4b
method call is not called in the primary constructor… I am using 2.10 version…
Is it updated in the newer version
very well explained
Great post! It helps me a lot. Now I understand how to write the constructor correctly in scala. Looking forward to your next post! | https://www.journaldev.com/9810/scala-primary-constructor-indepth | CC-MAIN-2021-25 | refinedweb | 1,327 | 53.81 |
Actions¶
Define your view¶
You can setup your actions on records on the show or list views. This is a powerful feature, you can easily add custom functionality to your db records, like mass delete, sending emails with record information, special mass update etc.
Just use the @action decorator on your own functions. Here’s an example
from flask_appbuilder.actions import action from flask_appbuilder import ModeView from flask_appbuilder.models.sqla.interface import SQLAInterface class GroupModelView(ModelView): datamodel = SQLAInterface(Group) related_views = [ContactModelView] @action("myaction","Do something on this record","Do you really want to?","fa-rocket") def myaction(self, item): """ do something with the item record """ return redirect(self.get_redirect())
This will create the necessary permissions for the item, so that you can include or remove them from a particular role.
You can easily implement a massive delete option on list’s. Just add the following code to your view. This example will tell F.A.B. to implement the action just for list views and not show the option on the show view. You can do this by disabling the single or multiple parameters on the @action decorator.
@action("muldelete", "Delete", "Delete all Really?", "fa-rocket", single=False) def muldelete(self, items): self.datamodel.delete_all(items) self.update_redirect() return redirect(self.get_redirect())
F.A.B will call your function with a list of record items if called from a list view. Or a single item if called from a show view. By default an action will be implemented on list views and show views so your method’s should be prepared to handle a list of records or a single record:
@action("muldelete", "Delete", "Delete all Really?", "fa-rocket") def muldelete(self, items): if isinstance(items, list): self.datamodel.delete_all(items) self.update_redirect() else: self.datamodel.delete(items) return redirect(self.get_redirect()) | http://flask-appbuilder.readthedocs.io/en/latest/actions.html | CC-MAIN-2017-47 | refinedweb | 302 | 51.14 |
Type: Posts; User: lolobebo
its a MS-4 barcode scanner
The company is Microscan who produces this scanner. And here is the code that I got from the internet. I have to us visual studio 6:
#include "stdafx.h"
#include "usb1.h"
//#include <setupapi.h>...
I got the driver from them. The project is, user will be able to dispense the gift card from ATM. I have to fix this scanner in the ATM machine, which will scan the barcode from the gift-card coming...
Hi olivthill2,
Truly speaking, I don't have any knowledge about this programming. I am new to it. I have got the project. I have to write a driver for the barcode scanner camera using C++. Can you...
olivthill2 , can you help me with this programming, as you already have worked on it.
Its a Aztec code. Do you have any idea about this programming?
when I am scanning something, it gives me a string of characters
If you have worked with the scanners, please help me to do this. This is something completely new for me. The scanner I am using is usb microscan camera
Can anyone help me?
I have to write a program for the usb barcode scanner to read the barcode. I haven't done Windows API programming before. I have to write the program in C++. I have gathered some... | http://forums.codeguru.com/search.php?s=4d3e13bb914ea5597d8fd1e173af2edd&searchid=8138397 | CC-MAIN-2015-48 | refinedweb | 230 | 86.6 |
- 09 Apr, 2013 3 commits
- ssh://git.bind10.isc.org/var/bind10/git/bind10
merged
mostly in documentation or in comments reviewed by muks via jabber includes output changes for two isc_throw.
Fix failure to connect in msgq tests.
- 08 Apr, 2013 1 commit
- 05 Apr, 2013 1 commit
- 04 Apr, 2013 10 commits
Conflicts: ChangeLog
- Marcin Siodelski authored
and also remove dash before beta revision in version naming
unit tests.
trivial, no review
I wrote this some month ago. Didn't get reviewed.
- 03 Apr, 2013 14.
- using namespace boost is no longer used in pkt6_unittest.cc - copyright years updated - Pkt6::getRelayOption() documented - several methods are now better commented - ChangeLog now refers to libdhcp++, not b10-dhcp6
Even if we fail to connect, close the socket. It is loosely related to the previous commit, as that one used unsuccessful connection attempts to discover msgq is not ready yet and the unit tests complained. It should have no real effect, since the garbage collector would reclaim the socket after a while anyway.
This should solve the race condition when the socket file is created, but connect to it does not work yet, because listen() was not called on it yet. Really connecting ensures it is possible to connect.
-. | https://gitlab.isc.org/adwol/kea/-/commits/68863a7847788b6dfa9de464c5daf7e48db6a273 | CC-MAIN-2022-21 | refinedweb | 208 | 64.61 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.