In this tutorial we will learn about the PipedWriter in Java.
java.io.PipedWriter writes characters to a piped output stream. Characters written to the piped output stream can be read by the corresponding piped reader. To read the piped characters, a PipedReader must be connected to the PipedWriter; either the PipedWriter(PipedReader pr) constructor or the connect(PipedReader pr) method is used to connect the pipe.
Constructor Detail
Method Detail
Example
Here an example is given which demonstrates how to use the PipedWriter. This example explains how to write a specified number of characters, starting from a specified offset of an array, to the piped output stream using the PipedWriter write() method.
Source Code
PipedWriterExample.java
import java.io.PipedReader;
import java.io.PipedWriter;
import java.io.IOException;

public class PipedWriterExample {
    public static void main(String[] args) {
        char[] ch = {'a','e','i','o','u','r','o','s','e','i','n','d','i','a'};
        PipedWriter pw = null;
        PipedReader pr = null;
        try {
            pr = new PipedReader();
            pw = new PipedWriter(pr);
            System.out.println();
            // Write by the PipedWriter
            pw.write(ch, 5, 9);
            // Close the writer so the reader sees end-of-stream instead of
            // blocking forever once the pipe is drained
            pw.close();
            // Read from the PipedReader
            int c;
            while ((c = pr.read()) != -1) {
                System.out.print((char) c);
            }
        } catch (IOException ioe) {
            System.out.println(ioe);
        } finally {
            if (pr != null) {
                try {
                    pr.close();
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        } // close finally
    } // close main
} // close class
Output
When you execute this example, you will get the following output:
roseindia
Posted on: December 20, 2012
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#7106 closed (wontfix)
django.db extra() method fails
Description
Hello, I work with PostgreSQL with full-text search enabled. To get full-text search working I used the extra() method. After updating to the latest svn version with the database refactor, this broke. I use a custom manager to get the results.
class NewsManager(models.Manager):
    def get_query_set(self):
        return super(NewsManager, self).get_query_set().filter(status=2).order_by('-dateTimeCreated')

class news(models.Model):
    dateTimeCreated = ...
    ...
    status = ...  # here I define the status of the item: published, deleted, awaiting approval...
    # The visible manager only shows the news whose state is "published"
    visible = NewsManager()
There is a problem with the positions of the arguments, and with auto-escaping of the table name.
This is the code I use:
words = 'here the words i want to search'
items = items.extra(
    select={
        'headline': "ts_headline(description, query, 'StartSel = <strong>, StopSel = </strong>')",
        'rank': "ts_rank_cd(tsv, query, 32)",
    },
    tables=["plainto_tsquery(%s) as query"],
    where=["tsv @@ query"],
    params=[words]
).order_by('-rank')
This is the SQL generated:
SELECT COUNT(*) FROM "app_news" , "plainto_tsquery(2) as query" WHERE "app_news"."status_id" = E'words to search' AND tsv @@ query
As you can see, the position of the arguments is wrong, and "plainto_tsquery(2) as query" shouldn't be between double quotes.
Change History (4)
comment:1 Changed 7 years ago by mtredinnick
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to wontfix
- Status changed from new to closed
comment:2 Changed 7 years ago by Adrián Ribao <aribao@…>
Thank you for your reply. I have no idea how to make it work using the subclass approach; where can I find some documentation about this?
comment:3 Changed 7 years ago by mtredinnick
It's not documented in any text file, since this is deep internals stuff. But each of the methods I referred to has docstrings that explain how they're used, so a bit of poking around and looking at how QuerySet implements things will get you started.
comment:4 Changed 7 years ago by Adrián Ribao <aribao@…>
So far I have this, and it works:
from django.db.models.sql.query import Query
from django.db import connection

class nQuery(Query):
    def get_from_clause(self):
        r = super(nQuery, self).get_from_clause()
        r[0].append(r", plainto_tsquery(%s) as query")
        r[1].append(r"'%s'" % (words,))
        return r

    def full_text(self):
        select = {
            'headline': "ts_headline(%s, query, 'StartSel = <strong>, StopSel = </strong>')" % ('content',),
            'rank': "ts_rank_cd(tsv, query, 32)",
        }
        self.add_extra(select, None, ('tsv @@ query',), None, None, None)

q = nQuery(Article, connection)
q.full_text()

from django.db.models.query import QuerySet
items = QuerySet(Article, q)
I was wondering if it would be very difficult to implement the full-text search ability of PostgreSQL in Django. I'd like to help or do it by myself, but I would probably need some help at some point.
It's never been intended that params would work with tables in extra(). The params argument was documented as being for the select and where arguments (and that has changed slightly after the merge as a documented backwards-incompatibility). So you're doing something that wasn't supported and happened to work by accident. Now it doesn't work. Supporting it would require adding yet another parameter and doesn't really seem worth it, since there are other ways to achieve the same thing. extra() is really for simple cases and I think this has gone a bit beyond that.
Also, because of the way aliases are managed inside queries now, trying to set your own via extra(tables=...) is going to be highly problematic (there are, at least, quoting problems as well as issues with aliases being renamed in the query).
It's hard to see exactly what's going on here, since you haven't shown the whole queryset, which apparently involves a count() call and maybe some other stuff. But if you want to tweak things at the table level like you're doing, you're going to have to look at the Query class. In particular the join() method and maybe the table_alias() method. A QuerySet subclass that uses its own Query class would be an appropriate approach here.
In short, I think this is wontfix because you're doing some very unsupported stuff here that would be quite hard to accommodate in the general case and there are alternative approaches that are more robust (the subclass approach). | https://code.djangoproject.com/ticket/7106 | CC-MAIN-2015-35 | refinedweb | 743 | 62.88 |
If you're looking for Python interview questions and answers for experienced candidates and freshers, you are at the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Python has a market share of about 4.0%. So you still have opportunities to move ahead in your career in Python. Mindmajix offers advanced Python interview questions (2018) that help you crack your interview and acquire your dream career as a Python developer.
Q. How is Python executed?
Python files are compiled to bytecode, which is then executed by the host.
Alternate Answer:
Type python scriptname.py at the command line.
Q. What is the difference between .py and .pyc files?
.py files are Python source files. .pyc files are the compiled bytecode files that are generated by the Python compiler.
Q. How do you invoke the Python interpreter for interactive use?
python or pythonX.Y, where X.Y is the version of the Python interpreter desired.
Q. How are Python blocks defined?
By indents or tabs. This is different from most other languages which use symbols to define blocks. Indents in Python are significant.
Q. What is the Python interpreter prompt?
Three greater-than signs: >>> Also, when the interpreter is waiting for more input, the prompt changes to three periods (...).
Q. How do you execute a Python Script?
From the command line, type python scriptname.py (or pythonX.Y scriptname.py, where X.Y is the version of the Python interpreter desired).
Q. Explain the use of try, except, raise, and finally.
Code that may raise an exception goes in the try block; an except block handles the exception when one is raised; the finally block is executed regardless of whether an exception occurred. raise may be used to raise your own exceptions.
Accelerate your career with PYTHON TRAINING and become expertise Python developer.
Q. Illustrate the proper use of Python error handling.
Code Example:
try:
    ...  # This can be any code
except:
    ...  # error handling code goes here
finally:
    ...  # code that will be executed regardless of exception handling goes here
Q. What happens if an error occurs that is not handled in the except block?
The program terminates, and an execution trace is sent to sys.stderr.
Q. How are modules used in a Python program?
Modules are brought in via the import statement.
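For example, both whole-module and single-name imports look like this:

```python
import math                  # bring in the standard math module
from os import path          # or import a single name from a module

print(math.sqrt(16))         # 4.0
print(path.join("a", "b"))   # a/b on POSIX systems
```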
Q. How do you create a Python function?
Functions are defined using the def statement. An example might be def foo(bar):
Q. How is a Python class created?
Classes are created using the class statement. An example might be class aardvark(foobar):
Q. How is a Python class instantiated?
The class is instantiated by calling it directly. An example might be
myclass = aardvark(5)
Q. In a class definition, what does the __init__() function do?
It overrides any initialization from an inherited class, and is called when the class is instantiated.
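A short sketch (the class names here are illustrative, echoing the aardvark example above):

```python
class Foobar:
    def __init__(self, legs):
        self.legs = legs

class Aardvark(Foobar):
    def __init__(self, legs):
        super().__init__(legs)   # run the inherited initialization as well
        self.snout = "long"

myclass = Aardvark(4)            # __init__ is called here, at instantiation
print(myclass.legs, myclass.snout)   # 4 long
```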
Q. How does a function return values?
Functions return values using the return statement.
Q. What happens when a function doesn’t have a return statement? Is this valid?
Yes, this is valid. The function will then return a None object. The end of a function is defined by the block of code being executed (i.e., the indentation), not by any explicit keyword.
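A quick demonstration:

```python
def greet(name):
    print("Hello, " + name)    # no return statement anywhere

result = greet("World")        # prints: Hello, World
print(result)                  # None
```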
Q. What is the lambda operator?
The lambda operator is used to create anonymous functions. It is mostly used in cases where one wishes to pass functions as parameters, or assign them to variable names.
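For example, a lambda can be passed as the key parameter of sort, or assigned to a variable name:

```python
pairs = [(1, "b"), (2, "a"), (3, "c")]
pairs.sort(key=lambda p: p[1])   # sort by the second element of each pair
print(pairs)                     # [(2, 'a'), (1, 'b'), (3, 'c')]

add = lambda x, y: x + y         # an anonymous function bound to a name
print(add(3, 4))                 # 7
```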
Q. Explain the difference between local and global namespaces.
Local namespaces are created within a function, when that function is called. Global namespaces are created when the program starts.
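A small illustration:

```python
x = "global"        # created in the global namespace at program start

def show():
    x = "local"     # created in the function's local namespace when called
    return x

print(show())       # local
print(x)            # global -- the assignment inside show() did not touch it
```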
Q. Name the four main types of namespaces in Python?
a) Global,
b) Local,
c) Module and
d) Class namespaces.
Q. What are the two major loop statements?
for and while
Q. Under what circumstances would you use a while statement rather than for?
The while statement is used for simple repetitive looping and the for statement is used when one wishes to iterate through a list of items, such as database records, characters in a string, etc.
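The two styles side by side, as a quick sketch:

```python
# for: iterate through a list of items
for ch in ["a", "b", "c"]:
    print(ch)

# while: simple repetitive looping until a condition changes
n = 1
while n < 100:
    n = n * 2
print(n)   # 128
```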
Q. What happens if you put a continue statement inside a loop?
The rest of the current iteration is skipped and the loop moves on to its next iteration. For example, the following prints every character of the string except 'l':
for myChar in "hello":
    if myChar == "l":
        continue
    print myChar
Q. Illustrate how to execute a loop ten times.
i = 0
while i < 10:
    print i
    i += 1
Q. How do you use the GUI that comes with Python to test your code?
IDLE, the GUI that comes with Python, is just an editor and a graphical version of the interactive shell. You write or load code and run it, or type it into the shell. There is no automated testing.
Q. Why don't my signal handlers work?
The most common problem is that the signal handler is declared with the wrong argument list. It is called as:
handler (signum, frame)
So it should be declared with two arguments:
def handler(signum, frame):
Q. How do I test a Python program or component?
Python comes with two testing frameworks:
The doctest module finds examples in the documentation strings for a module and runs them, comparing the output with the expected output given in the documentation string.
The unittest module is a fancier unit testing framework modelled on the Java and Smalltalk testing frameworks.
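As a quick sketch of the documentation-test approach, an example embedded in a docstring doubles as a test:

```python
def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # prints nothing when all docstring examples pass
```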
Q. How do I find undefined g++ symbols __builtin_new or __pure_virtual?
To dynamically load g++ extension modules you must recompile Python, relink it using g++ (change LINKCC in the Modules Makefile), and link your extension module using g++ (e.g., g++ -shared -o mymodule.so mymodule.o).
Q. How do I send mail from a Python script?
Use the standard library smtplib module:
import sys, smtplib
fromaddr = raw_input("From: ")
toaddrs = raw_input("To: ").split(',')
print "Enter message, end with ^D:"
msg = ''
while 1:
    line = sys.stdin.readline()
    if not line:
        break
    msg = msg + line
# The actual mail send
server = smtplib.SMTP('localhost')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
Q. How do I send an HTTP POST request, for example to a CGI script?
Build a urlencoded query string and send it with httplib:
### connect to the server (the host name here is a placeholder)
httpobj = httplib.HTTP('www.example.com', 80)
httpobj.putrequest('POST', '/cgi-bin/some-cgi-script')
### now generate the rest of the HTTP headers...
httpobj.putheader('Accept', '*/*')
httpobj.putheader('Connection', 'Keep-Alive')
httpobj.putheader('Content-type', 'application/x-www-form-urlencoded')
httpobj.putheader('Content-length', '%d' % len(query_string))
httpobj.endheaders()
httpobj.send(query_string)
Note that the query string must be quoted; spaces and other special characters are escaped with urllib.quote:
>>> import urllib
>>> x = urllib.quote("Guy Steele, Jr.")
>>> x
'Guy%20Steele,%20Jr.'
>>> query_string = "name=" + x
>>> query_string
'name=Guy%20Steele,%20Jr.'
DepthMask
Latest revision as of 21:12, 16 May 2013
Author: Neil Carter (NCarter) and Daniel Brauer (Danielbrauer)
Description
This shader draws faces which are invisible, but which still appear in the depth buffer. This allows you to prevent objects from being drawn where they are occluded by the mask.
To understand how this technique works, you should know about the depth buffer and render queues before reading further.
Usage
The example package shows how to use the shader to prevent the water from appearing inside a boat's hull. The setup requires two things:
- A mask object using the Depth Mask shader. This object will be drawn just after regular opaque objects, and will prevent subsequent objects from being drawn behind it.
- Objects you wish to be masked must have the SetRenderQueue script attached to them. In the Inspector, change their queue from 3000 (regular geometry) to 3020 (just after the mask shader)
Alternate Use
In most common scenarios, you will only need a few objects to be masked. If, however, you find that you have more objects that need to be masked than objects that do not, you might find it useful to attach the SetRenderQueue script to the masks themselves, and set their queues to 2090 (just before regular geometry). Objects that you don't want masked should have their queues set to 2080 (just before the mask shader).
Example
Unity 3.4.1 package: 111KB
ShaderLab - DepthMask.shader
Shader "Masked/Mask" { SubShader { // Render the mask after regular geometry, but before masked geometry and // transparent things. Tags {"Queue" = "Geometry+10" } // Don't draw in the RGBA channels; just the depth buffer ColorMask 0 ZWrite On // Do nothing specific in the pass: Pass {} } }
C# - SetRenderQueue.cs
/*
    SetRenderQueue.cs

    Sets the RenderQueue of an object's materials on Awake.
    This will instance the materials, so the script won't interfere with other
    renderers that reference the same materials.
*/

using UnityEngine;

[AddComponentMenu("Rendering/SetRenderQueue")]
public class SetRenderQueue : MonoBehaviour {

    [SerializeField]
    protected int[] m_queues = new int[]{3000};

    protected void Awake() {
        Material[] materials = renderer.materials;
        for (int i = 0; i < materials.Length && i < m_queues.Length; ++i) {
            materials[i].renderQueue = m_queues[i];
        }
    }
}
One of the more common things IT Pros who work with Active Directory need to do is actually view or collect information from AD. That is true for AD administrators, application or service administrators, security specialists-pretty much any role in an environment where Active Directory is the store for user identities. The related roles which need data from AD are as varied as the methods with which you can collect that data. There’s a lot of both.
It seems go without saying, right? There can be difficulties in how and from where that data can be collected though. Those difficulties can be an obstacle to get the information needed to get your role done.
The difficulties can boil down to simply not having the Remote Server Administration Tools on the computer the data retrieval is coming from which ultimately leads to having the Active Directory PowerShell module available or installed.
This is particularly relevant in a scenario where you have a PowerShell code which may be executed on remote computers which may not have the Remote Server Administrations Tools (and hence the AD PS module) on them.
In addition, there is one thing the AD PowerShell module does not do and that is retrieve Active Directory object replication metadata. AD object replication metadata is a set of information which domain controllers use to track the update applicability for attributes. It is commonly referred to when attempting to track an update through AD to see if it has applied successfully or to gather some forensic information about where an update originated from and when it originated.
The typical way to glean this data is to use Repadmin.exe’s “Showobjmeta” switch and specify the distinguished name (DN) of the object you are interested in. DNs can be tough to simply jot down even when you know the entire path to an object. Often we don’t know the path to an object and must search for it in AD first to get the DN in order to then use it with Repadmin.exe.
To work around these obstacles, and as part of the needs for the Microsoft support diagnostics, I’ve written a PowerShell script. The script can be downloaded from the Microsoft Technet Script Gallery at this link.
The GetADObjectData.ps1 script will do Active Directory searches for a specified objects attribute values and can also retrieve the AD replication metadata for the object.
More about the script:
- The script has four switches
- “ObjectName”: The switch should contain the name or samaccountname (string) of the target object to retrieve data from. If none is supplied then the script will use the locally logged on user’s $env:username variable.
- “ADAttrs”: This is a Boolean switch to tell the script whether to collect AD attribute values or not. Defaults to $true.
- “ReplMetadata”: This is a Boolean switch to tell the script whether to collect the AD replication metadata for the specified object. Defaults to $false.
- “ToVariable”: This is a Boolean switch to tell the script to return all of the data to a variable or not. Defaults to $false. If this switch is $false then the values will be piped out to the PowerShell console.
- The script saves the results to %systemroot%\temp\ADObjectData.txt.
- The distinguished name of the object does not need to be known. Only the name or (depending on objectclass of the object) the samaccountname needs to be supplied.
- The object name which is specified may be what is in the "name" attribute for an object or the "samaccountname" of the object. This allows for a wider scope of object types to be retrieved from AD and adds some resiliency to the tendency to use a name value instead of samaccountname for user class objects, something I'm guilty of doing myself.
- The attributes which are returned are unformatted. This means that strings and integer value attribute types will be human readable while other attribute types will appear unreadable. An example would be the lastlogontimestamp attribute which will appear as an integer value and must be translated in PowerShell into a DateTime object in order to be readable as an actual date and time.
- The data which is returned when requesting the AD replication metadata from an object is from a single writable domain controller in the domain the user object is in. If updates to an object are “in flight” and have not yet applied to the domain controller which the query is against then the data will not show them.
- The data which is returned when requesting the AD object attribute values is retrieved from a Global Catalog query. Keep this in mind if you are expecting to see data which is not published to the partial attribute set. That data won’t appear.
- The script uses the System.DirectoryServices namespace and several of its "children" namespaces. These methods are used for constructing the LDAP queries (which is a common method to use for constructing LDAP queries using .Net code) as well as querying the domain controllers for the AD replication metadata.
- If using the ToVariable switch the returned data is in a PSObject. If both AD attributes and replication metadata are being returned, the returned PSObject will have two noteproperties. The first one is a PSObject value named "Attributes". The second one is named "Metadata" and is a hash table. Since the Metadata information is a hash table, you have to add some code (which you can copy from lines 105-119 of the script) to handle the values and put them in a usable format. They will already be readable in the output text file, however.
- You need to be a domain user for the domain or forest. You do not need to have domain administrator permissions to run this script, however some environments may be more restrictive than the default AD environment and as a result prevent the script from working.
- The “ToVariable” switch can be very useful if you have a list of objects you need to collect data on and would like to send all of a list of objectnames through the script as a foreach.
Here’s an example of the command line use of the script. In this case I put the values into a variable…
PS C:\Testing> $TestGroup3 = .\GETADOBJECTDATA.PS1 -Objectname "testgroup3" -ADAttrs $true -ReplMetadata $true -ToVariable $true
…and here is a snippet of the returned AD replication metadata from the object, as it will appear in the output text file:
Active Directory Replication Metadata for CN=TestGroup3,OU=TestOU,DC=tspring,DC=com
***************************************
Replication Metadata: nTSecurityDescriptor
********************
Replication Metadata: whenCreated
********************
The script is meant to be a good alternative to using Repadmin.exe or the AD PowerShell module if those aren't available. One of the handiest things I've done with it is to retrieve the security group members for a large list of security groups. I didn't know the domain or forest the groups resided in, and I definitely didn't know the groups' DNs, and this script did a great job getting that data without my having to have elevated privileges in the environment or spend a lot of time on the matter.
I hope this helps folks out in troubleshooting and maintaining their AD environments. | https://blogs.technet.microsoft.com/tspring/2014/11/03/no-matter-where-you-go-there-you-are-retrieving-data-from-active-directory/ | CC-MAIN-2018-51 | refinedweb | 1,222 | 60.14 |
Earlier we said that a common anti-crawler measure is to detect the client IP and limit its access frequency. So we need to bypass this limitation by setting up proxy IPs. There are many websites that offer free proxy IPs, and we can get a lot of proxy IPs from such a site. But not every one of these IPs can be used; in fact, very few can.
We can use BeautifulSoup to analyze the web pages and extract the proxy IP list, or use regular expressions to match them. Using regular expressions is faster. In the code below, ip_url is the address of the proxy list page, and random_header is a function that returns a random request header.
def download_page(url):
    headers = random_header()
    data = requests.get(url, headers=headers)
    return data

def get_proxies(page_num, ip_url):
    available_ip = []
    for page in range(1, page_num):
        print("Crawling the proxy IPs on page %d" % page)
        url = ip_url + str(page)
        r = download_page(url)
        r.encoding = 'utf-8'
        pattern = re.compile('.*?.*?.*?(.*?).*?(.*?)', re.S)
        ip_list = re.findall(pattern, r.text)
        for ip in ip_list:
            if test_ip(ip):
                print('%s:%s passed the test and was added to the list of available proxies' % (ip[0], ip[1]))
                available_ip.append(ip)
        time.sleep(10)
    print('Crawling finished')
    return available_ip
After getting the IPs, we also need to check each one to make sure it can actually be used. How do we detect that? We can use the proxy IP to access a website that displays the requesting IP, and then check the result of the request.
def test_ip(ip, test_url=''):
    proxies = {'http': ip[0] + ':' + ip[1]}
    try_ip = ip[0]
    try:
        r = requests.get(test_url, headers=random_header(), proxies=proxies)
        if r.status_code == 200:
            r.encoding = 'gbk'
            result = re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', r.text)
            result = result.group()
            print(result)
            if result[:9] == try_ip[:9]:
                print('%s:%s passed the test' % (ip[0], ip[1]))
                return True
            else:
                print('%s:%s failed to carry the proxy; the local IP was used instead' % (ip[0], ip[1]))
                return False
        else:
            print('%s:%s request code is not 200' % (ip[0], ip[1]))
            return False
    except Exception as e:
        print(e)
        print('%s:%s error' % (ip[0], ip[1]))
        return False
Some tutorials just check for a 200 HTTP status code and assume success. That's wrong: if the proxy fails to carry the request, it may fall back to your own IP, and access with your own IP will of course succeed.
Finally, we need to detect an IP again right before we use it, because you never know when it will stop working. It is also a good idea to store plenty of proxy IPs, so you are not left without a working one when you need it.
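One way to organize this is a small pool object that re-validates each IP as it is handed out. This is only a sketch; the class name is my own, and the checker argument stands in for a function like test_ip above:

```python
import random

class ProxyPool:
    """Keep candidate proxies and re-check each one before handing it out."""

    def __init__(self, checker):
        self.checker = checker   # function taking an (ip, port) tuple, returning True/False
        self.pool = []

    def add(self, ip):
        self.pool.append(ip)

    def get(self):
        # Keep drawing random candidates, discarding any that fail the check
        while self.pool:
            ip = random.choice(self.pool)
            if self.checker(ip):
                return ip
            self.pool.remove(ip)
        return None   # nothing usable left -- time to crawl more proxies
```

Usage would be pool = ProxyPool(test_ip), pool.add(ip) for each crawled entry, then pool.get() whenever a request needs a proxy.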
The code for this article refers to, and I have made some modifications. | https://developpaper.com/reptiles-2-establishment-of-proxy-ip-pool/ | CC-MAIN-2020-05 | refinedweb | 451 | 71.85 |
I love ironies. In fact, I collect them. One of my favorite ironies
about XML is that this technology, which has its genesis in
documentation, is rarely used to document applications created with it.
Ironies can make you laugh and ironies can make you cry. If you find
yourself knee deep in a modern XML document, you are more likely to want
to cry. Cry out for some documentation as to what all these darned tags
mean!
In days of yore, the one true source of documentation was the DTD
schema. These days, schemas (regardless of notation) may or may not
exist. Even if they do, there is no guarantee that they can be fished
out of the ether easily. Even if they can, who knows how informative
they will be? Sometimes you need the free-flowing expressive power of a
document format to express your ideas. Only so much richness can be
squeezed into a comment!
Irony number two coming up: the XML world now has a way of making every
element and every attribute globally unique. It is called the namespaces
in XML recommendation[1]. This recommendation uses URIs as the
uniquification mechanism for elements and attributes. Fine. However (and
this is the irony), there is no guarantee that there is anything at the
end of these namespace URIs. It is just a *name*, not the address of a
retrievable resource.
Try telling that to Eudora or Word or any of a gazillion other Web-aware
apps that cheerfully turn these things into hypertext links just begging
to be clicked. There can be no more disappointing moment during the
search for documentation of an XML application than when you get a "404"
attempting to retrieve the resource named in a namespace name.
For the sake of completeness, imagine that this article now spends a
thousand paragraphs or so arguing the merits and de-merits of treating a
namespace as anything other than a set of names. Those of you interested
in all the philosophical nuances may seek enlightenment in the XML Cover
Pages[2]. Furthermore, imagine that after said one thousand paragraphs,
I convince you that it is a good idea to put something at the end of
your namespace URIs...
Now that we have decided that it is a good idea to put a retrievable
resource at the end of a namespace URI, what should we put there?
Here we hit the age old tension between making something easy for humans
to read and easy for computers to read. As I sit in Eudora or Emacs
reading XML fragments, I want to be able to follow the links in the
namespaces and retrieve something human readable. On the other hand, in
my software, I would like to be able to retrieve a resource for the
namespace and automatically process it. I can envisage wanting to
enumerate all the tag names. I can envisage seeking to retrieve a good
CSS stylesheet to render the schema and so on.
Enter RDDL[3]. A blend of machine and human readable XHTML basic format
that provides both human readable documentation about a namespace and a
bunch of machine processable links to related resources such as style
sheets, schemata, and so on.
RDDL is a simple, effective way to document your namespaces. As a
founding member of the non-nominalist, anti namespace 404 movement, I
urge you to use RDDL liberally.
NOTES
[1]
[2]
[3] | http://www.itworld.com/nl/xml_prac/07182002/ | crawl-001 | refinedweb | 573 | 70.43 |
Micropython + LittlevGL
What is Micropython?
Micropython is Python for microcontrollers.
With Micropython you can write Python3 code and run it on bare metal architectures with limited resources.
Micropython highlights
- Compact - fit and run within just 256k of code space and 16k of RAM. No OS is needed, although you can also run it with OS, if you want.
- Compatible - strives to be as compatible as possible with normal Python (known as CPython)
- Versatile - Supports many architectures (x86, x86-64, ARM, ARM Thumb, Xtensa)
- Interactive - No need for the compile-flash-boot cycle. With the REPL (interactive prompt) you can type commands and execute them immediately, run scripts etc.
- Popular - Many platforms are supported. User base is growing bigger.
Notable forks: MicroPython, CircuitPython, MicroPython_ESP32_psRAM_LoBo
- Embedded Oriented - Comes with modules specifically for embedded systems, such as the machine module for accessing low-level hardware (I/O pins, ADC, UART, SPI, I2C, RTC, Timers etc.)
Why Micropython + LittlevGL?
Micropython today does not have a good high-level GUI library.
LittlevGL is a good high-level GUI library, it’s implemented in C and its API is in C.
LittlevGL is an Object Oriented Compenent Based library, which seems a natural candidate to map into a higher level language, such as Python.
Here are some advantages of using LittlevGL in Micropython:
- Develop GUI in Python, a very popular high level language. Use paradigms such as Object Oriented Programming.
- GUI development requires multiple iterations to get things right.
With C, each iteration consists of
Change code➥
Build➥
Flash➥
Run.
In Micropython it’s just
Change code➥
Run. You can even run commands interactively using the REPL (the interactive prompt)
Micropython + LittlevGL could be used for:
- Fast prototyping GUI.
- Shorten the cycle of changing and fine-tuning the GUI.
- Model the GUI in a more absract way by defining reusable composite objects, taking advantage of Python’s language features such as Inheritance, Closures, List Comprehension, Generators, Exception Handling, Arbitrary Precision Integers and others.
- Make LittlevGL accessible to a larger audiance. No need to know C in order to create a nice GUI on an embedded system.
This goes well with CircuitPython vision. CircuitPython was designed with education in mind, to make it easier for new or unexperienced users to get started with embedded development.
So how does it look like?
TL;DR: It’s very much like the C API, but Object Oriented for LittlevGL componets.
Let’s dive right into an example!
A simple example
import lvgl as lv

lv.init()

scr = lv.obj()
btn = lv.btn(scr)
btn.align(lv.scr_act(), lv.ALIGN.CENTER, 0, 0)
label = lv.label(btn)
label.set_text("Button")

lv.scr_load(scr)
In this example we create a button, align it to center and add a text label on it, “Button”.
Finally, we load the screen with the button, in order to display it.
A little more advanced example
In this example I’ll assume you already have some basic knowledge of LittlevGL. If you don’t - please have a quick look at LittlevGL tutorial.
class SymbolButton(lv.btn):
    def __init__(self, parent, symbol, text):
        super().__init__(parent)
        self.symbol = lv.label(self)
        self.symbol.set_text(symbol)
        self.symbol.set_style(symbolstyle)
        self.label = lv.label(self)
        self.label.set_text(text)
In this example we create a reusable composite component called SymbolButton.
It's a class, so we can create object instances from it. It's composite, because it consists of several native LittlevGL objects:
- A Button - SymbolButton inherits from lv.btn. lv.btn is a native LittlevGL button component.
- A Symbol label - a label with a symbol style (symbol font) as a child of self, i.e. a child of the parent button that SymbolButton inherits from. lv.label is a native LittlevGL label component that represents some text inside another component.
- A Text label - a label with some text, as another child of self.
The SymbolButton constructor (the __init__ function) does nothing more than create the two labels and set their contents and style.
Here is an example of how to use our SymbolButton:
self.btn1 = SymbolButton(page, lv.SYMBOL.PLAY, "Play")
self.btn1.set_size(140,100)
self.btn1.align(None, lv.ALIGN.IN_TOP_LEFT, 10, 0)

self.btn2 = SymbolButton(page, lv.SYMBOL.PAUSE, "Pause")
self.btn2.set_size(140,100)
self.btn2.align(self.btn1, lv.ALIGN.OUT_RIGHT_TOP, 10, 0)
Here, we set the size of each button, align btn1 to the page and align btn2 relative to btn1.
We call the set_size and align methods of our composite component SymbolButton - these methods were inherited from SymbolButton's parent, lv.btn, which is a native LittlevGL object.
The result would look something like this:
For a more complete example, which includes other object types as well as action callbacks and driver registration, please have a look at this little demo script.
Here are some more examples of how to use LittlevGL in Micropython:
Creating a screen with a button and a label
scr = lv.obj()
btn = lv.btn(scr)
btn.align(lv.scr_act(), lv.ALIGN.CENTER, 0, 0)
label = lv.label(btn)
label.set_text("Button")

# Load the screen
lv.scr_load(scr)
Creating an instance of a struct
symbolstyle = lv.style_t(lv.style_plain)
symbolstyle would be an instance of lv_style_t initialized to the same value as lv_style_plain.
Setting a field in a struct
symbolstyle.text.color = lv.color_hex(0xffffff)
symbolstyle.text.color would be initialized to the color struct returned by lv_color_hex.
Setting a nested struct using dict
symbolstyle.text.color = {"red":0xff, "green":0xff, "blue":0xff}
Creating an instance of an object
self.tabview = lv.tabview(lv.scr_act())
The first argument to an object constructor is the parent object; the second indicates which element to copy this element from.
Calling an object method
self.symbol.align(self, lv.ALIGN.CENTER,0,0)
In this example lv.ALIGN is an enum and lv.ALIGN.CENTER is an enum member (an integer value).
Using callbacks
for btn, name in [(self.btn1, 'Play'), (self.btn2, 'Pause')]:
    btn.set_action(lv.btn.ACTION.CLICK,
                   lambda action, name=name:
                       self.label.set_text('%s click' % name) or lv.RES.OK)
Here, we have a loop that sets an action for buttons btn1 and btn2. The action of btn1 is to set the label text to "Play click", and the action of btn2 is to set the label text to "Pause click".
How does this work?
There are two Python features you first need to understand: lambdas and closures.
The set_action function expects two parameters: an action enum (CLICK in this case) and a function. In Python, functions are "first class"; this means they can be treated as values and passed to other functions, as in this case.
The function we are passing is a lambda, which is an anonymous function. Its first parameter is the action, and its second parameter is the name variable from the for loop. The function does not use the action parameter, but it uses name for setting the label's text.
After setting the label's text, the lambda function finishes and returns the lv.RES.OK value. A lambda cannot contain a return statement, since its body must be an expression. set_text evaluates to None, so set_text(...) or lv.RES.OK evaluates to lv.RES.OK, which is treated as the lambda's return value.
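To see the `or` trick in isolation, here is a tiny self-contained sketch; `set_text` and `RES_OK` below are stand-ins for the real lvgl names, not the actual API:

```python
RES_OK = 0  # stand-in for a result code such as lv.RES.OK

def set_text(s):
    # Stand-in for a mutator like label.set_text: it does its work
    # and implicitly returns None, as most Python mutators do.
    pass

result = set_text("Play click") or RES_OK
print(result)  # prints 0: set_text(...) is None (falsy), so `or` yields RES_OK
```

Because `None` is falsy, `a or b` evaluates to `b` whenever `a` is `None`, which is exactly what the lambda relies on.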
You might ask yourself - why do we need to pass name as a parameter? Why not use it directly in the lambda, like this: lambda action: self.label.set_text('%s click' % name)?
Well, this would not work correctly! Using name like this creates a closure - a function object that remembers values from enclosing scopes, name in this case. The problem is that in Python, the resolution of name happens when the lambda is executed. By the time the lambda runs, it's too late: name has already been set to 'Pause', so both buttons would set the text to "Pause click". We need name to be resolved when the for loop iteration is executed, not when the lambda function is executed; therefore we pass name as a parameter, and that is the moment it is resolved. Here is a short SO post that explains this.
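The late-binding pitfall is easy to reproduce in plain Python, without any lvgl code; compare a lambda that closes over the loop variable with one that captures it through a default argument:

```python
late_handlers = []
bound_handlers = []
for name in ('Play', 'Pause'):
    late_handlers.append(lambda: '%s click' % name)             # closure: name resolved at call time
    bound_handlers.append(lambda name=name: '%s click' % name)  # default arg: bound per iteration

print(late_handlers[0]())   # 'Pause click' - by call time, name has advanced to 'Pause'
print(bound_handlers[0]())  # 'Play click'  - captured when the lambda was defined
```

Both closures in `late_handlers` see the same final value of `name`, while the default-argument versions each keep the value from their own iteration.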
Currently the binding is limited to only one callback per object.
How does it work?
TL;DR: A script parses LittlevGL headers and creates a Micropython module.
To use LittlevGL in Micropython, you need Micropython Binding for LittlevGL.
This binding is a generator for the LittlevGL Micropython module.
It's essentially a Python script that reads and parses LittlevGL C headers and generates a Micropython module from them. This module can be used in Micropython to access most of the LittlevGL API.
LittlevGL is an Object Oriented component-based library. There is a base class called lv_obj from which all other components inherit, and a hierarchy between the components. Objects have their own method functions, inherit their parent's methods, etc.
Micropython Binding for LittlevGL tries to take advantage of this design, and models this class hierarchy in Python. You can create your own (pure Python) composite components from existing LittlevGL components by inheritance.
For more details, please refer to the README of Micropython Binding for LittlevGL.
How can I use it?
TL;DR: The quickest way to start: fork lv_micropython. It has working Unix (Linux) and ESP32 ports of Micropython + LittlevGL.
Micropython Binding for LittlevGL (lv_binding_micropython) was designed to make it simple to use LittlevGL with Micropython. In principle it can support any Micropython fork.
To add it to some Micropython fork you need to add lv_binding_micropython under the Micropython lib directory as a git submodule. lv_binding_micropython itself contains LittlevGL as a git submodule.
In the Micropython code itself, very few changes are needed. You need to add some lines to the Micropython Makefile in order to create the LittlevGL binding module and compile LittlevGL, and you also need to add the new lvgl module to Micropython by editing mpconfigport.h.
As an example, I've created lv_micropython - a Micropython fork with LittlevGL binding.
You can use it as is, or as an example of how to integrate LittlevGL with Micropython.
lv_micropython can currently be used with LittlevGL on the unix port and on the ESP32 port.
LittlevGL needs drivers for the display and for the input device.
The Micropython binding contains some example drivers that are registered and used on lv_micropython:
- SDL unix drivers (display and mouse)
- ILI9341 driver for ESP32.
- Raw Resistive Touch for ESP32 (ADC connected to screen directly, no touch IC)
It is easy to create new drivers for other displays and input devices. If you add some new driver, we would be happy to add it to Micropython Binding, so please send us a PR!
How can I know which LittlevGL objects and functions are available on Micropython?
Actually, almost all of them are available!
If some are missing and you need them, please open an issue on the Micropython Binding Issues section, and I'll try to add them.
- Run Micropython with the LittlevGL module enabled (for example, lv_micropython)
- Open the REPL (interactive console) and import lvgl as lv
- Type lv. + TAB for completion. All supported classes and functions of LittlevGL will be displayed.
- Another option: help(lv)
- Another option: print('\n'.join(dir(lv)))
- You can also do that recursively. For example lv.btn. + TAB, or print('\n'.join(dir(lv.btn)))
You can also have a look at the LittlevGL binding module itself. It is generated during the Micropython build, and is usually called lv_mpy.c.
That's a huge API! There are more than 25K lines of code in the LittlevGL binding module alone - before counting LittlevGL code itself!
It depends on LittlevGL configuration. It can be small or large.
Remember that the LittlevGL binding module is generated when you build Micropython, based on the LittlevGL headers and configuration file - lv_conf.h.
If you enable everything in lv_conf.h, the module will be large. You can disable features and remove unneeded components by changing definitions in lv_conf.h, and the module will become much smaller.
Anyway, remember that the module lives in program memory. It does not consume RAM by itself, only ROM.
From a RAM perspective, every instance of a LittlevGL object will usually consume only a few extra bytes, to represent a Micropython wrapper object around the LittlevGL object.
I would like to try it out! What is the quickest way to start?
The quickest way to start: fork lv_micropython. It has working Unix (Linux) and ESP32 ports of Micropython + LittlevGL.
LittlevGL on Python? Isn’t it kinda.. slow?
No.
All LittlevGL functionality (such as rendering the graphics) is still in C.
The Micropython binding only provides wrappers for LittlevGL API, such as creating components, setting their properties, layout, styles etc. Very few cycles are spent over there compared to other LittlevGL functionality.
Can I use LittlevGL binding on XXXX Micropython fork?
Probably yes!
You would need to add Micropython Binding for LittlevGL as a submodule in your fork, and make some small changes to the Makefile and mpconfigport.h in your port, but that's about it.
For more details please have a look at the README.
Can I use LittlevGL binding with XXXX display/input-device hardware?
Yes, but you need a driver.
LittlevGL requires a driver for Display and Input device.
Once you have a C driver for your hardware, it’s very simple to wrap it as a module in Micropython and use it with LittlevGL Binding for Micropython.
You can see some examples of such drivers (and their wrapper Micropython modules) in the driver directory of LittlevGL Binding for Micropython.
I need to allocate a LittlevGL struct (such as Style, Color etc.) How can I do that? How do I allocate/deallocate memory for it?
In most cases you don’t need to worry about memory allocation.
That’s because LittlevGL can take advantage of Micropython’s gc! (Garbage Collection)
When some memory is allocated, Micropython will know when to release it, when it is no longer needed.
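The general mechanism can be illustrated with plain Python objects; this sketch shows ordinary garbage collection, not lvgl itself - the Style class below is just a stand-in for a wrapped struct:

```python
import gc
import weakref

class Style:
    """Stand-in for a wrapped struct such as lv.style_t."""

s = Style()
probe = weakref.ref(s)  # observe the object without keeping it alive
del s                   # drop the last reference
gc.collect()            # ask the collector to run (CPython frees it on the del already)
print(probe() is None)  # True: the memory was released with no manual free
```

The same happens to wrapped LittlevGL structs: once nothing in your script references them, the garbage collector reclaims them.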
LittlevGL structs are implemented as Micropython classes under the lvgl module.
You can create them as any other object:
import lvgl as lv
s = lv.style_t()
You can also create a struct which is a copy of another struct:
import lvgl as lv
s = lv.style_t(lv.style_plain)
You can access them much like C structs, using Python attributes:
s.text.color = lv.color_hex(0xffffff)
Something is wrong / not working / missing in LittlevGL on Micropython!
Please report bugs and problems on the Micropython Binding Issues section of Micropython Binding for LittlevGL on GitHub.
You can also contact us on LittlevGL Forum for questions, or any other discussions. | https://blog.littlevgl.com/2019-02-20/micropython-bindings | CC-MAIN-2019-47 | refinedweb | 2,422 | 59.09 |
CoreCoder
LANGUAGES: C# | VB.NET
ASP.NET VERSIONS: 1.x | 2.0
ASP.NET Control Builders
Define a Custom Control Builder for a Custom Server Control
By Dino Esposito
When you author an ASP.NET Web page you use a special markup language to identify constituent controls. The same markup language is employed by Visual Studio when you drag and drop controls from the toolbox onto a Web form. When you save the project, Visual Studio generates a markup script to describe the controls in the page and their settings. As a result, the final ASP.NET page is saved as a text file with the popular .aspx extension. When a request for the .aspx resource arrives, ASP.NET processes the source code of the .aspx file to build a page class dynamically.
The markup provides a description of the control; the ASP.NET parser is responsible for interpreting that markup and generating proper C# or Visual Basic .NET code. Are there any rules to guide the behavior of the parser? How can the parser know about the intended behavior of a particular markup element?
Each and every ASP.NET server control is associated with a control builder class. A control builder works side by side with the page parser and helps to analyze the markup for the control and to build all the necessary child controls. In this article, I'll review the typical behavior of a control builder and discuss how to define a custom control builder for a custom server control.
The ControlBuilder Class
The control builder class handles any child markup element that the main tag of a control contains. The base class for all ASP.NET control builders is ControlBuilder. What's the typical behavior of a control builder?
The ControlBuilder class parses any top-level block of markup that is flagged with the runat attribute. The builder processes every nested element it encounters within the control's tags and adds a child control to the Controls collection of the control. In addition, it creates literal controls for the text located between nested control tags.
Most built-in server controls have their own control builders; for custom controls the default control builder works just fine (except in a few circumstances). You'll use a custom control builder if the control you're authoring has a complex layout or contains child tags that require ad hoc parsing.
Note, though, that as a control developer, you have other tools to partially govern the parsing process of the control's markup. Let's tackle this point first.
Parsing Attributes
If you derive a custom control from an existing control, you typically stick to the existing builder and don't change the markup layout. As a result, either your control doesn't have child elements, or any child elements are taken care of by the default builder. If you create your control from an abstract class, such as WebControl or CompositeControl, you can make yourself responsible for the control markup. When do you need to add child elements to a control's markup? Style and collection properties may require a child tag. The default control builder, though, can take care of these situations with a little help from a couple of attributes: ParseChildren and PersistenceMode.
The ParseChildren attribute tells the ASP.NET parser and control builder how to parse the nested content of a control. The attribute takes a Boolean value that indicates whether the nested content should be parsed as properties or child controls.
The PersistenceMode attribute indicates how a control property is persisted declaratively in a host page. Figure 1 lists possible modes of persistence. The most common setting is InnerProperty, which instructs Visual Studio to save the contents of the property as a nested tag named after the property:
<x:MyControl runat="server" ... >
<MyProperty>
:
</MyProperty>
</x:MyControl>
Figure 1: Persistence modes for control properties.
If you choose InnerDefaultProperty, you can have only one nested tag; by opting for InnerProperty, you can have as many nested tags as needed. This is good for rich controls with multiple templates and styles.
You need a control builder when you need to take full control of the contents inside the opening and closing tag of the control.
Designing a Control Builder
The control builder class is automatically replaced when you apply the ControlBuilder attribute to a custom control, as follows:
[ControlBuilder(typeof(MyControlBuilder))]
public class MyControl
{
:
}
The MyControlBuilder class kicks in when the ASP.NET parser gets to process the markup of any instance of MyControl. MyControlBuilder makes itself responsible for any control tree that results from the parsing process.
To understand the role of the control builder, let's consider the markup for a list control, such as DropDownList:
<asp:dropdownlist
<asp:listitem ... />
<asp:listitem ... />
<asp:listitem ... />
:
</asp:dropdownlist>
The <asp:ListItem> element indicates an object of type ListItem that is parsed to populate the Items collection of the DropDownList. A control builder determines the type of the object behind the <asp:ListItem> tag and how the information it may contain is stored inside the control.
Let's consider the possible markup of a custom list control, such as a TextBoxList control - a control that renders out a table with a column for the label and a column for the textbox:
<x:textboxlist
<x:inputfield ... />
<x:inputfield ... />
<x:inputfield ... />
:
</x:textboxlist>
To implement a control in accordance with the schema just mentioned, three classes are needed: the TextBoxList class for the control, the InputField class for the child element, and the control builder class to control the page parser.
The Custom Control Builder Class
The control builder class is rarely a very complex piece of code. Its structure is extremely simple and agile and basically consists of a series of overrides. The only method you absolutely need to override for a significant and functional implementation is GetChildControlType.
The GetChildControlType method returns the type of the control's child tags. The default implementation of the base class simply returns null. The method takes two arguments: the name of the child tag found, and its collection of attributes. What programmers should do to implement the method depends mostly on the schema they have in mind. The method is responsible for getting the type that a particular child tag represents. If you need to map the nested markup to some custom structures, the GetChildControlType method is crucial.
In the sample control builder, the GetChildControlType method should take into account any tag named <InputField> and force the runtime to create an instance of the InputField type (see Figure 2).
public class TextBoxListControlBuilder : ControlBuilder
{
public override Type GetChildControlType(
string tagName,
IDictionary attributes)
{
if (tagName.ToLower() == "inputfield")
return typeof(InputField);
return null;
}
}
Public Class TextBoxListControlBuilder : Inherits ControlBuilder
Public Overrides Function GetChildControlType( _
ByVal tagName As String, _
ByVal attributes As IDictionary) As Type
If tagName.ToLower() = "inputfield" Then
Return GetType(InputField)
End If
Return Nothing
End Function
End Class
Figure 2: GetChildControlType should take into account any tag named <InputField> and force the runtime to create an instance of the InputField type.
With a similar implementation, only recognized tags are processed in a custom way. All other tags will be treated by the page parser as literal markup and converted to literal controls, which are added to the control's child control tree. The TextBoxList control features an internal array of InputField instances (see Figure 3).
[ControlBuilder(typeof(TextBoxListControlBuilder))]
[ParseChildren(false)]
public class TextBoxList : WebControl
{
private ArrayList m_formFields = new ArrayList();
public ArrayList Items {
get {return m_formFields;}
}
:
}
<ControlBuilder(GetType(TextBoxListControlBuilder))> _
<ParseChildren(False)> _
Public Class TextBoxList : Inherits WebControl
Private _inputFields As ArrayList
Public ReadOnly Property InputFields As ArrayList
Get
If _inputFields Is Nothing Then
_inputFields = New ArrayList()
End If
Return _inputFields
End Get
End Property
:
End Class
Figure 3: TextBoxList features an internal array of InputField instances.
The ControlBuilder attribute indicates the type of the control builder that must be used for this control. The ParseChildren attribute explicitly states that the general rule that child tags map to properties is not true in this case; parsing still occurs, but it is taken care of by the custom builder. The control also needs an additional override - the AddParsedSubObject method:
protected override void AddParsedSubObject(object obj)
{
if (obj is InputField)
m_formFields.Add(obj);
}
Protected Overrides Sub AddParsedSubObject( _
ByVal obj As Object)
If TypeOf(obj) Is InputField Then
_inputFields.Add(obj)
End If
End Sub
Fundamental is the role played by the AddParsedSubObject method. Any tag that the GetChildControlType method recognizes is transformed into a living instance of the specified type. This object is then passed to the AddParsedSubObject method for further processing. If the object is of the correct type, it's added to the internal collection of InputField objects. At this point, once the InputField class is defined, the control is ready for rendering.
The InputField Class
The InputField class gathers information about the textboxes to create within the main control. The class features a couple of string properties named Label and Text. The Label property indicates the text for the label; the Text property indicates the default text for the textbox. Figure 4 shows the full source code of the class. The physical textbox is created when the control renders out. Other properties in the control's programming interface can let page authors access the contents of the child textboxes.
public class InputField
{
private string _label;
private string _text;
public InputField()
{
}
public string Label
{
get { return _label; }
set { _label = value; }
}
public string Text
{
get { return _text; }
set { _text = value; }
}
}
Public Class InputField
Private _label As String
Private _text As String
Public Sub New()
End Sub
Public Property Label As String
Get
Return _label
End Get
Set(ByVal value As String)
_label = value
End Set
End Property
Public Property Text As String
Get
Return _text
End Get
Set(ByVal value As String)
_text = value
End Set
End Property
End Class
Figure 4: The InputField class.
The TextBoxList control will typically be a composite control, and as such will render its contents through the CreateChildControls method. Internally, the method loops through the _inputFields collection and creates as many labels and textboxes as necessary.
In the end, the goal of the control builder is to help the parser understand the contents of the control's markup and to load the proper information inside the control instance bound to the page. If you have a made-to-measure control builder, you can design the control's markup at will. The PersistenceMode attribute assigned to control properties helps Visual Studio persist the markup of the control correctly when you use it in the Web Forms designer.
Conclusion
Most ASP.NET built-in controls have their own builder, so there's no need for you to change or replace it. Custom controls may constitute a different story. If the custom control is designed after an existing control, you normally have no reason to replace the builder. If you want to design a completely custom control and go for a rich and descriptive markup, then use a control builder.
Dino Esposito is a Solid Quality Learning mentor and the author of Programming Microsoft ASP.NET 2.0 (Microsoft Press, 2005). Based in Italy, Dino is a frequent speaker at industry events worldwide. Join the blog at. | https://www.itprotoday.com/web-application-management/aspnet-control-builders-30-oct-2009 | CC-MAIN-2021-17 | refinedweb | 1,883 | 53.31 |
New Developer Content for SharePoint Server 2010 and SharePoint Online
We continually expand and update the Microsoft SharePoint 2010 Software Development Kit (SDK), adding documentation for new and improved features of Microsoft SharePoint Server 2010 and responding to feedback from our customers. With this topic, you can quickly find the latest additions and changes to content and code samples.
The SharePoint 2010 SDK is also available as a download in the Microsoft Download Center. To download the SDK, see SharePoint 2010 Reference: Software Development Kit.
The SharePoint 2010 Class Libraries and Web Service References now include the following namespaces and classes.
October 2011
June 2011
August 2010
You can obtain these and other code samples by downloading and installing the SharePoint 2010 Reference: Software Development Kit. In addition, see the SharePoint code samples available on Code Gallery. | http://msdn.microsoft.com/en-us/library/ff847475(v=office.14).aspx | CC-MAIN-2014-15 | refinedweb | 137 | 51.58 |
Hi, I'm trying to write a loop that keeps running until the user presses a key. Later I'd like to recognise the keypress as part of conditional statements and keep looping.
Here is the code that I've got so far, it has been modified from an example. Currently the loop just keeps running indefinitely, regardless of what I press.
Some extra information that may be important: this code is a small addition to a large working example (which I am modifying) that runs on a remote embedded PC, I've omitted about 450 lines (including lots of headers) for this snippet. Also, it may be useful to know that I'm accessing the PC via PuTTY, a telnet/SSH client.
Thanks for reading, any help appreciated.

Code:
#include <stdio.h>
#include <err.h>
#include <curses.h> /* for ERR */
int main(){
char chr; /* Keyboard character */
int done = 0; /* while condition */
while (done != 1){
printf(". "); /* show that we are looping */
usleep(5000); /* wait a moment */
if ((chr = getch()) != ERR) done = 1; /* if anything has been pressed leave the loop*/
}
printf(" chr = %c\n", chr); /* show which key was pressed */
}
If I have the following struct:
mutable struct FactorF{V,C,T} # variables, cardinality, array type
    vals::T
end
and I have an instance of FactorF where T happens to be Array{Float64,3}, what is the proper way to access the Float64? Here is an example of an approach I found:
import Base: eltype
eltype(::FactorF{V,C,T}) where {V,C,T} = T.parameters[1]

A = FactorF{(1,2,3), (3,3,3), Array{Float64,3}}(rand(3,3,3))
eltype(A) |> println # prints Float64
Is this the proper way to do this?
Note that the original poster on Slack cannot see your response here on Discourse. Consider transcribing the appropriate answer back to Slack, or pinging the poster here on Discourse so they can follow this thread.
{-# LANGUAGE ScopedTypeVariables #-}
{-# OPTIONS -Wall #-}
----------------------------------------------------------------------
-- |
-- Module      :  FRP.Reactive.Internal.Reactive
-- Copyright   :  (c) Conal Elliott 2008
-- License     :  BSD3
----------------------------------------------------------------------

-- | Make the event into a list of futures
eFutures :: EventG t a -> [FutureG t a]
eFutures (Event (Future (Max MaxBound,_))) = []
eFutures (Event (Future (t,a `Stepper` e))) = Future (t,a) : eFutures e

-- TODO: redefine 'eFutures' as an unfold

-- Show a future
sFuture :: (Show t, Show a) => FutureG t a -> String

sFutures fs = let maxleng = 20
                  a   = (intersperse "->" . map sFuture) fs
                  inf = length (take maxleng a) == maxleng
              in
                  if not inf then concat a
                  else concat (take maxleng a) ++ "..."

-- TODO: clean up sFutures def: use intercalate, concat before trimming,
-- and define&use a general function for truncating and adding "...".

-- Test.
instance (Show a, Show b) => Show (EventG a b) where
  show = sFutures . eFutures

instance (Show x, Show y) => Show (ReactiveG x y) where
  show (x `Stepper` e) = show x ++ " `Stepper` " ++ show e

{--------------------------------------------------------------------
    Execution
--------------------------------------------------------------------}

-- | Run an event in the current thread.  Use the given time sink to sync
-- time, i.e., to wait for an output time before performing the action.
runE :: forall t. Ord t => Sink t -> Sink (EventG t Action)
runE sync ~(Event (Future (Max bt,r))) = tsync bt (runR sync r)
 where
   tsync :: AddBounds t -> Sink Action
   tsync MinBound    = id                  -- no wait
   tsync (NoBound t) = (sync t >>)         -- wait
   tsync MaxBound    = const (return ())   -- finished!

-- TODO: I'm not sure about the MaxBound case.  We could instead just wait
-- forever (cheaply).  Try out this terminating definition instead.

-- | Run an event in a new thread, using the given time sink to sync time.
forkE :: Ord t => Sink t -> EventG t Action -> IO ThreadId
forkE = (fmap.fmap) forkIO runE

runR :: Ord t => Sink t -> Sink (ReactiveG t Action)
runR sync (act `Stepper` e) = act >> runE sync e

-- | Run a reactive value in a new thread, using the given time sink to
-- sync time.  The initial action happens in the current thread.
forkR :: Ord t => Sink t -> ReactiveG t Action -> IO ThreadId
forkR = (fmap.fmap) forkIO runR
Re: factoring
- To: mathgroup at smc.vnet.net
- Subject: [mg115101] Re: factoring
- From: George Woodrow III <georgevw3 at mac.com>
- Date: Thu, 30 Dec 2010 19:06:23 -0500 (EST)
In[1]:= Factor[x y - x z + y^2 - y z]

Out[1]= (x + y) (y - z)

You need to put a space between x and y (or a times sign). xy is the
variable with the name 'xy', and not x * y.

george

On Dec 30, 2010, at 4:09 AM, r_poetic wrote:

> Hello,
> an easy question:
>
> why does Factor[xy-xz+y^2-yz] fail to return (x+y)(y-z), and what
> command would do that?
>
> Thanks!
Sometimes, you might need to provide a small window for the user to input a small amount of data with a few text boxes. For example, you may want to provide an edit box for defining search keywords, with the edit box sitting in the Windows task bar. It is more elegant if a label indicating the purpose of the text box does not occupy extra space on the task bar. An easy approach is just to use a tooltip (hint) on the text box. However, you will need to hover the mouse pointer over the text box to show the hint. It would be handy to show the hint all the time. A good existing instance of this design is Google Desktop's Deskbar.
When you don't have keywords defined, "Google" inside the text box tells you to Google (search). After you focus on the text box, or once the text box has some content, the "Google" hint inside the text box disappears.
From a certain point of view, within a small window with a few text boxes, you will always know the purposes of the boxes after running the program a few times. In some scenarios, it is good to keep the window small. To save space for data input, a UI design like Google Deskbar's is handy and economical.
While there are various approaches to implementing this feature, I am not going to develop a new class derived from TextBox; rather, I will just plug these features into a text box. The in-box label is shown when you are not using the box.
If the box has content or is being focused, the in-box label will disappear.
The code was constructed with Visual Studio 2005. The project in the above download link contains two example classes: Flashing and InTextboxLabel. The second one will be discussed in the next article.
public class InTextboxLabel
{
protected TextBoxBase box;
protected string hint;
protected Label label;
public InTextboxLabel(TextBoxBase box, string hint)
{
this.box = box;
this.hint = hint;
box.Enter += new EventHandler(box_Enter);
box.Leave += new EventHandler(box_Leave);
box.TextChanged += new EventHandler(box_TextChanged);
label = new Label();
label.Text = hint;
label.ForeColor = SystemColors.ActiveBorder;
box.Controls.Add(label);
label.Dock = DockStyle.Fill;
label.Click += new EventHandler(panel_Click);
if (String.IsNullOrEmpty(box.Text))
label.Show();
}
void box_TextChanged(object sender, EventArgs e)
{
if (!String.IsNullOrEmpty(box.Text))
label.Hide();
else if (! box.Focused)
label.Show();
}
void panel_Click(object sender, EventArgs e)
{
label.Hide();
box.Select();
}
void box_Leave(object sender, EventArgs e)
{
if (String.IsNullOrEmpty(box.Text))
label.Show();
}
void box_Enter(object sender, EventArgs e)
{
label.Hide();
}
}
In the client code of a Form, you may plug class InTextboxLabel into TextBox objects. When the form is loaded, we have labels embedded in text boxes.
private void Form1_Load(object sender, EventArgs e)
{
new InTextboxLabel(textBox1, "User Name");
new InTextboxLabel(maskedTextBox1, "Password");
}
Will those InTextboxLabel objects be disposed after Form1_Load is finished? No.
Form1_Load
InTextboxLabel provides visual effects to the TextBox object through subscribing to some events of the TextBox object, thus the TextBox object has references to the InTextboxLabel object.
In the attached source code, you may also find a class named InTextBoxGraphic, which can give some visual effects like the one in the Google Taskbar.
InTextBoxGraphic
Rather than developing something like the TextboxWithEmbeddedLabel class derived from the Textbox class, we have InTextboxLable being plugged into a TextBox object. This approach is light-weight and flexible. It can work on derived classes of Textbox like class MaskedTextBox, and in addition, you may merge features introduced by the same design pattern, for example, if you have codes like this:
TextboxWithEmbeddedLabel
Textbox
InTextboxLable
MaskedTextBox
new InTextboxLabel(textBox1, "User Name");
new Flashing(textBox1);
Now we have a flashing text box with an embedded label.
As you can see, this design pattern is based on the builder pattern with extension. The director subscribes to some events of the builder, thus the builder now has a reference to the director. Shall we continue to call this builder pattern? or helper? Just looks similar.
Somehow I came up with names like Agent and Parasite.
Agent
Parasite
As we knew, a pattern is named after a logical structure, call sequences and purposes. I haven't yet known the "official" name that might exist in the programming world. Maybe you can tell me.
Anyway, for the time being, I would call it the Agent pattern, and the class being developed is called the Agent class.
Agent
You might be thinking of introducing an in-box label to other Windows controls like ComboBox or DatetimePicker. Moreover, you might consider evolving this InTextboxLabel into something like InBoxLabel to work on any derived class of the Control class. While it is easy to change the above code, you need to be aware that in different derived classes of the Control class, the Text property has different meanings, for example, property Text in class DateTimePicker and class Panel is inappropriate.
ComboBox
DatetimePicker
InBoxLabel
Control
Text
DateTimePicker
Panel
You might be wondering why class InTextboxLabel does not implement the Idisposable interface, as the class contains a Label object which implements the IDisposable interface. Microsoft FxCop will cry out for this.
Idisposable
Label
IDisposable
When the form or the text box with InTextboxLable is disposed, will the Label object created inside InTextboxLabel be safely disposed? The answer is yes. Though the Label object is created inside InTextboxLabel, it is then assigned to the Controls property of the TextBox object which implement IDisposable. When the TextBox object is disposed, the Controls of the TextBox will be disposed, and the Label object attached will go, then the sequence reaches InTextboxLabel. As the Textbox object is the only one having the reference to InTextboxLabel, GAC reached the end and will dispose InTextboxLabel. I have tested with Spy+ to monitor the Windows resources allocated. Spy+ showed that the window handle to the Label was gone when the TextBox was disposed.
InTextboxLabel
Controls
IDisposable
Yes, it is more robust to have IDisposable implemented if you are going to evolve this class for flexibility of adapting different use cases. The current implementation works for these scenarios:
I had some very thoughtful discussions with a colleague who has in-depth knowledge and experience of .NET. He pointed out many weaknesses of the implementation regarding a greater vision. The following works will make the class more healthy and more robust for different scenarios:
Implement IDisposable interface
Hide the constructors, and provide some static functions delivering the instantiated class. So, we will have something like:
static
public static InTextboxLabel AttachLabelToTextBox(TextBox textBox, string hint)
{return new InTextboxLabel(textbox, hint); }
I agree this will make the interfaces of the class more meaningful and more friendly. Actually in .NET 2, there are a lot of classes delivering objects this way.
After all, I will just remind you that the discussion around IDisposable in this section is not part of this design pattern, as this pattern does not necessarily create disposable objects inside.
I have been using this design pattern for years, with Delphi coding, when I felt that developing a derived class or using a component suit is overkill or too expensive.
Generally I don't like a very fancy UI, skinny things, strange shape buttons, etc. which I consider are too disturbing and confusing. I generally just need a little bit extra from existing generic Windows controls provided by the development tools like Delphi and Visual Studio. There are many high quality 3rd party components delivering powerful and consistent UI experiences, and I do use them, such as Jedi Libraries and Turbo Power suits. They are open sources, and it is quite safe to stay with them. I did sometimes use some commercial packages like Info Power, mostly only when I needed some powerful UI features urgently. Though I could purchase the source code, I did not have the interest or resources to maintain these heavy codes.
As a matter of fact, I gradually had removed visual components of Info Power suit from my legacy projects when I had time to do so, replaced the codes with the agent pattern described above. The codes are short and easy to maintain. Just let me outline all benefits of this agile approach.
Of course, the Agent pattern is not a silver bullet. It has its own use cases and limitations.
Although previous and latter examples are with visual controls, this design pattern is not limited to visual controls. Later, I will provide some links to some examples codes which were for TDataset in Delphi.. | http://www.codeproject.com/Articles/17209/Why-Develop-Custom-Controls-Just-Customize-Gener | CC-MAIN-2015-32 | refinedweb | 1,433 | 62.88 |
Currently I have an Arduino hooked up to a Raspberry Pi. The Arduino controls a water level detection circuit in service to an automatic pet water bowl. The program on the Arduino has several "serial.println()" statements to update the user on the status of the water bowl, filling or full. I have the Arduino connected to the Raspberry Pi via USB. The small python program on the Pi that captures the serial data from the Arduino is as follows:
import serial
ser = serial.Serial('/dev/ttyUSB0',9600)
file = open('index.html', 'a+')
message1 = """<html>
<head><meta http-</head>
<body><p>"""
message2 = """</p></body>
</html>"""
while 1:
line=ser.readline()
messagefinal1 = message1 + line + message2
print(line)
file.write(messagefinal1)
file.close()
Traceback (most recent call last):
File "commprog.py", line 15, in <module>
file.write(messagefinal1)
ValueError: I/O operation on closed file
After the first iteration of your while loop, you close the file and never open it again for editing. When you try to append to a file that is closed, you get an error. You could instead move the open statement inside your loop like so:
while 1: line=ser.readline() messagefinal1 = message1 + line + message2 print(line) file = open('index.html', 'a+') file.write(messagefinal1) file.close() | https://codedump.io/share/xtwmDp6plnGB/1/arduino-raspberry-pi-and-html-page | CC-MAIN-2017-43 | refinedweb | 210 | 60.01 |
Created on 2011-11-09 22:17 by Nekmo, last changed 2019-07-29 11:38 by vstinner.
Currently, the mapping of namespaces is global and can cause failures if multiple instances are used or in multithreading. The variable is in xml.etree.ElementTree._namespace_map. I ask it to be switched to xml.etree._Element instance.
Tagging this as targeting 3.3.
Nekmo, could you possibly poste some code showing the problem?
In my case, I have several clients, and they define the namespaces. I am interested in return the same namespace that they gave me, for example, the client "A" gives me this:
<house:iq xmlns:
To name the namespace, I set it at nsmap:
>>> import xml.etree.ElementTree as etree
>>> etree.register_namespace('house', '')
>>> etree._namespace_map
{'': 'house',
'': 'dc',
'': 'wsdl',
'': 'rdf',
'': 'html',
'': 'xs',
'': 'xsi',
'': 'xml'}
Thus, keeping the name of the namespace:
>>> etree.tostring(etree.Element('{}iq'))
b'<house:iq xmlns:'
But if I have a client "B", which uses a different name, and run in parallel, problems can occur:
<home:iq xmlns:
>>> import xml.etree.ElementTree as etree
>>> etree.register_namespace('home', '')
>>> etree._namespace_map
{'': 'home',
'': 'dc',
'': 'wsdl',
'': 'rdf',
'': 'html',
'': 'xs',
'': 'xsi',
'': 'xml'}
Therefore, I ask that _namespace_map is within etree._Element instance, and not global
This patch proposes an implementation of the feature.
>>> from xml.etree import ElementTree as ET
>>> ET.tostring(ET.Element('{}iq'), encoding="unicode", namespaces={'': 'home'})
'<home:iq xmlns:'
Florent, thanks for the notification.
Nekmo, note that you are misusing this feature. The _namespace_map is meant to provide "well known namespace prefixes" only, so that common namespaces end up using the "expected" prefix. This is also the reason why it maps namespaces to prefixes and not the other way round. It is not meant to temporarily assign arbitrary prefix to namespaces. That is the reason for it being a global option.
That being said, lxml.etree's Element factory takes an "nsmap" parameter that implements the feature you want. It's documented here:
Note that it maps prefixes to namespaces and not the other way round. This is because there is a corresponding "nsmap" property on Elements that provides the currently defined prefixes in the context of an Element. ElementTree itself does not (and cannot) support this property because it drops the prefixes during parsing. However, I would still request that an implementation of the parameter to the Element() factory should be compatible for both libraries.
Also look for "nsmap" in the compatibility docs (appears in two sections):
Reading the proposed patch, I must agree that it makes more sense in ElementTree to support this as a serialiser feature. ET's tree model doesn't have a notion of prefixes, whereas it's native to lxml.etree.
Two major advantages of putting this into the serialiser are: 1) cET doesn't have to be modified, and 2) it does not require additional memory to store the nsmap reference on each Element. The latter by itself is a very valuable property, given that cET aims specifically at a low memory overhead.
I see a couple of drawbacks:
1) it only supports the case that namespaces are globally defined. The implementation cannot handle the case that local namespaces should only be defined in subtrees, or that prefixes are being reused. This is no real restriction because globally defined namespaces are usually just fine. It's more of an inconvenience in some cases, such as multi-namespace languages like SOAP or WSDL+XSD, where namespaces are commonly declared on the subtree where they start being used.
2) lxml.etree cannot support this because it keeps the prefixes in the tree nodes and uses them on serialisation. This cannot easily be overridden because the serialiser is part of libxml2.
I didn't see in the patch how (or if?) the prefix redefinition case is handled. Given that prefixes are always defined globally, it would be nice if this only resulted in an error if two namespaces that are really used in the document map to the same prefix, not always when the namespace dict is redundant by itself.
Also note that it's good to be explicit about the keyword arguments that a function accepts. It aids when help(tostring) tells you directly what you can pass in, instead of just printing "**kw".
Thank you Stefan for the comments.
I've added the prefix collision detection, and removed the **kw argument.
(+ tests)
Updated with documentation.
Thank you for the review.
I know this does not cover different namespaces in subtree.
But this use case seems very specific. The user could find other means to achieve it.
Given that this is a major new feature for the serialiser in ElementTree, I think it's worth asking Fredrik for any comments.
Of course it's better to have someone else to review the patch.
However in this case, I'm not sure it is a major feature.
BTW, I noticed that effbot is currently marked as *inactive* maintainer
If it is not an oversight, it means that this issue might wait "an extended period" before receiving a response.
Do we merge the patch for 3.3?
I'm +1 on this (patch submitted 8 months ago, backward compatible and reviewed).
Can this be honestly classified as a bugfix though? If it's a feature it will have to be postponed to 3.4
Looks like a new feature to me.
Well, it fixes the behavior of ElementTree in some multi-threaded cases, provided you pass the namespace map as an argument of the serializer call.
The fix implements an optional argument for this use case.
As a side effect, it makes it easier to work with custom namespaces.
If the consensus is to wait for next version, I'm fine with that.
Florent, what you describe is exactly the definition of a new feature.
Users even have to change their code in order to make use of it.
I'm changing the issue name to reflect the direction it's taken. Florent, once 3.3 is branched, could you please refresh the patch vs. head for 3.4 (don't forget the "what's new") and I'll review it for commit.
I'd also expand the doc of register_namespace to note what it should and shouldn't be used for (once this feature is added).
This patch no longer applies to the tip of default. Whoever updates it should also address Eli's comment about expanding the register_namespace doc. I'm adding the 'easy' tag because Florent already did the hard work, and at this point it is just a patch update and doc change.
This issue is 8 years old and has already 3 patches attached, it is not newcomer friendly: I remove the "easy" keyword. | https://bugs.python.org/issue13378 | CC-MAIN-2020-50 | refinedweb | 1,125 | 65.52 |
This content has been marked as final. Show 40 replies
15. Re: Oracle 10.1 + Leopard491831 Nov 3, 2007 12:11 PM (in response to 150819)The gcc 3.3 compiler does come with the new 4.x compiler in the Xcode download, but isn't installed by default. You can go back to the installer and click on the configure-button (or something like that) to choose extra options, including gcc 3.3
However, there's no gcc_select any more. But since it's only a shellscript, I just copied it over from another machine. Seems to work OK.
It is important to use gcc 3.3 : it won't work correctly if you use anything later.
16. Re: Oracle 10.1 + Leopard606209 Nov 15, 2007 4:24 PM (in response to 491831)Upgraded to Leopard 10.5.1 and tried to bring up Oracle 10.1.0.5 again with processes set to 75. Still get a kernel panic, so Leopard 10.5.1 has not fixed the issue.
Rand
17. Re: Oracle 10.1 + Leopard606209 Dec 8, 2007 9:03 PM (in response to 491831)I believe that I have discovered the problem in Apple's 10.5 kernel that causes the kernel panic when the Oracle server starts with "processes" larger than about 50. I've built and booted a Darwin kernel with a patch to fix the problem and was able to start Oracle with "processes=400". No kernel panic.
I've reported the problem to Apple along with a description of the cause and a solution. Hopefully Apple will patch the kernel to fix this bug for the next release. The Apple problem ID is 5574916.
Rand
18. Re: Oracle 10.1 + Leopard615529 Jan 1, 2008 6:18 AM (in response to 606209)Hi,
Would it be possible to tell us how you modified the kernel in order to change the number of processes. I'd like to test that out too!
Thanks
19. Re: Oracle 10.1 + Leopard606209 Jan 1, 2008 4:37 PM (in response to 491831)Sure,
The change is to the source file,
xnu-1228/bsd/kern/sysv_sem.c
in routine semop
There is a block of code between
#if CONFIG_MACF
...
#endif
that is in the wrong location. It should be moved following the "if" statement that immediately follows it. That if statement checks the validity of the value of the nsops variable and Oracle apparently passes a value in the stmop_args structure which exceeds MAX_SOPS. The code between #if CONFIG_MACF and #endif uses nsops but if the value exceeds MAX_SOPS that causes the kernel panic. Moving the code between #if CONFIG_MACF and #endif after the validity check prevents the kernel panic and Oracle is able to startup successfully, in my tests.
20. Re: Oracle 10.1 + Leopard606209 Feb 11, 2008 5:14 PM (in response to 491831)I just tested to see if Apple fix the bug that I reported and told them what was wrong and how to fix it. I'm sorry to say that Apple did not fix the bug in 10.5.2 and the kernel still panics if processes is set to something larger than about 75.
The two bug report numbers are:
bug id 5736091 for Mac OS X 10.5.1
bug id 5574916 for Mac OS X 10.5.2
Suggest people start complaining to Apple that this bug has not been fixed. It was originally reported on November 1, 2007.
Rand
21. Re: Oracle 10.1 + Leopard606209 Feb 11, 2008 5:55 PM (in response to 491831)Oops,
I mixed up the bug ID numbers, they should be
bug ID 5574916 for Leopard 10.5.1
bug ID 5736091 for Leopard 10.5.2
Sorry 'bout that.
22. Re: Oracle 10.1 + Leopard615529 Feb 11, 2008 6:24 PM (in response to 606209)Same thing here. Oracle keeps crashing when started on 10.5.2.
We will have to wait until 10.5.2 darwin kernel becomes available.... Or just modify the 10.5.1 kernel...
23. Re: Oracle 10.1 + Leopard623179 Feb 14, 2008 6:25 AM (in response to 150819)I get the same, too, using gcc 3.3. Has anybody so far a solution to this?
24. Re: Oracle 10.1 + Leopard623179 Feb 15, 2008 6:30 AM (in response to 623179)This is what I get:
ld: file not found: /Users/oracle/service/u01/app/oracle/product/10.1.0/db_1/lib/libclntsh.dylib.10.1
Error in invoking target 'install' of makefile '/Users/oracle/service/u01/app/oracle/product/10.1.0/db_1/sqlplus/lib/ins_sqlplus.mk'.
Message was edited by:
user620176
25. Re: Oracle 10.1 + Leopard298346 Feb 22, 2008 5:55 PM (in response to 623179)I contacted apple via my apple developer connection account to find out about this bug.
Here is their response:
Thank you for contacting us regarding the status of Bug ID# 5574916.
At this time, there isn't any new information available for this issue. I have checked with engineering, and the issue is still being investigated.
26. Re: Oracle 10.1 + Leopard579657 Feb 25, 2008 2:28 PM (in response to 491831)Hi all,
I've a Mac with OSX 10.5.1 installed on it. Now I'd like to have an Oracle database running, so I downloaded the 10g release for Mac. But well, doesn't seem to work so far.
I followed these instructions:
I'm new on Mac and 10.5.1 is my first system. I can't go back to a 10.4 installation to get stuff from there.
I'm at the point where I have to call runInstaller -> Oracle Installer opens but when I click on Next it crashes.
Does anyone know how I could proceed?
Ah, well, and one more thing. I'm running the normal OS X version. Not OS X Server... could this be a problem?
27. Re: Oracle 10.1 + LeopardRonald Rood Feb 25, 2008 3:24 PM (in response to 579657)What cpu do you have ? Currently it only runs on powerpc, not on intel.
R.
28. Re: Oracle 10.1 + Leopard298346 Feb 25, 2008 7:03 PM (in response to 491831)I've been able to get 10.1.0.3 running ok on Leopard with the processes set to 65, but I also reduced the job_queue_processes from 10 to 5. When I set processes to 50, I had some other problems with oracle so the change for the job_queue_processes was an attempt to reduce the overall amount of processes tied to oracle. Don't take this as anything more than a "for what it's worth", I'm not saying these are related for sure, just that by changing both I was able to have a higher value for processes.
Message was edited by:
tobermei
29. Re: Oracle 10.1 + Leopard579657 Feb 26, 2008 2:03 AM (in response to 491831)Oh - thanks for the quick replies.
Well, I have an Intel processor. I didn't read anything on Oracle's site or in the documentation that it would run on PPC only, so wasn't aware of that.
I hope Oracle will update their release for OSX... I mean, Apple delivers Intel Macs for quite a long time now. What about all these people? Well, so the only solution for me seems to be to install a VM with Windows or Linux and install Oracle in that VM... | https://community.oracle.com/thread/581719?start=15&tstart=60 | CC-MAIN-2015-18 | refinedweb | 1,247 | 77.03 |
glview_register_frame_callback()
Register a callback that will be fired every time the app is expected to draw a frame.
Synopsis:
#include <glview/glview.h>
GLVIEW_API int glview_register_frame_callback(frame_callback frame_callback)
Since:
BlackBerry 10.0.0
Arguments:
- frame_callback
The function to call after all events have been processed, and the app is expected to draw the frame.
Library:libglview (For the qcc command, use the -l glview option to link against this library)
Description:
The display callback is initially set by the glview_initialize() function and is the only mandatory callback. An app can use this function to set a different display callback. A display callback must always be registered and valid. Setting the callback to NULL is invalid and will fail.
Returns:
- EFAULT: Attempt to register a NULL display callback. In the event of GLVIEW_FAILURE, the previously registered frame_callback will remain.
Last modified: 2014-09-30
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.glview.lib_ref/topic/glview_register_frame_callback.html | CC-MAIN-2015-18 | refinedweb | 159 | 59.8 |
Sending email through Gmail SMTP server with C#
THE FOLLOWING WILL ALMOST CERTAINLY BE THE ANSWER TO YOUR QUESTION IF ALL ELSE HAS FAILED:
I got the exact same error, it turns out Google's new password strengh measuring algorithm has changed deeming my current password as too weak, and not telling me a thing about it (not even a message or warning)... How did I discover this? Well, I chose to change my password to see if it would help (tried everything else to no avail) and when I changed my password, it worked!
Then, for an experiment, I tried changing my password back to my previous password to see what would happen, and Gmail didn't actually allow me to do this, citing the reason "sorry we cannot allow you to save this change as your chosen password is too weak" and wouldn't let me go back to my old password. I figured from this that it was erroring out because either a) you need to change your password once every x amount of months or b). as I said before, their password strength algorithms changed and therefore the weak password i had was not accepted, even though they did not say anything about this when trying to login ANYWHERE! This (number 2) is the most likely scenario, as my weak password was about 4 months old, and it let me use it in Gmail.
It's pretty bad that they said nothing about this, but it makes sense. Because most hijacked emails are logged into using software outside of gmail, and I'm guessing you are required to have a stronger password if you want to use Gmail outside of the Gmail environment.
I hope this helps!
For, I quickly receive an SmtpException on Send(message). The message is
The SMTP server requires a secure connection or the client was not authenticated.
The server response was:
5.5.1 Authentication Required. Learn more at" <-- seriously, it ends there.
UPDATE:
This is a question that I asked a long time ago, and the accepted answer is code that.
Turn On Access For Less Secure Apps and it will work for all no need to change password.
I was using corporate VPN connection. It was the reason why I couldn't send email from my application. It works if I disconnect from VPN.
smtp.Host = "smtp.gmail.com"; //host name smtp.Port = 587; //port number smtp.EnableSsl = true; //whether your smtp server requires SSL smtp.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network; smtp.Credentials = new NetworkCredential(fromAddress, fromPassword); smtp.Timeout = 20000;
Simple steps to fix this:
1)Sign in to your Gmail
2)Navigate to this page & set to "Turn On"
I also found that the account I used to log in was de-activated by google for some reason. Once I reset my password (to the same as it used to be), then I was able to send emails just fine. I was getting 5.5.1 message also.
Turn on less secure apps for your account:
You can also connect via port 465, but due to some limitations of the System.Net.Mail namespace you may have to alter your code. This is because the namespace does not offer the ability to make implicit SSL connections. This is discussed at, and I have supplied an example of how to use the CDO (Collaborative Data Object) in another discussion (GMail SMTP via C# .Net errors on all ports).
I ran into this same error ( "The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required. Learn more at" ) and found out that I was using the wrong password. I fixed the login credentials, and it sent correctly.
I know this is late, but maybe this will help someone else.
Dim SMTPClientObj As New Net.Mail.SmtpClient SMTPClientObj.UseDefaultCredentials = False SMTPClientObj.Credentials = New System.Net.NetworkCredential("myusername@gmail.com", "mypwd") SMTPClientObj.Host = "smtp.gmail.com" SMTPClientObj.Port = 587 SMTPClientObj.EnableSsl = True SMTPClientObj.Send("myusername@gmail.com","yourusername@gmail.com","test","testbody")
If you get an error like "The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required. Learn more at" as I get before this, make sure the line
SMTPClientObj.UseDefaultCredentials = False included and this line should before the
SMTPClientObj.Credentials.
I did try to switch these 2 lines the opposite way and the 5.5.1 Authentication Required error returned.
I'm not sure which .NET version is required for this because eglasius mentioned there is no matching
enableSsl setting (I'm using .NET 4.0, but I suspect it to work in .NET 2.0 or later), but this configuration justed worked for me (and doesn't require you to use any programmatic configuration):
<system.net> <mailSettings> <smtp from="myusername@gmail.com" deliveryMethod="Network"> <network defaultCredentials="false" enableSsl="true" host="smtp.gmail.com" port="587" password="password" userName="myusername@gmail.com"/> </smtp> </mailSettings> </system.net>
You might have to enable POP or IMAP on your Gmail account first:
I recommend trying it with a normal mail client first...
Yet another possible solution for you. I was having similar problems connecting to gmail via IMAP. After trying all the solutions that I came across that you will read about here and elsewhere on SO (eg. enable IMAP, enable less secure access to your mail, using and so on), I actually set up a new gmail account once more.
In my original test, the first gmail account I had created, I had connected to my main gmail account. This resulted in erratic behaviour where the wrong account was being referenced by google. For example, running opened up my main account rather than the one I had created for the purpose.
So when I created a new account and did not connect it to my main account, after following all the appropriate steps as above, I found that it worked fine!
I haven't yet confirmed this (ie. reproduced), but it apparently did it for me...hope it helps.
The problem is not one of technical ability to send through gmail. That works for most situations. If you can't get a machine to send, it is usually due to the machine not having been authenticated with a human at the controls at least once.
The problem that most users face is that Google decides to change the outbound limits all the time. You should always add defensive code to your solution. If you start seeing errors, step off your send speed and just stop sending for a while. If you keep trying to send Google will sometimes add extra time to your delay period before you can send again.
What I have done in my current system is to send with a 1.5 second delay between each message. Then if I get any errors, stop for 5 minutes and then start again. This usually works and will allow you to send up to the limits of the account (last I checked it was 2,000 for premier customer logins per day). | http://code.i-harness.com/en/q/ac07c | CC-MAIN-2018-39 | refinedweb | 1,196 | 66.33 |
Each timer consists of a comparator and a match register. The comparator compares the contents of the match register against the value of a free-running monotonic up-counter. When the value of the up-counter equals the value in the match register, an interrupt is generated. Each of the comparators can output an interrupt. A maximum of 8 timer blocks is supported, each containing up to 32 timers, for a total of 256 timers. Each timer block can have different clocking attributes. Specific implementations may include only a subset of these timers; a minimum of three timers is required.
The specification contains a block diagram of the HPET architecture.
Some of the timers may be enabled to generate a periodic interrupt. If a timer is set to be periodic, its period is added to the match register each time a match occurs, thus computing the next time for this timer to generate an interrupt. An up-counter is usually 64 bits wide, but 32-bit implementations are permitted by the specification, and 64-bit up-counters can also be driven in 32-bit mode. Up-counters run at a minimum of 10 MHz, which is much faster than the older RTC (Real Time Clock) and can thus produce periodic interrupts at a much higher resolution. The registers associated with these timers are mapped to memory space.
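The periodic-mode behavior described above can be sketched as a small simulation. This is illustrative only; the real logic lives in silicon, and the names below are invented for the example:

```python
def run_hpet_periodic(period, ticks):
    """Simulate one HPET comparator in periodic mode.

    The up-counter increments monotonically; whenever it equals the
    match register, an "interrupt" fires and the period is added to
    the match register, scheduling the next interrupt.
    """
    match = period                        # first match one period from now
    interrupts = []
    for counter in range(1, ticks + 1):   # free-running up-counter
        if counter == match:
            interrupts.append(counter)    # interrupt generated
            match += period               # hardware re-arms itself
    return interrupts

# A 14.31818 MHz counter with period=14318180 would fire once per second;
# small numbers here keep the simulation quick.
print(run_hpet_periodic(period=3, ticks=10))   # fires at 3, 6, 9
```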
The BIOS uses ACPI (Advanced Configuration and Power Interface) functionality to inform the operating system of the location of the HPET memory-mapped register space. Here is an example of a disassembled ACPI HPET table from an Intel DX48BT2 (AKA BoneTrail) motherboard.
$ cat /sys/firmware/acpi/tables/HPET > /var/tmp/hpet.out
$ iasl -d /var/tmp/hpet.out
$ cat /var/tmp/hpet.dsl
/*
* Intel ACPI Component Architecture
* AML Disassembler version 20090123
*
* Disassembly of /var/tmp/hpet.out, Sun Jul 5 19:34:47 2009
*
* ACPI Data Table [HPET]
*
* Format: [HexOffset DecimalOffset ByteLength] FieldName : FieldValue
*/
[000h 000 4] Signature : "HPET" /* High Precision Event Timer table */
[004h 004 4] Table Length : 00000038
[008h 008 1] Revision : 01
[009h 009 1] Checksum : CE
[00Ah 010 6] Oem ID : "INTEL "
[010h 016 8] Oem Table ID : "DX48BT2 "
[018h 024 4] Oem Revision : 0000076E
[01Ch 028 4] Asl Compiler ID : "MSFT"
[020h 032 4] Asl Compiler Revision : 01000013
[024h 036 4] Hardware Block ID : 8086A301
[028h 040 12] Timer Block Register :
[028h 040 1] Space ID : 00 (SystemMemory)
[029h 041 1] Bit Width : 00
[02Ah 042 1] Bit Offset : 00
[02Bh 043 1] Access Width : 00
[02Ch 044 8] Address : 00000000FED00000
[034h 052 1] Sequence Number : 00
[035h 053 2] Minimum Clock Ticks : 0001
[037h 055 1] Flags (decoded below) : 00
Page Protect : 0
4K Page Protect : 0
64K Page Protect : 0
Raw Table Data
0000: 48 50 45 54 38 00 00 00 01 CE 49 4E 54 45 4C 20 HPET8.....INTEL
0010: 44 58 34 38 42 54 32 20 6E 07 00 00 4D 53 46 54 DX48BT2 n...MSFT
0020: 13 00 00 01 01 A3 86 80 00 00 00 00 00 00 D0 FE ................
0030: 00 00 00 00 00 01 00 00 ........
$
See page 30 of the HPET v1.0a specification for a detailed breakdown of the individual bits in the Event Timer Block (called the Hardware Block ID by the AML disassembler). Note that only one Event Timer Block need be described in the HPET table in order to bootstrap an operating system, which is the case here. For non-legacy platforms, the Event Timer Block described in the HPET table is the one that provides functionality to replace the 8254/RTC periodic interrupt logic.
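The fixed-layout fields called out in the iasl listing can be pulled straight out of the raw table bytes. The following sketch parses the 56-byte dump shown above, using the offsets from the field listing (illustrative only, not how an OS consumes the table):

```python
import struct

# Raw ACPI HPET table bytes, transcribed from the iasl dump above (56 bytes).
raw = bytes.fromhex(
    "48504554 38000000 01CE494E 54454C20"
    "44583438 42543220 6E070000 4D534654"
    "13000001 01A38680 00000000 0000D0FE"
    "00000000 00010000"
)

assert sum(raw) % 256 == 0          # ACPI tables checksum to zero

signature      = raw[0:4]                              # b"HPET"
(length,)      = struct.unpack_from("<I", raw, 0x04)   # table length in bytes
(hw_block_id,) = struct.unpack_from("<I", raw, 0x24)   # vendor/revision/timer count
(base_addr,)   = struct.unpack_from("<Q", raw, 0x2C)   # GAS address field
(min_ticks,)   = struct.unpack_from("<H", raw, 0x35)   # minimum clock ticks

print(signature, length, hex(hw_block_id), hex(base_addr), min_ticks)
```

Running this recovers the same values iasl printed: the HPET signature, the 0x8086A301 hardware block ID, and the 0xFED00000 base address.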
Other Event Time Blocks are described in the ACPI namespace. Here is the relevant section from the disassembled ACPI DSDT table.
Device (HPET)
{
Name (_HID, EisaId ("PNP0103"))
Name (_CRS, ResourceTemplate ()
{
Memory32Fixed (ReadOnly,
0xFED00000, // Address Base
0x00004000, // Address Length
)
})
Method (_STA, 0, NotSerialized)
{
If (HPEE)
{
Return (0x0F)
}
Else
{
Return (Zero)
}
}
}
Note the assigned PNPID (PNP0103) for the HPET. Because no _UID is specified it means that there are no other HPET timer blocks.
Here is a list of the HPET-related messages outputted when this particular motherboard is booted up under Fedora 11.
$ dmesg | grep -i HPET
ACPI: HPET CFBF2000, 0038 (r1 INTEL DX48BT2 76E MSFT 1000013)
ACPI: HPET id: 0x8086a301 base: 0xfed00000
hpet clockevent registered
HPET: 4 timers in total, 0 timers will be used for per-cpu timer
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0
hpet0: 4 comparators, 64-bit 14.318180 MHz counter
rtc0: alarms up to one month, 114 bytes nvram, hpet irqs
$
The first line is outputted when the ACPI HPET table is read. The second line is outputted when the ACPI HPET table is mapped into memory by …/arch/x86/kernel/acpi/boot.c. The next line is outputted when the HPET legacy interrupts are started and HPET is registered as the global clock. The following line is outputted when the kernel checks to ensure that at least one timer is reserved for userspace (/dev/hpet.) The next two lines of output comes from the HPET device driver (…/drivers/char/hpet.c.) It shows that 2 timers have allocated interrupts and two do not..
Here is the relevant part of the output from /proc/time_list as it relates to HPET:
Tick Device: mode: 1
Broadcast device
Clock Event Device: hpet
max_delta_ns: 149983005959
min_delta_ns: 5000
mult: 61496114
shift: 32
mode: 3
next_event: 9223372036854775807 nsecs
set_next_event: hpet_legacy_next_event
set_mode: hpet_legacy_set_mode
event_handler: tick_handle_oneshot_broadcast
tick_broadcast_mask: 00000000
tick_broadcast_oneshot_mask: 00000000
Here is the output from /proc/sys/dev/hpet and /proc/driver/rtc:
$ cat /proc/sys/dev/hpet/max-user-freq
64
$ cat /proc/driver/rtc
rtc_time : 06:34:31
rtc_date : 2009-07-06
alrm_time : **:24:40
alrm_date : ****-**-**
alarm_IRQ : no
alrm_pending : no
24hr : yes
periodic_IRQ : no
update_IRQ : no
HPET_emulated : yes
DST_enable : no
periodic_freq : 1024
batt_status : okay
The HPET driver (/dev/hpet) has a similar API to the Real Time Clock driver. It is a character device which can support any number of HPET devices. The kernel API has three interfaces exported from the driver:
hpet_register( struct hpet_task *tp, int periodic )
hpet_unregister( struct hpet_task *tp )
hpet_control( struct hpet_task *tp, unsigned int cmd, unsigned long arg )
The userspace interface to HPET is defined in the header /usr/include/linux/hpet.h. The current set of supported operations is:
#define HPET_IE_ON _IO('h', 0x01) /* interrupt on */
#define HPET_IE_OFF _IO('h', 0x02) /* interrupt off */
#define HPET_INFO _IOR('h', 0x03, struct hpet_info) /* get information */
#define HPET_EPI _IO('h', 0x04) /* enable periodic */
#define HPET_DPI _IO('h', 0x05) /* disable periodic */
#define HPET_IRQFREQ _IOW('h', 0x6, unsigned long) /* set frequency */
The following example shows how to use the published interface to access a HPET and call a simple periodic signal handler hpet_alarm between 2 and 99 times a second.
#include <stdio.h>
#include <stdlib.h;>
#include <fcntl.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <fcntl.h>
#include <sys/time.h>
#include <linux/hpet.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <signal.h>
static uint16_t hpet_sigio_count;
static uint64_t secs;
static void
hpet_alarm(int val)
{
struct timespec t;
clock_gettime(CLOCK_REALTIME, &t);
if (!secs) secs = t.tv_sec;
fprintf(stderr, "hpet_alarm called. iteration: %2d secs: %ld nsecs: %ld \n",
hpet_sigio_count, (t.tv_sec - secs) , t.tv_sec * 100000 + t.tv_nsec );
hpet_sigio_count++;
}
int
main(int argc, const char **argv)
{
struct sigaction old, new;
struct hpet_info info;
int frequency;
int iterations;
int retval = 0;
int fd;
int r, i, value;
if (argc != 3) {
fprintf(stderr, "Usage: %s frequency(1-64) iterations(10-99)\n", argv[0]);
return -1;
}
frequency = atoi(argv[1]);
iterations = atoi(argv[2]);
if (frequency > 64 || frequency < 1 ) {
fprintf(stderr, "ERROR: Invalid value for frequency\n");
return -1;
}
if (iterations < 10 || iterations > 99 ) {
fprintf(stderr, "ERROR: Invalid value for iterations\n");
return -1;
}
hpet_sigio_count = 0;
sigemptyset(&new.sa_mask);
new.sa_flags = 0;
new.sa_handler = hpet_alarm;
sigaction(SIGIO, NULL, &old);
sigaction(SIGIO, &new, NULL);
fd = open("/dev/hpet", O_RDONLY);
if (fd < 0) {
fprintf(stderr, "ERROR: Failed to open /dev/hpet\n");
return -1;
}
if ((fcntl(fd, F_SETOWN, getpid()) == 1) ||
((value = fcntl(fd, F_GETFL)) == 1) ||
(fcntl(fd, F_SETFL, value | O_ASYNC) == 1)) {
fprintf(stderr, "ERROR: fcntl failed\n");
retval = 1;
goto fail;
}
if (ioctl(fd, HPET_IRQFREQ, frequency) < 0) {
fprintf(stderr, "ERROR: Could not set /dev/hpet to have a %2dHz timer\n", frequency);
retval = 2;
goto fail;
}
if (ioctl(fd, HPET_INFO, &info) < 0) {
fprintf(stderr, "ERROR: failed to get info\n");
retval = 3;
goto fail;
}
fprintf(stdout, "\nhi_ireqfreq: 0x%lx hi_flags: %0x%lx hi_hpet: 0x%x hi_timer: 0x%x\n\n",
info.hi_ireqfreq, info.hi_flags, info.hi_hpet, info.hi_timer);
r = ioctl(fd, HPET_EPI, 0);
if (info.hi_flags && (r < 0)) {
fprintf(stderr, "ERROR: HPET_EPI failed\n");
retval = 4;
goto fail;
}
if (ioctl(fd, HPET_IE_ON, 0) < 0) {
fprintf(stderr, "ERROR: HPET_IE_ON failed\n");
retval = 5;
goto fail;
}
/* wait for specified number of signal interrupts */
for (i = 0; i < iterations; i++) {
(void) pause();
}
if (ioctl(fd, HPET_IE_OFF, 0) < 0) {
fprintf(stderr, "ERROR: HPET_IE_OFF failed\n");
retval = 6;
}
fail:
sigaction(SIGIO, &old, NULL);
if (fd > 0)
close(fd);
return retval;
}
Here is the output from this example when it is invoked with a frequency of 32 and an iteration count of 64.
$ sudo ./hpet_example 32 64
hi_ireqfreq: 0x20 hi_flags: 00 hi_hpet: 0x2 hi_timer: 0x4a1cb9c8
hpet_alarm called. iteration: 0 secs: 0 nsecs: 124683205055050
hpet_alarm called. iteration: 1 secs: 0 nsecs: 124683236313149
hpet_alarm called. iteration: 2 secs: 0 nsecs: 124683267566342
hpet_alarm called. iteration: 3 secs: 0 nsecs: 124683298821905
hpet_alarm called. iteration: 4 secs: 0 nsecs: 124683330077493
hpet_alarm called. iteration: 5 secs: 0 nsecs: 124683361341893
hpet_alarm called. iteration: 6 secs: 0 nsecs: 124683392590764
hpet_alarm called. iteration: 7 secs: 0 nsecs: 124683423849157
hpet_alarm called. iteration: 8 secs: 0 nsecs: 124683455101917
hpet_alarm called. iteration: 9 secs: 0 nsecs: 124683486357683
hpet_alarm called. iteration: 10 secs: 0 nsecs: 124683517617931
hpet_alarm called. iteration: 11 secs: 0 nsecs: 124683548872198
hpet_alarm called. iteration: 12 secs: 1 nsecs: 124682580229541
hpet_alarm called. iteration: 13 secs: 1 nsecs: 124682611481235
hpet_alarm called. iteration: 14 secs: 1 nsecs: 124682642740016
hpet_alarm called. iteration: 15 secs: 1 nsecs: 124682673992697
hpet_alarm called. iteration: 16 secs: 1 nsecs: 124682705247479
hpet_alarm called. iteration: 17 secs: 1 nsecs: 124682736504664
hpet_alarm called. iteration: 18 secs: 1 nsecs: 124682767758840
hpet_alarm called. iteration: 19 secs: 1 nsecs: 124682799014280
hpet_alarm called. iteration: 20 secs: 1 nsecs: 124682830270129
hpet_alarm called. iteration: 21 secs: 1 nsecs: 124682861530334
hpet_alarm called. iteration: 22 secs: 1 nsecs: 124682892784577
hpet_alarm called. iteration: 23 secs: 1 nsecs: 124682924038220
hpet_alarm called. iteration: 24 secs: 1 nsecs: 124682955294110
hpet_alarm called. iteration: 25 secs: 1 nsecs: 124682986550572
hpet_alarm called. iteration: 26 secs: 1 nsecs: 124683017805756
hpet_alarm called. iteration: 27 secs: 1 nsecs: 124683049061117
hpet_alarm called. iteration: 28 secs: 1 nsecs: 124683080318331
hpet_alarm called. iteration: 29 secs: 1 nsecs: 124683111576954
hpet_alarm called. iteration: 30 secs: 1 nsecs: 124683142828988
hpet_alarm called. iteration: 31 secs: 1 nsecs: 124683174083954
hpet_alarm called. iteration: 32 secs: 1 nsecs: 124683205337967
hpet_alarm called. iteration: 33 secs: 1 nsecs: 124683236593144
hpet_alarm called. iteration: 34 secs: 1 nsecs: 124683267851530
hpet_alarm called. iteration: 35 secs: 1 nsecs: 124683299104054
hpet_alarm called. iteration: 36 secs: 1 nsecs: 124683330358748
hpet_alarm called. iteration: 37 secs: 1 nsecs: 124683361617445
hpet_alarm called. iteration: 38 secs: 1 nsecs: 124683392870249
hpet_alarm called. iteration: 39 secs: 1 nsecs: 124683424124489
hpet_alarm called. iteration: 40 secs: 1 nsecs: 124683455379717
hpet_alarm called. iteration: 41 secs: 1 nsecs: 124683486634424
hpet_alarm called. iteration: 42 secs: 1 nsecs: 124683517889149
hpet_alarm called. iteration: 43 secs: 1 nsecs: 124683549144315
hpet_alarm called. iteration: 44 secs: 2 nsecs: 124682580500695
hpet_alarm called. iteration: 45 secs: 2 nsecs: 124682611761325
hpet_alarm called. iteration: 46 secs: 2 nsecs: 124682643011863
hpet_alarm called. iteration: 47 secs: 2 nsecs: 124682674265864
hpet_alarm called. iteration: 48 secs: 2 nsecs: 124682705521034
hpet_alarm called. iteration: 49 secs: 2 nsecs: 124682736776049
hpet_alarm called. iteration: 50 secs: 2 nsecs: 124682768030654
hpet_alarm called. iteration: 51 secs: 2 nsecs: 124682799285398
hpet_alarm called. iteration: 52 secs: 2 nsecs: 124682830544701
hpet_alarm called. iteration: 53 secs: 2 nsecs: 124682861797319
hpet_alarm called. iteration: 54 secs: 2 nsecs: 124682893051578
hpet_alarm called. iteration: 55 secs: 2 nsecs: 124682924306748
hpet_alarm called. iteration: 56 secs: 2 nsecs: 124682955562132
hpet_alarm called. iteration: 57 secs: 2 nsecs: 124682986823545
hpet_alarm called. iteration: 58 secs: 2 nsecs: 124683018073636
hpet_alarm called. iteration: 59 secs: 2 nsecs: 124683049327560
hpet_alarm called. iteration: 60 secs: 2 nsecs: 124683080586707
hpet_alarm called. iteration: 61 secs: 2 nsecs: 124683111841132
hpet_alarm called. iteration: 62 secs: 2 nsecs: 124683143095147
hpet_alarm called. iteration: 63 secs: 2 nsecs: 124683174349985
hpet_alarm called. iteration: 64 secs: 2 nsecs: 124683205607103
$
Well, I think that I have provided you with enough information so that you should now be able to go away and experiment with the HPET interface yourself.
By the way, not all VMware products support HPET. Currently ESX does not provide a virtual HPET to guest operating systems and in some cases it may be necessary to disable HPET altogether because of timer drift in virtual machines. See VMware TimeKeeping for more information.
P.S. I tested the the above example on an Intel DX48BT2 motherboard running a 2.6.29.5-191 kernel.
Enter your email address | http://blog.fpmurphy.com/2009/07/linux-hpet-support.html | CC-MAIN-2017-17 | refinedweb | 2,146 | 65.73 |
We will keep the relevant user constants as class constants. We will write the implementation of the methods from the
BaseDao abstract class in this class. Simply, we will add the body of those abstract methods and our own methods required into the class. So, follow the steps listed here:
Daodirectory named
UserDao.php, and type in the following code:
<?php namespace My\Dao; class UserDao extends BaseDao { private $db = null; public function __construct() { $this->db = $this->getDb(); } } $userDao = new \My\Dao\UserDao; ?>
As you can see, the class is under the
My\Dao namespace and extends to the
BaseDao class, so the class will have methods inherited from the parent. ...
No credit card required | https://www.safaribooksonline.com/library/view/php-application-development/9781849515801/ch07s06.html | CC-MAIN-2018-26 | refinedweb | 115 | 66.23 |
Let's talk about how to set up web driver resources to get you prepared so that you can begin playing with the tool.
When you start using selenium, you will notice a bunch of issues when it comes to browser and driver combination because the browsers get updated at a specific frequency.
Now let's open up your Visual Studio and create a new project.
Select a unit test project and click Next. Enter the name in the Project Name field and click on the Create button. Once the project is created, you will see a class is added to the project, let's change the name of the class to 'SeleniumTests'.
Now that we have a unit test project, we are ready to install the selenium web driver. So let's right-click on your project in Solution Explorer and select Manage NuGet Packages...
Click on the Browse, search for selenium web driver, and install the latest stable version of the Selenium.WebDriver.
So, let's install the ChromeDriver so that we can utilize it in our automated functional testing. The easiest way to install it is by using NuGet Package Manager.
Search for chrome driver on the Browse tab, and install the latest stable version of the Selenium.WebDriver.ChromeDriver.
Now that we have installed all the required NuGet packages, let's write a simple code to make sure that everything is working. Add the following to the 'TestMethod1'.
[TestClass] public class SeleniumTests { [TestMethod] public void TestMethod1() { IWebDriver driver = new ChromeDriver(); driver.Navigate().GoToUrl(""); } }
This code will open the google home page in the chrome browser. Then let's run the test and if everything is good you will see a passing test on the Test Explorer. | https://riptutorial.com/selenium-webdriver/learn/100001/setup-selenium | CC-MAIN-2022-05 | refinedweb | 290 | 71.85 |
This article introduces a library of C++ classes which I have named Windows/OpenGL Classes, or WOC for short. WOC leverages the substantial functionality of OpenGL and hides its complexity behind a hierarchy of user-derivable base classes and leaf classes. In addition, a basic Win32 application-and-windowing framework is offered, as well as some very flexible value-generating and member-function-calling class templates whose purpose is to constitute, and relay values around, 'virtual circuits' for the purposes of either animation or geometry generation.
Casual users of WOC, and anyone tempted by the instant gratification of some pretty graphics pictures, are welcome to visit the WOC section of my website at. The WOC header and implementation files (127kb zipped) can be found in the same place. This article is aimed at those interested in WOC 'under the hood' but it also includes a first tutorial on its use.
Because WOC hides the OpenGL API, you don't need to know OpenGL. It helps to have heard of, and to be able to visualise, 3-D cartesian coordinate space and to have the gist of the basic translation, rotation and scaling transformations, particularly the significance of applying either translation or rotation before the other. With the exception of the animation templates, only a very basic knowledge of C++ is required to use WOC; it is in the ballpark of rudimentary MFC. Use of the animation templates is optional but more demanding as it requires a good knowledge of the generic programming techniques of modern C++.
OpenGL (Open Graphics Library) was developed by Silicon Graphics and it is a hardware-independent specification of a graphics programming interface. Although windowing tasks and user input are not part of the OpenGL specification, implementations for different platforms all have a standard core of functionality and are accompanied by the OpenGL Utility Toolkit (GLUT), which does offer a common abstraction of windowing support, hiding a specific implementation for each platform. GLUT is not a perfect solution and on Win32 I prefer to use the Win32 Extensions.
OpenGL's interface is at the level of geometric primitives - points, lines and polygons - and no higher. WOC also allows the geometry of a 3-D model to be defined at this level, either manually or generated automatically, but also introduces types representing higher-level elements, defined once and referenced many times, in model and scene hierarchies created by the user. WOC controls OpenGL's state transparently and is responsible for managing, transforming and rendering the user's scene and animations. Also, for Win32, the basics of registering and creating windows, a message-loop, window procedure, and creating and managing an OpenGL rendering context and default animated model are all taken care of by WOC upon the instantiation of, in the simplest case, a single Application class object.
At the bottom of WOC's geometry class hierarchy [class diagram on following page] is the VectorT template. This represents a vector, or one-dimensional matrix. A vector has magnitude and a direction in space. VectorT is used as a set of either three or four scalar values which together represent a vector-like concept. So, it can be used to represent any of: a set of homogeneous or non-homogeneous coordinates in three-space (i.e. a point); a free vector in three-space (e.g. a normal vector); ray rotations, translations and scalings. The class diagrams shown in the figures are from Rational Rose and give the class names without their generated 'C-' prefixes. I will do the same in this discussion. WOC specialises VectorT with the GLfloat type and typedefs the result to Vector. GLfloat is itself a typedef for the built-in type float. At a similarly low level the UV class represents a set of floating-point texture coordinates (u and v, corresponding to the x and y directions respectively) which identify a point on a texture map (an image). A Model instance is a collection of all the 3-D points (Vector), lighting normals (Vector) and texture coordinates (UV) from which its polygons are constructed. This repository of geometric resources is then referenced by Triangle instances so that the points, normals and texture coordinates can be re-used and mixed and matched as required. A logical collection of Triangles is placed into a Geometry - e.g. the triangles defining the surface of a sphere - also with re-use in mind. Geometries are collected and managed by the Model, but are referenced by the Model's Groups. I will say more about material and transformation in due course, but a Group applies a Material and a set of Transformations to a Geometry so that the same Geometry (e.g. our sphere) may be stretched, scaled, translated, textured or coloured many times depending on the properties of each Group which references it.
As you can see, WOC's abstraction of a 3-D model is factored into several classes. It is possible to construct a model at either low, medium or high level as desired. The lowest level involves defining points, normals and texture coordinates and then defining Triangles in terms of the returned indices of the points. Alternatively, Triangles can be constructed with their points, normals and texture coordinates as parameters, with the option to re-use duplicate existing points, normals, etc, within a threshold of similarity. Normals are optional: they may be supplied or alternatively WOC will on request calculate face or vertex normals. Finally, the highest (and easiest) level of model definition is afforded by the polymorphic model-loader classes. You simply point these at a Model instance and, together with some optional parameters, instruct them to load. Several model-loaders (cube, grid, tetrahedron, sphere, tube) are built into WOC but you can derive your own. The sphere loader re-uses existing points it has already placed in the Model's repository, because the spherical to cartesian coordinate conversion formula it uses generates a large number of proximate points at the poles. There is also a model
loader specialised for reading from a disk file in Wavefront .OBJ format, making it child's play to build an .OBJ model-viewer with WOC. A more Model-centric view of the classes already mentioned can be found in the WOC Class Reference on my website; I won't reproduce it here.
Let's look at Transformation(s) next.
Earlier I mentioned that a Group applies a set of Transformations to a Geometry. A Transformation is an abstract base class for Translation, Scaling and two types of Rotation. WOC has a class which is a collection of Transformation-derived types, known as a Transformations. A Transformations' elements are applied in the order in which they were added because matrix multiplication is not generally commutative. As can be seen in Figure 2, an OGLWnd also has its own Transformations instance. An OGLWnd is a window with an OpenGL Rendering Context in its client area, and its Translations and Rotations are generally sufficient to give the correct view (or 'camera angle') onto the scene as a whole, although Scalings can be applied if required. An OGLWnd owns a collection of Models and, once the scene itself has been transformed, each Model in the scene is transformed and rendered; that process in turn involves transforming and rendering each Model's Groups. A Model's Transformations collection (not shown in Figure 2) exists so that transformations common to all Groups in a Model can be factored up to the Model. Immediately before transforming a Model or a Group, the current transformation matrix is pushed onto OpenGL's matrix stack and later popped once the Model or Group has been rendered. This ensures the current transformation always keeps in step with the in-order walk of the scene tree. It probably also bears mentioning that Groups themselves may have a further collection of (sub)Groups in the way that Models do. This allows the definition of a scene tree to go to any depth, and also allows Group nodes to contain only a Transformations collection, with neither a Geometry nor a Material; this provides for further factoring out of Transformations common to child Groups.
Mouse sensitivity along with a host of other settings are available from the OGLWnd's context menu.
Materials are another part of a Model's repository of resources which are re-usable by its Groups. A Material is essentially a definition of how the triangles in the Group should reflect the colour components of the lights illuminating them. A number of stock Material definitions are built into WOC (e.g. emerald, ruby, pearl, brass, bronze, red enamel, various colours of plastic and rubber to name but a few) so the casual WOC user need never get into the technicalities. The definition of lights is taken care of by the Light class which wraps OpenGL's light-related APIs. A Light contains a Model instance which defines the appearance of a Light should it need to be represented visually. By default a Light's Model is loaded with low-resolution sphere geometry but you can derive from Light and change the Model used. Lights also have their own Transformations collection so that they can be placed anywhere, or even animated.
The remaining corners of the WOC class model can be explored by checking out the WOC Class Reference on my website.
To follow along with this tutorial you will need to download the WOC header and implementation files from my website. I recommend that you unzip them into a folder named woc and locate it at the same level as (i.e. a sibling of) the project folders which use it. This is because the projects look for the WOC files at the path: ..\woc\ as we'll see later.
If you like WOC and find it useful then you may want to use it more than once - perhaps even lots of times. In that case it's nice to have a skeleton or template project to model new WOC projects on. That's where the Skeleton Project (WOCSkeleton) comes in. This tutorial shows you how to integrate the WOC source files into a Visual C++ project, but you can then save the project and re-use it as a starting-point for new projects. You can either follow along with the steps or just download the files of the completed project. Of course you don't have to use a skeleton project if you don't want to, but you'll find it more convenient than following these steps each time you make a new project. For those who are unable to use Visual C++, or who prefer not to, WOC will build under the GNU Compiler Collection (gcc/g++). I have more to say about this on my website.
Launch Visual C++ and use the Win32 Application wizard to create a new project named WocSkeleton. Locate the project folder as a sibling of the woc folder containing the WOC header and implementation files. The steps which follow apply to the 'Hello World' option (on Step 1 of the Win32 Application Wizard), but feel free to choose one of the others if you're happy to add the appropriate files, code and resources on your own to make a minimal Win32 application.
Open the file stdafx.h for editing. There is usually a comment near the end of the file indicating where to place your own headers:
// TODO: reference additional headers your program requires here
Even if you don't have this comment, just find a suitable place near the end of the file before the close of the include guard and type this:
#include "woc.h" // directory path set in project settings. using namespace woc;
This is an include of the main WOC header file which in turn includes several other WOC header files. Together these files contain the declarations of all of WOC's types, and some complete definitions. Because I have opted for a using directive, and because stdafx.h is included by the other source files in the project, all names in the woc namespace will now be visible in the global namespace throughout the project without further qualification. If you don't like this, you can omit the using directive and explicitly qualify. At this stage the project doesn't yet know where to find the woc.h file - that's done in step 3. However, you can opt to hard-code the path to the file here (even a relative path) and skip step 3. It's up to you.
If a project includes a lot of header files from the same folder, and the name or location of that folder may change, then you wouldn't want to have to edit the path in every #include. Although WOC's include structure presently obviates the explicit including of more than one header file, that may not always be the case. So, if you followed step 2 to the letter then now you'll need to let the project know where to look for additional header files, specifically woc.h. You could make this setting for the whole of Visual C++ by adding a new include file directory on the Directories tab of the Options dialog (Tools/Options... menu), but I prefer to make the setting apply only to the project at hand so that it will easily transfer between Visual C++ installations. To do this, choose Project/Settings..., choose Settings For: All Configurations, choose the C/C++ tab, Category: Preprocessor, and in the Additional include directories: edit box, type:
..\woc\
This is correct for the case where the woc folder is a sibling of the new project's folder. You may choose a different arrangement but, if you do, then you should edit the above include directory path to match. Note that the Additional include directories: edit box may contain more than one path, comma-separated. There is one more change to make whilst you're editing the project settings. Use of the dynamic_cast operator requires run-time type information which is enabled with the /GR compiler switch. So, choose Settings For: All Configurations, the C/C++ tab, Category: C++ Language, and check the Enable Run-Time Type Information (RTTI) checkbox.
The project will now compile, but so that it will also link when we come to using the WOC classes, you'll need to add the implementation file to the project's Source Files folder on the FileView tab of the Workspace pane. Right-click the Source Files folder, choose Add Files to Folder... from the context menu, navigate to the woc folder and choose the file woc.cpp. Whilst you're at the FileView tab you can also add all the WOC header files (woc.h and all the others you'll find in the folder) to the Header Files folder so that all the WOC classes will appear on the ClassView tab.
One final step before the project will link is to reference the static library files for OpenGL and the Win32 Common Controls. To do this, choose Project/Settings..., choose Settings For: All Configurations, choose the Link tab, Category: General, and in the Object/library modules: edit box, add:
opengl32.lib glu32.lib glaux.lib comctl32.lib
Now the project will build. Just check that it does.
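Incidentally, if you would rather keep the linker inputs with the code than in the project settings, MSVC's #pragma comment directive achieves the same effect; you could add these lines to, say, stdafx.h:

```cpp
// Equivalent to listing these libraries on the Link tab (MSVC-specific).
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(lib, "glaux.lib")
#pragma comment(lib, "comctl32.lib")
```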
At present the project is using no WOC features and, if you run it, it will behave as it did when it was first generated by the wizard. Now we need to remove most of the wizard-generated Windows code and replace it with a small amount of WOC code. Open the file WocSkeleton.cpp for editing and delete everything from it except the #include directives and the WinMain function. Next, delete all the code from the body of the WinMain function and type this in its place:
// Perform application initialization:
if (!theApp.InitInstance(hInstance, nCmdShow,
      IDC_WOCSKELETON, IDI_WOCSKELETON,
      IDI_SMALL, NULL, IDS_APP_TITLE))
{
   return FALSE;
}
return theApp.MessageLoop((LPCTSTR)IDC_WOCSKELETON);
You may be wondering about the identifier theApp - where is it declared? Nowhere as yet, so add the following declaration after the includes but before WinMain:
// The one and only application object.
CWocApp < CWocFrameWnd < CWocOGLWnd > > theApp;
The meaning of this code is that we are declaring an identifier named theApp which is of type CWocApp. This is a class template whose single template parameter specifies the type of window to use for the application's main window (defaulting to CWocFrameWnd). The parameter can be any CWocWnd-derived class so long as it implements a CreateFrame method as CWocFrameWnd and CWocOGLWnd do. The CWocFrameWnd class is another class template whose single parameter specifies the type of the view window it may be required to use to overlay the client area of the frame window. The parameter must be either CWocWnd (the default) or a class derived from it. Incidentally, the constructor of the frame window object takes a BOOL parameter, defaulting to TRUE, indicating whether or not the frame window is required to create a view. If this parameter is FALSE then the view type is ignored. If we want to override the default of creating a view then we have to wait until the application object has created the frame window and then call SetCreateView(FALSE) on the frame window at any time, but most logically in an overridden OnCreate handler.
Now you can build and run the sample, so let's leave further code editing until the next step whilst we look at some of the default features of the classes. The code that earlier I directed you to insert specifies the view of the main frame window to be an OpenGL rendering context window (CWocOGLWnd). When you run the sample you'll see the default behaviour of the CWocOGLWnd class. Firstly, a default 3-D model is displayed which is lit and rotating and its normals are shown. How this happens is that an (overridable) initialiser function in the OpenGL window class (the member is CWocOGLWnd::InitialiseGL if you want to take a look at it) creates a new model object, loads some geometry and face normals into it and then calls a method on the model to require it to show its normals. The model then makes some changes to the OpenGL state to reflect its requirements and, since normals now exist, the model requests the view class to activate lighting and GL_LIGHT0. The view class also defaults to rotating the scene a small amount on a timer which fires every few milliseconds. The CWocOGLWnd class has two significant features: a mouse interface to manipulate the view transformations, and a Properties Dialog.
Mouse manipulation is a mode which you can toggle into and out of by holding down Ctrl and right-mouse clicking inside the view. When you're in mouse manipulating mode the mouse cursor will disappear and you can manipulate the scene in several ways, even whilst model animation is taking place, by moving the mouse with various combinations of the mouse buttons depressed. With no buttons depressed the view is rotated about the X and Y axes; the left button causes rotation about the Z axis; the right button causes zooming in and out; and both mouse buttons depressed together causes the view to pan.
Getting the OpenGL Window Properties Dialog to display can be done either programmatically by calling CWocOGLWnd::PropertiesDialogDoModeless or by the user double-clicking the right mouse button anywhere inside an OpenGL Window. For a full explanation of all the controls on the OpenGL Window Properties Dialog together with the theory behind them, please see the documentation for the CWocOGLWndPropertiesDialog class (nested within the CWocOGLWnd class) in the WOC Class Reference.
Now to reactivate the application's main menu. The project wizard created an About dialog box resource along with a dialog procedure and command handler to display the dialog. Earlier we deleted that code but we still have the dialog resource and, as you'll see, it's very easy to add a handler to your project to handle the menu commands and to create and display the dialog. First, in order to control the handling of specific commands, we need to override the command handling functionality in the default frame window class CWocFrameWnd. Command handlers exist in all of WOC's window classes: standard windows, frame windows and consequently any window used as a view. This means that you can either derive your own frame window and add a command handler to it or do the same with a view window; the only difference being that frame windows get to handle commands before their views. In this case, because the purposes of the menu commands being handled are 1. to close the application and 2. to display the application's About box, the most appropriate place to handle these commands is in the application's main frame window. The plan then is to derive a class from the existing CWocFrameWnd template class and implement the virtual OnCommand method on the derived class. Type the following code into WocSkeleton.cpp immediately before your declaration of theApp:
template < class _TyView = CWocOGLWnd > class CWocSkeletonFrameWnd : public CWocFrameWnd < _TyView > { public: CWocSkeletonFrameWnd (BOOL nCreateView = TRUE) : CWocFrameWnd < _TyView >(nCreateView){}; virtual ~CWocSkeletonFrameWnd(){}; virtual BOOL OnMenuOrAcceleratorCommand (UINT nId) { switch (nId) { case IDM_EXIT : return theApp.Exit(); case IDM_ABOUT : { CWocDialog dlgAbout(IDD_ABOUTBOX, this); dlgAbout.DoModal(); break; } default: return CWocFrameWnd < _TyView >::OnMenuOrAcceleratorCommand(nId); } return 0; // indicate that the message // has been handled. } };
So what are the handlers doing? The IDM_EXIT handler is simply calling a method on your application object to destroy the main window and thus quit the application. The IDM_ABOUT handler makes use of the CWocDialog class which in this case needs no specialisation as it handles IDOK and IDCANCEL straight out of the box. The arguments passed to the constructor of CWocDialog are: the dialog's template resource ID, and a pointer to the window object which owns the dialog.
Finally, we have to amend the type of our application object as its main window is no longer the base frame window class but rather the class we've just defined. So replace your theApp declaration with this line:
CWocApp < CWocSkeletonFrameWnd < > > theApp;
And that's it. If you build and run now you'll find that your menu works again. The WocSkeleton is referred-to by further tutorials on my website as they all use it as a starting point. For this reason I suggest you save your project and put it aside if you've been following along, or just download the WOCSkeleton project files if you prefer.
I hope this introduction to WOC has been of some interest. Please visit my website if you wish to follow the remaining four tutorials and learn how to define your own models in WOC. There is also a gallery of sample demos built using WOC at:
Naturally any feedback regarding WOC, good or bad, is welcome via email.
The best introduction to OpenGL is the 'Red Book': Woo, M., J. Neider, and T Davis: OpenGL Programming Guide, Addison Wesley.
The standard computer graphics canon is: Foley, J., A. van Dam, et al: Computer Graphics: Principles and Practice, Addison Wesley.
Aspiring 3-D game programmers are directed to the excellent: Abrash, M: Graphics Programming Black Book Special Edition, Coriolis Group Books.
SGI, Silicon Graphics and OpenGL are registered trademarks of Silicon Graphics, Inc. | https://accu.org/index.php/journals/413 | CC-MAIN-2019-47 | refinedweb | 3,920 | 59.33 |
Hello!
Thank you for reading my post, if I am not being careful about the use
of terminology or am not being exactly clear enough for my question to
be understood, kindly notify and I will correct.
I remember that when I used Rails 1.2, I can just add a blank action
in a controller and then create a .rhtml with that same name as does
the action and then I can just write "hello world"in that .rhtml and
it will show in
Now, I go back to my bookstore controller and add
def greenapple
end
and I go create greenapple.html.erb
and I typed “hello world” in greenapple.html.erb (I know it’s silly, I
just wanted to demonstrate what my problem is)
It gives me:
Routing Error
No route matches “/greenapple/” with {:method => :get}
Any ideas? Or, which I am sure, if you know some web pages that I can
read through, that will be great, too!!
Thanks a quintillion!
Nik | https://www.ruby-forum.com/t/a-question-about-the-restful-routing/136369 | CC-MAIN-2021-31 | refinedweb | 167 | 78.79 |
Operator overloading in Scala and Kotlin: two slightly different ways
As you probably know, Java does not allow user-defined operator overloading, which is the ability to define the arithmetic operators (like
+ and
*) for types other than the Java Virtual Machine (JVM) numeric primitives.
Scala does have operator overloading, or, at least, something that in practical terms is pretty much the same thing: you can use the arithmetic operators as function names.
If Scala has a feature, Kotlin probably has that same feature, too, though it probably uses a slightly different syntax to hide the fact that the idea came from Scala.
Operator overloading is a feature you might not care about if you feel the primitive numeric data types in the JVM are more than enough for all your number-crunching needs.
Though of course operator overloading is not limited to numeric data types. You could very well choose to define the bitwise left shift operator (
<<) to add a
String to an output stream — you know, like in C++?
It looks like all of Scala’s collection classes have at least a few functions named with operator symbols. So there is plenty of precedent if you wish to use operator symbols for your own non-numeric data types.
However, it is for your custom numeric data types that operator overloading makes the most sense, in my opinion.
For example, if you need a data type for algebraic integers like √2, or imaginary numbers like i (note that the
double
Math.sqrt(2) is a rational floating point approximation to √2; while purely imaginary numbers are altogether outside the scope of
double).
So, if we were to define a class to represent algebraic integers, or a class to represent complex numbers like √i, it would make sense to use operator overloading to make the formulas in our programs look a bit more like mathematical formulas.
A better example, though, would be a class to represent fractions like 1/2 and 3/4. In Java, we can certainly define a
Fraction class that is constructed with an integer numerator and a nonzero integer denominator.
The main benefit of the
Fraction class, I think, is postponing the use of floating point arithmetic until it’s strictly necessary, thus avoiding the accumulation of small errors.
This
Fraction class would also provide conveniences like automatically putting fractions in lowest terms even if the constructor doesn’t get the numerator and denominator in lowest terms, e.g.,
new Fraction(70, 50).
But what about adding, subtracting, etc.? We’d have to use function calls.
Fraction oneHalf = new Fraction(1, 2);
Fraction oneThird = new Fraction(1, 3);
Fraction sum = oneHalf.plus(oneThird);
Fraction difference = oneHalf.minus(oneThird);
Fraction product = oneHalf.times(oneThird);
// etc.
It would be nice if, for instance, we could write something like “
oneHalf + oneThird” instead of “
oneHalf.plus(oneThird).” In Scala, we can.
Part of what makes that possible is that the Scala compiler will understand infix notation (ditching the parentheses)for any instance function that takes a single parameter.
So, if we have a
Fraction class written up in Java with the basic arithmetic functions, we can use it from a Scala class like this:
val oneHalf = new Fraction(1, 2)
val oneThird = new Fraction(1, 3)
val sum = oneHalf plus oneThird
val difference = oneHalf minus oneThird
val product = oneHalf times oneThird
// etc.
The other part of it is that in Scala we have a somewhat larger repertoire of characters for function names than we do in Java. So we can define, for example, in the Scala
Fraction class,
def +(addend: Fraction): Fraction = {
val crossNumerLeft = this.numerator * addend.denominator
val crossNumerRight = this.denominator * addend.numerator
val crossDenom = this.denominator * addend.denominator
new Fraction(crossNumerLeft + crossNumerRight, crossDenom)
}
Certainly we could call this as, say, “
oneHalf.+(oneThird),” if we really wanted to. But the whole point here is to call it as “
oneHalf + oneThird,” thus using a syntax that feels more natural.
Of course the JVM does not allow the plus symbol in identifiers. But it does allow the dollar sign. So what the Scala compiler does is compile the
+ function as
$plus, which, for all I know, could be the name the of a dollar store near Silicon Valley.
We can use such a Scala class from a Java class, but in the Java class we’d need to use the identifier with the dollar sign, e.g., “
oneHalf.$plus(oneThird).” Of course in other Scala classes we can use the plus sign infix syntax.
For the unary negation operator, there is a small wrinkle. I mistakenly thought that it would be sufficient to define
- with no parameters, and then the Scala compiler would be able to distinguish it from
- with a subtrahend parameter. That’s not the case at all.
What you have to do is define it as
unary_-, so that then the Scala compiler understands this is a prefix symbol.
def unary_- = new Fraction(-this.numerator, this.denominator)
Then it becomes immediately available for use in our “binary”
- function:
def -(subtrahend: Fraction): Fraction = this + (-subtrahend)
It is my understanding that there are only three other symbols that are available as prefix operators in Scala, and Scala inventor Martin Odersky doesn’t want to add any more of them.
Put in an implicit conversion from
Int to
Fraction, and the
Fraction class becomes really useful at the Scala REPL, which starts to feel a little bit like a Mathematica notebook.
I suppose that in an IDE, with all the auto-complete help, operator overloading is not that big a deal. It still pays off in subtle ways, though. I think it’s worthwhile to implement even if you don’t use the local Scala REPL.
If you have
Fraction implement the
Ordered[A] trait (which extends
java.lang.Comparable[A]), then you can also write things like “
if (someFraction > otherFraction)” rather than the somewhat clunky “
if (someFraction.compareTo(otherFraction) > 0).”
I hinted earlier that Kotlin takes Scala features but implements them differently just for the sake of not being exactly like Scala. That’s certainly the impression I get from Kotlin’s when statements (compare Scala match statements, both are touted as an improvement on Java switch statements).
However, the Kotlin designers might have had a legitimate objection to the way Scala does operator overloading. The objection, I think, pertains to Java interoperability.
Having to type “
$plus” doesn’t seem so bad. For unary negation, though, “
unary_$minus” feels like the opposite of a shortcut, and really makes you appreciate an IDE’s auto-complete.
To say nothing of “
$greater$eq” (if
Fraction implements the
Ordered[A] trait). Though I suppose in that case you’d rather use
compareTo() anyway.
Kotlin takes a different approach. We can also write things like “
oneHalf + oneThird,” but we don’t define
+(). Instead, we define
plus() with the
operator modifier.
operator fun plus(addend: Fraction): Fraction {
val crossLeft = this.numerator * addend.denominator
val crossRight = this.denominator * addend.numerator
val crossDenom = this.denominator * addend.denominator
return Fraction(crossLeft + crossRight, crossDenom)
}
And instead of defining
unary_-, we define
unaryMinus().
operator fun unaryMinus(): Fraction {
return Fraction(-this.numerator, this.denominator)
}
Then these operators are available for use in our “binary”
minus() function.
operator fun minus(subtrahend: Fraction): Fraction {
return this + (-subtrahend)
}
The advantage of this approach becomes apparent when interoperating with Java classes. Theoretically I could bring
EgyptianFractionViewer.java into the project and have it use
Fraction.kt instead of
Fraction.java without any problems.
In practice, I did have a few problems, the most annoying of which was that I can’t use the default parameter (for when denominator can be understood to be 1, making the fraction an integer) from a Java class, unless the Kotlin class has an auxiliary (likely chained) constructor to fill in the default parameter.
A little reflection will readily show why this is the case. The default parameter syntax in Scala and Kotlin is a convenience which neither the Scala compiler nor the Kotlin compiler can force on the Java compiler.
But the arithmetic function calls were no problem at all. Obviously I can’t use the operator forms from a Java class, but I didn’t have to rename anything from
Fraction.kt in a Java class (the Egyptian Fraction Viewer program only uses fraction addition, subtraction and comparison).
Though I suppose that with
Fraction.scala, it wouldn’t be too big a deal to mass replace in
EgyptianFractionViewer.java each occurrence of “
.plus” with “
.$plus,” “
.minus” with “
.$minus,” etc.
Kotlin has its own version of Scala’s
Ordered<T>, but it’s surprisingly called
Comparable<T>. There’s no risk of confusion with
java.lang.Comparable<T>, since you’d have to explicitly import that one if that’s the one you want.
However, the Boolean functions
greater(),
greaterEq(),
less(),
lessEq() are unavailable to Java classes, unlike
$greater(),
$greater$eq(),
$less(),
$less$eq() would be from a Scala class to a Java class. No big loss there.
One of the arguments given against operator overloading is that it can lead to meaningless identifiers. But, as Cay Horstmann points out in Scala for the Impatient, it’s always possible to give things meaningless names even when we’re limited to ASCII letters.
Granted that it is possible with operator overloading to give things counter-intuitive names, such as calling “
+” a function that really should be called “
removeAll(),” or perhaps worse, “
remove().”
Ultimately, though, the designers of Scala and Kotlin had greater faith than the designers of Java that programmers will only use operator overloading when it is appropriate, and in ways that make sense. | https://alonso-delarte.medium.com/operator-overloading-in-scala-and-kotlin-two-slightly-different-ways-2e8e2546ede4 | CC-MAIN-2021-21 | refinedweb | 1,604 | 54.52 |
README
¶
testlog
The testlog package provides a convenient way to log to the test output when
testing projects that log to a logger (such as the standard libraries
"net/http".Server).
This ensures that log output doesn't pollute test output and is shown under the
correct test only if the test that generated the log output fails.
import ( "code.soquee.net/testlog" ) testlog is a log.Logger that proxies to the Log function on a testing.T.
It is used to group log messages under the tests that generated them in test output and to only show those messages if the test that generated them failed.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
This section is empty. | https://pkg.go.dev/code.soquee.net/testlog | CC-MAIN-2022-27 | refinedweb | 124 | 65.22 |
Starting new year's
resolution early - two blog posts in one day!
After receiving a
response from a WCF solicit response port, my orchestration was raising the
following exception:
Inner exception: Received unexpected message type ''
does not match expected type ''
I updated the
BizTalk config to trace the received WCF message. It looked ok, had expected
namespace and root node.
Problem was, I'd
forgotten to set the receive pipeline of the physical two way send port to XML
Receive - it was sat at PassThruReceive. This meant that the message type
wasn't getting promoted. Because the receive on my orchestration's logical
solicit response port was strongly typed - it threw the exception.
Had a funny problem
with BizTalk today that I thought worth blogging about in case anyone else
makes the same mistake.
I have the following
setup:
File Receive Port
(map to canonical) --> Orchestration --> Send Port (map to external)
After deploying with
the BizTalk Deployment Framework I kicked off an integration test which dropped
a single record flat file in the receive location. The send port did its thing
and all looked good. Then I noticed many (and I mean many!) messages were being
sent.
My first thought was
perhaps the source file wasn't being deleted by the file adapter but that drew
a blank. It turned out the problem was caused by a simple mistake with the
orchestration's logical activating receive port. I had set the Binding to Direct!
So, canonical
message entered the orchestration but also left the orchestration (because I
have mapping on the send port rather than within the orchestration). This meant
that the send (from the orchestration to the messagebox) triggered a new
activating receive and another orchestration instance!
I’ve been working with BizTalk since 2006r1 (the one without the handy WCF adapters). Since that time I’ve tried various community offerings to improve the deployment process. In 2008 I was working for a large retailer. In order to support parallel development for phased but overlapping releases (config management fun!), they had up to four different BizTalk groups each containing between two and four servers. At the time, we had approximately eighteen different BizTalk applications of varying complexity, running on each BizTalk group. As you can imagine, this made management of the binding file time consuming and difficult. I turned to Michael Stephenson’s BizTalk Configuration Management tool on CodePlex () this was a great help since it allowed us to maintain all the settings for the bindings in a single SQL Server database.
Last year I joined a BizTalk development team where they were already using the BizTalk Deployment Framework (BTDF)() along with BizTalk 2010 and TFS 2010. I was really pleased because I’d heard a lot about the BTDF but hadn’t previously found the time to work with it. Working with the deployment framework can be quite daunting initially because it provides so much functionality. Fortunately the existing team members we able to quickly answer my questions.
At the time, the deployment framework was being used in the “conventional” way:
It was at this time “I had a dream!”. Wouldn’t it be great if a check-in of source code triggered a build, automated deployment and test of our BizTalk applications. Of course, this is not a very original dream and Continuous Integration for non-BizTalk applications is very common. Just to make the task more challenging, I wanted to keep the Build Server free of BizTalk and instead deploy to a remote two-node BizTalk server group (known as our “dev” group). If I could get this this to work then it should be easy to adapt so that the BizTalk application could be deployed to other environments higher up the food chain, namely “test”, “pre-prod” and, dare I say it; “prod”.
The required development / configuration can be broken up into three categories:
When built with TFS, the folder structure is different to that when simply building from within Visual Studio. With a standard VS build you may have you may find your solution is c:\development\solution name\solution.sln. When TFS builds it first does a “get latest” of the source code into the build agent folder. The path for this is determined by the Build Agent’s “Working Directory”. This can be accessed Start\All Programs\Microsoft Team Foundation Server 2010\Team Foundation Administration Console. From the UI select Build Ahent Properties. Another dialog will be displayed enabling you to set the “Woking Directory” for any particular build agent. The default is as follows: $(SystemDrive)\Builds\$(BuildAgentId)\$(BuildDefinitionPath). The following table gives a comparison of various build paths for a Standard VS build and a TFS build where System Drive = “C”, BuildAgentId = “3” and BuildDefintionPath = "\Solution\OvernightBuild”
c:\development\solution\solution.sln
c:\builds\3\tfs project name\overnightbuild\sources\solution name\solution.sln
c:\development\solution\project\project.csproj
c:\builds\3\tfs project name\overnightbuild\sources\solution name\project name\project.btproj
c:\development\solution\project\bin\project.dll
c:\builds\3\tfs project name\overnightbuild\binaries\project.dll
The btdfproj file has an item group describing the location path for the binaries of the solution’s: Schemas, Components, Pipelines, Pipeline Components, Orchestrations and Transforms. We need to configure these item groups to select from the TFS build structure where a TFS build is being used.
An example of the ItemGroup describing the binary locations (pulled from the BizTalkDeploymentFramework.targets file) can be seen below (note this wouldn’t actually be needed since we’re are using the default name for the orchestrations assembly but I am showing it to illustrate the difference between this and the item group used when a TFS build):
We are able to override this by adding the item group into the solution’s btdfproj file as follows:
Note how this overriding group will only be used if the “TeamBuild” variable is true, explained in the TFS section.
For the BizTalk deployment framework to create Virtual Directories, it expects to find them in the folder $(RedistDir)\ProjectName\bin. With a TFS build, components of the virtual directory (e.g. .svc, .dll etc.) won’t be in the correct place.
In order to remedy this, it is necessary to override the “CustomRedist” target, as illustrated by the following example:
Note: the WebServiceBinFolderPath has been declared as a property at the top of the btdf file, as illustrated below:
Solutions are built by TFS using what are know as build definitions.
An example of the VS UI used to manage settings within a build definition can be seen below:
One critical item from the above page that must be set is the “Items to build”. When not using BTDF this would typically point include the project files for any dependent projects, then the .sln of the BizTalk solution to be built. However, for any automated BTDF build, the .btdfproj file must be appended to the list of projects / solutions that need to be built.
From the previous screen grab, notice the pane labelled “Build process template”. This points to a .xaml windows workflow file which I have adapted from a standard build template.
One critical change can be seen in the following screen grab:
Note the condition that has been added to determine if the particular loop iteration is dealing with a btdfproj. If not then MS build is called in the normal way. However, if it has been requested to build a btdfproj then the following are passed as arguments to msbuild:
“/p:TeamBuild=True /t:Installer”
The critical parameter is TeamBuild, this use for this is explained in the section “BTDF Configuration Changes”
After the required assemblies have been built (into the binaries folder rather than \bin), then next tasks are:
In order to achieve this, a couple of custom Powershell script are called from the build xaml, one for the undeploy and another for the deploy. These can be seen in the following screen grab:
In order for these Powershell scripts to be generic, so that they can be used for any BizTalk application, it has been necessary to create many custom arguments that can be passed into the xaml execution and then passed from here into the Powershell scripts. These custom arguments are defined from a section at the base of the xaml designer window, as can be seen in the following screen grab:
Any argument defined in the xaml designer will become available on the build definition UI, once the xaml has been selected as the required build template. In the following screen grab, notice how the “ApplicationNameInBizTalk” argument has been made available to be configured for the build definition
As described in the previous section, toward the end of the xaml build execution Powershell is called to undeploy then deploy the BizTalk application.
The sequence of events required to achieve this are illustrated in the following diagram:
An illustration of the required Powershell scripts, their functions and organisation can be seen below:. The initial request to Powershell will be made by the TFS Build Service. The credentials of this service will be passed to the BizTalk server. It’s important that the account used for the TFS Build service has is a member of the BizTalk administrators group.
The following steps are required to enable Powershell remoting between the Build Server and BizTalk Servers:
When running the TFS build you may receive errors reporting that the schema cs files used for unit test (eg modified_OP-Order-v1.xsd.cs) could not be found.
Ensure that the “MSBuild Platform” property within the “Process” tab of the build definition is set to “X86” (rather than the default value of “Auto”)
Ensure you’ve included the .btdfproj in the projects to build on the process tab
Problem with assemblies missing from MSI
Check you have added the TeamBuild property to the btdfproj file: <TeamBuild Condition=" '$(TeamBuild)' == '' ">False</TeamBuild>
Tom Abrahams () for his amazing effort in taking the BTDF to where it is today. All those who think Tom deserves a Connected Systems MVP – have a word with Microsoft!
Randy Aldrich Paulo () for a great post that provided me with the basis for the development of my Powershell Scripts
I! | http://geekswithblogs.net/RobBowman/Default.aspx | CC-MAIN-2014-49 | refinedweb | 1,706 | 58.62 |
Gatsby.js with Contentful Content Management
January 20, 2018
One of the most magical aspects of Gatsby.js is its ability to pull in data from a wide variety of sources. Leveraging the power of GraphQL, you can populate a Gatsby website with local markdown files, a CSV file, a MongoDB database, or any of several popular content management systems (CMS). This means that if you have a Wordpress site, you can use Gatsby to pull in your posts and pages, and create a modern static website with a React front-end.
Gatsby also supports Contentful, which is a cloud-hosted, headless CMS. The "headless" part means that there is no front-end layer, only a back-end which you can log into (think /wp-admin but a lot simpler). This is a perfect fit for us because we can use Gatsby as the front-end layer and Contentful as the back-end to manage pages and blog posts.
Contentful and Gatsby.js Blog Demo (view source)
Start by installing the gatsby-starter-default official starter.
gatsby new gatsby-contentful
cd gatsby-contentful
Now this starter is pretty bare-bones; we're going to need a few additional plugins to make our site work.
yarn add gatsby-source-contentful gatsby-transformer-remark gatsby-image
The plugin
gatsby-source-contentful is what allows us to connect with Contentful's API and get our data. The other two plugins will help us format that data into something useful for our site.
Let's add some configuration to our
gatsby-config.js, which is located in the root folder.
module.exports = {
  siteMetadata: {
    title: 'Gatsby Default Starter',
  },
  plugins: [
    {
      resolve: `gatsby-source-contentful`,
      options: {
        spaceId: `###`,
        accessToken: `###`,
      },
    },
    'gatsby-plugin-react-helmet',
    'gatsby-transformer-remark',
  ],
};
Here we are activating the Contentful plugin, but we need to create a new account with Contentful in order to get the
spaceId and
accessToken. Good thing they offer a very reasonable free tier, so let's sign up! Head on over to Contentful.com.
Setting up Contentful
If you are new to Contentful, they have a beginner's guide to run you through the basics. For me personally, I come from a Wordpress background and Contentful reminded me a lot of Advanced Custom Fields. You can create a content model and populate it with whatever fields you like. Once you sign up, there should be a Sample Project created for you to experiment with. Let's add a new content model so we're all on the same page.
For this demo, the "Blog" model I created has 4 fields: title, slug, featured image, and content. The content field is long text, and I used the default markdown appearance. The model name, "Blog", is also important because it will be used later in our GraphQL query.
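To summarize the model in code form (this object is purely illustrative; it isn't a Contentful API payload, and the field IDs are my assumptions, so pick whatever IDs you like in the web app):

```javascript
// Illustrative summary of the "Blog" content model, not a real Contentful
// API payload. The field ids are assumptions; only the model name matters
// later, because it becomes the allContentfulBlog field in GraphQL.
const blogContentModel = {
  name: 'Blog',
  fields: [
    { id: 'title', type: 'Symbol', required: true },
    { id: 'slug', type: 'Symbol', required: true },
    { id: 'featuredImage', type: 'Link', linkType: 'Asset' },
    { id: 'content', type: 'Text' }, // long text, default markdown appearance
  ],
}

console.log(blogContentModel.fields.map(f => f.id))
// → [ 'title', 'slug', 'featuredImage', 'content' ]
```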
While still in Contentful, head to the APIs page and create a new API key. Here is where we can grab our
spaceId and
accessToken for our
gatsby-config.js. We can now also create some sample blog posts: go to the Content page and add them under the newly created Blog content type.
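Pasting the raw values into gatsby-config.js is fine for a demo, but that file usually ends up in source control. A common pattern (my addition here, not part of the original walkthrough) is to load the keys from environment variables with the dotenv package:

```javascript
// gatsby-config.js -- assumes a .env file in the project root containing:
//   CONTENTFUL_SPACE_ID=...
//   CONTENTFUL_ACCESS_TOKEN=...
// (install the package first with: yarn add dotenv)
require('dotenv').config()

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-contentful`,
      options: {
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      },
    },
  ],
}
```

Remember to add .env to your .gitignore so the token stays out of the repository.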
Querying for Contentful with GraphQL
After you add in your
spaceId and
accessToken, we can start up our Gatsby development process by going into the terminal and running
gatsby develop. Go to http://localhost:8000 to see your site. If we now head to http://localhost:8000/___graphql we can see what's known as GraphiQL, which is a graphical interface for experimenting with GraphQL queries.
Let's try and get our new blog post data from GraphiQL. On the left panel enter the following query and hit the play button.
{
  allContentfulBlog {
    edges {
      node {
        id
        title
        slug
      }
    }
  }
}
In the right panel, you should see the blog posts that you made in Contentful! GraphiQL also has some cool auto-complete features, which will show you all the options available for a particular set of data. If you add a new line right above
allContentfulBlog and hit
control + spacebar, you should get a dropdown with all the other fields you can query.
Notice that the
allContentful fields all have the content model's name appended, such as
allContentfulBrand or
allContentfulProduct. This is where the Contentful model name comes into play. Since we want our blog posts, the one we want to use is
allContentfulBlog to match our "Blog" content model.
Open up
src/pages/index.js in a text editor; this is our homepage component. Replace the code with the demo site's index.js code. Notice the GraphQL query at the bottom of the page, which looks very similar to our experiments in GraphiQL.
export const pageQuery = graphql`
  query pageQuery {
    allContentfulBlog(
      filter: { node_locale: { eq: "en-US" } },
      sort: { fields: [createdAt], order: DESC }
    ) {
      edges {
        node {
          id
          title
          slug
          createdAt(formatString: "MMMM DD, YYYY")
          featuredImage {
            resolutions(width: 300) {
              ...GatsbyContentfulResolutions
            }
          }
          content {
            childMarkdownRemark {
              excerpt
            }
          }
        }
      }
    }
  }
`
There are some extra filters and sorting options applied to the data, and we are getting fields such as the title, slug, featured image, and content. If you look at the
featuredImage, we are getting the resolutions to be used with
gatsby-image. For the
content, the query comes in as markdown and we need to use
gatsby-transformer-remark to convert it. After saving this page, you should see a list of Contentful blog posts on the homepage!
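The post points at the demo's index.js rather than reproducing it. Whatever your component looks like, it receives the query result above as this.props.data. The helper below is hypothetical (not from the demo source) and only illustrates the shape of that object and one way to flatten it for rendering:

```javascript
// toPostList is a hypothetical helper: it flattens the allContentfulBlog
// query result into plain objects that a list component could map over.
function toPostList(data) {
  return data.allContentfulBlog.edges.map(({ node }) => ({
    id: node.id,
    title: node.title,
    path: `/${node.slug}`, // leading slash for use with Gatsby's <Link>
    date: node.createdAt,
    excerpt: node.content.childMarkdownRemark.excerpt,
  }))
}

// A sample response shaped like the GraphQL query above:
const sample = {
  allContentfulBlog: {
    edges: [
      {
        node: {
          id: '1',
          title: 'Hello Contentful',
          slug: 'hello-contentful',
          createdAt: 'January 20, 2018',
          content: { childMarkdownRemark: { excerpt: 'First post' } },
        },
      },
    ],
  },
}

console.log(toPostList(sample)[0].path) // → /hello-contentful
```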
The last step is to have Gatsby generate the actual pages for each post. In the root folder, open up the
gatsby-node.js file and replace its contents with the following:
const path = require('path')

exports.createPages = ({ graphql, boundActionCreators }) => {
  const { createPage } = boundActionCreators
  return new Promise((resolve, reject) => {
    const blogPostTemplate = path.resolve('src/templates/blog-post.js')
    resolve(
      graphql(`
        {
          allContentfulBlog(limit: 100) {
            edges {
              node {
                id
                slug
              }
            }
          }
        }
      `).then((result) => {
        if (result.errors) {
          reject(result.errors)
        }
        result.data.allContentfulBlog.edges.forEach((edge) => {
          createPage({
            path: edge.node.slug,
            component: blogPostTemplate,
            context: {
              slug: edge.node.slug,
            },
          })
        })
        return
      })
    )
  })
}
This uses the Gatsby API to automatically create pages that are queried with GraphQL. We use a
blogPostTemplate in this file that doesn't exist yet, so create a new folder and file at
src/templates/blog-post.js with the following code:
import React, { Component } from 'react' import PropTypes from 'prop-types' import Img from "gatsby-image" class BlogPost extends Component { render() { console.log(this.props) const { title, createdAt, featuredImage, content } = this.props.data.contentfulBlog return ( <div> <h1 style={{ borderBottom: '1px solid #ccc', paddingBottom: '0.5rem' }}> {title} </h1> <p>{createdAt}</p> <div> <Img sizes={featuredImage.sizes}/> </div> <hr /> <div dangerouslySetInnerHTML={{__html:content.childMarkdownRemark.html}} /> </div> ) } } BlogPost.PropTypes = { data: PropTypes.object.isRequired } export default BlogPost export const pageQuery = graphql` query blogPostQuery($slug: String!){ contentfulBlog(slug: {eq: $slug}) { title createdAt(formatString: "MMMM DD, YYYY") featuredImage { sizes(maxWidth: 800) { ...GatsbyContentfulSizes } } content { childMarkdownRemark { html } } } } `
The query used on this page is similar to the one from the homepage, we just display the data in a different way. You'll probably need to stop and restart the
gatsby develop command in the terminal, this will rebuild the site with the new blog pages. Feel free to add more blog posts in Contentful, but right now you'll need to rebuild the site each time a new post is created to pull in that data. | https://codebushi.com/gatsby-with-contentful-cms/ | CC-MAIN-2019-22 | refinedweb | 1,190 | 56.05 |
Hi Dennis and others:
> On Sep 3, 2015, at 12:01 PM, Dennis Reedy <dennis.reedy@gmail.com> wrote:
>
> Hi Greg,
>
> Some comments inlined below.
>
>> On Sep 3, 2015, at 1046AM, Greg Trasuk <trasukg@stratuscom.com> wrote:
>>
>>>
>>> +1 for putting it in the net.jini.config API namespace, the DSL lives in net.jini.config.
>>
>> Arguably, the important thing, the thing that really _should_ be in net.jini.config,
is the Configuration interface., bakes that forms part of the API that one uses to create
services. If the Configuration interface changed, the code to start up any service would
immediately break.
>>
>> The ConfigurationFile implementation is in there because it’s in there. Developers
never see the ConfigurationFile class if they’re using the Configuration interface and the
ConfigurationProvider correctly.
>
> Well, it needs to be in jsk-platform.jar is the important thing. If we choose to make
implementations of the Configuration interface go to org.apache.river thats fine. Lets just
be consistent.
>
I’m with you on this
>>
>>> This is a good thing, we should consider deprecating the DSL in favour of Groovy.
>>>
>>
>> Really? We should force Java developers to learn a new programming language so they
can configure their system?
>
> You dont have to learn a new language. Groovy is just an extension of Java. You can certainly
learn some of the new idioms and approaches that Groovy provides, or just use straight Java.
For that matter, learning some of Java 8’s new features is similar to learning some of the
Groovy idioms. I dont think it’s a stretch at all for Java developers to use a Groovy based
configuration approach, at least it hasn’t been for so far.
>
> Lastly, the current configuration approach requires learning. You have Java syntax, but
no behavior. You cannot provide logic, just invocations to static methods. I have found that
this has always been a stumbling block for those that have to learn how to configure Jini
services. The common reaction I have heard is; “What is this? Is it Java?” The answer
is, well kind-of sort of, but no.
>
>>
>> I do not understand your logic in saying “we should consider deprecating the DSL
in favour of Groovy”. Nor your logic. I’m not saying the Java-like Configuration DSL
is wonderful, but surely a Groovy-based DSL vs a Java-based DSL is purely a matter of taste.
>
> No, its functionality. You cannot provide logic with the current approach, you can with
Groovy.
>
>>
>>> There's no standards body for Jini standardization, we need to be able to manage
and evolve our API sensibly,
>>
>> This is my point, lest you think I’m just “resisting progress”. Sensibly is
the key word. We, the Apache River project, inherited the Jini Specification, but then we
very purposefully drew a line around it, saying “This is a point of reference for when people
write services”. We adopted the policy that changes to the specification need to be discussed
and voted on, because those changes affect users of the tool set. Changes to the Jini API
cross a line of demarcation where _we_ decided, long ago, that “we really need to talk about
it”. That’s why I’m challenging cavalier statements like “we should consider deprecating
the DSL in favour of Groovy”.
>>
>> The demarcation point of the specification should serve as a reminder that we’re
writing this thing in order to let people create service-oriented architectures. We need
to consider the users.
>>
>
> And how exactly are we not doing this?
>
>>
>>
>>> locking out progress would lead to paralysis and inevitable obsolesence.
>>>
>>> The Groovy configuration is far superior to the DSL in many ways,
>>
>> Please specify.
>>
>>> leaving it as an implementation detail, discourages usage.
>>>
>>
>> Here is my real point when I suggest that GroovyConfiguration might be best separated
out into a separate project. We could structure a project, discuss it, vote on a release
and have it into Maven Central by the end of next week. So users of River could have an easy
way to use a GroovyConfiguration pretty much RIGHT NOW (I realize they can use it now, but
it would be easier if they had a jar file with the right provider api hooks) instead of having
it when they get around to adopting River 3.0, which will be after we get around to releasing
River 3.0.
>
> Well you can use GroovyConfig right now, just add rio-platform.jar to your startup classpath
:)
>
> I am considering your point wrt creating a separate project, I warming up to it, but
I would really rather see River split into a multi-module project instead of splintering off
multiple repositories/projects that can be used with the River platform. I see the Groovy
configuration implementation as part of the River platform, not an external project.
>
I think we’re in agreement. Perhaps our confusion is because I haven’t fully explained
what I mean by “separate deliverable”. I should probably create a separate thread to
talk about “projects and deliverables” and how they relate to repositories. The gist
of what I’m getting at is that a “release” shouldn’t be a big thing. Right now, we
“release” River, and it generates 10 or so artifacts. Problem is, a good change to a
single artifact requires us to consider the impact on every other aspect of the project.
We need to reduce that coupling (given, of course, that there is always some artifacts really
do have natural coupling).
> If multiple repositories/projects is the intent/direction then we can define River core,
then create stand-alone repositories/projects that depend on River core. Move out Outrigger,
Mahalo, Norm, etc … Is that what you’d like to see?
> Different projects that can be added to core River? Just curious. Would certainly allow
things to move at different velocity.
Yep, that’s it in a nutshell. So if someone says “Here’s a patch that speeds up lookups”,
we can go ahead and consider it in isolation, and release it quickly. And people could build
the part they’re interested in working on, without having to understand the existing complicated
build structure.
>
>>
>> When I have, in the past, talked about “navel gazing”, this is what I mean.
Here we are, arguing over whether the existing configuration DSL should be entirely replaced,
and what the right package is, when we could have created a separate deliverable and had it
done by now, if only we were willing to use the actual extension mechanism that’s built
into the existing product rather than talk about changing the public API!
>
> The public API is not changing at all.
>
>>
>> When I argue against messing around with the JTSK, it’s because delivering useful
functionality to users in small increments will be faster than making any changes to that
behemoth. No matter how you slice it, the larger the deliverable, the longer things take,
especially if we’re doing our due diligence correctly and considering the downstream impact.
Believe it or not, when I show a bias against touching the JTSK I am promoting a bias towards
action.
>
> The JTSK is actually a misnomer, I see little starting point with the Jini Technology
Starter Kit. I for one, sincerely hope we mess around with the JTSK to make it more approachable
to developers.
>
Totally agree with you. Except instead of making the JTSK more approachable to developers,
I argue for making Jini technology more approachable. One of the big points I’d like people
to take from the river-examples work is that you absolutely do not need to download or build
the JTSK in order to use Jini/River.
>>
>> Pardon my venting. It’s because I have used Jini in real applications and truly,
truly think it’s a technology that we should be promoting, so anything that gets in the
way of ACTUALLY SHIPPING SOMETHING kind of gets under my skin. I’ll stop now.
>
> I hope I have misconstrued your point here, and I realize this is a vent, but with all
due respect Greg, you seem to imply that we have not, or do not currently use Jini in real
applications. I hope you realize this somewhat misguided at best, and that we are all trying
to make River more applicable and work in real applications. The reasons why contributions
like GroovyConfig (and others) have been introduced is because they are used in real-world
applications, make configuration easier (and more easy to understand).
>
Yeah, you misconstrued my point; my fault - I guess I wasn’t clear enough. I actually REALLY
LIKE the idea of getting Groovy configuration out there in a ‘groovy-config.jar’ that
you can just drop into your classpath, or call it out in your dependency list in Maven or
Gradle. I’m trying point out that since the Configuration system already has the extension
mechanism of the META-INF entries, and GroovyConfiguration doesn’t depend on anything that
isn’t in the public API, there’s no need to tie it into releasing River 3.0.
> IMO, we are trying very hard to actually ship something. I realize you have some issues
with whats going on, and from recent discussions, I have gleaned the following:
>
> 1. Is it that the Groovy configuration approach has not been put into a separate project
although the Groovy configuration implementation (2 classes) has been part of the project
for 6 years?
>
> 2. Is it that small modifications to the project build (in qa-refactor-namespace branch)
have been made to create a groovy-config.jar that allows developers/deployers to use the Groovy
configuration capability?
>
See above - I’d like to get improvements to the user experience out there as quickly as
possible. Tying them to the process of releasing the JTSK can only slow things down.
> 3. Is it the dep-libs approach? I would also love to change this BTW.
I do dislike deps-lib.
>
> 4. Is it the package for net.jini.config.GroovyConfig and net.jini.config.Component?
This is easily solvable, as I had indicated previously if this is an issue I have no qualms
moving it. I just think we should be consistent in that all implementations of net.jini.config.Configuration
should be done the same way.
>
I’m pretty sure we agree here, Dennis. I don’t want to give the impression that I think
this is a big issue.
> 5. Do you ant any of the above put up for a vote?
Not at the present time.
>
> Lets figure this out and get this done.
>
>
> Regards
>
> Dennis
>
>
>
>
> | http://mail-archives.apache.org/mod_mbox/river-dev/201509.mbox/%3C17C3775E-5D97-4765-8362-2AFAE3D2FA29@stratuscom.com%3E | CC-MAIN-2019-04 | refinedweb | 1,775 | 64.2 |
.NET 3 - The Game Challenge at VBUG Newcastle
I did that presentation there yesterday. I had added a piece on top, to stretch things a bit more in the direction of WCF: the Game Status Viewer uses an additional published service to query game status information and displays that in a console window.
Here are the slides and samples in that most current version: Net3GameChallengeNewcastle.zip (393498 bytes)
During the presentation I also mentioned that permissions problem that comes up when you run the sample from Step6 as an “ordinary” user. If this happens, you will get an error message like this: HTTP could not register URL. Your process does not have access rights to this namespace (see for details).
The link to the Microsoft page has all the information you need, not just for Vista, but also for other Windows versions. On Vista, you use the netsh tool (as administrator) to add an entry to the urlacl list:
netsh> http add urlacl url= user=\Everyone
Of course this example is not very secure - modify as needed. Removing the entry from the urlacl list is equally easy to do:
netsh> http delete urlacl url=
A word of caution: in any real world application you should carefully evaluate your requirements when this happens to you. These URL namespaces are protected for good reason, and making them available for arbitrary use may have security implications.
If you examine the default urlacl list, you will see that there are entries there for certain use cases, like the one for the URL. You should probably use those in most cases instead of creating your own.
Important Update:I just heard from my friend Dominick that I’m really full of shit (update to the update: Dominick just asked me to make it clear that “full of shit” were not the words he used <g>) with regard to that recommendation I was making above. The entry is in fact there for a particular purpose, but that purpose is not for anybody else to use it. It is also highly doubtful whether it’s a good idea for that entry to be there at all, since it introduces a security hole by default… well, that was actually something that came to my mind when I saw it, but I wasn’t going to go into it in any detail. Anyway, apparently this hole should really have been removed in .NET 3.5, but it didn’t happen. In any case, again, it’s not recommended you use it.
So what are you supposed to do? In general, Microsoft takes the position that establishing a listener on a machine is always an operation that should require administrative privileges. I’m not entirely sure I agree with that, but that’s a different matter - in any case it always requires an administrator to do some configuration work when a process that doesn’t have admin privileges itself wants to do some listening. In the case of a boring TCP listener, it will be the Windows Firewall that reacts to this, and it brings up the usual nice UI that allows anybody with a knowledge of the Administrator password to do what’s necessary to make the process work. In the case of the HTTP listener, like in that demo of mine, it’s not the firewall, but instead http.sys, the subsystem for the handling of HTTP communications, that reacts to the listener becoming active. Http.sys in turn requires some special privilege handling because it uses a mechanism known as port sharing internally - yes, even if there isn’t actually any port sharing (in the colloquial sense) going on. For some reason, the powers that be at Microsoft have decided that a UI for this configuration, like the one the firewall has, is not needed… or whatever. In any case, it’s not there. And that’s what makes fiddling with netsh necessary.
Back to the question what you’re supposed to do. There are a few different things you could do. First - don’t use HTTP. WCF makes it very simple to go for TCP instead, for instance, and while there’s still security infrastructure in place, it’s a lot easier for the end user to deal with, assuming he knows the Administrator password. Of course one downside of this is that you can’t publish anything that requires HTTP to comply with expectations, like XML Web Services. Second - establish your own specific rule using netsh. Specific to your application, that is. The optimal way of configuring this rule would involve specifying a URL that’s as complete as you can make it, use a non-standard port, and create your own user group to assign the privilege to. If you don’t want your own user group, at least make sure not to use \Everyone - NT AUTHORITY\INTERACTIVE or NT AUTHORITY\Authenticated Users is a lot better than that.
So that’s it. I wasn’t going to go into a great level of detail on this, but there you go… at least I think my recommendation makes a lot more sense now. Dominick has a wrap up of his http.sys related discussion on his blog (and here’s his original post). He’s also written a tool for http.sys acl configuration on Windows XP and Server 2003, which you can find here. | http://www.sturmnet.org/blog/archives/2008/02/08/net-3-the-game-challenge-at-vbug-newcastle/ | crawl-001 | refinedweb | 901 | 59.53 |
How to print to stderr in Python?
I found this to be the only one short, flexible, portable and readable:
from __future__ import print_functionimport sysdef eprint(*args, **kwargs): print(*args, file=sys.stderr, **kwargs)
The function
eprint can be used in the same way as the standard
print("Test")Test eprint("Test")Test eprint("foo", "bar", "baz", sep="---")foo---bar---baz
import syssys.stderr.write()
Is my choice, just more readable and saying exactly what you intend to do and portable across versions.
Edit: being 'pythonic' is a third thought to me over readability and performance... with these two things in mind, with python 80% of your code will be pythonic. list comprehension being the 'big thing' that isn't used as often (readability).
Python 2:
print >> sys.stderr, "fatal error"
Python 3:
print("fatal error", file=sys.stderr)
Long answer
print >> sys.stderr is gone in Python3. says:
Old:
print >> sys.stderr, "fatal error"
New:
print("fatal error", file=sys.stderr)
For many of us, it feels somewhat unnatural to relegate the destination to the end of the command. The alternative
sys.stderr.write("fatal error\n")
looks more object oriented, and elegantly goes from the generic to the specific. But note that
write is not a 1:1 replacement for | https://codehunter.cc/a/python/how-to-print-to-stderr-in-python | CC-MAIN-2022-21 | refinedweb | 212 | 68.97 |
TOAST UI Grid is only while you're editing using view mode.!
TOAST UI products can be used by using the package manager or downloading the source directly. However, we highly recommend using the package manager.
TOAST UI products are registered in two package managers, npm and bower. You can conveniently install it using the commands provided by each package manager. When using npm, be sure to use it in the environment Node.js is installed.
$ npm install --save tui-grid # Latest version $ npm install --save tui-grid@<version> # Specific version
$ bower install tui-grid # Latest version $ bower install tui-grid#<tag> # Specific version
TOAST UI products are available over the CDN powered by TOAST Cloud.
You can use the CDN as below.
<link rel="stylesheet" href="" /> ... <script src=""></script>
If you want to use a specific version, use the tag name instead of
latest in the url's path.
The CDN directory has the following structure.
tui-grid/ ├─ latest/ │ ├─ tui-grid.comb.js // This file includes the backbone and underscore. │ ├─ tui-grid.comb.min.js │ ├─ tui-grid.css │ ├─ tui-grid.min.css │ ├─ tui-grid.js │ └─ tui-grid.min.js ├─ v2.10.0/ │ ├─ ...
Add the container element where TOAST UI Grid will be created.
<div id="grid"></div>
TOAST UI Grid can be used by creating an instance with the constructor function. To get the constructor function, you should import the module using one of the following ways depending on your environment.
var Grid = tui.Grid;
var Grid = require('tui-grid'); /* CommonJS */
import Grid from 'tui-grid'; /* ES6 */
You can create an instance with options and call various APIs after creating an instance.
var instance = new Grid({ el: $('#grid'), // Container element columns: [ { title: 'Name', name: 'name' }, { title: 'Artist', name: 'artist' }, { title: 'Release', name: 'release' }, { title: 'Genre', name: 'genre' } ], data: [ { name: 'Beautiful Lies', artist: 'Birdy', release: '2016.03.26', genre: 'Pop' } ] }); instance.setData(newData); // Call API of instance's public method Grid.applyTheme('striped'); // Call API of static method
You can also see the older versions of API page on the releases page.
This software is licensed under the MIT © NHN Entertainment. | http://ui.toast.com/tui-grid/ | CC-MAIN-2018-30 | refinedweb | 351 | 58.48 |
How do I rotate a mesh around specified axis?
I have several
objmesh files that I am importing into a
Scente3Dview and I am trying to animate them. I need to move them into correct position relative to other objects and for example rotate them around its own center (not the 0,0,0 coordinate). For example here is my translate code:
Transform { id: centerFeedWheelTransform property real userAngle: 0.0 matrix: { var m = Qt.matrix4x4(); m.rotate(userAngle, Qt.vector3d(0, 0, 1)) m.translate(Qt.vector3d(8.66, 14.28, -18.04)); //m.rotateAround(Qt.vector3d(20,30,40),userAngle, Qt.vector3d(0, 0, 1)); // this did not work return m; } }
The problem with the code above is that when object is translated it's origin is not carried with it, so it turns around the scene (0,0,0) coordinate.
Any suggestions how I could move entities around in the scene and rotate them around their own independent origin? Ideally I like to set the origin for rotation to center of mass, but even if I can manually enter a vector, at least I can get by.
- kshegunov Qt Champions 2017 last edited by kshegunov
Hi,
Translate and scale matrices don't commute. To rotate around the object's own axis you need to switch to the object's local coordinate system. This ultimately means you need to do a translation, make the desired rotation and then reverse the translation. Suppose you have an object that is located at:
Qt.vector3d(10, 10, 4), you'd do something like this:
m.translate(Qt.vector3d(-10, -10, -4)); //< Move the object to the coordinate system's origin m.rotate(userAngle, Qt.vector3d(0, 0, 1)); //< Rotate around the z-axis with your angle. m.translate(Qt.vector3d(10, 10, 4)); //< Restore the object's original position
Kind regards.
Thanks for your help!
Is there a way to get the current location of my entitiy in code so I do not have to manually figure that out every time? That would make my life much easier since there are lots and lots of moving parts in my scene.
As an example, here is how I load the mesh file and create and entity with it:
Mesh { id: centerFeedWheelMesh source: "../../resources/CenterFeedWheel2.obj" }
Entity { id: centerFeedWheel components: [centerFeedWheelMesh, centerFeedWheelTransform] }
I can not find any way to do this in the docs. Please tell me there is a way to get the current 3D location of entity. Yes?
- kshegunov Qt Champions 2017 last edited by kshegunov
@Aras
The current location of a 3D entity is simply it's applied transformation's translation. There's no special property for this, you can get it from the
Transformobject (if you've applied any).
@kshegunov that is not true! I can load multiple
.objfiles and they will appear at different locations on the scene without me applying any transformation to them. Their coordinate relative to the origin is taken into account when importing the objects. For example, depending on where your object was in the blender world, it will appear in different location when imported into the scene.
That behavior is useful because you can arrange parts of a model in blender and then export them to
obj. Now when you load them in your 3D scene they will be in correct position relative to each other. My problem now is that I need to know what their position vectors are so than I could use the trick you mentioned to rotate them around a custom rotation axis.
- kshegunov Qt Champions 2017 last edited by
@Aras
I don't know, sorry. My early-dev experiment with Qt3D was almost half an year ago and I just bound the meshes to entities, which I could manipulate. I didn't load scenes.
@kshegunov no worries, you suggestion help me find a workaround for now. Here is how I did it, in case anyone else runs into the same problem:
Inside blender I place the 3D cursor to where I want to rotate the item around and set that as the origin of my object
Write down the
(x,y,z)coordinates of my object in blender properties view
Export selected object as
.objfile
Import mesh in qt
Mesh { id: centerFeedMountMesh source: "../../resources/CenterFeedMount.obj" } Mesh { id: centerFeedWheelMesh source: "../../resources/CenterFeedWheel.obj" } Entity { id: centerFeedMount components: [ centerFeedMountMesh, darkGreenMaterial] Entity { id: centerFeedWheel components: [centerFeedWheelMesh, orangeMaterial, centerFeedWheelTransform] } }
- I rotate my object using the following code: (will of course be different for each object)
Note: for some reason I had to swap Y and Z coordinates to get the correct vector. So if in blender X=10, Y=20, Z=30 then your translate vector will be
Qt.vector3d(10, 30, 20)
Transform { id: centerFeedWheelTransform property real userAngle: 0.0 matrix: { var m = Qt.matrix4x4(); m.translate(Qt.vector3d(8.06, -18.04, 14.28)); m.rotate(userAngle, Qt.vector3d(0, 0, 1)) m.translate(Qt.vector3d(-8.06, 18.04, -14.28)); return m; } }
This works pretty well for me right now, so I will mark item as resolved. I still like to know if I can get the coordinate of entity in my Qt3D program. If you know how to do that, please comment!
@Aras Hi, i am facing the same problem ! can i get the complete source code of your work around here so , that i can resolve my issue. Please your help is required. Thank you !
@Naveen_D the transform was really the key to getting it working, I can not share my full code but I can tell you the 3D scene works after I fixed the rotation axis issue with this workaround.
Can you share your code and tell me what issue is it that you are facing? Is it the same problem with objects rotating around wrong axis?
@Aras Thank you for the reply,
Actually the problem is, I have a car model, Using blender software i have sliced the car obj for few diff parts like door, wheel, window, bonnet etc and i have rearranged the whole car by adding the car parts in the code.
When i try to transform or rotate the door in a particular angle, it is not happening. The actual scenario i want is, to open and close the door as we do in a normal car.
@Naveen_D ok that sounds exactly like the problem I had. Please read my instructions above more carefully. The key is to figure out what is the coordinate of your door. You need to move (translate) the door to the origin in a way that the hinge ends up at (0,0,0) then you do your rotation and immediately after that use the opposite vector of you translate to move the door back to its correct location. This may seem counter intuitive but it works and does not cause any glitches in rendering. This is the line it is happening:
m.translate(Qt.vector3d(8.06, -18.04, 14.28)); m.rotate(userAngle, Qt.vector3d(0, 0, 1)) m.translate(Qt.vector3d(-8.06, 18.04, -14.28));
Inside blender I place the 3D cursor to where I want to rotate the item around and set that as the origin of my object
Write down the (x,y,z) coordinates of my object in blender properties view
Export selected object as .obj file
Ya i got that.. even i tried with what instructions u have given above.. but i didn't get the output.,,,Can you please elaborate these instructions above. I was bit confused what exactly i need to do.
And you are using the matrix there, and returning the value m, can you tell me where you are returning the value ?
You need to move (translate) the door to the origin in a way that the hinge ends up at (0,0,0)
as said by you, i have tried with the following code. Where i took the x,y,z values of object's dimension instead of 3D cursor x,y,z values from the blender software and i am using the same method as you have shown above. but the problem here is when i give those dimension values for both front and back door, i am getting a gap between the door and the car.
Can you tell me what i am doing is the correct way or what changes i need to do to make it work correctly.
I have few questions,
- Is it possible to apply animations to that matrix ?
- Based on a button click i want to rotate the door, how to do that ?
here is the code
main.qml
import Qt3D.Core 2.0 import Qt3D.Render 2.0 import Qt3D.Input 2.0 import Qt3D.Extras 2.0 import QtQuick 2.5 import QtQuick.Controls 1.4 Entity { id: sceneRoot Camera { id: camera projectionType: CameraLens.PerspectiveProjection fieldOfView: 25 aspectRatio: _window.width / _window.height nearPlane : 0.1 farPlane : 1000.0 position: Qt.vector3d( 10, 0.0, 15.0 ) viewCenter: carMainTransform.translation } OrbitCameraController {camera: camera} // FirstPersonCameraController{ camera: camera} components: [ RenderSettings { activeFrameGraph: ForwardRenderer { clearColor: Qt.rgba(0, 0.5, 1, 1) camera: camera } }, InputSettings { } ] CarEntity {} Transform { id: carMainTransform rotation: fromAxisAndAngle(Qt.vector3d(0, 1, 0), 30) translation: Qt.vector3d(5.0,0.0,5.0) } Entity { id: carMainEntity components: [carMainTransform] } }
CarEntity.qml
import Qt3D.Core 2.0 import Qt3D.Render 2.0 import Qt3D.Input 2.0 import Qt3D.Extras 2.0 import QtQuick 2.5 import QtQuick.Controls 1.4 Entity { id: mainEntity PhongMaterial { id: carMaterial } Mesh { id: carMesh source: "qrc:/Meshes/CarBody.obj" } // Car door Mesh { id: carDoorMesh source: "qrc:/Meshes/CarFrontDoor.obj" } PhongMaterial{ id: carDoorMaterial } Transform { id: carDoorTransform property real userAngle: -45.0 matrix: { var m= Qt.matrix4x4(); m.translate(Qt.vector3d(0.501096,1.5006,1.78036)) m.rotate(userAngle, Qt.vector3d(0,1,0)) m.translate(Qt.vector3d(-0.501096,-1.5006,-1.78036)) return m } } Mesh { id: carBackDoorMesh source: "qrc:/Meshes/CarBackDoor.obj" } PhongMaterial{ id: carBackDoorMaterial } Transform { id: carBackDoorTransform property real userAngle: -45.0 matrix: { var m= Qt.matrix4x4(); m.translate(Qt.vector3d(0.466782,1.48042,1.60597)) m.rotate(userAngle, Qt.vector3d(0,1,0)) m.translate(Qt.vector3d(-0.466782,-1.48042,-1.60597)) return m } } Entity { id: firstEntity components: [ carMesh, carMaterial ] Entity { id: secondEntity components: [carDoorMesh, carDoorMaterial, carDoorTransform] } Entity { id: thirdEntity components: [carBackDoorMesh, carBackDoorMaterial, carBackDoorTransform] } } }
Thank you. | https://forum.qt.io/topic/71343/how-do-i-rotate-a-mesh-around-specified-axis/14 | CC-MAIN-2020-05 | refinedweb | 1,732 | 57.47 |
Searching is one of the most important components of a web application. Take the example of an e-commerce platform with thousands of items on sale: to find the specific item you are looking for, you need to search 🔍 for it using the search component the platform provides.
Today we will learn to build a simple search form that filters a list of data, using React.
Setting up the project
For setting up your project, you can use either `create-react-app`, or you can go to CodeSandbox.
You can find an article on setting up your React project here.
After creating the project, let's first build a simple UI that has an input field and displays a list of search results.
Go to the `index.js` file in your project's `src` folder, clean up all the code inside it, and add the following code.
```jsx
import React from "react";
import ReactDOM from "react-dom";

function App() {
  return (
    <div className="App">
      <input type="text" placeholder="Search" />
      <ul>
        <li>Item 1</li>
        <li>Item 2</li>
      </ul>
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById("root"));
```
In the component above, we create a simple input form (which currently doesn't do anything) and a mock list of the results that will be displayed.
Now we apply two-way data binding to the input field: the value typed by the user is saved into state, and the input displays that state value.
```jsx
import React from "react";
import ReactDOM from "react-dom";

function App() {
  const [searchTerm, setSearchTerm] = React.useState("");

  const handleChange = event => {
    setSearchTerm(event.target.value);
  };

  return (
    <div className="App">
      <input
        type="text"
        placeholder="Search"
        value={searchTerm}
        onChange={handleChange}
      />
      <ul>
        <li>Item 1</li>
        <li>Item 2</li>
      </ul>
    </div>
  );
}
```
We have now created a state variable named `searchTerm`, which saves the data from the search input on every occurrence of the `change` event. The `handleChange` method takes the `event` object as its argument and sets the current value of the input on the `searchTerm` state using the `setSearchTerm` function provided by `React.useState`.
Now we create a mock list of data and search it based on the input provided by the user in the input box we created.

```jsx
import React from "react";
import ReactDOM from "react-dom";

function App() {
  // Mock list of data to search through (entries elided in this copy)
  const people = [/* ... */];

  const [searchTerm, setSearchTerm] = React.useState("");
  const [searchResults, setSearchResults] = React.useState([]);

  const handleChange = event => {
    setSearchTerm(event.target.value);
  };

  return (
    <div className="App">
      <input
        type="text"
        placeholder="Search"
        value={searchTerm}
        onChange={handleChange}
      />
      <ul>
        <li>Item 1</li>
        <li>Item 2</li>
      </ul>
    </div>
  );
}
```
In the above code snippet, we create a mock list/array named `people`, from which we are going to display the list in our component. We also create a state named `searchResults`, which is used to store the search results.
Now we apply the search functionality to our component.

```jsx
import React from "react";
import ReactDOM from "react-dom";

function App() {
  // Mock list of data to search through (entries elided in this copy)
  const people = [/* ... */];

  const [searchTerm, setSearchTerm] = React.useState("");
  const [searchResults, setSearchResults] = React.useState([]);

  const handleChange = event => {
    setSearchTerm(event.target.value);
  };

  React.useEffect(() => {
    const results = people.filter(person =>
      person.toLowerCase().includes(searchTerm.toLowerCase())
    );
    setSearchResults(results);
  }, [searchTerm]);

  return (
    <div className="App">
      <input
        type="text"
        placeholder="Search"
        value={searchTerm}
        onChange={handleChange}
      />
      <ul>
        {searchResults.map(item => (
          <li key={item}>{item}</li>
        ))}
      </ul>
    </div>
  );
}
```
In the above code snippet, the `React.useEffect` hook is used, which executes whenever one of its dependencies changes. `React.useEffect` takes two arguments: the first is the function to execute when a dependency changes, and the second is an array of dependencies the hook watches. Whenever the value of a dependency in that array changes, the function in the first argument executes.

Here the dependency is `searchTerm`, which changes on every input by the user, so the function in the first argument of `React.useEffect` runs on every keystroke. The following function gets executed:
```jsx
() => {
  const results = people.filter(person =>
    person.toLowerCase().includes(searchTerm.toLowerCase())
  );
  setSearchResults(results);
}
```
In the above function, the `filter` method is applied to the `people` array; it returns a new array containing the entries for which the condition returns `true`. The condition is `person.toLowerCase().includes(searchTerm.toLowerCase())`, which means: if the `person` entry in the people list includes the `searchTerm` (ignoring case), return `true`, otherwise return `false`.

The filtered list is then set on the `searchResults` state using the `setSearchResults` function provided by the `React.useState` hook.
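The same condition can be exercised as a plain function. Here is a minimal sketch (the sample names are assumptions, not the article's data) of how `filter` plus `includes` behaves:

```javascript
// Case-insensitive substring filter: the same condition used in the
// component's effect, extracted as a plain function for illustration.
function searchPeople(people, searchTerm) {
  return people.filter(person =>
    person.toLowerCase().includes(searchTerm.toLowerCase())
  );
}

// Hypothetical sample data.
const people = ["Alex", "Alexandra", "Bob", "Carol"];

console.log(searchPeople(people, "alex")); // both Alex entries match
console.log(searchPeople(people, ""));     // an empty term matches everyone
```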
Now that the search results are set on the state, we display them using `searchResults.map` in our component, which iterates over all the `searchResults` and renders them inside the `ul`.
```jsx
<ul>
  {searchResults.map(item => (
    <li key={item}>{item}</li>
  ))}
</ul>
```
The final result looks something like this
You can find the completed code here
Thank you.
You can also follow me on Twitter.
Discussion
Nice tutorial :) One comment - you don't really need to set the filtered results on the state in this case. I'd just filter the people according to the search term during render:
Here's a sandbox.
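The render-time approach described above can be sketched as a plain derivation, with no extra state (the filtering logic only; in the component this would simply be a `const` computed in the render body):

```javascript
// Derive the results at render time instead of storing them in state.
// No setSearchResults and no useEffect are needed for this.
function deriveResults(people, searchTerm) {
  return people.filter(person =>
    person.toLowerCase().includes(searchTerm.toLowerCase())
  );
}

// Each "render" just recomputes the list from its current inputs.
const people = ["Ana", "Ben", "Benjamin"]; // hypothetical data
const results = deriveResults(people, "ben");
console.log(results);
```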
Since setState is asynchronous isn’t it better to use useEffect to handle this side effect? Please explain if I’m wrong here.
In this case `setState` will cause a re-render, so the component will be rendered with the new value of the filter, which is then applied to produce the filtered array before displaying the final results.
I still think `useEffect` is a better approach (more readable, maintainable), because calculating the search is in reality a side effect and could be left asynchronous, not blocking rendering. When the search calculation is complete, setting state triggers a re-render with the search results, instead of blocking rendering while the user is typing a search term.

The functional component itself is the `render` function; I prefer to leave it as it is and use helper functions to perform side effects. But this is just my opinion, what do you think?
The two approaches are functionally similar. The main difference is that with the `useEffect` approach you introduce extra unnecessary code, plus save unnecessary values to the state which can be derived at render time. This is similar to the discussion of storing derived data on the state.

It's not that the `useEffect` approach is wrong here, it's just that it can be simplified :)
Got it, I'm learning a lot about React from this conversation. Thank you for the link :')
Sure thing! :) I also wrote an article about some of the common mistakes with React:
The most common mistakes when using React
Hi Alex, can you make a component with React function hooks which has sort, filter, and search features using your method? The final result should be a single element that can be mapped to show the cards accordingly, and it should not alter the existing data in the array, but re-arrange/show it accordingly.
I have sample data as below (the array key for the sizes was garbled in this copy and is assumed to be `size`):

```javascript
const data = [
  {
    _id: "dress1",
    image: "/images/fans.jpg",
    title: "shirt",
    size: ["S", "L", "XL", "XXL"],
    price: 29.9584
  },
  {
    _id: "dress2",
    image: "/images/mcb.jpg",
    title: "Pants",
    size: ["S", "M", "L"],
    price: 18.78
  }
];
```
I'm facing a problem filtering this type of data.
Thank you, that was really helpful! One question: do you know how to make the search accept two search parameters? Like, I want to search based on the name and also the address.
This is what I tried, and it failed :(
I think it'd be `const results = !searchTerm && !searchTerm2` instead of `||`, otherwise the filtering will be applied only if both search terms are present.
I came up with this solution and it works now, but only if one of the conditions is true. I want the method to filter on the two conditions at the same time. So, for example, I want to find all the people whose name is Alex and who live in New York.
For your specific case you can do it like this:
However, note that the filter won't be applied unless both search terms are present.
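A two-term AND filter along these lines would fit that description (a sketch; the `name` and `address` field names are assumptions):

```javascript
// Filter on two search terms at once: an entry matches only when its
// name contains the name term AND its address contains the address term.
function filterByNameAndAddress(people, nameTerm, addressTerm) {
  return people.filter(person =>
    person.name.toLowerCase().includes(nameTerm.toLowerCase()) &&
    person.address.toLowerCase().includes(addressTerm.toLowerCase())
  );
}

// Hypothetical data for illustration.
const people = [
  { name: "Alex", address: "New York" },
  { name: "Alex", address: "Boston" },
  { name: "Maria", address: "New York" }
];

console.log(filterByNameAndAddress(people, "alex", "new york"));
// only the Alex who lives in New York remains
```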
Yeah, but that is exactly what I don't want. I want the user to be able to choose one of the filters, or both of them at the same time.
Hello! Did you find the solution in the end?
When I try to iterate over an array of object data, it causes an error. Will you explain why this error occurs and how to solve it? Screenshots:
dev-to-uploads.s3.amazonaws.com/i/...
dev-to-uploads.s3.amazonaws.com/i/...
dev-to-uploads.s3.amazonaws.com/i/...
Hi, the `name` property in the first object of the array is a number, which doesn't have string methods such as `toLowerCase()`. Make sure it is a string.
dev.to/amitdotcode/search-box-filt...
Please help me solve this problem. This is my code link, please check it. My problem: I'm using a search filter method on the search box of my note app. Searching the note list works, but if I remove the text from the search field, my previously added (old) data doesn't show; only the filtered data shows. If you look at my code, add some notes, and filter, you will better understand what I'm trying to say. Link here:
codesandbox.io/s/unruffled-bose-tk...
How do I use this array instead of `people`?

```javascript
const searchData = [
  {
    urlName: "bb",
    linkurl: "/bb",
  },
  {
    urlName: "aa",
    linkurl: "/aa",
  },
  {
    urlName: "ea",
    linkurl: "/ee",
  },
  {
    urlName: "d s",
    linkurl: "/dd",
  },
];
```
Thanks for sharing your knowledge.
How do I handle this type of functionality and data? Please reply; I've shared screenshots. I call data from an API and then filter it:
codesandbox.io/s/unruffled-bose-tk...
Please help me, see my code. The problem: when I search in the search field of my note app it works, but when I remove the search field text, my previously added data doesn't show; only the search results show. See my code and you will better understand what I'm trying to say.
Thanks. I tried your exact code and got this warning:
29:6 warning React Hook useEffect has a missing dependency: 'filtered'. Either include it or remove the dependency array react-hooks/exhaustive-deps
Any suggestions?
My search isn't working
Actually, the search is working. I didn't realize that I had to type lower case. Can that be removed? Most people will start typing names in upper case (at least the first letter).
Great.
One nitpick here: Make sure you trim your string for searching. Trim removes all the spaces at the start and end of the string.
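A sketch of how that trimming might be applied in the filter (a suggestion, not the article's code):

```javascript
// Trim the search term before matching, so stray leading/trailing
// spaces (e.g. from mobile keyboards) don't break the search.
function searchPeople(people, searchTerm) {
  const term = searchTerm.trim().toLowerCase();
  return people.filter(person => person.toLowerCase().includes(term));
}

console.log(searchPeople(["Rose", "India"], "  rose "));
// the padded term still matches "Rose"
```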
This is great for simple arrays, but what about arrays of objects and you want the search to be broader in scope. For instance if you want to search a fitness class (which is an object) by name, or by duration, or by intensity or any other property
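One way to broaden the match across several object properties looks like this (a sketch; the class shape and field names are assumptions):

```javascript
// Match a fitness class if ANY of the chosen properties contains the term.
function searchClasses(classes, searchTerm) {
  const term = searchTerm.trim().toLowerCase();
  return classes.filter(cls =>
    [cls.name, cls.duration, cls.intensity]
      .map(String) // tolerate non-string values, e.g. duration in minutes
      .some(value => value.toLowerCase().includes(term))
  );
}

// Hypothetical data.
const classes = [
  { name: "Spin", duration: 45, intensity: "high" },
  { name: "Yoga", duration: 60, intensity: "low" }
];

console.log(searchClasses(classes, "high")); // finds Spin by intensity
console.log(searchClasses(classes, "60"));   // finds Yoga by duration
```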
The cover photo looks like the macbook has tin worm 🐛
🤣🤣 Actually i got the picture from unsplash
Okay well whoever took it, they must have been worried about yin worm 😁
Great post
How do I change the code if I want people to display as a list by default, without typing something in the searchbox?
Thanks for this!
helpful, thanks.
Has anyone built a search app that has a form that the user submits and then displays the results?
Search only works with lowercase letters. It doesn't work with uppercase ones.
Hi, thank you for this post, it was very helpful. I was wondering, is there a way to make the list hidden, and only show when the user is actually searching?
When I try to use this code to filter, I'm facing this kind of error again and again. Please help:
TypeError: oldData.toLowerCase is not a function
Such a detailed post! helped a lot to understand hooks. Much appreciated.
It was really helpful, thanks!
Maybe somebody knows how to implement a feature which would highlight the matched search term in the results list?
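One possible way to highlight matches (a sketch; in React the middle piece would be rendered inside a `<mark>` element rather than wrapped in brackets):

```javascript
// Split an item around the matched term so the match can be wrapped
// in a highlight. Here we mark it with brackets; in JSX you would
// render the middle piece inside <mark>...</mark>.
function highlightMatch(item, searchTerm) {
  const term = searchTerm.toLowerCase();
  const index = item.toLowerCase().indexOf(term);
  if (term === "" || index === -1) return item; // nothing to highlight
  const before = item.slice(0, index);
  const match = item.slice(index, index + term.length);
  const after = item.slice(index + term.length);
  return `${before}[${match}]${after}`;
}

console.log(highlightMatch("Alexandra", "and")); // "Alex[and]ra"
console.log(highlightMatch("Bob", "and"));       // unchanged, no match
```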
US5520639A - Needleless hypodermic injection methods and device
This application is a continuation of Ser. No. 08/097,266, filed on Jul. 23, 1993, now U.S. Pat. No. 5,399,163, which was a continuation-in-part of Ser. No. 07/920,106, filed Jul. 24, 1992, now U.S. Pat. No. 5,383,851, and which is incorporated herein by reference.
The field of the present invention is needleless hypodermic injection methods and devices.
Various needleless hypodermic injection devices have been known and used in the past. These devices, also known as jet injectors, typically use spring or compressed gas driven plungers to accelerate an injectant to a velocity sufficient to pierce through the skin and enter the underlying tissues.
While large jet injection apparatus have been successfully used for mass inoculations, e.g. in the military services, these apparatus are relatively complex, costly, limited in performance and not portable. Thus, injections using needles remain the standard despite their disadvantages (for example, accidental needle sticks and risk of spreading infection to both the patient and medical professional; safe disposal of the used needle; the patient's fear of needles; and pain caused by needle injections). Jet injection avoids or diminishes these disadvantages.
Although many portable needleless injectors have been proposed, these known devices have not achieved widespread acceptance in the medical field, due to a variety of factors.
Significantly, the characteristics of needleless or jet injections typically vary with the pressures exerted by the injection device, the nozzle diameter of the ampule, the patient's size, age and weight, the nature of the injection site, and the viscosity of the injectant.
The soft layers of tissue at standard injection sites in humans, listed in order from outside to inside, are: 1) the dermis; 2) the adipose; 3) the deep fascia, a tough membrane that surrounds the muscle; and 4) the muscle. The deep fascia and the skin are the toughest layers to penetrate with a jet injection. The adipose tissue is the most easily penetrated.
Parenteral injections into humans are classified according to four well established tissue regions in which the injectant may be deposited. These are: intra-dermal, subcutaneous, intra-muscular, and intravenous. With intra-dermal injections, the injectant is deposited in the dermis layer. With subcutaneous (SC) injections, the injectant is deposited in the adipose tissue. With intramuscular (IM) injections, the injectant is deposited in the muscle. Intravenous injections are those deposited directly into a vein, an injection method generally not suitable for jet injection.
Intradermal injections, the least invasive of the three types, are employed when the dose is very small and it is desired to visualize patient response. Subcutaneous injections are employed when it is desired to prolong the time for absorption of the medication, when the dose is relatively small, or the injectant is non-irritating. Intramuscular injections, the most invasive of the three types, are employed when it is desired to have rapid absorption, when the medication is irritating, or when the dose is relatively large.
Absorption is not solely dependent on placement of the injectant, it is also dependent on the medication. Some medications are formulated to slow the rate of absorption. For example, intramuscular medications are sometimes oil based for this purpose. Similarly, subcutaneous medications sometimes contain crystalline compounds to delay absorption.
A long standing basic difficulty with jet injection has been the complex problem of determining which are the preferred injection variables. These variables include: 1) pressure profile, 2) nozzle size, 3) patient factors, i.e., age, sex and size, 4) injection site, and 5) medication viscosity. The repeated failures of the prior art to adequately solve this complex variables problem have contributed to the lack of acceptance of a handheld and portable jet injector in the medical community.
The pressure profile is the pressure exerted on the liquid injectant, typically measured over time, from the beginning to the end of the injection. The pressure profile must be selected, in combination with the nozzle size and other factors, to deliver the injectant through the skin to the desired depth, preferably with minimum pain.
The patient factors are also important. Gender is significant as women typically have a different adipose distribution than men. Men also typically have tougher tissue than women. The patient's age is important because infants are born with very little muscle, thick layers of adipose, and very easily penetrated skin. As infants age and become mobile, the adipose is gradually replaced by muscle. At adolescence, the introduction of hormones changes tissue composition. Aging through mid-life is usually associated with gradual weight gain and a decrease in tissue strength.
Injection sites are very significant because in all patients the thickness of the skin and adipose tissue varies at different regions of the body. The medical profession has established generally accepted injection sites for conventional needle syringes that are best suited for specific types of injection. The subcutaneous sites typically have a thick adipose layer and are free of major nerves and vasculature. Intramuscular sites typically have a thin adipose layer, a thick muscle layer, and are free of major nerves and vasculature.
Finally, the viscosity of the injectant must be considered as it affects the characteristics of the jet injection. In addition, it has been discovered that viscosity effects have been widely misunderstood in the prior art.
The prior art has generally not been able to overcome the complexities and difficulties of simultaneously accounting for all of the foregoing variables. Thus, jet injection, despite its great potential advantages, remains virtually unused. Accordingly, it is an object of the invention to provide improved methods and devices for needleless injection, so that the advantages of jet injection may be brought into use.
To these ends, in a needleless injection device, actuation of the device initially causes a valve to open. The device engages a plunger extending from an ampule. The plunger is then driven into the ampule, generating a high velocity jet of injectant from the nozzle of the ampule. Variable doses of injectant can be provided, as the device can engage the plunger at any position.
Also, to this end, another needleless injection device has a trigger on a housing to actuate an initiator valve. A reservoir is filled with compressed gas by actuation of the trigger. Upon reaching a predetermined pressure, a second valve opens to allow compressed gas to flow and act on a piston to drive a plunger into an ampule. Simultaneously, the mechanical movement of the second valve closes off further gas flow into the reservoir.
An interlock system is advantageously provided to prevent the trigger from actuating the initiator valve unless an ampule is properly installed in the device. Preferably, filters prevent stray liquified compressed gas from entering into internal chambers of the device.
In novel methods of needleless injection, the pressure profiles of the injectant, nozzle diameter, patient and injection site parameters, as well as injectant viscosity, are selected to achieve desired injection characteristics.
The present invention also provides a method of peri-fascial injection wherein the injectant is purposely deposited on the deep fascia in a thin sheet. This provides rapid absorption into the blood stream, without the invasiveness, injection discomfort, and occasional post injection soreness associated with injection deep into the muscle.
In the drawings, wherein similar reference characters denote similar elements throughout the several views:
FIG. 1 is a perspective view of the present needleless injection device;
FIG. 2 is a section view of the present needleless injection device taken along line 2--2 of FIG. 8;
FIG. 2a is a section view thereof further illustrating an ampule and plunger installed in the device with the device in a ready to inject position, except for the piercing mechanism, which is not shown having pierced the cartridge;
FIG. 2b is a section view thereof illustrating a clamping mechanism of the device in a pre-injection position;
FIG. 2c is a section view thereof illustrating a drive piston, clamping mechanism and plunger in a post-injection position;
FIG. 3 is an enlarged view fragment of the section view of FIG. 2, generally showing the back half of the device;
FIG. 4 is an enlarged view fragment of the section view of FIG. 2, generally showing the front half of the device;
FIGS. 4a and 4b are section view fragments thereof showing an alternate embodiment;
FIG. 5 is a further enlarged section view fragment of a valve shown in FIG. 3;
FIG. 6 is a partial section view fragment taken along line 6--6 of FIG. 8 and showing selected features only;
FIG. 6a is a partial section view of a preferred alternative housing and piston plenum shut-off valve design;
FIG. 6b is a partial section view fragment of an alternative preferred exhaust valve used in the housing shown in FIG. 6a;
FIG. 6c is an enlarged partial section view of a bleed gas valve shown in FIG. 6a;
FIG. 7 is an enlarged section view fragment of the initiator valve;
FIG. 7a is a section view fragment of an alternate preferred initiator valve body;
FIG. 7b is an enlarged section view fragment of an alternative preferred initiator valve;
FIG. 8 is a back end elevation view of the device;
FIG. 9 is a front elevation view thereof;
FIG. 10 is a side elevation view in part section of the present plunger and an ampule;
FIGS. 10a, 10b and 10c are section view fragments of alternate plunger and ampule embodiments;
FIG. 11 is a section view taken along line 11--11 of FIG. 10;
FIG. 12 is a graphic illustration of operation of certain features of the present device;
FIG. 13 is a front elevation view of the indicator ring shown in FIG. 4;
FIG. 13a is a side elevation view fragment taken along line 13a--13a of FIG. 13;
FIG. 14 is a side elevation view thereof in part section;
FIG. 15 is a perspective view of a second embodiment of the present needleless injection device in use to provide an injection into a patient's arm;
FIG. 16 is a schematically illustrated side section view of the injection device of FIG. 15;
FIG. 17 is an enlarged section view fragment thereof showing the device of FIG. 15 prior to an injection;
FIG. 18 is a similar view thereof showing the device of FIG. 15 just after an injection;
FIG. 19 is a section view taken along line 19--19 of FIG. 18 and showing the interlock system of the device in the ready or unlocked position;
FIG. 20 is a similar section view of FIG. 17 showing the interlock system of the device of FIG. 15 in the locked position;
FIG. 21 is an enlarged section view fragment of an alternate embodiment of the poppet valve of FIG. 17;
FIG. 22 is an enlarged section view fragment of the initiator valve of FIG. 15;
FIG. 23 is a perspective view of a poppet valve body;
FIG. 24 is an enlarged section view fragment of the poppet plenum shown in FIG. 21;
FIG. 25 is a graphic illustration of a pressure-volume preferred injectant pressure profile;
FIG. 26 is a schematic illustration of the present peri-fascial needleless injection;
FIG. 27 is a table showing ampule selection and parameters; and
FIGS. 28, 29, and 30 are graphic illustrations of pressure-time preferred injectant pressure profiles for ampules having 0.004, 0.008 and 0.014 inch diameter nozzles, respectively.
Turning now in detail to the drawings, as shown in FIGS. 1 and 2, an injector or needleless injection device 20 has a front end 22, a back end 24, a top surface 26 and a bottom surface 28. A trigger 30 is slidably mounted on the injector 20 adjacent the bottom surface 28. The injector 20 includes an upper housing 42 and a shorter lower housing 44 attached to the upper housing 42. The lower housing 44 has a flat upper surface 82 which lies against a flat lower surface 84 of the upper housing 42. The upper housing 42 and lower housing 44 are attached together with four (4) pins 86.
The upper housing 42 and lower housing 44 together are sized and shaped to readily fit the user's hand, with the user's palm resting over the top surface 26 and side of the injector 20, and with the user's index finger easily positionable over the trigger 30. The top surface 26 has a step or incline 34 at approximately the center of the injector 20. The upper and lower housings may alternatively be formed as a single housing.
Turning to FIG. 3, the lower housing 44 is substantially hollow and defines a lower housing space 48. Similarly, the upper housing 42 defines an upper housing space 46 (FIG. 6). Within the lower housing 44 is a cartridge chamber 50 for receiving and holding a compressed gas cartridge 54, e.g., a CO2 cartridge. A cartridge seat 52 at the forward end of the cartridge chamber 50 supports the back of the cartridge 54. A generally u-shaped plastic cartridge chamber cover 56 snaps into place on the lower housing 44 over the cartridge chamber 50.
A generally cylindrical piercing housing 58 is slidably positioned behind the cartridge chamber 50 within the lower housing 44. O-rings 60 seal the piercing housing 58 against the lower housing 44 while allowing the piercing housing 58 to slide within the lower housing 44. An annulus 62 extends around the circumference of the piercing housing 58 in between the o-rings 60. A cylindrical piercing body 66 is positioned within the piercing housing 58 and sealed against the piercing housing 58 by o-rings 88. A piercing point 68 extends forward from the front surface of the piercing body 66 and is centrally aligned with the neck of the cartridge 54. A seal 64 on the front end of the piercing body surrounds the piercing point 68. The seal 64 extends sufficiently forward to seal against the neck of the cartridge 54 before the piercing point 68 penetrates into the cartridge 54.
A bore 70 extends through the piercing point 68 and piercing body 66 connecting to the annulus 62. A piercing body nut 74 threads into the back end of the piercing housing 58, to secure the piercing body 66 and seal 64 in position within and against the forward end of the piercing housing 58. A piercing housing nut threads into the back of the lower housing 44. Spanner tool openings are provided in the piercing body nut 74 and the piercing housing nut 76 for assembly purposes.
A threaded shaft 72 extends through and engages threads in the piercing housing nut 76. A knob 78 attached to the threaded shaft 72 has a flip handle 80 which can be flipped up perpendicular to the plane of the knob 78 to allow the knob 78 and threaded shaft 72 to be more easily turned by hand. The forward end of the threaded shaft 72 bears against the back surface of the piercing body 66.
A hole 92 extends through the upper surface 82 of the lower housing to connect the annulus 62 to a bore 96 leading into the upper housing space 46. An o-ring 94 seals the connection of the hole 92 and bore 96.
At the back end of the upper housing 42 is a transparent window lens 98 secured to an end nut 108 by a rubber window retainer 100. A Bourdon tube 116 is soldered into a gauge base 114 and has an open end 124 extending into a gauge chamber 122. The pointer 102 extends perpendicularly from the back end of the Bourdon tube 116. As shown in FIG. 8, a gauge label 104 applied to the back end of a gauge body 106 around the Bourdon tube 116 provides a calibrated pressure scale with the scale and pointer visible through the lens 98. Stop pins extending from the back end of the gauge body 106 provide high and low pressure end point stops for the pointer 102.
The end nut 108 has threads 110 at its forward end which engage the upper housing 42. To calibrate the gauge for a given pressure, the gauge body 106 is rotated relative to the gauge base 114. When the correct index is achieved, the gauge body 106 and gauge base 114 are adhered together. A guiding pin 112 extends from the upper housing 42 into a keyway groove and holds the gauge body 106 in place while the end nut 108 is tightened.
Shims 118 are provided at the front surface at the gauge base 114, as required, for proper stack up and positioning of components in the upper housing 42.
An initiator valve housing 142 is spaced apart from the gauge base 114 by a filter retainer ring 120. A sandwiched assembly of filter disks 130 and synthetic filters 132 are contained within the back end of the housing 142. The filter disks 130 are preferably sintered stainless steel or bronze (and preferably 2.0 micron, 0.062 inch×0.500 inch diameter available from NUMET). The synthetic filter 132 separating the two filter disks 130 is preferably three layers of Tyvek 1025D, 0.006 inch×0.625 inch diameter, available from DuPont. Tyvek is a DuPont trademark for a high density polyethylene spunbonded olefin fiber material. O-rings 140 seal the filter disks 130 against the retainer 140 and synthetic filter 132. O-ring 126 seals the filter retainer 140 within the upper housing 42. O-ring 126 and o-ring 150 seal the gauge chamber 122 such that compressed gas provided through the bore 96 can flow out of the gauge chamber 122 only through the filters.
A port 148 extends through the back wall of the initiator valve housing 142 into an initiator valve chamber 146 within the housing 142. An initiator valve 144 within the initiator valve chamber 146 controls gas flow from the port 148 through the initiator valve chamber 146 to a reservoir port 154 formed through the forward wall of the initiator valve housing 142.
A regulation valve 156 includes a regulation seat 158 formed around the reservoir port 154. A dart 160 moves into and out of the regulation seat 158. The dart 160 has a threaded dart shaft 162 threaded into the narrower tube section at the back end of a poppet body 172. A dart pin 164 extending through the tube section of the poppet body 172 and the threaded dart shaft 162 secures the adjustment of the longitudinal position of the dart 160 in relation to the regulation seat 158. A reservoir spacer 166 within the upper housing 42 extends from the forward end of the initiator valve housing 142 to a poppet housing 178, forming a reservoir 168 around the tube section of the poppet body 172. O-rings 126 seal the reservoir spacer 166 against the upper housing 42 and seal the initiator valve housing 142 to the reservoir spacer 166.
A poppet valve 170 within the poppet housing 178 has a conical plastic poppet seat 188 centered within and positioned against a forward wall of the poppet housing 178. Referring to FIG. 5, the poppet body 172 has a sharp sealing edge 200 biased against the poppet seat 188 by a compression spring 186 held in position within the poppet housing 178 by a poppet nut 180. Alternatively, the sealing edge 200 and poppet seat 188 may be configured with unlike angles selected so that the inner diameter contacts first, to minimize creep effects. The poppet nut 180 has a threaded forward section 184 engaged to a threaded rear section 182 of the poppet housing 178. The poppet nut 180 is turned to adjust the compression on the spring 186 and correspondingly set the cracking pressure of the poppet valve 170. Preferably, the poppet valve 170 is designed to crack open at 450 p.s.i.
The diameter of the poppet seat 188 exposed to reservoir pressure prior to crack (thus that which governs cracking pressure) remains constant although the conical seat may creep, as the sealing surface, facing reservoir pressure, is parallel to the axis of poppet movement. As the plastic creeps, stress on the plastic is reduced by increased contact area on the outer part of the conical seat. Yet, the sealing diameter remains unchanged. Thus, creep is self healing and some creep is allowed without sacrificing cracking pressure consistency.
The conical seat is attached to the poppet housing 178 rather than the poppet body 172 while all hard (poppet) parts are made concentric and perpendicular. Thus, irregularities in the seat 188 or soft part will creep to conform to hard parts. The hard parts are free to rotate but will still conform to the existing soft part deformation.
Sliding friction of the poppet body 172 is advantageously minimized and consistent. Hence, the seal 206 used with the back up ring 204 may be a low friction seal such as a rubber U-packing equivalent to a Parker 8400 series seal. In addition, since this seal is pressurized only after cracking due to the poppet body being pressurized internally before cracking, seal friction is greatly minimized. The poppet body begins to move during opening before this seal is pressurized. Thus, breakaway friction is not increased by gas pressure. This minimizes time dependency of cracking pressure. Without this feature, it has been found that ampule peak pressure rises with time between shots.
By appropriate selection of the poppet sealing diameters (i.e., the tube section o.d., poppet housing i.d. and conical seal contact diameter) and spring force, i.e., for an approximately 450 p.s.i. cracking pressure and an approximately 150 p.s.i. regulation pressure, the poppet and regulation valves together can act as a low pressure regulator.
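As a rough illustration of that relationship, a simple force balance can relate spring preload, seat diameter, and cracking pressure (the formula and numbers below are illustrative assumptions, not taken from the patent):

```javascript
// Cracking pressure of a spring-loaded poppet, from a simple force
// balance: the valve opens when gas pressure acting on the exposed
// seat area exceeds the spring preload force.
function crackingPressurePsi(springForceLbf, seatDiameterIn) {
  const seatArea = Math.PI * (seatDiameterIn / 2) ** 2; // in^2
  return springForceLbf / seatArea; // psi
}

// Hypothetical example: a 0.1 inch seat diameter with roughly 3.5 lbf
// of preload cracks near the 450 p.s.i. figure cited in the text.
console.log(crackingPressurePsi(3.53, 0.1).toFixed(0));
```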
A cannula 176 is attached to and extends back from a drive piston 212 in front of the poppet valve 170 through the poppet housing 178 and poppet seat 188 and into the back section of the poppet body 172. Poppet body supply holes 174 extend through the poppet body 172 (FIG. 3). A cannula exhaust hole is provided through the cannula 176 at a position just slightly behind the o-ring 207 which slidably seals the cannula 176.
Referring still to FIG. 5, radially spaced apart drive bores 194 extend through the poppet housing 178 and connect a poppet annulus 198 to the front surface of the poppet housing 178. The poppet annulus 198, a ring-shaped space, is formed by the inside walls of the poppet housing 178, the front surface of the poppet 172 and the conical surface of the poppet seat 188. The front ends of the drive bores 194 are sealed by a preferably rubber disk drive bore seal 196 adhered to the back surface of the drive piston 212.
A joggle 192 in the poppet housing 178, which engages a corresponding lip within the upper housing 42, acts as a stop for the poppet housing 178. The reservoir spacer 166, initiator valve housing 142, filter ring, shims and the gauge body 106 are then subsequently installed within the upper housing 42 and stack up against the poppet housing 178, with the end nut 108 clamping these components in place.
The poppet nut can be rotated to adjust cracking pressure, after the injector is assembled. A window or opening, located in the compressed gas cartridge chamber, extends through the injector housing and through the poppet housing 178. The threaded forward section 184 of the poppet nut has spanner-like holes. A tool extending through the window can engage these holes to turn the poppet nut to adjust cracking pressure.
Still referring to FIG. 5, an o-ring 206, or more preferably a cup seal, slidably seals the poppet body 172 against the poppet housing 178 and poppet nut 180. The o-ring 206 and back up rings 204 prevent metal to metal contact during movement of the poppet body 172 and also act as pivots and guides to allow slight eccentricity between the poppet body 172 and poppet nut 180. Seal 207, preferably a cup seal, slidably seals the poppet body or rod 172 to the poppet housing 178.
With the drive piston 212 at its rearmost position (i.e., with the injector 20 in the "ready" condition), a ring-shaped plenum 202 is formed between the poppet housing 178 and the drive piston 212, or the o-ring 214 which slidably seals the drive piston 212 within the upper housing 42. The plenum 202 is just wide enough to ensure compression on the face seal 195. During actuation, the entire back surface of the drive piston 212 is acted upon by compressed gas. A backup ring 218 is provided adjacent to the drive piston seal 214 which is preferably a low friction U-packing, equivalent to a Parker 8400 series seal.
Turning to FIG. 4, a clamp piston 210 is slidably positioned within the drive piston 212 and slidably seals against the drive piston 212 with a clamp piston o-ring 222. The back surface of the clamp piston 210 and the front vertical wall of the drive piston 212 form a clamp piston plenum 216 (FIG. 3).
An o-ring joggle 220 adjacent the back end of the drive piston 212 acts as a stop for the clamp piston o-ring 222. A clamp piston spring 224 within the clamp piston 210 biases forward a jaw plate 228 butting against two opposing flange walls 229 (shown in phantom in FIG. 4) extending from a jaw retainer nut 242, allowing just enough clearance for the jaws to move freely. The force of the clamp piston spring 224 is accordingly transferred from the plate 228 to the flange walls 229 to the jaw retainer nut 242 and bypasses the clamp jaws 236. The clamp jaws 236 are biased outwardly or apart and away from each other by a pair of spaced apart jaw springs 238. The clamp jaws 236 have teeth 240. Each clamp jaw 236 has a planar ramp surface 234 flatly engaged by a corresponding planar ramp drive surface 232 on the forward end of the clamp piston 210. The surfaces 234 and 232 are preferably inclined at about 15 degrees to horizontal. This angle is selected to provide a proper balance between friction losses, contact surface length, travel and clamping force. The jaw retainer nut 242 is threaded into the front end of the drive piston 212.
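The trade-off embodied in the approximately 15 degree ramp angle can be seen with a basic wedge model: a shallower ramp multiplies clamping force but costs travel and is more sensitive to friction. In the sketch below, only the ramp angle is taken from the description; the axial piston force and the friction coefficients are assumed values for illustration.

```python
import math

def jaw_clamp_force(axial_force, ramp_deg, mu):
    """Radial clamping force from an axial wedge drive.

    Simple wedge model: radial output equals axial input divided by
    tan(ramp angle + friction angle). mu = 0 is the frictionless case.
    """
    phi = math.atan(mu)              # friction angle
    theta = math.radians(ramp_deg)
    return axial_force / math.tan(theta + phi)

# Hypothetical numbers -- only the ~15 degree ramp angle is from the text.
axial = 100.0   # lbf from the clamp piston, assumed
for mu in (0.0, 0.1, 0.2):
    f = jaw_clamp_force(axial, 15.0, mu)
    print(f"mu={mu:.1f}: clamp force ~= {f:.0f} lbf")
```

The model shows why the angle is a balance: at 15 degrees a modest axial force yields severalfold clamping force, while friction losses remain tolerable.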
A return spring 244 is compressed in between the jaw retainer nut 242 and a pressure plate 248. A forward nut 246 threaded into the forward end of the upper housing 42 supports the pressure plate 248.
An indicator ring 250, as shown in FIGS. 13 and 14, is rotatably positioned in between the front end of the upper housing 42 and a front collar 252 threaded onto the front end of the upper housing 42. The indicator ring 250 has colored sections on its outside edge visible through view ports 256 in the front collar 252, when the indicator ring 250 is turned to a ready to actuate position signifying that the ampule lugs are fully engaged with the injector lugs. A detent pin 288 biased against the back surface of the indicator ring 250 holds the indicator ring in either the ampule loading/unloading position or the ready position, and provides a positive tactile (and optionally an audible click) indication that the ampule is correctly and fully installed. Referring to FIG. 13a, the detent pin 288 slides in or slides against a track 324 cut into the back of the indicator ring.
The return spring 244 biases the pressure plate 248 forward, to clamp an ampule behind the lugs 254 on the front collar 252, and it also acts to return the drive piston after an injection.
The indicator ring 250 has three equally spaced apart turning lugs 258 extending inwardly, for engaging the lugs 382 at the back of an ampule 360 (FIG. 10). The front collar 252 has three equally spaced apart retaining lugs 254 extending radially inwardly, for engaging the front surfaces of the ampule lugs 382, to hold the ampule into the injector 20.
Referring to FIGS. 2 and 4, an actuator link 262 has a forward hook 264 in front of the indicator ring 250. A rear hook 260 on the actuator link 262 is attached to an actuator slide block 266 slidably mounted in between the upper housing 42 and lower housing 44. A slide block spring 268 pushing off of the lower housing 44 forwardly biases the actuator slide block 266. The forward surface of the actuator slide block 266 forms the trigger 30.
Referring to FIGS. 2 and 6, an exhaust valve fork 270 extends laterally and upwardly from the actuator slide block 266 to engage a collar on a spool valve 286. The slide block 266 has a rounded back end 272 facing an initiator valve cam 274 pivotally attached to a holder with a roll pivot pin 278. Together they are held in a cavity in the upper housing by the upper surface of the lower housing. A gap 280 separates the rounded slide block end 272 and the initiator valve cam 274 (FIG. 3). A set screw 276 threaded into the initiator valve cam 274 engages an initiator pin in the initiator valve 144.
As shown in FIG. 6, an orifice 282 in the upper housing 42 connects to a drive plenum exhaust bore 284 to continuously vent or bleed the drive plenum 202 to ambient pressure. The orifice has an approximately 0.004 in. diameter opening. The spool valve 286 attached to the exhaust valve fork 270 is slidably positioned within a spool housing 294 secured within an exhaust passage 296 in the upper housing 42. The spool valve 286 fits within a spool bore 302 in the spool housing 294 with a very close tolerance. While the spool valve 286 does not absolutely seal against the spool bore 302, leakage between them is very low. No o-rings are used on the spool valve to reduce static and sliding friction.
A reservoir exhaust bore 290 links the reservoir 168 to a spool valve plenum 300 around the spool valve 286. A spool valve hole 301 leads from the spool valve plenum 300 to an exhaust duct 304 behind the spool valve 286. O-rings 292 are positioned on either side of the spool valve plenum 300 to seal the stationary spool valve housing 294 around the reservoir exhaust bore 290. Muffler seals 306 seal the forward end of the spool valve housing 294 against a muffler tube 308 filled with fiberglass wool 310 or other acoustic material and leading to an exhaust port 316 open to ambient pressure. A muffler retainer 312 and set screw 314 secure the spool valve housing 294, muffler seals 306 and muffler tube 308 within the exhaust passage 296.
The initiator valve 144, as shown in more detail in FIG. 7, has an initiator valve pin 330 extending from a pin socket 332. A socket spring 334 overlying the pin socket 332 biases the initiator valve pin 330 outwardly or downwardly into engagement with the set screw 276 in the initiator valve cam 274. A valve stem 336 spaced slightly apart from the pin socket 332 has a stem collar 342 with a rubber seat ring 340 sealably engaging a seat neck 350, within an upper chamber 344 of the initiator valve 144. A stem collar spring 346 positioned in between a valve nut 348 and the stem collar 342 biases the seat ring 340 into engagement with the seat neck 350 to maintain the valve 144 in a closed position. The seat neck 350 is supported by, or part of, a valve seat 352 sealed within the initiator valve chamber 146 by an o-ring 338.
As shown in FIG. 6a, in an alternate preferred design, the housing is a single piece housing 303, rather than the two-piece housing shown in FIG. 2.
An alternative preferred design to the exhaust valve shown in FIG. 6 is illustrated in FIG. 6b wherein a valve stem 291 slides inside of a front seal 293 and a rear seal 295. A seal spacer 297 separates the front seal 293 and the rear seal 295. The rear end of the valve stem 291 has two narrow slots 305 which provide a channel for flow of gas when the valve is opened, while giving support to the pressurized rear seal 295 to prevent it from collapsing inwardly. The slots 305 form a gradual angle with the rear seal 295 to prevent it from catching on an abrupt edge which could damage the seal. When actuated, the valve stem 291 is pushed forward and the front edge of the valve slots 305 moves forward to the forward edge of the rear seal 295. This allows pressurized exhaust gas to flow from an inlet port 307, through the seal spacer 297, out of the valve slots 305, through a muffler 309 and into an outlet port 311. The front and rear seals 293 and 295 are both u-cup type seals to provide for low friction. The exhaust valve is virtually gas tight and requires very little force for actuation. The only significant force translated to the valve stem occurs after opening, when the stem is forced further open, which assists in returning the actuator of the injector.
FIG. 6c shows a piston plenum shut-off valve 321 used in the housing 303, as an alternative to the continuously venting orifice 282 and drive plenum exhaust bore 284 shown in FIG. 6. Shut-off valve 321 includes a piston 323 which has a filter 325, an orifice 327 and a seal 329. The piston 323 is biased upwardly and into an open position via a spring 331. When the main piston space is pressurized during the first millisecond of the injection event, and when the pressure reaches approximately 50 psi, the pressure drop across the orifice 327 acts against the piston 323 and drives the piston 323 downwardly against a shut-off seal 333. After the piston 323 seals against the shut-off seal 333, the force keeping the piston 323 down against the seal is provided by the pressure acting on the area of the annulus created by the piston seal 329 and the shut-off seal 333. The shut-off seal 333 is supported by a valve base 335 which has a vent 337 beneath the shut-off seal 333 to prevent seal escape due to trapped gases. Passageways 339 are provided for venting gas. When the pressure acting on the valve is reduced to approximately 50 psi or below, the piston 323 moves away from the shut-off seal 333 due to force provided by the spring 331, and gas flows freely through the filter 325, the orifice 327, and through the passages 339 in the valve base 335. The shut-off valve 321 conserves gas during the injection and provides improved gas efficiency. Comparative testing shows a 30-50% improvement in gas efficiency over the passive orifice design shown in FIG. 6.
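The latching behavior of the shut-off valve 321 follows from a force balance on the annulus formed between the piston seal 329 and the shut-off seal 333. The sketch below is a hedged illustration: the approximately 50 p.s.i. threshold is from the description, while the seal diameters (and hence the implied spring force) are assumed round numbers.

```python
import math

def annulus_area(d_outer, d_inner):
    """Annular area in square inches between two seal diameters."""
    return math.pi / 4.0 * (d_outer**2 - d_inner**2)

# Hypothetical dimensions; the text gives only the ~50 p.s.i. threshold.
d_piston_seal = 0.25    # in, assumed diameter at piston seal 329
d_shutoff_seal = 0.10   # in, assumed contact diameter at shut-off seal 333
a_ann = annulus_area(d_piston_seal, d_shutoff_seal)

threshold = 50.0                   # p.s.i., from the specification
spring_force = threshold * a_ann   # spring 331 sized to reopen at ~50 p.s.i.

for p in (150.0, 75.0, 50.0, 40.0):
    net = p * a_ann - spring_force   # >0: held closed; <=0: spring reopens
    state = "closed" if net > 0 else "open (venting)"
    print(f"{p:5.0f} p.s.i. -> net {net:+.2f} lbf, valve {state}")
```

The balance shows why the valve stays shut during the high-pressure portion of the injection and automatically reopens to vent as pressure decays, conserving gas relative to a continuously bleeding orifice.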
FIGS. 7a and 7b show an alternate preferred embodiment initiator valve 145 (illustrated in the closed position). The initiator valve 145 includes an initiator valve body 147 having an inlet 149 and an outlet 151. A valve poppet 153 is biased against a valve seat 155 by a spring 157. The valve seat 155 is preferably a high durometer ethylene-propylene which resists absorption by carbon dioxide. A valve seat retainer 159 supports the valve seat 155. A valve stem 169 passes through a valve stem guide 161 and a valve stem seal 163. A valve stem spring 165 biases the valve stem into a closed position. A valve stem seal 167 slidably seals the valve stem against the valve stem guide 161. The valve stem seal 167 is preferably a u-cup seal to provide a low break out friction force.
As shown in FIG. 10, an ampule 360 has three spaced apart lugs 382 at its back end. A flare 380 leads into an ampule chamber 384 to guide a contoured end 364 of a plunger 362 to engage the ampule 360. In between the contoured end 364 and a plunger head 370 of the plunger 362 are an o-ring 366 and a split Teflon back up ring 368.
As shown in FIG. 11, the plunger shaft 372 has a cruciform cross section to provide a high moment of inertia using minimum material for the disposable plunger and ampule. A collar 374 on the plunger 362 is spaced apart from the tip of the contoured end 364 so that the collar 374 contacts the back surface 388 of the ampule 360 just before the contoured end 364 of the plunger 362 reaches the front end of the ampule 360. This prevents the contoured end 364 from colliding with the front end of the ampule 360 and overstressing the ampule or buckling the plunger shaft 372. Webs 376 extending from the plunger shaft 372 support the collar 374. Although the back section 390 of the plunger shaft 372 may have teeth or ridges 378 matching the teeth or ridges 240 on the inside surfaces of the clamp jaws 236, a smooth back section 390 is preferred to avoid variations.
Preferred dimensional relationships of parts are shown in the drawings. As an example, the drive piston 212 outside diameter is preferably 1.125 inch.
In operation, the cartridge 54 is loaded into the injector 20 by removing or unsnapping the plastic cartridge chamber cover 56, placing the cartridge 54 into the cartridge chamber 50, with the neck of the cartridge 54 facing the piercing point 68, and then replacing the cartridge chamber cover 56. The cartridge chamber cover 56 snaps into position on the lower housing 44. A wavy brass liner 32 may be provided in the cartridge chamber 50 to increase thermal conductivity between the cartridge 54 and the injector 20.
Referring to FIGS. 2 and 3, the flip handle 80 on the knob 78 is flipped outwardly so that the knob 78 can be more easily turned. The knob 78 is turned by hand causing the threaded shaft 72 to advance forwardly and drive the piercing body 66 and housing 58 towards the cartridge 54. As the piercing body 66 approaches the neck of the cartridge 54, the seal 64 engages and seals against a perimeter on the flat end surface of the cartridge 54. As the user continues to turn the knob 78, the piercing point 68 engages and pierces the cartridge seal. Compressed gas from the cartridge 54 flows through the bore 70, into the annulus 62, through the hole 92 and moves through the bore 96 into the gauge chamber 122. The seal 64 prevents leakage of compressed gas into the cartridge chamber 50 which remains at ambient pressure. The cartridge seat 52 supports the cartridge 54 longitudinally against the force exerted by the seal 64 and piercing pin 68. O-rings 60, 88 and 94 prevent leakage from the passageways from the cartridge 54 to the gauge chamber 122. This relatively long supply path through highly thermally conductive materials or metal components improves heat transfer to the saturated CO2, to reduce the amount of liquid CO2 entering the gauge chamber 122. The heat transfer helps keep the cartridge pressure up, which otherwise tends to drop with each injection due to cooling, caused by expansion of gas out of the cartridge.
As the piercing body 66 and housing 58 slide forward within the lower body to pierce the cartridge 54, the knob 78 moves forward towards the piercing housing nut 76. With the piercing body 66 fully sealed and engaged against the cartridge 54, the piercing body 66 and housing are in a fully forward position and the back surface of the knob 78 is approximately flush with the back surface of the upper housing 42.
Compressed gas fills the gauge chamber 122, passes through the filters 130 and 132, flows through the port 148 (FIG. 3) and into the upper chamber 344 of the initiator valve 144 (FIG. 7). Within the initiator valve 144, the stem collar spring 346 biases the seat ring 340 on the stem collar 342 against the seat neck 350, thereby sealing the upper chamber 344 and preventing the compressed gas from moving forward.
The cartridge 54 contains a saturated propellant gas, such as CO2, in both liquid and gas states, at temperatures near room temperature. The filters 130 and 132 substantially prevent any liquid from the cartridge 54 from passing. This allows the device to be used in any orientation without affecting injection characteristics. Without the filters, liquid CO2 could pass into the initiator valve 144 and reservoir 168 and flash into gas during actuation of the injector 20, causing unpredictable injection characteristics.
As compressed gas fills the gauge chamber 122, the Bourdon tube 116 which opens into the gauge chamber 122 is also pressurized. The pressure within the Bourdon tube 116 causes it to spiral outwardly resulting in movement of the pointer 102 to indicate the gas pressure on the gauge label 104 (after the gauge body 106 and gauge base 114 have been properly calibrated). The user can then check the available gas pressure within the injector 20 by looking at the pointer 102 through the lens 98, as shown in FIG. 8.
The ampule 360, plunger 362 and a filling needle may be provided in a sterile package. The filling needle has a fitting to engage the Luer fitting 392 on the ampule. The ampule may be filled in the same way as a conventional needle and syringe. The filling needle is inserted into a vial of injectant and the injectant is drawn up into the ampule by pulling back on the plunger. Dosage is read by the alignment of the red o-ring 366 with volume graduations on the transparent ampule. The filling needle is removed and safely discarded. The ampule is then ready to be placed into the injector. Variable dosage injections are accordingly achieved by loading the ampule in the same manner as for a needle and syringe. In contrast to other injectors, the present injector 20 can inject various dosages without adjusting the injector. The ampule 360 may be filled to e.g., 1/3, 1/2, 3/4, etc. of its full volume capacity. Referring to FIG. 10, loading the ampule 360 with differing volumes of injectant will cause the plunger 362 to extend from the ampule 360 by varying amounts. However, since the injector 20 can successfully drive the plunger 362 from any plunger starting position, a single size ampule 360 can be used for various dosage injections. Ampules of varying volumes are not required.
With the ampule 360 loaded with the desired dosage and the plunger 362 extending from the ampule 360, the plunger and ampule are installed into the injector 20. The lugs 382 on the ampule 360 are aligned to pass through the lugs 254 on the front collar 252. The back end of the plunger 362 is passed through the front collar 252, through the return spring 244 and through the clamp piston spring 224. Since the teeth or ridges 378 on the plunger 362 extend continuously in between the webs 376 and the back end of the plunger, regardless of the dosage carried by the ampule 360, the teeth 240 of the clamp jaws 236 will overlie the plunger 362.
The back surface 388 of the ampule 360 comes to rest against the pressure plate 248. The lugs 382 on the ampule 360 fit in between the lugs 258 on the indicator ring 250. The user then turns the ampule (clockwise as viewed from the front) through an acute angle e.g., approximately 45°, from an ampule loading position to an ampule ready position. As the ampule turns, it causes the indicator ring 250 to turn with it as the sides of the ampule lugs 382 push against the sides of the indicator ring lugs 258. A step on each ampule lug prevents the indicator ring and ampule from being turned beyond their range. In addition, as shown in FIG. 13a, the track on which the detent pin 288 acts is deep enough that the detent cannot be forced out of the track. The two ends of the track act as detent stops. As the indicator ring 250 turns and locks into an injection ready position (FIG. 2a), the colored or painted sections on the outside perimeter of the indicator ring 250 move into view through the view ports 256. This indicates to the user that the ampule is properly installed in the injector 20 and ready for injection.
As the indicator ring 250 turns with the ampule 360 from the ampule loading position to the ready position, a cut out 320 in the indicator ring (FIG. 13) moves into alignment with the hook 264 on the actuator link 262. The trigger 30 can then be pulled back to actuate the injector 20 to provide an injection to a patient.
If the cut out 320 in the indicator ring 250 is not aligned with the hook 264, the actuator link 262 prevents the trigger 30 from moving to actuate the device. Therefore, the injector 20 cannot be actuated unless an ampule is properly installed and aligned in the ready position. With a cartridge 54 and an ampule 360 properly installed within the injector 20, the nozzle 386 of the ampule 360 is placed against the patient's skin and the trigger 30 on the actuator slide block 266 is pulled back by the user's index finger. As the slide block end 272 approaches the initiator valve cam 274, the exhaust valve fork 270 slides the spool valve 286 from an open position (which allows the reservoir 168 to bleed or exhaust through the exhaust bore to ambient) to a closed position wherein the spool valve 286 substantially seals off the reservoir exhaust bore 290. The reservoir 168 is accordingly sealed off before the slide block end 272 engages the initiator valve cam 274. The spool valve serves as an exhaust control valve.
As the actuator slide block 266 continues to move rearwardly, the slide block end 272 pushes against the initiator valve cam 274 levering the set screw 276 against the initiator valve pin 330. The lever arm design of the initiator valve cam 274 provides an approximately 4:1 mechanical advantage. This reduces the force necessary to pull the trigger 30 back to actuate the device. On the other hand, this mechanical advantage also incurs a 4:1 travel loss, which is advantageously employed in timing the operation of the initiator valve and spool valve, through adjustment of the set screw 276. The close tolerance and low leakage fit between the spool valve 286 and spool housing 294 add only a negligible amount of frictional drag on the trigger 30. There are no soft seals which slide with the trigger. The sliding movement of the trigger performs three functions: it controls the initiator valve, it controls the spool valve, and it provides an interlock when disabled by the actuator link 262. The absence of sliding elastomer seals on either the initiator valve or the spool valve and the 4:1 mechanical advantage of the initiator valve cam 274 allow both of these high pressure valves to be operated with minimum finger force on the trigger.
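The 4:1 force/travel trade-off of the initiator valve cam can be expressed with simple lever arithmetic. In the sketch below, only the 4:1 ratio comes from the description; the pin force and pin travel figures are assumed values for illustration.

```python
# Lever arithmetic for the initiator valve cam. Only the ~4:1 ratio is
# from the specification; the pin force and travel are assumed.
mechanical_advantage = 4.0
valve_pin_force = 20.0   # lbf needed to lift the initiator pin, assumed
pin_travel = 0.020       # in of pin travel required, assumed

# Force at the trigger is divided by the advantage; travel is multiplied.
trigger_force = valve_pin_force / mechanical_advantage
trigger_travel = pin_travel * mechanical_advantage
print(f"trigger force:  {trigger_force:.1f} lbf")
print(f"trigger travel: {trigger_travel:.3f} in")
```

The extra trigger travel, rather than being wasted, provides the sequencing margin used to close the spool valve before the initiator valve opens.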
Referring to FIGS. 3 and 7, as the actuator slide block 266 moves against the initiator valve cam 274, the set screw 276 pushes up on the initiator valve pin 330. The pin socket 332 is driven up against the valve stem 336 causing the stem collar to shift upwardly and separate the seat ring 340 from the seat neck 350, thereby opening the initiator valve 144. Similarly, in the embodiment of FIGS. 7a and 7b, the valve poppet spring 157 biases the valve poppet 153 toward the valve seat 155. Gas pressure from the gas inlet 149 drives the poppet 153 into the valve seat 155 creating a gas tight seal. The valve seat 155 is vented on the bottom side 171 to prevent the seat from escaping from the groove 173 due to trapped gases. The valve seat retainer 159 retains and vents the valve seat 155. The valve stem 169 is mechanically isolated from the poppet 153 to assure that the poppet closes without interference from the stem.
When the initiator valve 145 is actuated, the valve stem 169 slides up and contacts the valve poppet 153, pushing it away from the valve seat 155. Gas flows from the inlet 149 through a gap between the valve poppet and valve seat, through a side hole 175, around an annulus 177, and out through the outlet 151. When the valve stem is released, the valve stem spring 165 returns the valve stem to the neutral position and the valve poppet 153 also returns to the closed position.
Referring once again to FIGS. 3 and 7, with the initiator valve 144 opened, compressed gas flows from the cartridge 54 through the filters and initiator valve 144, through the reservoir port 154 past the dart 160 and into the reservoir 168. Referring to FIGS. 3 and 5, as the reservoir 168 fills with compressed gas, gas pressure also builds within the poppet chamber 208, as gas flows from the reservoir 168 through the poppet body supply holes 174.
Since the cannula 176 is opened to the reservoir 168, compressed gas flows from the reservoir 168 through the cannula 176 into the clamp piston plenum 216.
Referring to FIGS. 2b and 4, as pressure builds within the clamp piston plenum 216, the clamp piston 210 is driven forward compressing the clamp piston spring 224 and driving the clamp jaws 236 together, through the interaction of the ramp drive 232 on the clamp piston 210 and the clamp piston ramps 234 on the clamp jaws 236. The teeth 240 on the clamp jaws 236 clamp down and around the plunger 362.
With or without teeth on the plunger, the jaws engage the plunger with enough gripping force to avoid any slippage between the jaws and plunger during actuation of the injector.
The clamp jaws 236 and their driving mechanism perform two functions: They grab onto the plunger at whatever position the plunger is in, and they transfer driving force from the drive piston to the plunger.
If the ampule 360 is loaded with a maximum volume, the plunger 362 will be fully extended to the rear such that the clamp jaws 236 will engage the plunger 362 close behind the webs 376. On the other hand, if the ampule 360 is loaded with a minimal dosage, the plunger 362 will extend a shorter distance behind the ampule 360 and the clamp jaws 236 will engage the plunger 362 towards the back end of the plunger. However, regardless of the volume of the injectant in the ampule, the clamp jaws 236 securely clamp and engage the plunger 362 with the teeth 240 on the clamp jaws 236 locked into the teeth 378 on the plunger 362. The gas pressure in the clamp piston plenum 216 maintains the engagement of the clamp jaws 236 to the plunger 362 during the injection sequence. As represented in FIG. 12, the clamp jaws clamp onto the plunger before the poppet valve opens.
Referring to FIGS. 3, 4 and 5, pressure in the poppet chamber 208 continues to build until it is sufficient to crack the poppet valve 170 open. Specifically, the poppet spring chamber 226 is sealed from the reservoir 168 and the poppet chamber 208 and is vented to ambient pressure. As pressure increases within the poppet chamber 208, the rearward acting force resulting from the gas pressure acting on the incline surfaces 152 of the poppet body 172 will exceed the forward acting force of the poppet spring 186. When this "cracking point" is reached (preferably at approximately 450 p.s.i.), the poppet valve 170 snaps open. The poppet body 172 shifts or slides rearwardly. The sealing edge surface 200 separates from its sealing engagement against the conical poppet seat 188 allowing gas from the reservoir 168 to flow through the poppet chamber 208 to the drive bores 194. As the poppet valve 170 begins to open and the poppet body 172 moves away from the conical poppet seat 188, the annular front surface 230 of the poppet body 172 is acted on by gas pressure now in the poppet annulus 198. Since the surface areas acted on by the compressed gas are vastly increased with the addition of the front surface 230 of the poppet body, the force acting on the poppet body 172 rapidly escalates. The poppet valve 170 therefore opens with an "over-center" or hard-over action. When the poppet valve 170 opens and the poppet body 172 shifts rearwardly, the regulation valve 156 closes down via the dart 160 engaging and sealing against the regulation seat 158. Thus, additional gas supply to the reservoir 168 is, at least initially, restricted by the regulation valve 156, with substantially only the reservoir 168 then acting as a source of compressed gas.
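The "over-center" snap action can be quantified by comparing the pressurized area before and after cracking. The sketch below is illustrative: the 450 p.s.i. figure is from the specification, while both diameters are assumed round numbers, so the escalation factor it prints follows from those assumptions rather than from a stated design value.

```python
import math

def circle_area(d):
    """Circular area in square inches for a diameter in inches."""
    return math.pi * (d / 2.0) ** 2

# Hypothetical diameters -- the specification fixes only the ~450 p.s.i.
# cracking pressure, not these dimensions.
d_seal = 0.125    # in, conical seat sealing diameter (pre-crack area)
d_front = 0.50    # in, poppet front surface outer diameter (post-crack area)

p = 450.0  # p.s.i. at the cracking point
f_before = p * circle_area(d_seal)
f_after = p * circle_area(d_front)   # annulus 198 now pressurizes front surface 230
print(f"force on poppet just before crack: {f_before:.1f} lbf")
print(f"force just after crack:            {f_after:.1f} lbf")
print(f"escalation factor:                 {f_after / f_before:.0f}x")
```

Because force scales with the square of the exposed diameter, even a modest diameter jump produces a large, abrupt force increase, which is what drives the hard-over opening.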
To maintain at least the minimum pressure on the drive piston throughout the injection, pressure regulation of the reservoir is provided through poppet area ratios and spring forces (which may be readily determined for various capacity injectors by those skilled in the art). During injection of larger dosages, the reservoir pressure reaches a desired minimum pressure. Up to this time, the drive piston plenum has been supplied by a fixed supply of gas from the reservoir. At this point, the spring force, acting forwardly on the poppet body, overcomes the net pressure force, acting rearwardly on the poppet body. As the reservoir pressure drops below this value, preferably approximately 150 p.s.i., the poppet body moves forward, lessening the regulation valve restriction to incoming flow. Specifically, the dart 160 moves with the poppet body away from the seat 158 to allow commencement or increase of gas flow. Thus, the opening of the regulator valve consequently increases gas flow into the reservoir and increases the reservoir pressure. As gas pressure then increases above the desired minimum value, the poppet body again moves rearwardly to restrict the incoming flow. Thus the poppet valve and regulator valve act together as a reservoir pressure regulator (and consequently regulate drive piston plenum pressure and ampule pressure). Actual physical movement of the poppet body from fully open to full closure of the regulator valve is approximately 0.020 inch. Referring to FIG. 12, regulation movement, when present, occurs generally during the last half of the injection.
With this pressure regulation technique, the reservoir volume may be reduced, thus less gas is used, especially during smaller deliveries. In addition, the regulation/small reservoir combination, as compared to fixed volume/no regulation, results in smaller final pressures for smaller dosages of deliveries and larger final pressures for larger dosages. Thus final pressures are less dependent on dosage volume and ampule pressures are more consistent, which provides for more uniform injections.
The CO2 cartridge is filled with saturated CO2. Thus the source pressure is highly dependent on temperature (varying roughly 10 psi/deg F.). The peak ampule pressure is determined by the poppet valve cracking pressure which is independent of source pressure. The minimum delivery pressure, governed by the pressure regulation is also independent of source pressure. Both of these features are controlled by area ratios and spring rates. Thus the injector is substantially temperature independent.
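The stated sensitivity of roughly 10 p.s.i. per degree Fahrenheit can be sketched with a simple linearization of the saturated CO2 source pressure. In the example below, the slope is from the description; the approximately 850 p.s.i. reference at 70°F is an assumed round figure near the published room-temperature vapor pressure of CO2.

```python
def source_pressure(temp_f, p_ref=850.0, t_ref=70.0, slope=10.0):
    """Linearized saturated-CO2 source pressure (p.s.i.) near room temperature.

    The ~10 p.s.i./deg F slope is from the specification; the reference
    pressure is an assumed round number for illustration.
    """
    return p_ref + slope * (temp_f - t_ref)

for t in (50, 70, 90):
    print(f"{t} F: source ~= {source_pressure(t):.0f} p.s.i.")

# The cracking (~450 p.s.i.) and regulation (~150 p.s.i.) pressures are
# set by area ratios and spring rates, so they do not shift with the
# source pressure computed above -- hence the temperature independence.
```

The point of the sketch is the contrast: the source swings by hundreds of p.s.i. over an ordinary temperature range, yet the pressures that govern the injection do not.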
Certain known injectors can apparently provide variable dosage by pulling the plunger only part way back, leaving a gap between the drive piston and the plunger. With this technique, however, the drive piston must then travel across the gap before contacting the plunger, altering the piston momentum and dead volume parameters of the device, and substantially affecting ampule pressure characteristics. With the present clamping mechanism, dead volume and piston momentum are independent of dosage, and consistent ampule pressure characteristics are maintained.
FIG. 12 illustrates the effect of pressure regulation. With a smaller dosage of e.g., 1/2 ml or less, generally there is no pressure regulation. With larger dosages of e.g., over 3/4 ml, pressure regulation occurs. With intermediate range dosages of e.g., between 1/2 and 3/4 ml, some pressure regulation may occur.
The poppet annulus 198 and drive bores 194 create a "dead volume" which should be minimized for preferred injection characteristics, i.e., rapid pressure buildup and acceleration of the plunger. However, the flow restrictions or pressure drops caused by the poppet annulus and drive bores 194 are preferably also minimized for the same reason. In a preferred embodiment, ten radially arranged, equally spaced drive bores 194 are provided through the front surface of the poppet housing 178.
The rubber or elastomeric face seal 196 adhered to the back of the drive piston 212 assists in rapidly opening the poppet valve 170. The face seal 196 encourages the buildup of pressure in the drive bores 194 and poppet annulus 198 before pressurizing the drive plenum 202. The "dead volume" of the drive plenum 202 is therefore eliminated by the drive bore seal 196. Accordingly, the rapid pressure increase within the drive bores 194 and poppet annulus 198 shortens the time required for opening the poppet valve 170, providing a quick ampule pressure rise time and a more uniform ampule peak pressure. The poppet body supply holes 174 have a large diameter to minimize pressure drop from the reservoir 168 to the poppet chamber 208.
With the poppet valve 170 open, gas flows through the poppet annulus 198 and drive bores 194 into the drive plenum 202. The gas pressure in the drive plenum 202 acting on the relatively large surface area of the entire back surface of the drive piston 212 generates a large force on the drive piston 212 in the forward direction. The drive piston 212 accelerates forward with the clamp piston 210 driving the plunger 362 into the ampule 360. The injectant dose within the ampule chamber 384 is sprayed out of the ampule nozzle 386 in a high velocity jet which penetrates through the patient's skin. FIG. 2c shows the position of the plunger 362 and piston 212 after injection.
If the trigger 30 is held back for longer than necessary for the injection, only a small amount of gas is wasted since all spaces within the injector, except the drive plenum, remain virtually sealed while the trigger is held back. The drive plenum is opened to ambient pressure, but only through orifice 282 which severely restricts flow. The regulation valve 156 restricts flow while the trigger is held back.
After the injection, the trigger is released. The slide block spring 268 assisted by exhaust gas pressure returns the slide block 266 to its forward position. The initiator valve then closes. Then the exhaust valve fork 270 moving with the slide block 266 pulls the spool valve 286 forward reconnecting the spool valve bore 302 and spool plenum 300 to the reservoir exhaust bore 290. The spool valve and exhaust passage allow the injector to be quickly and quietly reset for another injection. Gas in the reservoir exhausts out through the reservoir exhaust bore 290 and exhaust passage 296. As this occurs, the exhaust gas pressure in the exhaust passage 296 pushes on the back of the spool valve 286 and helps to return the spool valve and slide block forward to their original ready positions. The slide block spring 268 consequently need only exert a slight force, thereby helping to reduce the finger force necessary to pull the trigger 30.
Immediately after the injection, the drive piston 212 is in the forward position (FIG. 2c), with the plunger shoulder in contact with and exerting a large force on the back end 388 of the ampule 360. The drive piston return spring 244, clamp piston spring 224 and jaw springs 238 are compressed. The jaws 236 are engaged with the plunger and the clamp piston 210 is forward. Each part must then return to the ready position.
Upon release of the trigger 30, the reservoir 168 is able to rapidly vent to atmosphere. Drive piston plenum gas vents into the reservoir, in part, through the poppet body, until the poppet valve closes. Gas also vents into the reservoir through the cannula 176, until the holes in the cannula are sealed by the o-ring 190 contained within the poppet seat 188. The remaining gas, which occupies a relatively small volume and is at a very low pressure, vents through the bleed orifice 282 connecting the drive piston plenum directly to the atmosphere through the drive plenum exhaust bore 284. Since the orifice 282 is always open, even during the injection, some beneficial drive gas is lost; thus it is a very small, restrictive orifice. Because the orifice 282 is small, if it were the only vent for drive piston plenum gas (i.e., if there were no cannula side holes), venting and reset time would be unacceptably long.
During venting, the following reset sequence occurs and is controlled by component areas and spring forces, which may be readily determined by those skilled in the art. First, the clamp jaws 236 and clamp piston 210 release. This must occur before the drive piston is released so that the plunger is not pulled back. The clamp piston spring force overcomes the opposing pressure force. This release occurs when the drive piston 212 is close to a force equilibrium condition. The pressure force must be close to the opposing spring force. If not, then the drive piston 212 will rapidly return (if the spring force is larger) or plunge forward (if pressure force is larger) causing noise and possible damage to the injector. Thus a force balance is established at the point of plunger release, regardless of the dosage.
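The force equilibrium described above, where the gas pressure force on the drive piston must nearly equal the return-spring force at the moment of plunger release, can be sketched numerically. The spring force and piston diameter below are assumed values chosen only to show the balance, not dimensions from the specification:

```python
# Sketch of the release force balance described above, with assumed values.
# A quiet release requires the pressure force on the drive piston to be
# close to the opposing return-spring force at the moment the jaws let go.
import math

def pressure_force(p_psi, diameter_in):
    """Gas force on a circular piston face, lbf."""
    area = math.pi * (diameter_in / 2) ** 2
    return p_psi * area

SPRING_FORCE = 30.0   # assumed return-spring force at release, lbf
PISTON_DIAM = 1.0     # assumed drive piston diameter, inches

# Plenum pressure at which the drive piston sits in force equilibrium;
# release is timed (via areas and spring rates) to occur near this point.
p_balance = SPRING_FORCE / (math.pi * (PISTON_DIAM / 2) ** 2)

print(round(p_balance, 1))
```

If the plenum pressure at release were much above or below this balance point, the piston would lurch forward or snap rearward, which is the noise and damage risk the text describes.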
After the plunger release, the drive piston 212 returns as the reservoir bleeds. The drive piston 212 is forced rearward by the drive piston return spring against the opposing pressure force. Gas exhaust and reset occurs quietly and quickly.
O-ring 222 serves as a seal and a bumper to quiet the clamp piston return.
During the injection, the plunger 362 is driven forward until the collar 374 contacts the back surface 388 of the ampule 360. Accordingly, if the trigger 30 is squeezed once and an injection given, then released and squeezed again after some delay (i.e., "second fire") without replacing the ampule, the jaws will grab the plunger with the plunger collar in the forwardmost position, i.e., in contact with the rear ampule face. Thus no forward movement of the drive piston will occur. A second fire does not damage the ampule, plunger or injector.
The cannula 176 is attached to and moves with the drive piston 212. The cannula exhaust hole 190 in the cannula 176 speeds the return stroke of the piston 212. The poppet valve closes before the drive piston begins its return. Thus a bleed hole in the cannula is required for gas to flow from the drive piston plenum to the reservoir. During the return stroke, up until the time the cannula exhaust hole 190 passes behind the o-ring 206, gas in the drive plenum 202 flows through the cannula exhaust hole 190 through the cannula 176, back into the reservoir 168 and out through the relatively unobstructed exhaust system of the reservoir exhaust bore 290 and the exhaust passage 296. After the cannula exhaust hole 190 passes behind the o-ring 206, the gas remaining in the now very small volume drive plenum 202, which is at a very low pressure, is exhausted through the orifice 282 and drive plenum exhaust bore 284 to ambient. Gas in the clamp piston plenum 216 similarly exhausts through the cannula 176 through the reservoir 168 and out through the reservoir exhaust bore 290 and the exhaust passage 296.
The spent ampule and plunger are turned and removed from the injector 20 which is then prepared for the next injection sequence. The ampule and plunger are preferably a single use disposable unit.
As shown in FIGS. 10a and 10b, the plunger may have tapered sections at the front or back which engage a generally complementary tapered section in the ampule. During an injection, the injector exerts hundreds of pounds of force on the plunger, which drives the tapered section of the plunger of FIGS. 10a and 10b into an interference fit with the tapered section of the ampule. The used and non-sterile plunger and ampule cannot then easily be re-used. The tapered sections can also act as a plunger stop, in place of the collar on the plunger of FIG. 10. The tapers on the plunger and ampule are slightly mismatched and lock together only under high forces (at the end of an injection) and not under low forces (during filling of the ampule). FIG. 10c shows another nonreusable ampule and plunger having a detent. The detent is dimensioned so that only a large force will cause engagement.
The injector can be modified to give multiple sequential injections to the same patient. As shown in FIGS. 4a and 4b, a drive piston stop 394 is added, and acts to stop the drive piston, as the plunger shoulder does in variable delivery. When the injector actuates, a small dose is delivered. The jaws then disengage and the injector resets. The plunger will automatically be in a "ready" position for the next shot, and the injector may be fired again to deliver the same small dosage. This sequence may be repeated to deliver several small dosage injections until the plunger shoulder contacts the ampule. Dosage may be adjusted by rotating the outer ring 396 to the desired value, indicated by graduations 398 on the injector housing. A longer ampule can be provided to allow for more sequential shots.
The present method of needleless injection uses a system of an injector and compatible ampules. The injector is designed to apply a specific force on the plunger of the ampules. The force applied to the plunger by the injector is varied, forming a force-displacement curve. At the beginning of the injection, the force applied to the plunger is quite high. As the plunger is advanced, the applied force is reduced substantially linearly until the volume injected reaches approximately 0.5 ml, and thereafter the force is held substantially constant. This force-displacement curve is independent of the ampule nozzle size. This force-displacement curve translates directly to an ampule pressure-volume injected curve. The injection system employs a single pressure profile and a family of ampules with various orifice sizes to achieve various depths of penetration. FIG. 27 shows preferred uses of various diameter nozzles with the pressure profile described below.
The traditional approach to measuring pressure profile is to use a pressure-time curve. However, a pressure-volume profile is particularly useful because this pressure profile is nearly the same for any size nozzle. In the following discussion, both time and volume will be used as a reference.
Referring to FIGS. 25 and 28-30, the preferred pressure profile has the following properties: First, the pressure rapidly increases from 0 to a value of about 3900-4300 psi (and preferably about 4100 psi) in less than 6 milliseconds (and preferably less than 1 ms). This quick pressure rise avoids "splash-back" and loss of injectant. This pressure range is sufficient to pierce the tissues, but not so high as to cause the excessive pain associated with higher pressures. The pressure is gradually reduced to approximately 1200-2000 psi (and preferably 1800 psi) in a generally linear (pressure-volume) fashion, corresponding with a volume discharged of 0.5 ml. In a pressure-time framework, the curve forms an exponential decay. At this point, the pressure is held constant until the end of the injection, at which time the pressure abruptly goes to 0 (optimally in less than 5 ms). Final pressures below about 1200 psi tend to result in "leak-back" of the injectant after the injection. The pressure profile is defined as the pressure immediately proximal to the nozzle. The above-described pressure profile covers an injection larger than approximately 0.5 ml. If the injection is less than this amount, the pressure-profile curve is simply truncated at the end of the delivered volume.
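The preferred pressure-volume profile can be written as a simple piecewise function using the preferred values from the text (4100 psi peak, linear decay to 1800 psi at 0.5 ml, then constant). The instantaneous rise and end-of-injection cutoff are idealized here as steps:

```python
# Piecewise pressure-volume profile described above. Peak, floor, and knee
# values are the preferred numbers from the text; the sub-millisecond rise
# and abrupt cutoff are idealized away.

def ampule_pressure(volume_ml, peak=4100.0, floor=1800.0, knee_ml=0.5):
    """Ampule pressure (psi) as a function of volume already injected (ml)."""
    if volume_ml < 0:
        raise ValueError("volume cannot be negative")
    if volume_ml >= knee_ml:
        return floor  # constant tail of the profile
    # linear (pressure-volume) decay from peak at 0 ml to floor at knee_ml
    return peak + (floor - peak) * (volume_ml / knee_ml)

print(ampule_pressure(0.0), ampule_pressure(0.25), ampule_pressure(0.8))
```

For a dose under 0.5 ml the same curve applies, simply truncated at the delivered volume, so a small dose never reaches the constant-pressure tail.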
Medication viscosity affects penetration of intramuscular injections in a direction contrary to the prior art. Experimental data shows that more viscous medications, in the range from 0.01 to 0.7 poise, have greater fascia penetrating capability, believed to result from the reduced turbulence, lower Reynolds number, and also apparently from higher core velocities for viscous injectants as compared to less viscous injectants. Thus, the present invention also includes the appropriate guidelines for selection of nozzle size with viscous medications. Viscous medications preferably use the same size orifice as for water based medications. Nearly all viscous medications are intramuscular injections. Testing shows that viscous medications have more energy to penetrate the deep fascia than water based medications, but do not go substantially deeper into the muscle. Therefore, the deposition into the muscle is comparable independent of medication viscosity.
The present peri-fascial injection is provided by using a nozzle diameter which is smaller than that which would ordinarily be used for an intramuscular injection. The peri-fascial injection is provided by using a SC nozzle (0.004") at an IM injection site preferably with less than 5 mm adipose. This works well because IM sites tend to have very thin layers of adipose tissue. The SC nozzle has sufficient penetrating energy to deposit the medication on the deep fascia when injected into a thin layer of adipose. A peri-fascial injection can also be given at an IM injection site having a 10-15 mm adipose layer using a 0.006 inch diameter nozzle and the above-described pressure profile. As shown in FIG. 26, the injectant 800 in a peri-fascial injection bores through the skin 802 and adipose 804, but not the fascia 806. Rather, the injectant forms a thin layer 808 over the fascia. The thin layer 808 may provide the same pharmacological effect as an IM injection, without penetrating the muscle.
In a second embodiment, as shown in FIG. 15, the present needleless injection device 520 has an ampule 360 at its front end. The ampule is held against the patient's skin while the device 520 is triggered to achieve the injection.
As shown in FIG. 16, the needleless injection device 520 has a tubular housing 524 and a bridge section 526 attached to the housing 524. At the back end of the housing 524 is a cartridge holder 528 which holds a compressed CO2 cartridge 54. A screw knob 532 is threaded through the cartridge holder 528 to drive the cartridge 530 into a piercing body assembly 538. A chamber port 536 extends through the back end of the cartridge holder 528 to vent the cartridge chamber 534 to atmosphere. A wavy bronze liner 600 between the cartridge 54 and cartridge holder increases heat transfer to the cartridge by improving metal-to-metal contact. Copper wool may also be pressed against the rounded back end of the cartridge 530 by a round plate on the forward end of the screw knob 532, to further increase heat transfer to the cartridge. A pressure indicator assembly 540 is contained within the housing 524 and bridge 526, in between the piercing body assembly 538 and an initiator body or valve body 542. A trigger 544 protrudes through a trigger opening 642 above the bridge 526 over the initiator body 542. A safety button 546 is attached to an interlock slide block 548 outside of the housing 524 and under the bridge section 526. A reservoir 660 is located in between the initiator body 542 and a poppet valve body 550 within the housing 524. A piston 552 slidably positioned within the housing 524 receives and drives an ampule plunger 554 which extends into and is provided with the disposable ampule 522.
As shown in FIG. 17, the housing 524 has a threaded back or tail section 566. A tensioning nut 562 and a lock nut 564 are tightened into the tail section 566 to hold the various internal components in position. The neck of the gas cartridge 54 is compressed against a washer face seal 560 by the screw knob 532. A spacer 568 surrounds the neck of the cartridge 530. The piercing body assembly 538 includes a piercing ring having a hollow point 572 protruding into the gas cartridge 54. A bore 574 extends through the piercing ring 570. Sintered filters 130 and a Tyvek filter 132 are included as previously described in the first embodiment. O-rings 576 seal the filters 130 against the piercing ring 570. Alternatively, filters 130 may be bonded or soldered to the piercing ring, for enhanced heat transfer to further minimize or prevent passage of any liquid beyond the filters.
FIG. 17 illustrates the needleless injection device 520 with the interlock system in the locked condition. FIG. 18 illustrates the same device in the triggered position as it appears during an injection sequence.
The pressure indicator assembly 540 includes an indicator housing 584 containing an indicator pin 588 which is biased downwardly by a compression spring 590. An O-ring 586 seals the indicator housing 584. The compression spring 590 is selected such that the indicator pin 588 will protrude slightly above the bridge 526 when the device has sufficient gas pressure for an injection. When the gas pressure is insufficient for an injection, the spring 590 drives the indicator pin 588 down and flush with the surface of the bridge 526, indicating to the user that the cartridge 530 needs to be replaced. A flange seal 598 seals the indicator pin 588 against the indicator housing 584.
An initiator assembly 592 includes the initiator body 542. A bore 582 extends from the front end of the piercing ring 570, below the indicator housing 584 and into a bore 594 leading into a lower initiator valve chamber 626 in the initiator body 542.
As shown in FIG. 22, which illustrates an enlarged detail of the initiator assembly 592 generally similar to the initiator valve 144 shown in FIG. 7, an initiator pin 602 extends above the initiator body 542 and is supported by a pin socket 604. An upper compression spring 606 biases the pin socket 604 against the upper surface of a spring guide 608. A valve stem 612 is positioned below the pin socket 604 and is upwardly biased into an initiator valve seat 610 by a lower conical compression spring 620. The valve stem 612 has a stem collar 614 having a seat ring 616 aligned with a seat neck 618 on the initiator valve seat 610. An O-ring 596 seals the initiator valve seat 610 against the initiator body 542. A hole through the stem collar 614 underneath the seat ring 616 prevents any trapped gas from dislodging the seat ring. An initiator valve nut 622 is threaded into the initiator body 542 and supports the lower compression spring 620. The lower initiator valve chamber 626 is formed by the initiator valve seat 610 and the initiator valve nut 622. The bore 594 extends into the lower chamber 626. The spring guide 608 and initiator valve seat 610 define an upper initiator valve chamber 628 connecting to a bore 624.
Referring once again to FIG. 17, a ball 632 is positioned on top of the initiator pin 602. The trigger 544 has a finger surface 640, a trigger notch 638, and a trigger arm 636 pivotally attached to the bridge 526 by a pin 634. The finger surface 640 of the trigger 544 extends through the trigger opening 642 in the bridge 526, with the device 520 in the locked position as shown in FIG. 17.
Referring to FIGS. 17 and 21, a regulation valve 784 has a regulation valve body 646 sealed against the inside walls of the housing 524 by an O-ring 692. The regulation valve body 646 has a central generally conical seat 650. A valve spacer ring 786 separates the regulation valve body 646 from a poppet valve body and surrounds the reservoir 660. The wall thickness of the spacer ring 786 can be made thicker or thinner to adjust the reservoir volume and correspondingly the pressure decay profile of the injector. A poppet nut housing 658 is threaded into the back end of the poppet valve body 664 and slidably supports a poppet plunger 652 having a dart valve 654 shaped to seal against the seat 650. A compression spring 656 biases the plunger 652 forward i.e., towards the ampule 522. A spring base 666, adjacent the front end of the poppet plunger 652, supports the forward end of the poppet spring 656. As shown in FIG. 17, the forward end of the poppet plunger 652 has a ball follower 670 engaged against a float ball 668 pivotally supporting a cup 672. FIG. 21 shows an alternate embodiment wherein the cup 672 is supported on a rounded forward end of the poppet plunger 652. An annular poppet face 674 (preferably of DuPont Type 12 Nylon) is positioned over and around the cup 672 and slidably seals against a bore 754 extending through the poppet valve body 664 (FIG. 23). The ball 668 or rounded forward end of the poppet plunger 652 allows the poppet face 674 to align and orient itself within the bore 754.
A low friction seal 648 having a spring biased graphite-filled PTFE jacket (Bal. Seal Engineering Co., Inc., Santa Ana, Calif. Series 31X) seals the poppet nut housing 658 against the poppet plunger 652. The poppet nut housing 658, which is threaded into the poppet valve body 664 is also sealed against the poppet valve body 664 by an O-ring 662. Another low friction seal 676, similar to seal 648, seals the poppet face 674 and the poppet valve body 664.
A plenum seat 690 having a rear neck ring 696 is positioned within the housing 524 against and in front of the poppet valve body 664. The neck ring 696 seals against the poppet face 674, except during actuation of the device. The piston 552 has a piston neck 698 which extends back through the plenum seat 690. Referring to FIGS. 21 and 24, the front surface of the poppet face 674, the neck ring 696 of the plenum seat 690, and the inside walls of the bore 754 in the poppet valve body, form a hollow annulus plenum 682.
With the poppet face 674 pivotably mounted to the plunger 652, the poppet face can pivot off the neck ring 696. Consistent cracking pressures (±5% peak ampule pressure variation, C.V., for all shots on a single cartridge and the same for variation between cartridges) provided by the pivoting poppet face 674 are believed to result from the pivoting or tilting separation when the valve opens. The poppet face 674 is preferably made of a low friction creep resistant material.
As shown in FIGS. 17 and 23, the poppet valve body 664 has a double-"D" shape, with flats 750 on its sides and a forward slot 752. Positioned within the round housing 524, the flats define reservoir feed channels 680 which connect the reservoir 660 to the poppet plenum 682, as shown in FIG. 17. A relief bore 758 and counterbore 766 extend into the top of the poppet valve body 664 to connect to a relief passage 688 in the housing 524 and bridge section 526.
A reservoir bleed bore 678 extends through the valve spacer ring 786 and the housing 524. A bleed orifice 684 is provided in the bleed bore 678 to restrict flow.
With reference to FIG. 17, the piston 552 has a piston bore 700 extending through the piston neck 698 and connecting to a drive chamber 782. The drive chamber 782 is separated and sealed from the surrounding annulus plenum 682 by the neck ring 696. The piston bore 700 leads forward from a filter 701 through an orifice 703 to a muffler 702 which in turn is joined to a release bore 704 which connects to an ampule plunger chamber 760. The filter 701 and muffler 702 are sintered stainless steel to allow for a quiet release of the gas. The ampule plunger chamber freely vents forward to ambient.
The piston 552 has an ampule plunger cup 706 with a generally conical opening to receive and secure the back end of the ampule plunger 554. The outside diameter of the ampule plunger cup 706 and a piston tube 708 extending forwardly around the ampule plunger cup 706 locate and confine a piston return spring 710. The forward end of the return spring 710 is supported by a pressure plate 712 supported by a load spreading washer 714 held in place within the housing 524 by a snap ring 716. O-ring 718 slidably seals the piston 552 against the piston sleeve 694. A front end retainer 720 has three equally spaced apart lugs 746. The retainer 720 is bonded onto the housing 524.
Referring to FIGS. 17 and 19, an ampule indicator ring 722 is rotatably positioned within the retainer 720. The ampule indicator ring 722 has a pin slot 768 and indicator sections 724 painted with e.g., green paint. The retainer 720 has viewing ports 744 on opposite sides. A ring detent ball 762 is biased outwardly from the ampule indicator ring 722. A lock detent 764 is provided in the retainer facing the ampule indicator ring 722.
Referring to FIG. 17, an interlock spring 728 biases the slide block 548 rearwardly. The safety button 546 is attached to the interlock slide block by screws and/or pins 742. The slide block 548 has a vertical trigger stop 748 at its back end. The safety button 546 extends upwardly through a slot 644 in the bridge 526. A detent ball 738 supported by a detent spring 740 within the slide block 548 is biased into a bridge lock detent 736 on the bridge 526. A forward or actuating detent 734 is provided in the bridge 526 in front of the safety detent 736. An interlock pin 726 is attached to the slide block 548 by a spring guide rod 732 and extends forward to the ampule indicator ring 722. The interlock spring 728, supported over the spring guide rod 732, extends forward to an anchor 730 at the front end of the bridge 526.
In operation, a compressed gas cartridge, preferably a CO2 cartridge 54 is placed into the cartridge holder 528. The cartridge holder 528 is then threaded into the back section of the housing 524, such that the seal in the neck of the cartridge 54 is placed against the piercing point 572 of the piercing ring 570. The screw knob 532 is turned forward to force the cartridge into the piercing point 572. The front face of the cartridge 54 seals against the washer face seal 560 while the piercing point 572 pierces open the cartridge 54.
Pressurized gas flows from the cartridge 54 through the bore 574; sintered filters 130 and Tyvek filter 132 in the piercing ring 570; through the bore 582; through the indicator chamber 558 below the pressure indicator assembly 540; through the bore 594 and into the lower chamber 626 of the initiator assembly 592. The filters substantially prevent any liquid from the cartridge 54 from passing beyond the piercing ring 570. This allows the device to be used in any orientation without affecting injection characteristics. A high heat conductivity path from the housing to the filters 578 helps to boil any liquid in the filters into gas. With sufficient gas pressure in the indicator chamber 558, the indicator pin 588 is driven upwardly and protrudes beyond the top surface of the bridge 526, thereby indicating sufficient pressure and gas volume for an injection.
A pre-filled ampule 522 with an ampule plunger 554 is placed into the front end of the device 520 by aligning the lugs at the back end of the ampule 522 to pass through the lugs 746 on the retainer 720. The detent ball 762 engaged into the detent 764 (FIG. 20) form a ring holder and provide a slight holding force on the ampule indicator ring 722 to prevent it from inadvertently rotating while the ampule is installed. With the back end of the ampule pressed against the end plate 716, the ampule 522 is rotated clockwise through an acute angle, turning the ampule indicator ring 722 from the locked position shown in FIGS. 17 and 20 to the unlocked or ready position shown in FIG. 19. The green indicator sections 724 simultaneously move into alignment with the viewing ports 744 indicating to the user that the ampule is properly positioned and the device can be enabled first and then triggered.
As shown in FIGS. 17 and 20, when the pin slot 768 in the ampule indicator ring 722 is not aligned with the interlock pin 726, the interlock slide block 548 cannot be moved forward to unlock the trigger.
Referring to FIGS. 17 and 18, with the ampule 522 properly installed in the injector 520, the pin slot 768 is aligned with the interlock pin 726. The user pushes forward on the safety button 546 which causes the interlock slide block to slide forward. While the interlock pin 726 moves forward (from position G to position P) into the pin slot 768 in the ampule indicator ring 722, the trigger lock 748 also moves forward (from position B) by an equal amount into alignment with the trigger notch 638, (to position K) as shown in FIG. 18. The detent ball 738 moves from the safety detent 736 to the actuate detent 734, to temporarily hold the slide block 548 in the forward or unlocked position against the biasing force of the interlock spring 728.
With the nozzle of the ampule 522 against the patient's skin, the trigger 630 is pressed down and pivoted (from position A in FIG. 17 to position J in FIG. 18). This causes the trigger arm 636 to push down on the ball 632. Referring to FIGS. 18 and 22, the ball 632 drives the initiator pin 602 down (from position C to position L) causing the seat ring 616 to separate from the seat neck 618, to open the initiator valve assembly 592. The compressed gas in the lower chamber 626, as well as additional compressed gas flowing from the cartridge 530, rushes through the initiator valve assembly 592 out past the seat 650 and into the reservoir 660. From there, the compressed gas flows through the reservoir feed channels 680 to the annulus plenum 682. The pressure drop across the filters 578 and 580 is low and does not significantly delay the operating time of the device.
As the gas pressure builds up in the annulus plenum 682, it eventually reaches a point at which the gas pressure force on the poppet face 674 equals the preset force in spring 656, and the face 674 "cracks" away from the neck ring 696. The flow of gas from the plenum 682 into the drive chamber 782 also exposes more of the poppet face 674 to the gas pressure, and correspondingly the poppet plunger 652 moves backwards very rapidly. This provides the requisite rapid pressure rise in the drive chamber 782, and also regulates the peak "cracking" pressure to negate effects of variable ambient or cartridge gas temperature. Thus the injection device 520 can achieve injections with uniform peak pressure over a wide temperature range of, for example, 50°-100° F. The gas pressure in the drive chamber 782 drives the piston 552 forward, causing the ampule plunger 554 to rapidly move into the ampule 522, thereby driving the injectant within the ampule 522 out of the ampule nozzle at a velocity sufficient to penetrate the patient's skin.
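The snap-open behavior of the poppet can be sketched as a force balance: before cracking, gas pressure acts only on the small sealed area inside the neck ring; once the face lifts, the full face area is exposed, so the opening force jumps by the area ratio. The spring preload and diameters below are assumed values for illustration, not dimensions from the specification:

```python
# Sketch of the poppet snap-open force balance, with assumed geometry.
# Cracking pressure is set by spring preload acting over the small sealed
# area; after cracking, the full face area sees the same pressure.
import math

SPRING_PRELOAD = 50.0   # assumed poppet spring preload, lbf
SEAT_DIAM = 0.20        # assumed sealed (neck ring) diameter, inches
FACE_DIAM = 0.40        # assumed full poppet face diameter, inches

def area(d):
    """Circular area from diameter, square inches."""
    return math.pi * (d / 2) ** 2

# Cracking pressure: preload balanced against the small sealed area only.
p_crack = SPRING_PRELOAD / area(SEAT_DIAM)

# Just after cracking, the same pressure acts on the full face, so the net
# opening force jumps by the area ratio and the valve opens fully.
force_gain = area(FACE_DIAM) / area(SEAT_DIAM)

print(round(p_crack), force_gain)
```

Because the cracking pressure depends only on the spring preload and seat area, not on the cartridge pressure, the peak pressure stays uniform over the stated temperature range.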
As the poppet plunger 652 is driven to the rear of the device 520, it operates the regulation valve 784, i.e., the dart valve 654 moves rearwardly and seals against the seat 650, thereby preventing further flow of compressed gas into the reservoir 660 and to the drive chamber 782. The gas pressure driving the piston 552 also drives the poppet plunger 652 in reverse to shut off gas flow. This allows the device to operate more independently of temperature.
A boosting channel 770 with an orifice may be provided in the regulation seat body 646 so that the dart valve 654 of the poppet plunger 652 does not entirely seal off all flow of compressed gas when it seals against the seat 650. The boosting channel 770 accordingly provides a stronger injection by decreasing the pressure decay rate during injection.
After the piston 552 is driven fully forward, as shown in FIG. 18, and the injection completed, the compressed gas remaining in the drive chamber 782 and behind the piston 552 slowly bleeds to ambient through the bleed orifice 684, piston bore 700, muffler 702, and the release bore 704. From the ampule chamber 760, the gas bleeds out through the front plate 716 and around the ampule 522. The muffler 702 reduces gas flow noises.
As the gas bleeds from the drive chamber 782, the piston return spring 710 gradually returns the piston 552 to the position shown in FIG. 17. At the same time, the poppet spring 656 returns the poppet plunger 652 to its original forward position such that the seat 650 is unsealed. The poppet face 674 moves forward and re-seals against the neck ring 696, re-sealing the annulus plenum 682 from the drive chamber 782. The reservoir 660 also bleeds to ambient through the reservoir bleed bore 678. The bleed orifice 684 through the valve spacer ring 786 and the housing 524 causes the reservoir 660 to bleed relatively slowly and silently. The spaces within the poppet body 550 also bleed through the bore 686 and relief passage 688. If there is only a small amount of pressure remaining in the gas cartridge, it is possible that even if the trigger is pressed and gas flows into the reservoir 660, the device will not operate because the gas pressure is too low to overcome the bias of spring 656 to crack the poppet face away from the neck ring 696. However, if the gas remained in the reservoir and was subsequently warmed (e.g., by a human hand), the pressure in the reservoir could increase to the cracking pressure, causing the device to unintentionally actuate. The constant bleed-down of the reservoir avoids this possibility.
Once the trigger 630 is released, the seat ring 616 in the initiator assembly 592 reseals against the seat neck 618 shutting off further gas flow. The safety button 546 is pushed to the back of the device 520 causing the trigger stop 748 to once again slide underneath the trigger arm 636. At the same time, the interlock pin 726 is withdrawn from the ampule indicator 722 so that the ampule 522 can be turned and removed. The device 520 is then ready for installation of another ampule and another injection.
All spaces within the device 520 from the upper chamber 628 forward are bled to ambient pressure. Accordingly, the components forward of the initiator body 542 are not pressurized between injections, thereby reducing stresses and potential material creep distortions of non-metal parts used in a lightweight and compact injector 520. The interlocking system provided by the ampule indicator ring 722 and the interlock slide block 548 prevents the injection device 520 from being fired unless an ampule is secured in proper position at the front end of the injection device 520. Since there is no rapid venting of compressed gas during injection, the injection device 520 operates relatively silently. The tubular housing 524 provides a compact linear device which facilitates handling, use, transport, and storage.
While various features and advantages are for simplicity and brevity explained and illustrated only in connection with one of the above embodiments, these features and advantages may be included in other embodiments as well, as those skilled in the art will appreciate. In addition, although several embodiments have been shown and described, it will be obvious to those skilled in the art that other variations and modifications are possible, without departing from the spirit and scope of the present invention.
#include <semaphore.h>
#include <unistd.h>
#include <dirent.h>
#include <linux/videodev2.h>
#include "libavcodec/avcodec.h"
#include "v4l2_context.h"
Definition at line 35 of file v4l2_m2m.h.
Referenced by buf_to_m2mctx(), and ctx_to_m2mctx().
Definition at line 39 of file v4l2_m2m.h.
Allocate a new context and references for a V4L2 M2M instance.
Definition at line 382 of file v4l2_m2m.c.
Referenced by v4l2_decode_init(), and v4l2_encode_init().
Probes the video nodes looking for the required codec capabilities.
Definition at line 341 of file v4l2_m2m.c.
Referenced by v4l2_decode_init(), and v4l2_encode_init().
Releases all the codec resources if all AVBufferRefs have been returned to the ctx.
Otherwise keep the driver open.
Definition at line 319 of file v4l2_m2m.c.
Reinitializes the V4L2m2mContext when the driver cannot continue processing with the capture parameters.
Definition at line 189 of file v4l2_m2m.c.
Referenced by v4l2_handle_event().
Reinitializes the V4L2m2mContext when the driver cannot continue processing with any of the current V4L2Contexts (i.e., changes in both output and capture).
Definition at line 231 of file v4l2_m2m.c.
Referenced by v4l2_handle_event(). | http://ffmpeg.org/doxygen/trunk/v4l2__m2m_8h.html | CC-MAIN-2019-30 | refinedweb | 180 | 56.01 |
This appendix lists error messages that may be encountered in applications that use Oracle XDK for Java during the execution of the XSU interfaces.
Keywords and Syntax for Error Messages
See Also: the XQuery error messages
These error messages are in the range XSUK-0001 through XSUK-0099.
These error messages are in the range XSUE-0000 through XSUE-0099.
XSUE-0000: Internal Error -- Exception Caught string
XSUE-0001: Internal Error -- string
XSUE-0002: string is not a scalar column. The row id attribute can only get values from scalar columns.
XSUE-0003: string is not a valid column name.
XSUE-0004: This object has been closed. If you would like the object not to be closed implicitly between calls, see the string method.
XSUE-0005: The row-set enclosing tag and the row enclosing tag are both omitted; consequently, the result can consist of at most one row which contains exactly one column which is not marked to be an XML attribute.
XSUE-0006: The row enclosing tag or the row-set enclosing tag is ommitted; consequently to get a well formed XML document, the result can only consist of a single row with multiple columns or multiple rows with exactly one column which is not marked to be an XML attribute.
XSUE-0007: Parsing of the sqlname failed -- invalid arguments.
XSUE-0008: Character string is not allowed in an XML tag name.
XSUE-0009: this method is not supported by string class. Please use string instead.
XSUE-0010: The number of bind names does not equal the number of bind values.
XSUE-0011: The number of bind values does not match the number of binds in the SQL statement.
XSUE-0012: Bind name identifier string does not exist in the sql query.
XSUE-0013: The bind identifier has to be of non-zero length.
XSUE-0014: Root node supplied is null.
XSUE-0015: Invalid LOB locator specified.
XSUE-0016: File string does not exit.
XSUE-0017: Can not create oracle.sql.STRUCT object of a type other than oracle.sql.STRUCT (i.e. ADT).
XSUE-0018: Null is not a valid DocumentHandler.
XSUE-0019: Null and empty string are not valid namespace aliases.
XSUE-0020: to use this method you will have to override it in your subclass.
XSUE-0021: You are using an old version of the gss library; thus, sql-xml name escaping is not supported.
XSUE-0022: cannot create XMLType object from opaque base type: string
These error messages are in the range XSUE-0100 through XSUE-0199.
XSUE-0100: Invalid context handle specified.
XSUE-0101: In the FIRST row of the resultset there is a nested cursor whose parent cursor is empty; when this condition occurs we are unable to generate a dtd.
XSUE-0102: string is not a valid IANA encoding.
XSUE-0103: The resultset is a "TYPE_FORWARD_ONLY" (non-scrollable); consequently, xsu can not reposition the read point. Furthermore, since the result set has been passed to the xsu by the caller, the xsu can not recreate the resultset.
XSUE-0104: input character is invalid for well-formed XML: string
These error messages are in the range XSUE-0200 through XSUE-0299.
XSUE-0200: The XML element tag string does not match the name of any of the columns/attributes of the target database object.
XSUE-0201: NULL is an invalid column name.
XSUE-0202: Column string, specified to be a key column, does not not exits in table string.
XSUE-0203: Column string, specified as column to be updated, does not exist in the table string.
XSUE-0204: Invalid REF element - string - attribute string missing.
XSUE-0206: Must specify key values before calling update routine. Use the string function.
XSUE-0207: UpdateXML: No columns to update. The XML document must contain some non-key columns to update.
XSUE-0208: The key column array must be non empty.
XSUE-0209: The key column array must be non empty.
XSUE-0210: No rows to modify -- the row enclosing tag missing. Specify the correct row enclosing tag.
XSUE-0211: string encountered during processing ROW element string in the XML document.
XSUE-0212: string XML rows were successfully processed.
XSUE-0213: All prior XML row changes were rolled back. | http://docs.oracle.com/cd/E11882_01/appdev.112/e23582/adx_ermg_xsu.htm | CC-MAIN-2014-23 | refinedweb | 703 | 75.4 |
Actor Rajnikanth, now being treated at a hospital in Singapore, is fine and recovering well, his son-in-law and actor Dhanush said on Tuesday.
“He (Rajnikanth) is recovering well. He is doing fine. He is responding well to the treatment,” he told reporters in Chennai.
“Doctors have found out the root cause of his (Rajnikanth’s) trouble. It is something that can be reversed,” Dhanush said.
According to him, the 61-year-old actor is cheerful and is “in a mood to shop. He eats whatever he wants.
“We hope he will return (home) in a week or 10 days. If we do not return, that means we are on a holiday (in Singapore),” he said.
He dismissed as a joke reports that the actor required a kidney transplantation.
“There is no kidney transplant. It’s a joke.”
Rajnikanth, who was flown to Singapore on May 28, was admitted to the Mount Elizabeth Hospital for “rest and rehabilitation” after discharge from Sri Ramachandra Medical Centre in Chennai. He was admitted to the SRMC on May 13 for respiratory infection and other problems.
In several MXDs, I have symbology displayed based on a rotation field. In each MXD, I rotate the data frame based on the site plan. When I export to a PDF within ArcMap, the symbols stay rotated as they should.
I have already gone into ArcMap Advanced Settings Utility and checked off Rotate marker symbols with dataframe.
Sample when exporting to PDF from ArcMap:
When I use arcpy.mapping.ExportToPDF, the PDF exports correctly, but it will not honor the rotation. I would like to use Python because I have several dozen MXDs that will be updated on a monthly basis.
Sample when using arcpy.mapping.ExportToPDF
Sample Code:
import arcpy, os

folderPath = r"J:\GIS\PrePlans"
for filename in os.listdir(folderPath):
    fullpath = os.path.join(folderPath, filename)
    if os.path.isfile(fullpath):
        basename, extension = os.path.splitext(fullpath)
        if extension.lower() == ".mxd":
            mxd = arcpy.mapping.MapDocument(fullpath)
            mapPath = mxd.filePath
            fileName = os.path.basename(mapPath)
            print "Exporting " + fileName
            txtFile.write("Exported " + fileName + '\n')
            arcpy.mapping.ExportToPDF(mxd, basename + '.pdf')
Hello Brian,
I'm with the arcpy.mapping team. I was not able to reproduce your scenario.
What version of ArcMap are you using?
It would be best to open an incident with support services for tracking purposes but you are also welcome to send me data / exact steps to jbarrette@esri.com.
Thanks,
Jeff
Was this resolved?
I am having the same issue when exporting to .AI using arcpy on 10.4.1
I don't rotate the data frame but I have a few layers displayed using rotation field. When exporting through standard way (File>Export>...) everything works fine. But when I export using arcpy the symbols end up un-rotated.
MXD:
AI file:
The code I am using:
import arcpy, os

MapMainFolder = r"Z:\Workspace"  # topmost folder
AIoutLoc = r"J:\CURRENT PROJECTS"

for (root, dirs, files) in os.walk(MapMainFolder):
    for fileName in files:
        if os.path.splitext(fileName)[1] == ".mxd":
            arcpy.AddMessage(fileName)
            fullPath = os.path.join(root, fileName)
            mxd = arcpy.mapping.MapDocument(fullPath)
            print fileName
            df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
            # ungroup layers
            for lyr in arcpy.mapping.ListLayers(mxd, "*", df):
                depth = len(lyr.longName.split("\\"))
                if depth == 1:
                    refLayer = lyr
                elif depth == 2:
                    moveLayer = lyr
                    arcpy.mapping.MoveLayer(df, refLayer, moveLayer, "BEFORE")
            arcpy.RefreshTOC()
            # export AI
            ai = fileName.replace(".mxd", ".ai")
            AIpath = os.path.join(AIoutLoc, ai)
            arcpy.AddMessage("Exporting " + ai)
            arcpy.mapping.ExportToAI(mxd, AIpath, "PAGE_LAYOUT", 0, 0,
                                     resolution=300, image_quality="BEST",
                                     convert_markers="true")
This was not resolved as our project decided to export PDFs manually instead of via Python. This is because the number of pages per each location relates to the number of stories at a building, which involves exporting PDF for floor 1, checking off floor 1 and checking on floor 2, exporting, so on.
You have to use 32-bit Python when you execute the script. If you have installed 64-bit background geoprocessing then 64-bit Python is used and rotation is not honored.
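A quick way to check which interpreter flavour a script is actually running under (a small sketch, independent of ArcGIS — paths and installs vary):

```python
import struct

# pointer size in bits: 32 for 32-bit Python, 64 for 64-bit Python
bits = struct.calcsize("P") * 8
print("Running under a %d-bit Python interpreter" % bits)
```

Run this from the same Python that executes your export script; if it reports 64-bit, point your script at the 32-bit interpreter instead.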
Thank you,
This seems to happen in other areas as well such as exporting TPKs via ArcPy and using Data Reviewer ArcPy functions. | https://community.esri.com/t5/python-questions/symbol-rotation-not-honored-during-arcpy-mapping/td-p/143746 | CC-MAIN-2022-27 | refinedweb | 556 | 53.58 |
Cycle Detection Algorithms
We have already seen various algorithms to detect a cycle in a linked list. In this article we are going to explore some more cycle detection algorithms in general.
A cycle in a data structure, as we have already seen, is a sequence of nodes whose links loop back on themselves, so a traversal never reaches an end.
Here are a few popular cycle detection algorithms:
- Floyd's cycle detection algorithm
- Brent’s Cycle Detection Algorithm
Both of these algorithms are used to find a cycle in a linked list. Both use the slow and fast pointer approach, but their implementations differ. Before we go into the details of these methods, let's look at the major differences between the two algorithms.
Difference between Floyd's and Brent's Algorithm
- Brent's algorithm finds the length of the loop within the cycle-detection loop itself; there is no need to traverse the loop again to count its nodes.
- Brent's algorithm is faster than Floyd's algorithm.
- The time complexity of Brent's algorithm is O(m + n), while the complexity of Floyd's algorithm is O(n).
Brent's Algorithm
Brent's cycle detection algorithm is similar to Floyd's in that both approaches use two pointers, but the movement pattern is different. Here one pointer stays put while the other advances step by step; each time the moving pointer has taken a number of steps equal to a power of two, the stationary pointer is teleported to its position and the step count restarts. The two pointers meet once the moving pointer has gone all the way around the cycle, which directly yields the cycle length; a second pass then locates the starting point of the cycle.
Pseudocode for the algorithm
- Move the fast pointer in powers of 2 until we find a loop. After every power, reset the slow pointer to the previous value of the fast pointer and reset the length counter to 0.
- The loop-termination test is the same as in Floyd's algorithm: slow and fast become equal. If there is no loop, the fast pointer reaches null and the loop terminates.
- If there is a loop, then when we come out of the loop we also know its length.
- Reset the slow pointer to the head and the fast pointer to the node at position head + length.
- The beginning of the loop can then be found by moving both pointers one node at a time until they meet.
Code for Brent's algorithm
#include <iostream>
using namespace std;

class Node {
public:
    int data;
    Node* next;
};

Node* detectCycle(Node* head)
{
    // if head is null then no loop
    if (head == NULL)
        return NULL;

    Node* slow = head;
    Node* fast = head->next;
    int power = 1;
    int length = 1;

    while (fast != NULL && fast != slow) {
        if (length == power) {
            // updating the power
            power *= 2;
            // updating the length
            length = 0;
            slow = fast;
        }
        fast = fast->next;
        ++length;
    }

    // if it is null then no loop
    if (fast == NULL)
        return NULL;

    // length stores the actual length of the loop.
    // Now set slow to the beginning
    // and fast to the node at head + length (i.e. length of the cycle).
    slow = fast = head;
    while (length > 0) {
        fast = fast->next;
        --length;
    }

    // Now move both pointers one node at a time so that they meet
    // at the beginning of the loop.
    while (fast != slow) {
        fast = fast->next;
        slow = slow->next;
    }
    return slow;
}

Node* newNode(int key)
{
    Node* temp = new Node();
    temp->data = key;
    temp->next = NULL;
    return temp;
}

// Driver program to test above function
int main()
{
    Node* head = newNode(1);
    head->next = newNode(2);
    head->next->next = newNode(3);
    head->next->next->next = newNode(4);
    head->next->next->next->next = newNode(5);

    // Create a loop for testing
    head->next->next->next->next->next = head->next->next;

    Node* res = detectCycle(head);
    if (res == NULL)
        cout << "NO LOOP" << endl;
    else
        cout << "LOOP IS AT " << res->data;
    return 0;
}
output:
LOOP IS AT 3
Explanation
The detectCycle() function detects a loop in the list: if a loop is present it returns the first node of the loop, otherwise it returns NULL. The while loop advances the pointers and exits either when fast reaches NULL (no loop) or when the two pointers become equal. The if condition inside the while loop fires whenever the step count reaches the current power of two; at that point the power is doubled, the length counter is reset, and slow is teleported to fast. When the pointers finally meet, the length counter holds the exact length of the cycle. The function then resets both pointers to the head, advances fast by length nodes, and finally moves both pointers one node at a time until they meet at the starting node of the cycle, which is returned.
As a side effect the code also computes the length of the loop; in the version above the length variable is not returned, but it could just as easily be printed or returned. The main() function then builds a list containing a loop to test that the code works correctly; the output of the above code is "LOOP IS AT 3" (without the quotes).
- Time complexity: O(M + N), where M is the number of nodes before the beginning of the cycle and N is the length of the loop.
- Space complexity: O(1)
Floyd's cycle finding algorithm
As we have already discussed, Floyd's cycle finding algorithm uses the same approach of two pointers (slow and fast) as Brent's. However, this algorithm gives us neither the length of the loop nor its starting point: the implementation below only detects the loop and returns a particular node inside it (not necessarily the first node of the loop).
Brent's algorithm is a better approach than Floyd's in this respect, since it also gives us the starting node and the length of the loop, whereas this form of Floyd's algorithm does not.
Implementation of Floyd's Algorithm:
#include <iostream>
using namespace std;

// (the node struct, global head, insertnode() and createloop() were not shown
// in the original listing; minimal versions are supplied here so the program
// compiles and produces the output shown below)
struct node {
    int data;
    node* next;
};

node* head = NULL;

class Linkedlist {
public:
    // insert a new node at the front of the list
    void insertnode(int value) {
        node* n = new node;
        n->data = value;
        n->next = head;
        head = n;
    }

    // create a loop for testing: link the last node back to the head
    void createloop() {
        node* last = head;
        while (last->next != NULL)
            last = last->next;
        last->next = head;
    }

    // algorithm to find the loop in linkedlist
    int detectloop() {
        node* slow = head;
        node* fast = head;
        while (slow && fast && fast->next) {
            slow = slow->next;
            fast = fast->next->next;
            if (slow == fast) {
                cout << "Loop Found" << endl;
                return 1;
            }
        }
        cout << "Loop not found" << endl;
        return 0;
    }
};

int main() {
    Linkedlist obj;
    // insert nodes in linkedlist
    obj.insertnode(3);
    obj.insertnode(9);
    obj.insertnode(7);
    obj.insertnode(4);
    obj.insertnode(5);
    // create loop for testing
    obj.createloop();
    // detect loop
    obj.detectloop();
    // point the head to null to create a new list
    head = NULL;
    // insert the nodes in list
    obj.insertnode(9);
    obj.insertnode(10);
    obj.insertnode(11);
    obj.insertnode(12);
    obj.insertnode(13);
    // detect if there is a loop
    obj.detectloop();
}
Output:
Loop Found
Loop not found
Explanation and example of the detectloop() method
1. This is the initial state of the algorithm: slow and fast both point to the head of the linked list.

2. In the second step of the algorithm, the slow pointer moves by one node and the fast pointer moves by two nodes, so slow is now at 2 and fast is at 3.

3. In the third step of the algorithm, the loop continues: slow is at 3 and fast is at 5.

4. In the fourth step of the algorithm, the slow pointer is at 4 while the fast pointer, having followed the cycle back around, is at 3.

5. In the fifth and last step of the algorithm, slow moves by one node and fast by two, so both now point to node 5 and the loop is detected.
- Time complexity: O(n)
- Space complexity: O(1)
Today I was trying to add NProgress to my Next.js project.
I wanted the progress bar to:
- show when switching routes / pages
- show when any fetch call is made
- display only after a delay, I don't want to show a loader at EVERY interaction, only when the requests are "slow"
Here's a demo of what NProgress looks like:
Since I met some challenges while implementing all of that, I felt like it would be good to share how I did it. So here it is:
First, install the nprogress package:
npm install nprogress
Then edit or create your _app.js and add:
// global styles are required to be added to `_app.js` per Next.js requirements.
import "nprogress/nprogress.css";
import dynamic from "next/dynamic"; // needed for the dynamic() call below

const TopProgressBar = dynamic(
  () => {
    return import("components/TopProgressBar");
  },
  { ssr: false },
);

export default function MyApp({ Component, pageProps }) {
  return <>
    <TopProgressBar />
    <Component {...pageProps} />
  </>
}
Here we use dynamic imports and the ssr option to make sure our TopProgressBar is loaded only in browser environments.

If you're wondering how the relative import of components/TopProgressBar works, just configure your jsconfig.json as shown in the Next.js documentation.
Finally, create components/TopProgressBar.js:
import Router from "next/router";
import NProgress from "nprogress";

let timer;
let state;
let activeRequests = 0;
const delay = 250;

function load() {
  if (state === "loading") {
    return;
  }
  state = "loading";
  timer = setTimeout(function () {
    NProgress.start();
  }, delay); // only show progress bar if it takes longer than the delay
}

function stop() {
  if (activeRequests > 0) {
    return;
  }
  state = "stop";
  clearTimeout(timer);
  NProgress.done();
}

Router.events.on("routeChangeStart", load);
Router.events.on("routeChangeComplete", stop);
Router.events.on("routeChangeError", stop);

const originalFetch = window.fetch;
window.fetch = async function (...args) {
  if (activeRequests === 0) {
    load();
  }
  activeRequests++;
  try {
    const response = await originalFetch(...args);
    return response;
  } catch (error) {
    return Promise.reject(error);
  } finally {
    activeRequests -= 1;
    if (activeRequests === 0) {
      stop();
    }
  }
};

export default function () {
  return null;
}
Here we register for Next.js router events and monkey patch the global fetch. I was worried monkey patching fetch would fail to register early enough, but it turns out it works 🤷‍♂️!

As you can see, TopProgressBar renders nothing. I guess there might be issues with doing things this way, so if you encounter some, just let me know!
That's it!
If you're wondering whether NProgress is still maintained because of the low number of commits and "high" number of issues, the good news is that they are working on a new version for 2020:
Even if you're not using Next.js, you should be able to adapt this code to your favorite platform or framework.
Top comments (8)
Nice, writeup. I instantly was reminded of doing this with Turbolinks in Rails back in the days.
Although it's technically a nice solution, I actually dislike the top progress bar on websites. Somehow, I always feel like I have to wait longer because of the progress bar. Maybe it's because only slow loading sites use it, and I am wrongly attributing it to the bar. 🤷♂️
Yep, indeed it can feel this way. When it's fast enough and delayed though, it's a good indicator that "the UI is up to date now".
Hi, I like the solution. But if you use SWR with fetch, then with each revalidation the progress bar will appear. Is there a way to turn it off for SWR revalidate fetches?
Good question; I am experiencing the same as you because of using SWR. For now, I am fine with it, but you're right; there should be a way to disable it.
Ways to do this:
If you try one of those or find other ways, let me know!
Hi, it works well with routing but it doesn't work with fetch. I'm using node-fetch, is this the issue? Which fetch package are you using for this? Thanks!
Hi there, with the latest Next.js versions you do not even have to include any fetch polyfill, so you can remove any fetch package and it should just works
Hi, Does NProgress also work for initial site load? Like a loader?
Also, for some reason, the code is not working after adding TopProgressBar in _app.js | https://dev.to/vvo/show-a-top-progress-bar-on-fetch-and-router-events-in-next-js-4df3 | CC-MAIN-2022-40 | refinedweb | 686 | 72.56 |
Foreword by Ricardo Sanchez
Back in March I had the pleasure to attend Karsten's workshop at the 2013 Resonate conference in Belgrade, where we learned how to work with audio and music while coding live using the Clojure programming language. It was great! I got so addicted to this new way of programming that it made me work on a little tutorial so I could share my experience with newcomers. After I finished my first draft I asked Karsten to do a technical review of it and he very kindly accepted. A couple of months later we managed to expand & transform it into a comprehensive introductory article to Clojure and functional programming with a few very cool examples I hope you'll enjoy. Without Karsten's input this tutorial would never have been what it is today – so for that a big THANKS is due to him and the Resonate team for putting together such an awesome event.
Foreword by Karsten Schmidt
Getting (back) into the magical world of Lisp had been on my to-do list for a long while, though my first encounter with Clojure in 2011 was through sheer coincidence (if you believe in such things!). It took me only a few hours to realise how this encounter was impeccable timing since there was this nagging feeling that I had become too accustomed to the status quo of the languages I’d been using for the last decade. Most importantly, I instinctively knew, I wanted to learn & use this language for my own work, badly & ASAP. And indeed, I’ve been fortunate enough to be able to use Clojure on several large projects since, from cloud based server apps/renderfarms to OpenGL/CL desktop apps and a festival identity. What I wasn’t quite prepared for were the many doors (of perception and inquiry) Clojure has opened wide in terms of process, thinking & learning about code from outside the boxes of our so beloved, popular languages & frameworks. Now, having been using it almost daily for 2.5 years, this tutorial, this labour of love, is largely meant to share some findings of this journey so far (though admittedly this first part is more of a crash course) – all in the hope to inspire some like-minded souls from this community, keen to help realising the untapped potential this language & its philosophy bring to the creative drafting table…
Since this tutorial has grown well beyond the scope of a single article and there’s no TL;DR version, we will be releasing it in stages over the coming weeks…
Introduction
This tutorial aims at giving you a taste of functional programming with Clojure, a modern dialect of Lisp designed to run as a hosted language on the Java Virtual Machine (and an increasing number of other host platforms). Based on the lambda calculus theory developed by Alonzo Church in the 1930s, the family of functional languages has a long history and forms one of the major branches in the big tree of available programming languages today. Largely through developments in hardware and the increasing number of cores in CPU chip designs, as well as through the appearance of languages like Erlang, F#, Haskell, Scala & Clojure, the functional philosophy has been re-gaining traction in recent years, since it offers a solid and plausible approach to writing safe & scalable code on these modern hardware architectures. Core functional ideas are also slowly infiltrating long-established bastions of the Kingdom of Nouns, e.g. through the inclusion of lambda expressions in the latest versions of both Java 8 and C++11. However, Clojure goes much further and its combined features (e.g. immutability, laziness, sequence abstractions, extensible protocols, multimethods, macros, async constructs, choice of concurrency primitives) make it an interesting choice for approaching data-intensive or distributed applications with a fresh mindset around lightweight modelling.
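To give a quick taste of two of these features before we properly get to them, here's a small REPL sketch (the => comments show the values returned; don't worry if the details are still unclear):

```clojure
;; all core data structures are immutable:
;; "adding" to a vector returns a new vector, the original is untouched
(def v [1 2 3])
(conj v 4) ; => [1 2 3 4]
v          ; => [1 2 3]

;; sequences can be lazy (even infinite),
;; only the elements actually requested are ever computed
(take 5 (iterate inc 0)) ; => (0 1 2 3 4)
```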
Getting to know Clojure
Clojure is a young (first release in 2007) and opinionated language whose philosophy challenges/contrasts (just as much as Rich Hickey, Clojure’s author, does) some commonly accepted assumptions about programming and software design. It therefore requires some serious unlearning and rethinking for anyone who’s ever only programmed in Processing, Java, C++ etc. (JavaScript programmers will find some of the ideas more familiar though, since that language too was heavily influenced by older Lisp implementations). So even if after working through this tutorial series you decide Clojure isn’t for you, the new perspectives should provide you with some food for thought and useful knowledge to continue on your journey.
As a reward for taking on this learning curve, you’ll gain access to ideas and tools, which (not just in our opinion) should excite anyone interested in a more “creative” computing process: a truly interactive programming environment without a write/compile/run cycle and live coding & manipulation even of long running complex systems, a language, which by design, removes an entire class of possible bugs, a super active, helpful community of thousands of users/seekers – and at the very least it will give you some alternative insights to common programming problems, regardless your chosen language. With the JVM as its current main host platform and Clojure’s seamless Java interop features, you’ll have full access to that language’s humungous ecosystem of open source libraries, often in an easier way than Java itself. Clojure’s sister project ClojureScript is equally making headway in the JavaScript world and there’re a number of efforts underway to port Clojure to other platforms (incl. Android and native compilation via LLVM).
Clojure’s (and Lisp’s) syntax is a bit like Marmite: There’re probably as many people who love it as there’re who hate it, though in this case the objections are usually just caused by unfamiliarity. The seemingly large amount of parantheses present is one of the most immediately obvious and eye-grabbing aspects to any novice looking at Clojure/Lisp code. However, being a programming language, this a) is obviously by design, b) not strictly true, and c) is the result of stripping out most other syntax and special characters known from other languages: I.e. there’re no semicolons, no operator overloading, no curly brackets for defining scope etc. – even commas are optional and considered whitespace! All this leads to some concise, yet readable code and is further enhanced by the number of powerful algorithmic constructs the language offers.
Whereas in a C-like language a simple function/method definition looks like this:
// C
void greetings(char *fname, char *lname) {
    printf("hello %s %s\n", fname, lname);
}

// C++
void greetings(const char *fname, const char *lname) {
    std::cout << "hello " << fname << " " << lname << std::endl;
}

// Java
public void greetings(String fname, String lname) {
    System.out.println("hello " + fname + " " + lname);
}
...in Clojure it is:
(defn greetings [fname lname] (println "hello" fname lname))
Calling this function then:
// C-style
greetings("Doctor", "Evil");

; Clojure
(greetings "Doctor" "Evil")
Clojure's philosophy to syntax is pure minimalism and boils down to the understanding that every piece of source code, as in any programming language, is merely a definition of an executable tree structure. Writing Clojure is literally defining the nested branches of that tree (identified by brackets, also called S-expressions or sexp, short for symbolic expressions). Even after a short while, you'll find these brackets seem to become automatic and mentally disappear (especially when using an appropriate text editor w/ support for bracket matching, rainbow brackets and structural editing features).
Also because this tutorial is more of a crash course and limited in scope, we can only provide you with a basic overview of core language features. Throughout the tutorial you will find lots of links for further reading and a list of Clojure related books at the end. Now, without much further ado, let's dive in...
Setting up an environment
As with any programming language, we first need to ensure we've got some proper tooling in place, before we can begin our journey through unknown lands. Since Clojure is just a language and runtime environment, it doesn't have any specific requirements for editors and other useful tools. However, the Clojure community has developed and adopted a number of such tools, which make working with Clojure (even) more fun and the first one we introduce right away:
Leiningen
These days most software projects use a large number of open source libraries, which themselves often have further dependencies of their own. To anyone who has ever worked with a language with an active community, life without a package manager seems like pure hell, trying to manage & install dependencies manually. Leiningen is the de-facto build tool used by the Clojure community, its name being a humorous take on Ant, the former de-facto build tool in the Java world. Lein calls itself a tool for "automating Clojure projects without setting your hair on fire". It's truly one of the easiest ways to get started with Clojure and is so much more than just a package manager, even though in this tutorial we'll mainly be using it as such. So please head over to the Leiningen website and follow the simple 3-step install procedure (or check your system package manager, e.g. Homebrew for OSX: brew install leiningen). Regardless, you'll need an existing Java installation (Java 6 or newer) on your machine before installing Leiningen...
The Clojure community has developed integration plug-ins for several popular editors & IDEs and we will start working with one of them, Counterclockwise, in the next part of this tutorial. A list of other options can be found on the Clojuredoc website.
Hello world, Hello REPL!
As briefly mentioned in the beginning, Clojure provides us with a fully dynamic programming environment, called the REPL: The (R)ead, (E)valuate, (P)rint, (L)oop. The REPL reads input from the user, executes it, prints the result of the process, then rinse & repeat...
The "read" phase converts source code into a data structure (mostly a nested list of lists). During "evaluation" this data structure is first compiled into Java byte code and then executed by the JVM. So unlike some other dynamic languages running on the JVM (e.g. JRuby, Rhino), Clojure is a compiled language and in many cases can have similar performance characteristics as Java.
The REPL quickly becomes the main sketch pad and development tool for many Clojure users, a space in which complex code is slowly built up from small parts, which can be immediately tested and experimented with, providing an uninterrupted experience.
To start a REPL with leiningen, simply type
lein repl on your command line and after a few moments, you should see a prompt like this:
$ lein repl
nREPL server started on port 51443
REPL-y 0.1.10
Clojure 1.5.1
user=> _
Btw. The first time lein is run, it will download Clojure and possibly a number of other files/libraries. This only happens once and all files are stored/cached in the folder ~/.m2/repository. REPL startup will always take a few seconds due to the JVM initialization needed, however a REPL usually doesn't need to be restarted often, so in practice this isn't a huge issue.
Should you ever end up triggering an action which makes the REPL hang (e.g. trying to display an infinite sequence), you can press Control+C to cancel it.
If you don't want to (or can't) install Clojure/Leiningen, you can try out all of the examples in this part of the tutorial using the online REPL at Try Clojure.
As is traditional, our first piece of Clojure code should be
(println "Hello World"). So please go ahead and type it at the prompt. Once you hit
Enter, the (R)ead phase of the REPL begins, turning our entered text into a stream of symbols. Provided there're no errors, these symbols are then (E)valuated according to the rules of the language, followed by (P)rinting the result of that code and (L)oop giving us a new prompt for input.
user=> (println "Hello World")
Hello World
nil
user=> _
Input -> Process -> Output
...is one of the fundamental concepts in programming, especially in functional programming. If you look closely, you might be wondering where that
nilcame from?
nilis Clojure's equivalent of
nulland here indicates that our
printlnactually didn't produce any computational result. In fact, the display of "Hello World" was simply a side-effect of executing
println(by pushing that string to your system's output stream), but the
printlnfunction gave us no actual value back, which we might pass to another process. We will return to this important distinction later on when we'll talk about truthiness, predicates and pure functions.
Some other brief examples:
(+ 1 2)
; 3
(+ (+ 1 2) (+ 3 4))
; 10
(+ 1 2 3 4)
; 10
Looks weird, huh? At least no more
nil is to be seen (of course we expected some results from an addition), and maybe you can already spot a pattern:
- Operations seem to come first (this is called prefix notation) and
- it seems the number of parameters/arguments doesn't matter...
These are the kind of assumptions, you might make coming from an imperative programming background (Java/C etc.), where symbols like
+,
-,
/,
* or
= actually are operators, basically symbols with pre-defined, hardcoded meanings and can only appear in certain places within the code. In Clojure however, there're no such operators and
+ is just defined as a standard function and that function accepts indeed a flexible number of params (as do all other basic math operators).
Clojure syntax summarized
The syntax of Clojure/Lisp boils down to just this one rule and any Clojure form has this overall structure (no exceptions!):
(function param1 param2 ... paramN)
Note: Functions are processes and parameters (also called arguments) are their inputs. The number of params depends of course on the function, with some not requiring arguments at all.
The parentheses define the scope of an S-expression (technically a list, but also a branch in the tree of our program), with its first element interpreted as a function to call. An important thing to consider at this stage is that all the elements (incl. the first) of such an expression can be (and often are) results of other function calls.
Here we calculate the sum of two products, applying the Pythagorean theorem (c² = a² + b²) to a fixed right-angled triangle with sides a = 4 and b = 8:
(+ (* 4 4) (* 8 8)) ; 80
The image shows a visualization of the encoded tree structure we've written. The tree needs to be evaluated from the bottom to the top: The inner forms
(* 4 4) and
(* 8 8) are evaluated first before their sum can be computed. Clojure is read inside out. At first this might seem alien, but it really just takes getting used to and doesn't prove problematic in practice, since most Clojure functions are less than 10 lines long.
Symbols
Symbols are used to name things and are at the heart of Clojure. Clojure code is evaluated as a tree of symbols, each of which can be bound to a value, but doesn't need to be. Of course in practice, it mostly means a symbol must have a bound value in order to work with it. Yet there're situations in Clojure when a symbol must remain unevaluated (as is) and we'll describe some of these in more detail when we discuss the
list data type further below.
Imagine for a moment to be in Clojure's shoes and we have to evaluate the form
(+ 1 2). The Reader simply provides us with a list of 3 symbols
+,
1 and
2. The latter two are easily identified as numbers and need no further treatment. The
+ however is a symbol we don't know and therefore need to look up first to obtain its value. Since that symbol is part of the core language, we find out it's bound to a function and then can call it with
1 and
2 as its arguments. Once the function returns, we can replace the whole form
(+ 1 2) with its result, the value
3. If this form was part of a larger form/computation, this result is then inserted back into it and the whole process repeats until all remaining forms & symbols have been resolved and a final value has been computed.
Symbols are conceptually similar to variables in that both provide reusable, named references to data (and code). Yet, variables don't really exist in Clojure. The biggest difference to variables in other languages, is that by default a symbol is bound to a fixed value once, after which it can't be changed. This concept is called...
Immutability
Immutability isn't a well-known concept among the (imperative) languages which readers of this blog might be more familiar with. In fact, these languages are mainly built around its opposite, mutability: the ability to define data, pass it around via references/pointers and then change it (usually impacting multiple places of the codebase). The fact that immutable data is read-only once it's been defined provides the key feature for truly enabling safe multi-threaded applications and simplifies other programming tasks (e.g. easy comparison of nested values, general testability and the ability to safely reason about a function's behaviour). The presence of immutable data also leads to fundamental questions about the actual need for key topics in object oriented programming: e.g. hiding data through encapsulation, and all the resulting complexity, is only required if a language doesn't provide features protecting data from direct 3rd party (i.e. user code) manipulation. This problem simply doesn't exist in Clojure! On the other hand, immutability also provides one of the most challenging unlearning tasks for people coming from a world of mutable state, since it seems paradoxical to work it into any real-world system requiring constant changes to our data.
Since no real application can exist without changing its internal state, throughout the course of this tutorial we will show how a) Clojure makes the most of immutability using persistent data structures, b) how actual mutable behaviour can be achieved where it is beneficial and c) show how mutable state can be easily avoided in most parts of a Clojure program. But for now please remember: Clojure data is immutable by default.
As an aside, unlike purely functional languages like Haskell, where all data is truly immutable and changing state can only be modeled through constructs such as monads, Clojure takes a more pragmatic route and provides a number of mutable reference types. However, each of those is intended for certain usage scenarios and we will only discuss two of them (Vars and Atoms) further below.
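To give a small taste of one of these right away (Atoms are covered properly later), here's a minimal sketch using only clojure.core; the name `counter` is purely illustrative:

```clojure
;; An atom is a mutable container holding an immutable value.
(def counter (atom 0))  ; create an atom holding 0
(swap! counter inc)     ; atomically apply inc to the current value
(swap! counter + 10)    ; further args are passed to the update fn
(deref counter)         ; read the current value (@counter for short)
; 11
```

Note that the values themselves stay immutable; only the atom's reference to them changes.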
Symbol bindings
In most programming languages variables are based on lexical scope: Depending on the level at which a variable has been declared in a program, its binding is either global or local (e.g. local within a function or class). Clojure also provides lexical scope binding using the
let form, giving us local, symbolic value bindings only existing within its body.
The
let form has this general structure and the last (often only) expression of its body becomes the final result:
(let [symbol value symbol value ...] body-expressions)
Btw. The name
letcomes from mathematical texts, where we often say things like: "Let C be the sum of A and B" etc. To anyone with a previous career in BASIC, this should be familiar too...
Sticking with our pythagorean example from above, we could wrap the computation inside a
let form and introduce two symbols
a and
b:
(let [a 4  ; bind symbol a to 4
      b 8] ; bind symbol b to 8
  (+ (* a a) (* b b))) ; use symbols
; 80

a ; outside the let form symbol a is undefined
; CompilerException java.lang.RuntimeException: Unable to resolve symbol: a in this context
We will deal with
let several more times throughout this tutorial.
Vars & namespaces
Being restricted to only lexical scoped symbols defined with
let is of course a painstaking way of programming, but thankfully isn't the whole truth. The basis of programming is "Don't repeat yourself" and that implies we need some form of mechanism to refer to existing values/processes defined elsewhere. In Clojure, this mechanism is implemented using Vars, named storage containers holding our data. Vars are named using symbols and can keep any datatype. They're always global within a given namespace, meaning they're visible from anywhere within that namespace and possibly others too.
Namespaces are an important concept (not only) in Clojure to manage modularity and avoid naming conflicts. They're conceptually similar to namespaces in C++, packages in Java or modules in Python, although in Clojure have additional dynamic features. All Clojure code is evaluated as namespaced symbols and the language provides a rich set of functions to create, link and manipulate them. You can read more about them on the Clojure website. In the REPL the prompt will always show which namespace we're currently working in (default
user).
Back to Vars now, they're the closest thing to "traditional" variables in other languages, though are not equal: Whereas in many other languages a variable is a direct mapping from a named symbol to a value, in Clojure, symbols are mapped to Var objects, and only the Vars themselves provide a reference to their current values. This is an additional level of indirection, important for working in the dynamic environment of the REPL and in multi-threaded scenarios. For the latter, Vars provide a mechanism to be dynamically rebound to a new value on a per-thread basis and is one of Clojure's concurrency features we will discuss in a future part of this tutorial.
def is the general form used to define Vars. It is a special form which takes two arguments, but doesn't evaluate the first one and instead takes it literally to create a new Var of that name, which then holds the given value (if any). Vars can also be defined without a value, which keeps them initially unbound and is less common, but is sometimes needed to declare a Var for future reference.
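A quick sketch of such an unbound Var and a forward reference (the names here are hypothetical); the declare form is the idiomatic shorthand for forward declarations:

```clojure
(def answer)      ; defines the Var user/answer, currently unbound
(declare helper)  ; forward-declare helper for later use

(defn caller [x] (helper x)) ; compiles, even though helper is still unbound
(defn helper [x] (* x 2))    ; now bind helper to an actual function
(caller 21)
; 42
```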
We used Vars a couple of times already: the
+,
* and
let symbols are all bound to Vars defined in the
clojure.core namespace. But let's define two vars ourselves and then pass their values to a process:
(def a 4) ; def returns the created var object
; #'user/a
a ; just providing a Var's name will return its value
; 4
(def b 8)
; #'user/b
b
; 8
(+ (* a a) (* b b)) ; Vars used in a computation
; 80
If we want to refer to a Var itself, rather than its value, we can use the #' prefix or the var function.
To explain some more how Vars are used in the light of immutability, let's look at another example: In imperative languages like C, Java, JS etc. we have the
++ operator to increment a variable by one. Clojure has the
inc function: It too takes a value and returns the value + 1. So we can apply this to our
a and see what happens:
(inc a) ; returns a + 1 ; 5
Correct answer. But printing out
a shows its value is still 4...
a ; 4
This is because
inc does not operate on the Var
a, but only is given
a's current value
4. The value returned by
inc is entirely distinct and our
a is never touched (apart from reading its value).
Vars should only be used to define values in the interactive programming context of the REPL or for pre-defined values in Clojure source files. When we said that Vars are mutable, then this is only true in that they can be redefined with
def (and some other advanced functions we won't cover here) to have new values, but practically, a Var should be considered unchangeable. Of course, one could write a function which uses
def within its body to re-define a var with a new value, however this is considered non-idiomatic, generally bad form and is never seen in the wild. Just don't do it! If this is still confusing, we hope things will make more sense, once we've discussed Clojure's data structures and have seen how mutation of variables is actually not needed in practice.
Functions
In stark contrast to object oriented languages like Java, where classes and objects are the primary unit of computation, Clojure is a functional language with functions at its heart. They're "first class", standalone entities in this language and should be considered equal to any other type of data/value. They're accepted as arguments to other functions and can also be constructed or returned as result from a function call. With functions playing such a key role in Clojure, they can be defined in different ways and be given a name, but don't need to.
When defining a re-usable function, we most likely want to also give it a name so that we can easily refer to it again. To define a named function we use the built-in form
defn (
def's sibling and short for "define function") and provide it with all the important things needed: a name, a list of inputs (parameters) and the body of the function (the actual code). In pseudo-code this then looks like this:
(defn name [parameters] body-expressions)
...applied to our above example this could be written like this:
(defn hypot [a b]
  (let [a (* a a)
        b (* b b)
        c (+ a b)]
    (Math/sqrt c)))
This implementation is not the most concise, but shows how we can use
let to split up a computation into smaller steps and temporarily redefine symbols. We also make use of Java interop features to refer to Java's built-in
Math class and compute the square root of that expression. According to Pythagoras, this is the actual length of the third side (the hypotenuse) of the right-angled triangle given by
a and
b. A shorter alternative would be just this:
(defn hypot [a b] (Math/sqrt (+ (* a a) (* b b))))
If you're coming from a C-style language, you might wonder where we define the actual result (or return value) of this function. In Clojure this is implicitly given: Just as with
let, the result of the last expression in a function's path of execution is the result.
Now that we have defined our first function, we can call it like this:
(hypot 9 12) ; call fn with 9 & 12
; 15.0
(hypot a b)  ; call fn with our Vars a=4 & b=8
; 8.94427190999916
Anonymous functions
A function without a name is called an anonymous function. In this case they're defined via the special form
fn, like this:
(fn [params] body-expressions)
Just like
defn, this form takes a number of parameter names and body expressions. So another alternative would be to use
def and the
fn form to achieve the same function definition as above (
defn is really just a short form of this combo):
(def hypot (fn [a b] (Math/sqrt (+ (* a a) (* b b)))))
Anonymous functions are often used with Clojure's data processing features (see further below), for callbacks, or when the result of a function is another function, e.g. to pre-configure functions as explained next (readers with a JS background should also find the following familiar):
Let's take another brief look at the
greetings function we showed at the beginning of this tutorial:
(defn greetings [name] (println "hello" name))
Now, we assume such a greeting exists in other languages too, so we might want to define a German version as well:
(defn greetings-de [name] (println "hallo" name))
The only difference between the two is the first part of the greeting, so a more reusable alternative would be to redefine
greeting to use two arguments:
(defn greetings [hello name] (println hello name))
; #'user/greetings
(greetings "hello" "toxi")
; hello toxi
This is one of the situations where anonymous functions come into play, since we could define a
make-greetings function which takes a single parameter (a greeting) and returns an anonymous function which then only requires a name, instead of a greeting and a name. Instead of using
println we make use of the
str function to concatenate values into a single string and return it as result.
(defn make-greetings [hello]
  (fn [name] (str hello " " name "!"))) ; str concatenates strings
With this in place, we can now define a couple of Vars holding such greeters for different languages and then use these directly:
(def greetings-es (make-greetings "Hola"))
(def greetings-de (make-greetings "Guten Tag,"))
The new Vars
greetings-es &
greetings-de now contain the pre-configured functions returned by
make-greetings and we can use them like this:
(greetings-es "Ricardo")
; "Hola Ricardo!"
(greetings-de "Toxi")
; "Guten Tag, Toxi!"
We call functions which consume or produce functions Higher Order functions (HOF) and they play a crucial role in the functional programming world. HOFs like the one above are used to achieve a concept called Partial application and the mechanism enabling it is called a Closure, which should also explain Clojure's naming. We could also use the
partial function to achieve what we've done here manually.
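As a brief sketch of that shortcut (greet is a hypothetical two-argument version of our greeter):

```clojure
(defn greet [hello name] (str hello " " name "!"))

;; partial returns a new fn with the first argument(s) pre-filled,
;; the same effect make-greetings achieved manually via fn & closures
(def greetings-es (partial greet "Hola"))
(greetings-es "Ricardo")
; "Hola Ricardo!"
```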
Multiple arities & varargs
Even though this isn't the place to go into details just yet, Clojure allows functions to provide multiple implementations based on the number of arguments/parameters given. This feature enables a function to adjust itself to different usage contexts and also supports functions with a flexible number of parameters (also called varargs, discussed at the end of this article).
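As a small, illustrative sketch of both features (using only clojure.core; the fn names are just for this example):

```clojure
;; multiple arities: one fn, several parameter lists
(defn greet
  ([name] (greet "hello" name))        ; 1-arg version delegates to the 2-arg one
  ([hello name] (str hello " " name)))

;; varargs: & collects all remaining arguments into a sequence
(defn greet-all [hello & names]
  (apply str hello " " (interpose ", " names)))

(greet "toxi")
; "hello toxi"
(greet-all "hi" "Alice" "Bob")
; "hi Alice, Bob"
```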
Guards
Errors are an intrinsic aspect of programming, but taking a defensive stance can help catch many of them early on during the design stage, as articulated in the "Fail fast" philosophy popular amongst software folk. Clojure supports this form of design-by-contract approach, allowing us to specify arbitrary guard expressions for our functions and using them to pre-validate inputs (parameters) and/or post-validate the output. E.g. we might want to constrain the parameter of our
make-greetings function to only allow strings with less than 10 characters and ensure the function returns a string...
(defn make-greetings [hello]
  {:pre  [(string? hello) (< (count hello) 10)] ; pre-validation guards
   :post [(string? %)]}                         ; result validation
  (fn [name] (str hello " " name "!")))
Guards are given as a Clojure map (discussed further below) with
:pre/
:post keys, each containing a vector of boolean-valued expressions (also discussed further below; in our case it's a call to
string?, a so called predicate function, which only returns
true if its argument is a string). Since the result of a function is an unnamed value, we use the
% symbol to refer to it in the post-validator. Attempting to call this guarded function with a non-string or a too long greeting string will now result in an error even before the function executes:
(make-greetings nil)
; AssertionError Assert failed: (string? hello) ...
(make-greetings "Labas vakaras") ; apologies to Lithuanians...
; AssertionError Assert failed: (< (count hello) 10)
Mr. Fogus and Ian Rumford have some further examples...
Metadata
Before moving on to more exciting topics, let's briefly mention some more optional features of functions: metadata, documentation & type hints. The following function is an extended version of our
make-greetings fn with all of these things included:
(defn make-greetings
  "Takes a single argument (a greeting) and returns a fn which also
  takes a single arg (a name). When the returned fn is called, prints
  out a greeting message to stdout and returns nil."
  [^String greeting]
  (fn [^String name] (str greeting " " name "!")))
The string given between the function name and parameter list is a doc string and constitutes metadata added to the Var
make-greetings defined by
defn. Doc strings are defined for all built-in Clojure functions and generally can be read in the REPL using
doc:
(doc make-greetings)
; ([greeting])
;   Takes a single argument (a greeting) and returns a fn which also
;   takes a single arg (a name). When the returned fn is called,
;   prints out a greeting message to stdout and returns nil.
Clojure allows arbitrary metadata to be added to any datatype and this data can be created, read and manipulated with functions like
meta,
with-meta and
alter-meta. Please see the Clojure website for more information. E.g. to show us the complete metadata map for our
make-greetings Var we can use:
(meta (var make-greetings))
; {:arglists ([greeting]),
;  :ns #,
;  :name make-greetings,
;  :doc "Takes a single argument (a greeting)...",
;  :line 1,
;  :column 1,
;  :file "/private/var/..."}
Type hints attached to the function parameters were the other addition (and form of compiler metadata) used above. By specifying
^String we indicate to the compiler that the following argument is supposed to be a String. Specifying type hints is optional and is an advanced topic, but can be very important for performance critical code. Again, we will have to refer you to the Clojure website for further details.
Truthiness, conditionals & predicates
The concept of branching is one of the fundamental aspects in programming. Branching is required whenever we must make a decision at runtime based on some given fact and respond accordingly. In many programming languages, we use the Boolean values
true and
false to indicate the success or general "truth" value of something. These values exist in Clojure too, of course. However, in many places Clojure applies a more general term for what constitutes truth (or "success") and considers any value apart from
false or
nil as
true. This includes any datatype!
As you might expect, the basic boolean logic operations in Clojure are
and,
or and
not. The first two can take any number of arguments and each will be either
truthy or not:
(or nil false true)
; true
(or 1 false) ; `or` bails at the first truthy value encountered
; 1
(and true nil false) ; `and` bails at the first falsy value encountered
; nil
(and true "foo") ; if all arguments are truthy, `and` returns the last
; "foo"
(not false)
; true
(not nil)
; true
(not 1)
; false
An important aspect of and & or is that both are lazy, i.e. their arguments are only evaluated as long as needed: or stops at the first truthy value, and at the first falsy one. Combined with Clojure's definition of truthiness and and/or returning not just boolean values, it's often possible to avoid traditional branching in our code.
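Two common idioms building on this (config and page-size are just illustrative names):

```clojure
;; `or` as a default-value idiom: returns the first truthy argument
(defn page-size [config]
  (or (:page-size config) 20)) ; fall back to 20 if the key is missing/nil

(page-size {:page-size 50})
; 50
(page-size {})
; 20

;; `and` short-circuits at the first falsy value
(and {} :ok)  ; :ok (an empty map is truthy!)
(and nil :ok) ; nil (:ok is never even evaluated)
```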
For the cases where we do need proper branching, we can use the
if and
when forms:
if takes a test expression and one or two body expressions of which only one will be executed based on the test result, in pseudo code:
(if test true-body-expression false-body-expression)
...in real terms:
(def age 16)
(if (>= age 21) "beer" "lemonade")
; "lemonade"
Being restricted to a single form for both the "truthy" and "falsy" branch is one important limitation of the
if form, but is a reflection of Clojure's focus on using functions and an encouragement to keep side effects (e.g. I/O operations) contained within functions. The second, falsy branch of
if is also optional and if not needed, it is more idiomatic to use
when instead.
when is somewhat more flexible in these cases, since its body can contain any number of forms to be executed if the test succeeds:
(when (and (>= age 21) (< age 25))
  (println "Are you sure you're 21?")
  (println "Okay, here's your beer. Cheers!"))
To achieve a similar effect using
if we can either wrap these two
println's in a function or use the
do form, which is used as an invisible container for grouping (often side-effecting) actions and it returns the result of its last expression:
(do (expression-1) (expression-2) ...)
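Combining if and do, here's a small sketch reusing the age example from above:

```clojure
(def age 16)
(def drink
  (if (>= age 21)
    (do
      (println "ID checked.") ; side effect, only runs in the truthy branch
      "beer")                 ; the do block's result
    "lemonade"))
drink
; "lemonade"
```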
Data structures
Data lies at the heart of any application, big or small. Apart from dealing with primitive data like individual numbers, characters and strings, one of the biggest differences between programming languages (and therefore one of the most important factors for choosing one language over another) is in the ways complex data can be defined and manipulated. As we will see next, this aspect is one of Clojure's highlights, as the language not only provides a rich set of primitives (incl. ratios, big integers, arbitrary precision decimals, unicode characters, regular expressions), but also truly powerful approaches to model & process data, of which we can unfortunately only outline some in the scope of this tutorial.
Before we discuss the various common data structures, we also need to point out once more that Clojure is a dynamically typed, yet compiled language. All of the following data structures can be fully recursive and contain any mixture of data types.
For reference, a full list of Clojure data structures is available on the Clojure website.
Lists
Lists are sequential data containers of multiple values and form the heart of Clojure (and Lisp in general). In fact, the name "Lisp" is short for List Processing. We actually already know by now how lists are defined, having done so many times in the previous sections: Lists can take any number of elements and are defined with
( and
) or using the function
list. We also know by now that lists are usually evaluated as function calls, so trying to define a list of only numbers will not work as expected:
(1 2 3 4) ; ClassCastException java.lang.Long cannot be cast to clojure.lang.IFn
Because the first element of our list is the number
1 (not a function!), Clojure will give us this error... Here's how & why:
Homoiconicity
Long story short: In Clojure, code is data and data is (can be) code. Languages using the same data structures for code and data are called homoiconic and all Lisps share this feature, as well as other languages like R, XSLT, PostScript etc.
To treat code as data, we somehow need to circumvent the evaluation of our list as function call. To that purpose Clojure provides us with a
quote mechanism to evaluate a data structure literally (as symbolic data). We can do this with any Clojure data structure to recursively stop evaluation of a form as code:
(quote (1 2 3 4))
; (1 2 3 4)
'(1 2 3 4) ; the apostrophe is a shorthand for `quote`
; (1 2 3 4)
'(+ 1 2)
; (+ 1 2)
(println "the result is" (+ 1 2))
; the result is 3
(println "the result of" '(+ 1 2) "is" (+ 1 2))
; the result of (+ 1 2) is 3
The diagram below shows the impact of quoting and the difference of the resulting trees:
We could also use the
list function to programmatically assemble a list/function (using our previously defined vars
a and
b) and then evaluate it with
eval:
; first construct a function call as a list of individually quoted symbols
(def a-plus-b (list '+ 'a 'b))
; #'user/a-plus-b
a-plus-b ; show resulting list
; (+ a b)
(eval a-plus-b) ; treat data as code: evaluate...
; 12
; treat code as data structure & look at the first item of that list
(first a-plus-b)
; # ; internal representation of `+` fn

; next treat code as data: replace all occurrences of a & b w/ their square values
; the {...} structure is a map of key => value pairs (discussed below):
; any keys found in the original list are replaced with their values
; so `a` is replaced with (* a a) and `b` with (* b b)
; the map is also quoted to avoid the evaluation of its contents
(replace '{a (* a a) b (* b b)} a-plus-b)
; (+ (* a a) (* b b))
; btw. if the map were *not* quoted, it would be evaluated as:
{a (* a a) b (* b b)}
; {4 16, 8 64}
(eval (replace '{a (* a a) b (* b b)} a-plus-b)) ; data as code again...
; 80
We will discuss the
firstfunction in more detail below.
Right now, you might wonder why this is all worth pointing out. The most dramatic implication of homoiconicity is the enabling of metaprogramming, the programmatic generation & manipulation of code at run time and by doing this, being able to define our own language constructs. It also opens the door to lazy evaluation of code or skipping code entirely depending on context (e.g. what happens with
and/
or). Unlike C's pre-processor, which only operates on the original textual representation of the source code (before the compile step and hence is severely limited and potentially more error prone), Lisps give us full access to the actual data structures as they're consumed by the machine. For example this makes Clojure an ideal candidate for genetic programming or to implement your own domain specific language. The main forms responsible for these kinds of code transformations are macros and we will leave them for another tutorial...
Clojure lists have another important detail to know about: Because they're implemented as independent, linked elements, they can only be efficiently accessed at their head (the beginning of the list) and they don't provide direct, random access to any other elements. This restriction makes them less flexible than the next data structure, but there are still concrete use cases where this limitation doesn't matter: e.g. implementing stacks.
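A quick sketch of a list used as a stack (peek, pop and conj are all discussed in more detail further below; all three operate on a list's head, and therefore run in constant time):

```clojure
(def stack '(:b :a)) ; a list, most recent item at the head
(peek stack)         ; read the top item
; :b
(pop stack)          ; remove the top item
; (:a)
(conj stack :c)      ; push a new item onto the head
; (:c :b :a)
```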
Vectors
Since lists in Clojure are both limited in terms of access and are semantically overloaded (as containers of code), it's often more convenient to use another similar data type to store multiple values: vectors. Vectors are literally defined using
[ and
] or the
vector function and are, like lists, a sequential data structure. We already encountered vectors when defining the parameters for our functions above, but just for kicks, here we define a vector with each element using a different data type: number, string, character & keyword (the latter is explained in more detail in the next section)
[1 "2" \3 :4] ; [1 "2" \3 :4]
Like lists, vectors can contain any number of elements. Unlike lists, but very much like arrays and vectors in other languages, they can also be accessed randomly using an element index. This can be done in multiple ways:
(def v [1 2 3 4])
; #'user/v

(get v 0) ; using the `get` function with index 0
; 1

(get v 10 -1) ; using `get` with a default value -1 for missing items
; -1

(v 0) ; using the vector itself as function with index 0 as param
; 1
Maps & keywords
Maps are one of the most powerful data structures in Clojure and provide an associative mapping of key/value pairs. They're similar to HashMaps in Java or some aspects of JavaScript objects, however both keys and values can of course be of any data type (incl.
nil, functions or maps themselves). The most common data type for map keys however, are keywords.
Keywords are simply symbols which evaluate to themselves (i.e. they have no other value attached). Within a given namespace only a single instance exists for each defined keyword. They can be created by prefixing a name with
: or with the
keyword function. Keywords can contain almost any character, but no spaces!
:my-key
; :my-key

(keyword (str "my-" "key")) ; kw built programmatically
; :my-key
Back to maps now. They are defined with
{ and
} or the
hash-map function (plus a few other variations we will skip here). Here's a map with 3 keys (
:a :b :c), each having a different data type as its value (also note that
:c's map uses strings as keys, much like JSON objects):
(def m {:a 23 :b [1 2 3] :c {"name" "toxi" "age" 38}})
; #'user/m
Having defined a map structure, we can now lookup its values using keys. Once again many roads lead to Rome:
(m :a) ; use the map as function with :a as lookup key
; 23

(:b m) ; use key :b as function applied to m
; [1 2 3]

(get m :c) ; use get function with :c as key
; {"name" "toxi", "age" 38}

(:foo m) ; lookup a missing key returns nil
; nil

(get m :foo "nada") ; use get with default value for missing keys
; "nada"
Note, we can use both maps & keywords as functions, because both implement Clojure's mechanism for function calls. Depending on context, it's good to have both as an option.
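Keywords as functions become especially handy in combination with higher-order functions such as map (covered further below), e.g. to extract a single field from a whole collection of maps; a small sketch:

```clojure
; extract the :name value from each map in the vector
(map :name [{:name "toxi" :age 38}
            {:name "nardove"}])
; ("toxi" "nardove")
```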
Since the values for
:b and
:c are nested data structures, we can continue this further...
((:b m) 2)
; 3

((:c m) "name")
; "toxi"
Although this works, Clojure offers an alternative (nicer) approach, which becomes especially handy if our nesting increases: The
get-in function allows us to specify a "path" (as vector) into our data structure to look up a nested value. As we saw already with
get, this function can be applied to both vectors and maps (or a mixture of both):
(def db {:toxi    {:name "Karsten"
                   :address {:city "London"}
                   :urls ["" ""]}
         :nardove {:name "Ricardo"
                   :urls [""]}})

(get-in db [:toxi :address :city])
; "London"

(get-in db [:nardove :address :city] "I think Bournemouth")
; "I think Bournemouth"

(get-in db [:nardove :urls 0])
; ""
select-keys can be used to extract a sub-set of keys from a map. The new map only contains the keys listed as arguments (if present in the map):
(select-keys m [:a :b :foo]) ; :foo isn't present in `m` so won't be in result... ; {:a 23 :b [1 2 3]}
Sets
Sets are incredibly useful whenever we must deal with unique values, but don't care about their ordering. The name comes from Set theory in Mathematics. A Clojure set is (usually) unordered and will never contain more than a single instance of a given value. We will exploit this fact in the next part of the tutorial to build up our first full example application. Sets are defined like this:
#{1 2 3 4}
; #{1 2 3 4}

#{1 1 2 3 4}
; IllegalArgumentException Duplicate key: 1
Be aware that the literal definition syntax of sets doesn't allow duplicate values. However, we can use its functional equivalent:
(hash-set 1 1 2 3 4) ; #{1 2 3 4}
...or we could use the
set or
into functions to convert another data structure into a set and hence filter out any duplicate values from the original (of course without destroying the original!):
(def lucky-numbers [1 2 3 4 4 2 1 3])
; #'user/lucky-numbers

(set lucky-numbers)
; #{1 2 3 4}

(into #{} lucky-numbers)
; #{1 2 3 4}

lucky-numbers ; the original remains unchanged
; [1 2 3 4 4 2 1 3]
Since a set can be considered a special kind of map in which keys have no values, but are simply mapped to themselves, we can use the same lookup approaches to check if a value is present or not.
(get #{1 2 3 4} 3)
; 3

(#{1 2 3 4} 5)
; nil

(get #{1 2 3 4} 5 :nope)
; :nope
As a slightly more practical example, let's define a nested set of sets to encode the following mini social graph:
(def g #{#{:toxi :ricardo} #{:mia :toxi} #{:filip :toxi} #{:filip :ricardo}})
Let's also define a simple lookup function (a predicate) to check if two people know each other:
(defn knows?
  "Takes a graph and two node names, returns true if the graph
  contains a relationship between the nodes (ignoring direction)"
  [graph a b]
  (not (nil? (graph #{a b}))))
The
nil? function returns true if its given argument is
nil.
Now we can use this function to get some answers (the order of names doesn't matter):
(knows? g :toxi :filip)
; true

(knows? g :ricardo :toxi)
; true

(knows? g :filip :mia)
; false
Common data manipulation functions
Even in the face of immutability, what good is a data structure, if it can't be manipulated? One of the most often quoted and popular sayings amongst Clojurians is:
"It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
-- Alan J. Perlis
It neatly sums up Clojure's approach to data processing and is achieved through a number of elegant abstraction mechanisms, allowing dozens of core language functions to work polymorphically with different types of data. Polymorphism allows for a very small, but powerful API and so reduces cognitive load for the programmer. Because each of the above mentioned data structures has its own peculiarities, the concrete behaviour of the functions discussed below slightly varies and adjusts to each data type.
Adding elements
Adding new data to existing collections is one of the most common programming tasks and in Clojure is usually done with
conj.
For vectors, new elements are added to the end/tail:
(conj [1 2 3] 4 5 6) ; [1 2 3 4 5 6]
For lists, new elements are added to the beginning/head (because it's most efficient), therefore resulting in the opposite value order:
(conj '(1 2 3) 4 5 6) ; (6 5 4 1 2 3)
Maps are unordered collections and consist of key/value pairs. To add new pairs to a map, we need to define each as vector:
(conj {:a 1 :b 2} [:c 3] [:d 4]) ; {:d 4, :c 3, :a 1, :b 2}
Sets are also unordered and don't allow duplicates, so adding duplicate values will have no effect:
(conj #{1 2 3} 1 2 4 5) ; only 4 and 5 are added ; #{1 2 3 4 5}
Another often used alternative exists for maps and vectors, both of which are associative collections: Maps associate keys with values. Vectors associate numeric indices with values. Therefore Clojure provides us with the
assoc function to add new or replace existing associations (
assoc too takes a flexible number of parameters so that more than one such association can be changed in one go):
(assoc {:a 23} :b 42 :a 88) ; override :a, add :b
; {:a 88 :b 42}

(assoc [1 2 3] 0 10, 3 40) ; override 1st element, add new one (comma is optional)
; [10 2 3 40]
Important: For vectors you can only add new indices directly at the tail position. I.e. if a vector has 3 elements we can add a new value at position 3 (with indices starting at 0, this is actually the 4th element, therefore growing the vector by one). Attempting to
assoc a greater index will result in an error, be careful:
(assoc [1 2 3] 10 -1) ; IndexOutOfBoundsException...
Nested data manipulations
When dealing with nested structures we can use
assoc-in and
update-in to manipulate elements at any level. E.g. we might want to add Ricardo's current home town to our above mini DB map:
(assoc-in db [:nardove :address :city] "Bournemouth")
; {:toxi ...
;  :nardove
;  {:name "Ricardo",
;   :urls [""],
;   :address {:city "Bournemouth"}}}
Like
get-in,
assoc-in takes a path into the data structure and adds (or replaces) the value for that key. Whilst doing that it also creates any missing nesting levels automatically (i.e.
:nardove's map did not even contain an
:address key beforehand).
update-in is similar to
assoc-in, however instead of a fixed value to be inserted into the collection, it takes a function (incl. any additional params) which is being applied to the current value for the key and then uses the result of that function as the new value. E.g. here we use
update-in and
conj to add another URL to
:toxi's DB entry:
(update-in db [:toxi :urls] conj "")
; {:toxi
;  {:name "Karsten",
;   :urls ["" "" ""],
;   :address {:city "London"}}
;  ...
Removing elements
To remove items from a collection, we can use
dissoc (for maps) or
disj (disjoin) for sets. If a key to be removed isn't present, both functions have no effect.
(dissoc {:a 23 :b 42} :b)
; {:a 23}

(disj #{10 20 30} 20)
; #{10 30}
Lists and vectors only allow for the direct removal near the head or tail, but don't support removing random items:
pop applied to a list removes the first item, for vectors the last item. If the collection is empty,
pop will throw an exception.
(pop '(1 2 3))
; (2 3)

(pop [1 2 3])
; [1 2]
Immutability, one more time
We've just seen how we can add & remove elements from collections, thus seemingly modifying them - which technically would make them mutable, not immutable. However, as we've seen earlier, the modified results returned by these functions are not the original collections. To clarify:
(def v [1 2 3 4])
; #'user/v

(def v2 (conj v 5))
; #'user/v2

v2
; [1 2 3 4 5]

v
; [1 2 3 4]
Our original
v still exists even though we've added
5 to it! Under the hood Clojure has created a new data structure (bound to
v2) which is the original collection
v with
5 added. Thinking like a programmer, your next questions should be immediately: Isn't that incredibly inefficient? What happens if I want to add a value to a vector with 10 million elements? Doesn't it become super slow & memory hungry to copy all of them each time? The short answer is: No. And here's why:
Persistent data structures
All Clojure data structures are so called persistent data structures (largely based on the paper by Chris Okasaki). Internally they're implemented as a tree and therefore can easily provide structural sharing without the need to copy data, which would be the naive solution to achieve immutability. The following diagram illustrates what happens internally for the above example:
Using trees as the internal data structure, our
v2 can share the original contents of
v and simply add a new leaf to its tree, pointing to the added value
5. This is very cheap and doesn't cause a huge loss of performance, regardless of the size of the collection. The same principle is applied to all of the mentioned data structures and it's this uniform approach which both enables & requires immutability.
Sequences
This section discusses Clojure's uniform approach to data processing using sequence abstractions. A sequence is a logical view of a data structure. All Clojure data structures can be treated as sequences, but the concept is extended even further and Clojure sequences include Java collections, strings, streams, directory structures and XML trees etc. You can even build your own ones by implementing an interface. The name for sequences in Clojure is
seq and any compatible data structure can be explicitly turned into a
seq using the function with the same name.
The sequence API
The sequence API is a minimal, low level interface consisting largely of only these four functions:
first,
next to read and
cons &
seq to create sequences. All of the following functions are built on top of these, but before we get there let's first illustrate their role using a vector and a hash map as an example:
(def my-vec ["1st" "2nd" "3rd"])
(def my-map {:a "1st" :b "2nd" :c "3rd"})
Any Clojure collection can be turned into a seq, using the
seq function. If the original collection is empty,
seq will return
nil.
(seq my-vec)
; ("1st" "2nd" "3rd")

(seq my-map)
; ([:a "1st"] [:c "3rd"] [:b "2nd"])

(seq "creativeapplications.net") ; a string's seq is its characters
; (\c \r \e \a \t \i \v \e \a \p \p \l \i \c \a \t \i \o \n \s \. \n \e \t)

(seq [])
; nil
Since a map consists of key/value pairs, a map's seq is a seq of its pairs (vectors of 2 elements). And since a map is an unordered collection, the order of elements in its seq is undefined...
first
...returns the first element of a seq (or
nil if the sequence is empty):
(first my-vec)
; "1st"

(first my-map)
; [:a "1st"]

(first "hello") ; a string can be turned into a seq as well...
; \h

(first []) ; first of an empty vector/seq returns nil
; nil
next & rest
As you might have guessed already,
next returns a seq of all the remaining elements, excluding the first one. Again, if there're no further elements in the seq,
next also returns
nil.
(next my-vec)
; ("2nd" "3rd")

(next my-map)
; ([:c "3rd"] [:b "2nd"])
We could now also combine the use of
first and
next to retrieve other elements, e.g. the 2nd element is the first element of the seq returned by next:
(first (next my-vec))
; "2nd"

(first (next (next my-vec)))
; "3rd"
rest is almost identical to
next, however will always return a seq: If there're no more elements, it will simply return an empty seq instead of
nil.
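A quick comparison of the two at the end of a sequence:

```clojure
(next [1]) ; no elements remain
; nil
(rest [1]) ; same situation, but an empty seq instead
; ()
(rest [])
; ()
```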
cons
This function is used to prepend elements to a seq.
cons takes two arguments, a value and an existing seq (or seqable collection) and adds that value at the front. If
nil is given as the 2nd argument, a new seq is produced:
(cons 1 nil)
; (1)

(cons 2 (cons 1 nil))
; (2 1)

(cons \c "ab")
; (\c \a \b)
Looping, iteration & recursive processing
At this point you might be wondering what use these above functions have in practice. Since Clojure offers far more high-level approaches to work with data collections, direct use of these functions in Clojure is actually less common. Yet, before we discuss these higher level functions, please bear with us as we want to illustrate some other important core operations common to all programming languages, one to which Clojure adds its own twist (again): Iteration & recursion. Meet the
loop construct.
loop & recur
loop defines a body of code which can be executed repeatedly, e.g. to iterate over the elements of a sequence. This is best illustrated by an example, a loop which takes a vector and produces a seq of the vector's elements in reverse order (Clojure actually provides the
reverse function to do the same for us, but we're focussed on
loop here):
(loop [result nil, coll [1 2 3 4]]
  (if (seq coll)
    (let [x (first coll)]
      (recur (cons x result) (rest coll)))
    ; no more elements, just return result...
    result))
; (4 3 2 1)
The vector following the
loop keyword is a binding vector just as we know from
let and it can be used to bind any number of symbols. In our case we only require two: the initially non-existing
result sequence (set to
nil) and
coll, a vector of numbers to be processed. The other code is the loop's body which is executed repeatedly: At each iteration we first check if our
coll contains any more elements by calling
seq (remember,
seq returns
nil (and is therefore falsy) when given an empty collection). If there're any elements remaining, we bind
coll's
first element to
x. What follows next is a call to
recur, the actual mechanism to trigger the recursive re-execution of the loop, however each time with new values given for the loop's
result and
coll symbols. These new values are the updated result sequence (with
x prepended) and the remainder of the current collection produced via
rest. Once
rest returns an empty seq, the loop is finished and
result is "returned" as final value.
The combined application of
loop &
recur is the most low-level and verbose construct to create iterative behavior in Clojure, but because of that is also the most flexible. The most important restriction however is that
recur can only be used at the end (tail) of a
loop's execution path, meaning there can be no further expressions following
recur (hence the concept is called tail recursion). In the above example you might think the final occurrence of
result violates this restriction, but that is not true:
recur is the last expression of the "truth branch" of the enclosing
if, whereas the returned
result is on its other branch and therefore independent.
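As an aside, recur can also target the enclosing function itself, without an explicit loop; the function's own parameters then act as the loop bindings. A minimal sketch (countdown is just a made-up name for illustration):

```clojure
(defn countdown [n]
  (when (pos? n)
    (println n)
    (recur (dec n)))) ; re-invokes countdown with n-1, without growing the stack

(countdown 3)
; 3
; 2
; 1
; nil
```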
doseq
A more concise way of completely iterating the elements of a collection is offered by
doseq, however this form is designed to work with/trigger side effects and only returns
nil. The example iterates over a vector of hashmaps and displays each person's age with some extra formatting:
(doseq [p [{:name "Ben" :age 42}
           {:name "Alex"}
           {:name "Boris" :age 26}]]
  (let [name   (:name p)
        suffix (if (#{\s \x} (last name)) "'" "'s")
        age    (or (:age p) "rather not say")]
    (println (str name suffix " age: " age))))
; Ben's age: 42
; Alex' age: rather not say
; Boris' age: 26
; nil
The value of
suffix is based on the
last letter of a person's name and is usually
's (unless the last letter is in the set
#{\s \x}). We'd also like the
:age to be optional and provide a default value if missing...
dotimes
dotimes is yet another looping construct used for side effects, this time just for simply binding a symbol to a number of iterations:
(dotimes [i 3] (println i))
; 0
; 1
; 2
Common sequence processing functions
Now that we've discussed some of the underlying forms and mechanisms, it's time to focus on the more commonly used features of Clojure's sequence processing.
Loops and iterators are the de-facto tools/patterns to process collections in many imperative languages. This is where idiomatic Clojure probably differs the most, since its functional approach is more focused on the transformation of sequences using a combination of higher order functions and so called:
Pure functions
A pure function does not depend on any data other than its inputs and causes no side effects (e.g. I/O operations). This makes it referentially transparent, meaning the function call could be replaced with its result without any impact; in other words, the function consistently provides the same value, given the same inputs. Pure functions can also be idempotent, meaning that applying a function multiple times has no other effects than applying it once. E.g.
(Math/abs -1) will always provide
1 as a result and
(Math/abs (Math/abs -1)) will not change it, nor will it cause any other effect.
Pure functions play a key role in functional programming. Their characteristics allow us to compose small, primitive functions into more complex constructs with predictable behaviors.
Memoization of pure functions
The caching of results of previous calls to a function is called memoization. This technique is especially useful if these results are produced by a complex/slow process. Clojure provides the
memoize HOF to allow any function to be memoized; however, safe memoization requires those functions to be pure. We can demonstrate this caching effect by simulating a slow function using Java interop and
Thread/sleep:
; simulate long process by sleeping for x * 1000 milliseconds
(defn slow-fn [x] (Thread/sleep (* x 1000)) x)

(def not-so-slow-fn (memoize slow-fn))

(not-so-slow-fn 3)
; 3 <-- takes 3 seconds the first time

(not-so-slow-fn 3)
; 3 <-- immediate (cached) result
Map-Reduce
Several years ago, Google published a paper about their use of the Map-Reduce algorithm. Whereas this paper was focused on the distributed application of that algorithm running in parallel on thousands of machines, the general approach itself has been around for decades and plays an important role in many functional languages, where it is the de-facto pattern to process data without the need for explicit loops.
The idea of Map-Reduce is to first transform the elements of an input collection into an intermediate new collection of values, which is then passed to a reduction function, producing a single final result value. This result could be any data type, though, incl. a new collection.
Even though Map-Reduce is a 2-phase process, each phase can also be applied on its own. I.e. sometimes there's no need for a later reduction or an initial mapping step.
Btw. Several modern "NoSQL" database systems (e.g. CouchDB, MongoDB, Riak) and distributed data processing platforms like Hadoop also heavily rely on Map-Reduce as underlying mechanism to process & create views of data. So if you ever intend to work with such systems, it's quite useful to work through this section, even if you have no further interest in Clojure...
map
In mathematical terms mapping is the transformation of values through the application of a function. In Clojure the
map function is one of the most often used functions. It takes a transformation function and applies it to each element in a given collection/sequence. E.g. The example below takes the function
inc and a seq of numbers. It then applies
inc to each number individually and returns a new sequence of the results:
(map inc [1 2 3 4 5]) ; (2 3 4 5 6)
The transformation function given to
map can be anything and it's also one of the situations where anonymous functions are often used. E.g. here we produce a seq of square numbers:
(map (fn [x] (* x x)) [1 2 3 4 5]) ; (1 4 9 16 25)
As an aside, since anonymous functions are often very short, they can also be defined more concisely (though become less legible). The following is equivalent to the above expression:
(map #(* % %) [1 2 3 4 5]) ; (1 4 9 16 25)
Here we use the reader macro
#(..) to define an anon fn and the symbol
% to refer to the first (in this case only) argument. If such a function takes more than a single arg, then we can use
%2,
%3 etc. to refer to these...
(#(* % %2) 10 2) ; call anon fn with args: 10, 2
; 20
map can also be applied to several collections at once. In this case the transformation function needs to accept as many parameters as there are collections. Let's use
map to build a seq of hashmaps from two vectors of point coordinates and colors. Each time our transformation fn is given a single point (a vector) and a color keyword. The fn simply combines these values into a single map with keys
:pos and
:color:
(map (fn [p c] {:pos p :color c})         ; transformation fn
     [[100 0] [0 -100] [0 100] [200 100]] ; points
     [:red :green :blue])                 ; colors
; ({:pos [100 0], :color :red}
;  {:pos [0 -100], :color :green}
;  {:pos [0 100], :color :blue})
Important: You might have noticed that our vector of points has one more element than there're colors. In that case,
map will stop as soon as one of the input collections is exhausted / has no further values. In this case we only have 3 colors, so the last (4th) point is ignored.
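This behavior can also be exploited deliberately, e.g. to pair the elements of a collection with indices by mapping against a longer range of numbers (just a sketch; Clojure also provides map-indexed for this exact purpose):

```clojure
; range is longer than the keyword vector, so mapping stops after 3 pairs
(map vector (range 100) [:a :b :c])
; ([0 :a] [1 :b] [2 :c])
```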
Laziness and lazy seqs
One thing not immediately obvious when experimenting with
map in the REPL, is that the seq returned by
map is a so called
lazy-seq, that is, the transformation function is actually not applied to our original values until their results are needed. In other words,
map is more of a recipe for a computation, but the computation does not ever happen if we don't attempt to use its results.
To illustrate this better, let's again simulate a slow transformation function which takes 1 second per value. With 5 values in the original collection, our entire processing should take approx. 5 seconds:
(def results (map (fn [x] (Thread/sleep 1000) (* x 10)) [1 2 3 4 5]))
When this code executes, we can see the REPL immediately returned a result, the new Var
user/results. It did not take 5 seconds because at this stage we haven't yet attempted to do anything with that new Var, and hence no mapping has taken place thus far. It's plain lazy!
Now trying to display the contents of
results however will force the computation and therefore will take 5 seconds until we can see the mapped values:
results ; takes ~5secs, world's slowest multiply ; (20 30 40 50 60)
reduce
reduce is Clojure's natural way of expressing an accumulation over a sequence of values. Like
map it takes a function, an optional initial result and a sequence whose elements will be passed to the transformation function individually, for example:
(reduce + 0 [1 2 3 4 5 6 7 8 9 10]) ; 55
In this case
reduce uses the function
+ to combine all values of our seq into the accumulated result one by one. The transformation function must always take 2 arguments: the current result (reduced value) and the next item to be processed. If no initial result is given, the first iteration will consume the first 2 items from the sequence.
In our case this happens:
(+ 0 1) ; 0 is the initial result, returns 1
(+ 1 2) ; 1 is current result, returns 3
(+ 3 3) ; returns 6
; and so on until the seq is exhausted...
Clojure also provides an alternative to
reduce, called
reductions. Instead of the just final reduction it returns a seq of all intermediate results (here we also use
range to create a seq of numbers from 0-9):
(reduce + (range 10))
; 45

(reductions + (range 10))
; (0 1 3 6 10 15 21 28 36 45)
filter
filter takes a function and a seq, then applies the function to each element and returns a lazyseq of only the elements for which the function returned a "truthy" value. Such functions are also called "predicates".
Clojure has a number of predicate functions which rely on truthiness and they can be easily recognized by their general naming convention, a function name suffixed with
?. E.g.
even? can be used to filter out all even numbers from the seq of numbers 0-9:
(filter even? (range 10)) ; (0 2 4 6 8)
Since the function needn't strictly return
true or
false, we can also use a set as predicate to filter out only values which are present in the set:
(filter #{1 2 4 8 16 32} (range 10)) ; (1 2 4 8)
Again we're using data as code, since vectors, maps & sets all can be used as functions and return
nil if a value isn't present, therefore fulfilling the general contract of a predicate function...
take / drop
Sometimes we are only interested in a chunk of values from a larger collection. We can use
take to retrieve the first
n elements from a collection as a lazy sequence:
(take 3 '(a b c d e f)) ; (a b c)
In contrast, we can use
drop to ignore the first
n elements and give us a lazy sequence of all remaining elements:
(drop 3 '(a b c d e f)) ; (d e f)
Clojure has a few other variations on that theme, most notably
take-last,
drop-last,
butlast,
take-nth,
take-while and
drop-while. The latter two also take a predicate function and terminate as soon as the predicate returns a "falsy" result:
(take-while #(< % 5) (range 10)) ; (0 1 2 3 4)
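And for symmetry, drop-while skips elements as long as the predicate returns a truthy result and yields all remaining ones:

```clojure
(drop-while #(< % 5) (range 10))
; (5 6 7 8 9)
```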
concat & mapcat
concat splices any number of seqs together into a single new lazy seq. The new `rotate-left` function shows how we can use `concat` with `take`/`drop` to rotate elements in a sequence:
(concat [1 2 3] '(a b c) {:a "aa" :b "bb"})
; (1 2 3 a b c [:a "aa"] [:b "bb"])

(defn rotate-left [n coll]
  (concat (drop n coll) (take n coll)))
; #'user/rotate-left

(rotate-left 3 '(a b c d e f g h i))
; (d e f g h i a b c)
mapcat is a combination of
map &
concat. Like
map it accepts a transformation function and a (number of) seqs. The mapping function needs to produce a collection for each step which are then concatenated using
concat:
; another social graph structure as from above (only w/ more people)...
(def g2 #{#{:ricardo :toxi} #{:filip :edu}
          #{:filip :toxi} #{:filip :ricardo}
          #{:filip :marija} #{:toxi :marija}
          #{:marija :edu} #{:edu :toxi}})

; step 1: produce a seq of all relations
(map seq g2)
; ((:marija :filip) (:toxi :marija) (:edu :filip) (:ricardo :filip)
;  (:toxi :edu) (:toxi :ricardo) (:marija :edu) (:toxi :filip))

; step 2: combine rels into single seq
(mapcat seq g2)      ; option #1: `seq` as transform fn
(mapcat identity g2) ; option #2: `identity` as transform (same result)
; (:marija :filip :toxi :marija :edu :filip :ricardo :filip
;  :toxi :edu :toxi :ricardo :marija :edu :toxi :filip)

; step 3: form a set of unique nodes in the graph
(set (mapcat identity g2))
; #{:toxi :marija :edu :ricardo :filip}

; step 4: build map of node valence/connectivity
(frequencies (mapcat identity g2))
; {:marija 3, :filip 4, :toxi 4, :edu 3, :ricardo 2}
There're two functions we haven't dealt with so far:
identity simply returns the value given as argument.
frequencies consumes a seq and returns a map with the seq's unique values as keys and their number of occurrences as values, basically a histogram.
take &
drop are also important with respect to one more (optional) property of lazy sequences we haven't mentioned so far:
Infinite sequences
The concept of infinite data in a non-lazy (i.e. eager) context is obviously unachievable on a machine with finite memory. Laziness, however does enable potential infinity, both in terms of generating and/or consuming. In fact, there're many Clojure functions which exactly do that and without the proper precautions (i.e. combined with
take,
drop and friends), they would bring a machine to its knees. So be careful!
We already have used one of these potentially infinite sequence generators above:
range when called without an argument produces a lazyseq of monotonically increasing numbers:
(0 1 2 3 4 ...) (Since the REPL always tries to print out the result, do not ever call one of these without guards in the REPL!)
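Guarded with take, the argument-less range is perfectly safe:

```clojure
(take 5 (range))
; (0 1 2 3 4)

(take 3 (drop 1000000 (range))) ; thanks to laziness even deep offsets stay cheap
; (1000000 1000001 1000002)
```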
Other useful infinite lazyseq generators are:
cycle delivers a lazyseq by repeating a given seq ad infinitum:
(take 5 (cycle [1 2 3]))
; (1 2 3 1 2)

(take 10 (take-nth 3 (cycle (range 10))))
; (0 3 6 9 2 5 8 1 4 7)
repeat produces a lazyseq of a given value:
(take 5 (repeat 42))
; (42 42 42 42 42)

(repeat 5 42)
; (42 42 42 42 42)
repeatedly produces a lazyseq of the results of calling a function (without arguments) repeatedly:
(take 5 (repeatedly rand)) ; (0.07610618695828963 0.3862058886976354 0.9787365745813027 0.6499681207528709 0.5344143491834465)
iterate takes a function and a start argument and produces a lazyseq of successive applications of the function to the previous result: x, (f x), (f (f x)), etc. Here used to generate powers of 2:

(take 5 (iterate #(* 2 %) 1))
; (1 2 4 8 16)
(take 5 (drop 10 (iterate #(* 2 %) 1)))
; (1024 2048 4096 8192 16384)
Since infinite lazyseqs are values just like any other (but at the same time can't be exhausted), it's sometimes helpful to think about them as high-level recipes for changing program states or triggers of computations. Combined with the various sequence processing functions they provide a true alternative approach to solving common programming problems.
Sequence (re)combinators
Here're some more core functions related to combining collections in different ways:
interleave recombines two sequences in an alternating manner (also lazy):

(interleave [:clojure :lisp :scheme] [2007 1958 1970])
; (:clojure 2007 :lisp 1958 :scheme 1970)
interpose inserts a separator between each element of the original seq:

(interpose "," #{"cyan" "black" "yellow" "magenta"})
; ("cyan" "," "magenta" "," "yellow" "," "black")
zipmap combines two collections into a single hashmap, where the 1st collection is used for keys and the 2nd for values. Let's have some Roman numerals:

; first the individual pieces:
; powers of 10
(take 10 (iterate #(* 10 %) 1))
; (1 10 100 1000 10000 100000 1000000 10000000 100000000 1000000000)

; apply powers to 1 & 5
(take 5 (map (fn [x] [x (* x 5)]) (iterate #(* 10 %) 1))) ; using `map`
; ([1 5] [10 50] [100 500] [1000 5000] [10000 50000])
(take 5 (mapcat (fn [x] [x (* x 5)]) (iterate #(* 10 %) 1))) ; using `mapcat`
; (1 5 10 50 100)

; altogether now...
(zipmap [:I :V :X :L :C :D :M] ; keys
        (mapcat (fn [x] [x (* x 5)]) (iterate #(* 10 %) 1))) ; values
; {:M 1000, :D 500, :C 100, :L 50, :X 10, :V 5, :I 1}
for
Since we've just discussed sequence generators, we also must briefly mention for. Unlike for loops in other languages, Clojure's for is a so-called list comprehension, just another generator of lazyseqs, though one on crack if we may say so... for combines the behavior of map with lexical binding as we know from let, and conditional processing. It returns its results as a lazyseq. Here we iterate over the seq returned by (range 4) and bind i to each value successively, then execute for's body to tell us if the current value of i is even:

(for [i (range 4)]
  {:i i :even (even? i)})
; ({:i 0, :even true} {:i 1, :even false} {:i 2, :even true} {:i 3, :even false})

(into {} (for [i (range 4)] [i (even? i)]))
; {0 true, 1 false, 2 true, 3 false}
for can also be used to create nested seqs. This happens automatically when more than one symbol is bound, e.g. here we create positions in a 4x2 grid (the first symbol defines the outer loop, the next one(s) the inner loops):

(for [y (range 2)  ; outer loop
      x (range 4)] ; inner loop
  [x y])           ; result per iteration
; ([0 0] [1 0] [2 0] [3 0] [0 1] [1 1] [2 1] [3 1])
The symbol binding part can be further customized with additional bindings to pre-compute values used in the body of for, and/or we can specify a predicate to skip an iteration (therefore also achieving filtering a la filter) or cancel iteration (using :while). The next example creates points only along the border of a 4x4 grid (center points are skipped):

(for [y (range 4)
      x (range 4)
      :let [border? (or (= 0 x) (= 3 x) (= 0 y) (= 3 y))]
      :when border?] ; skip iteration when border? is false
  [x y])
; ([0 0] [1 0] [2 0] [3 0] ; manually formatted to better visualize result...
;  [0 1]             [3 1]
;  [0 2]             [3 2]
;  [0 3] [1 3] [2 3] [3 3])
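A quick sketch of our own to show the difference: :while stops the iteration entirely as soon as its predicate fails, whereas :when merely skips elements and keeps scanning:

(for [x (range 10) :while (< x 4)] x)
; (0 1 2 3)
(for [x (range 10) :when (even? x)] x) ; compare: `:when` keeps scanning
; (0 2 4 6 8)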
every? / some
Sometimes we need to check if the values of a collection match certain criteria, e.g. to enforce a restriction. The every? function takes a validation function (predicate) and applies it to all elements of a collection. It only returns true if the predicate returns a truthy value for all of them. Here we check if all elements in a seq have a :name key (remember, keywords can be used as functions!):

(every? :name [{:name "nardove"} {:name "toxi"} {:age 88}])
; false
Or we could write our own predicate and check if all values are multiples of 3, that is, numbers for which the remainder (rem) of a division by 3 is zero:

(every? #(zero? (rem % 3)) [666 42 99 12])
; true
Alternatively, we can use some if we only want to ensure some of the values match a condition. some will return the first truthy value returned by the predicate (or nil if no items match). Again, we are using data (a set) as predicate fn:

(some #{3 6 9 12} [1 2 3 4 5 6])
; 3
...or ask if some names are longer than 4 characters:

(some #(> (count %) 4) ["mia" "nardove" "toxi"])
; true
(some #(if (> (count %) 4) %) ["mia" "nardove" "toxi"])
; "nardove"
apply
So far we have used the phrase "applies a function to x" several times. In short it simply means that a function is called with x as its argument. Though, what should we do if we have a function accepting multiple arguments, but have our arguments only in a single collection (i.e. one built with map etc.)?
To stick with some familiar constructs and add a concrete use case: our hypot function, defined earlier, computes the length of the longest side in a triangle, given the lengths of the 2 other sides. At the same time we could interpret this as the calculation of the distance of a 2d point from the origin in a Cartesian coordinate system: one side is the distance along the X-axis and the other the distance in Y.
Imagine we have a collection of 2d points and we want to measure their distance from the origin (their magnitude):
(def points [[0 100] [200 100] [-300 50]]) ; #'user/points
Now we could use map and our hypot function to compute the distance/length for each point and produce a new sequence of the results. However, hypot so far requires 2 arguments, a & b, but our points are defined as vectors of 2 elements and therefore each point is just a single value (the vector itself). For such situations, Clojure provides us with the apply function, allowing a function to accept a collection of values as individual arguments (with possibly additional ones given as well). So whereas the following will produce an error...
(hypot [200 100]) ; ArityException Wrong number of args (1) passed to: user$hypot
... using apply will unravel our vector into two individual arguments and call our function correctly:
(apply hypot [200 100]) ; 223.60679774997897
With this in place, we can now plug this into a map form and process all our points:
(map #(apply hypot %) points) ; (100.0 223.60679774997897 304.138126514911)
To complete an earlier arc of our tutorial, we could also plug this into another reduce step to give us the longest distance (using max as the reduction function):
(reduce max (map #(apply hypot %) points)) ; 304.138126514911
Destructuring
As we've just learned with apply, sometimes it is required to adapt our data to a function's specifics. But we can also achieve the opposite and adapt a function to expect a specific data structure, and do so without having to jump through hoops painstakingly pulling out individual values from a given collection. Clojure makes this very easy using destructuring.

Destructuring is a way to bind symbols to values in a collection, by replicating the overall structure of the collection and placing the symbols to be bound at the points from which we intend to get a value in the supplied data structure. A few lines of code will illustrate this much better...
Sequential destructuring
As we know a vector is just a sequence of values, each of which can be another nested data structure:
(def nested-data [10 20 [30 40] {:x 1 :y 2}]) ; some test data
To bind the first 3 items of that input vector to symbols a, b and c, a naive and inelegant solution would be to bind each symbol individually, like this:

(let [a (nested-data 0)
      b (nested-data 1)
      c (nested-data 2)]
  (prn :a a :b b :c c))
; :a 10 :b 20 :c [30 40]
Using sequential destructuring, this can be expressed much more concisely. All we need to do is tell Clojure that the symbols' values are part of a sequence, by wrapping them in a vector themselves:

(let [[a b c] nested-data]
  (prn :a a :b b :c c))
; :a 10 :b 20 :c [30 40]
Sometimes we might need values which are not successive in the collection, e.g. say we only care about the 2nd and 4th value:

(let [[_ b _ d] nested-data]
  (prn :b b :d d))
; :b 20 :d {:x 1 :y 2}
It's idiomatic to use the _ symbol to bind values we're not interested in (in this case the 1st and 3rd elements).
The third element of nested-data is another vector. To also destructure its elements, we simply need to replicate the overall structure of nested-data and indicate that this 3rd element is a sequence itself. We combine this with another destructuring option, called :as, to bind the entire 3rd element to yet another symbol, third:

(let [[_ _ [c d :as third]] nested-data]
  (prn third "contains:" c d))
; [30 40] "contains:" 30 40
When attempting to destructure sequences with more symbols than there are values, any symbols with missing values are bound to nil:

(let [[_ _ _ _ missing] nested-data]
  (prn "missing?" (nil? missing)))
; "missing?" true
Likewise, if we're only interested in the first x elements of a seq, we don't need to specify any additional symbols/placeholders. Clojure doesn't care if there're more elements in a seq than destructuring symbols. However, in addition to the initial elements we're interested in, we might still want to hold on to the rest of the collection too. This can be done with &:

(let [[a b & more] nested-data]
  (println (count more) "more elements:" more))
; 2 more elements: ([30 40] {:x 1 :y 2})
Destructuring can be used almost anywhere Clojure expects a symbol binding form, e.g. in the symbol binding part of a for form, or to specify the argument list(s) of a function.
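For instance (our own quick sketch, not from the examples above), destructuring a map entry directly in a for binding, and destructuring a point vector in a function's argument list:

; each map entry is a sequential [key value] pair
(for [[k v] {:a 1 :b 2 :c 3}]
  (str (name k) "=" v))
; ("a=1" "b=2" "c=3")

; destructuring in the argument vector of a fn
(defn point->str [[x y]]
  (str "(" x "," y ")"))

(point->str [3 4])
; "(3,4)"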
Map destructuring
Maps too can be destructured, though because the lookup of values requires keys, their destructuring form needs to refer to keys as well. Since we used [ and ] to specify sequential destructuring, it should also make sense that we use { and } for destructuring maps. In the following we destructure the 4th element of nested-data (index 3) and bind this map's :x to symbol a and :y to b:

(let [{a :x b :y} (nested-data 3)]
  (prn :a a :b b))
; :a 1 :b 2
If we wanted to use the same symbol names as the keys used in the original map, an alternative is:

(let [{:keys [x y] :as v} (nested-data 3)]
  (prn :x x :y y :v v))
; :x 1 :y 2 :v {:x 1 :y 2}
As with sequential destructuring we can use :as to also bind the entire map, and of course this can all be done recursively. You can find more examples in Jay Fields' blog post about this matter.
Destructuring and function arities
A function providing more than one implementation is called a "multi-arity" function, and many core Clojure functions are implemented like this to provide maximum flexibility. So finally, let's extend our earlier hypot function and turn it into a multi-arity fn, accepting not only two numbers, but alternatively a single seq (w/ minimum two elements):

(defn hypot
  ([[a b]] (hypot a b)) ; destructure the seq and then call itself with the 2 args
  ([a b] (Math/sqrt (+ (* a a) (* b b)))))
; #'user/hypot

(hypot [9 12]) ; no more need for `apply`
; 15.0
(= (hypot [9 12]) (hypot 9 12)) ; testing other arity...
; true
Remember to wrap each arity implementation in its own form, i.e. surround it with ( and ).
End of part 1
Congratulations!!! You made it through to here and we're truly proud of you! Even though we could only give you glimpses of The Clojure Way™ so far, we hope you're excited enough to try out more when we will be applying some of these basics to more practical & visual examples in the next part(s) of this tutorial. In the next part we will start building our first projects and introduce you to Quil, a Clojure wrapper around Processing.
In the meantime we recommend that you sharpen your Clojure Skillz by checking out some of the materials below, esp. the 4clojure puzzles are a great way of learning.
Further reading & references
- Clojure mailing list - main community discussion (~8600 members)
- clojure-doc.org - great community based collection of guides & tutorials aimed at all levels (incl. setup guides for various tools & platforms)
- clojuredocs.org - community & example based reference for core Clojure namespaces & functions (learn by example)
- Clojure cheatsheets - online & PDF versions, incl. ClojureScript
- Stackoverflow - SO questions tagged w/ Clojure
- Try Clojure - online playground REPL, no installation needed
- 4clojure - online learning resource to solve Clojure puzzles of varying difficulties
- Planet Clojure (Twitter) - Clojure blog aggregator
- O'Reilly book - IMHO currently most comprehensive & accessible book
- The Joy of Clojure - another great book, also touching more on the why & how of Clojure's philosophy
- clojure-toolbox.com - curated list of Clojure projects, grouped by topic
- clojuresphere.com - autogenerated list of Clojure projects on GitHub, incl. dependency info
- clojars.org - community repository for open source Clojure libraries (main repo for Leiningen)
- ClojureWerkz - growing collection of well maintained open source libraries (mainly DB centric projects)
Thank you.
OK! I’m ready to start lisping some generative images. Is that part 2?
Indeedy! :)
TIP: For those of you (not hardcore developers/computer scientist like me) having trouble installing Lein on Mac that don’t know where to locate the “bin” folder just use “open -a Finder /usr/local/bin” in the Terminal and then drag the script file there
You shouldn’t need to install into `/usr/local/bin` but use (or first create) a `bin` folder in your home directory, that’s what the `~` is a shorthand for in unix parlance… once you have that folder, restart Terminal and that folder should now be on your path (if it still doesn’t also see here:)
Alternatively, you can use Homebrew to install leiningen: `brew install leiningen` (will update instructions above)
Oh my! that is what I did when I first started :P
Btw. there is a bug with the latest version, which stops the REPL from working on the Mac. Downgrading to 2.1.2 will fix this.
Seems you missed to declare “age” “38” in the map example.
This is a great introduction, better than most books. Especially liked well selected and easy to grasp examples. Thanks Ricardo and Karsten!
Great detailed introduction to the language! I love to learn me some Clojure.
Excellent
Dear Sir,
I got struck in this below lesson. In this code i just did loop for adding prices by passing stock item from if condition expression but for reducing -1 from stock i can't do because i can't return total and stock at a same time. So, do i need to make new variable for stock or what??
My Code is below:
shopping_list = ["banana", "orange", "apple"]
stock = {
"banana": 6,
"apple": 0,
"orange": 32,
"pear": 15
}
prices = {
"banana": 4,
"apple": 2,
"orange": 1.5,
"pear": 3
}
# Write your code below!
def compute_bill(food):
    total = 0
    for i in food:
        if stock[i] > 0:
            total = total + prices[i]
            stock[i] - 1    # <----- is this wrong? how can we return the value of stock?
    return total
print(compute_bill(shopping_list)) | https://discuss.codecademy.com/t/error-in-lesson-stocking-out/106519 | CC-MAIN-2018-39 | refinedweb | 127 | 72.05 |
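One way to fix it (a sketch using the same data): stock is a module-level dictionary, so compute_bill can decrement it in place. Mutating the dict means there is nothing extra to return; the function still returns only the total.

```python
shopping_list = ["banana", "orange", "apple"]

stock = {"banana": 6, "apple": 0, "orange": 32, "pear": 15}
prices = {"banana": 4, "apple": 2, "orange": 1.5, "pear": 3}

def compute_bill(food):
    total = 0
    for item in food:
        if stock[item] > 0:
            total = total + prices[item]
            stock[item] = stock[item] - 1  # mutate the dict in place; no need to return it
    return total

print(compute_bill(shopping_list))  # apple is out of stock, so 4 + 1.5 = 5.5
```

The assignment `stock[item] = stock[item] - 1` (or `stock[item] -= 1`) is the missing piece: `stock[i] - 1` on its own computes a value and throws it away.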
what is the scope of a var declared in a JSP declaration using <%!
container? servlet? page? package?
thx much
-pan
scope of declarations? (1 messages)
- Posted by: Paul Hartzog
- Posted on: November 06 2000 16:51 EST
Threaded Messages (1)
- scope of declarations? by Marc Missire on November 07 2000 02:34 EST
scope of declarations?[ Go to top ]
A great question, and one that bit my team not long ago.
- Posted by: Marc Missire
- Posted on: November 07 2000 02:34 EST
- in response to Paul Hartzog
To find the answer, try this experiment:
As you know, a JSP is compiled into a Java Servlet when it is requested,
(the first time, and later if it has changed). Looking at the generated
Servlet source code is a really great way to answer questions like this.
A variable declared in a JSP declaration ends up appearing at the top
of the generated Servlet, while a variable declared in a JSP scriplet
appears in the service method. This means that if you use the declaration
syntax, you end up with one copy of the variable per Servlet instance...
under normal circumstances, this means multiple users will "share" it
(perhaps useful for hit counters).
With a scriptlet variable, each request gets its own copy (which is more
often what you want).
To illustrate this, you can do a quick experiment with the J2EE SDK 1.2.1,
free from Sun. I also tried the same thing with WebLogic 4.5.1, with the same
results, but let's use the J2EE SDK here.
Create two short JSPs in j2skdee1.2.1/public_html:
my_declaration.jsp:
----------------------
<%! int foo = 1; %>
Hello!
----------------------
and my_scriptlet.jsp:
----------------------
<% int foo = 1; %>
Hello!
----------------------
Now start the "j2ee" server, and request the two files in your browser.
In both cases, you'll just see "Hello!", but the important thing is we've
generated the Servlet. By default, the J2EE SDK keeps the generated java
source for you to examine (for WebLogic, use the "keepgenerated=true" option
in weblogic.properties).
Go to: j2sdkee1.2.1/repository/<hostname>/web
Note the two .java files. I've included them up to the point our variable,
"foo", appears.
From the one named something like
_0002fmy_0005fdeclaration_0002ejspmy_0005fdeclaration_jsp_0.java:
----------------------------------------------------------------------------------------------------
public class _0002fmy_0005fdeclaration_0002ejspmy_0005fdeclaration_jsp_0 extends HttpJspBase {
// begin [file="/my_declaration.jsp";from=(0,3);to=(0,17)]
int foo = 1;
// end
static {
}
public _0002fmy_0005fdeclaration_0002ejspmy_0005fdeclaration_jsp_0( ) {
}
private static boolean _jspx_inited = false;
public final void _jspx_init() throws JasperException {
}
public void _jspService(HttpServletRequest request, HttpServletResponse response)
throws IOException, ServletException {
----------------------------------------------------------------------------------------------------
And the one named something like
_0002fmy_0005fscriptlet_0002ejspmy_0005fscriptlet_jsp_0.java:
----------------------------------------------------------------------------------------------------
public class _0002fmy_0005fscriptlet_0002ejspmy_0005fscriptlet_jsp_0 extends HttpJspBase {
static {
}
public _0002fmy_0005fscriptlet_0002ejspmy_0005fscriptlet_jsp_0( ) {
}
public void _jspService(HttpServletRequest request, HttpServletResponse response)
throws IOException, ServletException {
// begin [file="/my_scriptlet.jsp";from=(0,2);to=(0,16)]
int foo = 1;
----------------------------------------------------------------------------------------------------
See what I mean? I am not sure why the JSP spec is so vague about this. A few JSP
books I've read mention this, but it's usually buried in the middle somewhere.
In either case, the variable would be available only in the JSP in which you placed it,
but using the declaration syntax definitely has this "interesting" behavior and
I thought I should mention it. Can anyone else comment on this?
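As an aside, the same distinction can be sketched in plain Java without a container (class and method names are invented for illustration): an instance field plays the role of a declaration variable, shared across calls on the same instance, while a method local plays the role of a scriptlet variable, fresh on every call.

```java
// Plain-Java analogy: one servlet instance serves many requests, so an
// instance field behaves like a <%! %> declaration variable (shared),
// while a method-local variable behaves like a <% %> scriptlet variable
// (re-initialized on every "request").
public class FakeServlet {
    private int shared = 0;            // like <%! int foo = 0; %>

    public int service() {
        int perRequest = 0;            // like <% int foo = 0; %>
        shared++;                      // accumulates across "requests"
        perRequest++;                  // always ends up as 1
        return perRequest;
    }

    public int getShared() {
        return shared;
    }

    public static void main(String[] args) {
        FakeServlet s = new FakeServlet();
        for (int i = 0; i < 3; i++) {
            s.service();
        }
        // three "requests" later: shared is 3, but each call's local was 1
        System.out.println("shared=" + s.getShared());
    }
}
```

With multiple threads hitting the same instance, the shared field would also need synchronization, which is another reason scriptlet-style locals are usually what you want.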
Hope this helps.
-Marc | http://www.theserverside.com/discussions/thread.tss?thread_id=1873 | CC-MAIN-2015-48 | refinedweb | 520 | 66.13 |
This sketch tests the modem on the GSM shield to see if it is working correctly. You do not need a SIM card for this example.
First, import the GSM library
#include <GSM.h>
Create an instance of the GSMModem class:
GSMModem modem;
Create a variable to hold the IMEI number of the modem
In setup, open a serial connection to the computer. After opening the connection, send a message indicating the sketch has started. Call modem.begin() to start the modem. Send a status message depending on the outcome, and end setup().
Inside loop, use modem.getIMEI() to return the IMEI number of the modem. This number is unique to your GSM shield. If there is a valid response from getIMEI(), print it to the serial monitor and reset the modem with modem.begin().
Once reset, check the IMEI again. If it is a valid return again, the modem is functioning as expected.
If, after resetting the modem, there is not a valid return from getIMEI(), report an error.
If you never received an IMEI after starting the sketch, report it, and end the program.
Once your code is uploaded, open the serial monitor. You should see the results of the modem test printed on screen as they are reported.
The complete sketch is below.
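The sketch itself did not survive in this copy of the page. The following is a reconstruction from the steps described above (the example shipped with the GSM library may differ in detail):

// Reconstructed from the description above -- not the verbatim Arduino example.
#include <GSM.h>

GSMModem modem;      // instance of the GSMModem class
String IMEI = "";    // will hold the modem's IMEI number

void setup() {
  Serial.begin(9600);
  while (!Serial);   // wait for the serial port to connect

  Serial.println("Starting modem test...");
  if (modem.begin()) {
    Serial.println("modem.begin() succeeded");
  } else {
    Serial.println("ERROR, no modem answer.");
  }
}

void loop() {
  Serial.print("Checking IMEI... ");
  IMEI = modem.getIMEI();

  if (IMEI != NULL) {
    // valid response: print it, then reset the modem and check again
    Serial.println("Modem's IMEI: " + IMEI);
    Serial.print("Resetting modem... ");
    modem.begin();
    if (modem.getIMEI() != NULL) {
      Serial.println("Modem is functioning properly.");
    } else {
      Serial.println("Error: modem did not respond after reset.");
    }
  } else {
    Serial.println("Error: could not get IMEI.");
  }
  while (true);  // end the program
}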
Last revision 2015/08/17 by SM | https://www.arduino.cc/en/Tutorial/GSMToolsTestModem | CC-MAIN-2015-40 | refinedweb | 221 | 76.32 |
John Kennedy
Microsoft Corporation
September 12, 2001
Download the Today.exe sample file.
Thankfully, I'm back to full health (at least physically), and I've been able to get mobile again. Of course, by mobile, I mean able to walk to the alehouse across the road, but I'm thankful all the same. Of course, in an ideal world, the alehouse would share the same wireless Internet access facilities as the Starbucks, and then I wouldn't have to go home all weekend, but I'm not sure my wife would share this sentiment.
This month we're going to take a good look at one of my very favorite aspects of Pocket PC development—the amazingly flexible Today screen. While the Pocket PC tries to comfort users with a familiar desktop-style Microsoft Windows® user experience, there are times when something designed specifically for a PDA is a better idea. This is certainly the reasoning behind the Today screen, which provides an at-a-glance summary of the day's tasks, e-mails, and appointments. It's fair to think of the Pocket PC's Today screen as the equivalent to a standard Windows desktop. That said, it's probably more accurate to consider the Today screen as a unique, stand-alone application—an application that we can exploit.
The Today screen is often used as the default display. When the Pocket PC is reset, or when no other programs or settings windows have been opened, the Today screen is the window at the forefront of the display. The Pocket PC can also default to the Today screen if the device has been powered off for 1 to 12 hours (a value defined in the Start/Settings/Today control panel dialog).
Figure 1. If you want the Today screen visible when you switch on your device, or when you leave it in the cradle, change this setting from Start/Settings/Today.
Look closely at a typical Today screen and you'll see that it has six different sections or components. They are: the date, the "today" banner logo, Owner Information, upcoming appointments (Calendar), unread messages (Inbox), and active tasks (Tasks).
Figure 2. Notice how the Today screen consists of different components. The date and "today" logo are actually two separate components.
You can toggle all components on and off from the Start/Settings/Today dialog. You can also alter the position of the last four, and tweak a few options in the Calendar and Tasks setting.
Figure 3. Several of the Today components have a separate options dialog. We'll be making use of this in our own components.
So, are you getting excited yet? Have you worked out what we're going to do? Yes, you've guessed it, we're going to create our own Today screen component and add it to the list.
You might be thinking, "big deal, why should I be getting excited about writing some little hack utility that lists the number of tasks I've got?" Well, I'm going to tell you why, but first, you'll have to start getting into that Gadget Love frame of mind that I tend to spout about after I've had a few pints of Guinness. Sadly, after a few pints, my accent starts to become even less intelligible than normal (remind me to tell you about the "Twelve Bottles of Heineken and the Embarrassing Press Launch incident" some time), and so here are my thoughts in a slightly more sober format. I've left in a few little drunken-type effects for realism.
Let me in! I've lost my keys again! I love you! I think I'm going to be sick! A Pocket PC isn't simply a little computer, it's a little computer that you can carry with you everywhere, and use all the time. Even the smallest, lightest, sexiest new Sony laptop can't live up to this expectation. A Pocket PC quickly becomes as personal, and as important, as a wallet.
Just like a wallet, a Pocket PC will start to collect important information, contact information, notes, and photographs. Better than a wallet, a Pocket PC will also start to collect Windows Media Format files, movies (if you have a good-sized Compact Flash card), eBooks, and even games.
The point is that everyone wants his or her Pocket PC to look and act in a particular and unique way. Imported leather cases and leopard skin, multifunction belt cases are one approach, and customised Today screens are another.
As a slight aside, it's worth looking at the rest of the Today screen before we get coding. Unlike the Palm-size PC screen (remember those?), or in fact, Big Brother desktop Windows, the Pocket PC Today screen has its own unique status/menu bar at the bottom of the screen.
This status bar is not seen in any other applications because it is replaced by a menu bar. So, while it is entirely possible to add a cute little icon to this bar, it simply won't be seen when the user is running anything other than the Today screen. In some ways this is quite a shame, as this part of the screen was a handy place to store often-used programs, especially those of a little, shall we say, hack-y nature.
If you are desperate to get an icon or something into the user's line of sight no matter what application they are running, you could potentially abuse the title bar at the top of the screen. Unless an application specifically launches itself as full-screen, the title bar can always be seen. There is no official support for adding your own applications to this part of the screen, but I could share a few secrets in a later column if you posted some nice comments (or sent some cash). Several third party programs (WISbar and Gigabar for example) completely overwrite the title bar to add new graphics and utilities. Of course, this kind of practice is entirely unsupported, but such utilities are immensely popular and shouldn't be ignored.
What form does a Today screen component actually take? It's nothing more than a DLL file, created in Microsoft eMbedded Visual C++®, copied to the Pocket PC, and referenced from the registry. The process of creating and using the component, therefore, goes a little like this:

1. Write the component as a Windows DLL using eMbedded Visual C++.
2. Copy the DLL to the device.
3. Add a registry entry that references the DLL.
4. Enable the component from Start/Settings/Today.
There is a little more to it of course, but once you've seen the basic framework of code, you'll quickly and easily get your own components up and running. Sadly, it's not possible to use Microsoft eMbedded Visual Basic® to create these components (suck it up, Larry!).
With only a few minor additions, the component behaves like a typical Win32® application. So we have a WM_Paint section of code, for example, that handles the display. We can assume that we have an application that has given us a small rectangle of screen real estate, 240 pixels wide by a program-defined number of pixels high. If you want your program to support any third party landscape mode utilities, you can't assume the 240 pixel-wide window and should query the system first.
Today components share enough of the standard Windows application features to ensure that they can respond to screen taps. This means that your components can both report information (your new task reporting application, for example), and respond to user actions (tap to display more information on the task). The component can appear like a mini-application, albeit one that is readily available to the user on the desktop.
Here's a chunk of code that highlights the important parts, and forms a useful skeleton for your own Today screen component. I've also included the entire project for you to download and use from within eMbedded Visual C++ to make it easy for you to get started.
//
// Example TODAY COMPONENT code.
//
// Simple draws a large, black, rectangular
// lump. It'll get more exciting, honest.
#include "windows.h"
#include <todaycmn.h>
#include <Aygshell.h>
const TCHAR k_szWindowClass[] = TEXT("TodayTest");
HINSTANCE g_hInst = NULL;
// The height, in pixels, of the Today component is
// picked at random. It's gotta be something!
#define MODULE_HEIGHT 42
/*************************************************************************/
/* WndProc for the window */
/*************************************************************************/
LRESULT WINAPI CustomItemWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
PAINTSTRUCT ps;
HDC hdc=NULL;
// This structure is used to store all kinds
// of internal Today component information.
TODAYLISTITEM *ptli2;
switch(msg)
{
// This happens when data changes..
case (WM_TODAYCUSTOM_CLEARCACHE):
break;
// This happens every two seconds or so.
case (WM_TODAYCUSTOM_QUERYREFRESHCACHE):
ptli2=(struct _TODAYLISTITEM *)wp;
// This is an important part,
// so pay attention!
if (0 == ptli2->cyp)
{
// Only return true once, when the
// height is being set.
ptli2->cyp = MODULE_HEIGHT;
return TRUE;
}
else
{
// Most of the time this branch will occur.
return FALSE;
}
break;
// Standard Windows Paint message
case WM_PAINT:
hdc = BeginPaint(hwnd, &ps);
// Quick and dirty paint example
BitBlt(hdc,0,0,240,MODULE_HEIGHT,NULL,0,0,BLACKNESS);
EndPaint(hwnd, &ps);
break;
default:
break;
}
return DefWindowProc(hwnd, msg, wp, lp);
}
/*************************************************************************/
/* Initialize the class */
/*************************************************************************/
void InitilizeClass(HINSTANCE hinst)
{
WNDCLASS wc;
memset(&wc, 0, sizeof(wc));
wc.style = 0;
wc.lpfnWndProc = (WNDPROC)CustomItemWndProc;
wc.hInstance = hinst;
wc.hIcon = NULL;
wc.hCursor = NULL;
wc.hbrBackground = (struct HBRUSH__*)GetStockObject(WHITE_BRUSH);
wc.lpszClassName = k_szWindowClass;
UnregisterClass(k_szWindowClass, hinst);
RegisterClass(&wc);
}
/*************************************************************************/
/* Initialize anything that is required for the DLL. */
/*************************************************************************/
BOOL WINAPI DllMain(HANDLE hDLLInst, DWORD fdwReason, LPVOID lpvReserved)
{
UNREFERENCED_PARAMETER(lpvReserved);
switch (fdwReason)
{
case DLL_PROCESS_ATTACH:
g_hInst = (struct HINSTANCE__ *)hDLLInst;
DEBUGREGISTER((HINSTANCE)hDLLInst);
InitilizeClass((HINSTANCE)hDLLInst);
break;
case DLL_PROCESS_DETACH:
UnregisterClass(k_szWindowClass, (struct HINSTANCE__ *)hDLLInst);
break;
}
return TRUE;
}
// This code is the entry to the Today component DLL:
HWND InitializeCustomItem(TODAYLISTITEM *ptli, HWND hwndParent)
{
HWND hWnd;
if (!ptli->fEnabled)
return NULL;
hWnd = CreateWindow (k_szWindowClass, k_szWindowClass, WS_VISIBLE|WS_CHILD, 0, 0, 0, MODULE_HEIGHT,
hwndParent, NULL, g_hInst, NULL);
ShowWindow (hWnd, SW_SHOWNORMAL);
return hWnd;
}
Apart from the .cpp file, you must create an extra file that exports the DLL settings. The .def file is nothing more than a text file that contains the following three lines:
EXPORTS
InitializeCustomItem @ 240 NONAME
CustomItemOptionsDlgProc @ 241 NONAME
Once you create the .def file, make sure to add it to your source files using the menu option Add Files to Folder.
With our code complete, the DLL can sit in the Windows directory forever, but nothing will happen until it is added to the registry. The Pocket PC registry contains a key called HKEY_LOCAL_MACHINE\Software\Microsoft\Today\Items\ that lists all the currently installed Today screen components. If you want your component to appear in that list on the Settings dialog, you'll need to create a key and then add the following information:
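The list of values was lost from this copy of the article. For a typical custom item the key looks roughly like this (value names recalled from memory; double-check them against the Pocket PC SDK documentation before relying on them):

[HKEY_LOCAL_MACHINE\Software\Microsoft\Today\Items\TodayTest]
"DLL"     = "\Windows\TodayTest.dll"   ; string: path to the component DLL
"Type"    = 4                          ; DWORD: 4 marks a custom item
"Enabled" = 1                          ; DWORD: 1 = show the component
"Options" = 0                          ; DWORD: 1 if an Options dialog is exported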
Figure 4. Using the development tool's registry editor, add the new key and values to tell the Pocket PC to use your new component.
The easiest way to add the details is to use the registry editor that is included with the Visual eMbedded Toolkit. You can also use a registry editor on the Pocket PC device itself (one of the best is written by Philippe Majerus and is available from) or add the new keys to the installation routine that you will eventually use to distribute your new killer app.
Once you've entered the new registry details, you can try out the component by going to Start/Settings/Today and ensuring that there is a check mark next to the component name. If the name doesn't appear, make sure you have the registry details correct and the dll is in the right location.
Figure 5. Switch on your component from the Today's settings dialog. You can alter its position in relation to the other components by tapping the Move Up/Move Down buttons. The Options... button will open a dialog that you can use to request settings from the user.
When creating Today screen components, you'll soon come across one rather tedious little feature. You'll discover that it's not possible to overwrite the component with a new version because the old one will report that it's still in use. This can make the compile-test-edit-compile-test cycle a chore.
The obvious solution is to switch off the component (from Start/Settings/Today), and then perform a warm reset. Once you re-establish the ActiveSync® connection, the component can be deleted or replaced. Of course, re-establishing the connection takes time and gets boring.
I like to create the replacement Today DLL with a slightly different name (for example, mycomponent2.dll rather than mycomponent1.dll), and then edit the registry to reflect the new component name. Close and reopen the component from the Settings dialog, and the new code is running. You can recycle the names and build up a set of three or four components on the device, but even after the registry editing, this process is a lot quicker than anything involving a reset is.
Next month, we'll use the skeleton code to create a component that provides a useful calculator feature. The calculator will have a skinable interface that will make it easy to change to suit your personal tastes.
If you like writing small applications and utilities that make your Pocket PC easier to use, then join the club—that's my favorite pastime too. Here's a useful code snippet that brings the Today screen to the front of the display order. This can be useful when assigning a hardware key (use Start/Settings/Buttons to assign the application to a key), or used from within a third-party task utility. Simply add this code
// Find the Today screen and bring it to the front.
HWND top = FindWindow(TEXT("DesktopExplorerWindow"),TEXT("Desktop"));
if (top != NULL)
SetForegroundWindow(top);
Sure signs you are getting old: the music in the pop charts is mostly random noise, your jeans are getting tighter around the old waist, and memory prices are dropping faster than drool from your chin in a Best Buy store. The current price of large capacity Compact Flash cards—that is, 128Mb and 256Mb cards—is amazing. In my day, and only a year ago, they were a lot more expensive. Buy now and keep your favorite eBooks and albums of "real music" in WMA format with you all the time.
I really like the Hewlett-Packard Jornada Pocket PC. I like the slim metal case and flip-up screen cover. What I don't like is the Compact Flash Type 1 expansion socket. Almost all other Pocket PCs have a Type 2 socket, and they have no problems with massive Microdrive memory cards, chunkier network cards, and other Type 2 only hardware.
Well, good news Jornada owners. A device called the CF Pocket that was mentioned on several Japanese Web sites has reached the Western World. The CF Pocket is a small pouch that slots into the Type 1 socket, and offers a full Type 2 socket in return. I've tested it with a 340-megabyte (MB) Microdrive card and it works perfectly.
The CF Pocket is available from several good Pocket PC accessory stockists, including atek.com and MobilePlanet.
It's getting easier and cheaper to go wireless with your local area network. PC Card (also known as PCMCIA) format IEEE 802.11b cards have been around for quite a while now, but at least two manufactures—DLink and Socket—are launching Compact Flash format versions. This means that Casio and Hewlett-Packard devices should soon be able to join in the fun.
Once you've tried a wireless LAN card on your Pocket PC, you'll never go back. If you are lucky enough to have a broadband (cable or DSL) connection, you can browse the Web at high speed from anywhere in your house, listen to live radio, and watch streaming video, not to mention having speedy downloads and responsive debugging during developing, all without any nasty cables.
Larry Roof is a partner at tonked, a firm that specializes in the development of mobile solutions and training. He's the author of Professional Visual Basic Windows CE, available from Wrox Press. | http://msdn.microsoft.com/en-us/library/ms837908.aspx | crawl-002 | refinedweb | 2,669 | 61.87 |
Set Vector Y to 0 (in World Space)
Hi,
I understand retrieving world space has been discussed before like in this thread ()
Sorry for asking the questions, but I can't seem to set the Vector Y of the points to 0 (in world space). Basically, I'm mimicking the "Set Point Value" command.
You can see an illustration of the problem here:
For the illustration file, you can check it here:
The code used is as follow:
import c4d def set_Y_vector(obj, value): oldPoints = obj.GetAllPoints() obj_mat = obj.GetMg() inv_obj_mat = ~obj_mat # Not used newLocalPoints = [c4d.Vector(p[0], p[1]*value, p[2]) for p in oldPoints] # The zero Y value in newLocalPoints is in local space. So I'm trying to convert it on global space below newWorldPoints = [p * obj_mat for p in newLocalPoints] obj.SetAllPoints(newWorldPoints) obj.Message (c4d.MSG_UPDATE) c4d.EventAdd() set_Y_vector(op, 0)
Is there a way around this?
Thank you for looking at the problem.
Without actually trying the code:
newLocalPoints is in local space. This is where you zero the y value. Which means that the point will now have a y of 0 in local space.
Then you multiply the world matrix with the new positions. BUT: This world matrix is already part of the object that contains the points. Now your points have applied the matrix twice, first through the object and then through this multiplication.
While the local y is 0, the multiplication with obj_mat will give the point a global y anyway because obj_mat may contain a translation in space. Moreover, the obj itself does, too. So, your point's y will definitely not be world 0 in the end.
What you really need to do:
Determine the world vector for the points (it's only a vector and not a matrix since a point does not contain a rotation or scale). This is done by applying obj_mat to each point vector, resulting in a new vector for each point. Why? Because your object's transformation in space (which is the meaning of obj_mat) is affecting all points in the object's local system.
For this global vector, you set the y component to 0.
Then you transform this global vector back to the object's local space by applying inv_obj_mat. Note that after this inverse transformation, the local y may no longer be 0!
Profit.
Okay, I wrote the script after all. You owe me a beer.
import c4d from c4d import gui def set_Y_vector(obj, value): oldPoints = obj.GetAllPoints() obj_mat = obj.GetMg() inv_obj_mat = ~obj_mat newWorldPoints = [p * obj_mat for p in oldPoints] newLocalPoints = [inv_obj_mat * c4d.Vector(p[0], p[1]*value, p[2]) for p in newWorldPoints] obj.SetAllPoints(newLocalPoints) obj.Message (c4d.MSG_UPDATE) c4d.EventAdd() def main(): set_Y_vector(op, 0) if __name__=='__main__': main()
Hi Bentraje, thanks for reaching out us.
With regard to your request, I second all the considerations that @Cairyn kindly provided reminding once again that points positions are always stored in local space and must be always defined in local space to unpredictable results due to transformation matrices being applied twice.
On top of this, looking at your snippet, if your intent is to locate the spline points Y-coordinate at a certain value in global space, I'd refrain from using the multiplication but rather use the passed value directly as new value.
Last but not least, I've optimized the code with one single for-loop
def set_Y_vector(obj, value): oldLocalPoints = obj.GetAllPoints() obj_mat = obj.GetMg() inv_obj_mat = ~obj_mat # Non-preallocation penalty is neglectable newLocalPoints = [] for p in oldLocalPoints: # get the world position of the current point worldpoint = obj_mat * p # set the local position on the new point by setting the y-coordinate to the desired value and transform by the inverted matrix newLocalPoint = inv_obj_mat * c4d.Vector(worldpoint[0], value, worldpoint[2] ) # just append newLocalPoints.append(newLocalPoint) # set the new points obj.SetAllPoints(newLocalPoints) # notify Cinema obj.Message (c4d.MSG_UPDATE) c4d.EventAdd()
Best, Riccardo
@Cairyn and @r_gigante
RE: BUT: This world matrix is already part of the object that contains the points. Now your points have applied the matrix twice, first through the object and then through this multiplication.
Thanks for the clarification.
Works as expected. (And I agree, I owe you a beer Cairyn hehehe).
Have a great day ahead! | https://plugincafe.maxon.net/topic/11599/set-vector-y-to-0-in-world-space | CC-MAIN-2020-10 | refinedweb | 719 | 56.55 |
dladdr, dlclose, dlerror, dlopen, dlsym, dlvsym - programming interface to dynamic linking loader
Synopsis
Description
Glibc extensions: dladdr() and dlvsym()
Notes
History
Bugs
Example
Colophon
#include <dlfcn.h>
void *dlopen(const char *filename, int flag);
char *dlerror(void);
void *dlsym(void *handle, const char *symbol);
int dlclose(void *handle);
Link with -ldl.
The four functions dlopen(), dlsym(), dlclose(), dlerror() implement the interface to the dynamic linking loader..
The function dlopen() loads the dynamic library file named by the null-terminated string filenames() decrements the reference count on the dynamic library handle handle. If the reference count drops to zero and no other loaded libraries use symbols in it, then the dynamic library is unloaded.
The function dlclose() returns 0 on success, and nonzero on error.
The using the gcc(1) -nostartfiles command-line option..
Glibc adds two functions not described by POSIX, with prototypes
#define _GNU_SOURCE /* See feature_test_macros(7) */ .
POSIX.1-2001 describes dlclose(), dlerror(), dlopen(), and dlsym()..
The dlopen interface standard comes from SunOS. That system also has dladdr(), but not dlvsym(). todays gcc(1) will generate code that just loads the final symbol address from the got (Global Offset Table) at run time before passing it to dladdr().
Load
ld(1), ldd(1), dl_iterate_phdr(3), rtld-audit(7), ld.so(8), ldconfig(8)
ld.so info pages, gcc info pages, ld info pages
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/dladdr.3.php | CC-MAIN-2017-47 | refinedweb | 252 | 57.77 |
simple events propogation
Python _ _ _ _____ _____ _ _| |_| |_ ___ _ _(_)______ _ _ / -_) V / -_) ' \ _| ' \/ _ \ '_| |_ / _ \ ' \ \___|\_/\___|_||_\__|_||_\___/_| |_/__\___/_||_|
Supported Pythons: 2.6+, 3.2+
A Python package for propogating events, without messy hooks and subclassing. Useful when you have multiple components that need to communicate and share data around.
Since this component is used for systems where they integrate with multiple other components, the most important development goal is stability. Extensibility is already built into the module, so it comes “for free”.
from eventhorizon import signal done = signal('test-done') @done.register def finalize(): return done.trigger() done.silence(finalize)
Most of this package is inspired by the Backbone.js web framework and it’s events system. You can also use Eventhorizon as a substitute for the blinker library, but Eventhorizon is lighter and provides less features.
The documentation for the library is available and hosted in Github pages.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/eventhorizon/ | CC-MAIN-2017-43 | refinedweb | 195 | 58.28 |
System Calls link(2)
NAME
link - link to a file
SYNOPSIS
#include <unistd.h>
int link(const char *existing, const char *new);
DESCRIPTION. The super-user may make
multiple links to a directory. Unless the caller is the
super-user, the file named by existing must not be a direc-
tory.
Upon successful completion, link() marks for update the
st_ctime field of the file. Also, the st_ctime and st_mtime
fields of the directory that contains the new entry are
marked for update.
RETURN VALUES
Upon successful completion, 0 is returned. Otherwise, -1 is
returned, no link is created, and errno is set to indicate
the error.
ERRORS
The link() function will fail if:
EACCES A component of either path prefix denies search
permission, or the requested link requires writing
in a directory with a mode that denies write per-
mission..
System Calls link(2)
ELOOP Too many symbolic links were encountered in
translating path.
EMLINK The maximum number of links to a file would be
exceeded.
ENAMETOOLONG
The length of the existing or new argument exceeds
PATH_MAX, or the length of a existing or new com-
ponent direc-
tory.
EPERM The file named by existing is a directory and the
effective user of the calling process is not
super-user.
EROFS The requested link requires writing in a directory
on a read-only file system.
EXDEV The link named by new and the file named by exist-
ing are on different logical devices (file sys-
tems).
SEE ALSO
symlink(2), unlink(2)
Everything2 Last change: 5 July 2000 2
Log in or registerto write something here or to contact authors.
Need help? accounthelp@everything2.com | http://everything2.com/title/link%25282%2529 | CC-MAIN-2014-42 | refinedweb | 275 | 54.83 |
Or: Taking a Picture Every 30 Seconds and Sending It To A Server.
One of the things I set out was the probe thermometer and 2 probes: one to measure the air temperature, and one to measure the internal temperature of the meat. Smoking is a low and slow method of cooking: you want to get the air temperature up to 225˚F and hold it there for hours as the meat slowly cooks and infuses with smoke. Smoking a pork shoulder (a.k.a. pulled-pork-to-be) can take 8 - 12 hours. Hence why I’m waking up at 7am.
So where does React Native play into all this?
Well, holding a temperature with a Weber kettle is a bit of a trick. And a manual one at that. There are 2 air vents you can tweak – one on top, one on the bottom. Open them up to increase the temperature, close them down to lower it. The fire takes a while to respond, though. It’s a fire, not a digital dial. So you, as the pit master, get to be a human PID controller for the day.
What I mean is: you have to keep watching the temperature, adjusting the vents, and re-checking. If you’re good at it, you don’t have to tweak it much, but I’m a newb, so I’m out there a lot.
I wanted to be able to know, without running out to the smoker every 15 minutes, whether the temperature was at 225˚F or close enough.
This is where React Native comes in.
At 9pm, after I’d laid out all the materials, I had the idea: I’ll make an app to take a picture of the thermometer every 30 seconds, and upload it to a server – and then I can just refresh a page instead of running down to the smoker!
And before you tell me – yes, I know there are remote thermometers for sale that do exactly this. And yes, I also know I could’ve just sat outside with a beer all day watching the thing, and that would’ve been fun too. But really I just wanted an excuse to play with React Native :)
Grand Plans: The System Layout
Like any good project, I started off thinking about how I wanted it to work.
I would need:
- A phone with a camera (old iPhone 4S).
- An app running on the phone to take pictures all day.
- A server to receive the pictures, running on my laptop.
- The same server to serve up the latest picture.
I decided I wanted to keep this as minimal as possible (mostly because it was 9pm and I still needed to wake up at 7). There would be little to no security. There would be no websockets notifying a React app to download the latest image. This server would simply accept images, and send back the latest upon request.
React Native
You’ve probably heard of React Native - a framework for building native mobile apps using React and JS. If you can write React apps, you can figure out React Native pretty quickly. The core concepts are the same, just props and state.
Since there's no DOM behind React Native, though, there are some differences. Mainly, the HTML elements you know and love (`div`, `span`, `img`, etc.) are replaced by React Native components (`div` == `View`, `span` == `Text`, `img` == `Image`).
Also, "real" CSS isn't supported, but RN does support styling through inline styles. Flexbox layout and most normal styles like `color` and `backgroundColor` will work. I noticed that some shorthand properties don't work, though: something like `border: 1px solid red` would instead be described explicitly, as `{ borderWidth: 1, borderColor: 'red' }`.
Expo
Expo is a tool, and a platform, for building apps with React Native.
One nice thing about using Expo is that it lets you deploy apps to your phone without signing up for an Apple Developer subscription (for us iPhone people anyway). I’ve read that you actually can get an app onto your phone without the Apple Developer subscription, but it requires messing with Xcode and that wasn’t something I wanted to tackle this evening.
The other big bonus with Expo is that it comes with the Expo SDK which gives you a bunch of native APIs out of the box – like the accelerometer, compass, location, maps, and the most important one for this project: the camera.
Install Expo on Computer and Phone
I used the Expo command line, but they also provide an IDE. If you want to follow along, install the Expo command-line tool with npm or Yarn:

```sh
npm install -g exp
```

(Yes, it's `exp`, not `expo`.)
Then you need to install the Expo app on your phone, and you can find that in the App Store / Play Store.
Create the Project
With the command line tool installed, run this command to create a new project:
```sh
exp init grillview
```
It’ll prompt for a template: choose the “blank” one.
Then follow the provided instructions to start it up:
```sh
$ cd grillview
$ exp start
```
At some point it will ask you to create an account with Expo. This is needed in order to deploy the app from your computer to Expo’s servers. Then the Expo app on your phone can load your app.
Follow the instructions to send the URL to your device, or just type it in. Expo also lets you run this in a simulator, but I thought it’d be more fun with the real phone so that’s what I did.
Once you’ve got it open on your phone, the developer experience is pretty nice. Change code, save, and the app will live reload (auto-refresh) automatically – just like developing locally with Create React App. There’s a small delay as it downloads the JS bundle each time. You can also enable hot reloading (no refresh) from Expo’s developer menu, which you can bring up if you shake your phone. Gently. Don’t throw it through a window or whatever.
File Structure
Expo sets us up with an `App.js` file in the root of the project, which exports the `App` component. Here's the entirety of the generated app:

```js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

export default class App extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <Text>Open up App.js to start working on your app!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});
```
You'll notice there's a `Text` component inside the `View`. Try leaving the "Open up App.js…" text alone, but removing the wrapping `Text` component, and see what happens.
If you peek inside `package.json` you'll see this line:

```json
"main": "node_modules/expo/AppEntry.js"
```
This is what kicks off our app, and it expects to find an `App.js` file that exports the root component.
If you wanted to reorganize the project structure, the first step would be to copy AppEntry.js into your project and modify it accordingly, but we’re gonna stick with defaults on this one.
Using the Camera
Permission Granted
To take pictures, Expo provides a
Camera component. But before we can use it, we need to ask for permission.
Open up
App.js, add a new
import for the camera and permissions objects, and change the component to look like this:
import React from 'react'; import { StyleSheet, Text, View } from 'react-native'; // add this: import { Camera, Permissions } from 'expo'; export default class App extends React.Component { // initialize state state = { cameraPermission: null }; render() { const { cameraPermission } = this.state; // Render one of 3 things depending on permissions return ( <View style={styles.container}> {cameraPermission === null ? ( <Text>Waiting for permission...</Text> ) : cameraPermission === false ? ( <Text>Permission denied</Text> ) : ( <Text>yay camera</Text> )} </View> ); } }
Now the app should render “Waiting for permission…” and just be stuck there, since we’re not doing anything yet.
We'll ask for permission in the `componentDidMount` lifecycle hook. Add that in:

```js
export default class App extends React.Component {
  ...

  componentDidMount() {
    Permissions.askAsync(Permissions.CAMERA)
      .then(({ status }) =>
        this.setState({
          cameraPermission: status === 'granted'
        })
      );
  }

  render() {
    ...
  }
}
```
When you save, and the app refreshes, you’ll see a dialog asking for camera permission. And once you allow it, the text should change.
If this is your first time using Expo, it will probably ask for permissions for Expo itself before asking about your app.
Live Camera View
Now let's replace the "yay camera" text with a component that will render the camera. Add a new component to `App.js` named `Autoshoot`. For now, it will just render the Camera, and we can make sure everything is working.

```js
class Autoshoot extends React.Component {
  render() {
    return (
      <View style={{ flex: 1, width: '100%' }}>
        <Camera
          style={{ flex: 1 }}
          type={Camera.Constants.Type.back}
          ref={cam => this.camera = cam}>
        </Camera>
      </View>
    );
  }
}
```
We're putting the Camera inside a View, giving both `flex: 1` so they take up the entire height, and the `width: '100%'` so the View takes the entire screen (without the width set, you'll see a blank screen: try it!).

We're using the "better" camera (on iPhone anyway – the `back` one, as opposed to the `front` selfie one).

And we're saving a `ref` to this camera component, because that's how we'll trigger the shutter in the next section.
Now that this component exists, go back to the render method of `App` and replace the "yay camera" element with this `Autoshoot` component:

```js
render() {
  const { cameraPermission } = this.state;

  // Render one of 3 things depending on permissions
  return (
    <View style={styles.container}>
      {cameraPermission === null ? (
        <Text>Waiting for permission...</Text>
      ) : cameraPermission === false ? (
        <Text>Permission denied</Text>
      ) : (
        <Autoshoot/>
      )}
    </View>
  );
}
```
Finally: Taking a Picture
To trigger the shutter, we'll put a "button" of sorts inside the Camera component. Unfortunately `Camera` doesn't support the `onPress` prop (the one that gets triggered when you tap it), so we'll import `TouchableOpacity` and render one of those inside.

At the top, import it:

```js
import { StyleSheet, Text, View, TouchableOpacity } from 'react-native';
```

And in Autoshoot's `render`, insert the component as a child of Camera:

```js
render() {
  const { photo } = this.state;

  return (
    <Camera
      style={{ flex: 1 }}
      type={Camera.Constants.Type.back}
      ref={cam => this.camera = cam}>
      <TouchableOpacity
        style={{ flex: 1 }}
        onPress={this.takePicture}/>
    </Camera>
  );
}
```
Then we need a `takePicture` method, which we can insert above `render`:

```js
takePicture = () => {
  this.camera.takePictureAsync({
    quality: 0.1,
    base64: true,
    exif: false
  }).then(photo => {
    this.setState({ photo });
  })
}
```
At this point, the app will behave the same: when you tap the screen, the app will still display the camera (and hopefully no errors).
Next, we need to initialize the state of `photo` at the top:

```js
class Autoshoot extends React.Component {
  state = {
    photo: null
  }

  ...
}
```
Then inside `render`, we'll either render the photo (if there is one) or the camera:

```js
render() {
  const { photo } = this.state;

  return (
    <View style={{ flex: 1, width: '100%' }}>
      {photo ? (
        <ImageBackground
          style={{ flex: 1 }}
          source={{ uri: photo.uri }}
        />
      ) : (
        <Camera
          style={{ flex: 1 }}
          onPress={this.takePicture}
          type={Camera.Constants.Type.back}
          ref={cam => this.camera = cam}>
          <TouchableOpacity
            style={{ flex: 1 }}
            onPress={this.takePicture}/>
        </Camera>
      )}
    </View>
  );
}
```
We're using the `ImageBackground` component for the first time here too, so make sure to import that at the top from 'react-native':

```js
import { StyleSheet, Text, View, TouchableOpacity, ImageBackground } from 'react-native';
```
There we go! Now you can tap the screen to take a picture, and it will stay up on the screen.
Here’s a quick exercise for you:
Make it so that when you tap the captured photo, the app goes back to displaying the Camera. Hint: `ImageBackground` doesn't support `onPress`, so you'll need to use the same trick we used with the `TouchableOpacity`.
Taking Photos On a Timer
We’ve got the code in place to take a picture manually – now let’s automate it.
We can do this by essentially calling `takePicture` on an interval. But there's a small problem: the camera needs a bit of time to focus before it takes the shot. So what we really need is something like this:
- Activate camera (screen shows live camera)
- Let it focus for 3 seconds
- Take a picture (screen shows still image)
- Wait 27 seconds
- GOTO 1
And once we get that working, we’ll insert a step “3a”: send the picture to the server. (which doesn’t exist yet, but we’ll get to that in a bit)
When `Autoshoot` initially renders, we'll start a 30-second timer. Let's create constants for the photo interval and the focus time, because we'll need them in a few places.

```js
const PHOTO_INTERVAL = 30000;
const FOCUS_TIME = 3000;

class Autoshoot extends React.Component {
  componentDidMount() {
    this.countdown = setTimeout(
      this.takePicture,
      PHOTO_INTERVAL
    );
  }

  componentWillUnmount() {
    clearTimeout(this.countdown);
  }

  ...
}
```
And for testing purposes, just change the timeout to 2 seconds so we’re not waiting around all day.
When the app reloads (which you can trigger manually by shaking your device, and choosing “Reload JS Bundle”), a photo will be taken automatically. Awesome.
Start Another Timer
Now that we’re taking a photo automatically, we just need a couple more timers to have it take photos all day long.
There are a few ways to write this: we could do it with two stacked timers (one for 27 seconds, which then triggers one for 3 seconds), or we could do it with 2 simultaneous timers, or we could do it with `setState` callbacks.

The latter option is probably the most precise (and avoids potential race conditions), but we'll go with the easy option: 2 simultaneous timers. With the triggers this far apart, a race condition/overlapping timers is pretty unlikely.
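Out of curiosity, here's what the "stacked timers" idea can look like sketched as plain JavaScript. The `makePhotoLoop` name and the injected `schedule` parameter are my inventions, not code from the app — injecting the scheduler (in the app it would just be `setTimeout`) makes the sequencing easy to check without a camera:

```javascript
// Sketch only: the "one timer triggers the next" alternative.
// takePhoto and showCamera stand in for the component's real work.
function makePhotoLoop({ takePhoto, showCamera, schedule,
                         interval = 30000, focusTime = 3000 }) {
  function shoot() {
    takePhoto();
    // keep the photo on screen until it's time to focus again
    schedule(() => {
      showCamera();
      // give the camera focusTime to focus, then take the next shot
      schedule(shoot, focusTime);
    }, interval - focusTime);
  }
  return shoot;
}
```

Because each step schedules the next, the two delays can never drift past each other — that's exactly the race the parallel-timer version shrugs off. Still, the parallel timers are simpler, and fine at this scale.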
To make it work, replace `takePicture` with this implementation:

```js
takePicture = () => {
  this.camera.takePictureAsync({
    quality: 0.1,
    base64: true,
    exif: false
  }).then(photo => {
    this.setState({ photo });

    // In 27 seconds, turn the camera back on
    setTimeout(() => {
      this.setState({ photo: null });
    }, PHOTO_INTERVAL - FOCUS_TIME);

    // In 30 seconds, take the next picture
    setTimeout(this.takePicture, PHOTO_INTERVAL);
  });
}
```
Now when the app refreshes, it will take pictures for infinity. (or until your battery runs out)
The Express Server
We have the React Native app taking pictures now. Let’s work on building a server to send them to.
We're going to use Express to write a barebones server to handle two routes:

- `POST /`: Upload a new photo
- `GET /`: View the latest photo
For this most simple of servers, we're just gonna create a `server.js` file in the root of our `grillview` project. React Native and Express, side by side. (Is this a recommended way to create Real Projects™? Nah, but this whole thing is a bit of a hack, so.)
We’ll need a couple packages to make this work, so install those now:
```sh
yarn add express body-parser
```
Then we can start with a barebones Express server. Create the `server.js` file and paste this in:

```js
const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// If your phone has a modern camera (unlike my iPhone 4S)
// you might wanna make this bigger.
app.use(bodyParser.json({ limit: '10mb' }));

// TODO: handle requests

const port = process.env.PORT || 5005;
app.listen(port);
console.log(`Grill server listening on ${port}`);
```
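An aside on that `limit: '10mb'` line: base64 encodes every 3 raw bytes as 4 characters, so the JSON body is roughly a third bigger than the photo itself. A quick sanity check in plain Node (nothing here is from the app):

```javascript
// base64 turns every 3 raw bytes into 4 characters (with padding),
// so the body is about 4/3 the size of the photo on disk.
function base64Length(rawBytes) {
  return Math.ceil(rawBytes / 3) * 4;
}

// check the estimate against Node's real encoder, using 1000 arbitrary bytes
const raw = Buffer.alloc(1000, 7);
const encoded = raw.toString('base64');

console.log(encoded.length);     // 1336
console.log(base64Length(1000)); // 1336
```

So a 5 MB photo arrives as a roughly 6.7 MB JSON body — hence the bumped limit.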
This won't handle requests yet, but it will run. We have `bodyParser.json` in place to handle the POST'ed images. Now let's add the POST request handler in place of the TODO:

```js
// Store the single image in memory.
let latestPhoto = null;

// Upload the latest photo for this session
app.post('/', (req, res) => {
  // Very light error handling
  if(!req.body) return res.sendStatus(400);

  console.log('got photo');

  // Update the image and respond happily
  latestPhoto = req.body.image;
  res.sendStatus(200);
});
```
This just accepts the image from the client and saves it in a local variable, to be returned later.
Quick warning: this is doing nothing about security. We’re blindly saving something from the client and will parrot it back, which is a recipe for disaster in a deployed app. But since I’m only running it on my local network, I’m not too worried. For a real app, do some validation of the image before saving it.
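If you do want that validation, a minimal sketch might look like this (my addition, not part of the article's server — it just checks that the decoded bytes start with JPEG or PNG magic numbers):

```javascript
// Sketch: reject anything whose decoded bytes don't look like an image.
// Note that Buffer.from is lenient about malformed base64, so the real
// check here is the magic bytes, not the decode.
function looksLikeImage(base64) {
  const buf = Buffer.from(String(base64 || ''), 'base64');
  const isJpeg = buf.length > 3 &&
    buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff;
  const isPng = buf.length > 8 &&
    buf[0] === 0x89 && buf[1] === 0x50 &&
    buf[2] === 0x4e && buf[3] === 0x47;
  return isJpeg || isPng;
}
```

The POST handler could then `return res.sendStatus(400)` whenever `looksLikeImage(req.body.image)` is false.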
Underneath that, we'll add the GET handler that will send back the latest image:

```js
// View latest image
app.get('/', (req, res) => {
  // Does this session have an image yet?
  if(!latestPhoto) {
    return res.status(404).send("Nothing here yet");
  }

  console.log('sending photo');

  try {
    // Send the image
    var img = Buffer.from(latestPhoto, 'base64');
    res.writeHead(200, {
      'Content-Type': 'image/png',
      'Content-Length': img.length
    });
    res.end(img);
  } catch(e) {
    // Log the error and stay alive
    console.log(e);
    return res.sendStatus(500);
  }
});
```
We’re creating a buffer to convert the base64 image to binary, and then sending it to the client.
And just to reiterate: this is not a secure setup. We’re assuming that the client sent us a good base64 image, but Rule 1 is “Don’t trust the client” – we should be validating the image before storing it.
That's all we need for the server! Start it up:

```sh
node server.js
```

Then visit http://localhost:5005 – you should see the message "Nothing here yet". Leave the server running in a separate command line terminal, and we'll go work on sending images to the server.
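Refreshing the browser by hand works, but if you'd rather have the page refresh itself, here's one possible addition. The `/watch` route and the `viewerPage` helper are my invention, not part of the article's server — it's just a tiny HTML string that re-requests the image every 30 seconds:

```javascript
// Builds a minimal self-refreshing viewer page. The query string changes
// on every tick so the browser can't serve a cached copy of the image.
function viewerPage(intervalMs) {
  return `<img id="grill" src="/" style="max-width: 100%">
<script>
  setInterval(function () {
    document.getElementById('grill').src = '/?' + Date.now();
  }, ${intervalMs});
</script>`;
}

// In server.js it could be wired up like:
//   app.get('/watch', (req, res) => res.send(viewerPage(30000)));
```

The GET handler ignores the query string anyway, so the cache-busting `?timestamp` costs nothing on the server side.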
Uploading the Pictures
Back in `App.js` and the `Autoshoot` component, we need to add a method for uploading the picture. In a larger app we might pull the API methods into a separate file and export them as individual functions – but since we only have the single call to make, we'll put it in `Autoshoot`. Add this method:

```js
uploadPicture = () => {
  return fetch(SERVER_URL, {
    body: JSON.stringify({
      image: this.state.photo.base64
    }),
    headers: {
      'content-type': 'application/json'
    },
    method: 'POST'
  })
  .then(response => response.json())
}
```
Here we're using `fetch` (which is built into React Native) to POST the data to the server. Notice the `SERVER_URL` variable, which we haven't created yet. Since this will only be working on our local network, we can hard-code it above `Autoshoot`:

```js
const SERVER_URL = 'http://<your-ip>:5005/'
```
Replace `<your-ip>` with your own dev machine's IP address. If you don't know where to find that, Google is your friend :)
Now we'll change `takePicture` to call `uploadPicture`, and as part of that change, we'll pull the timer code out into a separate method, because we want to call it from 2 places:

```js
// Here's the timer code, lifted from takePicture:
queuePhoto = () => {
  // In 27 seconds, turn the camera back on
  setTimeout(() => {
    this.setState({ photo: null });
  }, PHOTO_INTERVAL - FOCUS_TIME);

  // In 30 seconds, take the next picture
  setTimeout(this.takePicture, PHOTO_INTERVAL);
}

// Take the picture, upload it, and
// then queue up the next one
takePicture = () => {
  this.camera.takePictureAsync({
    quality: 0.1,
    base64: true,
    exif: false
  }).then(photo => {
    this.setState({ photo }, () => {
      this.uploadPicture()
        .then(this.queuePhoto)
        .catch(this.queuePhoto);
    });
  });
}
```
Notice that I’m calling
queuePhoto in both the
.then and
.catch handlers.
I wanted the app to keep on chugging away even if I restarted the server (which will cause failed requests), so I just made it ignore errors entirely.
During development it was helpful to add a console log in there to see why things were failing (syntax errors, etc), but I took it out once everything was working.
Time to cook some pulled pork!
With those last changes in place, the app is working!
I was excited to try it out. The next morning, I set up the thermometer and the phone. Started up the app, aaand… hmm, there’s no good place to put the phone.
I could’ve just put the phone and the thermometer on the ground. That’s what I should’ve done. What a reasonable person would do.
7am Dave did not do that. He grabbed an old board, cut 2 pieces of scrap wood, and fashioned it together into a little shelf leaned against the house.
“Carpentry.” It has pocket screws. Why? I have no idea.
As for the app?
It performed admirably. Mostly. It only crashed a few times.
It turned out to be pretty useful, and saved me a bunch of running up and down the stairs to check the temperature. A+++ would build again.
And the pulled pork was delicious.
Takeaways
I think it’s important to work some fun into programming projects. Give yourself permission to build something that already exists, if only to learn how to build it yourself. It doesn’t have to be a big serious project, or a perfect portfolio piece.
And on that note, don’t be afraid to hack things together. It’s a fun project! Write some terrible code that you know is terrible. Don’t stress so much about perfect abstractions and Best Practices and feeling like you have to incorporate every new library and tool. It’ll be fine. You can always refactor it when you write the blog post ;)
Recipes, Tools, Code…
You can get the full code for this project on Github.
I followed Amazing Ribs’s Perfect Pulled Pork recipe.
I used a Weber 22” Grill with a Slow n’ Sear (evidently discontinued, but I see there’s a v2 which looks similar).
The thermometer is a ThermoWorks DOT.
(no affiliate links, just good products) | https://daveceddia.com/perfect-pulled-pork-react-native-expo-express/ | CC-MAIN-2019-35 | refinedweb | 3,508 | 65.93 |
j2me
j2me how to compile and run j2me program at command prompt
j2me
j2me COmmand c=new Command("Exit",Command.EXIT,100);Please expalin abt 3 parameters
J2ME Command Class
J2ME Command Class
In the given J2ME Command Class example, we have set the various..., 3
etc. In J2ME commands are used for keeping information of the commands
Compiling package in command line
Compiling package in command line Hi friends,
i am totally new to java programming, i am basic learner in java.
My query is, How to compile java package using command line.
For Eg: When i compile following command,
c:>set
J2ME question
the user chooses "new user". IM stuck at if command part
J2ME Tutorials...J2ME question Lets say i have 2 screens. One for new user, another for existing user. Currently, the midlet contains radio boxes that allows users
j2me - MobileApplications
j2me i am trying to load one image in j2me program..but get... display;
private Form form;
private Command exit;
private Image image;
private...(this);
exit = new Command("Exit", Command.EXIT, 1);
form = new Form("Immutable Image
MySQL Average Command
MySQL Average Command
This example illustrates how to execute the Average command in MySQL.
In this example we create a select query to find the average of 'lastAccess'
field.
Query
| J2ME
Crlf |
J2ME Command Class |
J2ME Record
Store | J2ME Form...
Map | Business Software
Services India
J2ME Tutorial Section
Java
Platform Micro Edition |
MIDlet Lifecycle J2ME
|
jad and properties file
chgrp command in java - Java Beginners
chgrp command in java I used chgrp and chown two commands in java to change files properties like
Runtime rt = Runtime.getRuntime();
String... your query please send detail source code because your posted query is short so
Command Midlet Example
J2ME Button MIDlet
This example illustrates how to create command button in your form. Command
class build to bind only the semantic information of the command
Modify Data Type with ALTER Command
this?If this is a valid query, then what will happen to the data 'Smith'?
Use the following query:
ALTER TABLE Emp MODIFY EmpName int(255
Mysql Alter Command
Mysql Alter Command
Mysql Alter Command define the list of Command used for removing... with Example
The Tutorial illustrate an example from 'Mysql Alter Command
Mysql Like Command
to view data from Table named employee1 using like command:
The Query below... Mysql Like Command
Mysql Like Command is used for retrieving the records from
Mysql Like Command
data from Table named employee1 using like command:
The Query below is used...
Mysql Like Command
Mysql Like Command is used for retrieving the records from a table
j2me
j2me i need more points about j2me handing multiple pages
J2ME handing multiple pages I have 1 midlet and 1 form. How do i make my display of the midlet when users pressed the back command from the form... display;
private TextField username, pwd;
private Command login;
private Ticker
help on mySQL 5 command Line - SQL
help on mySQL 5 command Line Dear Sir,
Sorry for my mistake... |
+----+------+-------------------------?Ebr />
I want to use the command SELECT WORD, MIN(num... |
+------+----------------------+
What command for MIN should i used?
Thanks in advanced solution - MobileApplications
j2me solution Hi friends,
In one of my mobile application i am retrieving latitude .longitude and altitude values in the emulator and storing... points and then insert into database.
Check your sql query, I think application not working on n79
J2ME application not working on n79 Hi, i had developed assignment... MIDlet
implements CommandListener
{
private static final Command CMD_EXIT = new Command("Exit", 7, 1);
private boolean firstTime = true;
private Date
J2ME application not working on n79
J2ME application not working on n79 Hi,
i had developed...
{
private static final Command CMD_EXIT = new Command("Exit", 7, 1);
private boolean...;
Calendar rightNow = Calendar.getInstance();
private final Command Java Editor plugin
J2ME Java Editor plugin
Extends Eclipse Java Editor support ing J2ME...:
* Autocompletion of J2ME Polish directives.
* Autocompletion of J2ME Polish variables
error in compiling j2me apllication - Applet
error in compiling j2me apllication hi,
in my j2me application... videoDisplayingForm=null;
Command stop=null;
Command exit=null;
ErrorPage errorPage=null...()-------------------*/
public void commandAction(Command c
database related query
database related query i have created database in phpmyadmin of wampserver ! but i can't see it in mysql command line ! Can anybody please help me
J2ME Display Size Example
J2ME Display Size Example
In the given J2ME Midlet example, we are going to display... on the
screen and the screen size will be displayed at the command prompt (image 2
J2ME Read File
J2ME Read File
In this J2ME application, we are going to read the specified file.
This example... of this file by the help of j2me midlet.
Compute command
Compute command hello,
What is compute command?
hii, samar
Compute command control computations on subsets created by the BREAK command
Java Query - Java Beginners
Java Query Q. Write a program to display on command prompt as if the user enters 123,it should be display in word like "One Hundred and Twenty Three"? Hi Friend,
Try the following code:
import java.util.
Print command
Print command Can I use System.out.println command in Struts form bean or Struts action class. I am using Struts 1.3.8 but when I write this command...;You can use System.out.println command in Strut Action Class. The message written
J2ME Current Date And Time
J2ME Current Date And Time
This is a simple J2ME form example, that is going to show the current date
and time on the screen. Like core Java J2ME too use the same
query
Query
J2ME Servlet Example
J2ME Servlet Example
... file on the command window with the specific path.
Run the tomcat server from... steps.
For Details follow this link: J2ME Cookies Example
List in J2ME
List in J2ME
J2ME Canvas List Example will explain you, how to create list of items...;show = new Command("Show", Command.OK, 1);
how to connect j2me program with mysql using servlet?
how to connect j2me program with mysql using servlet? my program of j2me
import java.io.*;
import java.util.*;
import javax.microedition.midlet....";
private Display display;
private Command exit = new Command("EXIT
JDBC connection and SQL Query - JDBC
each time. Now I'm trying to execute a query to insert those values into an oracle... variables. I'm trying to execute following command. Though I use executeQuery or udate Query it is not accepting as the format for them is executeQuery(String
J2ME Form Class
J2ME Form Class
In this J2ME Extends Form example, we are going to discuss
about form... in
J2ME.
The application will look like as follow...
In this image, "My
J2ME Draw Triangle
J2ME Draw Triangle
As you already aware of the canvas class and it's use in J2ME... will look like as follow...
Source code to draw a triangle in J2ME
file name
Appending Image into the J2ME Form
Appending Image into the J2ME Form
... image into the J2ME
Form. The syntax of adding image is given below..
public int...){}
exit = new Command("Exit", Command.BACK
J2ME Crlf Example
J2ME Crlf Example
The given J2ME Midlet, discuss about how to show the messages... by selecting print
command. The message will be displayed as follow..
J2ME Cookies Example
J2ME Cookies Example
...;Display.getDisplay(this);
exit = new Command("...;Command("Logon", Command.SCREEN, 2);
form
J2ME Event Handling Example
J2ME Event Handling Example
In J2ME programming language, Event Handling are used to handle certain... screen.
As you know in J2ME there are two MIDP user interface APIs and therefore
MySQL Ascending Command
MySQL Ascending Command
... with Example
The Tutorial illustrate an example from 'MySQL Ascending Command... the records in the specific
columns of a table.
Query
MySQL Ascending Command
MySQL Ascending Command
This example illustrates how to display the data in the ascending order.
In this example, the below table shows the data... in the ascending order.
Query
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/90874 | CC-MAIN-2015-22 | refinedweb | 1,335 | 56.35 |
Branch from Preprocessor
Last updated on
I see alot of code thats written like this
/* branch.c */ #include <stdio.h> /* #define SOMETHING */ int main() { #ifdef SOMETHING printf("Something\n"); #else printf("No Something\n"); #endif return 0; };
Usually this is done to hide some extra feature, debug code or platform code. It can be nicer to just branch off the preprocessor instead.
That way the compiler will always evaluate both paths so if a particular branch goes stale you’ll know about it sooner.
/* branch.c */ #include <stdio.h> /* toggle between 1 or zero */ #ifndef SOMETHING #define SOMETHING 0 #endif int main() { if(SOMETHING) { printf("Something\n"); } else { printf("No Something\n"); } return 0; };
Compile with something,
gcc branch.c -DSOMETHING=1 && ./a.out
outputs …
Something
Compile without something,
gcc branch.c && ./a.out
outputs …
No Something
There is no performance impact, the compiler is smart enough to notice the branch is constant and throw away the other branch.
Two things to consider however
- Its not allways possible as sometimes are trying to hide platform differences.
- MSVS will generate a warning by default. | https://blog.cooperking.net/posts/2018-07-31-branch_from_preprocessor/ | CC-MAIN-2021-39 | refinedweb | 183 | 64.81 |
The most practical tutorials on Sikuli GUI automation testing tool:
In part-1 of this “introduction to Sikuli tutorial series”, we have discussed about Sikuli, how it works, and how to create a simple Sikuli project.
In this 2nd part, you are going to learn some advanced concepts like – how to create Sikuli maven project and how Sikuli can be used with Selenium WebDriver to automate webpages.
This part is essential because.
What You Will Learn:
- What is covered in this Sikuli Tutorial#2:
- Installing Eclipse Maven Plugin
- Installing Apache Maven
- Install Sikuli Script Jar in Maven Repository
- Creating Sikuli Maven Project
- Sikuli Example Program: Open a file in Widows Explorer
- Executing Sikuli Maven Project from Command line
- Selenium Vs Sikuli
- Integrating Sikuli With Selenium WebDriver
- Conclusion
What is covered in this Sikuli Tutorial#2:
- Installing Eclipse Maven Plugin
- Installing Apache Maven
- Install Sikuli Script Jar in Maven Repository
- Creating Sikuli Maven Project
- Example program: Open a file in Widows Explorer
- Executing Sikuli Maven Project from Command line
- Selenium vs Sikuli
- Integrating Sikuli With Selenium WebDriver
Installing Eclipse Maven Plugin
Step #1:
Open Eclipse, Go to Help -> Install a new Software. Click on “Add” button and add the following URL.
Step #2:
Check All the check boxes listed, click “Next” and install the maven plugin.
(Click on image to enlarge)
Installing Apache Maven
Step #1:
Download latest version of maven from here.
Step #2:
Extract the downloaded zip file and put it under somewhere in your machine.
Copy the bin folder path of Maven, and append the path in the environment variable.
(It requires JAVA_HOME variable in the environment variable. Please set JAVA_HOME variable in your environment)
Step #3:
Check whether maven installed correctly, Open command prompt and type “mvn -version”. It should return something like this,
(Click on image to enlarge)
It indicates Maven successfully installed in your machine.
Install Sikuli Script Jar in Maven Repository
As I mentioned in part -1, we’ve already got sikuli-script.jar, next we need to install sikuli-script.jar in maven repository.
By using the following command we can install sikuli-script.jar in maven repository.
Mvn install: install-file -Dfile=D:\Jars\Sikuli-r930\win32\Sikuli-IDE\sikuli-script.jar -DgroupId=com.test.sikuli -DartifactId=sikuli -Dversion-1.0.1 -Dpackaging=jar
(Click on image to enlarge)
Creating Sikuli Maven Project
Step #1:
Open Eclipse and create new Maven Project.
Step #2:
Add the following dependencies in your POM file.
<dependency> <groupId>com.test.sikuli</groupId> <artifactId>sikuli</artifactId> <version>1.0.1</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> </dependency>
(Click on image to enlarge)
Step #3:
Create a package inside src/test/java and Create a class inside the package. Now you can start writing the Sikuli script inside this class.
Sikuli Example Program: Open a file in Widows Explorer
Step #1:
Create a Sikuli Maven Project, as explained above.
Step #2:
Take screenshot of required elements and put it inside the Maven project.
– file.png
Step #3:
Create a class with name “Test1”, and Paste the following code inside the sikuli class.
package com.test; import org.junit.Test; import org.sikuli.script.FindFailed; import org.sikuli.script.Screen; public class Test1 { @Test public void openFileTest() throws FindFailed, InterruptedException { // TODO Auto-generated method stub Screen s=new Screen(); s.find("file.png"); s.doubleClick("file.png"); System.out.println("File icon clicked"); } }
Executing Sikuli Maven Project from Command line
Step #1:
Open Command Prompt and cd to the project directory.
Step #2:
Execute the above project from command prompt using the following command.
mvn clean test -Dtest=Test1
Selenium Vs Sikuli
Integrating Sikuli With Selenium WebDriver
Step #1:
Create a new Java Project in eclipse by clicking New -> Java project.
Step #2:
- Right click on the Project Go to Build Path -> Configure Build Path.
- Switch to Libraries Tab.
- Click on “Add External Jars” and Add Selenium library jars as well as Sikuli-scritp.jar
Step #3:
Create a package inside src/ folder and create a class under that package.
Step #4:
Take All required screenshot of web elements and save inside the project.
Step #5:
Copy the following code inside that class.
package com.test; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import org.openqa.selenium.support.ui.WebDriverWait; import org.sikuli.script.FindFailed; import org.sikuli.script.Screen; public class OnlinePainting { public static void main(String[] args) throws FindFailed { // TODO Auto-generated method stub WebDriver driver=new FirefoxDriver(); WebDriverWait wait=new WebDriverWait(driver,20); driver.manage().window().maximize(); driver.get(""); Screen screen=new Screen(); screen.wait("1398665726055.png", 20); screen.click("1398666382715.png"); screen.click("1398666248846.png"); screen.click("1398666729252.png"); screen.click("1398666188894.png"); screen.click("1398665763634.png"); screen.click("1398666592027.png"); screen.click("1398666610951.png"); screen.click("1398666308624.png"); screen.click("1398666326406.png"); screen.click("1398666570749.png"); screen.click("1398666703708.png"); screen.click("1398666382715.png"); screen.click("1398666857321.png"); screen.waitVanish("1398665763634.png"); } }
Step #6:
Right click on the project, Select RunAs -> Java Application.
Before Execution:
Conclusion
- Sikuli scripts can be easily integrated with selenium WebDriver to automate flash websites.
- Sikuli can automate windows as well as all other applications.
- As it uses Visual match, we can automate almost anything, we see on the screen.
- It provides extensive support to Flash objects. i.e. we can automate adobe flash player components. (Audio player, video player)
- Sikuli scripts can be created as maven project and can be run from command prompt.
- Hence, Sikuli is most friendly, automation tool to automate challenging flash/windows applications.
With this, we are concluding our Sikuli tutorial series. Feel free to post your queries in comments.
23 comments ↓
Thanks for posting 2nd part of Sikuli GUI automation tool and explaining how to use it with Selenium.
we at our team using this tool for more than a year now. it’s good except few limitations.
Awesome man , hope you would become father of tetsing
Our testers are using this at a daily basis. It really is a greate test tool! The only problem we run up is to let Jenkins kick start the Sikuli test suite. Any one knows a solution for this?
Thanks for sharing your experiences. Can we automate captcha with Sikuli..?
@micky – yes .. we can automate captcha. But you should not reload the page. Otherwise captcha image will be changed.
Hi STH team
Thanks for sharing such nice posts. These are really helpful in my testing career as I leaned lot of things from this blog. plz share selenium articles as well.
You can start a website by using app.open() to open the browser you want to test and type in the URL. It’s important to remember that Sikuli supports typing in keys. So you can do a lot of stuff with shortcut keys. As for using sikuli and Jenkins. Try this site.
We use Sikuli extensively to automate our mobile websites via iOS Simulator and GenyMotion Android emulators. It is absolutely a fantastic tool, an automated and speedy black box tester.
It is an unusually powerful tool when scripted in it’s native Jython – but just the API with Java alone makes it so clunky and complicated just to click something from what I see here.
For instance, the ‘find’ operation is inherent and implied with ‘click’ in Jython, so you just click something. You don’t import several things, explicitly find the image, then explicitly click the image, etc.
How to identify an correct image when the images are identical. Example: We have the search icon more than once in a web page.
Please help me.
How to handle the IE10 Download popup using Sikuli?
Well explained really helpful article to learn and install sikuli for testing purpose.
If intereseted, now you can run sikuli directly on non-rooted Android devices.
unable to use images when i exported the jar file but in eclipse it is working fine.
Thanks for such a useful information.
Our testers are using this at a daily basis. It really is a useful testing tool.
Regards,
Kvsc
Can i get a tutorial to automate testing for a installation wizard.(i mean a wizard which shows offers to users)
what is dependcy for sikuli to be used in pom.xml file so that it can be used by all the member in the team?
Hi,
Thank you very much for tutorial. I have one question, can we use Siluli for Kibana kind of web sites? Where we need to deal with more graphs and its validations. Please provide some insight.
Can you share the images that you screenshot in above scripts?
Can we use sikuli without triggering mouse event, because, each time I need to wait for the action to complete before I take the control of the mouse. If I have 10 test cases, and each test case, if I use sikuli, then I need to wait until the actions are completed. Anyother way to handle this.
Thanks
Geetha
Hi sir ,Thanks for the tutorial . Can I click on wordpad and enter some text in it .If so Please provide me the codes please.
Hi Vijay ,
Please write a complete series of tutorials about using Sikuli with Katalon Studio. I’ve one question “How can we select a checkbox out of multiple checkboxes” using Sikuli.
Regards,
Sher Hassan
Please mail (subbrao.spi@gmail.com) me Sikuli Doc either in pdf or word doc | http://www.softwaretestinghelp.com/sikuli-tutorial-part-2/ | CC-MAIN-2017-34 | refinedweb | 1,575 | 58.79 |
New version of AnotherClojureBox (ACB)
2009-07-31 23:12:08 GMT
Hi I just release a new version of AnotherClojureBox!! What's new: - First, ACB Launcher!! Many people ask me for what file should be executed in order to edit or run REPL, so I did a launcher that wiil be always there. - Improved inline form documentation (now include clojure.contrib namespaces) - You can rebuild clojure API or clojure keywords in order to update Scite to a new (may be alpha) version. You can also include others libraries documentation to the API or the keywords list. - Regex syntax #"" breaks the lexer (Fixed). - ACB Launcher & Config. - Select forms in the editor and send to execute in the REPL (Ctrl-E from scite). Again, give it a try and let me know what do you think about it! You can download in my homepage Enjoy!(Continue reading) | http://blog.gmane.org/gmane.comp.java.clojure.user/month=20090801 | CC-MAIN-2016-07 | refinedweb | 146 | 67.25 |
Continuing first article, this time we will write some more useful custom collectors: for grouping by given criteria, sampling input, batching and sliding over with fixed size window.
Grouping (counting occurrences, histogram)Imagine you have a collection of some items and you want to calculate how many times each item (with respect to
equals()) appears in this collection. This can be achieved using
CollectionUtils.getCardinalityMap()from Apache Commons Collections. This method takes an
Iterable<T>and returns
Map<T, Integer>, counting how many times each item appeared in the collection. However sometimes instead of using
equals()we would like to group by an arbitrary attribute of input
T. For example say we have a list of
Personobjects and we would like to compute the number of males vs. females (i.e.
Map<Sex, Integer>) or maybe an age distribution. There is a built-in collector
Collectors.groupingBy(Function<T, K> classifier)- however it returns a map from key to all items mapped to that key. See:
import static java.util.stream.Collectors.groupingBy; //... final List<Person> people = //... final Map<Sex, List<Person>> bySex = people .stream() .collect(groupingBy(Person::getSex));It's valuable, but in our case unnecessarily builds two
List<Person>. I only want to know the number of people. There is no such collector built-in, but we can compose it in a fairly simple manner:
import static java.util.stream.Collectors.counting; import static java.util.stream.Collectors.groupingBy; //... final Map<Sex, Long> bySex = people .stream() .collect( groupingBy(Person::getSex, HashMap::new, counting()));This overloaded version of
groupingBy()takes three parameters. First one is the key (classifier) function, as previously. Second argument creates a new map, we'll see shortly why it's useful.
counting()is a nested collector that takes all people with same sex and combines them together - in our case simply counting them as they arrive. Being able to choose map implementation is useful e.g. when building age histogram. We would like to know how many people we have at given age - but age values should be sorted:
final TreeMap<Integer, Long> byAge = people .stream() .collect( groupingBy(Person::getAge, TreeMap::new, counting())); byAge .forEach((age, count) -> System.out.println(age + ":\t" + count));We ended up with a
TreeMapfrom age (sorted) to count of people having that age.
Sampling, batching and sliding window
IterableLike.sliding()method in Scala allows to view a collection through a sliding fixed-size window. This window starts at the beginning and in each iteration moves by given number of items. Such functionality, missing in Java 8, allows several useful operators like computing moving average, splitting big collection into batches (compare with
Lists.partition()in Guava) or sampling every n-th element. We will implement collector for Java 8 providing similar behaviour. Let's start from unit tests, which should describe briefly what we want to achieve:
import static com.nurkiewicz.CustomCollectors.sliding @Unroll class CustomCollectorsSpec extends Specification { def "Sliding window of #input with size #size and step of 1 is #output"() { expect: input.stream().collect(sliding(size)) == output where: input | size | output [] | 5 | [] [1] | 1 | [[1]] [1, 2] | 1 | [[1], [2]] [1, 2] | 2 | [[1, 2]] [1, 2] | 3 | [[1, 2]] 1..3 | 3 | [[1, 2, 3]] 1..4 | 2 | [[1, 2], [2, 3], [3, 4]] 1..4 | 3 | [[1, 2, 3], [2, 3, 4]] 1..7 | 3 | [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7]] 1..7 | 6 | [1..6, 2..7] } def "Sliding window of #input with size #size and no overlapping is #output"() { expect: input.stream().collect(sliding(size, size)) == output where: input | size | output [] | 5 | [] 1..3 | 2 | [[1, 2], [3]] 1..4 | 4 | [1..4] 1..4 | 5 | [1..4] 1..7 | 3 | [1..3, 4..6, [7]] 1..6 | 2 | [[1, 2], [3, 4], [5, 6]] } def "Sliding window of #input with size #size and some overlapping is #output"() { expect: input.stream().collect(sliding(size, 2)) == output where: input | size | output [] | 5 | [] 1..4 | 5 | [[1, 2, 3, 4]] 1..7 | 3 | [1..3, 3..5, 5..7] 1..6 | 4 | [1..4, 3..6] 1..9 | 4 | [1..4, 3..6, 5..8, 7..9] 1..10 | 4 | [1..4, 3..6, 5..8, 7..10] 1..11 | 4 | [1..4, 3..6, 5..8, 7..10, 9..11] } def "Sliding window of #input with size #size and gap of #gap is #output"() { expect: input.stream().collect(sliding(size, size + gap)) == output where: input | size | gap | output [] | 5 | 1 | [] 1..9 | 4 | 2 | [1..4, 7..9] 1..10 | 4 | 2 | [1..4, 7..10] 1..11 | 4 | 2 | [1..4, 7..10] 1..12 | 4 | 2 | [1..4, 7..10] 1..13 | 4 | 2 | [1..4, 7..10, [13]] 1..13 | 5 | 1 | [1..5, 7..11, [13]] 1..12 | 5 | 3 | [1..5, 9..12] 1..13 | 5 | 3 | [1..5, 9..13] } def "Sampling #input taking every #nth th element is #output"() { expect: input.stream().collect(sliding(1, nth)) == output where: input | nth | output [] | 1 | [] [] | 5 | [] 1..3 | 5 | [[1]] 1..6 | 2 | [[1], [3], [5]] 1..10 | 5 | [[1], [6]] 1..100 | 30 | [[1], [31], [61], [91]] } }Using data 
driven tests in Spock I managed to write almost 40 test cases in no-time, succinctly describing all requirements. I hope these are clear for you, even if you haven't seen this syntax before. I already assumed existence of handy factory methods:
public class CustomCollectors { public static <T> Collector<T, ?, List<List<T>>> sliding(int size) { return new SlidingCollector<>(size, 1); } public static <T> Collector<T, ?, List<List<T>>> sliding(int size, int step) { return new SlidingCollector<>(size, step); } }The fact that collectors receive items one after another makes are job harder. Of course first collecting the whole list and sliding over it would have been easier, but sort of wasteful. Let's build result iteratively. I am not even pretending this task can be parallelized in general, so I'll leave
combiner()unimplemented:
public class SlidingCollector<T> implements Collector<T, List<List<T>>, List<List<T>>> { private final int size; private final int step; private final int window; private final Queue<T> buffer = new ArrayDeque<>(); private int totalIn = 0; public SlidingCollector(int size, int step) { this.size = size; this.step = step; this.window = max(size, step); } @Override public Supplier<List<List<T>>> supplier() { return ArrayList::new; } @Override public BiConsumer<List<List<T>>, T> accumulator() { return (lists, t) -> { buffer.offer(t); ++totalIn; if (buffer.size() == window) { dumpCurrent(lists); shiftBy(step); } }; } @Override public Function<List<List<T>>, List<List<T>>> finisher() { return lists -> { if (!buffer.isEmpty()) { final int totalOut = estimateTotalOut(); if (totalOut > lists.size()) { dumpCurrent(lists); } } return lists; }; } private int estimateTotalOut() { return max(0, (totalIn + step - size - 1) / step) + 1; } private void dumpCurrent(List<List<T>> lists) { final List<T> batch = buffer.stream().limit(size).collect(toList()); lists.add(batch); } private void shiftBy(int by) { for (int i = 0; i < by; i++) { buffer.remove(); } } @Override public BinaryOperator<List<List<T>>> combiner() { return (l1, l2) -> { throw new UnsupportedOperationException("Combining not possible"); }; } @Override public Set<Characteristics> characteristics() { return EnumSet.noneOf(Characteristics.class); } }I spent quite some time writing this implementation, especially correct
finisher()so don't be frightened. The crucial part is a
bufferthat collects items until it can form one sliding window. Then "oldest" items are discarded and window slides forward by
step. I am not particularly happy with this implementation, but tests are passing.
sliding(N)(synonym to
sliding(N, 1)) will allow calculating moving average of
Nitems.
sliding(N, N)splits input into batches of size
N.
sliding(1, N)takes every N-th element (samples). I hope you'll find this collector useful, enjoy! | https://www.nurkiewicz.com/2014/07/grouping-sampling-and-batching-custom.html | CC-MAIN-2019-22 | refinedweb | 1,283 | 52.26 |
Posted by Maheshkumar S Tiwari (MVP) on 12 Aug 2013 1:07 PM. Last revised by Steef-Jan Wiggers (MVP, Microsoft Partner) on 1 Jan 2015 12:28 PM.
BizTalk Developer Interview Questions and Answers - Schema
Table of Contents
Introduction
Questions and Answers
Author
Contributors
See Also
Introduction
This article intends to cover the answers to BizTalk schema-related questions that a BizTalk developer can face during an interview.
Questions and Answers
What is the purpose of a property schema?
A property schema is a special type of schema that is not created to describe messages; instead, it describes context properties. It consists of only child field elements under the schema root. See MSDN: Different Types of BizTalk Schemas.
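As a rough sketch (the element names and namespace are invented for illustration), a property schema is essentially an XSD whose content is a flat list of field elements under the schema root, with no records; the BizTalk editor also marks it with a schema_type="property" annotation:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical property schema: a flat list of field elements only, no records -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           targetNamespace="https://MyProject.PropertySchema"
           xmlns="https://MyProject.PropertySchema">
  <xs:annotation>
    <xs:appinfo>
      <!-- This annotation is what tells BizTalk the schema defines context properties -->
      <b:schemaInfo schema_type="property" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="OrderId" type="xs:string" />
  <xs:element name="Priority" type="xs:int" />
</xs:schema>
```

Promoted properties in message schemas then reference these field elements through the property schema's namespace.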
What is the target namespace for a schema?
The target namespace is to a schema what a namespace is to a .NET object, with the root node playing the role of the class name.
Is it possible to create a custom data type and use it in a schema?
Yes, it is possible to create custom data types, and they can be used across the schema. See Can We Have Custom Data Type.
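As a hypothetical sketch, a named xs:simpleType defines the custom type once, and any number of elements can then reuse it:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="https://MyProject.Order"
           xmlns="https://MyProject.Order">
  <!-- Custom data type: a 5-digit zip code -->
  <xs:simpleType name="ZipCodeType">
    <xs:restriction base="xs:string">
      <xs:pattern value="[0-9]{5}" />
    </xs:restriction>
  </xs:simpleType>
  <!-- The custom type reused across the schema -->
  <xs:element name="BillingZip" type="ZipCodeType" />
  <xs:element name="ShippingZip" type="ZipCodeType" />
</xs:schema>
```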
Can a schema have two nodes with the same name and different data types?
Yes, as long as they are not in the same scope.
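A quick sketch of what that looks like: two locally declared Id elements, each inside a different record and with a different type, which is legal because their scopes differ:

```xml
<xs:element name="Customer">
  <xs:complexType>
    <xs:sequence>
      <!-- Id scoped to Customer: an integer -->
      <xs:element name="Id" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:element>
<xs:element name="Order">
  <xs:complexType>
    <xs:sequence>
      <!-- Id scoped to Order: a string; no clash, since the scope is different -->
      <xs:element name="Id" type="xs:string" />
    </xs:sequence>
  </xs:complexType>
</xs:element>
```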
Is it possible to include and import in a single schema?
Yes, it is possible; both are ways to reuse an already existing schema. The only condition is that the schema being included must have the same target namespace as the including schema, or no namespace at all.
By default, what is the data type of elements in a schema?
xs
:
string
What
is the difference between Group Max occurs
, Group Min Occurs and Max occurs, Min Occurs?
These are all node properties. See MSDN
Node Properties
..
What is BlockDefault property used for?
Use the
BlockDefault
property to prevent or restrict the types of derivations that can be used in instance messages for all data types defined by the schema being edited. See MSDN
BlockDefault (Node Property of All Schemas)
.
.
Can we have schema without a target namespace? What will be its MessageType?
Yes, we can have a schema without target namespace and it's message type will be the Root node.
Which property is only available for the flat file schema?
Custom Date/time property is only available for flat file schema.
.
BizTalk can automatically create the schema from DTD, well formed XML, XDR. To do this schema generator is used.
How is schema generator invoked?
Right click the project in Solution Explorer and select
Add Generated Items --> Generate Schemas.
What is InstallWFX
.
vbs
script?
It is a script which when run installs the BizTalk Schema Generator. It is used when generating schema from existing items. It's likely to get error first time or after updates "WFX to XSD Schema generation module is not installed". Then this script can be used to install the schema generator.
Can "EDI" be a part of Namespace?
It can be but it should be avoided in the projects that uses BizTalk EDI engine as during run time there can be conflicts with this and expected results might not be seen.
Is it possible to promote XML record of
ComplexContent
?
No. To promote XML record its ContentType property should be set SimpleContent.
What is the maximum length allowed for
promoted
properties?
255 characters
What is the maximum length allowed for Distinguished fields?
It can be of any length, no limits.
How to create an XPath alias to a field which can be used in decision making in Orchestration?
Distinguished field is
a
XPath alias
to
the field
.
To create it
,
right click the element-->Promote-->Show promotion-->Add.
What is the Root Node?
It's a node within a BizTalk Server schema that represents the outermost XML element in the business document specified by the schema.
.
What are encoding options
available
used by BizTalk when creating schema?
There are various options but BizTalk always
uses
UTF-16 encoding for their schemas. See
more
.
Does BizTalk add any namespaces when creating schema?
Yes.
and
are added by BizTalk when creating a schema. See
more
.
How is schema namespace added by BizTalk when creating schema?
By default, the BizTalk Editor will set the namespace of a schema to..
Author
Maheshkumar S Tiwari
|
User Page
Contributors
Steef-Jan Wiggers
[Microsoft Integration MVP]
Sandro Pereira
[Microsoft Integration MVP]
See Also
Read suggested related topics:
BizTalk Developer Interview Questions and Answers
BizTalk: Advanced Questions
BizTalk Administrator Interview Questions and Answers (50 Q&A)
BizTalk Server: Deep Dive in Schema Design | http://social.technet.microsoft.com/wiki/contents/articles/19063.biztalk-developer-interview-questions-and-answers-schema.aspx | CC-MAIN-2015-11 | refinedweb | 764 | 66.84 |
katex.dart
Fast math typesetting for the web, ported to Dart.
Overview
katex.dart is a port of the KaTeX project codebase which was originally developed and released by Kahn Academy. As such, credit should be provided to them for the inspiraton and current design principals on which this project is based.
A very detailed description is provided by the KaTeX project:
KaTeX is a fast, easy-to-use JavaScript library for TeX math rendering on the web.
- Fast: KaTeX renders its math synchronously and doesn't need to reflow the page. See how it compares to a competitor in this speed test.
--render expressions using Node.js and send them as plain HTML.
Motivation
The KaTeX (KaTeX.js) project provides fast typsetting of mathematical expressions in the browser, however, there are some areas of interest that the katex.dart project aims to explore which may or may not be commonly shared by the KaTeX.js project:
- Adopting Polymer and Shadow DOM technologies to handle the presentation of the mathematical expressions.
- Providing better maintainability and extensibility than KaTeX.js through the class-based architecture and optional typing interface afforded by the Dart language.
- Providing better performance than KaTeX.js through code executed on the Dart VM (currently only supported through the use of the Dartium web browser) and possibly through the transpiled JavaScript code produced by Dart's Dart2JS compiler.
- Matching the functionality and features provided by the MathJax project as, currently, KaTeX.js does not support as many features, symbols and mathematical functions as MathJax.
- Adopting MathML as the native interface for outputing both the presentation and semantics of a mathematical expression remains a key area of interest for future work on this project. However, as of yet, browser support for MathML remains critically low.
Status
Please note that this project is under heavy development and API changes are very probable while this package is in beta. Currently, much more work needs to be accomplished to bring the benefits of Dart's class-based structure, typing interface and the promise of increased execution speed under the Dart VM to this project.
The only browser that can currently run this code is Dartium which only ships with the Dart SDK. This will be true while this project is in beta.
Server-side rendering is not yet supported, but it is planned.
Demonstration Application and Benchmarks
TODO
Usage
Include the
main.dart and
katex.min.css files on the page.
The
main.dart file is not provided by the project and is not required to be named
main.dart. In this example, the
main.dart file contains the code necessary to import the
katex.dart package, create the
Katex instance and render the formatted expression to an
Element. See the Dart code provided below.
<link rel="stylesheet" href="/path/to/katex.min.css"> <script type="application/dart" src="main.dart"></script>
Execute the
katex.render method with a TeX-formatted mathematical expression
String and an
Element to append the output into:
import 'package:katex/katex.dart'; Katex katex = new Katex( loggingEnabled: false ); katex.render( "c = \\pm\\sqrt{a^2 + b^2}", element );
The
loggingEnabled argument can be set to
true to enable key activity logging in the
katex.parser and
katex.lexer libraries.
Browser Support
In general, the most current and prior major release of any given browser will be supported by this project. The KaTeX.js project (as a project goal) seems to support older browsers than this project will ultimately support. If older browser support is needed, please consider the use of the KaTeX.js project.
TODO
Build Notes
TODO
Running the Demonstration Application Locally
To run the example application, run the following command from the repository root of the project:
pub serve
After the command completes, visit the following address in a web browser of choice:
Please note that the Dartium web browser that ships with the Dart SDK can run the Dart code natively (as it includes the Dart VM), but that all other browsers must have the Dart code transpiled to JavaScript. When using the
pub serve command, the Dart2JS compiler included with the Dart SDK performs the transpiling automatically. However, there may be some noticable "lag" in loading the example application while the Dart code is being transformed.
Contributing
Thank you for your consideration! Please review the Contributing Guidelines.
If your primary interest is developing this project in JavaScript, please consider contributing to the KaTeX.js project. Please be mindful of their procedures and goals as, in time, this project's requirements may be substaintially different. | https://www.dartdocs.org/documentation/katex/0.1.0/index.html | CC-MAIN-2017-47 | refinedweb | 761 | 57.06 |
This article about with react native is written by Julia Korsun. building a location-based app Read original article on Django Stars blog. What good can happen when we tap Allow on the pop-up that asks to access our location? Some apps provide a better experience, like Facebook suggesting events nearby. Others — can’t work properly without knowing device location, like Uber or Google Maps. These location-based apps use device location to enable and control some features. From wok delivery to Find My iPhone, location-based apps help us with our everyday tasks just by knowing where we are. Location can be either the primary function, like in Tinder; or auxiliary, like in Instagram: when uploading a photo, Instagram will suggest you a place so you can tag your location. Whether it’s the main function or not, location does improve the user experience. In this article, I’ll tell you about the main tech components of location-based apps, and you’ll learn how to develop one using React Native. First, I’ll briefly describe React Native and compare it to native app development. Then I’ll share approaches to gathering and displaying the location in an app, and finally, in a short passage I’ll describe some of the design challenges and how React Native copes with them. You May Also Like djangostars.com What to Consider When Building the Backend for a Location-Based Service If you make a quick review of apps in various categories — healthcare, games, finance, — it will show that the device’s… Tools For Location-Based Application Development: React Native In this part, I will briefly describe the React Native framework, its pros & cons, and why it’s great for building location-based apps. React Native is an open source JavaScript framework that allows developers to create cross-platform apps with their native behavior. What is behavior in this description? Let me explain. 
iOS and Android are different — their interface is different, their animation is different, lots of things are different. Having the same app for iOS and Android would require developing two separate apps. It used to be a laborious process, but using React Native, developers write one code, and it works correctly on both platforms. This allows businesses to offer their app to both iOS and Android users which means a bigger market share. That’s why many companies prefer React Native — they can’t afford developing two separate apps or are confident about whether their users have iOS or Android. And considering that the cross-platform market is likely to grow to $80 billion by 2020, it seems a rational choice for startups. Now, I’ll explain the pros and cons of React Native in terms of developing location-based apps. React Native Pros Rather than writing separate code for each system (iOS and Android), you build one to operate them both. Neither do you design different UI and UX. Cross-platform. React Native uses native controls and modules. The code interacts with the corresponding native iOS and Android components and renders the code to native APIs. Native API is the focus — by using a separate from UI thread, it increases the performance of the app. High performance. The React Native community grows every day, and so does the number of open-source components. This allows for sharing the experience among the community members, improving the framework, and finding the solutions to existing bugs. All this combined accelerates the development process. Open source. The three previous points conclude into a considerable advantage — React Native saves your money. It’s faster than building two separate apps and so it takes less time for testing and releasing an MVP. It saves money. However, there are cases when you might not want to use React Native. They include: If you know that your audience prefers a particular platform, I suggest you opt for native development. 
Firstly, the app will be tailored to match the specifics of the OS, and secondly, you’ll be able to use platform-specific features, like ARKit for iOS. You don’t need a cross-platform app. The one thing I particularly dislike is that React Native has a limited number of supported native APIs. There are enough to build a location-based app, though. In case you need others — you can bridge them using the native code. You need more APIs than React Native offers. How to gather and display user location This part is about gathering and displaying the location data. Depending on the specifics of your app, you will opt for a particular way. Gathering location data I single out three ways to gather device location. This is a generic overview for you to understand the cases when to opt for each case and the differences between them. Note: Using React Native API There’s a that identifies the location of a device. It’s easy to install and use, but there’s a ‘ ’ — it neither works on the background nor shows the location provider (3G, Wi-Fi, GPS). native JavaScript API but react-native-background-geolocation It’s a package that determines the location of a device from 0 to 1000 meters (0.6 miles). It takes more battery energy, but on the other hand, it’s up to you to configure how often to track the location. The package also integrates with SQLite — you can store the recorded location data and sync it to your database via HTTP. 
import { NativeModules, DeviceEventEmitter, PermissionsAndroid } from 'react-native' import Config from 'react-native-config' import get from 'lodash/get' const { GeoLocation } = NativeModules class BackgroundGeoLocation { constructor(token, user_id) { this.state = null } start(dispatch, nextState) { this.dispatch = dispatch const token = get(nextState, 'session.data.token') const user_id = get(nextState, 'user.data.user_id') const id = get(nextState, 'user.data.id') this.state = { user_id, token, } return PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.ACCESS_COARSE_LOCATION) .then(is_granted => is_granted === PermissionsAndroid.RESULTS.GRANTED ? is_granted : PermissionsAndroid.requestMultiple([ PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION, PermissionsAndroid.PERMISSIONS.ACCESS_COARSE_LOCATION, ]) ) .then(_ => PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.ACCESS_COARSE_LOCATION)) .then(is_granted => is_granted === PermissionsAndroid.RESULTS.GRANTED ? true : new Error()) .then(_ => setTimeout(() => GeoLocation.startService(token, user_id, id, `${Config.API_URL}/live/car-tracking/gps-pos/`), 300)) .catch(e => console.log(e)) } stop() { return GeoLocation.stopService() .then(_ => console.log(_)) } handleLocationChange(geo) { console.log(geo) } } export default BackgroundGeoLocation Being a package, it requires regular maintenance and updates, but there’s a support channel on GitHub from the creator of the package. Package price: iOS — free; Android — $300 for one app. Note: Combo: Bridge native code to JavaScript API The main issue with the first approach (to use native JavaScript API) can be solved by adding native code that will start a foreground service in a separate thread. That’s exactky what we did when . Partly, we chose this way because React Native made it easy to bridge native code to React Native components. developing our food delivery product Azyan package com.djangostars.azyan;; /** * Created by AGulchenko on 5/7/18. 
*/; } } } Getting permission to access the location data sometimes causes troubles, but the troubles are bearable. React Native solves the problem without too much fuss. Different systems ask for the permission to access the location data on different stages: iOS requests the permission first time you open an app; Android — upon the download. It could cause trouble if we were using the native code, however, React Native simplifies this process using module. It allows access to location data without triggering the permission alert. the check access to location data Read Full Azyan Case Study: Displaying Location The location data isn’t always precise. You must have seen something like this: Here’s why it may happen: the device collects the location data from three sources: Wi-Fi, cellular and GPS, the latter being the least accurate. Our devices are in the constant state of checking if there’s good Internet connection. If there’s none, the device will enable GPS. And if there’s a quick leap from 4G to GPS, the device is lost’. ‘ To solve this problem, I recommend by Google. It allows you to set the time and distance at which the location data is updated: for instance, update the data every 50 meters and every 10 seconds. You will avoid the noisy data as this API matches all device locations to roads and sidewalk. However, if the device is far from either, it won’t be effective. Fused Location Client A Few Words About Design In this short part, I will tell you about obstacles that may arise with building location-based apps and how to solve them with React native components. React Native allows for simple ways of displaying maps. It UI components for simplify the job for engineers. We would use Material Design to create a Google Maps wrapper, and then React Native would adjust them to the specific features of each platform. Material Design is a React Native feature that makes an endless list of search results. 
In Uber, if you start typing an address, you will get all the destinations starting with what you’ve entered. So try , it will show all 3rd Aves around. Such lists aren’t really endless — they just load more results as we scroll down through them. another build-in UI component includes the fixed header, footer, and delimiters to the search. If you were to create such result lists from scratch, it would take you more time than building a complete React native app. Infinite List 3rd Ave Flat List — <FlatList data={get(props, 'order.data.items', [])} renderItem={listItem} keyExtractor={({ id }) => id} style={style.list} /> function listItem({ item: { quantity, name } }) { return ( <View style={style.main}> <Text style={[style.count, style.listItemText]}>{quantity}</Text> <Text style={[style.listItemText, style.name]}>{name}</Text> </View> ) } Bottom Line React Native may be the right choice if you are going to build a location-based application. If most of the described advantages are true for you, go do deeper research on the technology and its capabilities. I now encourage you to pay more attention to the apps you use. You’ll be surprised to find out that of them ask to allow access to your location. For many companies, it’s crucial to know user location to provide quality and more user-oriented service. Why don’t you try? most If you find this post useful, please tap 👏 button below :) | https://hackernoon.com/how-to-develop-a-location-based-application-using-react-native-ce819814925d | CC-MAIN-2021-49 | refinedweb | 1,731 | 56.55 |
library.engine <library.engine at gmail.com> added the comment: I second request for tag names not prefixed with a root namespace in python, mostly because of ugly code, as performance degradation is negligible on relatively small files. But this ubiquitous repeating (even in the case if you're appending a variable to every tag name) is just against the DRY principle, and I don't like it. I think an extra option to pass list of namespaces that should NOT be prepended to the tag names would be sufficient. ---------- nosy: +library.engine _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________ | https://mail.python.org/pipermail/python-bugs-list/2011-May/139409.html | CC-MAIN-2018-05 | refinedweb | 102 | 69.31 |
I posted a discussion in the user forum (the first, and only post there) at Binding to multiple namespaces? but have not yet received any response. I'm assuming that due to the lack of volume there, nobody is actually monitoring that forum, so I'm posing the question here.
Basically I need to bind my SchemaResolverDeployer to more than one namespace. Any thoughts?
Afaik, that should be doable.
As that was the reason Alexey changed the resolver type to multiple resolver.
Search older posts, I think it was the JCA guys that asked for similar feature.
Sorry, I missed that. (For some reason the XB folder in my mail client hasn't been updated until now) And I didn't know we had a user forum, that's new
Yes, that's doable. You specify the namespace for the root class. Everything referenced from it (unless some class is annotated with a specific namespace or a prefix) will be bound to the namespace of the root. | https://community.jboss.org/message/522171 | CC-MAIN-2014-15 | refinedweb | 167 | 73.58 |
Let us continue our journey to master ns-3. We are not yet ready to foray into its strongly fortified castle so, instead, lets explore Doxygen, Waf and socket programming. Once they are conquered, we will be ready for a direct attack on ns-3.
Documenting ns-3 with Doxygen
Programming 101 teaches us that comments make programs more understandable. We also learn that comments are removed by the compiler at the first phase itself. But hats off to the ingenuity of Dimitri van Heesch who created Doxygen, which made the whole business of commenting a lot more exciting. Doxygen is a document preparation tool used by software developers to automatically generate documentation for projects, provided they are commented with tags known to Doxygen. To a developer, this is a real gift but to a novice it might be a slight hindrance. Unlike normal comments, in Doxygen, comments are not intended to help the person who reads the source file directly; rather, they aid in the automatic document generation. Initially, when you go through the source files of ns-3, which are plagued with Doxygen style comments, you might find them unattractive. Figure 1 shows an ns-3 source file with Doxygen comments.
So is it necessary to learn Doxygen to master ns-3? The short answer is No. Let me elaborate: if you know Doxygen, well, that's great! You will have a clear idea where to look for answers in the ns-3 documentation. But if you are not familiar with Doxygen, then all you have to do is remember that whatever text comes between /** and */ in an ns-3 source file is a Doxygen comment, and it in no way affects the working of the ns-3 source files written in C++.
All commands start with a backslash (\) or an at-sign (@). For example, you might see commands like \brief, \author,
\param, \bug, etc, inside Doxygen style comments. They convey a special message to the Doxygen parser. The command \brief is used to start a paragraph that serves as a brief description. \author starts a new paragraph to display the author names. \param starts a parameter description for a function parameter with a name, followed by a description of the parameter. \bug is used for bug reporting. There are many more commands used in Doxygen, but you need to look them up only if a need arises. So for now, let us not worry much about Doxygen but move forward and tackle other issues relevant to ns-3.
Installing ns-3 with Waf
Software installation on Linux is often considered troublesome by new users and gives them nightmares. To simplify the installation procedure, ns-3 uses a Python-based build automation tool called Waf. Those who are familiar with ns-2 might remember that a tool called make was the build automation tool there. But in ns-3, make is replaced by Waf, a relatively modern tool.
ns-3 can be installed by using individual source files or a tarball (archive) release of the source files. I will only discuss the installation of ns-3.22 (the latest version of ns-3) with the tarball release of source files. But remember, with different operating systems and different versions, ns-3 installation might fail occasionally. If that happens, you are the victim of a problem called dependency hell. In case of a failed installation, please follow the detailed installation steps given at. To follow the installation steps of ns-3.22 given here, you need to download a file named ns-allinone-3.22.tar.bz2. Please remember that this file alone will enable you to install ns-3 with Waf. The file can be downloaded from. It is an archived and compressed file with a size about 25MB. The extension .tar denotes an archive file and the extension .bz2 is associated with the file compression utility bzip2. The installation requires GCC tools like g++. But usually, the Linux environment will contain all the required tools, by default. So the only thing to ensure is that you have Waf installed in your system prior to ns-3 installation.
If you feel that you are ready to install ns-3, then let us begin. First, create a new directory in your Linux file system called ns (or any other name that you prefer). Then copy the file into the directory ns. After copying, you have to extract (unzip) the file. To do so, either right-click on the file with your mouse and select the menu item called extract here (or something similar), or execute the following command on a terminal. Set the path of the terminal to the directory ns.
tar xjf ns-allinone-3.22.tar.bz2
Now you have the extracted files inside the directory called ns-allinone-3.22 in your current directory ns. Move to the directory ns-allinone-3.22/ns-3.22 by typing the following command in a terminal:
cd ns-allinone-3.22/ns-3.22
To start the installation of ns-3, type the following commands on the terminal:
./waf configure
./waf clean
./waf --build-profile=debug --enable-examples --enable-tests configure
This will install ns-3 with the build profile debug enabled. If you want to enable the build profile optimized, then use the following command:
./waf --build-profile=optimized --enable-examples --enable-tests configure
But remember that many of the example scripts provided by ns-3 may not be available with the build profile optimized. If the build profile used is debug, you can execute the Hello World equivalent of ns-3 by typing the following command on the terminal:
./waf --run hello-simulator
The output of the script executed is Hello Simulator. Thus, we have executed our first ns-3 script, a milestone in our long journey.
The basics of socket programming
Now that we have installed ns-3 and executed our first ns-3 script, the next logical step is to learn socket programming. To do so, let us begin by discussing the TCP/IP model of a network, which divides the different functionalities associated with a computer network into a number of layers. The different layers are the application layer, transport layer, Internet layer and the link layer (the link layer in the TCP/IP model also includes the physical layer of the OSI model, another network model).
The process of sending a text file from one system (node) to another can be summarised as follows. At the sender side, the message is first processed by the application layer protocol, FTP, which deals with the user aspects, followed by the transport layer protocol, TCP, responsible for process-to-process connectivity (selecting the correct process from a number of processes at the receiver node). This is followed by the network layer protocol, IP, which is responsible for source-to-destination connectivity (selecting the receiver node from a number of nodes in a network), and finally, a link layer protocol, say ARP, which helps in finding the next node by converting the IP address to a MAC address, is called. At the receiver side, all these layers operate on the message in the reverse order. Even though there are four layers associated with network communication, in socket programming, we mostly worry about the two layers in the middlethe transport layer and the network layer.
The protocol data units associated with these layers are as follows: frames in the link layer, packets in the network layer, segments (for TCP) or datagrams (for UDP) in the transport layer, and messages in the application layer. Similarly, three of these layers have addresses associated with them: the port address (16-bit) associated with the transport layer, the IP address (an IPv4 address is 32-bit and an IPv6 address is 128-bit) associated with the network layer, and the MAC address (48-bit) associated with the link layer. So we need at least a few of these addresses for communication, depending on the nature and placement of the two nodes that are communicating.
Now that we are familiar with the layered architecture of a network and the different addresses associated with each layer, the next important question to be answered is: what is a socket? It is a mechanism used to realise client-server or peer-to-peer communication in a network. It is a protocol-independent method to create a connection between two processes residing in the same computer or different computers in a network. This clearly highlights one major difference between ns-2 and ns-3. In the latter, real data is sent between two processes to analyse the network's behaviour, whereas in ns-2, real data flow does not happen.
Socket programming is used in ns-3 to set up inter-process communication between two processes such that actual data can be sent between them. This data flow is monitored by various ns-3 modules and summarised into a log file called the trace file. By analysing the trace file, you obtain your results.
Now let us look at how to set up a connection between two processes as a starting point to ns-3 simulation. A socket acts as an end point of a communication channel. In order to set up communication between two processes, there should be two sockets: one at the sender's side and another at the receiver's side. The behaviour of these two sockets is different. Often, one behaves like a server socket and the other like a client socket. The client tries to connect to the server to obtain data. For practical implementation, socket APIs provided by the operating system are used. The most commonly used socket APIs are the Berkeley sockets (BSD sockets), which essentially are a computing library for Internet sockets and UNIX domain sockets used for inter-process communication. These APIs were originally developed for C but can be used with C++ also, without any changes.
Since most communication nowadays involves the Internet, let us only discuss the Internet socket. It is characterised by its address and the protocol used. The address of an Internet socket consists of the IP address and the port number, thus involving the transport layer and the network layer. Another important aspect is the underlying transport protocol used; the choices include TCP and UDP.
Communication involving sockets can be broadly classified into two types: reliable communication (client-server) and unreliable communication (peer-to-peer). In reliable communication involving TCP, a virtual circuit is established, which guarantees packet transmission. In unreliable communication involving UDP, a datagram-based communication model is followed, which only provides the unordered delivery of packets. There are a number of socket APIs (system-defined functions provided by various header files), which help us realise TCP- and UDP-based communication. First, let us discuss the APIs used commonly by both TCP and UDP connections.
Socket APIs common to TCP and UDP connections
socket: The socket API is used to create a socket of a given domain, type and protocol. In case of a TCP connection, both client and server use this function. For a UDP connection, both the peers (nodes) use this function.
int socket (int domain, int type, int protocol);
The domain is AF_INET (IPv4 only) or AF_INET6 (IPv6 or IPv4). The type is SOCK_STREAM for TCP and SOCK_DGRAM for UDP. The protocol is usually set as 0 so that the type defines the connection of the domain.
bind: The bind API assigns an address to the socket on the server side. It binds a particular socket to an address to which other processes can connect. In the Internet protocol suite, the address is a combination of the IP address and the port address. The bind API is only used at the server side in TCP and at the receiver side in UDP.
int bind ( int sid , struct sockaddr *addrPtr , int len );
Here sid is the socket ID, addrPtr points to the address structure and len is the size of the address structure pointed to by addrPtr.
close: The close API is used to close a socket and terminate the connection. This results in freeing all the kernel data structures previously allocated.
int close( int sockid );
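As an aside, these three common calls exist almost verbatim in Python's socket module, which is a thin wrapper over the BSD API. The following sketch is only an illustration of the call sequence, not ns-3 code:

```python
import socket

# socket(): domain AF_INET, type SOCK_STREAM (TCP) or SOCK_DGRAM (UDP);
# protocol 0 lets the (domain, type) pair select the default protocol.
sid = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)

# bind(): attach an (IP address, port) pair to the socket; port 0 asks
# the operating system to pick any free port.
sid.bind(("127.0.0.1", 0))
host, port = sid.getsockname()

# close(): terminate the socket and free the kernel resources.
sid.close()

print(host, port)  # e.g. 127.0.0.1 and an OS-assigned port number
```

Binding to port 0 is a convenient trick for examples, since it cannot collide with a port that is already in use.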
Socket APIs specific to the TCP connection
send: The send API is used to send a message over a stream. It returns the number of bytes sent or -1 in case of a failure.
int send (int sid, const char *bufferPtr, int len, int flag);
Here bufferPtr contains the message to be sent and the flag is set as 0, by default.
recv: The recv API is used to receive up to len bytes in bufferPtr, and it returns the number of bytes received or -1 in case of failure.
int recv (int sid, char *bufferPtr, int len, int flag);
Socket APIs that are specific to the TCP server
listen: The listen API fixes the maximum number of connections the server can accept. It returns 0 in case of success or -1 in case of failure.
int listen (int sid, int size);
Here, size sets the maximum number of connections the server can accept.
accept: The accept API connects a specific client to the server. It waits for an incoming connect request from the client, and returns the socket ID and address of the client connected.
int accept (int sid, struct sockaddr *addrPtr, int *lenPtr);
Here, the accept API creates a new socket for the client whose connection it has accepted.
Socket APIs specific to TCP clients
connect: The connect API is used by the client process to specify the address of the server to connect with.
int connect (int sid, struct sockaddr *addrPtr, int len);
Here, addrPtr specifies the server address to connect with, and the API returns 0 in case of success and -1 in case of failure.
Socket APIs specific to UDP connections
sendto: The sendto API is the UDP counterpart of the TCP API send. It additionally specifies the address to which the message is to be sent because no prior connection is established in UDP communication.
int sendto ( int sid, const void *bufferPtr, size_t bufferLength, int flag, struct sockaddr *addrPtr, socklen_t addrLength );
Here addrPtr specifies the address of the node to which data is sent. It returns the number of bytes sent or -1 in case of failure.
recvfrom: The recvfrom API is the UDP counterpart of the TCP API recv. recvfrom obtains the sender address as additional information from the variable addrPtr.
int recvfrom ( int sid, void *bufferPtr, int bufferLength, int flag, struct sockaddr *addrPtr, int *addrLengthPtr );
The recvfrom API returns the number of bytes received or -1 in case of failure.
Socket programming header files
The socket programming APIs are not provided by a single header file. The header files <sys/types.h> and <sys/socket.h> are included in every program to access functions like socket, bind, listen, accept, connect, send, recv, sendto, recvfrom, etc. The header file <unistd.h>, part of the POSIX library, is included to access the close function. The header file <netinet/in.h> is included to use structures like sockaddr_in, which is used to store addresses for the Internet address family. To access the gethostbyname function, which searches the database and finds an entry that matches the host name specified by the name argument, we use the header file <netdb.h>. The header file <arpa/inet.h> is included to use the function htons, which converts an unsigned short integer from host byte order to network byte order. There are many other header files and functions in socket programming, but I am only discussing the bare minimum, which is absolutely essential.
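The htons behaviour described above is easy to check from Python, which exposes the same byte-order helpers. This is an illustrative aside, not part of the C programs that follow:

```python
import socket
import struct

port = 3333
net = socket.htons(port)  # host byte order -> network (big-endian) short

# Network byte order is big-endian by definition, so the raw bytes of the
# converted value in native ("=") order equal the big-endian ("!")
# encoding of the original port, whatever the host's endianness is.
assert struct.pack("=H", net) == struct.pack("!H", port)

# The conversion round-trips on any host.
assert socket.ntohs(net) == port
```

On a little-endian machine htons actually swaps the two bytes; on a big-endian machine it is the identity, and both assertions still hold.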
An example of TCP client-server communication
Now let us look at an example where a TCP connection is established between a client and a server:
// TCP server program tcp_server.cc
#include<iostream>
#include<cstdlib>
#include<cstring>
#include<strings.h>     // for bzero
#include<unistd.h>
#include<sys/types.h>
#include<sys/socket.h>
#include<netinet/in.h>
using namespace std;
int main()
{
    int sockfd, newsockfd, portno;
    socklen_t cli;
    char buffer[256];
    struct sockaddr_in serv_addr, cli_addr;
    int n;
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    bzero((char *) &serv_addr, sizeof(serv_addr));
    portno = 3333;
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;
    serv_addr.sin_port = htons(portno);
    bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));
    listen(sockfd, 1);
    cli = sizeof(cli_addr);
    newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &cli);
    bzero(buffer, 256);
    n = read(newsockfd, buffer, 255);
    cout << "Message Received from Client : " << buffer << "\n\n";
    n = send(newsockfd, "Your Message Received", 21, 0);
    close(newsockfd);
    close(sockfd);
    return 0;
}
Both the client and server programs do not contain any code for error checking, in order to reduce their size. The function bzero writes zeros over an array. The port number assigned to the server is 3333. The server listens to this port for the incoming connect request from the client. The client should send its requests to this port, which the server watches for connections. The port number should be an unprivileged one, i.e., in the user range between 1024 and 49151. BSD sockets often use port numbers between 1024 and 5000. Figure 2 shows the terminal executing the server program.
// TCP client program tcp_client.cc
#include<iostream>
#include<cstdlib>
#include<unistd.h>
#include<cstring>
#include<strings.h>     // for bzero and bcopy
#include<cstdio>
#include<sys/types.h>
#include<sys/socket.h>
#include<netinet/in.h>
#include<netdb.h>
using namespace std;
int main()
{
    int sockfd, portno, n;
    struct sockaddr_in serv_addr;
    struct hostent *server;
    char buffer[256];
    portno = 3333;
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    server = gethostbyname("127.0.0.1");
    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
    serv_addr.sin_port = htons(portno);
    connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));
    cout << "\nEnter the Message: ";
    bzero(buffer, 256);
    cin.getline(buffer, 256);
    n = send(sockfd, buffer, strlen(buffer), 0);
    bzero(buffer, 256);
    n = read(sockfd, buffer, 255);
    cout << "Acknowledgement from Server : " << buffer << "\n\n";
    close(sockfd);
    return 0;
}
This is a very simple client-server program in which the client sends a message to the server, and the server prints the message on its terminal before replying with an acknowledgement. The client, in turn, prints the acknowledgment from the server on its terminal. The function bcopy copies memory regions of arbitrary length. The client is also assigned the port 3333, same as the server. The IP address used by the client is 127.0.0.1, which is a special IPv4 address denoting the local host. This happens because the client and server processes are running on the same system. But if the server is running on a system other than the client system, then use the IP address of the system running the server process in the client program. Since TCP is a connection-oriented protocol, the server program should be executed before the client program; otherwise, the client will not be able to connect to the server. Figure 3 shows the terminal executing the client program.
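The same exchange can be sketched on the loopback interface in Python. This is a hedged illustration rather than the article's code: port 0 requests a free ephemeral port, so the demo cannot collide with a service already bound to 3333, and the server runs in a thread so one script can play both roles.

```python
import socket
import threading

def run_server(srv, result):
    conn, addr = srv.accept()               # like accept()
    msg = conn.recv(255)                    # like recv()/read()
    result.append(msg)
    conn.sendall(b"Your Message Received")  # like send()
    conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                  # like bind(); port 0 = any free port
srv.listen(1)                               # like listen()
port = srv.getsockname()[1]

received = []
t = threading.Thread(target=run_server, args=(srv, received))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))            # like connect()
cli.sendall(b"hello")                       # like send()
ack = cli.recv(255)
cli.close()
t.join()

print(received[0], ack)
```

As with the C++ version, the server must be listening before the client calls connect; here the thread start guarantees that ordering.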
An example of UDP peer-to-peer communication
Usually, in UDP, the communicating nodes are not considered as client and server, but are treated as peers involved in communication. To save some space, I haven't included the program here. You can download all the programs discussed in this article from opensourceforu.com/article_soure_code/june2015/ns3.zip.
There are two UDP-based programs: udp_sender.cc and udp_receiver.cc. There are no big differences between the TCP and UDP programs. Instead of using the send and recv functions, UDP programs use the sendto and recvfrom functions. Unlike TCP, in UDP it is not mandatory for the receiver process to be active before the sender process. In UDP, each packet carries the destination address because it is a connectionless service; this is not required in TCP. While using UDP, the functions listen, accept and connect are not used. Even though the programs are somewhat similar, how TCP and UDP actually work is quite different. But we will deal with those differences while learning ns-3.
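A minimal sendto/recvfrom exchange can be sketched in Python on the loopback interface. This is illustrative only; udp_sender.cc and udp_receiver.cc are the C++ versions available for download:

```python
import socket

# The 'receiver' peer binds an address; there is no listen()/accept()
# step in UDP.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# The 'sender' peer needs no connect(): the destination address travels
# with every datagram, as with the C sendto() call.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram", addr)

# recvfrom() also reports who sent the packet.
data, sender = recv_sock.recvfrom(1024)

send_sock.close()
recv_sock.close()
print(data, sender)
```

On the loopback interface this single datagram is delivered reliably in practice, but over a real network UDP gives no such guarantee.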
Socket programming contains a lot of topics, which countless authors and textbooks are trying to tackle effectively. To summarise everything about socket programming into a few pages, I was compelled to omit many important topics. I know crimes of omission are far more serious than crimes of commission, but do forgive me. For a thorough understanding of socket programming, do refer to the book titled UNIX Network Programming, Volume 1: The Sockets Networking API by W. Richard Stevens.
Maintaining ns-3 with Mercurial
All large software projects require a revision control tool for source code management during their development phase, and ns-3 is no exception. The very popular revision control tool called Mercurial is used with ns-3. But I will not discuss Mercurial in this series, because the need for it arises only when you start contributing code to the ns-3 project. At present, our aim is to familiarise ourselves with ns-3; becoming a part of ns-3 development is not our goal. If, at a later stage, your mastery over ns-3 allows you to contribute code, then an understanding of Mercurial will be a necessity. There are Mercurial commands like clone, add, commit, etc., which will allow you to make a copy of an existing project and add your own content to it. I will be content if I can help somebody reach that level with this series on ns-3.
From: Pavol Droba (droba_at_[hidden])
Date: 2004-07-29 03:23:16
Hi,
On Wed, Jul 28, 2004 at 03:47:14AM +1000, Thorsten Ottosen wrote:
>
>
> "Doug Gregor" <dgregor_at_[hidden]> wrote in message news:7DEAB5E4-DFD3-11D8-BD44-000D932B7224_at_cs.indiana.edu...
> |
> | On Jul 27, 2004, at 6:42 AM, Thorsten Ottosen wrote:
>
> | > contribution will fit into the overall scheme. Then a formal
> | > mini-review should follow.
> |
> | At what point are there enough algorithms under the same category that
> | we should just call it a "full" review?
>
> Good question! I don't have a definite answer.
>
> I would prefer that the main contribution is organized by a few people; this main contribution
> should then be given a full review. And then
> extra small contributions are mini-reviewed. Take the string-algorithms as an example. I hope Pavol will encourage people
> to add extra functions and work out their interface with him and others on the list.
>
> I don't see the first real review happening without some group with the main responsibility. And I don't see very small
> contributions happening on their own because I fear the big picture is lost.
>
I completely agree. Mini-reviews are a very good idea, provided that there is a person/group that is responsible
for the overall picture for a particular algorithm group.
During the development of the string algo lib, I have found that there are several useful facilities that
are orthogonal to an algorithm's functionality. A good example is the facility that evolved into the Boost Range library.
These facilities should be reused by all algorithms.
In addition, issues like directory organization, namespace usage, etc. are also important to form a consistent
algorithmic library.
If the core issues are settled, then the "mini-review" approach can be a very good alternative.
Regards,
Pavol.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/07/69374.php | CC-MAIN-2019-39 | refinedweb | 321 | 58.69 |
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
In this recipe, we show how to create, manipulate and visualize graphs with NetworkX.
You can find the installation instructions of NetworkX in the official documentation.
In brief, you can just execute
pip install networkx. On Windows, you can also use Chris Gohlke's installer.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
n = 10  # Number of nodes in the graph.
# Each node is connected to the two next nodes,
# in a circular fashion.
adj = [(i, (i+1)%n) for i in range(n)]
adj += [(i, (i+2)%n) for i in range(n)]
We instantiate a Graph object here, giving the list of edges as argument.
g = nx.Graph(adj)
print(g.nodes())
print(g.edges())
print(nx.adjacency_matrix(g))
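As a plain-Python aside (no NetworkX required, and not part of the original recipe), the adjacency matrix of this ring lattice can be rebuilt by hand from the same edge list, which makes its structure explicit:

```python
n = 10
adj = [(i, (i + 1) % n) for i in range(n)]
adj += [(i, (i + 2) % n) for i in range(n)]

# Dense 0/1 adjacency matrix; the graph is undirected, so it is symmetric.
A = [[0] * n for _ in range(n)]
for u, v in adj:
    A[u][v] = 1
    A[v][u] = 1

assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
assert all(sum(row) == 4 for row in A)   # every node has degree 4
assert sum(map(sum, A)) == 2 * len(adj)  # each edge is counted twice
```

This matches what nx.adjacency_matrix(g) encodes in sparse form: each node is linked to its two nearest neighbours in both directions.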
We use the draw_circular function, which simply positions nodes linearly on a circle.
plt.figure(figsize=(4,4)); nx.draw_circular(g)
Now, let's add an extra node and assign a color attribute to this node. In NetworkX, every node and edge comes with a Python dictionary containing arbitrary attributes. This feature is extremely useful in practice.
g.add_node(n, color='#fcff00')
# We add an edge from every existing
# node to the new node.
for i in range(n):
    g.add_edge(i, n)
plt.figure(figsize=(4,4));
# We define custom node positions on a circle
# except the last node which is at the center.
t = np.linspace(0., 2*np.pi, n)
pos = np.zeros((n+1, 2))
pos[:n,0] = np.cos(t)
pos[:n,1] = np.sin(t)
# A node's color is specified by its 'color'
# attribute, or a default color if this attribute
# doesn't exist.
color = [g.node[i].get('color', '#88b0f3') for i in range(n+1)]
# We now draw the graph with matplotlib.
nx.draw_networkx(g, pos=pos, node_color=color)
plt.axis('off');
plt.figure(figsize=(4,4));
nx.draw_spectral(g, node_color=color)
plt.axis('off');
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter14_graphgeo/01_networkx.ipynb | CC-MAIN-2018-13 | refinedweb | 357 | 60.41 |
maven-2 423
- How can I create an executable JAR with dependencies using Maven?
- How do I tell Maven to use the latest version of a dependency?
- Can I add jars to maven 2 build classpath without installing them?
- differences between dependencymanagement and dependencies in maven
- Run a single test method with maven
- How to configure encoding in Maven?
- Maven parent pom vs modules pom
- Find Oracle JDBC driver in Maven repository
- Including dependencies in a jar with Maven
- Maven Modules + Building a Single Specific Module
- Convert Existing Eclipse Project to Maven Project
- Maven dependency for Servlet 3.0 API?
- force Maven to copy dependencies into target/lib
- Maven command to determine which settings.xml file Maven is using
- run main class of Maven project
- Build Maven Project Without Running Unit Tests
- What causes imported Maven project in Eclipse to use Java 1.5 instead of Java 1.6 by default and how can I ensure it doesn't?
- Making Maven run all tests, even when some fail
- Is there a simple way to remove unused dependencies from a maven pom.xml?
- Maven: add a dependency to a jar by relative path
- Exclude all transitive dependencies of a single dependency
- Maven artifact and groupId naming
- Differences between Ant and Maven
- Spring 3.0 - Unable to locate Spring NamespaceHandler for XML schema namespace
- Importing maven project into eclipse
- Sharing Test code in Maven
- Get source jar files attached to Eclipse for Maven-managed dependencies
- Best practices for copying files with Maven
- Retrieve version from maven pom.xml in code
- Maven compile with multiple src directories
- Maven: Command to update repository after adding dependency to POM
- Get Maven artifact version at runtime
- How do I get my Maven Integration tests to run
- SLF4J: Class path contains multiple SLF4J bindings
- Add a dependency in Maven
- Maven check for updated dependencies in repository
- Warning - Build path specifies execution environment J2SE-1.4
- How to get Maven project version to the bash command line
- Why do so few people use Maven? Are there alternative tools?
- Building executable jar with maven?
- Difference of Maven JAXB plugins
- How do I install Maven with Yum?
- How can I download a specific Maven artifact in one command line?
- Which maven dependencies to include for spring 3.0?
- How to configure Eclipse build path to use Maven dependencies?
- Should we use Nexus or Artifactory for a Maven Repo?
- Maven: how to do parallel builds?
- Maven: best way of linking custom external JAR to my project?
- How to read an external properties file in Maven
- How can I get Maven to stop attempting to check for updates for artifacts from a certain group from maven-central-repo? | http://code.i-harness.com/en/tagged/maven-2 | CC-MAIN-2018-51 | refinedweb | 446 | 61.77 |
This patch implements TIP #112: Ensembles
See for details
Apply with 'patch -p0 <ensemble.patch'
IP - Comment Removed: 130.88.1.31
data_type - 210894
Logged In: YES
user_id=79902
Patch (plus docs) applied to HEAD
File Deleted - 60189:
File Added - 61598: ensemble.patch
Logged In: YES
user_id=79902
Patch that implements API Revision 2.23
File Deleted - 59429:
File Added - 60189: ensemble.patch
Logged In: YES
user_id=79902
Patch that implements API Revision 2.17
File Deleted - 59327:
File Added - 59429: ensemble.patch
Logged In: YES
user_id=79902
Patch that implements API Revision 2.14
File Deleted - 59320:
File Added - 59327: ensemble.patch
Logged In: YES
user_id=79902
Patch that implements API Revision 2.9
File Deleted - 59187:
File Added - 59320: ensemble.patch
Logged In: YES
user_id=79902
Another improvement. API Revision 2.8
File Deleted - 59014:
File Added - 59187: ensemble.patch
Logged In: YES
user_id=79902
New patch time. This one is supposed to implement Revision 2.5
Logged In: YES
user_id=79902
Current code (patchfile 59014) corresponds to Revision 2.2
of the TIP.
Since I don't understand exactly what I'd do to make use of
the TIP90 mechanisms in this context, I'll let someone
explain that stuff a bit more first. :^)
I'm still not yet 100% sure about the -command handling. I
want to write some more tests before committing to a final
version.
I'm not sure yet about what to do about loops. Or whether a
command prefix is really preferable to a script prefix for
-unknown.
Logged In: YES
user_id=68433
Couple notes on the API --
namespace ensemble create ?cmdName? ?-option value ...?
could be replaced with
namespace ensemble create ?-command cmdName? ?-option value
...?
since currently [namespace ensemble create -command foo] is
an error ("-command is read-only").
The -unknown handler could (should?) use the TIP 90
-returncode / -returnlevel mechanism; that way the ensemble
command can distinguish a TCL_CONTINUE meaning "reparse"
from a TCL_CONTINUE that the resolved command returns. I'd
also suggest treating anything other than a TCL_OK,
TCL_CONTINUE, or TCL_RETURN return code as an "unexpected
return code error".
Beware infinite loops in the -unknown handler:
namespace eval foo {
namespace create -unknown "continue;#"
}
I'd prefer that the -unknown handler be a command prefix,
not a script prefix.
Logged In: YES
user_id=79902
Here's a version with a better API
File Deleted - 58365:
File Added - 59014: ensemble.patch
File Added - 58365: ensemble.patch | https://core.tcl-lang.org/tcl/tktview/786509 | CC-MAIN-2021-31 | refinedweb | 404 | 68.36 |
<snip>
* Starting Bluetooth ...
* Running hid2hci ...
[ ok ]
* Starting hcid ...
[ ok ]
* Starting hidd ...
Can't listen on HID control channel: Address already in use
[ ok ]
</snip>
try disabling the input.service:
1. Edit /etc/bluetooth/input.service
2. Replace Autostart=true to Autostart=false
3. Restart bluetooth (/etc/init.d/bluetooth restart)
You can read more about the input service at:
(In reply to comment #1)
> try disabling the input.service:
Well, that helps, thanks. Doesn't solve the issue though... :)
> You can read more about the input service at:
>
This link appears completely unresponsive...
Same problem here..
Setting Autostart=false helped me.
hmm... does this mean that I have to reconnect the keyboard/mouse with every
reboot/disconnect?
No, but I'm experiencing the same issue with my notebook. Please make sure your
hidd server is started, it also seems to help to keep your adapter visible and
connectable (see bluetooth-applet from bluez-gnome).
(In reply to comment #2)
> Doesn't solve the issue though... :)
I suppose we shouldn't use hidd anymore but instead use the input service dbus
interface:
#!/usr/bin/python
import dbus
bus = dbus.SystemBus()
# service activation
bmgr = dbus.Interface(bus.get_object('org.bluez', '/org/bluez'),
'org.bluez.Manager')
bus_id = bmgr.ActivateService('input')
imgr = dbus.Interface(bus.get_object(bus_id, '/org/bluez/input'),
'org.bluez.input.Manager')
# device creation
path = imgr.CreateDevice('xx:xx:xx:xx:xx:xx')
idev = dbus.Interface (bus.get_object(bus_id, path), 'org.bluez.input.Device')
# host initiated connection
idev.Connect()
###########################
Could you please check if this works for you? It doesn't work for my dinovo :(
(In reply to comment #6)
> Could you please check if this works for you? It doesn't work for my dinovo :(
Nope, doesn't work... Plus, I really kinda prefer the old way. :)
hidd is gone in 3.10.1 The new service architecture is what should be used. If
you want the old daemon use USE="old-daemons". Please test this using 3.10.1:
ver 3.10.1:
Add option to disable installation of manual pages.
Fix input service encryption setup.
Fix serial service methods.
Fix network service connection handling.
Provide a simple init script.
the encryption setup could be what's borking it
Hi! I have upgraded to bluez-utils-3.10.1 with USE="-cups -debug -examples hal
-old-daemons -test-programs usb" and now my bluetooth mouse no longer works...
With bluez-utils-2.25-r1 it worked very well (automatic pairing without any
user intervention).
With bluez-utils-3.10-r1 automatic pairing no longer worked, but I could anyway
use `hidd --search` or `hidd --connect` to manually connect to my mouse.
Now hidd is gone and I cannot use my mouse :(
Recompiling with USE="old-daemons" isn't a solution IMHO, because those daemon
are deprecated, and I'd prefer to go back to the old behaviour of
bluez-utils-2.25 (automatic pairing of input devices) anyway...
Btw, that little script from comment #6 does work for me... Where is it
supposed to be put?
19:17 < holtmann> It works fine actually. However they have to repair the
device.
Please try repairing the device.
(In reply to comment #11)
> 19:17 < holtmann> It works fine actually. However they have to repair the
> device.
>
> Please try repairing the device.
>
How?
> Btw, that little script from comment #6 does work for me... Where is it supposed to be put?
I think it should be adopted by bluez-gnome / kdebluetooth or another wizard
thingy.
Sorry guys, but I really have some problems in understanding how this whole new
"Services" thing works... For now I will use the script posted in comment #6 to
connect to my mouse. I hope that kdebluetooth or some other tool will provide
an easy and user-friendly interface to bluez as soon as possible...
Well...now the python script doesn't work anymore! :O
Traceback (most recent call last):
File "./MightyMouse-connect.py", line 13, in ?
path = imgr.CreateDevice('00:14:51:C2:9B:F6')
File "//usr/lib/python2.4/site-packages/dbus/proxies.py", line 63, in
__call__
return self._proxy_method(*args, **keywords)
File "//usr/lib/python2.4/site-packages/dbus/proxies.py", line 134, in
__call__
args,
File "//usr/lib/python2.4/site-packages/dbus/connection.py", line 595, in
call_blocking
reply_message = self.send_message_with_reply_and_block(
dbus.DBusException: org.bluez.input.Error.AlreadyExists: Input Already exists
Created an attachment (id=119461) [edit]
hidtool.py
Improved script
(In reply to comment #16)
> Created an attachment (id=119461) [edit]
> hidtool.py
>
> Improved script
>
Reopening this bug until we have hid reported working
(In reply to comment #12)
> > Please try repairing the device.
> How?
Well, the mouse should have a button to do connect/pairing. Press it and
`hcitool scan` if dbus doesn't pick it up - this has nothing to do with repair
as in hardware repair of a broken thing, I think you got confused :)
(In reply to comment #14)
> Sorry guys, but I really have some problems in understanding how this whole new
> "Services" thing works...
You are not the only one who's unhappy with this:..
Could someone please (re)try the input service with the latest bluez-gnome-0.7
(now in portage). The new bluetooth-applet should ask for authorisation for
incoming requests. hidtool.py is still required to add the input devices.
(In reply to comment #19)
Yes, bluez-gnome-0.7 adds itself as authorization agent.
Though I had to make sure that the "Automatically authorize..." setting was set
since it will be quite hard to click on the notify balloon until the mouse is
working.
So to summarize:
I have input.services, autostart=false;
I ran hcitool scan to get the addresses of the keyboard and mouse.
Used the hcitool.py script to add the addresses:
hcitool.py --connect <myaddress>
And I have to run hcitool.py without arguments on _every_ reboot to trigger the
input daemon.
As a side note, restarting my bluetooth gives:
* Service bluetooth stopping
* Service bluetooth stopped
* Service bluetooth starting [ !! ]
* Service bluetooth started
(In reply to comment #20)
> And I have to run hcitool.py without arguments on _every_ reboot to trigger the input daemon.
If I set input.service autostart=true, I don't have to run hcitool.py on every
reboot.
Thanks
(In reply to comment #21)
> (In reply to comment #20)
> > And I have to run hcitool.py without arguments on _every_ reboot to trigger the input daemon.
>
>
> If I set input.service autostart=true, I don't have to run hcitool.py on every
> reboot.
>
That's good to hear. Maybe add a hid use flag to make it autostart
automatically.
I think we'd better use the old-daemons use flag to disable the input service,
maybe we should add the hidd init.d script to start the hid daemon.
(In reply to comment #23)
> I think we'd better use the old-daemons use flag to disable the input service,
> maybe we should add the hidd init.d script to start the hid daemon.
>
Well as long as the input service has autostart=false, I don't see any problem
in always installing it besides of course little bloat. Yes I think it would be
prudent to add a hidd init script for now with the USE="old-daemons" use flag.
Created an attachment (id=119911) [edit]
hidtool.py
> I don't see any problem in always installing it
Okay, we are talking about the same thing, I don't want to disable the input
service, just setting autostart to false when old-daemons is enabled.
I've updated the hidtool.py script so one can re-connect devices by 'path',
list productid / vendorid and some cosmetic fixes :)
(In reply to comment #25)
>
> > I don't see any problem in always installing it
> Okay, we are talking about the same thing, I don't want to disable the input
> service, just setting autostart to false when old-daemons is enabled.
>
+*bluez-utils-3.11 (24 May 2007)
+
+ 24 May 2007; Petteri Räty <betelgeuse@gentoo.org>
+ +files/3.11/conf.d-hidd, +files/3.11/init.d-hidd,
+ +bluez-utils-3.11.ebuild:
+ Version bump. Added an init script for the hidd daemon when the old-daemons
+ use flag is on as suggested in bug #178160. The input service is also
disabled
+ by default when the old-daemons use flag is on.
+
20:51 < Betelgeuse> holtmann: do you think it's better to install hidd or that
custom script?
20:52 < holtmann> No No and No. The connect or create is only needed once.
After that the HID connects back to you. It works this way. Period.
20:52 < Betelgeuse> holtmann: ok
20:52 < Betelgeuse> holtmann: so I install the helper and instruct people to
run it the first time?
20:53 < holtmann> Have a script using input API is perfectly fine. I personally
would prefer to use it C only. So no Python dependency for bluez-utils.
So basically for the new service architechture we need some kind of a helper
installed. Either what Dick wrote or installing hidd. I think
kdebluetooth-1.0_rc3 has something for it too.
I suppose gnome-bluetooth will get that functionality too (there is some wizard
code already in the tarball).
Maybe I will rewrite hidtool.py to a C program.
(In reply to comment #28)
> I suppose gnome-bluetooth will get that functionality too (there is some wizard
> code already in the tarball).
> Maybe I will rewrite hidtool.py to a C program.
>
kdebluetooth 1.0_beta3 should have a wizard for it too
being in python is not a problem for us but you could get your script upstream
if you write it in C
Both bluez-gnome and kdebluetooth have GUI support for input devices nowadays
and jakub reports that they work so marking this as fixed. | http://bugs.gentoo.org/178160 | crawl-002 | refinedweb | 1,673 | 60.92 |
"graham" <graham73 at telocity.com> wrote in message
news:B63CA023.172F7%graham73 at telocity.com...
[snip]
> > You may dislike having to be explicit about the details
> > of the new function object -- 'use the code of local
> > function add, use the current variable look-up settings
> > as the "global" namespace' -- and wish new.function
> > defaulted them for you, but it doesn't. Big deal.
>
> Actually it is a big deal. When you do
>
>     c = a + b
>
> to construct the object c (say a and b are ints, although in this
> case c isn't an object, but that doesn't change me point), you don't

But of course it IS an object, just like a function-object is one.
That is, by Python definition. If you're thinking 'class-instance'
when you say 'object', then neither ints nor functions in Python are
'objects' in THIS sense.

> have to say where you are getting a and b from (which environment or
> namespace or whatever you want to call it). But if I want to
> define a function I do have to be explicit about where I am
> getting objects used in the function definition from. So functions

No! Absolutely no difference.

    c = lambda x, y=a, z=b: x + y + z

You don't have to be any more explicit about where a and b are
'coming from' (which environment or namespace) in the definition of
this object (which you're binding to c) than you do in the definition
of the object that YOU were binding.

> are not treated like other objects, and hence are not first class.

They're treated exactly like every other object, and thus are
first-class.

Alex
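The distinction being argued here is easy to verify in a Python session; the following sketch uses illustrative names (a, b, add) that are not from the thread:

```python
a, b = 2, 3

# A def'd function finds a and b through the normal scoping rules at
# call time -- no explicit environment needs to be named, just as in
# the bare expression a + b.
def add(x):
    return x + a + b

# The lambda from the post instead freezes the *current* values of a
# and b as default arguments when the function object is created.
frozen = lambda x, y=a, z=b: x + y + z

assert add(1) == 6
assert frozen(1) == 6

a = 100                   # rebind a: add sees the new value ...
assert add(1) == 104
assert frozen(1) == 6     # ... while frozen keeps the captured one

# Either way, the function is an ordinary first-class object.
assert callable(add) and add.__name__ == "add"
```

Both forms are first-class objects; the only difference is whether names are resolved at call time (normal lookup) or captured at definition time (default arguments).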
UWP-039 - Adaptive Layout with Device Specific Views:
PDF: Coming Soon
HI Bob,
Great series.
I have been watching, reviewing, performing the exercises, and documenting each section.
This lesson, UWP-040, however, has me stuck. As I could not get my code to work, I ended up copying your code (Book.cs, MainPage.xaml, MainPage.xaml.cs) and still had the same issue. If you could please share your thoughts, I would be greatly appreciative.
Issue: with the following: <DataTemplate x:
error message: The name "Book" does not exist in the namespace "using:xBindDataExample.Models".
@Dan: I'm willing to bet that either (1) your folder is named Model, not Models, or (2) the Book class is not in the Models namespace. I would take a close look at the Book class definition, the EXACT name of the folder, namespace, class. If it's not that, then I would probably need to see your code.
Hi Bob,
Thank you for your quick response. I was able to get the same code working, without any changes, by completing the following:
1. Exiting out of VS 2015
2. Launching the project (xBindDataExample)
3. Without editing the code, Running the Code.
3a. If I tried to edit the code, before running, I received the squiggly line under the x:DataType bind name.
4. I can now continue to edit the code.
Note: If I edit the class name and the DataType bind to that new name, I get the squiggly line error again. Exiting out of VS and running code as soon as I enter the project seems to correct the issue again and allow me to continue.
I can't explain this strange occurrence, but thank you again for your quick response. I have so many ideas for applications; I am excited about learning through your series and starting my adventures.
Bob, this is odd. I am making my own app (Universal Windows)
And when I compile my XAML with my Gridview, including this "ItemsSource="{x:bind teamRow3}"
The compiler complains "Error bind is not supported in a Windows Universal project"
Any idea why I cannot use x:bind in my own Universal App?
RESOLVED: its x:Bind (it needs an upper case B) what a horrible compiler message :)
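For reference, the working (capital-B) form looks roughly like this. The `Books` collection and `Title` property are assumptions based on the thread, not code Bob posted:

```xml
<!-- On the Page element: xmlns:models="using:xBindDataExample.Models" -->
<GridView ItemsSource="{x:Bind Books}">
    <GridView.ItemTemplate>
        <DataTemplate x:DataType="models:Book">
            <TextBlock Text="{x:Bind Title}" />
        </DataTemplate>
    </GridView.ItemTemplate>
</GridView>
```

Unlike classic `{Binding}`, `{x:Bind}` is compiled and case-sensitive, which is why the lowercase `x:bind` fails at build time.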
I really got stuck over here.
When you create your Book under Model, does it mean that your Model updates the View? I thought Windows follows MVVM and we would require a ViewModel class to link to the view. This is what confuses me.
@BobTabor: I think a comment should be made that, before continuing at 3:07, you should build your project in order to get that class compiled.
As I have seen a lot of Bob's excellent screencasts, I know he is a fan of rebuilding and saving all files a lot. That helped me figure this out. Just copy his behavior - it's all built on years of experience!
I can't say this enough, Bob you are an awesome teacher! Keep it up!
Hi Bob,
Thank you for the great videos. I really need help on the listview. Many samples out there describe
Mauriez is absolutely correct - for the Book class to be recognized in the namespace, it MUST be built first...
This explained a lot about gridviews to me, thanks. I have a question though. Let's say I want to update one of the items in the gridview. For example, I add a button, and set that buttons action to this:
Books[0].Title = "hello";
Bindings.Update();
If this was outside of the gridview, it would update, however, the gridview does not show "hello" for the title of the first book. How would I get this behavior?
Hey Bob, I am getting an error related to GetBooks: "A namespace cannot directly contain members such as fields or methods".
dini 2.0.0
INI-like format parser written in D.
To use this package, put the following dependency into your project's dependencies section:
dini
dini is a library written in the D programming language
that allows you to read and write INI configuration files with ease.
Features
Easy to use
Documentation and examples help you understand the library. It's also very nice to use :).
Well documented
The code is well documented. If you find something that isn't, be sure to open an issue about it.
Variable lookups
You can "paste" defined variables values in values using
%variable%
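A small sketch of what a lookup looks like in a file (section and key names here are mine, not from the docs):

```ini
[paths]
root=/srv/app
; %root% below is replaced with the value defined above
logs=%root%/logs
```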
Section inheriting
Sections can inherit values from other sections
Configurable
Since version 2
You can define custom quotes and comments, and use a custom type to store values (reader only).
Also, if you want to create custom data from INI, you can use
INIReader to construct one.
NOTE: The current development version -
2.0.0 - is backwards API compatible; if you have any compatibility issues, please report them.
Quick start
Installation
Stable version
{ ... "dependencies": { "dini": "~> 1.0.1" } ... }
Latest version
{ ... "dependencies": { "dini": "~> 2.0.0-rc" } ... }
Usage
Let's check how it works in real life. In the examples, we'll use the following configuration file:
[def]
name1=value1
name2=value2

[foo : def]
name1=Name1 from foo. Lookup for def.name2: %name2%
Now, lets try to parse it, we can do it with using code similar to:
import std.stdio;
import dini;

void main()
{
    // Parse file
    auto ini = Ini.Parse("path/to/file.conf");

    // Print foo.name1 value
    writeln(ini["foo"].getKey("name1"));
}
You can also set INI variables before parsing:
import std.stdio, std.path;
import dini;

void main()
{
    // Create ini struct instance
    Ini ini;

    // Set key value
    ini.setKey("currentdir", getcwd());

    // Now, you can use currentdir in ini file
    ini.parse();

    // Print foo.name1 value
    writeln(ini["foo"].getKey("currentdir"));
}
This makes
%currentdir% available in the configuration file.
Global Inheriting
If you would like to inherit sections that are in another one, you can use
. at the beginning to start from global scope:
[a]
[a.b]

[b]

; Note the dot at the beginning
[b.c : .a.b]
Global lookups
The same goes for variable lookups:
[a]
[a.b]
var=test

[b]
[b.c]
var=%.a.b.var%
- Registered by Max Alibaev
- 2.0.0 released 3 years ago
- robik/DIni
- Boost License
- Dependencies:
- none
Hi!
We have a bunch of OSB12 instances. We have SOAP services (Web Request Service) and Oracle REST resource services (Web services). The names detected for the "REST" kind are "Oracle Service Bus REST resource" and "_OSB_REST_Resource_randomnumber", while the SOAP ones get a service name from the URI inside.
Did anyone else have this "problem"? We need to "better detect" the services. Using the server naming rules for those "Web Services" (OSB REST) doesn't get us anywhere; the only option that shows something is the same name as the out-of-the-box one.
Solved! Go to Solution.
Is there a separate process for each instance of the service? Or is it the same one? If it is separate, you can for example set the env variable DT_CUSTOM_PROP (...) for each, and then use process properties for service creation.
Sebastian
@Dante P.
Do you mean you'd like to have OSB 12.x Proxy and Business Services reported, instead of default WS, including namespace and method?
If so, it's a case for custom service configuration. Please let me know.
Best, Slawek
Hi! We want to improve the _OSB_REST_Resource randomnuber to something more.... human. like in the soap services, But we cant find how....
Dante,
I don't want to elaborate and keep you busy with large amount of docs, but quite ok intro might be found here:
Proxy and Business are OSB specific. Customers of mine (large financial institutions) are monitoring their OSB 12c with Dynatrace. Working together, we prepared dedicated sensors, which are helping us to detect "custom services" for both - the Proxy and the Business side of the OSB calls.
For the Proxy, I'd suggest to check:
com.bea.wli.sb.pipeline.MessageProcessor.processRequest(com.bea.wli.sb.pipeline.RouterContext)
RouterContext's getPipelineContext() will let you see the "human readable" Proxy service name.
For the Business:
com.bea.wli.sb.transports.TransportManagerImpl.sendMessageToService(com.bea.wli.sb.transports.ServiceTransportSender, com.bea.wli.sb.transports.TransportSendListener, com.bea.wli.sb.transports.TransportOptions)
ServiceTransportSender's getEndPoint().getServiceRef() will let you see the "human readable" Business service name.
Both Classes are well documented.
These are good to start with. Then, you'd have branches, nodes, pipelines, etc.
PS
I know you've asked for OSB Rest, but that's not necessarily related to so-called Business Service.
BTW
It's worth to know, OSB was designed to perform async operations - don't be surprised if you see http response handled in different thread than the initial request - which might be a challenge if OSB Admins wants you to measure the performance of Business part of of the request.
Thanks! I will start looking into this in our QA environment. I thought to add a namespace (currently detected as - ) but nobody knows how to - even the internal OSB support.
Thanks!!
Same situation here, Could you please share your findings?
regards,
FP
Hey. No need to create custom devices....
I managed to get it working with the new Service Detection API.
Thanks for this - it helped us heaps to get some of the OSB monitored... What we are finding, though, is that for calls to OSB (Proxy or Business Service side) the recorded time is only 1-2 ms for a 20 ms call. Any way to add anything to get more sensors to pick up the whole time taken, and not just what looks like an async call? I'm sure all I need is another entry point for the http response...
Creating the new services decouples all the things the REST service had: those OSB_REST_XXXX services are no more, and the generic Oracle REST resource isn't there any more either. Now everything goes to its own unique service.
In this lesson, you’ll learn about type comments. As you saw, annotations were introduced in Python 3, and they haven’t been backported to Python 2. This means that, if you’re writing code that needs to support legacy Python, then you can’t use annotations.
Instead, you can use type comments. These are specially formatted comments that can be used to add type hints compatible with older code. Type comments will not be available in the
__annotations__ dictionary. To add type comments to a function, you do this:
def func(arg):
    # type: (str) -> str
    ...
For variables, add the type comment on the same line:
my_variable = 42 # type: int
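The same comment syntax also works for variables holding containers, using types from the `typing` module. This goes slightly beyond the example above, but it is standard Mypy behavior (the variable names here are made up):

```python
from typing import Dict, List

names = []   # type: List[str]
scores = {}  # type: Dict[str, int]

# The comments don't change runtime behavior; they only guide the checker.
names.append("Gwen")
scores["Gwen"] = 42
```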
The type comments are just comments, so they can be used in any version of Python. Try adding type comments to the function from the previous lesson:
>>> import math
>>> def circumference(radius):
...     # type: (float) -> float
...     return 2 * math.pi * radius
...
>>> circumference(4.5)
28.274333882308138
>>> circumference.__annotations__
{}
A type comment must start with the
type: literal and be on the same line as the function definition or the following line. If you want to annotate a function with several arguments, you write each type separated by comma. You can also write each argument on a separate line with its own annotation:
# headlines.py

def headline1(text, width=80, fill_char="-"):
    # type: (str, int, str) -> str
    return f" {text.title()} ".center(width, fill_char)

print(headline1("type comments work", width=40))

def headline2(
    text,           # type: str
    width=80,       # type: int
    fill_char='-',  # type: str
):                  # type: (...) -> str
    return f" {text.title()} ".center(width, fill_char)

print(headline2("these type comments also work", width=70))

pi = 3.142  # type: float
Run the example through Python and Mypy:
$ mypy headlines.py
$ python3 headlines.py
---------- Type Comments Work ----------
------------------- These Type Comments Also Work -------------------
If you have errors, for instance if you happened to call
headline1() with
67 as the first argument on line 7, and
headline2() with
width="normal" on line 16, then Mypy will tell you the following:
$ mypy headlines.py
headlines.py:7: error: Argument 1 to "headline1" has incompatible type "int"; expected "str"
headlines.py:16: error: Argument "width" to "headline2" has incompatible type "str"; expected "int"
Should you use annotations or type comments when adding type hints to your own code?
francoisg on Dec. 12, 2019
small heads up: 04:00 you call mypy on headlines instead of headline1 | https://realpython.com/lessons/type-comments/ | CC-MAIN-2021-25 | refinedweb | 395 | 65.32 |
Writing Win32 DLLs.
Compiling a DLL
Use the -shared switch to tell the compiler that the generated code is to be put into a DLL. Code compiled for an EXE file will use the optimization assumption that _tls_index==0; such code in a DLL will crash.

A D DLL needs a DllMain() entry point, which typically just forwards to the druntime helper functions:

import core.sys.windows.windows;
import core.sys.windows.dll;

__gshared HINSTANCE g_hInst;

extern (Windows)
BOOL DllMain(HINSTANCE hInstance, ULONG ulReason, LPVOID pvReserved)
{
    switch (ulReason)
    {
        case DLL_PROCESS_ATTACH:
            g_hInst = hInstance;
            dll_process_attach( hInstance, true );
            break;

        case DLL_PROCESS_DETACH:
            dll_process_detach( hInstance, true );
            break;

        case DLL_THREAD_ATTACH:
            dll_thread_attach( true, true );
            break;

        case DLL_THREAD_DETACH:
            dll_thread_detach( true, true );
            break;

        default:
            break;
    }
    return true;
}
Notes:
- DllMain simply forwards to the appropriate helper functions. These setup the runtime, create thread objects for interaction with the garbage collector and initialize thread local storage data.
- The DLL does not share its runtime or memory with other DLLs.
- The first boolean argument to the dll-helper functions specify whether all threads should be controlled by the garbage collector. You might need more control over this behaviour if there are threads in the process that must not be suspended. In this case pass false to disable the automatic handling of all threads.
mydll.d:

module mydll;
import std.c.stdio;

export void dllprint() { printf("hello dll world\n"); }

Compile and link the DLL with:

C:>dmd -ofmydll.dll mydll.d dll.d mydll.def -L/IMPLIB

which will create mydll.dll and mydll.lib. Now for a program, test.d, which will use the dll:
test.d:
import mydll;

int main()
{
    mydll.dllprint();
    return 0;
}
Create an interface file mydll.di that doesn't have the function bodies:
mydll.di:

module mydll;
export void dllprint();
- Notify the GC about external references to a memory block by calling GC.addRange.
mydll.d:

import core.runtime;
import std.c.stdio;
import std.c.stdlib;
import std.string;
import std.c.windows.windows;

HINSTANCE g_hInst;

extern (C)
{
    void gc_setProxy(void* p);
    void gc_clrProxy();
}

extern (Windows)
BOOL DllMain(HINSTANCE hInstance, ULONG ulReason, LPVOID pvReserved)
{
    switch (ulReason)
    {
        case DLL_PROCESS_ATTACH:
            printf("DLL_PROCESS_ATTACH\n");
            Runtime.initialize();
            break;

        case DLL_PROCESS_DETACH:
            printf("DLL_PROCESS_DETACH\n");
            Runtime.terminate();
            break;

        case DLL_THREAD_ATTACH:
            printf("DLL_THREAD_ATTACH\n");
            return false;

        case DLL_THREAD_DETACH:
            printf("DLL_THREAD_DETACH\n");
            return false;

        default:
            break;
    }
    g_hInst = hInstance;
    return true;
}

export void MyDLL_Initialize(void* gc)
{
    printf("MyDLL_Initialize()\n");
    gc_setProxy(gc);
}

export void MyDLL_Terminate()
{
    printf("MyDLL_Terminate()\n");
    gc_clrProxy();
}

DllMain() is in this version as well. This is because the same DLL should be usable from both C and D programs, so the same initialization process should work for both.
- MyDLL_Initialize
When the DLL is dynamically linked via Runtime.loadLibrary() the runtime makes sure that any initialization steps required by the D program are executed after the library is loaded. If the library is statically linked, this routine is not called by the program, so to make sure the DLL is initialized properly we have to do some of the work ourselves. And because the library is being statically linked, we need a function specific to this DLL to perform the initialization. This function takes one argument, a handle to the caller's gc. We'll see how that handle is obtained later. To pass this handle to the runtime and override the DLL's built-in gc we'll call gc_setProxy(). The function is exported as that is how a function is made visible outside of a DLL.
- MyDLL_Terminate
Correspondingly, this function terminates the DLL, and is called prior to unloading it. It has only one job: informing the runtime that the DLL will no longer be using the caller's gc via gc_clrProxy(). Build the DLL with:

C:>dmd -ofmydll.dll mydll.d dll.d mydll.def -g -L/map
Links mydll.obj into a DLL named mydll.dll. mydll.def is the Module Definition File, and has the contents:
LIBRARY MYDLL
DESCRIPTION 'MyDll demonstration DLL'
EXETYPE NT
CODE PRELOAD DISCARDABLE
DATA PRELOAD MULTIPLE

test.d:

import core.runtime;
import std.stdio;
import std.gc;
import mydll;

//version=DYNAMIC_LOAD;

version (DYNAMIC_LOAD)
{
    import std.c.windows.windows;

    alias MyClass function() getMyClass_fp;

    int main()
    {
        HMODULE h;
        FARPROC fp;
        getMyClass_fp getMyClass;
        MyClass c;

        printf("Start Dynamic Link...\n");
        h = cast(HMODULE) Runtime.loadLibrary("mydll.dll");
        if (h is null)
        {
            printf("error loading mydll.dll\n");
            return 1;
        }
        fp = GetProcAddress(h, "D5mydll10getMyClassFZC5mydll7MyClass");
        if (fp is null)
        {
            printf("error loading symbol getMyClass()\n");
            return 1;
        }
        getMyClass = cast(getMyClass_fp) fp;
        c = (*getMyClass)();
        foo(c);
        if (!Runtime.unloadLibrary(h))
        {
            printf("error freeing mydll.dll\n");
            return 1;
        }
        printf("End...\n");
        return 0;
    }
}
else
{
    // static link the DLL
    extern (C)
    {
        void* gc_getProxy();
    }

    int main()
    {
        printf("Start Static Link...\n");
        MyDLL_Initialize(gc_getProxy());
        foo(getMyClass());
        MyDLL_Terminate();
        printf("End...\n");
        return 0;
    }
}

In the dynamic version, the DLL is loaded and unloaded with Runtime.loadLibrary() and Runtime.unloadLibrary().
Running it looks like this:
C:>test
Start Dynamic Link...
DLL_PROCESS_ATTACH
static this for mydll
Hello world!
static ~this for mydll
DLL_PROCESS_DETACH
End...

C:>
17 January 2008 06:27 [Source: ICIS news]
TOKYO (ICIS news)--Shin-Etsu Chemical on Thursday reported an 18.5% increase in third-quarter operating income from the previous year partly on steady sales at its Dutch subsidiary Shin-Etsu PVC.
Consolidated operating income at the Japanese chemical company for the nine months ended 31 December was yen (Y) 213.5bn ($1.9bn), up from Y180.1bn in the same time a year ago.
Net sales rose 7.4% to Y1043.6bn from Y971.3bn year on year, while net income increased 26.7% to Y143.4bn from Y113.2bn.
For polyvinyl chloride (PVC) business operations, while the competitors’ sales fell due to an impact of decreased construction of houses in North America, sales of the PVC subsidiary Shintech in the
However, PVC sales in
For cellulose derivatives, while domestic business was still in the process of recovery after the explosion of the plant in Naoetsu in March 2007, sales at SE Tylose in
As a result, the third-quarter operating income in the organic and inorganic chemicals segment decreased 10% to Y74bn from Y82.2bn a year ago, while net sales were Y533bn, down 0.02% from Y533.1bn.
In the electronics materials segment, the nine-month operating income rose 56.2% to Y120.6bn from Y77.2bn a year ago, while net sales rose 22.5% to Y429.6bn from Y350.7bn.
($1 = Y107)
This example shows how to open a Word document using Java. If you are working with tools where you have to open the document by clicking on it, you can use the java.awt.Desktop API to easily open the document by passing a file object.
Note that any document passed in via the File class will be opened in the default program that the native operating system associates with that file type. Let's look at this example to open a Word document on the Windows operating system.
OpenDocExample.java
package javabeat.net.io; import java.awt.Desktop; import java.io.File; import java.io.IOException; /** * Open Word Document Example * * @author Krishna * */ public class OpenDocExample { public static void main(String[] args) { //Create file object File file = new File("D:\\SampleDoc.docx"); try { //Open the file using Desktop class Desktop.getDesktop().open(file); }catch (IOException exception){ exception.printStackTrace(); } } }
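Desktop is not available on every platform (headless servers, for example), so production code usually checks for support before calling open(). A hedged variation on the example above — the class name and the returned flag are my additions:

```java
import java.awt.Desktop;
import java.io.File;
import java.io.IOException;

public class SafeOpenDocExample {

    // Opens the file with the OS default program, if the platform supports it.
    // Returns true when the open request was actually issued.
    public static boolean open(File file) throws IOException {
        if (Desktop.isDesktopSupported()
                && Desktop.getDesktop().isSupported(Desktop.Action.OPEN)) {
            Desktop.getDesktop().open(file);
            return true;
        }
        System.err.println("Desktop open is not supported on this platform");
        return false;
    }
}
```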
CodePlex - Project Hosting for Open Source Software
Wasn't sure the best way to phrase the question, but essentially, what i want to so is define an interface such as
public interface IWhatsMyName
{
string GetMyName();
}
Then I want two or more services to implement this interface
public class HiMyNameIsA: IWhatsMyName
{
public string GetMyName(){ return "Slim Shady"; }
}
public class HiMyNameIsB: IWhatsMyName
{
public string GetMyName(){ return "Marshal Mathers"; }
}
public class WhatsMyMFName: IWhatsMyName
{
public string GetMyName(){ return "Snoop"; }
}
Then in some other service I want to say something like
//find all implementors of the IWhatsMyName inteface
foreach(IWhatsMyName service in ?)
{
Output.WriteLine( service.GetMyName() );
}
Which should yield...
Slim Shady
Marshal Mathers
Snoop
How can this be done?
Any help is appreciated. Thanks.
R.
If you derive from IDependency you get such functionality for free: requesting
IEnumerable<IMyDependency>
in the ctor gives you all instances, which you can then loop over.
You could also derive your interfaces from IEventHandler. This is something special, with it you could request simply IMyDependency in the ctor, but when you call its methods actually that method of all registered IMyDependency types is called. You can only
have void methods this way, though. Event handlers are more than that, they also make it possible to call into the interface without having a static dependency on the type, but that's another story. See
this and this.
That's beautiful. Thanks.
I had thought about using IEventHandler but really need to use the Interface to return results. In my example in the original question I return a string but in fact I will want to return a IEnumerable of objects from each service and then aggregate the results....
interface IMessageFeedSource
{
IEnumerable<IMessageFeedObject> GetNewEvents(DateTime startDate, DateTime endDate)
}
I think I'm going to try out
IEnumerable<IMessageFeedObject>
and see how it works!
Thanks for the quick response :)
Ryan
You need to build a mobile app that lists some data and shows details on a separate screen once user taps an item? Follow this complete guide on how to master React Native List and Navigator components to switch between a list and a detail screen.
If you don’t have any experience with React Native check out 3 Steps to Build Your First Mobile App with React Native. It explains the basics and how to set up required software to get started.
Table of contents
- What are We Building
- Set up the Stage
- Let’s Get Started
- App Component
- List Component
- Row Component
- Movie Screen
- The Best Part
- Source Code
- What’s Next
What are We Building
Our goal is to build an app that lists top 10 movies of 2016 in a beautiful list with background images and detail on each movie on a separate screen. The app has to work on both, iOS and Android devices. And here is what we want it to look like.
Set up the Stage
Let’s start off by creating a new app.
Create a New App Project
Open Terminal App and run these commands to initialize a new project and run it the emulator.
react-native init Top10MoviesOf2016; cd Top10MoviesOf2016 && react-native run-ios;
After that’s done, you should see React Native boilerplate app running in an emulator.
Enable Hot Reloading
Once your app is up and running, press ⌘D and select Enable Hot Reloading. This will save you some time having to reload the app manually every time you make a change.
To see how that works just open
index.ios.js file and change Welcome to React native to something else and save the file. The app should get updated immediately, without manually reloading it. Isn’t it great?
Let’s Get Started
Open
index.ios.js file and delete all of its content. Do the same for
index.android.js. Let’s build our app from scratch to get a better understanding of how things work.
If you run your app on an iOS device, then
index.ios.js gets executed, or
index.android.js if you run it on Android. And, since we need our app to work on both platforms, we are going to have our app code in a separate file called
App.js and then just import it in both
index.ios.js and
index.android.js to avoid repeating the same code twice.
So, go ahead and insert the following code into both,
index.ios.js and
index.android.js files.
import { AppRegistry } from 'react-native'; import App from './App'; AppRegistry.registerComponent('Top10MoviesOf2016', () => App);
All this code does, it just imports
App component, which we haven’t created yet, and registers it as our main app component.
Once you save the file, you’re going to see an error screen. That screen is pretty self-explanatory and just tells us that
App.js file we’re trying to import doesn’t exist. So, let’s fix it.
App Component
Create a file called
App.js. And start off by importing components that we’ll be using.
import React, { Component } from 'react'; import { Navigator, // Allows to navigate between different screens StatusBar, // Allows to hide the status bar Text } from 'react-native';
Next, let’s define a route mapper function that will be responsible for handling navigation between a list and a movie detail screens.
const RouteMapper = (route, navigationOperations, onComponentRef) => { if (route.name === 'list') { return ( // TODO: Add List component <Text>The list is going to be here</Text> ); } else if (route.name === 'movie') { // TODO: Add Movie component for a movie detail screen } };
There is only two routes in our app:
list for a list and
movie for a movie detail screen. We have some temporary text instead of an actual list, for now, just to make sure that our route mapper works as expected, and we’re going to replace it with an actual
List component later. As you noticed, we put those handy to-do notes to remind us to update that code once we build
List and
Movie components.
And finally, let’s define
App class. We’re going to use
componentDidMount lifecycle hook, which runs after a component has been rendered, to hide the status bar. And we’ll use
Navigator component, inside
render() method, that will be handling navigation between screens.
export default class App extends Component { componentDidMount() { // Hide the status bar StatusBar.setHidden(true); } render() { return ( // Handle navigation between screens <Navigator // Default to list route initialRoute={{name: 'list'}} // Use FloatFromBottom transition between screens configureScene={(route, routeStack) => Navigator.SceneConfigs.FloatFromBottom} // Pass a route mapper functions renderScene={RouteMapper} /> ); } }
You can read more about available component lifecycle hooks in the React docs: State and Lifecycle
Let’s check out how our app is looking so far. Go to the emulator, and press ⌘R to reload the app because that error broke hot reloading earlier and you have to reload it manually.
Ok, it looks like
Navigator is working fine and it renders
<Text>The list is going to be here</Text> component by default. That’s great.
List Component
It’s time to start building
List component. Go ahead and create
List.js file and start filling it up with the code.
Import Components
First, import components that we’re going to use.
import React, { Component } from 'react'; import { ListView, // Renders a list RefreshControl, // Refreshes the list on pull down Text } from 'react-native';
Get the Data for the List
The next step is to figure out how are we going to get the data for the list. There are a few ways to get that data.
- Fetch from an API. That’s what you most likely would do in a real world app. You would have some backend for storing and serving the data for your mobile app.
- Load it from a file. You can just store all of your data in a separate file and load it as needed. That would work if you had static data that doesn’t change over time.
- Hardcode into the code. That’s the easiest way that would work for our example app. We’re going to be lazy and hardcode some sample date into the code.
To make it easier for our demo app purposes let’s just add an array of data to
List.js file after import statements.
const demoData = [
  {
    title: 'Zootopia',
    rating: 98,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Hell or High Water',
    rating: 98,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'The Jungle Book',
    rating: 95,
    image: '',
    large: '',
    plot: '... (Idris Elba) forces Mowgli\'s guardian, the panther Bagheera (Ben Kingsley), to shepherd the child to safety in the "man village." Along the way, the boy meets an affable, lazy bear named Baloo (Bill Murray), as well as a snake with hypnotic powers (Scarlett Johansson) and an orangutan (Christopher Walken) who wants to harness...',
  },
  {
    title: 'Love & Friendship',
    rating: 98,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Finding Dory',
    rating: 94,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Hunt for the Wilderpeople',
    rating: 98,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Kubo and the Two Strings',
    rating: 97,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Captain America: Civil War',
    rating: 90,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Sing Street',
    rating: 97,
    image: '',
    large: '',
    plot: '...',
  },
  {
    title: 'Moonlight',
    rating: 99,
    image: '',
    large: '',
    plot: 'The tender, heartbreaking story of a young man\'s struggle to find himself, told across three defining chapters in his life as he experiences the ecstasy, pain, and beauty of falling in love, while grappling with his own sexuality.',
  },
];
It’s an array of movie objects, each of which has a title, rating, image, large image, and plot.
Outline List Component Class
Let’s define
List component class and outline what methods we’re going to need first, and go through each of them after.
export default class List extends Component { /** * Store the data for ListView */ state = {} /** * Call _fetchData after component has been mounted */ componentDidMount() {} /** * Prepare demo data for ListView component */ _fetchData = () => {} /** * Render a row */ _renderRow = (movie) => {} /** * Renders the list */ render() { return <Text>List Component</Text>; } }
You can export multiple classes, variables or functions, but only one can be exported as a default one.
Update App.js to use List Component
Let’s go on a little detour and open up
App.js file to make it use
List component that we just created.
Add import statement first.
import List from './List';
Then, find our to-do note and
<Text>The list is going to be here</Text>
// TODO: Add List component <Text>The list is going to be here</Text>
And replace it with
<List> component
<List navigator={navigationOperations} />
Let’s bring up the emulator and make sure that
<List> component is being rendered.
Continue Working on List.js
Now, let’s go back to
List.js and continue working on it.
Define State
Let’s start with state. We’re going to use state to store movie data for ListView component. Let’s go ahead and add some stuff to our
state = {} definition, so it’d look like following.
state = { // ListView DataSource object dataSource: new ListView.DataSource({ rowHasChanged: (row1, row2) => row1 !== row2, }), // Used for RefreshControl isRefreshing: false, }
new ListView.DataSource creates an instance of
DataSource object, which we’re going to use to fill it up with our movie data and use it for
ListView component to render it on the screen.
Prepare Data for ListView
Next, let’s update
_fetchData_ and
componentDidMount to load the actual data into
this.state.dataSource once component has been mounted.
componentDidMount() { // Fetch Data this._fetchData(); }
_fetchData = () => { // Data is being refreshed this.setState({ isRefreshing: true }); this.setState({ // Fill up DataSource with demo data dataSource: this.state.dataSource.cloneWithRows(demoData), // Data has been refreshed by now isRefreshing: false, }); }
Render Row
Let’s update
_renderRow, which will be rendering each row in the list.
_renderRow = (movie) => { return ( // TODO: Update with Row component <Text>{movie.title}</Text> ); }
Render the List
Let’s update
render() method.
render() { return ( <ListView // Data source from state dataSource={this.state.dataSource} // Row renderer method renderRow={this._renderRow} // Refresh the list on pull down refreshControl={ <RefreshControl refreshing={this.state.isRefreshing} onRefresh={this._fetchData} /> } /> ); }
It returns
ListView component with
dataSource,
renderRow, and
refreshControl props. The latter is not required, so if you’re not planning on refreshing your data, you can omit it.
Let’s check out how our app is looking so far.
Looks great. It just needs some styling work.
Row Component
It’s time to make our list beautiful. And to do so we’re going to create a reusable component called
Row, and then update
_renderRow() method in
List.js to use it instead of simple
Text that we have now.
Let’s start off by creating a new file called
Row.js.
Import Components
Import components we’re going to use first.
import React, { Component } from 'react'; import { Image, // Renders background image StyleSheet, // CSS-like styles Text, // Renders text TouchableOpacity, // Handles row presses View // Container component } from 'react-native'; import Dimensions from 'Dimensions'; // Detect screen size to calculate row height const screen = Dimensions.get('window');
Define Class
Define
Row class.
export default class Row extends Component { // Extract movie and onPress props passed from List component render({ movie, onPress } = this.props) { // Extract values from movie object const { title, rating, image } = movie; return ( // Row press handler <TouchableOpacity // Pass row style style={styles.row} // Call onPress function passed from List component when pressed onPress={onPress} // Dim row a little bit when pressed activeOpacity={0.7} > {/* Background image */} <Image source={{uri: image}} style={styles.imageBackground}> {/* Title */} <Text style={[styles.text, styles.title]}>{title.toUpperCase()}</Text> {/* Rating */} <View style={styles.rating}> {/* Icon */} <Image source={{uri: ''}} style={styles.icon} /> {/* Value */} <Text style={[styles.text, styles.value]}>{rating}%</Text> </View> </Image> </TouchableOpacity> ); } }
It has only one,
render() method, that returns
Image component with some text and icon inside it, wrapped into
TouchableOpacity component to allow user press on rows and navigate them to a separate movie detail screen using a function that we defined in
_renderRow of
List component in
List.js file.
Styles
And lastly, define styles.
const styles = StyleSheet.create({ // Row row: { paddingBottom: 4, // Add padding at the bottom }, // Background image imageBackground: { height: screen.height / 3, // Divide screen height by 3 justifyContent: 'center', // Center vertically alignItems: 'center', // Center horizontally }, // Shared text style text: { color: '#fff', // White text color backgroundColor: 'transparent', // No background fontFamily: 'Avenir', // Change default font fontWeight: 'bold', // Bold font // Add text shadow textShadowColor: '#222', textShadowOffset: { width: 1, height: 1 }, textShadowRadius: 4, }, // Movie title title: { fontSize: 22, // Bigger font size }, // Rating row rating: { flexDirection: 'row', // Arrange icon and rating in one line }, // Certified fresh icon icon: { width: 22, // Set width height: 22, // Set height marginRight: 5, // Add some margin between icon and rating }, // Rating value value: { fontSize: 16, // Smaller font size }, });
Update List.js to use Row Component
Let’s open up
List.js file to make it use
Row component that we just created.
Add import statement first.
import Row from './Row';
Then, find
_renderRow() method with
Text component and a to-do note.
_renderRow = (movie) => { return ( // TODO: Update with Row component <Text>{movie.title}</Text> ); }
Remove to-do note, and replace
Text component with
<Row> component.
_renderRow = (movie) => { return ( <Row // Pass movie object movie={movie} // Pass a function to handle row presses onPress={()=>{ // Navigate to a separate movie detail screen this.props.navigator.push({ name: 'movie', movie: movie, }); }} /> ); }
It returns
Row component and passes movie object as
movie prop, and a function to navigate users to a separate movie detail screen once a row is pressed as
onPress prop.
Let’s bring up the emulator and see how our
Row component is looking.
It looks much better. Great job so far. Hang on for a little while longer, and we’ll be done with the app soon.
Movie Screen
The last thing left to do is a movie detail screen that appears when a user taps on a movie in the list.
Let’s start off by creating a new file called
Movie.js.
Import Components
Import components we’re going to use first.
import React, { Component } from 'react'; import { Image, // Renders background image ScrollView, // Scrollable container StyleSheet, // CSS-like styles Text, // Renders text TouchableOpacity, // Handles button presses View // Container component } from 'react-native';
Define Class
Define
Movie class.
export default class Movie extends Component { // Extract movie object passed as a prop from Row component render({ movie } = this.props) { // Extract values from movie object const { title, rating, large, plot } = movie; return ( <View style={styles.container}> {/* Background image with large image */} <Image source={{uri: large}} style={styles.imageBackground}> {/* Use ScrollView in case plot is too large to fit on the screen */} <ScrollView style={{flex: 1}} > {/* Title */} <Text style={[styles.text, styles.title]}>{title.toUpperCase()}</Text> {/* Rating */} <View style={styles.rating}> {/* Icon */} <Image source={{uri: ''}} style={styles.icon} /> {/* Value */} <Text style={[styles.text, styles.value]}>{rating}%</Text> </View> {/* Plot */} <View style={styles.plot}> <Text style={styles.plotText}>{plot}</Text> </View> </ScrollView> {/* Button container */} <View style={styles.buttonContainer}> {/* Press handler */} <TouchableOpacity // Go to the previous screen onPress={() => {this.props.navigator.pop();}} // Dim button a little bit when pressed activeOpacity={0.7} // Pass button style style={styles.button} > <Text style={styles.buttonText}>CLOSE</Text> </TouchableOpacity> </View> </Image> </View> ); } }
It has only one,
render() method, that returns flexible
View component, which takes up all of the screen space, and contains a background image, movie title, rating, plot, and close button to go back to the list, using
navigator prop that we passed from
RouteMapper in
App.js file.
Styles
And lastly, define styles.
const styles = StyleSheet.create({ // Main container container: { flex: 1, // Take up all screen space backgroundColor: '#333', // Dark background }, // Background image imageBackground: { flex: 1, // Take up all screen space padding: 20 // Add padding for content inside }, text: { backgroundColor: 'transparent', // No background color: '#fff', // White text color fontFamily: 'Avenir', // Change default font fontWeight: 'bold', // Bold font // Add text shadow textShadowColor: '#222', textShadowOffset: {width: 1, height: 1}, textShadowRadius: 4, }, title: { fontSize: 22, // Bigger font size marginTop: 30, // Add space between top screen edge marginBottom: 5, // Add space at the bottom textAlign: 'center', // Center horizontally }, rating: { flexDirection: 'row', // Arrange icon and rating in one line justifyContent: 'center', // Center horizontally }, icon: { width: 22, // Set width height: 22, // Set height marginRight: 5, // Add some margin between icon and rating }, value: { fontSize: 16, // Smaller font size }, plot: { backgroundColor: 'rgba(255,255,255,0.5)', // Semi-transparent white background borderRadius: 10, // Rounder corners marginTop: 40, // Margin at the top padding: 10, // Padding for content inside }, plotText: { color: '#333', // Dark text color fontFamily: 'Avenir', // Change default font fontSize: 15, // Small font size }, buttonContainer: { marginTop: 20, // Add some margin at the top }, button: { backgroundColor: '#617D8A', // Color the button padding: 15 // Padding inside }, buttonText: { color: '#fff', // White button text fontFamily: 'Avenir', // Change default font fontWeight: 'bold', // Bold font textAlign: 'center', // Center horizontally } });
Update App.js to use Movie Component
Let’s open up
App.js file to make it use
Movie component that we just created.
Add import statement first.
import Movie from './Movie';
Then, find our to-do note
} else if (route.name === 'movie') { // TODO: Add Movie component for a movie detail screen }
And replace it with
<Movie> component
} else if (route.name === 'movie') { return ( <Movie // Pass movie object passed with route down as a prop movie={route.movie} // Pass navigationOperations as navigator prop navigator={navigationOperations} /> ); }
Now, let’s bring up the emulator and click on any movie to see how
Movie component is working.
Looks pretty good. Congratulations! You did a great job!
The Best Part
And the best part of it is that you don’t have to do anything at all to adapt your code to run on Android. Just launch Android emulator and run
react-native run-android in the terminal.
react-native run-android
Source Code
You can get the source code of the app using git. Just run in terminal:
To download the code execute in terminal:
git clone && cd top-movies-of-2016;
To install React Native along with all required modules and launch the app execute:
npm install; react-native run-ios;
What’s Next
You’ve learned a lot on how to build beautiful lists for your apps. You can dive deeper into React Native documentation to find out more features available for
ListView component. And if you have any questions, just leave a comment.
I hope you enjoyed the tutorial. Subscribe to find out about new tutorials and learn how to build amazing apps!
Pingback: Listview work on ios but does not work on android | FYTRO SPORTS() | https://rationalappdev.com/react-native-list-app-complete-how-to-guide/ | CC-MAIN-2020-40 | refinedweb | 3,073 | 64.71 |
In this third Swing tutorial we're going to add some text to our JTextArea when we click the button.
Its quite simple to do this, we need to add an Action Listener to the button. An Action Listener basically listens to the button, when the button is clicked it tells the Action Listener and we can program the Action Listener to do anything when it knows that the button has been clicked.
To do this add this code to your "MainFrame" class:
btn.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { } });
You need to import Action Listener and Action Event, CTRL + SHIFT + O(CMD + SHIFT + O on Mac).
As you can see we add an Action Listener to the button we created in the previous tutorial, called "btn", then we created a new Action Listener to add to it. Inside the Action Listener is a method, called actionPerformed, inside this method we can tell the Action Listener what to do when the button is clicked.
So to complete our goal of adding text to our JTextArea every time the button is clicked lets add 1 line of code to the actionPerformed method:
textArea.append("Hello\n");
We type the name of our JTextArea, in my case "textArea", and append a string "Hello\n", you can change this, the "\n" in the string creates a new line after the text so it doesn't all go on the same line.
If we run this now we can type in our JTextArea and when we click the button it adds "Hello" to it.
MainFrame.class
import java.awt.BorderLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener;); btn.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { textArea.append("Hello\n"); } }); } } | https://caveofprogramming.com/guest-posts/swing-tutorial-3-reacting-to-button-clicks.html | CC-MAIN-2018-09 | refinedweb | 291 | 60.65 |
In 2004 I was running a hedge fund consultancy, where I advised many of the world’s leading hedge funds. With this perspective, I wrote an article called The Tao of Alpha. The article offers a unique viewpoint on how alpha was then used and understood. We have transcribed the original article below.
The Global Alpha Shortage
Even the least sophisticated of investors understand that alpha is something to be pursued. Alpha is good. And more alpha is better. Not surprisingly then, most marketing documents are laced with the word. It slips easily off the tongue of marketers and managers. It appears in conference titles, as in “Portable Alpha Asia 2005” or “Tapping into Alpha”. It can be found in the actual fund names, like “Alpha Simplex”, “Alpha Partners”, and “Absolute Alpha”. A particular group in New York, realizing more is better, called their company “Double Alpha”.
The irony here is that many an investor and a fair few hedge fund professionals have only a vague understanding of what alpha really is, let alone the important principles that underlie it. Alpha is not a synonym for high or consistent returns. Getting high returns out of "pure alpha" is actually extremely difficult. Alpha, although unambiguous in its formal definition, is a term that is sloppily used in the industry. Though investors expect it, many don't actually know if they are getting any.
While it is indeed difficult for an investor to check a manager's performance for the presence of alpha, this dilemma is nothing compared to the manager's problem of trying to get some in the first place. Unfortunately for all, alpha is not something that can be "mined", "harvested" or "produced" as many a hedge fund claims. Contrary to numerous marketing brochures, it is in fact not "generated" so much as "taken". And it is taken directly from other investors.
This article begins by presenting an intuitive explanation of why there is a total of zero alpha in the world. It then goes on to argue that most investors would do well to content themselves with no alpha anyway. But for those who insist on it, the article ends with a look at hedge funds as an alpha seeking investment.
The Basics of Beta
Alpha and beta are simply the constant and slope terms of the linear regression of return against some benchmark. While perfectly concise, mathematics is not the shortest path to an intuitive understanding of this topic. A better approach is to study the idea from a qualitative perspective emphasizing interpretation, (rather than derivation). For this reason, the following discussion eschews mathematics (any textbook introducing corporate finance will have complete coverage of the topic from that angle) in favor of a conceptual approach.
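For readers who do want the quantitative version, the regression is easy to sketch. The snippet below (all numbers are made up for illustration) plants a known alpha and beta in simulated monthly returns and then recovers them by ordinary least squares: the fitted slope is beta, the intercept is alpha.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 60
benchmark = rng.normal(0.008, 0.04, months)      # hypothetical monthly market returns
true_alpha, true_beta = 0.002, 1.3               # the "answer" planted in the data
fund = true_alpha + true_beta * benchmark + rng.normal(0.0, 0.01, months)

# OLS regression of fund returns on benchmark returns:
# slope is beta, intercept is alpha
beta, alpha = np.polyfit(benchmark, fund, 1)
print(f"alpha ~ {alpha:.4f} per month, beta ~ {beta:.2f}")
```

With real data the residual noise is much larger, which is precisely why a manager's "alpha" is so hard to verify from a short track record.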
To achieve this, we explore alpha and beta in the realm of a simple stock market and then generalize to the entire investment universe. In this market, as in the real world, every stock issued by a company is owned by some person or organization somewhere. Thus the entire set of all stocks, “the market”, is owned by some (large) group of investors. This group will share all profits and losses between them via dividends and capital gains.
Assume that on the day that I decide to make an investment into this stock market, its capitalization is $100 billion. That means, based on the current price of every stock, the cost to buy every last one of them is $100 billion. After some period of time, the market gains 10%. The market cap would now be $110 billion, which means there is $10 billion of positive gain to be shared amongst the group of investors who own the stocks. Collectively, the community of investors has put $100 billion at risk and made 10% in return. Obviously, depending on which stocks the various investors hold, some will make more than 10%, others less. But the important thing is that the average return will be exactly 10% because there is, after all, only $10 billion to be shared amongst $100 billion of investment capital. Furthermore, there was some average risk borne by each investor, because, by the same logic, there was never more than $100 billion to be lost. This latter point is more abstract, since "risk" is not tangible in the way that return is. However, the concept is critical: No matter how one chooses to think about "risk", there is only so much of it in the market and that which is present is shared amongst the entire investment community. If an investor wants to share in the return promised by the market, he must be willing to share in the risk. Quite intuitively, the more return he wants, the more of the risk he must be willing to assume. But collectively, investors share a finite amount of risk and return. The essence of alpha and beta is in the division of the two amongst investors.
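The conservation argument can be checked numerically. In the sketch below (a toy market with assumed capitalizations and randomly generated stock returns, calibrated so the market gains exactly 10%), the shares of each stock are split arbitrarily among three investors. Individual returns differ, but the capital-weighted average is always exactly the market's 10%.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks = 5
caps = np.array([40.0, 25.0, 15.0, 12.0, 8.0])   # $bn market caps, total $100bn
stock_returns = rng.normal(0.10, 0.15, n_stocks)
# Shift returns so the cap-weighted market return is exactly 10%
stock_returns += 0.10 - (caps @ stock_returns) / caps.sum()

# Split every stock's shares arbitrarily among three investors
split = rng.dirichlet(np.ones(3), size=n_stocks)  # (5, 3): each row sums to 1
holdings = split.T * caps                         # (3, 5): $bn held per investor per stock

investor_capital = holdings.sum(axis=1)
investor_returns = (holdings @ stock_returns) / investor_capital
avg = (investor_capital @ investor_returns) / investor_capital.sum()
print(investor_returns)   # individual returns vary...
print(avg)                # ...but the capital-weighted average is 0.10
```

However the shares are divided, there is only $10 billion of gain to go around, so the average cannot be anything other than 10%.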
Now regardless of what any other investor does, if I invest in a slice of the entire market, making my portfolio a perfect microcosm of the whole thing then, relative to respective sizes, both the market and I will bear the same risk and get the same return. Making my portfolio a microcosm of the entire market is as simple as allocating my capital in the same proportion as the overall market. If a particular stock is 17% of the entire market capitalization, then it is also 17% of my portfolio. There are treatises to be had which argue that investing in the market in this fully diversified fashion is the best thing one can possibly do. Any other allocation takes unnecessary risk. But none of that is relevant for this discussion. The important thing is this: Investing in the entire market can be done by simply buying an index future or some index tracking fund or even manually by looking up market cap data on the internet and buying the stocks in the proper proportions. It is dead easy. And because it is so easy to do—no industries to research, no balance sheets to study, no earnings reports to read, no CEOs to interview, no models to program—it costs almost nothing to achieve. One can have a perfectly diversified portfolio of stocks, giving a perfectly fair share of all corporate profits, for no cost. The key point here is that any investor has what is like a “natural right” to take the market’s risk in exchange for the market’s return for free. This market return or “natural right” is nothing more than a qualitative definition of “beta”. Although this idea is just a paradigm, the implication is a universal truth: Neither money, time, nor resources need be spent for beta, the market’s overall return per unit of risk. It’s free. Only when an investor wants more than this—a better risk/return deal—does he need to spend time or money.
Alpha
If I own a slice of the entire market then what is left is simply a big slice of the market. To the extent I fiddle with the amount of each stock in my slice—"over weighting" and "under weighting" in the industry jargon—the other big slice will, necessarily, have the inverse of my adjusted weightings. If I over weight a stock, the rest of the market, collectively, will have to be under weight. For example, if I have the market portfolio with a little extra IBM and a little less DuPont, then all other investors collectively have the market portfolio too but under weighted in IBM and over weighted in DuPont. Because risk and return must be conserved, any benefit (higher return or less risk) I get out of this tactical decision will necessarily be to the detriment of other investors. In our example stock market, if I increased my returns without increasing my risk, then at least one person's returns have decreased because there is still only $10 billion of profits to be shared. This is the essence of alpha: Extra return without extra risk. When an investor has "generated" alpha, it means he got more return than his level of risk warranted. Unfortunately, it also means that someone else got less. Quite simply, total alpha is nil as a direct consequence of the finite amount of returns per unit of risk to be shared by the investment community.
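This zero-sum accounting can be made concrete. In the sketch below (again with made-up caps and returns), one investor overweights one stock and underweights another; the rest of the market necessarily holds the complementary weights, and the capital-weighted excess returns cancel exactly.

```python
import numpy as np

caps = np.array([40.0, 25.0, 15.0, 12.0, 8.0])   # $bn market caps (made up)
w_mkt = caps / caps.sum()                         # market-portfolio (beta) weights

my_capital = 1.0                                  # my $1bn portfolio
tilt = np.array([0.05, 0.0, 0.0, -0.05, 0.0])     # overweight stock 0, underweight stock 3
w_me = w_mkt + tilt

# Everyone else collectively holds whatever shares I don't
rest_capital = caps.sum() - my_capital
w_rest = (caps - my_capital * w_me) / rest_capital

r = np.array([0.25, 0.08, 0.12, -0.05, 0.02])     # realized stock returns (made up)
mkt_ret = w_mkt @ r
my_excess = w_me @ r - mkt_ret                    # my return over the market's
rest_excess = w_rest @ r - mkt_ret                # everyone else's

# Capital-weighted excess returns cancel exactly: alpha is taken, not created
total = my_capital * my_excess + rest_capital * rest_excess
print(my_excess, rest_excess, total)
```

In this realization my tilt happened to pay off, so my excess return is positive; but the rest of the market lost exactly what I gained, dollar for dollar.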
Two to Tango
The implication of a world void of alpha is that its pursuit is a zero sum game. And statistically, participating in a zero sum game is a pointless exercise because there is no expected profit. An investor, aware of this reality and his own limitations, may be wise to forego the quest completely. Zero alpha from not trying is better than the expectation of zero alpha minus the costs of trying. In fact, there is plenty of evidence that simple, boring beta is more profitable in the long term than beta + expected alpha – fees; in other words, the hunt for alpha seldom recovers the costs of the expedition. Yet there are some 10,000 hedge funds looking for a piece of the alpha pie—a pie with zero slices. In light of the complete lack of alpha in the world, it is reasonable to ask why they all embark on this cumulatively futile quest.
The answer lies between confidence and hubris. If an investor knows something another participant doesn't — if he has "an edge" — then he might just get some alpha at the expense of others. And there is not a hedge fund in the world — not even one — that does not think it has just the edge required to make its alpha expectation above zero. This is why, in a zero sum game, there are so many people willing to play. (Besides, hedge funds get paid to play by their investors, which, from a business point of view, is reason enough!) But across the industry, only one thing is assured: a fair amount of the trillion plus dollars searching for alpha via hedge funds will end up providing it for others — not taking it. There is simply no alpha for a lot of speculative capital in this world.
Beta’s Omnipresence
Within a subsection of the market, say pharmaceutical companies, the same principle holds: conservation of risk and return. Any investor can take pharmaceutical industry risk and get the industry's return quite easily (pharmaceutical beta). However, once he starts favoring particular pharmaceutical companies over other pharmaceutical companies, he is then seeking alpha. Within the sector, total alpha is still zero, meaning that any investor who gets more return per unit of risk in the sector has taken it from another investor. Indeed, alpha and beta exist with respect to cap companies or growth companies or export oriented companies, or any classification one cares to make. There is always conservation of risk and return within the grouping and hence the concept of alpha and beta.
Also note that the investor's decision to take pharmaceutical risk—even just its beta—is an alpha seeking decision in the context of the entire market. Almost any beta exposure is an alpha seeking decision in a larger context. Choosing a pure beta exposure to the S&P500—as opposed to the Nikkei or FTSE—is an alpha seeking decision in some global context. That alpha and beta exist endlessly on various levels might make the whole exercise seem quite arbitrary. But this would be an incorrect assessment. When interpreted properly, alpha and beta offer a concise expression of the trade-off between risk and return in whatever domain an investor is active. A long/short fund focusing on the pharmaceutical industry must be compared to a simple investment in the sector. If a fund simply delivers the beta of the industry, it has done nothing to earn its fees.
The Joy of Alpha
The appeal of alpha is obvious: Excess return for no excess risk would appear tantamount to free money. Of course in reality there are risks borne to achieve alpha (just ask the guy who got the negative alpha.) But alpha is achieved from taking risks that have nothing in common with the risks associated with beta. Alpha returns are the rewards for taking unique risks, whereas beta risks manifest as headline news items that retrospectively explain market moves each day: earnings, fiscal and monetary factors, consumer spending, inflation and so on. An investor with beta exposure to the S&P500 can therefore have an inherent sense of the risks he is taking by reading the headlines of any business periodical. But alpha risks are not tangible in the same way. The Wall Street Journal will never offer any clues on alpha risks because they have nothing to do with any broad market factors. Indeed, an “alpha” source that shows sensitivity to some tangible economic factor is most likely not alpha at all but rather beta from somewhere the analyst has not looked yet.
An alpha source is valuable precisely because it comes from taking something other than market risk. No matter what the market does, a source of alpha cannot be damaged by even the most violent turmoil because its risks lie elsewhere. It is thus the perfect supplement to any portfolio. While making no promises of perpetual positive returns, an alpha source does guarantee a return stream which will behave independently even during the market's worst times. Any investor who has watched all his positions lose money in tandem during a market crisis understands the value of true diversification as offered by alpha.
To Seek or Not to Seek
Given the charms of alpha, it is not surprising that so many investors covet it. But despite its appeal, the unglamorous companion beta, is nonetheless an excellent investment: It offers a diversified, tangible set of risks and a share of the market’s returns in compensation. It is readily available and cheap. And, most importantly, it has a 100 year track record of 10% per annum. However, any investor is welcome to seek more than his entitlement of beta. He need simply own something different from the market portfolio. But in doing this, he runs the risk of getting it wrong—of getting negative alpha. Now and always, 50% of capital hunting for alpha ends up being the prey. This is axiomatic in a zero sum game. The pursuit of alpha is therefore to be approached cautiously. An investor seeking more than beta must also ask whether the alpha (or technically, the expected alpha), justifies the cost (in time or in money if he gets help). Theoretically and empirically, there is plenty to suggest that seeking alpha is a waste of both.
The Case Against Seeking Alpha
The case against seeking alpha rests on the efficient market hypothesis. An efficient market is one whose securities cost exactly what they are actually worth; nothing overvalued, nothing undervalued. The theory is based on the simple idea that there are enough smart investors participating in the market that no security could possibly remain wrongly priced for long. Considering the number of competent people watching most markets, it would seem reasonable that between them all they are digesting every last bit of data and as a result prices reflect all known information. Of course, less competent investors will drive prices in arbitrary directions, but in all likelihood, will cancel each other out in terms of net effect on the market. To the extent that they do not, competent investors will arbitrage mis-pricings. In theory, the arbitrage opportunities presented to attentive investors ensure that the market's equilibrium state is perpetual efficiency.
The implication of the efficient market hypothesis is that no investor can hope to acquire any information that is not already known and reflected in the price of a security. An investor can do all the homework she wants, but in an efficient market, all assets sell for exactly what they are worth—there are no “cheap” stocks. If an investor can’t hope to know anything that is not already reflected in a security’s price, then by implication, she certainly cannot expect to consistently generate returns that are better than the market, i.e. she can have no hope of alpha. In an efficient market, chasing after alpha is completely futile. Some may achieve it and delude themselves and others into thinking it was skill. But if markets are efficient, only luck can explain alpha.
Fortunately for the entire fund industry, whose existence would be pointless were markets truly efficient, there is evidence to contradict the hypothesis. Irrational investors don't cancel each other out (think dot-com bubble) and genius investors do not appear every time something is mis-priced because they are smart enough not to jump in front of trains. (No one dared short the NASDAQ in 1999, even though the chairman of the US central bank practically declared the market inefficient!) There are more problems. For instance, stocks change price too frequently if they are only reacting to fresh information. October 1987 is a particularly stark case in point. On October 19th, the S&P dropped more than 20% even though there was no news that day to spur a decline. This is true for many of the worst one-day drops in financial markets. So the reality is that markets are not efficient, at least not perfectly. And if markets are anything but perfectly efficient then there exist portfolios which offer better risk/return characteristics than the entire market; i.e., there is alpha to be had.
Although not precisely quantifiable, there is a relationship between market efficiency and alpha: the more efficient the market the less alpha available from it. The degree of market inefficiency is a critical point. Sustainable alpha generation requires an investor to perpetually identify market inefficiencies. The more efficient the market, the tougher this task becomes. Thus the question of how much sustainable alpha is theoretically available hinges on how inefficient markets actually are. While it is clear markets are inefficient, it is also obvious that they are not inefficient enough to create any sort of alpha bonanza.
Thus the argument against seeking alpha is simply that markets are efficient enough to impede most investors from sustaining alpha. In an efficient market the optimal portfolio is the market portfolio, (i.e pure beta); in a near efficient market, beta is near optimal.
Empirical studies have shown that unglamorous buy and hold strategies (beta) tend to outperform active strategies (beta +/- alpha – costs) over the long term. And there is a last simple argument against alpha pursuit: Because of the zero sum nature of alpha, one must necessarily be better at alpha generation than at least half the capital chasing alpha in the market. And given the size of global capital markets, that is a lot of competition.
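That long-term result is easy to reproduce in a stylized simulation. In the sketch below, 1,000 hypothetical funds earn the market return plus zero-mean "alpha" luck, minus an annual fee; the return, volatility, and fee figures are illustrative assumptions, not data from any study.

```python
import numpy as np

rng = np.random.default_rng(2)
years, n_funds = 20, 1000
mkt = rng.normal(0.10, 0.18, size=(years, 1))          # one market path, 20 annual returns
luck = rng.normal(0.00, 0.04, size=(years, n_funds))   # zero-mean "alpha" across funds
fees = 0.02                                            # assumed annual cost of the hunt

index_wealth = np.prod(1 + mkt)                        # $1 in the index, buy and hold
fund_wealth = np.prod(1 + mkt + luck - fees, axis=0)   # $1 in each active fund

share_winning = (fund_wealth > index_wealth).mean()
print(index_wealth, np.median(fund_wealth), share_winning)
```

Because the luck nets to roughly zero while the fees compound relentlessly, the typical fund ends up well behind the index, and only a small minority beats it.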
The Case for Seeking Alpha
Given the long term reliability of diversified market portfolios, the strong arguments that holding nothing but beta is optimal and the odds against consistently generating alpha in near efficient markets, it’s understandable that many smart investors are content with just beta. But there are good reasons why a rational investor might still seek alpha. The first is dead simple: Markets make no promise about appreciation. They do not necessarily go up in any quarter, year, or even decade. Beta, for all its fine qualities, is hardly of any use when markets are down—in fact, it becomes a liability. The desire to generate positive returns in the immediate future, regardless of the market, is a perfectly reasonable objective. In a sense, all investors are absolute return investors and hence have a strong motivation to seek returns wherever they can be found. Enough alpha can compensate for poor returns from beta.
There is a second strong motivation to seek alpha: The nature of beta. If one invests in the market as a whole, then yes, one is diversified within that space, but ultimately, there is only one return source: that market. Diversification being a pillar of sound strategy, a prudent investor is therefore attracted to the prospect of an alternate source of returns. To have returns coming from somewhere other than the market— indeed somewhere completely uncorrelated with the market—is extremely desirable. And alpha, by definition, has absolutely no correlation with beta. It is therefore the perfect diversification and the ideal companion to beta. A priori, an investor cannot know if the search for alpha will prove profitable, but he can be certain of one thing: The alpha he gets will have no relation to how the market does. So there are strong, rational reasons to seek alpha in spite of the difficulty in consistently getting it.
Enter the Hedge Fund
Having weighed the arguments for and against, an investor who concludes that a pure beta portfolio is not sufficient will have to enter the alpha arena. It is that simple. Either one is content with beta or not. (And judging investors by their actions, most are not, though a fair few will not have thought it through in quite this way.) So, having decided on the need for alpha, the next question is then, how to go about the hunt? Given the difficulty of amassing enough time, resources, experience and skill, most investors choose to pay a professional. This would seem sensible. Yet it is interesting to note that the total fees that are paid each year by investors to "experts" for this service are on the order of tens of billions of dollars. The irony is that, in all likelihood, less than half the capital seeking professional help will actually get positive alpha after costs. But all will pay. Absurd as it seems, taken collectively, an entire community pays billions in fees for receiving a total of nothing (i.e. a total of zero alpha) between them all. Clearly, choosing the right people to seek alpha is critical.
Thus we come to hedge funds. Hedge funds are alpha hunters for hire. Theoretically, but alas not in reality, a hedge fund should deliver alpha and nothing but alpha. The reason for this follows directly from the fact that beta is already available for free. An investor can get market exposure for way less than the cost of a hedge fund, so why would he ever pay for beta? Sadly, this is not a rhetorical question. The answer is that hedge funds are happy to sell beta to any investor who is willing to pay for it. Because so many investors don’t have the ability to distinguish alpha from beta and also don’t know that beta is available cheaply elsewhere, hedge funds continue to sell it at “2/20”.
In theory, alpha and beta are nicely separable, but, as we shall see, in reality it is far more difficult. But the principle still holds: A hedge fund should not sell beta, or, more properly, an investor should refuse to pay for it. We can perhaps excuse beta accompanying alpha as long as the hedge fund does not charge for the beta component. A performance fee on the alpha component of a return is quite reasonable. But there are few hedge funds that make this distinction—whether it be difficult or not. Why would they if investors don’t insist on it?
In the days before hedge funds were en vogue, the concept of not charging for beta seemed to be better understood. Professional money managers were judged against an index. “Beating” the index—tantamount to generating alpha—was implicitly rewarded by more capital being attracted to the manager. No one was impressed by 10 percent returns when the market was up 20. The problem was the converse: -15 percent when the market was worse was little consolation for an investor. In the presence of a down market, it seems reasonable to claim that, ultimately, all that matters is return. What good is beating an index by 3 percent if it’s down 12? A professional money manager should not be able to hide behind an index when he fails. After all, the investor still loses money. A good investment would have been no investment. This all leads to the conclusion that a manager should be paid if and only if he actually makes money. And thus the concepts of “absolute returns” and “performance fees” were born.
But the pendulum has now swung too far in the “absolute returns” direction. Slapping said label on a fund which is more or less long the stock market is not sufficient to call the thing a hedge fund and charge exorbitant fees. I once evaluated a New York based equity long short manager who had returned 11.4% net to investors in 2003, their first full year of returns. In our meeting, the partners were keen to point out that the strategy had proved successful thus far as evidenced by recent performance. They went on to explain that their prospectus gives them a broad mandate to be net long or net short. Thus far, they had maintained a long bias.
“The S&P returned close to 30% in 2003,” I pointed out.
“Ah, but we are an absolute return fund,” the senior partner answered. “You should not compare us to any stock index.”
“You wish!” I thought.
In 2003, 447 of 500 S&P stocks went up, meaning it was pretty hard to buy any equity in that year and not make money. To not judge this manager against the tail wind he enjoyed all year is naïve. A hedge fund cannot simply label itself “Absolute Returns” and gain immunity from contextual judgment. In a sense, there is an onus of proof on the fund. If it can show that its strategy contains no beta because it lacks correlation with any measurable market factor, then it has the privilege of labelling itself “Absolute Returns”. Technically, “absolute return” should imply a conduit of pure alpha. The problem endemic in the industry now is not just that “absolute return” has become a buzz phrase void of meaning but that it is treated as a label that can be arbitrarily applied to any strategy, excusing it from benchmarking and instantly allowing beta to be sold as alpha.
That said, there is more going on here than evil hedge funds maliciously selling beta in the guise of alpha. There are good reasons why beta is so prevalent. First, genuine pure alpha is actually extremely hard to find. To achieve it, a manager must have a real, sustainable edge that almost no other market participant has. This reduces to an advantage in either the acquisition or the processing of information. But how often does information come into the hands of a manager that other investors don’t already know? Almost never. This leaves information processing as the only possible edge. Whether it is a manager’s raw trading instinct based on 20 years’ experience plus a Bloomberg at one extreme, or the most systematic, model-based, computer-controlled trading strategy at the other, it seems unlikely that any strategy can perpetually work or remain in the knowledge of a few. And so the barriers to the consistent production of alpha are immense, implying there are few advantages a hedge fund can realistically hope for. And without some advantage, the search for alpha has zero expected return. In this light, it is no wonder managers are forced to turn to beta!
Beta, as we have noted, is cheap, easy and reliable whereas searching for alpha is resource intensive. Hence a hedge fund—a business—is constrained on one side by finite resources and on the other by the need to achieve returns. Consciously or otherwise, injecting beta into an investment is a cost effective way to add expected return to a portfolio. Thus the needs of the investor are not necessarily aligned with those of the hedge fund business. An investor might prefer minimal returns coming from an authentic pure alpha strategy whereas a manager might find the lack of high absolute returns difficult to sell when competing with other hedge fund businesses.
The presence of beta, by the way, is the reason hedge funds have such an annoying habit of failing investors at the exact same time as conventional strategies do. Hedge fund returns, obfuscated by the combination of multiple beta sources and perhaps even some alpha, can look reasonably uncorrelated to other markets in benign times. But if a fund is executing strategies that are beta linked, then the die is already cast: Its fate is inexorably tied to that of the market. If and when the market experiences a substantial decline, there is little hope the fund can avoid losses. The presence of beta means a fund is fundamentally exposed to the same risk factors as conventional investments. Beta, the very reason a fund might have done well in the past, can eventually prove its Achilles heel. Thus, the performance of a hedge fund through volatile market conditions can tell an investor a lot about how much beta is really present.
So, while hedge funds might be the natural choice for the alpha-seeking investor, a serious problem is they often provide beta rather than alpha. Therefore, hedge funds as a class provide no solution: An arbitrary diversified hedge fund portfolio will in all likelihood contain beta, a collection of negative and positive alpha and fees, all summing to something less than beta. (Hence the absurdity of the “investable hedge fund index”.) In theory we can now define the ideal hedge fund: It is one that consistently provides alpha after fees with zero beta. Such a fund positively impacts any portfolio to which it is added. It is the perfect hedge fund and it almost certainly does not exist. In reality, hedge funds truly seeking pure alpha are rare enough; those who achieve it consistently are even rarer. The fact is, hedge funds almost certainly come with some beta for all the reasons discussed above. Therefore a practical definition of a good hedge fund is one that at least provides a desirable beta source while still generating alpha after fees. A desirable beta source is one that is not already sufficiently represented in the portfolio the hedge fund is to augment. Thus an investor gets value from a hedge fund if and only if it provides alpha after fees and beta exposure that her portfolio needs. Note this means no hedge fund can be said to be good in any absolute sense. The quality of the fund is always a function of the larger portfolio it is to join as well as its own characteristics.
The Fund of Hedge Funds
A fund of funds warrants its fee if it is capable of building a portfolio of diversified, alpha generating hedge funds. As is clear by now, this is no easy task, which is why the few that can do this are worth their cost.
The Investable Hedge Fund Index
As mentioned parenthetically above, a hedge fund index cannot possibly be of any value to an investor since it is almost certainly a jumble of betas, offsetting alphas and fees.
The Problem of Luck
On September 11, 2001, investors who were long gold, short airline stocks, or long defense stocks, to name just a few good trades, made money because of the manifestation of a risk that none of them could have imagined. No investor fortunate enough to profit in such circumstances would ever claim their profits on that particular day were due to their skill. They were lucky and that’s all. This stark case illustrates a particular problem with the analysis of alpha: The role of luck.
In general, it is difficult to determine, ex-post, why alpha was won or lost. Every investor who seeks excess returns does so taking positions he rationally believes to have positive expected returns in light of the risks. Should he end up achieving his desired profit, he will likely conclude it was because his analysis was correct. In contrast, the investor providing him the alpha will probably conclude that he was unlucky rather than unskilled. The point is, many investors seeking alpha will find it but not necessarily for the reasons they thought. Just because one rationalized an expected outcome which subsequently occurs does not necessarily mean that it occurred for reasons postulated a priori. Those short United Airlines on September 11 made money, but not for the reasons they thought they would. Obvious in this case, but the general case can be less clear.
Thus the role of luck makes it even more difficult to evaluate a hedge fund. Alpha from good fortune cannot be expected to be sustainable. Unfortunately, there is no information in the track record of a hedge fund to aid in attribution of alpha to its two possible sources: skill and luck. Thus a good track record—even one dripping with alpha—does not imply a skilled manager. A good track record is merely a prerequisite for further investigation as opposed to justification for an investment.
The problem of luck is not a mere theoretical exercise. It is a real problem arising as a statistical artifact of the sheer number of hedge funds in the world. It completely undermines the industry’s favorite crutch: track record. Funds that have no edge at all—of which there are many—need not all fail immediately. There are so many hedge fund attempts that it is inevitable that some bad ones end up with good track records. To demonstrate just how far luck can take a fund, I did a simple experiment: By simulation in Excel, I created track records for 100 equity long short funds that did nothing but randomly allocate $100 amongst random combinations of S&P500 shares. The only constraint was that the fund maintain equal dollar longs and dollar shorts. Each also charged 2%/20%. The results demonstrate just how far luck can go. After 3 years, 5 of 100 have stellar track records. Of course many more are awful. But the point is, given enough unskilled hedge funds, three full years is not enough time to weed the last of them out.
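The experiment described above can be reproduced in a few lines of Python rather than Excel. This is a rough sketch, not the original simulation: the 3% monthly volatility, the high-water-mark fee mechanics, and the ">20% over three years" cutoff for a "stellar" record are all illustrative assumptions.

```python
import random

random.seed(1)

N_FUNDS, N_MONTHS = 100, 36          # 100 funds, 3-year track records
MGMT, PERF = 0.02 / 12, 0.20         # the "2 and 20" fee structure

def simulate_fund():
    """A fund with zero skill: each month's gross return is pure noise
    around zero (equal dollar longs and shorts cancel the market move)."""
    nav, high_water = 100.0, 100.0
    for _ in range(N_MONTHS):
        gross = random.gauss(0.0, 0.03)      # assumed 3% monthly volatility
        nav *= (1 + gross)
        nav -= nav * MGMT                    # monthly management fee
        if nav > high_water:                 # performance fee above high-water mark
            nav -= (nav - high_water) * PERF
            high_water = nav
    return nav

navs = sorted(simulate_fund() for _ in range(N_FUNDS))
lucky = sum(1 for n in navs if n > 120)      # >20% net in 3 years, with no skill
print(f"best: {navs[-1]:.1f}, worst: {navs[0]:.1f}, 'stellar' funds: {lucky}")
```

Running variations of this (different seeds, volatilities, fee structures) makes the point concrete: out of enough skill-free funds, a handful reliably emerge with track records that look like evidence of talent.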
In the real world, funds with three-year track records are the survivors of a larger batch who all had big aspirations years earlier. But there is every chance that some of those hedge funds are the inevitable few that survive largely due to luck. Of the roughly 7,000 hedge funds that existed at the start of 2005,2 some 848 had vanished by year end.3 But amongst the survivors were both the skilled and the lucky. Best not to mistake one for the other, which is why quantitative analysis is a beginning, not an end.
Checking for Alpha
For all its intricacies, there are ultimately two and only two ways to achieve alpha. It can be had by market timing or by security selection. An investor who takes exposure to the market as a whole, taking pure beta risk, but manages to exit the market ahead of a few downturns will have achieved some alpha because, during the life of her strategy, she will have taken less than the total market risk (by being divested some of the time) but received more than the market return (having been absent while it was declining). Thus good market timing translates to alpha.
The other option is security selection. An investor who manages to select better performing securities without taking excess risk achieves alpha. Note that if the extra returns came with extra risk, then she has achieved nothing toward alpha generation. It is worth mentioning that many a hedge fund shows its “monthly alpha” as its return less the market’s return. This is nonsense, as it leaves risk out of the equation altogether.
The line between the two methods for generating alpha can easily become blurred but this is not a problem at all. To some extent an investor can remain blissfully indifferent to how the alpha is created, as long as it is. The challenge of course is determining if a manager is capable of the task going forward. To this end, past performance offers only minimal assistance. When the luxury of a track record is available, the question of whether alpha was generated can be addressed. But the answer is only useful to a point. If past performance does indeed reveal alpha, then the manager warrants further study to determine whether the good results can be maintained. On the other hand, if there is no sign of alpha in the past, there is little basis for optimism about the future unless the manager has some good explanations for past failings. Thus examining a track record for alpha is a useful starting point. In a world with over 10,000 hedge funds, it does offer an efficient, semi-automatable method for filtering out poor performers.
Checking for alpha is a process of elimination. Only by confirming that returns cannot be explained by beta exposures can one confirm the presence of alpha. Doing this is both art and science. An investor need simply regress the manager’s returns against the correct beta benchmark or benchmarks to check for alpha. This can be done easily in Excel or using one of hundreds of tools available to the industry. That is the science of alpha analysis.
The art is knowing what beta factors are relevant for the analysis. For example, measuring a globally invested stock fund against the S&P500 will mistakenly find alpha that is not really there simply because the choice of beta benchmark is wrong. The appropriate benchmark is entirely dependent on what the fund is actually doing. As such, an investor must understand the nature of a fund’s strategy. This becomes challenging as strategy complexity increases. But it is a challenge that must be met, because choosing poor benchmarks only results in “false alpha” and ultimately poor investment decisions. It is noteworthy that a full 40% of institutions surveyed by State Street in 2005 readily admit they simply don’t know if they are getting alpha from their hedge fund investments. How many more don’t know but won’t admit it remains unknown.
Conclusion: The Cult of Alpha
Alpha, the humble y-intercept of a linear regression, has somehow achieved cult status in the hedge fund industry. That only a few understand the concept’s intricacies is both cause and symptom of this phenomenon. That alpha is something to be coveted is about the only thing universally understood. Its precise definition, its short supply, the impediments to finding it, and its real utility are generally not. Because finance rarely offers any absolutes and any view can be justified, alpha is often treated as a subjective concept. It is not. Its meaning is perfectly defined and not debatable. The existence of zero alpha in the world is not a view; it is a fact. And it is a fact easily demonstrated with high school mathematics. Intuitively, total alpha is nil because of the finite amount of returns in a market that must be shared amongst all investors; no investor can get extra return without another getting less.
That there is zero total alpha in the world has extreme implications:
- On average, 50% of capital seeking alpha will not get it; after fees, the statistics are even worse.
- One investor’s alpha is always another’s negative alpha.
- The pursuit of alpha is a competition between investors; it is a zero sum game.
- An investor without some advantage over other investors has no expectation of success.
Given the above, the decision to pursue alpha should be taken soberly. While beta is available to all by right, alpha is a privilege for the skilled (and the lucky). Seeking alpha is risky but success is well compensated. But to seek it effectively takes real skill. An investor after all is in direct competition with her peers. And her peers include bona fide geniuses, Nobel laureates, thousands of PhDs, and tens of thousands of professional investors, each with the same mission. Between them all, they will share the zero dollars of alpha available in the world.
How much alpha is good? Alas, it is impossible to answer. Since alpha, by definition, is return that cannot be attributed to the market, there is nothing to compare it to. It correlates with nothing and there does not exist a benchmark to compare with. All alpha is unique. But this is the very beauty of it: In a world where diversification is so unreliable, genuine alpha is a godsend. Any time an investor adds a truly uncorrelated return source to her portfolio, she improves it. Sustainable alpha is like a golden goose: One does not fret the size of the egg.
1 To the extent that professional investors take some alpha from amateurs, this statement is slightly too bold. But suffice it to say, billions of dollars are paid out to professional investors who generate nothing but negative alpha.
2 Source: The Economist
3 Source: HFR
Within the past five years, there has been a major change in the type of content found on the World Wide Web. In just a few short years, content has evolved from being primarily text and images into a full multimedia experience. Drupal contributors have put much effort into making integration with multimedia as easy as possible. However, one issue still remains: in order to present multimedia to your users, you cannot rely on Drupal alone. You must have another application layer to present that media. This is most typically a Flash application that allows the user to listen to or watch that media from within their web browser. This article explores how to use Drupal to manage a list of audio nodes and also builds a Flash application to play that music. When it comes to multimedia, Flash is the portal of choice for playing audio on a website.
Integrating audio in Drupal is surprisingly easy, thanks to the contribution of the Audio module. This module allows you to upload audio tracks to your Drupal website (typically in MP3 format), by creating an Audio node. It also comes with a very basic audio player that will play those audio tracks in the node that was created. To start, let's download and enable the Audio module along with the Token, Views, and getID3 modules, which are required for the Audio module. The modules that you will need to download and install are as follows:
- Audio—
- Views—
- Token—
- getID3—
At the time of writing this article, the Audio module was still considered "unstable". Because of this, I would recommend downloading the development version until a stable release has been made. It is also recommended to use the development or "unstable" versions for testing purposes only.
Once we have downloaded these modules and placed them in our site's modules folder, we can enable the Audio module by first navigating to the Administer | Modules section, and then enabling the checkboxes in the Audio group as follows:
After you have enabled these modules, you will probably notice an error at the top of the Administrator section that says the following:
This error is shown because we have not yet installed the necessary PHP library to extract the ID3 information from our audio files. The ID3 information is the track information that is embedded within each audio file, and can save us a lot of time from having to manually provide that information when attaching each audio file to our Audio nodes. So, our next step will be to install the getID3 library so that we can utilize this great feature.
Installing the getID3 library
The getID3 library is a very useful PHP library that will automatically extract audio information (called ID3) from any given audio track. We can install this useful utility by going to, which is the getID3 library URL at SourceForge.net. Once we have done this, we should see the following:
We can download this library by clicking on the Download link on the first row, which is the main release. This will then take us to a new page, where we can download the ZIP package for the latest release. We can download this package by clicking on the latest ZIP link, which at the time of writing this article was getid3-1.7.9.zip
Once this package has finished downloading, we then need to make sure that we place the extracted library on the server where the getID3 module can use it. The default location for the getID3 module, for this library, is within our site's modules/getid3 directory. Within this directory, we will need to create another directory called getid3, and then place the getid3 directory from the downloaded package into this directory. To verify that we have installed the library correctly, we should have the getid3.php at the following location:
Our next task is to remove the demos folder from within the getid3 library, so that we do not open any unnecessary security holes in our system.
Once this library is in the correct spot, and the demos folder has been removed, we can refresh our Drupal Administrator section and see that the error has disappeared. If it hasn't, then verify that your getID3 library is in the correct location and try again. Now that we have the getID3 library installed, we are ready to set up the Audio content type.
Setting up the Audio content type
When we installed the Audio module, it automatically created an Audio content type that we can now use to add audio to our Drupal web site. But before we add any audio to our web site, let's take a few minutes to set up the Audio content type to the way we want it. We will do so by navigating to Administer | Content Types, and then clicking on the edit link, next to the Audio content type.
Our goal here is to set up the Audio content type so that the default fields make sense to the Audio content type. Drupal adds the Body field to all new content types, which doesn't make much sense when creating an Audio content. We can easily change this by simply expanding the Submission form settings. We can then replace the Body label with Description, since it is easily understood when adding new Audio tracks to our system.
We will save this content type by clicking on the Save content type button at the bottom of the page. Now, we are ready to start adding audio content to our Drupal web site.
Creating an Audio node
We will add audio content by going to Create Content, and then clicking on Audio, where we should then see the following on the page:
You will probably notice that the Title of this form has already been filled out with some strange looking text (as shown in the previous screenshot). This text is a series of tags, which are used to represent track information that is extracted using the getID3 module that we installed earlier. Once this ID3 information is extracted, these tags will be replaced with the Title and Artist of that track, and then combined to form the title of this node. This will save a lot of time because we do not have to manually provide this information when submitting a new audio track to our site. We can now upload any audio track by clicking on the Browse button next to the Add a new audio file field. After it adds the file to the field, we can submit this audio track to Drupal by clicking on the Save button at the bottom of the page, which will then show you something like the following screenshot:
After this node has been added, you will notice that there is a player already provided to play the audio track. Although this player is really cool, there are some key differences between the player provided by the Audio module and the player that we will create later in this article.
How our player will be different (and better)
The main difference between the player that is provided by the Audio module and the player that we are getting ready to build is how it determines which file to play. The default player uses Flash variables passed to the player to determine which file to play. This type of player-website interaction places the burden on Drupal to provide the file that needs to be played. In a way, the default player is passive: it does nothing unless someone tells it to do something.
The player that we will be building is different because, instead of Drupal telling our player what to play, we will take an active approach and query Drupal for the file we wish to play. This has several benefits, such as the fact that the file path does not have to be exposed to the public in order for it to be played. So, let's create our custom player!
Building a custom audio player for Drupal
Once we have our new directory set up, we will need to open up both the chapter5.fla and the main.as file within our Flash IDE, where we will then direct our attention once again to the main.as file.
Click here to access all the codes used in this article.
The first thing we will need to do is temporarily change the nodeId variable at the top of this script to the node ID of the audio node that we just created as follows:
// Declare our variables
var baseURL:String = "";
var gateway:String = baseURL + "/services/amfphp";
var sessionId:String = "";
var nodeId:Number = 8;
Now that this node ID is set to the correct node, our next task is to determine what data we are looking for when we load the node. This will bring us back to our Drupal web site, where we will take advantage of the Services Administrator to investigate the data from our audio node.
Examining the Audio node using Services Administrator
For this section, we will navigate back to our Services Administrator section by going to Administer | Services in our Drupal web site. Once we are there, we will then click on the node.get link, which will let us load any node in our system to examine the data that will be passed to our Flash Application. We will then need to provide the node ID for the audio node we created—where it says nid, and then click on the button below that says Call method.
Looking at the results from this call, the data that we are looking for is all contained within the audio tag in the node object, which should look similar to the following screenshot:
From looking at this data structure, we determine that we can access the filepath of our audio node within our onNodeLoad function. So, let's test this out by modifying our "Hello World" code to replace the node title with the filepath to our audio file.
Referencing the audio file path
Using the knowledge that we gained from the Services Administrator, we should be able to now reference the audio filepath for any given audio node within our Drupal web site. If we observe the node object data returned from our Services Administrator, we can determine how to access the file path to our song by using the following code:
node.audio.file.filepath
We can easily test this out by opening up our main.as file, and then placing a trace statement to display this file path within the onNodeLoad function:
// Called when Drupal returns with our node.
function onNodeLoad( node:Object )
{
// Print out the node title.
title.text = node.title;
// Trace the audio file path.
trace( node.audio.file.filepath );
}
The output should then show the correct filepath to the node that we just loaded (as shown in the following screenshot).
We have now successfully referenced the audio file path. Our next task will be to create an audio class that will use this path to play some music!
Writing a custom AudioPlayer class
When working with ActionScript 3, it is highly recommended to use the object-oriented features that are built into the language using the class construct. By creating a class for our custom audio functionality, we will be encapsulating the code, which makes our code more maintainable, expandable, and portable. This is also referred to as componentization. This section assumes that you already have some previous experience with object-oriented techniques, but in case you do not, I will try my best to explain the concepts as we move forward. If you are just beginning with object-oriented programming, then I would also highly recommend reading the Wikipedia article at, which describes in great detail the concepts behind object-oriented programming. With that said, let's begin building our AudioPlayer class.
In Flash, there is already a class called Sound that was built to play audio files, and we can build our class to utilize this functionality to play audio. So, let's begin by creating a blank file next to your chapter5.fla project file called AudioPlayer.as. We will then open up this file and write the following:
package
{
// Import all dependencies
import flash.media.Sound;
// Declare our class
public class AudioPlayer
{
// Constructor function.
// Called when someone creates a new AudioPlayer
public function AudioPlayer()
{
// Make sure to create our sound object
sound = new Sound();
// Let us know that we created this player.
trace( "AudioPlayer created!" );
}
// Declare our sound variable.
private var sound:Sound;
}
}
Here we have created a new class that we will use to place all of our custom Audio player functionality. Currently, this doesn't really do much, other than send a trace to the output to notify us that the player has been created. To help track our progress, we can test this out by going back to our main.as file, and within the onNodeLoad function, we can place the following code to create our custom AudioPlayer:
// Called when Drupal returns with our node.
function onNodeLoad( node:Object )
{
// Print out the node title.
title.text = node.title;
// Create our AudioPlayer.
var player:AudioPlayer = new AudioPlayer();
// Trace the audio file path.
trace( node.audio.file.filepath );
}
Now, when we run this application, we will get a very pleasant surprise when our trace statement from within our custom AudioPlayer gets called to reveal that we really did create our custom audio player!
Our next step will be to add functionality to our custom audio class to play any audio track passed to our player.
Playing audio in Flash
In order to play an audio track, we will first need to create a public function within our custom class that will be used to play any given file passed to it. This playFile function takes the path of the audio file as the file string argument. Since we have already included the sound object in our custom class, we can now use that to load and play our file.
To do this , we will need to import the URLRequest class, because that class is used to pass a URL string to the load routine of the sound object. After this, we can then call the load routine on the sound object using this URLRequest object, and then play the file after it has been loaded. This will look as follows:
package
{
// Import all dependencies
import flash.media.Sound;
import flash.net.URLRequest;
// Declare our class
public function AudioPlayer
{
// Constructor function.
// Called when someone creates a new AudioPlayer
public function AudioPlayer()
{
// Make sure to create our sound object
sound = new Sound();
// Let us know that we created this player.
trace( "AudioPlayer created!" );
}
// Play an audio file
public function playFile( file:String )
{
// Print out what file is playing...
trace( "Playing file " + file );
// Load our sound file.
sound.load( new URLRequest( file ) );
// Play our sound file.
sound.play();
}
// Declare our sound variable.
private var sound:Sound;
}
}
We have finished setting up our audio class to play an audio file. We can now direct our attention to the main.as file, where we will play the audio file from Drupal using our new custom audio class.
Using our AudioPlayer class to play audio
Now that we have our main.as file opened, we can direct our attention once again to the onNodeLoad function, where we will pass the correct file path from Drupal to our custom AudioPlayer class. Since the path to our audio file, given to us from the node object, is relative to the base URL of our Drupal web site, we will need to add the base URL of our web site to the front of this path before we send it to the play function of our custom class. We can do this pretty easily by creating a variable called fileURL, which will hold the baseURL to our web site, and then add that to the audio file path before sending it to the play function of our custom class. The code to do this should look like the following:
// Called when Drupal returns with our node.
function onNodeLoad( node:Object )
{
// Print out the node title.
title.text = node.title;
// Create our AudioPlayer.
var player:AudioPlayer = new AudioPlayer();
// Declare our base URL.
var fileURL:String = baseURL;
// Add our file's relative path.
fileURL += "/";
fileURL += node.audio.file.filepath;
// Play our audio file
player.playFile( fileURL );
}
Now, when we run our application, we should be greeted with the sweet sound of success! We will now expand our audio player to include some controls, so that your site visitors can start and stop the music while playing.
Summary
In this article we saw how audio is handled within Drupal and how to build a custom application that can play audio content created through Drupal.
Playing music by itself is pretty cool, but is not very useful unless we give our users a way to interact with the playback of that audio track. In the next part of the article we will create some very basic controls that will allow our users to do just that.
If you have read this article you may be interested to view : | https://www.packtpub.com/books/content/working-drupal-audio-flash-part-1 | CC-MAIN-2015-18 | refinedweb | 2,901 | 66.67 |
I know it should be easy but angular 2.0 has no many examples yet..
In one of my components in some case I need to add class on my body tag. But my application is bootstrapped deeper than body, so I need something like
angular.element('body').addClass('fixed');
body
Update
I'm not sure if
DOM is actually still supported in RC. The related statements aren't very clear. Something like
DOMis only for internal use. Either access the DOM directly or use a custom renderer.
I haven't see how a custom renderer might be implemented or how to provide an implementation depending on the current platform (webworker, server, DOM thread).
Update This seems to be the Angular2 way
import { DOM } from 'angular2/src/platform/dom/dom_adapter'; DOM.addClass(DOM.query("body"), 'fixed');
Import from
.../src/... at your own risk.
.../src/... is considered private implementation and you can't expect any guarantees that the API won't change without notic.
I tried it in Dart and it works fine (not sure if the TS import above is correct though). In Dart
DOM is exported by
package:angular2/angular2.dart
Original
If you want to access a DOM element that's outside of your Angular application root, just use
document.querySelector(), no need to involve Angular. | https://codedump.io/share/Ul7H3cWfIDd8/1/angular-2x-selecting-dom-element | CC-MAIN-2017-34 | refinedweb | 218 | 59.8 |
CFD Online Discussion Forums
(
)
-
Main CFD Forum
(
)
- -
C++ & F90 Compilation on SGIs
(
)
David Hunt
June 21, 2000 13:53
C++ & F90 Compilation on SGIs
Dear All,
this is not strictly a CFD query other than its target application. I'm trying to call an f90 sub-program from a C++ routine on an SGI. I suspect this is not an uncommon thing for CFD folk to do. I've read lots of manuals and still can't get it to work. I've put below example soucre code and command line stuff. If you can see where I'm going wrong, please let me know.
Many thanks Dave Hunt.
ccode.cpp: ========== #include<iostream> using namespace std; extern "C" void subr_(int*); main() { int nlen=3;
cout << "Hello World" << std::endl; subr_(&nlen); }
csubr.f90: ========== subroutine subr(nlen) integer :: nlen write(6,*) nlen end subroutine subr
command line: ============= (note, I'm using the SG CC and f90 compilers)
f90 -c csubr.f90 CC -LANG:std -c ccode.cpp CC -LANG:std ccode.o csubr.o
the last step gives me the following ERROR: ld32: ERROR 33 : Unresolved text symbol "_FWF" -- 1st referenced by csubr.o.
Use linker option -v to see when and which objects, archives and dsos are loaded. ld32: INFO 152: Output file removed because of error.
Oliver Gloth
June 21, 2000 14:01
Re: C++ & F90 Compilation on SGIs
oonumerics.org
. There is a number of good links, including tools for language interoperability. A tool for C++/F77 exists and is called CPPF77. I don't know how useful it is for FORTRAN-90, though.
Stefan Nilsson
June 22, 2000 03:43
Re: C++ & F90 Compilation on SGIs
Hi,
I think the problem is that since you're using CC to link your application the correct fortran90 libraries are not included, and therefore the _FWF (whatever that is) cannot be found. You'll have to explicitly include the correct libraries when linking.
If you compile some simple f90-program (hello world) using the -v flag, you will se all libraries included when linking a f90 program, then just add these when linking the C++/f90 program.
Best wishes Stefan
David Hunt
June 22, 2000 03:55
Thanks: C++ & F90 Compilation sorted
Stefan,
thanks, that was sound advice. I needed a -lfortran flag to link one of the f90 libraries.
Dave
All times are GMT -4. The time now is
01:37
. | http://www.cfd-online.com/Forums/main/2270-c-f90-compilation-sgis-print.html | CC-MAIN-2016-50 | refinedweb | 403 | 73.47 |
In message <BANLkTimKhApFW8G1-pG0u_9Kv2YB0R1O0w@mail.gmail.com> you wrote:> On Thu, May 19, 2011 at 12:58 AM, Michael Neuling <mikey@neuling.org> wrote=> :> > Eric,> >> >> This patch adds save/restore register support for the BlueGene/P> >> double hummer FPU.> >> > What does this mean? =A0Needs more details here.> >> > Hi Mikey,> > any specific details you are looking for here? AFAIK these patches> are required for the kernel to save/restore the double hummer> properly.I should have been more specific. What does double hammer mean?I description of how double hammer differs from normal and why a changein the fpu code is needed would be great.> > >>> >> +#ifdef CONFIG_BGP> >> +#define)|(462<<1)> >> +#define)|(974<<1)> >> +#endif /* CONFIG_BGP */> >> > Put these in arch/powerpc/include/asm/ppc-opcode.h and reformat to fit> > whats there already.> >> > Also, don't need to put these defines inside a #ifdef.> >> > Sure, I'll fix that up.> > >> +#ifdef CONFIG_BGP> >> +#define SAVE_FPR(n, b, base) li b, THREAD_FPR0+(16*(n)); STFPDX(n, base=> , b)> >> +#define REST_FPR(n, b, base) li b, THREAD_FPR0+(16*(n)); LFPDX(n, base,=> b)> >> > 16*? =A0Are these FP regs 64 or 128 bits wide? =A0If 128 you are doing to> > have to play with TS_WIDTH to get the size of the FPs correct in the> > thread_struct.> >> > I think there's a bug here.> >> > I actually have three different versions of this code from different> source patches that I'm drawing from - so your help in figuring out> the best way to approach this is appreciated. The kittyhawk version> of the code has 8* instead of 16*. According to the docs:> "Each of the two FPU units contains 32 64-bit floating point registers> for a total of 64 FP registers per processor." which would seem to> point to the kittyhawk version - but they have a second SAVE_32SFPRS> for the second hummer. 
What wasn't clear to me with this version of> the code was whether or not they were doing something clever like> saving the pair of the 64-bit FPU registers in a single 128-bit slot> (seems plausible). Ok, sounds like there is 32*8*2 bytes of data, rather than the normal32*8 bytes for FP only (ignoring VSX). If this is the case, then you'llneed make 'fpr' in the thread struct bigger which you can do by settingTS_FPRWIDTH = 2 like we do for VSX.If there is some instruction that saves and restores two of these at atime (which LFPDX/STFPDX might I guess), then we can use that, otherwisewe'll have to do 64 saves/restores. Double load/stores will be fasterI'm guessing though. If two at a time, do we need to increase the index in pairs?> If this is not the way to go, I can certainly> switch the kittyhawk version of the patch with the *, the extra> SAVE32SFPR and the extra double hummer specific storage space in the> thread_struct. I'd be tempted to keep it in the 'fpr' part of the struct so you canthen access it with ptrace/signals/core dumps.> If it would help I can post an alternate version of the patch for> discussion with the kittyhawk version.Sure.The most useful thing would be to see the instruction definition forSTFPDX/LFPDX.> > >> =A0/*> >> diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms=> /44x/> > Kconfig> >> index f485fc5f..24a515e 100644> >> --- a/arch/powerpc/platforms/44x/Kconfig> >> +++ b/arch/powerpc/platforms/44x/Kconfig> >> @@ -169,6 +169,15 @@ config YOSEMITE> >> =A0 =A0 =A0 help> >> =A0 =A0 =A0 =A0 This option enables support for the AMCC PPC440EP evalua=> tion board.> >>> >> +config =A0 =A0 =A0 BGP> >> > Does this FPU feature have a specific name like double hammer? =A0I'd> > rather have the BGP defconfig depend on PPC_FPU_DOUBLE_HUMMER, or> > something like that...> >> >> + =A0 =A0 bool "Blue Gene/P"> >> + =A0 =A0 depends on 44x> >> + =A0 =A0 default n> >> + =A0 =A0 select PPC_FPU> >> + =A0 =A0 select PPC_DOUBLE_FPU> >> > ... 
in fact, it seem you are doing something like these here but you> > don't use PPC_DOUBLE_FPU anywhere?> >> > A fair point. I'm fine with calling it DOUBLE_HUMMER, but I wasn't sure if> that was "too internal" of a name for the kernel. Let me know and> I'll fix it up.What I'm mostly concerned about is disassociating it with a particularCPU. If it has an external name, then all the better.> I'll also change the CONFIG_BGP defines in the FPU code to PPC_DOUBLE_FPU> or PPC_DOUBLE_HUMMER depending on what the community decides.Mikey | https://lkml.org/lkml/2011/5/19/651 | CC-MAIN-2017-43 | refinedweb | 746 | 73.17 |
Previous Chapter: Graphs: PyGraph"
Next Chapter: Finite State Machine in Python
Next Chapter: Finite State Machine in Python
NetworkX
Overview
NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. Pygraphviz is a Python interface to the Graphviz graph layout and visualization package.
- Python language data structures for graphs, digraphs, and multigraphs.
- Nodes can be "anything" (e.g. text, images, XML records)
- Edges can hold arbitrary data (e.g. weights, time-series)
- Generators for classic graphs, random graphs, and synthetic networks
- Standard graph algorithms
- Network structure and analysis measures
- Basic graph drawing
- Open source BSD license
- Well tested: more than 1500 unit tests
- Additional benefits from Python: fast prototyping, easy to teach, multi-platform
Creating a Graph
Create an empty GraphOur first example of a graph will be an empty graph. To see the proper mathematical definition of a graph, you can have a look at our previous chapter Graphs in Python. The following little Python script uses NetworkX to create an empty graph:
import networkx as nx G=nx.Graph() print(G.nodes()) print(G.edges()) print(type(G.nodes())) print(type(G.edges()))If we save this script as "empty.py" and start it, we get the following output:
$ python3 empyty.py [] [] <class 'list'> <class 'list'>We can see that the result from the graph methods nodes() and edges() are lists.
Adding Nodes to our GraphNow we will add some nodes to our graph. We can add one node with the method add_node() and a list of nodes with the method add_nodes_from():
import networkx as nx G=nx.Graph() # adding just one node: G.add_node("a") # a list of nodes: G.add_nodes_from(["b","c"]) print("Nodes of graph: ") print(G.nodes()) print("Edges of graph: ") print(G.edges())
Adding Edges to our GraphG can also be created or increased by adding one edge at a time by the method add_edge(), which has the two nodes of the edge as the two parameters. If we have a tuple or a list as the edge, we can use the asterisk operator to unpack the tuple or the list:
import networkx as nx G=nx.Graph() G.add_node("a") G.add_nodes_from(["b","c"]) G.add_edge(1,2) edge = ("d", "e") G.add_edge(*edge) edge = ("a", "b") G.add_edge(*edge) print("Nodes of graph: ") print(G.nodes()) print("Edges of graph: ") print(G.edges())In our previous example, the first edge consists of the nodes 1 and 2, which had not been included in our graph so far. The same is true for the second edge with the tuple
("d", "e"). We can see that the nodes will be automatically included as well into the graph, as we can see from the output:
Nodes of graph: ['a', 1, 'c', 'b', 'e', 'd', 2] Edges of graph: [('a', 'b'), (1, 2), ('e', 'd')]We can add a bunch of edges as a list of edges in the form of 2 tuples.
# adding a list of edges: G.add_edges_from([("a","c"),("c","d"), ("a",1), (1,"d"), ("a",2)])We can also print the resulting graph by using matplotlib:
nx.draw(G) plt.savefig("simple_path.png") # save as png plt.show() # display
Generate Path GraphWe can create a Path Graph with linearly connected nodes with the method path_graph(). The Python code code uses matplotlib. pyplot to plot the graph. We will give detailed information on matplotlib at a later stage of the tutorial:
import networkx as nx import matplotlib.pyplot as plt G=nx.path_graph(4) print("Nodes of graph: ") print(G.nodes()) print("Edges of graph: ") print(G.edges()) nx.draw(G) plt.savefig("path_graph1.png") plt.show()The created graph is an undirected linearly connected graph, connecting the integer numbers 0 to 3 in their natural order:
Renaming NodesSometimes it is necessary to rename or relabel the nodes of an existing graph. For this purpose the function relabel_nodes is the ideal tool.
networkx.relabel.relabel_nodes(G, mapping, copy=True)
The parameter G is a Graph, the mapping has to be a dictionary and the last parameter is optional. If copy is set to True, - which is the default - a copy will be returned, otherwise, i.e. if it is set to False, the nodes of the graph will be relabelled in place.
In the following example we create again the Path graph with the node labels from 0 to 3. After this we define a dictionary, in which we map each node label into a new value, i.e. city names:
import networkx as nx import matplotlib.pyplot as plt G=nx.path_graph(4) cities = {0:"Toronto",1:"London",2:"Berlin",3:"New York"} H=nx.relabel_nodes(G,cities) print("Nodes of graph: ") print(H.nodes()) print("Edges of graph: ") print(H.edges()) nx.draw(H) plt.savefig("path_graph_cities.png") plt.show()The Python program returns the following output:
Nodes of graph: ['Toronto', 'Berlin', 'New York', 'London'] Edges of graph: [('Toronto', 'London'), ('Berlin', 'New York'), ('Berlin', 'London')]The visualized graph looks liks this:
When we relabelled the graph G in our previous Python exampls, we create a new graph H, while the original graph G was not changed. By setting the copy parameter flag to False, we can relabel the nodes in place without copying the graph. In this case the line
H=nx.relabel_nodes(G,cities)
will be changed to
nx.relabel_nodes(G,cities, copy=False)
This approach might lead to problems, if the mapping is circular, while copying is always safe. The mapping from the nodes of the original node labels to the new node labels doesn't have to be complete. An example of a partial in-place mapping:
import networkx as nx G=nx.path_graph(10) mapping=dict(zip(G.nodes(),"abcde")) nx.relabel_nodes(G, mapping, copy=False) print("Nodes of graph: ") print(G.nodes())Only the nodes 0 to 4 are nenamed, while the other nodes keep the numerical value, as we can see in the output from the program:
$ python3 partial_relabelling.py Nodes of graph: [5, 6, 7, 8, 9, 'c', 'b', 'a', 'e', 'd']The mapping for the nodes can be a function as well:
import networkx as nx G=nx.path_graph(10) def mapping(x): return x + 100 nx.relabel_nodes(G, mapping, copy=False) print("Nodes of graph: ") print(G.nodes())The result:
$ python3 relabelling_with_function.py Nodes of graph: [107, 106, 103, 108, 109, 104, 105, 100, 102, 101]
Previous Chapter: Graphs: PyGraph"
Next Chapter: Finite State Machine in Python
Next Chapter: Finite State Machine in Python | http://python-course.eu/networkx.php | CC-MAIN-2017-09 | refinedweb | 1,095 | 64.61 |
OctaneRender for Daz Studio
edited December 1969 in The Commons
Couldn't find a post about it already, but finally I will have my prefered render in Studio without 3rd party software or export:
OctaneRender for Daz Studio - Initial Overview
cool :)..I've been wanting to look into Octane for a while now...a D|S plugin sure makes things easier!
i have been using octane for 8 months and it is just wonderful. the plugin will make it easier but it's easy right now once you learn octane. i may even opt to stay with the standalone version of octane even after the plugin is released.
it's a gpu renderer, so if you have a fast graphic card render times for my projects are all under 30 min, i see posts about 2 hour renders with daz or lux and I laugh because it would be 15 min in octane. need more speed, get another graphic card (nvidia).
at first my only issue was animation, that was resolved a few months ago. i'm not one for giving out technical advice, but if you are willing to invest in the learning curve it is the single best thing one can do to improve your renders.
the workflow that work for me is:
sketchup
blender
kinect mocap
scupltris/goz
any/all
export into daz, add characters
exoprt into octane
import renders into after effects
end
sometimes i export sketchup or blender direct into octane then export the character into octane.
Looks good. I am a luxrender/Reality user and have used Octane in the past with 3DSMax. My main concerns are the cost and the node system which I never cared for. The standalone license for Octane is just over $100 so i hope the plugin is reasonably priced. Looks like it''s time to upgrade my graphics card for more memory/cuda cores.
Octane is excellent, however I want to comment on the 3Delight thing: The render here and here are done in 3delight, with UberEnvironment2, UberSpot used for the streetlights in the dusk scene), full-raytracing on 20 lights, SSS on the Worm, and ALL of the Urban Sprawl loaded and the 3Delight Linux Standalone 64 bit 10.0.50 ( renderdl -q -progress ) took just over 3 minutes on a Quad-core AMD-Phenom 2.5Ghz.
EDIT: I did not turn off raytracing for the hair. It was the default load, no changes.
$ time renderdl -q -progress mavka_l4w.rib
real 3m19.794s
user 6m10.284s
sys 0m0.723s
3Delight doesn't necessarily NEED to take a long time to render. It just needs the right environment.
Kendall
The node thing is bothering me too. I'll try the demo on my older machine, it has a slower CPU but a cuda compatible video card. I haven't seen any good skin renders in the octane gallery so I don't think this is my cup of tea anyway, there are plenty of nice renders but not skin I could love.
I don't have an issue with my 3delight render speed most of the time, but lux on the other hand is troublesome.8-40 hours using two machines is getting tiresome. Thing with reality is that I have been able to create some nice looking skin. That is a major sticking point for me since I primarily do pinups. Stones, Plastics, glass and metals are really all secondary if not tertiary in my priorities.
Can't wait to get home and try the demo though.
Kendall, what resolution were your renders? I like to do 3000-5000pixels
EDIT: 3Delight license prices are crazy...
For those two, it was done @ 1024x768 for the forums.
EDIT: Also, I forgot to specify that the Raytracing bounces are set to 6, and the shading rate set at 0.20
Kendall
thanks, I'm assuming that 3d-username was talking about substantially larger images than that, but my assumption could be wrong. I would hope not otherwise those quoted times are not impressive at all.
This is the FREE version. Limited to 2 cores. Still took 3 minutes.
EDIT: Just to specify... octane would take a matter of about 30 seconds to render those scenes. My point is that 3Delight doesn't need hours to render. It can take that long, if the conditions are right, but it doesn't always take a long time.
Kendall
This is the FREE version. Limited to 2 cores. Still took 3 minutes.
Kendall???
I have not timed these two specific scenes in DS, as they were quicky "20 minute" like things. I just happened to have their times in a terminal session's buffer for easy posting. I guess I could create a bigger test.
Yes. Whether it takes longer in Studio is dependent on a number of things. First is the number of cores available. The Studio version of 3delight is not core limited as the free standalone is. Second is the complexity of the scene. The overhead of Studio running on a heavy scene can come into play. Third is the OS. The Linux Version of 3Delight *flies*.
I always use the standalone for anything non-trivial as I like to farm out my rendering to other nodes, and continue working. DS is locked until the render finishes if done internal.
Kendall
Thanks for the info.
You know, I made a mistake... I forgot to take into account that those timings were during the day while the Linux system was under load. There were several other users beating on the system during those renders (at least 2 of which always have firefox running with multiple tabs open). I'll see if I can clear up a workstation to run the tests on to give more reliable timings.
Sorry about that.
Kendall
There's a thread at the old forums, if anyone's interested. Click Me
On my system, it's pretty much a fair test between the Linux native and embedded Studio 3Delight versions...I've still got a dual core, so, both of them are using the same number of cores.
I was doing some renders for an item I'm making, fairly simple scene. In Studio it would take about 20 mins to render, stand alone about 3. I don't have exact times, because I was doing them last week. The funny thing, though, was that sometimes it would take nearly ten minutes to dump the rib...other times it dumped the rib quickly. It seemed to hinge on how long it took to generate the shadowmap. In this case, full raytracing was much faster...
The one thing I really dislike about the Octane demo...saving of the image is locked out.
(MODS - We might want to move this out of the Octane thread)
Yeah, that is an issue that I didn't cover... The amount of time necessary to dump the RIB out of DS can be significant. This is especially true if one is using shadowmaps, or complex shaders that need compiling/exncypting that also include textures. Many times the time needed to generate the RIB is significantly greater than the time needed to collect the resources and the rendering combined.
I'm going to try to run some times this evening on the stages, for both Windows, Mac, and Linux.
Kendall
ok so i've been spending the last 2 and a half hours on this. The machine with the cuda card happened to die on me when I tried to boot it up. I spent some time diagnosing and I think its the mainboard. I took the video card out and put it in my second rendering machine that has nothing in it. Got octane running on that to test it out and the test scenes load and render fine.
I am having an issue with my own exports from daz though. if i export the model without a texture i can see it in the viewport (black). if i tell daz to export the materials (collect) then I never see anything in octane. any ideas?
EDIT I might be on to something...
EDITED: I think it just comes down to a memory issue. Ive only got 640 mb of vram and even though I only have one genesis figure that's nude with no hair its too much unless I use one of the crappy basic textures. least I got it figured out.
no problem with some environments, a scene with buildings and trees, bushes, rocks and a creek only easts up 111mb vram.
the three textures that would load eat up about 400mb of memory. thats with no clothes or hair or environments.
even with defaults things don't look too bad, and it definitely is fast. but even with a two gig card i think it would be hard to do anything without a ton of time being spent on reducing texture sizes.
The 'must fit in memory' is a killer for GPU-only renderers. It also affects Lux's SLG all-GPU render module as well. Reality at least has the option to reduce texture sizes when collecting them during export, but that kind of defeats the purpose. Who cares if it can render in 30 seconds if the textures look like crud because they've been reduced to fit in VRAM?
The hybrid modes in LuxRender, while still having issues, at least get around the memory constraint limit. Only the geometry has to fit in VRAM; the textures stay entirely in CPU-space.
I haven't tried the GPU stuff in lux lately, it was all crud the last time I tried. Nothing looked remotely correct. Guess it's worth trying again. My main computer has a solid video card(2gb radeon something or other), just not a CUDA card. I'm doing this testing to see if I can get decent skin results with my old hardware(in this case an nvidia with 640mb ram) before I spend a boat load of funds to really dig into this.
if I could get images that look like my luxrenders in 1/10th of the time I would do them more and I would be interested in getting something to work. Not sure Octane is the answer with its overly deep nodes. I was able to give the skin a bit of sheen but I couldn't copy the setting to all skin surfaces. (or I just couldnt figure out how) So I had to do it manually.
That's one of the main differences...Octane pretty much thrives on procedurals. The basis of its materials are procedurals. The reason is, they don't need all that much memory. If you can 'cook' the right shader, all you'll need for great skin is a couple of greyscale maps (displacement/bump/and such). Even large size/highly detailed ones are a fraction of the memory a full color 'texture map' requires.
Basically, it's a lie about how many CUDA cores a card has determines how fast Octane will go. I recently saw a motherboard with 4 PCIe 16x slots...I'd take that board and 4 mid/upper end consumer CUDA capable cards over a monster Quadro with a bazillion CUDA cores and only 2 to 3 GB of memory, especially 4 2GB cards.
If folks are thinking about how horrible it is because they are running into memory errors with 4 GB of RAM...just think how bad it will be when they go back to 1 or 2 GB.??
I'm going to take this answer to another thread. Look for it there.
Kendall
I'm convinced Octane can be very fast. From what I hear and have seen it's not really what I'm looking for.
Most images I've seen look like tech demos and very few are people driven(i've looked at about a hundred or so). Most of the renders of people I have seen they are fully clothed people with masks. The only bare skin I have seen isn't "pretty" its very technical (as in I would say "that's impressive technically" not "that's sexy").
Not to say that you can't make pretty skin. I just wish I had a good enough video card to experiment more, I would love to see if I could squeeze something hot out of it but with such a steep investment I just can't justify it.
Of course for people with extra funds they will say that's really cheap, even for hobbyist! If I could see myself making great skin that rendered quickly I'd probably be all over this, but I'm not sure I can devote the time to learning how to do procedural skin. I've collected a good selection of human textures, and that is what defines many of my characters. Stretch marks, makeup and imperfections are a big win for me, that's what makes them unique, appealing and human.
If there is a tutorial or guide somewhere I'd look into it, but I didn't really find one. I did see a lot of "make your skin less waxy" comments though :)
Right now, I think the reason there isn't any 'sexy' skin for octane is because it's main use has been 'technical'. Obviously, if they are investing the time/effort into making a plugin for Studio it is capable of the 'art', it's just not there yet. (Isn't the main selling point the Autodesk plugin?)
To make it really practical, on my end...a huge increase in graphics card memory and a drop in price for said cards. Because, some of the scenes I can come up with will bring a 2 GB card to its knees weeping and crying worse than the latest hot game could.
you have a valid point mjc1016, I've been tinkering with the standalone and forgot what got me interested is the future daz studio plugin. I only saw a little bit of the videos and read the forums and it seemed like the goal was to simply the texture process and maybe even help with optimization? I have been meaning to review those articles again but have been too busy trying to make this skin look less shiny without making it look dull. I was able to get a character to work with some hair for testing purposes. The background i brought in ended up with some flat looking textures, like something from the sega saturn days.
anyway I still think there is potential otherwise I wouldn't have been wasting my whole night on it. Even if I said "bout to give up" it was really, "let me try this and that first".
Mec4d is one of the biggest proponents of Octane, and is probably one of the most experienced with it here on the forums. Does anyone want to contact her and invite her to enlighten the masses?
Kendall
i would wait for the plugin then.
that should be more in the way of one click renders, but that may not even have super basic settings.
most of the best samples with daz characters are in the forum section for octane customers only.
I didn't mean to promote SLG... :) It's still unusable for production renders, as far as I'm concerned. Too many features it doesn't support, or renders incorrectly vs. LuxRender proper. Hybrid Path in LuxRender works reasonably as of the 1.0 release of Lux, but Hybrid Bidir still needs lots of work. (Reality calls Path 'External' and Bidir 'Internal' on the Options tab.) Hybrid modes are nowhere near as fast as SLG, though. And they can be SLOWER than CPU-only if your scene is fairly simple. (The overhead of sending the rays to the GPU to trace outweighs the speed of the GPU calculating those rays when there isn't much geometry.)
3d-username, not sure anyone asked for super basic settings. But not having to dedicate a weekend to doing basic edits to the textures for one scene would be nice. Like I said I didn't even see a way to copy settings from one texture to another. So I have to assume it's not possible other than manually.
cwichura, yeah I tried SLG and found it to be about the same as before lol. No harm. | http://www.daz3d.com/forums/viewreply/110985/ | CC-MAIN-2017-34 | refinedweb | 2,748 | 81.53 |
Sometimes you need to perform one-off initialisation logic before your app starts up properly. For example, you might want to validate your configuration is correct, populate a cache, or run database migrations. In this post, I look at the options available and show some simple methods and extension points that I think solve the problem well.
I start by describing the built-in solution to running synchronous tasks with
IStartupFilter. I then walk through the various options for running asynchrnous tasks. You could (but possibly shouldn't) use
IStartupFilter or
IApplicationLifetime events to run asynchronous tasks. You could use the
IHostedService interface to run one-off tasks without blocking app startup. However the only real solution is to run the tasks manually in program.cs. In my next post I'll show a suggested proposal that makes this process a little easier.
Why do we need to run tasks on app startup?
It's very common to need to run various sorts of initialisation code before an app is able to start up and begin serving requests. In an ASP.NET Core app there are clearly lots of things that need to happen, for example:
- Determining the current hosting environment
- Configuration of the dependency injection container
- Building of the dependency injection container
- Configuration of the middleware pipeline
All of these steps need to occur to bootstrap the application. However there are often one-off tasks you want to perform before the
WebHost is run and starts listening for requests. For example:
- Checking your strongly-typed configuration is valid.
- Priming/populating a cache with data from a database or API
- Running database migrations before starting the app. (This often isn't a good idea, but may be good enough for some apps).
Sometimes these tasks don't have to be run before your app starts serving requests. The cache priming example for instance - if its a well behaved cache, then it shouldn't matter if the cache is queried before it's primed. On the other hand, you almost certainly want your database to be migrated before your app starts serving requests!
There are some examples of where one-off initialisation tasks are required by the ASP.NET Core framework itself. A good example of this is the Data Protection subsystem, used for transient encryption (cookie values, anti-forgery tokens etc). This subsystem must be initialised before the app can start handling any requests. To handle this, they use a
IStartupFilter.
Running tasks synchronously with
IStartupFilter
I've written previously about
IStartupFilter, as it's a really useful interface to have in your tool belt for customising your apps:
- Exploring IStartupFilter in ASP.NET Core (an introduction to
IStartupFilter)
- Understanding your middleware pipeline with the Middleware Analysis package (spoiler alert - it uses an
IStartupFilter)
- Adding validation to strongly typed configuration objects in ASP.NET Core (again, using an
IStartupFilter)
If you're new to the filter, I recommend reading my introductory post, but I'll provide a brief summary here.
IStartupFilters are executed as part of the process of configuring your middleware pipeline (typically done in
Startup.Configure()). They allow you to customise the middleware pipeline that's actually created by the app, by inserting extra pieces of middleware, forking it, or doing any number of other things. For example, the
AutoRequestServicesStartupFilter shown below inserts a new piece of middleware at the start of your pipeline:
public class AutoRequestServicesStartupFilter : IStartupFilter { public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next) { return builder => { builder.UseMiddleware<RequestServicesContainerMiddleware>(); next(builder); }; } }
This is useful, but what does it have to do with running one-off tasks on app startup?
The key feature
IStartupFilter provides is a hook into the early app startup process, after the configuration is set and the dependency injection container is configured, but before the app is ready to start. That means you can use dependency injection with
IStartupFilters and so can run pretty much any code. The
DataProtectionStartupFilter for example is used to initialise the Data Protection system. I used a similar
IStartupFilter approach to provide eager validation of strongly typed configuration.
The other very useful feature is it allows you to add tasks to be executed by registering a service with the DI container. That means as a library author, you could register a task to be run on app startup, without the app author having to invoke it explicitly.
So why can't we just use
IStartupFilter to run asynchronous tasks on startup?
The problem is that the
IStartupFilter is fundamentally synchronous. The
Configure() method (which you can see in the code above) doesn't return a
Task, so it's not a good idea to be trying to do sync over async. I'll discuss this a little later, but for now a quick detour.
Why not use health checks?
ASP.NET Core 2.2 introduces a health checks feature for ASP.NET Core applications, which allows you to query the "health" of an application, exposed via an HTTP endpoint. When deployed, orchestration engines like Kubernetes, or reverse proxies like HAProxy and NGINX can query this endpoint to check if your app is ready to start receiving requests.
You could use the health check feature to ensure your application doesn't start servicing requests (i.e. returning a "healthy" status from the health check endpoint) until all the required one-off tasks are complete. However this has a few downsides:
- The
WebHostand Kestrel itself would startup before the one-off tasks have been executed. While they wouldn't receive "real" requests (only health check requests) that might still be an issue.
- It introduces extra complexity. As well as adding the code to run the one-task, you need to add a health check to test if the task completed, and synchronise the status of the task.
- The start of the app would still be delayed until all the tasks have completed, so it's unlikely to reduce startup time.
- If a task fails, the app would continue to run in a "dead" state, where the health check would never pass. That might be acceptable, but personally I prefer an app to fail immediately.
- The health checks aspect still doesn't define how to actually run the task, just whether the task completed successfully. You still need to decide on a mechanism to run the tasks on startup.
For me, health checks doesn't seem like the right fit for the one-off tasks scenario. They may well be useful for some of the examples I've described, but I don't think they fit all cases. I really want to be able to run my one-off tasks on app startup, before the
WebHost is run
Running asynchronous tasks
I've spent a long time discussing all the ways not to achieve my goal, how about some solutions! In this section I walk through some of the possibilties for running asynchronous tasks (i.e. tasks that return a
Task and require
await-ing). Some are better than others, and some you should avoid, but I wanted to cover the various options.
To give something concrete to discuss, I'll consider the database migration example. In EF Core, you can migrate a database at runtime by calling
myDbContext.Database.MigrateAsync(), where
myDbContext is an instance of your application's
DbContext.
There's also a synchronous version of the method,
Database.Migrate(), but just pretend there isn't for now!
1. Using
IStartupFilter
I described earlier how you can use
IStartupFilter to run synchronous tasks on app startup. Unfortunately the only way to run asynchronous tasks would be to use a "sync over async" approach, in which we call
GetAwaiter().GetResult():
Warning: this code uses bad
asyncpractices.
public class MigratorStartupFilter: IStartupFilter { // Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next) { // Create a new scope to retrieve scoped services using(var scope = _seviceProvider.CreateScope()) { // Get the DbContext instance var myDbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>(); //Do the migration by blocking the async call myDbContext.Database.MigrateAsync() .GetAwaiter() // Yuk! .GetResult(); // Yuk! } // don't modify the middleware pipeline return next; } }
It's very possible that this won't cause any issues - this code is only running on app startup, before we're serving requests, and so deadlocks seem unlikely. But frankly, I couldn't say for sure and I avoid code like this wherever possible.
David Fowler is putting together some great (work in progress) guidance on doing asynchronous programming correctly. I highly recommend reading it!
2. Using
IApplicationLifetime events
I haven't discussed this much before, but you can receive notifications when your app is starting up and shutting down with the
IApplicationLifetime interface. I won't go into detail about it here, as it has a few problems for our purposes:
IApplicationLifetimeuses
CancellationTokens for registering callbacks, which means you can only execute callbacks synchronously. This essentially means your stuck with a sync over async pattern, whatever you do.
- The
ApplicationStartedevent is only raised after the
WebHostis started, so the tasks are run after the app starts accepting requests.
Given they don't solve the sync over async problem of
IStartupFilters, and don't block app startup, we'll leave
IApplicationLifetime and move on to the next possibility.
3. Using
IHostedService to run asynchronous tasks
IHostedService allows ASP.NET Core apps to execute long-running tasks in the background, for the duration of the app's lifetime. They have many different uses - you could use them to run periodic tasks on a timer, to handle other messaging paradigms, such as RabbitMQ messages, or any number of other things. In ASP.NET Core 3.0, even the ASP.NET web host will likely be built on top of
IHostedService.
The
IHostedService is inherently asynchronous, with both a
StartAsync and a
StopAsync function. This is great for us, as it means no more sync over async! Implementing a database migrator as a hosted service might look like something like this:
public class MigratorHostedService: IHostedService { // async Task StartAsync(CancellationToken cancellationToken) { // Create a new scope to retrieve scoped services using(var scope = _seviceProvider.CreateScope()) { // Get the DbContext instance var myDbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>(); //Do the migration asynchronously await myDbContext.Database.MigrateAsync(); } } public Task StopAsync(CancellationToken cancellationToken) { // noop return Task.CompletedTask; } }
Unfortunately,
IHostedService isn't the panacea we might hope for. It allows us to write true async code, but it has a couple of problems:
- The typical implementation for
IHostedServices expects the
StartAsyncfunction to return relatively quickly. For background services, it's expected that you'll start the service asynchronously, but that the bulk of the work will occur outside of that startup code (see the docs for an example). Migrating the database "inline" isn't a problem as such, but it will block other
IHostedServices from starting, which may or may not be expected.
IHostedService.StartAsync()is called after the
WebHostis started, so you can't use this approach to run tasks before your app starts up.
The biggest problem is the second one - the app will start accepting requests before the
IHostedService has run the database migrations, which isn't what we want. Back to the drawing board.
4. Manually running tasks in Program.cs
None of the solutions shown so far offer the complete solution. They either require using sync over async programming, (which while probably ok in the context of application startup, isn't great to encourage), or don't block app startup. There's a simple solution I've ignored so far, and that's to stop trying to use framework mechanisms, and just do the job ourselves.
The default Program.cs used in the ASP.NET Core templates builds and runs an
IWebHost in one statement in the
Main function:
public class Program { public static void Main(string[] args) { CreateWebHostBuilder(args).Build().Run(); } public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>(); }
However, there's nothing stopping you from running code after
Build() has created the
WebHost, but before you call
Run(). Add to that the C# 7.1 feature of allowing your
Main function to be async, and we have a reasonable solution: accepting requests // There's an async overload, so we may as well use it await webHost.RunAsync(); } public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>(); }
This solution has a number of advantages:
- We're now doing true async, no sync over async required
- We can execute the task asynchronously
- The app doesn't accept requests until after our tasks have executed
- The DI container has been built at this point, so we can use it to create services.
It's not all good news unfortunately. We're still missing a couple of things:
- Even though the DI container has been built, the middleware pipeline has not. That doesn't happen until you call
Run()or
RunAsync()on the
IWebHost. At that point the middleware pipeline is built, the
IStartupFilters are executed, and the app is started. If your async task requires configuration that happens within any of these steps, you're out of luck
- We've lost the ability to automatically run tasks by adding a service to the DI container. We have to remember to manually run the task instead.
If those caveats aren't a problem, then I think this final option provides the best solution to the problem. In my next post I'll show a couple of ways we can take this basic example and build on it, to make something a little easier to use.
Summary
In this post I discussed the need to run tasks asynchronously on app startup. I described some of the challenges of doing this. For synchronous tasks,
IStartupFilter provides a useful hook into the ASP.NET Core app startup process, but running asynchronous tasks requires doing sync over async programming which is generally a bad idea. I described a number of the possible options for running async tasks, the best of which I found is "manually" running the task in Program.cs, between building the
IWebHost and running it. In the next post I'll present some code to make this pattern easier to use. | https://andrewlock.net/running-async-tasks-on-app-startup-in-asp-net-core-part-1/ | CC-MAIN-2019-47 | refinedweb | 2,349 | 54.42 |
Proejct Note : please locate your project e.g [C:\Users\b\Desktop\WP\WP] from command prompt then execute the npm install
In this article I will demostrate to start with Angular 2.0 using Microsoft TypeScript 2.0 over .NET Framework with Visual Studio 2015.
Node.js is an open-source, cross-platform JavaScript runtime environment for developing a diverse variety of tools and applications. Visual Studio is an IDE for developing variety of application developed and supported by Microsoft. In this article, I will show you to work with Node.js using the IDE Visual Studio 2015.
Before we start you need to make sure few following requirements that your system meet.
NodeJS : It is noting but server side javascript. You can download NodeJs setup for your machine from here.
NPM : NPM is kind of resource manager for multiple piece of scripts that may like to work together for single project it provides them environment. You can find npm here.
TypeScript ^2.0 : TypeScript is a programming language and used by Angular 2.0 developer team as core language to work with Angular 2.0 You can download the setup from here.
Visual Studio 2015 with Update 3 is the said to be minimum requirement to work with Node.js application configurations and settings.
Now, let get started with building a simple application and launch your first application. In this application we will go through 11 simple steps to launch the application. I have tried to simplify the steps by writting possible details about it. Not in general but it may require specific debugging on your system you may post your comment below.
1) File -> New -> Project -> [Create an Empty Web project from templete] -> [Click OK and launch Project]
2) Copy the project path and open it in command prompt to do this right click on Solution Explorer and [Open Folder in File Explorer]
e.g cd C:\Users\b\Documents\visual studio 2017\Projects\StatWorkks\WebApplication1 Or
cd {Your Path}
3) Check few this to ensure the things are correct so far by running these commands
node -v It should be > v6.x.x
npm -v It should be > v4.x.x
tsc -v It should be > v2.x.x
Get update if you find any older version of any of these. If tsc gives an older version then it mean you probably have installed any version of typeScript earlier and may require to update the system variable. To do this go to Computer -> Properties(right click) -> Advance system settings -> Environment variable -> System variable -> path(click edit) then Find the typescript path and update it to latest.
Warning : Change carefully in system variable if you are not sure then know it before any change.
4) Now, go to command prompt and run the following command npm init. Give it a name 'angular2' when ask and accept all the default by hitting enter. Eventually it will adds a package.json file in your project. Include this file in your project be right click. Change the code to the following (remember we could have done this directly by GUI but I proceeded this way to let you explore the way npm usually works). Now, copy and past this code in your just included package.json file
{
"name": "angular2",
"version": "1.0.0",
"description": "This is demo app",
"main": "index.ts",
.2",
"systemjs": "0.19.40",
"core-js": "^2.4.1",
"reflect-metadata": "^0.1.8",
"rxjs": "5.0.1",
"zone.js": "^0.7.4"
},
"devDependencies": {
"@types/core-js": "^0.9.35",
"@types/jasmine": "^2.5.36",
"@types/node": "^6.0.46",
"canonical-path": "0.0.2",
"concurrently": "^3.1.0",
"http-server": "^0.9.0",
"jasmine-core": "~2.4.1",
"karma": "^1.3.0",
"karma-chrome-launcher": "^2.0.0",
"karma-cli": "^1.0.1",
"karma-jasmine": "^1.0.2",
"karma-jasmine-html-reporter": "^0.2.2",
"lite-server": "^2.2.2",
"lodash": "^4.16.4",
"protractor": "~4.0.14",
"rimraf": "^2.5.4",
"tslint": "^3.15.1",
"typescript": "~2.0.10"
},
"scripts": {
"test":
"echo \"Error: no test specified\" && exit 1"
},
"author": "ahmad anas",
"license": "MIT"
}
5) At root directory add typings.json and below code in it( you can also try this with command prompt in same directory execute the command npm i -g typings)
{
"
},
"name": "angular2"
}
6) At root directory add tsconfig.json with following code( You can configure this also by npm install tsconfig)
{
"compilerOptions":
{
"experimentalDecorators": true,
"moduleResolution": "node"
}
}
7) Add index.html and past the following code
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title></title>
<!-- Polyfill(s) for older browsers -->
<script src="node_modules/core-js/client>
<my-app>
</my-app>
</body>
</html>
8) Now add app folder in root directory and add three files in this app.component.ts, app.module.ts and index.ts
Note : click no if any configuration popups
9) Add the following code in all three relevent files
app.component.ts
import { Component } from "@angular/core"
@Component({
selector: 'my-app',
template : `<h1>Welcome to Angular 2.0 Application<h1><div>{{ msg }}</div>`
})
export class AppComponent
{
msg: string = "Demo. Thanks You..!!"
}
app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from "./app.component";
@NgModule({
imports: [BrowserModule],
declarations: [AppComponent],
bootstrap: [AppComponent]
})
export class AppModule
{
}
index.ts
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';
platformBrowserDynamic().bootstrapModule(AppModule);
10) Add systemjs.config.js at root folder with following codes
/**
* System configuration for Angular samples
* Adjust as necessary for your application needs.
*/
(function (global) {
System.config({
paths: { // paths serve as alias
'npm:': 'node_modules/'
},
map: { // map tells the System loader where to look for things
//: {
main: './index.js',
defaultExtension: 'js'
},
rxjs: {
defaultExtension: 'js'
}
}
});
})(this);
11) On the command prompt in same directory execute the command npm install
Note : Additional Setting may bother you to set it go to Tool -> Option -> Project and Solution Click on External Web Tools and place $(PATH) before $(DevEnvDir)\{Anything}..
At the end of this application you must be able to launch you first AngularJs 2.0 Application with Visual Studio 2015. This will be kick and ride must be on. Happy learning,. | https://www.codeproject.com/Articles/1164014/Kick-Start-with-AngularJS-and-Visual-Studio | CC-MAIN-2017-43 | refinedweb | 1,024 | 52.15 |
import numpy as np
import matplotlib.pyplot as plt
import keras
%matplotlib inline
Using TensorFlow backend.
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains sentences (strings)
- Y contains an integer label between 0 and 4 corresponding to the emoji for each sentence
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
import csv

def read_csv(filename):
    phrase = []
    emoji = []
    with open(filename) as csvDataFile:
        csvReader = csv.reader(csvDataFile)
        for row in csvReader:
            phrase.append(row[0])
            emoji.append(row[1])
    X = np.asarray(phrase)
    Y = np.asarray(emoji, dtype=int)
    return X, Y
X_train, Y_train = read_csv('D:/dataset/NLP/emoji/train_emoji.csv')
X_test, Y_test = read_csv('D:/dataset/NLP/emoji/tesss.csv')
maxLen = len(max(X_train, key=len).split())
maxLen
10
import emoji

emoji_dictionary = {"0": "\u2764\uFE0F",    # :heart: prints a black instead of red heart depending on the font
                    "1": ":baseball:",
                    "2": ":smile:",
                    "3": ":disappointed:",
                    "4": ":fork_and_knife:"}

def label_to_emoji(label):
    """
    Converts a label (int or string) into the corresponding emoji code (string) ready to be printed
    """
    return emoji.emojize(emoji_dictionary[str(label)], use_aliases=True)
index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
I am proud of your achievements .
Y_oh_train = keras.utils.to_categorical(Y_train, 5)
Y_oh_test = keras.utils.to_categorical(Y_test, 5)
index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
0 is converted into one hot [1. 0. 0. 0. 0.]
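`keras.utils.to_categorical` is doing nothing more than indexing into an identity matrix. A minimal numpy equivalent, using made-up labels purely for illustration:

```python
import numpy as np

# Made-up labels for illustration only.
Y = np.array([0, 3, 1])

# Row k of the 5x5 identity matrix is the one-hot vector for class k,
# so one-hot encoding is just fancy indexing:
Y_oh = np.eye(5)[Y]

print(Y_oh[0])  # label 0 -> [1. 0. 0. 0. 0.]
```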
def read_glove_vecs(glove_file):
    with open(glove_file, encoding="utf8") as f:
        words = set()
        word_to_vec_map = {}
        for line in f:
            line = line.strip().split()
            curr_word = line[0]
            words.add(curr_word)
            word_to_vec_map[curr_word] = np.array(line[1:], dtype=np.float64)
        i = 1
        words_to_index = {}
        index_to_words = {}
        for w in sorted(words):
            words_to_index[w] = i
            index_to_words[i] = w
            i = i + 1
    return words_to_index, index_to_words, word_to_vec_map
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('D:/data/glove.6B.50d.txt')
You've loaded:
- `word_to_index`: dictionary mapping each word to its index in the vocabulary
- `index_to_word`: dictionary mapping each index back to its corresponding word
- `word_to_vec_map`: dictionary mapping each word to its GloVe vector representation (the same role `embeddings_index` played in the previous notebook)
Run the following cell to check if it works.
word = "ali"
index = 113317
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
the index of ali in the vocabulary is 51314 the 113317th word in the vocabulary is cucumber
word_to_vec_map["ali"]
array([-0.71587 , 0.7874 , 0.71305 , -0.089955, 1.366 , -1.3149 , 0.7309 , 0.79725 , 0.47211 , 0.53347 , 0.37542 , -0.10256 , -1.0003 , -0.31226 , 0.26217 , 0.92426 , 0.43014 , -0.015593, 0.4149 , 0.88286 , 0.10869 , 0.95213 , 1.1807 , 0.06445 , -0.05814 , -1.797 , -0.18432 , -0.41754 , -0.73625 , 1.1607 , 1.5932 , -0.70268 , -0.61621 , 0.47118 , 0.95046 , 0.35206 , 0.6072 , 0.59339 , -0.47091 , 1.4916 , 0.27146 , 1.8252 , -1.2073 , -0.80058 , 0.52558 , -0.33346 , -1.4102 , -0.21514 , 0.12945 , -0.69603 ])
def sentence_to_avg(sentence, word_to_vec_map):
    # Split sentence into list of lower case words
    words = sentence.lower().split()
    # Initialize the average word vector; it has the same shape as the word vectors
    avg = np.zeros((50,))
    # Sum the word vectors, then divide by the number of words
    for w in words:
        avg += word_to_vec_map[w]
    avg = avg / len(words)
    return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
You now have all the pieces to finish implementing the
model() function. After using
sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters.
Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are: $$ z^{(i)} = W \cdot avg^{(i)} + b$$ $$ a^{(i)} = softmax(z^{(i)})$$ $$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k \log(a^{(i)}_k)$$
It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the $avg^{(i)}$ representation anyway, let's not bother this time.
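Written out in numpy for a single made-up example (the shapes mirror the equations above: $n_y = 5$ classes, $n_h = 50$ GloVe dimensions), the forward pass and loss take just a few lines:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

# Made-up parameters and input, standing in for the real ones.
rng = np.random.RandomState(1)
W = rng.randn(5, 50) / np.sqrt(50)   # softmax weights, Xavier-style init
b = np.zeros(5)
avg = rng.randn(50)                  # stand-in for sentence_to_avg(sentence, word_to_vec_map)
y_oh = np.eye(5)[2]                  # pretend the true label is class 2

z = np.dot(W, avg) + b               # z = W . avg + b
a = softmax(z)                       # a = softmax(z)
loss = -np.sum(y_oh * np.log(a))     # cross-entropy for this one example

print(round(a.sum(), 6), loss > 0)   # probabilities sum to 1; loss is a positive scalar
```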
def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def predict(X, Y, W, b, word_to_vec_map):
    """
    Given X (sentences) and Y (emoji indices), predict emojis and compute the accuracy of your model over the given set.

    Arguments:
    X -- input data containing sentences, numpy array of shape (m, None)
    Y -- labels, containing index of the label emoji, numpy array of shape (m, 1)

    Returns:
    pred -- numpy array of shape (m, 1) with your predictions
    """
    m = X.shape[0]
    pred = np.zeros((m, 1))
    for j in range(m):                      # Loop over examples
        # Split jth example (sentence) into list of lower case words
        words = X[j].lower().split()
        # Average the words' vectors
        avg = np.zeros((50,))
        for w in words:
            avg += word_to_vec_map[w]
        avg = avg / len(words)
        # Forward propagation
        Z = np.dot(W, avg) + b
        A = softmax(Z)
        pred[j] = np.argmax(A)
    print("Accuracy: " + str(np.mean((pred[:] == Y.reshape(Y.shape[0], 1)[:]))))
    return pred

def model(X, Y, word_to_vec_map, learning_rate=0.01, num_iterations=401):
    """
    Model to train word vector representations in numpy.

    Arguments:
    X -- input data, numpy array of sentences as strings, of shape (m, 1)
    Y -- labels, numpy array of integers between 0 and 4, numpy array of shape (m, 1)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    learning_rate -- learning rate for the stochastic gradient descent algorithm
    num_iterations -- number of iterations

    Returns:
    pred -- vector of predictions, numpy array of shape (m, 1)
    W -- weight matrix of the softmax layer, of shape (n_y, n_h)
    b -- bias of the softmax layer, of shape (n_y,)
    """
    np.random.seed(1)

    m = Y.shape[0]    # number of training examples
    n_y = 5           # number of classes
    n_h = 50          # dimensions of the GloVe vectors

    # Initialize parameters using Xavier initialization
    W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
    b = np.zeros((n_y,))

    # Convert Y to Y_oh ("Y one hot") with n_y classes
    Y_oh = keras.utils.to_categorical(Y, n_y)

    # Optimization loop
    for t in range(num_iterations):          # Loop over the number of iterations
        for i in range(m):                   # Loop over the training examples
            # Average the word vectors of the words from the i'th training example
            avg = sentence_to_avg(X[i], word_to_vec_map)

            # Forward propagate the avg through the softmax layer
            z = np.dot(W, avg) + b
            a = softmax(z)

            # Compute cost using the i'th training label's one-hot representation and "a"
            cost = -np.sum(Y_oh[i] * np.log(a))

            # Compute gradients
            dz = a - Y_oh[i]
            dW = np.dot(dz.reshape(n_y, 1), avg.reshape(1, n_h))
            db = dz

            # Update parameters with stochastic gradient descent
            W = W - learning_rate * dW
            b = b - learning_rate * db

        if t % 100 == 0:
            print("Epoch: " + str(t) + " --- cost = " + str(cost))
            pred = predict(X, Y, W, b, word_to_vec_map)

    return pred, W, b
pred, W, b = model(X_train, Y_train, word_to_vec_map)
Epoch: 0 --- cost = 1.9520498812810072 Accuracy: 0.3484848484848485 Epoch: 100 --- cost = 0.07971818726014807 Accuracy: 0.9318181818181818 Epoch: 200 --- cost = 0.04456369243681402 Accuracy: 0.9545454545454546 Epoch: 300 --- cost = 0.03432267378786059 Accuracy: 0.9696969696969697 Epoch: 400 --- cost = 0.02906976783312465 Accuracy: 0.9772727272727273
print(pred)
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
Training set:
Accuracy: 0.9772727272727273
Test set:
Accuracy: 0.8571428571428571
def print_predictions(X, pred):
    print()
    for i in range(X.shape[0]):
        print(X[i], label_to_emoji(int(pred[i])))
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4], [3]])
pred = predict(X_my_sentences, Y_my_labels, W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
Accuracy: 0.8333333333333334
def plot_confusion_matrix(y_actu, y_pred, title='Confusion matrix', cmap=plt.cm.gray_r):
    df_confusion = pd.crosstab(y_actu, y_pred.reshape(y_pred.shape[0],), rownames=['Actual'], colnames=['Predicted'], margins=True)
    df_conf_norm = df_confusion / df_confusion.sum(axis=1)
    plt.matshow(df_confusion, cmap=cmap)  # imshow
    # plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(df_confusion.columns))
    plt.xticks(tick_marks, df_confusion.columns, rotation=45)
    plt.yticks(tick_marks, df_confusion.index)
    # plt.tight_layout()
    plt.ylabel(df_confusion.index.name)
    plt.xlabel(df_confusion.columns.name)
import pandas as pd

print(Y_test.shape)
print('           ' + label_to_emoji(0) + '    ' + label_to_emoji(1) + '    ' + label_to_emoji(2) + '    ' + label_to_emoji(3) + '   ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
def sentences_to_indices(X, word_to_index, max_len):
    """
    Converts an array of sentences (strings) into an array of indices corresponding
    to the words in the sentences. The output shape is (m, max_len), zero-padded.
    """
    m = X.shape[0]                          # number of training examples
    X_indices = np.zeros((m, max_len))
    for i in range(m):                      # Loop over training examples
        # Convert the ith training sentence to lower case and split it into words
        sentence_words = X[i].lower().split()
        # Loop over the words of sentence_words
        for j, w in enumerate(sentence_words):
            # Set the (i,j)th entry of X_indices to the index of the correct word.
            X_indices[i, j] = word_to_index[w]
    return X_indices
Run the following cell to check what
sentences_to_indices() does, and check your results.
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1, word_to_index, max_len=5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
X1 = ['funny lol' 'lets play baseball' 'food is ready for you'] X1_indices = [[155345. 225122. 0. 0. 0.] [220930. 286375. 69714. 0. 0.] [151204. 192973. 302254. 151349. 394475.]]
Let's build the
Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of
sentences_to_indices() to it as an input, and the
Embedding() layer will return the word embeddings for a sentence.
The layer is built in a few steps:
- Initialize the embedding matrix as a numpy array of zeros with shape (vocab_len, dimension of the word vectors).
- Fill in each row of the embedding matrix with the vector representation of a word from `word_to_vec_map`.
- Make the layer non-trainable by setting `trainable = False` when calling `Embedding()`. If you were to set `trainable = True`, then the optimization algorithm would be allowed to modify the values of the word embeddings.
- Set the layer's weights to the embedding matrix.
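Conceptually, once its weights are set and frozen, an `Embedding` layer is just a row lookup into the embedding matrix. A toy numpy sketch of what the layer computes, with a made-up 4-word vocabulary and 3-dimensional vectors:

```python
import numpy as np

# Toy embedding matrix; row 0 is the all-zeros padding vector,
# mirroring the real layer where index 0 is reserved.
emb_matrix = np.array([[0.0, 0.0, 0.0],   # index 0: padding
                       [0.1, 0.2, 0.3],   # index 1
                       [0.4, 0.5, 0.6],   # index 2
                       [0.7, 0.8, 0.9]])  # index 3

# A batch of 2 "sentences", each padded to max_len = 3 word indices.
X_indices = np.array([[1, 2, 0],
                      [3, 3, 1]])

# The frozen embedding layer amounts to this lookup:
embeddings = emb_matrix[X_indices]

print(embeddings.shape)  # (2, 3, 3): (batch, max_len, emb_dim)
```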
from keras.layers import Embedding

def pretrained_embedding_layer(word_to_vec_map, word_to_index):
    """
    Creates a Keras Embedding() layer and loads in the pre-trained 50-dimensional GloVe vectors.
    """
    vocab_len = len(word_to_index) + 1              # adding 1 to fit Keras embedding (index 0 is reserved)
    emb_dim = word_to_vec_map["cucumber"].shape[0]  # dimensionality of the GloVe vectors (= 50)

    # Initialize the embedding matrix as a numpy array of zeros
    emb_matrix = np.zeros((vocab_len, emb_dim))

    # Fill each row of the embedding matrix with the word vector of the corresponding word
    for word, index in word_to_index.items():
        emb_matrix[index, :] = word_to_vec_map[word]

    # Define the Keras embedding layer; trainable = False keeps the GloVe vectors fixed
    embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)

    # Build the embedding layer before setting its weights
    embedding_layer.build((None,))

    # Set the weights of the embedding layer to the embedding matrix
    embedding_layer.set_weights([emb_matrix])

    return embedding_layer
from keras.layers import Input, LSTM, Dense, Dropout, Activation
from keras.models import Model

def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the Emojify-v2 model's graph.
    """
    # Define sentence_indices as the input of the graph; it is of shape input_shape
    # and dtype 'int32' (as it contains word indices).
    sentence_indices = Input(input_shape, dtype='int32')

    # Create the embedding layer pretrained with GloVe vectors
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)

    # Propagate sentence_indices through the embedding layer
    embeddings = embedding_layer(sentence_indices)

    # Propagate the embeddings through an LSTM layer with a 128-dimensional hidden state,
    # returning the full sequence of hidden states
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Second LSTM layer, returning only the last hidden state
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer to get back a batch of 5-dimensional vectors
    X = Dense(5)(X)
    # Add a softmax activation
    X = Activation('softmax')(X)

    # Create the Model instance which converts sentence_indices into X
    model = Model(inputs=sentence_indices, outputs=X)

    return model
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 10)                0
_________________________________________________________________
embedding_1 (Embedding)      (None, 10, 50)            20000050
_________________________________________________________________
lstm_1 (LSTM)                (None, 10, 128)           91648
_________________________________________________________________
dropout_1 (Dropout)          (None, 10, 128)           0
_________________________________________________________________
lstm_2 (LSTM)                (None, 128)               131584
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 645
_________________________________________________________________
activation_1 (Activation)    (None, 5)                 0
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
It's time to train your model.

X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = keras.utils.to_categorical(Y_train, 5)
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
Epoch 1/50 132/132 [==============================] - 2s 18ms/step - loss: 1.6072 - acc: 0.2045 Epoch 2/50 132/132 [==============================] - 0s 2ms/step - loss: 1.5902 - acc: 0.2652 Epoch 3/50 132/132 [==============================] - 0s 2ms/step - loss: 1.5738 - acc: 0.2727 Epoch 4/50 132/132 [==============================] - 0s 2ms/step - loss: 1.5541 - acc: 0.3106 Epoch 5/50 132/132 [==============================] - 0s 2ms/step - loss: 1.5373 - acc: 0.2879 Epoch 6/50 132/132 [==============================] - 0s 2ms/step - loss: 1.5139 - acc: 0.3788 Epoch 7/50 132/132 [==============================] - 0s 2ms/step - loss: 1.4485 - acc: 0.5985 Epoch 8/50 132/132 [==============================] - 0s 2ms/step - loss: 1.3791 - acc: 0.6288 Epoch 9/50 132/132 [==============================] - 0s 2ms/step - loss: 1.3555 - acc: 0.5682 Epoch 10/50 132/132 [==============================] - 0s 3ms/step - loss: 1.3386 - acc: 0.6212 Epoch 11/50 132/132 [==============================] - 0s 3ms/step - loss: 1.3254 - acc: 0.5985 Epoch 12/50 132/132 [==============================] - 0s 2ms/step - loss: 1.2782 - acc: 0.6591 Epoch 13/50 132/132 [==============================] - 0s 2ms/step - loss: 1.2132 - acc: 0.7500 Epoch 14/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1906 - acc: 0.7424 Epoch 15/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1485 - acc: 0.7803 Epoch 16/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1244 - acc: 0.7955 Epoch 17/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0847 - acc: 0.8712 Epoch 18/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0822 - acc: 0.8409 Epoch 19/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0725 - acc: 0.8561 Epoch 20/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0711 - acc: 0.8333 Epoch 21/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0937 - 
acc: 0.8182 Epoch 22/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1135 - acc: 0.7879 Epoch 23/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0901 - acc: 0.8258 Epoch 24/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0570 - acc: 0.8485 Epoch 25/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0444 - acc: 0.8712 Epoch 26/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0076 - acc: 0.9167 Epoch 27/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0178 - acc: 0.8939 Epoch 28/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1122 - acc: 0.7879 Epoch 29/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0817 - acc: 0.8333 Epoch 30/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0215 - acc: 0.8939 Epoch 31/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0272 - acc: 0.8788 Epoch 32/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0057 - acc: 0.9015 Epoch 33/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0194 - acc: 0.8864 Epoch 34/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0050 - acc: 0.9015 Epoch 35/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0100 - acc: 0.9015 Epoch 36/50 132/132 [==============================] - 0s 2ms/step - loss: 1.1691 - acc: 0.7424 Epoch 37/50 132/132 [==============================] - 0s 2ms/step - loss: 1.2309 - acc: 0.6742 Epoch 38/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0858 - acc: 0.8182 Epoch 39/50 132/132 [==============================] - 0s 3ms/step - loss: 1.0107 - acc: 0.9015 Epoch 40/50 132/132 [==============================] - 0s 2ms/step - loss: 1.0232 - acc: 0.8864 Epoch 41/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9963 - acc: 0.9167 Epoch 42/50 132/132 [==============================] - 0s 2ms/step 
- loss: 0.9957 - acc: 0.9167 Epoch 43/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9895 - acc: 0.9167 Epoch 44/50 132/132 [==============================] - 0s 3ms/step - loss: 0.9913 - acc: 0.9167 Epoch 45/50 132/132 [==============================] - 0s 3ms/step - loss: 0.9866 - acc: 0.9167 Epoch 46/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9855 - acc: 0.9167 Epoch 47/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9880 - acc: 0.9242 Epoch 48/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9877 - acc: 0.9167 Epoch 49/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9846 - acc: 0.9242 Epoch 50/50 132/132 [==============================] - 0s 2ms/step - loss: 0.9881 - acc: 0.9242
<keras.callbacks.History at 0x1263efb1518>
Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = keras.utils.to_categorical(Y_test, 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
56/56 [==============================] - 0s 7ms/step

Test accuracy =  0.8035714370863778
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
    num = np.argmax(pred[i])
    if num != Y_test[i]:
        print('Expected emoji:' + label_to_emoji(Y_test[i]) + ' prediction: ' + X_test[i] + label_to_emoji(num).strip())
Expected emoji:😄 prediction: she got me a nice present ❤️
Expected emoji:😄 prediction: he is a good friend ❤️
Expected emoji:😞 prediction: This girl is messing with me ❤️
Expected emoji:🍴 prediction: any suggestions for dinner 😄
Expected emoji:😄 prediction: you brighten my day ❤️
Expected emoji:😞 prediction: she is a bully ❤️
Expected emoji:😞 prediction: My life is so boring ❤️
Expected emoji:😄 prediction: will you be my valentine 😞
Expected emoji:😄 prediction: What you did was awesome 😞
Expected emoji:😞 prediction: go away ⚾
Expected emoji:😞 prediction: yesterday we lost again ⚾
Now you can try it on your own example. Write your own sentence below.
# Change the sentence below to see your prediction. Make sure all the words are in the GloVe embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] + ' ' + label_to_emoji(np.argmax(model.predict(X_test_indices))))
You have completed this notebook! ❤️❤️❤️
What you should remember: you can use Dropout() right after LSTM() to regularize your network.
KTRACE(2) MidnightBSD System Calls Manual KTRACE(2)
NAME
ktrace — process tracing
LIBRARY
Standard C Library (libc, −lc)
SYNOPSIS
#include <sys/param.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <sys/ktrace.h>
int
ktrace(const char *tracefile, int ops, int trpoints, int pid);
DESCRIPTION
The ktrace() system call enables or disables tracing of one or more processes. Users may only trace their own processes. Only the super-user can trace setuid or setgid programs.
The tracefile argument gives the pathname of the file to be used for tracing. The file must exist and be a regular file writable by the calling process.
The trpoints argument specifies the trace points of interest. The defined trace points are:
Each tracing event outputs a record composed of a generic header followed by a trace point specific structure. The generic header is:
struct ktr_header {
};

RETURN VALUES
The ktrace() system call returns the value 0 if successful; otherwise the value −1 is returned and the global variable errno is set to indicate the error.

ERRORS
[ENOSYS]
The kernel was not compiled with ktrace support.
A thread may be unable to log one or more tracing events due to a temporary shortage of resources.
MidnightBSD 0.3 June 4, 1993 MidnightBSD 0.3
HBase in Hortonworks Data Platform (HDP) 2.6 includes the following new features:
HBase Storage Quota (Technical Preview)
In a multitenant environment, you often want to set quotas on limited resources, such as network and storage, to protect the SLAs of critical workloads. Earlier versions of HBase bundled in HDP support setting quota limits on RPC requests, also known as request throttling. HBase in HDP 2.6 introduces storage quotas, which allow you to manage storage at either the namespace or the table level.
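As an illustration (table and namespace names here are hypothetical, and exact availability depends on the HDP build since this is a technical preview), space quotas are set from the HBase shell:

```
set_quota TYPE => SPACE, TABLE => 'orders', LIMIT => '10G', POLICY => NO_INSERTS
set_quota TYPE => SPACE, NAMESPACE => 'tenant_a', LIMIT => '50G', POLICY => NO_WRITES
```

The POLICY clause controls what happens when the limit is exceeded, for example rejecting new inserts or all writes.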
HBase Backup-and-Restore Supports Bulk-Loaded Data (Technical Preview)
HDP 2.6 allows you to use incremental backups with bulk-loaded data. In HDP 2.5, bulk-loaded data is only included in full backups. Bulk loading is a common technique for ingesting data into HBase. Bulk-loaded data does not produce write-ahead logs (WALs).
Hey guys, you did such a wonderful job helping me last time, I figured I would bug you again
I am trying to pull information from one file to another file, and every time I do that, it puts spaces between my characters and doubles my last input. So in essence the input file is:
I went to the store
and the output to the other file is:
I w e n t t o t h e s t o r ee
The output on the screen is right except I get a double entry at the end as well. Anybody got an idea? I am sure it has something to do with that crappy null character, but I still don't understand that fully. Any help would be appreciated. Here is my code:
#include <stdio.h>
#include <fstream.h>
#include <iostream>
char x;
char y;
void main()
{
ifstream inFile;
inFile.open("procedure.dat",ios::in);
ofstream outFile;
outFile.open("procedureout.dat",ios::out);
while (!inFile.eof())
{
inFile.get(x);
cout << x;
outFile.put(y) << x;
}
inFile.close();
outFile.close();
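For reference, the usual fix is to test the read itself instead of eof(): eof() only becomes true *after* a read has failed, so checking it before the read processes the last character twice. Also, put(y) writes the never-assigned variable y instead of the character that was read. A minimal sketch (modern headers used instead of fstream.h):

```cpp
#include <fstream>
#include <iostream>

// Copy inPath to outPath character by character, echoing to the screen.
void copyFile(const char* inPath, const char* outPath)
{
    std::ifstream inFile(inPath);
    std::ofstream outFile(outPath);

    char x;
    while (inFile.get(x))   // loop ends exactly when a read fails
    {
        std::cout << x;     // echo to the screen, as in the original
        outFile.put(x);     // write the character we just read, not a
    }                       // separate, never-assigned variable
}
```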
On 20 April 2010 21:23, Lennart Regebro <rege...@gmail.com> wrote:
> On Tue, Apr 20, 2010 at 13:44, Wichert Akkerman <wich...@wiggy.net> wrote:
>> You may want to move it outside the zope.* namespace to encourage that :)
-1 I think zope.testrunner is just fine, and acknowledges the heritage. Namespaces should be about which community (or company) owns a package, not marketing. I think we're a little over-sensitive to the "it's Zope so we hate it" sentiment. The people (if any) who still have such childish ideas are probably not all that interesting to us as consumers of our software anyway.

Martin
_______________________________________________
Zope-Dev maillist - Zope-Dev@zope.org
** No cross posts or HTML encoding! **
Today we are going to examine the LINQ set operations that are part of the IEnumerable<T> extension methods. Now, most of the time when people think of set operations they think of the math or logic classes they are usually taught in, but really these LINQ methods have a much larger appeal and applicability than just math exercises!
When we think of set operations in the realm of set logic, there are three primary ones that come to mind:

- Union (A ∪ B) – the set of elements that appear in either A or B.
- Intersection (A ∩ B) – the set of elements that appear in both A and B.
- Difference (A – B) – the set of elements in A that do not appear in B.
Notice that in set theory, sets are collections of unique elements with no duplicates. Thus the set union, as we saw, removes all duplicates between the two sets. This becomes important as we discuss how these set operations apply to collections in .NET.
I've blogged before on the often overlooked HashSet and SortedSet classes (here) in .NET, which are collections that implement sets. Though these are useful, sometimes you want to be able to apply these useful operations to other types of collections as well. This is where the LINQ set operation extension methods come in handy, they are implementations of these set operations that can be applied to any sequence that implements IEnumerable<T> including favorites like arrays, List<T>, iterators, etc.
For the set operations to properly work on your sequences of a type, the type must have a valid notion of equality (both Equals() and GetHashCode() must be meaningful). Now, for nearly all the primitive types (int, double, char, etc) the string class, structs and some BCL reference types, equality is well defined and implemented and they will work fine as is.
However, custom classes you write can present a problem because the default implementation of equality most likely will not meet our needs. Just keep that in mind for now and we’ll come back to that in the end with examples of how this can bite you and how to mitigate it…
The Intersect() method gets the common elements from two different enumerable sequences, just like the set logic's intersection operation dictates. Intersections are very useful in determining where two sets overlap (that is, what elements two sets have in common).
The two forms of Intersect(), assuming extension method syntax where the first argument is the target, are:

- Intersect(IEnumerable<TSource> second) – uses the default equality comparer for TSource.
- Intersect(IEnumerable<TSource> second, IEqualityComparer<TSource> comparer) – uses the supplied equality comparer.
Let’s play with string since it already has a good default equality comparer. So say we have two enumerable sequences of string:
// lets say we have a list of healthy stuff to consume
var healthyStuff = new List<string> { "fruits", "vegetables", "proteins", "simple carbs", "fiber" };

// and we have a list of what i consume
var myStuff = new List<string> { "soda", "chips", "proteins", "fat", "sugar" };
So now that we have these two lists: one of healthy things we can consume, and one that is things i typically consume. We can perform an intersection on them and see what things I consume that are healthy by saying:
var results = myStuff.Intersect(healthyStuff);

foreach (var item in results)
{
    Console.WriteLine(item);
}
This will output the item “proteins” which is the only item common between what I eat and healthy stuff (I eat better than that, really, but just for illustrative purposes).
It should be noted that the Intersect of two sets follows the commutative property. This means that A ∩ B = B ∩ A. So in our example that would mean that the following two statements are logically identical:
// these two are identical
results = myStuff.Intersect(healthyStuff);
results = healthyStuff.Intersect(myStuff);
This makes sense because asking what healthy foods exist in the list that I eat is the same as asking what foods do I eat that exist in the healthy foods list.
The Union() method combines the unique elements from two different enumerable sequences, just like the set union operation dictates. Unions are very useful for combining two sets without duplicates. Thus if in our example we wanted to get a list of all the healthy foods and all foods I eat, we could union the two sets.
The two forms of Union(), assuming extension method syntax where the first argument is the target, are:

- Union(IEnumerable<TSource> second) – uses the default equality comparer for TSource.
- Union(IEnumerable<TSource> second, IEqualityComparer<TSource> comparer) – uses the supplied equality comparer.
By unique elements, I don’t mean to imply that only items with no duplicates are in the resulting set, but that the resulting set eliminates any duplicates. So that if you had { 1, 1, 3, 5 } ∪ { 1, 3, 7 } the result would be { 1, 3, 5, 7 }.
For example:
var results = myStuff.Union(healthyStuff);

// this will output soda, chips, proteins, fat, sugar,
// fruits, vegetables, simple carbs, and fiber
foreach (var item in results)
{
    Console.WriteLine(item);
}
Notice that the duplicate of proteins is gone. Union() is also commutative, so A ∪ B = B ∪ A. That said, the ordering will be different, since the elements from the first sequence appear first in the resulting sequence, followed by any elements in the result that came from the second sequence.
The nice thing about Union() is it gives you a nice and easy way to join together two sequences and eliminate duplicates. Note that this is very different from the Concat() extension method in LINQ that just concatenates one sequence to the end of the other, but this makes no attempt to remove duplicates.
// these are logically identical
results = myStuff.Union(healthyStuff);
results = myStuff.Concat(healthyStuff).Distinct();
The Except() method performs a set difference between two sets. That is, A – B yields the items in A minus any items in B that happen to be in A. Any items that were unique to B alone are ignored. Thus, if we wanted to get a list of the food I eat that is NOT healthy food, I could do the set difference between what I eat and the healthy things to eat.
The two forms of Except(), assuming extension method syntax where the first argument is the target, are:

- Except(IEnumerable<TSource> second) – uses the default equality comparer for TSource.
- Except(IEnumerable<TSource> second, IEqualityComparer<TSource> comparer) – uses the supplied equality comparer.
Once again this is a simple set difference operation. { 1, 3, 5, 7 } – { 1, 5, 8 } = { 3, 7 }. The 1 and 5 are removed since they were in both the first and second set, and the 8 is removed since it didn’t exist in the first set. So the resulting sequence are only the unique items from the first set that are not also in the second set.
As you can probably tell, difference is not commutative: if you reverse the order of the sets in the difference you get two different things. A – B yields the items in A that are not in B, whereas B – A yields the items in B that are not in A.
This means that you have to be careful with Except() that you are subtracting the sets correctly. Once again if we look at the food example:
// this is a list of the things I eat that are not healthy
// soda, chips, fat, sugar
var results = myStuff.Except(healthyStuff);

// this is a list of healthy things that I do not eat
// fruits, vegetables, simple carbs, fiber
results = healthyStuff.Except(myStuff);
So as you can see, Except() is a handy way to get a list of elements in a sequence that do not match the items from a second sequence.
Please note that, like many of the System.Linq extension methods, Except(), Union(), and Intersect() use deferred execution, which means they will not be invoked until an enumerator is requested from them. Thus if you did something like this:
results = healthyStuff.Except(myStuff);

// because results is an iterator (deferred execution) this clears
// myStuff before it is actually used, which alters our results
myStuff.Clear();

foreach (var item in results)
{
    Console.WriteLine(item);
}
The resulting set will be all of healthyStuff (instead of the difference) since the results variable holds an iterator to the resulting sequence, but that sequence will not be calculated until we begin enumerating through the results. Thus by calling Clear() on one of the sets before we actually use the results, we've altered the operands before the operator is actually applied. In this example, that means we try to subtract an empty set myStuff from the list of healthyStuff, which of course results in the full list.
To avoid this either iterate over the results immediately, or use extension methods like ToArray() or ToList() to immediately process the query and put the results in a collection.
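Building on the lists above, a one-line illustration of forcing immediate execution:

```csharp
// ToList() walks the iterator right away, so a later Clear() no longer
// affects the captured results.
var snapshot = healthyStuff.Except(myStuff).ToList();
myStuff.Clear();   // snapshot still holds the difference computed above
```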
In IEnumerable<T> the type T is covariant, this means you can use the set operations to manipulate two sets of different types related through inheritance. For example, if you had class Employee and class SalariedEmployee which inherits from Employee, then you can perform set operations between the two sets and the resulting set type is the wider of the two types (that is, the higher up the inheritance chain – Employee in this case).
Also notice that the only thing required for these set operations in System.Linq to work is that both sequences must implement IEnumerable<T>, this means they can be an array, a List<T>, a HashSet<T>, or an iterator from another query of type T (and so on). Essentially, this is just to say that you can intersect a HashSet<string> with a List<string> and so on, the only thing that is important is that their element types are the same (or covariant as stated above).
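For example, with the Employee and SalariedEmployee classes mentioned above (SalariedEmployee assumed to derive from Employee), covariance lets the two sequences mix:

```csharp
var employees = new List<Employee> { /* ... */ };
var salaried = new List<SalariedEmployee> { /* ... */ };

// IEnumerable<SalariedEmployee> is usable wherever IEnumerable<Employee> is
// expected, so TSource is inferred as Employee -- the wider type.
IEnumerable<Employee> common = employees.Intersect(salaried);
```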
I hinted before that these operations will work exactly as you expect for primitives, strings, structs, and any reference types that correctly implement the concept of equality. And I hinted that custom classes you write may be in danger of not working. But why?
Well, you may think that the first problem is that with class the default concept of Equals() is a reference comparison. While this is true, it is only half the issue. Let’s say we define an Employee class and override Equals() on it:
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Salary { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Employee;

        return other == null
            ? false
            : Equals(Id, other.Id) &&
              Equals(Name, other.Name) &&
              Equals(Salary, other.Salary);
    }
}
So now if we attempt to perform set operations on these two lists, what do we get?
var listOne = new []
{
    new Employee { Id = 1, Name = "John Smith", Salary = 12342 },
    new Employee { Id = 12, Name = "Lucy Doe", Salary = 99243 }
};

var listTwo = new []
{
    new Employee { Id = 2, Name = "Jane Doe", Salary = 3241 },
    new Employee { Id = 1, Name = "John Smith", Salary = 12342 }
};
Now, if we try to do an intersection, we’d expect to see John Smith, right?
var results = listOne.Intersect(listTwo);
But we don’t, we get an empty list! Why? We can get a hint in that the second forms of Union(), Intersect(), and Except() that take an IEqualityComparer<TSource>. Why IEqualityComparer, why not just IComparer?
The answer is that IEqualityComparer requires both an Equals() and a GetHashCode() method to be defined. So this should be a good hint to us that we need to provide not only a meaningful Equals() overload but a GetHashCode() overload in our custom classes (or provide a separate custom IEqualityComparer of course). Remember that two items that are equal should always return the same hash code, but the opposite is not necessarily true. It's okay for two non-equal items to have the same hash code, but it's never okay for two equals items to not have equal hash codes. Typically this means that the GetHashCode() method should be based on the same fields used in the Equals() check (or a subset).
So, assuming we add an override for GetHashCode():
public class Employee
{
    // ... all the other stuff from before

    public override int GetHashCode()
    {
        unchecked
        {
            // using the pretty standard method of primes and combining field hash codes
            int hash = 11;

            hash = hash * 31 + Id.GetHashCode();
            hash = hash * 31 + (Name != null ? Name.GetHashCode() : 0);
            hash = hash * 31 + Salary.GetHashCode();

            return hash;
        }
    }
}
Now we get the expected result of John Smith! Many of the LINQ extension methods use the hash codes of the items in the sequences to quickly and efficiently work their way through the lists. We don’t have this issue with primitives and classes such as string which already override Equals() and GetHashCode() appropriately, and struct doesn’t have this issue because struct by default already does a member-wise Equals() and GetHashCode() construction.
Thus, another way we could have corrected this would be to make Employee a struct, though this has larger ramifications to consider and shouldn’t be done lightly (for more info on class vs struct and all the differences see here). So the general recommendation is to either provide a meaningful Equals() and GetHashCode(), or create a separate IEqualityComparer that will define these for your custom class.
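If you prefer not to touch Employee itself, a separate comparer mirroring the same fields does the job — a sketch:

```csharp
public class EmployeeComparer : IEqualityComparer<Employee>
{
    public bool Equals(Employee x, Employee y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.Id == y.Id && x.Name == y.Name && x.Salary == y.Salary;
    }

    public int GetHashCode(Employee obj)
    {
        unchecked
        {
            int hash = 11;
            hash = hash * 31 + obj.Id.GetHashCode();
            hash = hash * 31 + (obj.Name != null ? obj.Name.GetHashCode() : 0);
            hash = hash * 31 + obj.Salary.GetHashCode();
            return hash;
        }
    }
}

// The comparer-taking overloads then use it instead of the type's own members:
var results = listOne.Intersect(listTwo, new EmployeeComparer());
```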
System.Linq includes some great set operation extension methods that can be used to operate on two sequences as if they were sets. These can come in handy when combining sequences with no duplicates (Union()), seeing if two sequences have elements in common (Intersect()), or seeing what elements in a sequence are not part of another sequence (Except()).
While set operations are typically thought of as math operations, these can be applied to many computer science problems and should be considered when needing to check membership between two sequences of items.
posted on Thursday, May 5, 2011 7:13 PM
Filed Under [ My Blog, C#, Software, .NET, Little Wonders ]
Presubmit scripts can perform automated checks on the files in your change and the description of your change, and either fail your attempt to upload or commit, show a warning that you must acknowledge before uploading/committing, or simply show an informational message as part of the output of gcl.
Examples of things presubmit scripts may be useful for include: checking that the change description contains required lines (such as BUG=), that no files contain tabs or overly long lines, and that "do not submit" markers are absent.
To skip the scripts on upload, use the --bypass-hooks flag, as in:
git cl upload --bypass-hooks
To skip the scripts on commit, use --bypass-hooks and directly commit your change.
You should only do these if necessary, as the presubmit scripts are there for a reason, but they're not perfect.
If you have trouble with a presubmit script, it's preferable to fix it, rather than simply bypassing it. See depot_tools: sending patches for how to contribute.
Please note that presubmit scripts are a best-effort kind of thing; they do not prevent users from submitting without running the scripts, since one can always dcommit, and in fact there is a --bypass-hooks (formerly --no_presubmit) flag to gcl that skips presubmit checks. Further, since they use the local copy of the PRESUBMIT.py files, users must sync their repos before the latest presubmit checks will run when they upload or submit.
More subtly, presubmit scripts do not guarantee invariants: even if presubmit scripts pass prior to submission to CQ, once all changes land, the scripts may fail! This is because 2 changes may individually pass the tests, and the patches both apply cleanly together, but the combined change does not pass tests. Since presubmit/precommit scripts run at upload or at start of CQ steps, if two such changes are in the CQ at the same time, they can both pass, both be enqueued, and both land, at which point the tests start failing. A common example is change 1 adding a new test, and change 2 changing existing tests. After they both land, there is a new test in the old style (from change 1), which is out of sync with the new tests (from change 2).
To create a new test, either create a new PRESUBMIT.py script or edit an existing one, adding a new function for your test.
To check your changes, first commit locally (else git-cl will complain about the dirty tree), then:
To test the upload checks (i.e., to run CheckChangeOnUpload):
git cl presubmit --upload
To test the submit checks (i.e., to run CheckChangeOnCommit):
git cl presubmit
The functions must match these method signatures. You do not need to define both functions if you're only interested in one type of event, and if you want to run the same tests in both events, just have them both call a single underlying function:
def CheckChangeOnUpload(input_api, output_api):
  pass

def CheckChangeOnCommit(input_api, output_api):
  pass
input_api
output_api
Both CheckChangeOnXXX functions must return a list or tuple of result objects, or an empty list or tuple if there is nothing to report. The types of result objects you may use are output_api.PresubmitError (a critical error), output_api.PresubmitPromptWarning (a warning the user must acknowledge before the command will continue) and output_api.PresubmitNotifyResult (a message that should be shown). Each takes a message parameter, and optional "items" and "long_text" parameters.
This object can be used to transform from local to repository paths and vice versa, and to get information on the files in the change that are contained in the same directory as your PRESUBMIT.py file or subdirectories thereof.
The input_api.change object represents the change itself. Using this object you can retrieve the description of the change, any key-value pairs in the description (e.g. BUG=123), and details on all of the files in the change (not just the ones contained by the directory your PRESUBMIT.py file resides in).
The input_api.is_committing attribute indicates whether the CL is being committed or just uploaded. This is particularly useful if you wish the same test to be run for both upload and committing, but with different behavior. A common pattern is to prompt a warning on upload, but an error on committing, which allows a CL to be uploaded and reviewed even if the test fails, but not committed (without dcommit).
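A sketch of that pattern (the BUG-line check here is just a stand-in for a real test):

```python
def CheckChange(input_api, output_api):
  # The same check runs at upload and at commit time; only the severity of
  # the result differs, driven by input_api.is_committing.
  if input_api.is_committing:
    message_type = output_api.PresubmitError          # hard failure on commit
  else:
    message_type = output_api.PresubmitPromptWarning  # just a warning on upload

  # Hypothetical check: insist on a BUG= line in the change description.
  if not input_api.change.BUG:
    return [message_type('Must provide a BUG= line.')]
  return []

def CheckChangeOnUpload(input_api, output_api):
  return CheckChange(input_api, output_api)

def CheckChangeOnCommit(input_api, output_api):
  return CheckChange(input_api, output_api)
```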
The most detailed documentation for the presubmit API is in its implementation code.
The canned checks are good examples of what you can do with the presubmit API.
A simple example file might be as follows:
def CheckChange(input_api, output_api):
  results = []
  results += input_api.canned_checks.CheckDoNotSubmit(input_api, output_api)
  results += input_api.canned_checks.CheckChangeHasNoTabs(input_api, output_api)
  results += input_api.canned_checks.CheckLongLines(input_api, output_api)
  # Require a BUG= line and a HOW_TO_TEST= line.
  if not input_api.change.BUG or not input_api.change.HOW_TO_TEST:
    results += [output_api.PresubmitError(
        'Must provide a BUG= line and a HOW_TO_TEST line.')]
  return results

def CheckChangeOnUpload(input_api, output_api):
  return CheckChange(input_api, output_api)

def CheckChangeOnCommit(input_api, output_api):
  return CheckChange(input_api, output_api)
def MyTest(input_api, output_api):
  test_path = input_api.os_path.join(input_api.PresubmitLocalPath(), 'my_test.py')
  cmd_name = 'my_test'
  if input_api.platform == 'win32':
    # Windows needs some help.
    cmd = [input_api.python_executable, test_path]
  else:
    cmd = [test_path]
  test_cmd = input_api.Command(
      name=cmd_name,
      cmd=cmd,
      kwargs={},
      message=output_api.PresubmitPromptWarning)
  if input_api.verbose:
    print('Running ' + cmd_name)
  return input_api.RunTests([test_cmd])
#include <vtkSubGroup.h>
This class provides scalable broadcast, reduce, etc. using only a vtkMultiProcessController. It does not require MPI. Users of this class include vtkPKdTree and vtkDistributedDataFilter.
Definition at line 49 of file vtkSubGroup.h.
Reimplemented from vtkObject.
Definition at line 52 of file vtkSubGroup.h.
Definition at line 58 of file vtkSubGroup.h.
Initialize a communication subgroup for the processes with rank p0 through p1 of the given communicator. (So vtkSubGroup is limited to working with subgroups that are identified by a contiguous set of rank IDs.) The third argument is the callers rank, which must in the range from p0 through p1.
Definition at line 101 of file vtkSubGroup.h. | http://www.vtk.org/doc/release/5.2/html/a01334.html | crawl-003 | refinedweb | 109 | 54.18 |
What I am trying to do is to set up a project in which it wouldn't matter whether a precompiled header is set or not. Source files currently pull the header in as either

  #include "stdafx.hpp"

or

  #include "../stdafx.hpp"

depending on where they sit relative to stdafx.hpp.
The canonical solution is easy: Don't include the precompiled header file (stdafx.h by default). If your code needs to compile with precompiled headers, use the /FI (Name Forced Include File) compiler switch:
This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line, in the CL environment variable, or in a command file.
This allows you to use precompiled header files, without modifying your source code.
The rules for using an #include directive with double quotation marks are outlined under #include Directive (C/C++):
Quoted form:
The preprocessor searches for include files in this order:
- In the same directory as the file that contains the #include statement.
- In the directories of the currently opened include files, in the reverse order in which they were opened. The search begins in the directory of the parent include file and continues upward through the directories of any grandparent include files.
- Along the path that's specified by each /I compiler option.
- Along the paths that are specified by the INCLUDE environment variable.
Using the /I (Additional Include Directories) compiler switch to include the directory of the header file used to generate the precompiled header would then simply allow you to write
/FIstdafx.hpp
There is no combination of settings/project topology that would allow you to simply toggle precompiled headers on or off. The /Y, /FI, and /I compiler switches must be used together, or removed entirely. To change a set of configurations as a unit, you can use property pages (see Working with Project Properties for details). | https://codedump.io/share/2TeXNGw3ccdW/1/setting-up-a-project-which-would-compile-both-with-and-without-precompiled-header | CC-MAIN-2018-26 | refinedweb | 313 | 51.07 |
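Concretely, a pair of build invocations combining the three switches might look like this (the directory layout and the /Yu precompiled-header setup are illustrative assumptions, not taken from the question):

```
:: PCH build: use the precompiled header and force-include it.
cl /c /Yustdafx.hpp /FIstdafx.hpp /I.. src\foo.cpp

:: Non-PCH build: the same sources compile unchanged; just drop /Yu.
cl /c /FIstdafx.hpp /I.. src\foo.cpp
```

The sources themselves contain no #include of stdafx.hpp at all; /FI supplies it in both configurations.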
The SSI documentation can be improved in several ways:
- It doesn't mention that SSI variables (set/echo) are actually request attributes and thus are not scoped to SSI only. That means SSI pages can see attributes set by other components, and can set request attributes to pass to other components
- The "config" directive takes three parameters but they are not mentioned - "errmsg", "sizefmt" and "timefmt"
- The "echo" directive has an encoding parameter that appears in code but is not mentioned in the documentation. It can be "url", "entity" or "none"; defaulting to "entity"
- The "exec" directive takes a "cmd" or "cgi" parameter; "cgi", when used, behaves the same as "include"
- The "include", "flastmod" and "fsize" directives can take either "file" or "virtual" parameters but they are not both documented
- SSI variables cannot start with "java.", "javax.", or "sun.", but this is not mentioned
Another reserved namespace is "org.apache.catalina.ssi.SSIMediator.*"
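A small .shtml fragment exercising the directives listed above might look like this (the variable name, formats, and paths are illustrative, not from the report):

```
<!--#config errmsg="[error]" sizefmt="abbrev" timefmt="%Y-%m-%d" -->
<!--#set var="title" value="Status page" -->
<h1><!--#echo var="title" encoding="entity" --></h1>
Last modified: <!--#flastmod virtual="/index.shtml" -->
Size: <!--#fsize file="index.shtml" -->
<!--#include virtual="/footer.html" -->
```

Here "title" is an ordinary request attribute, so a downstream servlet or JSP could read it as well.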
Patches are always welcome, including documentation patches.
Pull request here:
Fixed in:
- master for 9.0.18 onwards
- 8.5.x for 8.5.40 onwards
- 7.0.x for 7.0.94 onwards | https://bz.apache.org/bugzilla/show_bug.cgi?id=63184 | CC-MAIN-2019-39 | refinedweb | 190 | 53.81 |
Agenda
See also: IRC log, previous 2008-01-10
<msporny> 1) Action Items
<msporny>
<msporny> 2) Chaining completion by @href and @src: what is everyone's take on this?
<msporny> 3) non-prefixed RELs: putting aside the issue of *which spec doc* the
<msporny> reserved words are defined in, for now, it's probably worth voting on
<msporny> Manu's proposal, if there is enough of a quorum:
<msporny>
<msporny> I vote yes.
<msporny> 4) Test Cases: any updates we need to cover?
->
ACTION: [PENDING] Ben followup with Fabien on getting his RDFa GRDDL transform transferred to W3C [recorded in] [CONTINUES]
ACTION: [PENDING] Ben to add status of various implementations on rdfa.info [recorded in] [CONTINUES]
ACTION: [PENDING] Ben to respond to comment on follow-your-nose [recorded in] [CONTINUES]
ACTION: [PENDING] Ben to set up a proper scribe schedule [recorded in] [CONTINUES]
propose to have a look at
ACTION: [PENDING] Michael to create "Microformats done right -- unambiguous taxonomies via RDF" on the wiki [recorded in] [CONTINUES]
ACTION: [PENDING] Ralph followup with Dublin Core on what's going on with their namespace URI [recorded in] [CONTINUES]
Manu: Regarding Syntax document
... establishing subject
Mark explains current processing with hanging
Manu: For example @resource vs. @about (feedback from one of the developers)
Mark: Someone starting to learn a new language should
start with the basic stuff, such as 'I want a triple'
... IMO we have to explain it well rather than removing features (even if verbose)
... we have basically agreed on chaining, so if we are about to remove features
... we would end up with a bunch of exceptions
<Ralph> [apologies for tardiness]
Mark: When preserving legacy values, one has to take care of the whole picture
Ralph: Also author awareness is an issue
... this is a new language, but people familiar with previous languages should not get into too much trouble
Steven reminds on the topic
Manu: Sent out three questions to the list
... so there are two more
... <img />
... @src setting the subject
... and cases where it doesn't
Mark: In my model, every attrib can be
subject/object
... so far no distinction between @about, etc.
Manu: One has to understand the processing model to 'get it' what happens
Mark: @src issue does not fundamentally touch my model
-> IR
cf SPARQL implementation report
Michael: my idea was to run the test cases and
report which tests pass and which fail
... is it necessary to have a call where all those listed in the table can participate?
<msporny>
<msporny>*%0D%0AFROM+%3Chttp%3A%
Michael: on hold TC
-- tests 2 & 3
Michael: 2 and 3 have meta in the body, which is not allowed in this version of RDFa, so we'll drop those
<mhausenblas> +1
-- test 4
Manu: test 4 -- we've also decided to disallow xml:base, so test 4 should be dropped
RESOLUTION: drop tests 2 and 3
Mark: I'd thought we'd made xml:base
optional
... but I'm sure the answer is in the spec
<markbirbeck> "If a language includes @xml:base [XMLBASE], an RDFa parser for that host language must process it, and use its value to set [base]."
-> Re: Test Case #73: @about with relative URL using XHTML @xml:base
Michael: so we could change test 4 to show that no triples are generated
Manu: the href needs to change to illustrate this?
Michael: I'll fix test 4 to show what should be generated, with xml:base ignored
-- test 16: Blank node, explicit
Ralph: what was test 16 supposed to test?
Michael: see comment from Elias -- apparently
tests @about referring to a bnode
... propose to drop 16
Manu: test 64 tests bnode ref in a more atomic way
RESOLUTION: test 16 dropped | http://www.w3.org/2008/01/17-rdfa-minutes.html | CC-MAIN-2016-36 | refinedweb | 628 | 60.89 |
Keir Fraser wrote:
> On 15/09/2010 05:55, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:
>
>>>> +enum x86_segment sreg_to_index[] = {
>>>> + [VMX_SREG_ES] = x86_seg_es,
>>>> + [VMX_SREG_CS] = x86_seg_cs,
>>>> + [VMX_SREG_SS] = x86_seg_ss,
>>>> + [VMX_SREG_DS] = x86_seg_ds,
>>>> + [VMX_SREG_FS] = x86_seg_fs,
>>>> + [VMX_SREG_GS] = x86_seg_gs,
>>>> +};
>>>
>>> Since you dislike adding new namespaces and translations, I'm sure
>>> you can get rid of these. :) It might even simplify some of the
>>> macros below.
>>
>> True, some dupcation here. Regarding following definition in
>> x86_emulate.c, we can reuse.
>
> AFAICS if you must have your own extra instruction decoder, a few
> register translation definitions and arrays is the least of it
> really. I'd rather keep x86_emulate clean and separate rather than
> become intertwined with another emulator.
>
> What is wrong with simply extending x86_emulate to handle these
> VMX-related instructions? We've dealt with emulators provided by
> Intel guys in the past and frankly they were full of holes.
>
Certainly fine to move the VMX instruction emulation to hvm/emulate.c if you
don't think it is VMX specific :)
Will do.
Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx | http://old-list-archives.xen.org/xen-devel/2010-09/msg00892.html | CC-MAIN-2020-50 | refinedweb | 173 | 59.19 |