You must first install FlashAppy on your calculator. Note that like all ROM patches, you will not be able to send your modified calculator ROM image to other calculators. Neither the authors of GTC nor the author of FlashAppy can be held responsible for any damage that could occur during or after this process. However, we are not aware of any case where a calculator was damaged by FlashAppy.
You only need to install FlashAppy once: you won't need to reinstall it after a reset or after upgrading to a newer version of GTC. On the other hand if you install an AMS update you will need to reinstall FlashAppy afterwards.
Legal notice: the
.89t/.9xt/.v2t and
.89y/.9xy/.v2y files in the
bin-89/92p/v200 directories are subject to the GNU General Public License, which grants you a number of rights. You may choose not to install them, but this will prevent you from using the TIGCC Library.
Once FlashAppy is installed, send all the files in the
bin-89,
bin-92p or
bin-v200 directory (depending on the model) to your calculator, and archive
them all.
Check that the GTC flashapp was properly transferred by entering the Var-Link screen and pressing F7: you should see GTC appear in the list. Otherwise, the transfer failed: make sure that FlashAppy is installed and that you have enough Archive memory.
To create a test source file, create a directory named
source, create an
empty text file inside that directory named
hello with the TI text editor,
and archive it with the Var-Link screen.
Now you can run the IDE by typing
gtc\gtc_ide() and open the file named
hello.
Type in the following code:
#include <tigcclib.h>

void _main() {
    ST_helpMsg("Hello world!");
}
Now press the F5 key. This should bring up a compilation dialog, which should close after a few seconds (if not, you may not have installed GTC properly: make sure FlashAppy is correctly installed, that everything is archived, and that you have enough RAM).
Once the compilation is over you can exit the IDE and run your program by typing
outbin()
It should display the text
Hello world! in the status line. If it worked,
congratulations! You now have a working C compiler on your calculator.
Here is a small subset of the key commands supported by GTC IDE:
Also, GTC IDE supports keyboard shortcuts that the standard text editor doesn't: for example, you can press Shift-2nd-Right (press Shift and 2nd, then press Right while still holding Shift and 2nd) to highlight the text from the cursor to the end of the line. Likewise you can press Shift-2nd-Down to highlight a whole screen of text, or Shift-Diamond-Down to select until the end of the file. These shortcuts come in particularly handy when selecting large amounts of text.
It is convenient to copy main\outbin() to a kbdprgm like kbdprgm9 so that you can quickly run a freshly compiled program by typing Diamond-9 in the Home screen.
If you need to free up Archive memory, you may delete zheader. You should not delete stdhead or keywords if you want to compile programs designed for the TIGCC library.
Friend, Father, Programmer.
♥ open source especially Python.
You want to implement search against user objects stored in redis using Python. Something like querying for all user ids whose username begins with "an".
Here we have user objects stored as hashes with "user:obj:" as the key prefix.
For example
user:obj:3955 {id: 3955, username: 'John', ..}.
a -> 0.097
ab -> 0.097098
ac -> 0.097099
bc -> 0.098099
So, for the above four strings, if we find the strings whose score is >= 0.097 and < 0.098, we get exactly the strings that begin with "a".
Code
This demonstrates a simple redis pattern and how to use it from Python.
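A minimal sketch of the scoring idea in plain Python (the helper names are mine; the redis calls appear only in comments, since the pattern itself is pure arithmetic):

```python
def prefix_score(s):
    """'a' -> 0.097, 'ab' -> 0.097098, 'bc' -> 0.098099, per the table above."""
    return float("0." + "".join("%03d" % ord(c) for c in s.lower()))

def prefix_range(prefix):
    """Half-open score interval [lo, hi) containing every string with this prefix."""
    lo = prefix_score(prefix)
    hi = lo + 10 ** -(3 * len(prefix))
    return lo, hi

# With redis you would keep the scores in a sorted set:
#   r.zadd('users:search', {username: prefix_score(username)})
#   r.zrangebyscore('users:search', lo, '(%.18f' % hi)
# Here a plain list stands in for the sorted set:
users = ['an', 'andrew', 'anna', 'bob']
lo, hi = prefix_range('an')
matches = sorted(u for u in users if lo <= prefix_score(u) < hi)
```

With redis-py installed, the same two helpers drive zadd/zrangebyscore directly.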
I do like Ubuntu Netbook Remix's UI. However, with 10.04 it has just gone too unstable for me.
Considering all that, I decided to switch to Xfce. It just worked like a charm. But now I also use (and like :) ) the UbuntuOne service for my backups. UbuntuOne is not integrated with Xfce. Also, you can't do everything from UbuntuOne's CLI.
u1sdtool -q; killall ubuntuone-login; u1sdtool -c # configuration
u1sdtool --create-folder ~/my_data # add folders you want to be synced
u1sdtool --list-folders
u1sdtool --current-transfers
For more details you might want to check Ubuntu One wiki .
sudo apt-get install usb-modeswitch wvdial
vi /etc/wvdial.conf
[Dialer Defaults]
Phone = #777
Username =
Baud = 460800
Stupid Mode = 1
New PPPD = 1
Tonline = 0
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
Baud = 460800
Modem = /dev/ttyUSB0
ISDN = 0
Pulling your hair out over some i18n bug, or did you fix it but can't explain how? Here is a little help toward getting a fair idea about unicode/codecs/encoding/decoding etc.
Quick tips:
a. It does not make sense to have a string without knowing what encoding it uses.
b. UTF-8 is a way of storing a string of Unicode code points.
c. Encoding: Transforming a unicode object into a sequence of bytes
d. Decoding: Recreating the unicode object from the sequence of bytes is known as decoding. There are many different methods for how this transformation can be done (these methods are also called encodings).
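To make tips (c) and (d) concrete, here is a tiny round-trip, shown in Python 3 syntax (the post itself predates Python 3):

```python
# Python 3 syntax: str holds Unicode text, bytes holds the encoded form.
s = "caf\u00e9"                    # 'café', 4 code points
data = s.encode("utf-8")           # encoding: text -> bytes ('é' becomes 2 bytes)
roundtrip = data.decode("utf-8")   # decoding: bytes -> text
mojibake = data.decode("latin-1")  # wrong codec: no error, just wrong text
```

Decoding with the wrong codec often does not raise; it silently produces mojibake, which is why knowing the encoding matters.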
cat /etc/wvdial.conf
[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Modem Type = USB Modem
Baud = 460800
New PPPD = yes
Modem = /dev/ttyACM0
ISDN = 0
Stupid mode = 1
Phone = #777
Password = internet
Username = internet
------------ ----------
| | | Guest |
| Host ----+------+----- |
| | | Hub | | |
| |tap0| |tap1 | |
| |-----+-----+-----| |
| eth0 | | |
| | | | |
----+------- ----------
|
(Internet)
Host
* Add a hub
# vde_switch -x -d -tap tap0 -tap tap1
* Assign ip to host's nic
# ifconfig tap0 192.168.1.1
* Setup ip forwarding
Modify /etc/sysctl.conf
net.ipv4.ip_forward=1
* Setup masquerading
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
* Fire qemu
# vdeqemu -m 1024 -localtime /vm//jos_8.04_01/jos_8.04_01.img
Guest
# ifconfig eth0 192.168.1.2
# route add default gw 192.168.1.1
# vi /etc/resolv.conf
# ping google.com
While there is a lot already written about this, here is my quick howto:
$ sudo bash
# apt-get install dnsmasq squid
# echo "listen-address=127.0.0.1" >> /etc/dnsmasq.conf
# echo "no-dhcp-interface=" >> /etc/dnsmasq.conf
# vi /etc/dhcp3/dhclient.conf
# # ^ uncomment line #prepend domain-name-servers 127.0.0.1;
# vi /etc/resolv.conf # Add nameserver 127.0.0.1
# /etc/init.d/dnsmasq restart
# vi /etc/squid/squid.conf
http_port 3128
visible_hostname localhost
acl all src 0.0.0.0/0.0.0.0
cache_effective_user proxy
cache_effective_group proxy
http_access allow all
icp_access allow all
positive_dns_ttl 1 month
negative_dns_ttl 1 minute
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cache_dir ufs /cache 400 16 256
cache_store_log none
# mkdir /cache # I have this dir on a reiserfs partition
# chown proxy.proxy /cache
# /etc/init.d/squid restart
much better than struggling with the graphical tools.
shon@ubuntu:~$ cat test.dot
digraph FlowChart {
    node [
        fontname = "Bitstream Vera Sans"
        fontsize = 8
        shape = "record"
    ]
    edge [
        fontname = "Bitstream Vera Sans"
        fontsize = 8
        fontcolor = "Red"
    ]

    // all blocks
    greet [label="Hello, techie", shape="oval"]
    which_os [label="What OS do you use?" shape="diamond"]
    like_me [label="Great, me too!", shape="oval"]
    which_browser [label="You must be using firefox", shape="diamond"]
    ff [label="Cool", shape="oval"]
    bye [label="Bye", shape="oval"]

    // relations
    greet -> which_os
    which_os -> like_me [label="I use Linux"]
    which_os -> which_browser [label="I use Windows"]
    which_browser -> ff [label="Right"]
    which_browser -> bye [label="what firefox?"]
}
shon@ubuntu:~$ dot test.dot -Tpng -o test.png && eog test.png
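If you find yourself generating flowcharts like this often, the dot text is easy to emit from a small script; a throwaway sketch (not part of the original transcript):

```python
def dot_digraph(name, edges):
    """edges: iterable of (src, dst, label_or_None) tuples."""
    lines = ["digraph %s {" % name]
    for src, dst, label in edges:
        attr = ' [label="%s"]' % label if label else ""
        lines.append("    %s -> %s%s" % (src, dst, attr))
    lines.append("}")
    return "\n".join(lines)

flow = dot_digraph("FlowChart", [
    ("greet", "which_os", None),
    ("which_os", "like_me", "I use Linux"),
    ("which_os", "which_browser", "I use Windows"),
])
# Write `flow` to a file and render it with: dot test.dot -Tpng -o test.png
```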
import zope.interface.verify

class ITest(zope.interface.Interface):
    def foo(arg1): pass
    def bar(): pass

class Test(object):
    zope.interface.implements(ITest)
    def foo(self): pass

class Test2(object):
    zope.interface.implements(ITest)
    def foo(self, arg1): pass

class Test3(object):
    zope.interface.implements(ITest)
    def foo(self, arg1): pass
    def bar(self): pass

for cls in (Test, Test2, Test3):
    try:
        if zope.interface.verify.verifyClass(ITest, cls):
            print "OK: %s correctly implements %s" % (cls.__name__, ITest.__name__)
    except Exception, err:
        print "Error detected with %s's implementation: %s" % (cls.__name__, err)
Anyways Python rocks!
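If you want the gist of what verifyClass checks without installing zope.interface, a rough stdlib-only approximation (argument counts only; the real thing checks much more) might look like:

```python
import inspect

def verify_class(contract, cls):
    """contract: dict mapping method name -> arg count (excluding self)."""
    for name, argc in contract.items():
        fn = getattr(cls, name, None)
        if fn is None:
            return "missing method %s" % name
        params = [p for p in inspect.signature(fn).parameters if p != "self"]
        if len(params) != argc:
            return "%s takes %d args, expected %d" % (name, len(params), argc)
    return "OK"

ITest = {"foo": 1, "bar": 0}

class Test:            # wrong foo signature, missing bar
    def foo(self): pass

class Test3:           # matches the contract
    def foo(self, arg1): pass
    def bar(self): pass
```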
[root@localhost ~]# yum -y install postgresql-python \
postgresql postgresql-server
[root@localhost ~]# /etc/init.d/postgresql start
[root@localhost ~]# /etc/init.d/postgresql status
[root@localhost ~]# su - postgres -c "createuser --createdb \
--adduser shon"
[root@localhost ~]# su - shon # normal user
[shon@localhost ]$ createdb test
[shon@localhost ]$ psql test
test=# \q
from elixir import *

metadata.connect("postgres:///test")

class Movie(Entity):
    has_field('title', Unicode(30))
    has_field('year', Integer)
    has_field('description', Unicode)
    def __repr__(self):
        return '<Movie "%s" (%d)>' % (self.title, self.year)

metadata.create_all()

def test1():
    m1 = Movie(title="Blade Runner", year=1982)
    m2 = Movie(title="Life is beautiful", year=1980)
    objectstore.flush()
    print m1

def test2():
    print Movie.select()[0]

test1()
# test2()
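Elixir is a declarative layer on top of SQLAlchemy; to see the same data flow without a Postgres setup, here is the Movie example redone with nothing but the stdlib's sqlite3 (a sketch of the idea, not the Elixir API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie (title TEXT, year INTEGER, description TEXT)")
conn.executemany(
    "INSERT INTO movie (title, year) VALUES (?, ?)",
    [("Blade Runner", 1982), ("Life is beautiful", 1980)],
)
conn.commit()
rows = conn.execute("SELECT title, year FROM movie ORDER BY year").fetchall()
```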
Using Camera within the scene?
- None_None233
@omz, thanx for the very nice update, but as I checked the photo module, I realized that accessing the camera may not be available when a scene is running (for me that equals when my app is running). Is there any trick to use the camera in a running app? Like to take a photo while a scene is running, then go back to the scene and use photos.get_image to load it. Or pausing the scene and then resuming it again? For me, I guess, accessing the camera within apps is an important issue.
I'm having the same issue. If there is any way to dismiss the scene with a function like <code>stop()</code>, that would be very helpful. Thanks!
Before defining the main scene class (the class containing the draw(), setup(), and other functions), call the capture_image() method and store the result as a variable. Within the setup() function, make the image variable a global. Convert the image to RGBA using the .convert() method, save the result of load_pil_image(image) as self.img, and use self.img to access the image during the scene. Here is an example:
from scene import *
import Image
import photos

imgs = photos.capture_image()

class MyScene (Scene):
    def setup(self):
        global imgs
        imgs = imgs.convert('RGBA')
        self.img = load_pil_image(imgs)

    def draw(self):
        image(self.img, 0, 0, self.size.w, self.size.h)

run(MyScene(), frame_interval=1)
@Coder 123 you can format your code using the tag {pre}{pre}, replace curly braces with angled brackets <>
- Irrelevant44
Thanks @Coder123, but is there a way to get multiple images i.e. take a photo with capture_image(), load it into a scene and do some processing on it, then take another photo and repeat, kind of what would be required for a camera application or similar.
- achorrath233
I tried breaking out of a scene by raising an exception, but I was unable to catch it. Instead of being handled by the try/except clause I put around run(), it took me back to the interpreter and highlighted the line where the exception was raised.
- Irrelevant44
@Achorrath I tried this as well as other evil things (deleting references, unloading vital modules, etc). It seems that the scene is tied to the execution of the program, and when it gets created it gets loaded permanently into the main execution. Given that, can I put in a feature request for access to the camera while scenes are running? Even without the automatic camera interface it would be reasonable; I don't know how iOS allows access to the camera, but this approach seems unnecessarily clunky...
Resurrecting this old thread... I was just trying to open a capture_image interface from another simple UI view. The capture interface was called just fine with a button in a sheet view on the iPad, but when I tried to call the capture interface from a button in a fullscreen view, the whole program (script and Pythonista itself) froze. Has anybody else seen this?
Perhaps if you have a simple example that reproduces the code, we can find out the problem.
Often, this type of problem is caused when the ui thread is trying to do some animation, etc while the main thread is trying to show the dialog, camera, etc. The standard fix for such problems is to wrap the code in another function, and call ui.delay on it using a short delay, long enough to be sure the ui is finished doing whatever it was doing, maybe 0.5 second or something.
Here is a simple ui button which captures an image, then shows it in an ImageView. This does not crash on my ipad2. When I added v.close() as the first line inside of the action, it does crash (in this case, drops back to home screen) Ui.delaying the subsequent code by 0.5 sec allowed it to work again. I was also able to get the camera "stuck" if I kicked off an ui.animate before showing the camera.
import ui, photos, io, console

v = ui.View()
v.bg_color = 'white'
I = ui.ImageView(frame=(0, 150, 640, 480))
I.content_mode = ui.CONTENT_SCALE_ASPECT_FIT
v.add_subview(I)
b = ui.Button(frame=(50, 50, 100, 100))
b.bg_color = 'red'
b.title = 'select image'
v.add_subview(b)

@ui.in_background
def myaction(sender):
    img = photos.capture_image()
    console.show_activity()
    if not img:
        return
    with io.BytesIO() as bIO:
        img.save(bIO, 'PNG')
        imgOut = ui.Image.from_data(bIO.getvalue())
    I.image = imgOut
    console.hide_activity()

b.action = myaction
v.present('fullscreen')
- wradcliffe
This example works fine on my ipad as well. It does bring up a question though about whether it is possible to write an app that can draw over the top of the capture_image display. Suppose I want to write an app that captures a selfie and needs the user to align their face with a template. I the example you provide, I would have to get the user to go back and forth between the capture interface and the app interface which would be klunky.
You cannot. But you could always crop and scale the image, or for instance have user drag a box over face
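For instance, the crop-box arithmetic for a centered square region is simple enough to keep in a helper (the name is mine; the tuple follows PIL's (left, top, right, bottom) convention):

```python
def centered_box(width, height, frac=0.5):
    """Square crop box covering `frac` of the shorter side, centered."""
    side = int(min(width, height) * frac)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# With PIL: cropped = img.crop(centered_box(*img.size)); cropped.resize((w, h))
```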
@JonB, thanks for the example. It does work fine on my iPad as well. The syntax is essentially what I have in my code, with a ui.Button activating a photo capture in the background. I haven't had a ton of time to look at the possible differences.
I'll say this, though. The one thing that made things work in my fullscreen view, was having hide_title_bar=True. This was independent of having the console show/hide activity. If this points to anything in particular, let me know. If/when I have time to dig a bit more into it, I'll add a new post. | https://forum.omz-software.com/topic/1402/using-camera-within-the-scene | CC-MAIN-2021-31 | refinedweb | 975 | 65.93 |
Providing Access to Instance and Class Variables
Don't make any instance or class variable public without good reason.
Referring to Class Variables and Methods
Avoid using an object to access a class (static) variable or method. Use a class name instead.
For example:
classMethod(); //OK
AClass.classMethod(); //OK
anObject.classMethod(); //AVOID!
Constants
Numerical constants (literals) should not be coded directly, except for -1, 0, and 1, which can appear in a for loop as counter values.
Variable Assignments
Avoid assigning several variables to the same value in a single statement. It is hard to read. Example:
fooBar.fChar = barFoo.lchar = 'c'; // AVOID!
Do not use the assignment operator in a place where it can be easily confused with the equality operator. Example:
if (c++ = d++) { // AVOID! (Java disallows)
    ...
}
should be written as
if ((c++ = d++) != 0) {
    ...
}
Do not use embedded assignments in an attempt to improve run-time performance. This is the job of the compiler. Example:
d = (a = b + c) + r; // AVOID!
should be written as
a = b + c;
d = a + r;
Miscellaneous Practices
1. Parentheses
It is generally a good idea to use parentheses liberally in expressions involving mixed operators to avoid operator precedence problems. Example:
if (a == b && c == d) // AVOID!
if ((a == b) && (c == d)) // USE
2. Returning Values
Try to make the structure of your program match the intent. Example:
if (booleanExpression) {
    return true;
} else {
    return false;
}
should instead be written as
return booleanExpression;
Similarly,
if (condition) {
    return x;
}
return y;
should be written as
return (condition ? x : y);
3. Expressions before ‘?’ in the Conditional Operator
If an expression containing a binary operator appears before the ? in the ternary ?: operator, it should be parenthesized. Example:
(x >= 0) ? x : -x;
4. Special Comments
Use XXX in a comment to flag something that is bogus but works. Use FIXME to flag something that is bogus and broken. | http://www.javatutorialcorner.com/2013/09/java-best-practice-programming-practices.html | CC-MAIN-2018-09 | refinedweb | 216 | 61.83 |
- Fund Type: ETF
- Objective: Small-cap
- Asset Class: Equity
- Geographic Focus: Australia
Vanguard MSCI Australian Small Companies ETF
VSO:AU 46.6900 AUD 0.0800 0.17%
As of 23:22:11 ET on 03/06/2014.
Snapshot for Vanguard MSCI Australian Small Companies ETF (VSO)
ETF Chart for VSO
Previous Close: 46.6100
Fund Profile & Information for VSO
Vanguard MSCI Australian Small Companies ETF is an exchange traded fund incorporated in Australia. The Fund seeks to match the return (income and capital appreciation) of the MSCI Australian Shares Small Cap Index before taking into account fund fees and expenses.
Fundamentals for VSO
Dividends for VSO
Performance for VSO
Top Fund Holdings for VSO
GroupByKey
Takes a keyed collection of elements and produces a collection where each element consists of a key and all values associated with that key.
See more information in the Beam Programming Guide.
Examples
In the following example, we create a pipeline with a
PCollection of produce keyed by season.
We use
GroupByKey to group all the produce for each season.
import apache_beam as beam

with beam.Pipeline() as pipeline:
  produce_counts = (
      pipeline
      | 'Create produce counts' >> beam.Create([
          ('spring', '🍓'),
          ('spring', '🥕'),
          ('spring', '🍆'),
          ('spring', '🍅'),
          ('summer', '🥕'),
          ('summer', '🍅'),
          ('summer', '🌽'),
          ('fall', '🥕'),
          ('fall', '🍅'),
          ('winter', '🍆'),
      ])
      | 'Group counts per produce' >> beam.GroupByKey()
      | beam.MapTuple(lambda k, vs: (k, sorted(vs)))  # sort and format
      | beam.Map(print))
Output:
Related transforms
- GroupBy for grouping by arbitrary properties of the elements.
- CombinePerKey for combining all values associated with a key to a single result.
- CoGroupByKey for multiple input collections.
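For intuition, GroupByKey's core behavior (ignoring windowing and distributed execution) can be mimicked in plain Python:

```python
from collections import defaultdict

def group_by_key(pairs):
    """Collect all values for each key, like GroupByKey on a bounded collection."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)

seasons = group_by_key([
    ("spring", "strawberry"), ("spring", "carrot"),
    ("summer", "carrot"), ("fall", "tomato"),
])
```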
Last updated on 2021/02/05
In server-side development, it's extremely common to access variables from the execution environment.
In this post, I hope to convince you to consolidate access of these variables into one single-file (or a more structured way of accessing these values) to make it easier to refactor, maintain, and update as your project grows.
// Logging to console the stage we're on console.log(`This is the ${process.env.NODE_ENV}`);
Why is it useful to access variables from the environment?
I won't dive too much into the why of this, but you typically access sensitive values this way, such as
- API keys and secrets
- Application identifiers
- Stage of the environment (looking at you
NODE_ENV)
- JSON Web Token key
- Database access credentials
- other top-secret values of this nature
These are values that you do not want committed to a version control system like GitHub and so you keep them out of there for security purposes.
You might also keep these out of there because these vary from stage to stage, and thus makes no sense to keep in GitHub.
So, getting them during runtime of the program it is! 😃
What's the issue with process.env?
In your own projects, you might be accessing environment variables through
process.env.MY_VARIABLE. This is great! It's fine, and it works.
But is it optimal?
Imagine you have two files that access the same environment variable, some sort of API key
// Usage 1
axios({
  url: `${process.env.CMS_URL}/random-endpoint-1`,
  headers: { Authorization: `Bearer ${process.env.MY_API_KEY}` }
});
// ...
// Usage 2
axios({
  url: `${process.env.CMS_URL}/random-endpoint-2`,
  headers: { Authorization: `Bearer ${process.env.MY_API_KEY}` }
});
Both of these files are accessing the same API key from the environment directly. Now imagine your projects expands in scale and you have many more instances where this API key needs to be accessed.
See the problem that might occur? You now would
process.env.MY_API_KEY littered throughout your project.
What if you need to change the environment variable from
process.env.MY_API_KEY to
process.env.TWITTER_API_KEY?
- Yes, you can easily rename all instances (using a powerful editor like VS Code). But this is going to cause a pretty large commit created for this simple change.
What if you have a plethora environment variables, and you want to group them? Like API credentials, database credentials, etc.?
- There's no way to do this with the normal
process.env.XXX_YYYusage. Everything is at the same level and there's no grouping them.
What if you want to add context to each environment variable, so engineers can understand what purpose they serve?
- You can do this in your
.env.templatefile as single-line comments, but this won't show up in the IDE as a hint or documentation for your team members.
How should we be accessing the environment variables?
I won't say you 100% definitively, absolutely, should follow my advice. But I think it can help prevent the above shortcomings (and also add to your current environment variable usage).
Add a
config.js or
config.ts file!
What do I mean?
I mean consolidate access of environment variables from using
process.env.XXX_YYY everywhere, to just accessing it once! Through a single file(s)!
It can look something like
export const Config = {
  cmsUrl: process.env.CMS_URL,
  dbHost: process.env.DB_HOST,
  dbUser: process.env.DB_USER,
  dbPassword: process.env.DB_PASSWORD,
  dbName: process.env.DB_NAME,
  jwtSecret: process.env.ZEROCHASS_SECRET,
  awsRegion: process.env.AWS_REGION,
  awsBucket: process.env.AWS_BUCKET,
  twitterApiKey: process.env.TWITTER_API_KEY,
}
Now, whenever I want to access any of these environment variables, I can do so by importing this file.
No more having to write
process.env.MY_VARIABLE over and over!
My above example with axios becomes this
import { Config } from './config';

// Usage 1
axios({
  url: `${Config.cmsUrl}/random-endpoint-1`,
  headers: { Authorization: `Bearer ${Config.twitterApiKey}` }
});
// ...
// Usage 2
axios({
  url: `${Config.cmsUrl}/random-endpoint-2`,
  headers: { Authorization: `Bearer ${Config.twitterApiKey}` }
});
If I ever need to change the environment variable that the Twitter API key was stored in, I don't have to change a zillion files, I just change it here in
config.ts!
If I need to add documentation and group items, I can easily add it here.
export const Config = {
  general: {
    /** The URL for our Craft environment */
    cmsUrl: process.env.NEXT_PUBLIC_CRAFT_CMS_URL,
    jwtSecret: process.env.ZEROCHASS_SECRET,
    /** The stage we're on, should be QA/Dev/Prod */
    nodeEnv: process.env.NODE_ENV,
  },
  database: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    name: process.env.DB_NAME,
  },
  aws: {
    region: process.env.AWS_REGION,
    bucket: process.env.AWS_BUCKET,
  },
  twitter: {
    /** API v1 URL for Twitter */
    apiUrl: process.env.TWITTER_API_URL,
    /** API key for our Twitter app */
    apiKey: process.env.TWITTER_API_KEY,
  },
}
And anyone who imports this file will get all that context, including the code hints on hover!
Hopefully this short post has given you some insight into how you might rethink your environment variable usage. You can even throw in some value validation, but I won't cover that here.
Let me know your thoughts!
Discussion (1)
Great post, Chris! I really liked the setup to change variables as the application scales. A thing I didn't notice if you are using or not is the dotenv (dependency to load variables) or what other alternative are you using here. Another really nice concept to explore would be how to share your .env file since this file isn't uploaded anywhere. Perhaps an idea for a future post! Thanks for sharing this! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/swellguycris/accessing-environment-variables-in-a-better-way-3ol8 | CC-MAIN-2021-31 | refinedweb | 915 | 61.73 |
Hello There:
I have a question on API changes in kentico 8
CMS.Membership.UserSettingsInfo doesn't contain a definition for 'UserShowSplashScreen'.
How can I solve this problem? Do you know what namespace, property or method can be used instead?
Thank you beforehand.
Best Regards,
Makara KS
Check out the API Changes for upgrading from v7 to v8. It was removed.
Rightly said by Brenden you may check there to see for all possible changes that you might experience.
Sadly Kentico didn't provide any replacement method for this, so you need to contact Kentico to find a workaround by sending them an email at support@kentico.com.
Thank you!
Please, sign in to be able to submit a new answer. | http://devnet.kentico.com/questions/api-change-kentico-8 | CC-MAIN-2016-44 | refinedweb | 121 | 69.28 |
We’ve just uploaded mypy 0.600 to the Python Package Index (PyPI). Mypy is an optional static type checker for Python. This is a major release that turns on strict optional checking by default (there’s a flag to disable it for backward compatibility), introduces a new mypy daemon mode with much, much faster incremental checking, along with other new features, bug fixes and library stub (typeshed) updates. You can install it as follows:
python3 -m pip install -U mypy
You can read the documentation for this release on ReadTheDocs.
New Features
Strict Optional Checking Enabled by Default
Over a year after introducing it, strict optional mode is now on by default. This means that mypy checks that you use an Optional[…] type whenever a None is a valid value (see below for how to get the old behavior):
def greet(name: str) -> None: print('hi, ' + name) greet(None) # Error! None not compatible with 'str'
Mypy also verifies that only valid operations are performed on None values:
from typing import Optional def greet(name: Optional[str]) -> None: print('hi, ' + name) # Error: Incompatible types # 'str' and 'Optional[str]' greet(None) # OK
Mypy recognizes common Python idioms for checking against None values:
from typing import Optional def greet(name: Optional[str]) -> None: if name is None: print('hello, stranger') else: print('hi, ' + name) # OK greet(None) # OK
You can still get the old behavior, where None was a valid value for every type, by using the --no-strict-optional flag (or strict_optional = False in your config file). Strict optional checking was previously available through the --strict-optional flag, which is no longer required. Read the now significantly expanded documentation for the details, including hints about how to migrate an existing codebase to strict optional checking. We have no plans to deprecate the backward compatibility flag, however.
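A couple more narrowing idioms that mypy understands under strict optional checking (these run as ordinary Python too; the function names are made up for illustration):

```python
from typing import Optional

def shout(name: Optional[str]) -> str:
    assert name is not None   # mypy narrows Optional[str] to str from here on
    return name.upper()

def first_word(text: Optional[str]) -> str:
    if text is None:
        return ""             # an early return narrows the rest of the body
    return text.split()[0]
```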
Mypy Daemon
This release adds support for a mypy daemon (in beta) which can speed up mypy runtimes by a large factor, especially when performing incremental builds where only a few files have changed. The mypy daemon is a server process that runs on your development machine (where you’d normally run mypy) and keeps program state in memory to avoid redundant work when running mypy repeatedly. There’s a client tool dmypy that requests type checking results from the daemon.
We’ve used the mypy daemon at Dropbox for over a month now (it’s been under development much longer), and typical incremental builds for a codebase with millions of lines complete in a few seconds. Running mypy using the daemon can be dozens of times faster than non-daemon incremental mypy runs for large codebases. The documentation explains how to use the mypy daemon.
Remote Caching
We’ve documented how to use remote caching with mypy: you set up a Continuous Integration build to pre-generate mypy cache files for each new commit in your repository, and mypy warms up its cache by downloading a pre-generated cache file to speed up type checking. This can speed up mypy runtimes by a large factor.
We’ve used remote caching at Dropbox together with the mypy daemon for over a month now, and our users are very happy with the results. A cold mypy run (when the daemon is not already running) for a codebase with millions of lines often takes less than a minute, including the time needed to download the remote cache data (over a fast network). Without remote caching typical runtimes were several minutes. Read the documentation for details on how to set up remote caching.
Support Repeated Assignment to Local Variable '_'
Mypy now allows unrestricted assignments to the underscore variable (_) within functions, so code like this will no longer generate errors:
from typing import Tuple def get_item(n: int) -> Tuple[int, str, float]: ... def get_length(n : int) -> int: length, _, _ = get_item(n) # OK! assert length > 0 return length
User-specific Config File
You can now store common mypy configuration options in a .mypy.ini file in your home directory, and it will be automatically used by mypy. This was contributed by Alejandro Avilés (PR 4939).
Notable Bugs Fixed
- Fix crashes in coverage reports (PR 4978)
- Fix to functional Enums (Elazar Gershuni, PR 4942)
- Allow incompatible overriding of __slots__ (PR 4890)
- Make an override of a ClassVar implicitly a ClassVar (Carl Meyer, PR 4718)
- Use EnumMeta instead of Enum to mark enum classes (Elazar Gershuni, PR 4319)
Other Improvements
- Improve performance with large config files and adjust semantics of section patterns (PR 4894)
- Improve performance when using custom source encodings (PR 4957)
- Reduce mypy memory usage (PR 4902, PR 4904)
Acknowledgments
First of all, we’d like to thank our employer, Dropbox, for funding the mypy core team.
Thanks to all mypy contributors who contributed to this release:
- Alejandro Avilés
- Carl Meyer
- Elazar Gershuni
- Emil Hessman
- Ethan Smith
- Jelle Zijlstra
Additional thanks to all contributors to typeshed:
- Aaron Miller
- Andrew Gaul
- Charles-Axel Dein
- Chris Gavin
- Eddie Schoute
- Freek Dijkstra
- Jelle Zijlstra
- John Reese
- Martin DeMello
- NODA, Kai
- rchen152
- Semyon Proshev
- Siva Chandra
- Svend Sorensen
- Tomas Krizek | http://mypy-lang.blogspot.co.uk/2018/05/mypy-0600-released.html | CC-MAIN-2018-22 | refinedweb | 851 | 52.33 |
Here’s another problem:
"<itunes:block>
This tag is used to block a podcast or an episode within a podcast from being posted to iTunes. Only use this tag when you want a podcast or an episode to appear within the iTunes podcast directory."
The first and second sentences are contradictory, aren’t they? Shouldn’t the second say "...when you DON’T want..."?
And what should the value of this element be? Should it just be empty (eg. <itunes:block /> or <itunes:block></itunes:block>)--ie., does it’s mere presence indicate that the item should be blocked (that' how I read it), or should it have a value?
And finally, I assume this can appear either at the channel or item level to block the entire feed or just one item, right? Being explicit about that wouldn’t hurt.Posted by Antone Roundy at
What’s your guess about the itunes:block element? No example, and I doubt they mean what the spec text says “This tag is used to block a podcast or an episode within a podcast from being posted to iTunes. Only use this tag when you want a podcast or an episode to appear within the iTunes podcast directory.” but once you change that to “not to appear” do they mean it’s an empty element, or one or the other of their Yyes|Nno values?
Posted by Phil Ringnalda at
Note to self: learn to type faster :)
Posted by Phil Ringnalda at
Sam Ruby: Podcast Specifications Questions [link]...
Excerpt from del.icio.us/tag/itunes at
Other questions worth asking:
- Why itunes:category? Why not reuse the category element in RSS, which includes a @domain attribute?
- Why itunes:summary? Why not reuse channel/description or item/description?
- Why itunes:author? Why not reuse channel/managingEditor or item/author?
- Why itunes:keywords? Why not reuse dc:subject?
- Why itunes:owner? Why not reuse channel/webMaster?
- Why itunes:image? Why not reuse channel/image?
- Why are they redefining the content model of copyright?
Here a Pod, There a PodiTunes 4.9 is out, with its previously-announced support for podcasts. The podcast directory at the iTunes Music Store is very...... [more]
Trackback from Musings at
Why itunes:category? Why not reuse the category element in RSS
I 'spose they could have specified their own taxonomy domain. And warned users that all other taxonomies/plain-text categories would be ignored.
Why itunes:summary?
Cuz, unlike the notoriously under-specified <description> element, they can say:
“Limited to 4000 characters or less, plain text, no HTML”
Why itunes:author?
Cuz, notoriously, the content of the item-level <author> element is an email address.
Why itunes:keywords? Why not reuse dc:subject?
Fair enough, though it is yet another namespace (funky, y’know...)
Why itunes:owner? Why not reuse channel/webMaster?
“Email address for person responsible for technical issues relating to channel.”
Not the same thing at all...
Why itunes:image? Why not reuse channel/image?
"This artwork can be larger than the maximum allowed by RSS. Details on the size recommendations are in the
section below."
Why are they redefining the content model of copyright?
?Posted by Jacques Distler at
Podcasting is the New NapsterI love the smell of disintermediation in the morning. The new iTunes software update enables a little Podcasts folder. Suddenly you get a clean aggregation of a selected fat heat of the long tail in an integrated experience....
Excerpt from Ross Mayfield's Weblog at
iTunes 4.9 (with podcasting support) AVAILABLE!Get it NOW! Oh and while you’re at it: don’t forget to pick up the new iPod updater as well. First quick impression: cool stuff. The interface still needs a little polishing, though - the menu navigation seems inconsistent. The tags are not...
Excerpt from - The GadgetGuy at
CountdownLinks and a countdown....
Excerpt from Anne’s Weblog about Markup & Style at
Word on the street is that <itunes:image> doesn’t work, and <itunes:link does.
Most amusing (in the sense of "I laughed so hard I spit bitten-tongue blood across the room") find, while looking at their prominently featured partners' feeds for clues: Disney has replaced the core /channel/image/url with <itunes:link>(their image url)</itunes:link>
Well, back to the RSS 1.0 RSS 0.91 module, which defines <rss091:webmaster> for sideways compatibility with <webMaster>. If Mark ever does find that hobby, I sure hope he tells me what it is, and how to get started in it.Posted by Phil Ringnalda at
Sam Ruby identifies the IP address of every person who comments on his weblog, as you can see by hovering over a name. For message boards and weblog software I have developed, I won’t reveal user IP addresses out of privacy and security...
Excerpt from Workbench at
Sock PuppetsFWIW, my experience is that both trolling and spamming were greatly reduced once I implemented this. Related: Beware of Strangers Users Who Share Locations... [more]
Trackback from Sam Ruby at
Phil - even more amusing, there is a very subtle mistake in Disney’s xmlns:itunes declaration, making all of their ITunes metadata effectively invisible to any parser than understands namespaces.
Posted by Sam Ruby at
Interesting. I wonder if it works anyway (for things other than the image), implying qname-matching regex parsing?
Posted by Phil Ringnalda at
I wonder if it works anyway (for things other than the image), implying qname-matching regex parsing?
Yes, the Disney namespace (PodCast instead of Podcast) works. But it’s not based on qname, and it’s not based on regular expressions. From my tests so far ( [link] ):
- If the itunes namespace matches exactly, it works.
- If the itunes namespace matches case-insensitively (lowercase, uppercase, or any mixed case other than the correct case), it works. (This is a generalization of the Disney case.)
- If the itunes namespace matches case-insensitively and is defined by a prefix other than “itunes”, it works.
- If the itunes namespace is something else (like substituting “example.com” for “itunes.com” in the namespace), it works but does not find any of the information in the iTunes namespace, and it falls back on information in the RSS feed (channel/description instead of channel/itunes:subtitle, etc). This “etc” probably bears further research, to determine the mapping between default RSS elements and iTunes elements.
- If the namespace declaration is missing, it does not work at all.
- If the feed is ill-formed (missing the final end tag on "rss"), it does not work at all.
By “works”, I mean it downloads and parses the feed, displays it in the local Podcasts subscriptions list, and downloads my sample MP3.
By “does not work at all”, I mean it displays nothing from the feed, only the URL and a “!” icon that, when selected, displays an alert “URL does not seem to be a valid Podcast URL.”
So it appears that iTunes uses a real, draconian, namespace-aware XML parser... except that namespaces are case-insensitive.
I pass along this information without further comment or judgement.Posted by Mark at
It appears that iTunes 4.9 doesn’t EVER respect the charset parameter in the HTTP Content-type header. Test cases:
- [link] displays subtitle with curly quotes (correct - HTTP declares “application/xml” with no encoding, XML says “windows-1252”, iTunes correctly treats it as windows-1252)
- [link] displays subtitle with blocks where quotes should be (incorrect - HTTP says windows-1252, XML has no encoding, iTunes incorrectly treats it as UTF-8)
- [link] displays subtitle with blocks where quotes should be (incorrect - HTTP says windows-1252, XML says iso-8859-1, iTunes incorrectly treats it as iso-8859-1)
<?xml version='1.0' encoding='iso-8859-1'?>
If this were removed, and if iTunes 4.9 ignored the charset parameter, then the feed would not be considered well formed.Posted by Sam Ruby at
If this were removed, and if iTunes 4.9 ignored the charset parameter, then the feed would not be considered well formed.
You are correct, that feed was not actually testing what I thought it was testing. (The other two appear to be accurate tests.) I’ve corrected the feed to match my prose description (HTTP declares “windows-1252”, XML has no encoding), and iTunes now exhibits the behavior you predicted: it does not look at the HTTP charset parameter at all, and since the XML body has no encoding information, it fails to parse the feed at all.Posted by Mark at
MarkP: "it appears that iTunes uses a real, draconian, namespace-aware XML parser... "except that namespaces are case-insensitive. (just keeps getting better)...
Excerpt from LaughingMeme's MLPs at
MarkP: "it appears that iTunes uses a real, draconian, namespace-aware XML parser... "kellan : MarkP: “it appears that iTunes uses a real, draconian, namespace-aware XML parser... ” - except that namespaces are case-insensitive. (just keeps getting better)...
Excerpt from HotLinks - Level 1 at
Liberal RSS Parsing and Apple iTunes... [more]
Trackback from Dare Obasanjo aka Carnage4Life at
Podcasting Spec Iterations AheadTantek Çelik: Excellent! Here’s a few more specific questions. In particular, don’t miss this one. I’d also suggest that you investigate the state of common practice at the moment, for example, Disney, ESPN, CNN and New... [more]
Trackback from Sam Ruby
The itunes:image tag has been fixed and now works.
The format is as follows:
<itunes:image
There are also updated docs that they say should be coming out shortly.Posted by Otto at
Otto: I’m still hearing conflicting input on this. Once the spec is available, it should literally only be a matter of hours before I make the change and the code is online.
Posted by Sam Ruby at
You know people, I am a total apple fan, have bought a mac with my last money, even though I am unemployed, got excited about the itunes podcasting feature, and I am SO disappointed about them bending the rules to their firm instead of first learning all the rss2 features properly.
I see no reason whatsoever for their tags which are merely doubles of the rss2 tags.
Last but not least: Even worse is their “nice” "enhanced" AAC format which makes lots of podcasters now release their podcasts in those compressions which no other mp3 player can play anymore.
I suggest to all of you to boykott this format by not producing it and not subscribing to such feeds if alternate mp3 feeds are available.
Don’t let apple suck us all into their empire and make us dependent from them.
I am trying to develop another podcast player on CyMeP.org in order to keep podcasting in the community and not in apples hands. You are welcome to suggest features you would like to see there to me - let’s do it together !
Posted by CyMeP.org at
HELP PLEASE!
I’m a newbie to podcasting and much of what you have posted is quite helpful- here’s my problem: I’ve got my tags correct, etc. for iTunes, but my inital post of the location of the .xml file to iTunes is WRONG! I want to edit that within iTunes, but it seems to be unchangeable- can I adjust that or DELETE that podcast. There does not seem to be any way to delete a published podcast- any help would be greatly appreciated- jean@indepres.org
Posted by Jean Larroux at
Jean, you might have more luck on the Syndication-dev mailing list
Posted by Sam Ruby at
FeedValidator.rb?This started out as a Random Thought (RT). background The Feed Validator is organized as a recursive descent parser for various feed formats. It is implemented in an object oriented fashion, where each element ‘knows’ what the possible chi... [more]
Trackback from Sam Ruby at
Comment on My year in 12 copy-and-paste comments by: MarkJan
Can I host podcast on my own Apache server? What are the requirements for a server to host podcasting?
Posted by Tyrse
I just spent 3 days trying to find one parser in this world that would show itunes channel image.
Seems I am sol. < itunes:image href=" seems unparsable?
Web Standards????
I ran into the same issues this morning. And now, when I try to submit my feed, I get “We are currently experiencing technical difficulties. Please try again later.”
Oh, well. Hopefully they get it all worked out. Still not sure how they want images included, though.Posted by Christian Cantrell at | http://www.intertwingly.net/blog/2005/06/28/Podcast-Specifications-Questions | crawl-001 | refinedweb | 2,089 | 65.83 |
« Gecko Plugin API Reference « Plug-in Side Plug-in API
Summary
Requests a platform-specific print operation for an embedded or full-screen plug-in.
Syntax
#include <npapi.h> void NPP_Print(NPP instance, NPPrint* PrintInfo);
Parameters
The function has the following parameters:
- instance
- Pointer to the current plug-in instance. Must be embedded or full-screen.
Description
NPP_Print is called when the user requests printing for a web page that contains a visible plug-in (either embedded or full-page). It uses the print mode set in the NPPrint structure in its printInfo parameter to determine whether the plug-in should print as an embedded plug-in or as a full-page plug-in.
- An embedded plug-in shares printing with the browser; the plug-in prints the part of the page it occupies, and the browser handles everything else, including displaying print dialog boxes, getting the printer device context, and any other tasks involved in printing, as well as printing the rest of the page. For an embedded plug-in, set the printInfo field to NPEmbedPrint.
- A full-page plug-in handles all aspects of printing itself. For a full-page plug-in, set the printInfo field to NPFullPrint or null.
For information about printing on your platform, see your platform documentation.
MS Windows
On MS Windows
printInfo->print.embedPrint.platformPrint is the device context (DC) handle. Be sure to cast this to type HDC.
The coordinates for the window rectangle are in TWIPS format. This means that you need to convert the x-y coordinates using the Windows API call DPtoLP when you output text.
See Also
NPPrint, NPFullPrint, NPEmbedPrint | https://developer.mozilla.org/en-US/docs/Archive/Plugins/Reference/NPP_Print | CC-MAIN-2018-51 | refinedweb | 272 | 56.76 |
Set Property value dynamically using Reflection in C#
Hi,
In this code snippet, we are going to look how to set a property value dynamically using C#. Create a class library with the following code. Compile this as well.
namespace MyClassLibrary
{
public class ReflectionClass
{
private int _age;
public int Age
{
get { return _age; }
set { _age = value; }
}
}
}
Create a console application and in the debug folder of this application copy the DLL which was created by previous project. Then put the following code.
Here, we are going to set the value to the property Age and we are going to get it back.
// will load the assembly
Assembly myAssembly = Assembly.LoadFile(Environment.CurrentDirectory + "\\MyClassLibrary.dll");
// get the class. Always give fully qualified name.
Type ReflectionObject = myAssembly.GetType("MyClassLibrary.ReflectionClass");
// create an instance of the class
object classObject = Activator.CreateInstance(ReflectionObject);
// set the property of Age to 10. last parameter null is for index. If you want to send any value for collection type
// then you can specify the index here. Here we are not using the collection. So we pass it as null
ReflectionObject.GetProperty("Age").SetValue(classObject, 10,null);
// get the value from the property Age which we set it in our previous example
object age = ReflectionObject.GetProperty("Age").GetValue(classObject,null);
// write the age.
Console.WriteLine(age.ToString());
If you have any queries, please feel free to post it here.
try this link: | https://www.dotnetspider.com/resources/19232-Set-Property-value-dynamically-using-Reflection.aspx | CC-MAIN-2021-43 | refinedweb | 234 | 52.26 |
/* $OpenBSD: pax.h,v 1.17 2005/11/09 19:59:06 otto Exp $ */ /* $NetBSD: pax.h,v 1.3 1995/03/21 09:07:41ax.h 8.2 (Berkeley) 4/18/94 */ /* * BSD PAX global data structures and constants. */ #define MAXBLK 64512 /* MAX blocksize supported (posix SPEC) */ /* WARNING: increasing MAXBLK past 32256 */ /* will violate posix spec. */ #define MAXBLK_POSIX 32256 /* MAX blocksize supported as per POSIX */ #define BLKMULT 512 /* blocksize must be even mult of 512 bytes */ /* Don't even think of changing this */ #define DEVBLK 8192 /* default read blksize for devices */ #define FILEBLK 10240 /* default read blksize for files */ #define PAXPATHLEN 3072 /* maximum path length for pax. MUST be */ /* longer than the system MAXPATHLEN */ /* * Pax modes of operation */ #define LIST 0 /* List the file in an archive */ #define EXTRACT 1 /* extract the files in an archive */ #define ARCHIVE 2 /* write a new archive */ #define APPND 3 /* append to the end of an archive */ #define COPY 4 /* copy files to destination dir */ #define DEFOP LIST /* if no flags default is to LIST */ /* * Device type of the current archive volume */ #define ISREG 0 /* regular file */ #define ISCHR 1 /* character device */ #define ISBLK 2 /* block device */ #define ISTAPE 3 /* tape drive */ #define ISPIPE 4 /* pipe/socket */ /* * Pattern matching structure * * Used to store command line patterns */ typedef struct pattern { char *pstr; /* pattern to match, user supplied */ char *pend; /* end of a prefix match */ char *chdname; /* the dir to change to if not NULL. */ int plen; /* length of pstr */ int flgs; /* processing/state flags */ #define MTCH 0x1 /* pattern has been matched */ #define DIR_MTCH 0x2 /* pattern matched a directory */ struct pattern *fow; /* next pattern */ } PATTERN; /* * General Archive Structure (used internal to pax) * * This structure is used to pass information about archive members between * the format independent routines and the format specific routines. 
When * new archive formats are added, they must accept requests and supply info * encoded in a structure of this type. The name fields are declared statically * here, as there is only ONE of these floating around, size is not a major * consideration. Eventually converting the name fields to a dynamic length * may be required if and when the supporting operating system removes all * restrictions on the length of pathnames it will resolve. */ typedef struct { int nlen; /* file name length */ char name[PAXPATHLEN+1]; /* file name */ int ln_nlen; /* link name length */ char ln_name[PAXPATHLEN+1]; /* name to link to (if any) */ char *org_name; /* orig name in file system */ PATTERN *pat; /* ptr to pattern match (if any) */ struct stat sb; /* stat buffer see stat(2) */ off_t pad; /* bytes of padding after file xfer */ off_t skip; /* bytes of real data after header */ /* IMPORTANT. The st_size field does */ /* not always indicate the amount of */ /* data following the header. */ u_int32_t crc; /* file crc */ int type; /* type of file node */ #define PAX_DIR 1 /* directory */ #define PAX_CHR 2 /* character device */ #define PAX_BLK 3 /* block device */ #define PAX_REG 4 /* regular file */ #define PAX_SLK 5 /* symbolic link */ #define PAX_SCK 6 /* socket */ #define PAX_FIF 7 /* fifo */ #define PAX_HLK 8 /* hard link */ #define PAX_HRG 9 /* hard link to a regular file */ #define PAX_CTG 10 /* high performance file */ #define PAX_GLL 11 /* GNU long symlink */ #define PAX_GLF 12 /* GNU long file */ } ARCHD; /* * Format Specific Routine Table * * The format specific routine table allows new archive formats to be quickly * added. Overall pax operation is independent of the actual format used to * form the archive. Only those routines which deal directly with the archive * are tailored to the oddities of the specific format. All other routines are * independent of the archive format. 
Data flow in and out of the format * dependent routines pass pointers to ARCHD structure (described below). */ typedef struct { char *name; /* name of format, this is the name the user */ /* gives to -x option to select it. */ int bsz; /* default block size. used when the user */ /* does not specify a blocksize for writing */ /* Appends continue to with the blocksize */ /* the archive is currently using. */ int hsz; /* Header size in bytes. this is the size of */ /* the smallest header this format supports. */ /* Headers are assumed to fit in a BLKMULT. */ /* If they are bigger, get_head() and */ /* get_arc() must be adjusted */ int udev; /* does append require unique dev/ino? some */ /* formats use the device and inode fields */ /* to specify hard links. when members in */ /* the archive have the same inode/dev they */ /* are assumed to be hard links. During */ /* append we may have to generate unique ids */ /* to avoid creating incorrect hard links */ int hlk; /* does archive store hard links info? if */ /* not, we do not bother to look for them */ /* during archive write operations */ int blkalgn; /* writes must be aligned to blkalgn boundary */ int inhead; /* is the trailer encoded in a valid header? */ /* if not, trailers are assumed to be found */ /* in invalid headers (i.e like tar) */ int (*id)(char *, /* checks if a buffer is a valid header */ int); /* returns 1 if it is, o.w. returns a 0 */ int (*st_rd)(void); /* initialize routine for read. so format */ /* can set up tables etc before it starts */ /* reading an archive */ int (*rd)(ARCHD *, /* read header routine. passed a pointer to */ char *); /* ARCHD. It must extract the info from the */ /* format and store it in the ARCHD struct. */ /* This routine is expected to fill all the */ /* fields in the ARCHD (including stat buf) */ /* 0 is returned when a valid header is */ /* found. -1 when not valid. 
This routine */ /* set the skip and pad fields so the format */ /* independent routines know the amount of */ /* padding and the number of bytes of data */ /* which follow the header. This info is */ /* used skip to the next file header */ off_t (*end_rd)(void); /* read cleanup. Allows format to clean up */ /* and MUST RETURN THE LENGTH OF THE TRAILER */ /* RECORD (so append knows how many bytes */ /* to move back to rewrite the trailer) */ int (*st_wr)(void); /* initialize routine for write operations */ int (*wr)(ARCHD *); /* write archive header. Passed an ARCHD */ /* filled with the specs on the next file to */ /* archived. Returns a 1 if no file data is */ /* is to be stored; 0 if file data is to be */ /* added. A -1 is returned if a write */ /* operation to the archive failed. this */ /* function sets the skip and pad fields so */ /* the proper padding can be added after */ /* file data. This routine must NEVER write */ /* a flawed archive header. */ int (*end_wr)(void); /* end write. write the trailer and do any */ /* other format specific functions needed */ /* at the end of an archive write */ int (*trail)(ARCHD *, /* returns 0 if a valid trailer, -1 if not */ char *, int, /* For formats which encode the trailer */ int *); /* outside of a valid header, a return value */ /* of 1 indicates that the block passed to */ /* it can never contain a valid header (skip */ /* this block, no point in looking at it) */ /* CAUTION: parameters to this function are */ /* different for trailers inside or outside */ /* of headers. See get_head() for details */ int (*rd_data)(ARCHD *, /* read/process file data from the archive */ int, off_t *); int (*wr_data)(ARCHD *, /* write/process file data to the archive */ int, off_t *); int (*options)(void); /* process format specific options (-o) */ } FSUB; /* * Format Specific Options List * * Used to pass format options to the format options handler */ typedef struct oplist { char *name; /* option variable name e.g. 
name= */ char *value; /* value for option variable */ struct oplist *fow; /* next option */ int separator; /* 2 means := separator; 1 means = separator 0 means no separator */ } OPLIST; #define SEP_COLONEQ 2 #define SEP_EQ 1 #define SEP_NONE 0 /* * General Macros */ #ifndef MIN #define MIN(a,b) (((a)<(b))?(a):(b)) #endif #define MAJOR(x) major(x) #define MINOR(x) minor(x) #define TODEV(x, y) makedev((x), (y)) /* * General Defines */ #define HEX 16 #define OCT 8 #define _PAX_ 1 #define _TFILE_BASE "paxXXXXXXXXXX" | http://opensource.apple.com/source/file_cmds/file_cmds-202.2/pax/pax.h | CC-MAIN-2014-15 | refinedweb | 1,274 | 63.12 |
I mainly need help with creating the 3 by 4 matrix (part 3 of the assignment) as i am confused as how i would create it. However this is the entire assignment:
Write a function that returns the sum of all the elements in a specified column in a matrix using the following header:
def sumColumn(matrix, columnIndex)
- Write a function that displays the elements in a matrix row by row, where the values in each row are displayed on a separate line (see the output below).
3.Write a test program (i.e., a main function) that reads a 3 X 4 matrix and displays the sum of each column. Here is a sample run:
Enter a 3-by-4 matrix row for row 0: 2.5 3 4 1.5
Enter a 3-by-4 matrix row for row 1: 1.5 4 2 7.5
Enter a 3-by-4 matrix row for row 2: 3.5 1 1 2.5
The matrix is
2.5 3.0 4.0 1.5
1.5 4.0 2.0 7.5
3.5 1.0 1.0 2.5
Sum of elements for column 0 is 7.5
Sum of elements for column 1 is 8.0
Sum of elements for column 2 is 7.0
Sum of elements for column 3 is 11.5 | https://www.daniweb.com/programming/software-development/threads/445690/creating-a-3-by-4-matrix | CC-MAIN-2017-13 | refinedweb | 227 | 83.56 |
Social Bookmarking Services Revisited
pchere writes "Social bookmarking allows you to share bookmarks publicly instead of restricting them to the browser favourites. Del.icio.us is such a fast-growing community and its users have created a large number of del.icio.us tools to further enhance the service. Organization by tags allows for quick retrieval of sites by topics and bookmarks are available as RSS feeds. An article in D-Lib Magazine reviews the Social Bookmarking Tools to "remind you of hyperlinks in all their glory, sell you on the idea of bookmarking hyperlinks, point you at other folks who are doing the same, and tell you why this is a good thing.""
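The tag-and-retrieve model the summary describes is easy to sketch. The toy store below (all names invented for illustration; this is not del.icio.us's actual API) shows the two core operations: filing a URL under free-form tags, and pulling back every URL that carries a tag, or a combination of tags.

```python
from collections import defaultdict

class BookmarkStore:
    """Toy model of tag-based bookmarking: each URL carries a set of
    free-form tags, and retrieval is by tag (or tag intersection)."""

    def __init__(self):
        self._tags = defaultdict(set)   # tag -> set of URLs
        self._urls = defaultdict(set)   # URL -> set of tags

    def add(self, url, *tags):
        for tag in tags:
            self._tags[tag].add(url)
            self._urls[url].add(tag)

    def by_tag(self, *tags):
        """URLs carrying *all* of the given tags (intersection)."""
        sets = [self._tags[t] for t in tags]
        return set.intersection(*sets) if sets else set()

store = BookmarkStore()
store.add("http://example.org/chess", "chess", "games")
store.add("http://example.org/knives", "cooking", "howto")
store.add("http://example.org/go", "go", "games")

print(sorted(store.by_tag("games")))          # both 'games' URLs
print(sorted(store.by_tag("games", "chess"))) # only the chess URL
```

The "social" part is just this structure shared across users: the site unions everyone's `_tags` index, which is why browsing a popular tag surfaces other people's finds.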
Tags in other sites (Score:2, Informative)
Sure, you could go to google's image search, but where else can you easily see, for instance, celebrity nipples [celebrityflicker.com] or this category [celebrityflicker.com]?
Just looking at an object and seeing other tags at the same time is extremely addictive. You can quickly jump to and fro within this kind of taxonomy with little effort. With certain experiments, we've seen a user stickiness not notice…
Re:Tags in other sites (Score:1)
SPAM!!! He is involved with the founders (Score:2, Informative)
Re:SPAM!!! He is involved with the founders (Score:2)
Re:SPAM!!! He is involved with the founders (Score:1)
If people want to link to their own sites in a distinctly spammy / google-bomby way, they have that right, but they shouldn't end up at +5, Informative, because of it, and they certainly shouldn't be gaining karma for posting a blatant shill.
That's an insult to those slashdotters who actually go out there and find good sites that
Yo mods, get off the crack! (Score:1, Insightful)
that..........????
Re:Yo mods, get off the crack! (Score:3, Insightful)
Link Page (Score:1, Insightful)
Re:Link Page (Score:1)
Re:Link Page (Score:2)
The article mentions the three axes of social bookmarks: URLs, tags, and users. A simple page of links only gives you the first of those. In addition, the various sites have additional useful features that a page of links would not provide.
One of the problems I've run into in managing even my small collection of bookmarks is finding things later. Tags help quite a lot with that. "What was that link to the monthly IBM puzzles? Well, I filed it under 'IBM'. Ah, there it is." With URLs and tags (and a
de.icio.us (Score:5, Interesting)
A good place to look is the page of "popular" sites. Some strange and interesting stuff turns up there fairly routinely.
Stuff like how to cut (e.g. vegetables, meats, etc.) and Chess strategies among other sometimes bizarre sites. [del.icio.us]
Bizarre like chess strategies! (Score:2)
Re:Bizarre like chess strategies! (Score:1)
That was the exact site that was listed on del.icio.us. Yeah I'm sure an accomplished and oh, so busy, grandmaster like yourself would have no time to puruse a site as pedestrian as this.
I on the other hand am not a full time professional chess geek, and would not have otherwise stumbled upon "Ward Farnsworth's Predator at the Chessboard."
So I suppose next you're going to tell me slashdot isn't a huge timewaster and not to stay away from it.
Re:Bizarre like chess strategies! (Score:2)
Re:Bizarre like chess strategies! (Score:2)
That site is about tactics, not strategy. Do you know the difference?
Re:de.icio.us (Score:1)
Re:de.icio.us (Score:2)
Re:de.icio.us (Score:2)
Just as a friendly fyi, I'd also suggest Hotlinks [upian.com], which is slanted to more technical / software articles.
Also, in terms of bookmarks managed solely by one individual, I *highly* recommend Andy Baio's WaxyLinks [waxy.org].
Ironically? (Score:2)
This is great and all (Score:5, Insightful)
Re:This is great and all (Score:2)
Re:This is great and all (Score:4, Informative)
I personally couldn't care less. del.icio.us allows you to become a registered member (free) to have your own section of bookmarks. Only you can publish and customize that section, meaning that the only ads that show up will be the ones you put in there. You can then add a live bookmark in Firefox to the RSS feed and have the last 30 links available to you anywhere you go. Whether I'm at home or at work, I can keep my bookmarks together easily. del.icio.us will then keep a counter on how many people link to the same place and will give you the option of viewing other people's bookmarks who link to the same sites as you. They then take the most linked sites and place them at del.icio.us/popular. [del.icio.us] The only spam that will show up is the spam that you look for.
Some common feeds:
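Those per-user and per-tag feeds are ordinary RSS 2.0, which is why a Firefox live bookmark can consume them directly. A minimal parser sketch follows; the feed XML is a made-up sample in generic RSS 2.0 shape (the real del.icio.us schema differed in details, e.g. it carried tags in Dublin Core elements rather than `<category>`).

```python
import xml.etree.ElementTree as ET

# Made-up sample feed; a real bookmark feed would come over HTTP.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>recent bookmarks</title>
    <item>
      <title>Chess tactics</title>
      <link>http://example.org/chess</link>
      <category>chess</category>
      <category>games</category>
    </item>
    <item>
      <title>Knife skills</title>
      <link>http://example.org/knives</link>
      <category>cooking</category>
    </item>
  </channel>
</rss>"""

def parse_bookmarks(xml_text):
    """Yield one dict per feed item: title, link, and list of tags."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "tags": [c.text for c in item.findall("category")],
        }

for bm in parse_bookmarks(FEED):
    print(bm["title"], bm["tags"])
```

Anything that can poll a URL and run a loop like this can act as a bookmark aggregator, which is the point of exposing the bookmarks as feeds in the first place.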
Re:NICE ENGLISH, JACKBALL (Score:2)
Re:This is great and all (Score:1)
There is no question but that spamming of these new social tools can and will occur; it almost goes with the territory that social forums will foster such 'parasites' and some instances have been noted already. So far, however, it does not seem to have been a major problem, largely because spam has been drowned out by legitimate use.
Is spam a good enough reason to dismiss smth.? (Score:1)
The Web is full of spam
You are getting spam on your cellphone or even in your snail-mailbox
Finally, even
However, I never heard of anyone completely stopping the use of any of these just because of spam. So, the fact that social bookmarking is prone to attract spam (although so far it has not) is usually not a good enough reason to dismiss it.
Backflip (Score:3, Interesting)
Is this the beginnings of... (Score:3, Interesting)
Re:Is this the beginnings of... (Score:1, Informative)
A manually operated webcrawler. (Score:5, Interesting)
Or would it just become a handy place that search engines would mine for data?
Re:A manually operated webcrawler. (Score:1)
Re:A manually operated webcrawler. (Score:1)
Re:A manually operated webcrawler. (Score:2)
Which only leaves META tags... for XML feeds... I guess it's about time robot exclusion standards were revised.
Re:A manually operated webcrawler. (Score:1)
Re:A manually operated webcrawler. (Score:2)
your spam detector needs work (Score:1)
One bookmark to rule them all (Score:5, Interesting)
Not keeping tons of bookmarks is also a good way to reduce info-overload: you only remember the stuff that matters. No more feeling compelled to check up on hundreds of old links (and then cleaning house of the dead ones yet again).
Re:One bookmark to rule them all (Score:2)
Re:One bookmark to rule them all (Score:1)
Re:One bookmark to rule them all (Score:3, Informative)
Re:One bookmark to rule them all (Score:2)
1. tags
2. social aspect (folksonomy)
3. full-text search
4. private / public bookmarks
5. nice UI
Delicious has only 1 and 2.
If you'd like to have all 5, I suggest you look at Simpy - you can use the demo/demo [simpy.com] account.
Link trend feature in a Social Bookmarks ecosystem (Score:2)
Also, I can have multiple "Topics" with Simpy (create a Topic, add a few people to it, watch their links, optionally applying a query filter over them). I use this a lot to keep abreast of…
Re:One bookmark to rule them all (Score:1)
Yes, I do that, too. I remember being surprised when one day I realized that I could find many sites with Google faster than I could with my own bookmark set-up, however well organized it was.
Unfortunately, that doesn't work with everything, so I have my default page set to a nicely laid out menu of links I use a lot but cannot look up quickly. There are only 12 of these (mostly bank & credit
Re:Nothing to see here, move along (Score:2)
Re:Nothing to see here, move along (Score:1)
There's still the original Yahoo directory: [yahoo.com], which, by the way, you can actually search for what you want. All links are hand-moderated, so what you're looking for should be relevant to the category.
There's also the Open Directory Project: [dmoz.org]. This is roughly the same as the Yahoo version... but it's open!
Re:Nothing to see here, move along (Score:1)
wow, not a fluff piece (Score:4, Informative)
Re:wow, not a fluff piece (Score:3, Interesting)
I use del.icio.us. It's great, but the…
Re:wow, not a fluff piece (Score:3, Interesting)
I want it to be easy to use bookmarks in speech, not just keep them in a file.
You can see this in wiki- in wiki, if you use a [[special link syntax,]] it'll automatically link the text.
I want that for everything.
If I'm writing in Slashdot, I shouldn't have to write out less-than a href=quote (lookup-and-paste-URL-here) greater-than blah blah less-than
Re:wow, not a fluff piece (Score:1)
I've heard so much fluff about folksonomies and social this-and-that that I'm well sick of it, but you're right. It's a serious article. I've come to reliably expect real content from dlib; They do a good job. The article at Burningbird.com [burningbird.net] is a great one, too. There seems to be a divide between people who do official tree-based classification and the tag-based classifiers. The tree people say flat namespaces aren't rich enough to provide con
Re:wow, not a fluff piece (Score:1)
Re:wow, not a fluff piece (Score:1)
Re:wow? (Score:4, Interesting)
Also, I have a live bookmark on my mother's, and on a friend's computer. All I have to do is tag something as "Mom", or "Joel", and it will show up in their bookmarks in Firefox.
When my 70 year old mom asks, "Where can I get cheap ink cartridges?", I will add a bookmark to her Firefox. All remotely.
Re:wow? (Score:1)
Yeah, I'll Bookmark This. (Score:4, Funny)
Good. (Score:2, Insightful)
one for your own site... (Score:2, Informative)
Re:one for your own site... (Score:2, Informative)
Spammers (Score:1)
Re:Spammers (Score:1)
I emailed Joshua (the creator) and he banned them right away.
Re:Spammers (Score:3, Insightful)
Second part (Score:5, Informative)
References better than the article? (Score:1)
Hmm... (Score:2)
Self Referential (Score:2)
blogs pointing to social booking marking tools linking to other blogs talking about syndication, itself syndicating another page talking about blogging that links to a social booking site...
I'm a bit worried about getting involved because I might not get out.
You heard about the two websites that accidentally syndicated each other didn't you? Right mess it was. I
scoring tags (Score:1)
basically, tags are given a one-digit score (1 low to 9 high) which informs the system how much a given item belongs to that tag.
so when bookmarking slashdot, for example, you might give it the following tags: news4 geek9
this means that slashdot is a 4 on the newsness scale, but 9 on the geekness scale. this sort of quantification would really come in handy for searches. when you search for "news," slashdot wo
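One way to picture the idea (my own sketch, nothing the parent poster specified): keep the per-tag scores with each bookmark and rank search results by how strongly each bookmark belongs to the queried tag:

```python
# Toy weighted-tag search: each bookmark stores {tag: score 1..9}.
bookmarks = {
    "slashdot.org": {"news": 4, "geek": 9},
    "cnn.com":      {"news": 9},
    "xkcd.com":     {"geek": 8},
}

def search(tag):
    # Rank matching bookmarks by their score for the queried tag.
    hits = [(scores[tag], url) for url, scores in bookmarks.items() if tag in scores]
    return [url for score, url in sorted(hits, reverse=True)]

print(search("news"))   # cnn.com (9) outranks slashdot.org (4)
```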
Simpy (Score:2)
heh (Score:1)
Great for finding out about the latest viral pages (Score:1)
I've been using del.icio.us for about a year now, but I hardly ever visit the site itself except when I'm adding or retrieving my own bookmarks. Instead, I use the fantastic populicious [populicio.us] RSS feeds to tell me what the new popular links of the day are, coupled with RSS feeds for various tag intersections that I'm interested in.
I've found out about a helluva lot of stuff via del.icio.us over the last year that I just wouldn't have found out about otherwise. If you haven't used it yet, give it a look.
Digg (Score:2, Informative)
Re:Digg (Score:2, Insightful)
I still use it though.
Data validation (Score:2, Informative)
Here's what I want (Score:2, Interesting)
Could be solved by a mix of RSS-feed and Firefox plugin?
Anything like this exists?
Oh, and it should be easy as hell to input a new site, or it will never be popular...
Re:Here's what I want (Score:1)
I was doing this back in 1994 on my web site.. (Score:2)
Spammers (Score:3, Informative)
I keep thinking: "One of these days, the spammers are going to mess up this system."
Social Bookmarks Services Stock Market (Score:2) [yahoo.com]
Fantasy market, but fun to play and watch.
The 2 leaders there, Delicious and Furl, are commercial (one has VC funding and the other is owned by a publicly traded company). Simpy [simpy.com] is the first independent service there, and I hope you can see why [simpy.com] (demo/demo account). Yes, I'm a little biased, see my URL above.
del.icio.us = use.less (Score:1, Insightful)
Yuck (Score:1)
why is bookmarking back now (Score:2, Informative)
I think for most people, me included, bookmarking is easier and often provides more useful information to others than blogging, though there is clearly overlap.
Services such as Wists [wists.com], which is somewhere between Flickr [flickr.com] and del.icio.us [del.icio.us], are an example of bookmarking systems that are complementary to del.icio.us, allowing people to boo [del.icio.us]
Feed Me Links (Score:1)
It's a really great social bookmarking site. The userbase is still somewhat small, but it's growing quickly! Feed Me Links provides a great user interface, Firefox extension, IE plugin, RSS feeds, tags, Flash sidebar, and so much more!
Try this out.... (Score:1)
I have most of my links public at [klatt.us] but I have a few that are private.
... and oh yah, RSS feeds
too small pics (Score:1)
This is useless.
another community bookmarking service (Score:1)
spurl.net (Score:1)
There is also zniff [zniff.com] which searches in the bookmarks in spurl.net
Importing bookmarks (Score:1) | https://slashdot.org/story/05/05/22/1336243/social-bookmarking-services-revisited | CC-MAIN-2017-34 | refinedweb | 2,434 | 73.68 |
This chapter provides answers to frequently asked questions about Oracle HTTP Server.
Documentation from the Apache Software Foundation is referenced when applicable.
Oracle HTTP Server has a default content handler for dealing with errors. You can use the
ErrorDocument directive to override the defaults.
For HTTP, Oracle HTTP Server supports two types of virtual hosts: name-based and IP-based. HTTPS supports only IP-based virtual hosts.
If you are using IP-based virtual hosts for HTTP, then the customer has a virtual server listening on port 80 of a per-customer IP address. To provide HTTPS for these customers, simply add an additional virtual host for each user listening on port 4443 of that same per-customer IP address and use SSL directives, such as SSLRequireSSL to specify the per-customer SSL characteristics. Note that each customer can have their own wallet and server certificate.
If you are using name-based virtual hosts for HTTP, each customer has a virtual server listening on port 80 of a shared IP address. To provide HTTPS for those customers, you can add a single shared IP virtual host listening on port 4443 of the shared IP address. All customers will share the SSL configuration, including the wallet and ISP's server certificate.
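A sketch of what the IP-based variant might look like in httpd.conf (the address, paths, and wallet location are placeholders, and directive details vary by release):

```apache
# Hypothetical per-customer IP-based virtual hosts.
<VirtualHost 203.0.113.10:80>
    ServerName customer1.example.com
    DocumentRoot /www/customer1
</VirtualHost>

<VirtualHost 203.0.113.10:4443>
    ServerName customer1.example.com
    DocumentRoot /www/customer1
    SSLEngine on
    SSLRequireSSL
    SSLWallet file:/etc/ORACLE/WALLETS/customer1
</VirtualHost>
```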
You can use Oracle HTTP Server as a cache by setting the ProxyRequests directive to "On" and specifying the CacheRoot directive.
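For example, a minimal (hypothetical) fragment; the cache directive names follow Apache 1.3 mod_proxy conventions and the values are placeholders:

```apache
ProxyRequests On
CacheRoot "/var/cache/ohs/proxy"
CacheSize 51200          # cache size in KB
CacheMaxExpire 24        # upper bound on cached document lifetime, in hours
```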
You can use multiviews, a general name given to the Apache server's ability to provide language and character-specific document variants in response to a request.
You should use the Proxy directives, and not the Cache directives, to send proxy sensitive requests across firewalls.
Oracle HTTP Server is based on Apache version 1.3.28.
Oracle Database, 10g Release 1 (10.1) is still based on the 1.3.x stack from Apache organization.
You cannot apply the Apache security patches to Oracle HTTP Server for the following reasons:
openSSL alerts do not apply, since Oracle has removed those components from the stack in use.
mod_php is not supported, however, you have the following two options:
Compile mod_php by yourself and use it. If there is a support question on any aspect of Oracle HTTP Server, you might be asked to reproduce the problem without mod_php.
The general idea is that all servers in a distributed Web site should agree on a single URL namespace.
We could initially map this namespace to two Web servers by putting
app1 on server1 and
app2 on server2. Server1's configuration might look like the following:
Redirect permanent /app2 http://server2/app2
Alias /app1 /myApps/application1
<Directory /myApps/application1>
    ...
</Directory>
Server2's configuration is complementary. If you decide to partition the namespace by content type (HTML on server1, JSP on server2), you change the server configuration and move files around, but you do not have to make changes to the application itself. The resulting configuration of server1 might look like the following:
RedirectMatch permanent (.*)\.jsp$ http://server2$1.jsp
AliasMatch ^/app(.*)\.html$ /myPages/application$1.html
<DirectoryMatch "^/myPages/application\d">
    ...
</DirectoryMatch>
Note that the amount of actual redirection can be minimized by configuring a hardware load balancer to send requests to server1 or server2 based on the URL.
There are many attacks, and new attacks are invented everyday. Following are some general guidelines for securing your site. You can never be completely secure, but you can avoid being an easy target. | http://web.stanford.edu/dept/itss/docs/oracle/10g/server.101/b12255/faq.htm | CC-MAIN-2016-22 | refinedweb | 547 | 55.44 |
27 October 2010 08:21 [Source: ICIS news]
GUANGZHOU (ICIS)--China’s Weifang Yaxing Chemical has posted a third quarter net loss of yuan (CNY) 68m ($10.2m) against a net profit of CNY1.1m year on year due to high prices of feedstock high density polyethylene (HDPE), it said on Wednesday.
The company posted an operating loss of CNY92.2m in the July-September period, against an operating profit of CNY5.6m in the year-ago period, it said in a disclosure to the Shanghai Stock Exchange.
Operating revenue for the third quarter showed a 10% rise to CNY453.3m, from CNY411.1m year on year.
For the nine-month period ending 30 September, the company posted a net loss of CNY116.9m against a net profit of CNY6.9m in the year-ago period, it added.
The company’s losses could continue into the fourth quarter due to high HDPE costs, a company official said, adding that they were considering hiking prices of their products to offset the rise in feedstock costs.
The company produces chlorinated polyethylene (CPE), polyvinyl chloride (PVC) and membrane caustic soda at its facility in
NAME¶
rand, rand_r, srand - pseudo-random number generator
SYNOPSIS¶
#include <stdlib.h>
int rand(void);
int rand_r(unsigned int *seedp);
void srand(unsigned int seed);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

rand_r():
    Since glibc 2.24:
        _POSIX_C_SOURCE >= 199506L
    Glibc 2.23 and earlier:
        _POSIX_C_SOURCE
DESCRIPTION¶
The rand() function returns a pseudo-random integer in the range 0 to RAND_MAX inclusive (i.e., the mathematical range [0, RAND_MAX]).
Like rand(), rand_r() returns a pseudo-random integer in the range [0, RAND_MAX]. The seedp argument is a pointer to an unsigned int that is used to store state between calls. If rand_r() is called with the same initial value for the integer pointed to by seedp, and that value is not modified between calls, then the same pseudo-random sequence will result.
The value pointed to by the seedp argument of rand_r() provides only a very small amount of state, so this function will be a weak pseudo-random generator. Try drand48_r(3) instead.
EXAMPLES¶
POSIX.1-2001 gives the following example of an implementation of rand() and srand(), possibly useful when one needs the same sequence on two different machines.

static unsigned long next = 1;

/* RAND_MAX assumed to be 32767 */
int myrand(void) {
    next = next * 1103515245 + 12345;
    return((unsigned)(next/65536) % 32768);
}

void mysrand(unsigned int seed) {
    next = seed;
}
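A similar, non-POSIX sketch illustrates the point made above about rand_r(): because all of its state lives in a caller-supplied variable, equal seeds always reproduce the same sequence, independently of any other use of rand().

```c
/* Illustrative sketch (not from POSIX): per-caller state for rand_r(). */
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <stdlib.h>

int first_of_sequence(unsigned int seed)
{
    unsigned int state = seed;   /* per-caller (e.g. per-thread) state */
    return rand_r(&state);
}
```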
The following program can be used to display the pseudo-random sequence produced by rand() when given a particular seed.
#include <stdlib.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
    int j, r, nloops;
    unsigned int seed;

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <seed> <nloops>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    seed = atoi(argv[1]);
    nloops = atoi(argv[2]);

    srand(seed);
    for (j = 0; j < nloops; j++) {
        r = rand();
        printf("%d\n", r);
    }

    exit(EXIT_SUCCESS);
}

COLOPHON¶
This page is part of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
This class provides methods for using pco cameras.
Project description
The Python package pco offers all functions for working with pco cameras that are based on the current SDK. All shared libraries for the communication with the camera and subsequent image processing are included.
- Easy to use camera class
- Powerful API to pco.software development kit
- Image recording and processing with pco.recorder
Installation
Install from pypi (recommended):
$ pip install pco
Basic Usage
import pco
import matplotlib.pyplot as plt

with pco.Camera() as cam:
    cam.record()
    image, meta = cam.image()

plt.imshow(image, cmap='gray')
plt.show()
Logging
To activate the logging output create the Camera object with debuglevel= parameter.
The debug level can be set to one of the following values:
- 'off' Disables all output.
- 'error' Shows only error messages.
- 'verbose' Shows all messages.
- 'extra verbose' Shows all messages and values.
The default debuglevel is 'off'.
pco.Camera(debuglevel='verbose')
...
[][sdk] get_camera_type: OK.
The optional timestamp= parameter activates a tag in the printed output. Possible values are: 'on' and 'off'.
The default value is 'off'.
pco.Camera(debuglevel='verbose', timestamp='on')
...
[2019-11-25 15:54:15.317855 / 0.016 s] [][sdk] get_camera_type: OK.
Documentation
The pco.Camera class offers following methods:
- record() generates, configures and starts a new recorder instance.
- stop() stops the current recording.
- close() closes the current active camera and releases the blocked ressources.
- image() returns an image from the recorder as numpy array.
- images() returns all recorded images from the recorder as list of numpy arrays.
- image_average() returns the averaged image. This image is calculated from all recorded images in the buffer.
- set_exposure_time() sets the exposure time of the camera.
- wait_for_first_image() waits for the first available image in the recorder memory.
The pco.Camera class has the following variables:
The pco.Camera class has the following objects:
- sdk offers direct access to all underlying functions of the pco.sdk.
- recorder offers direct access to all underlying functions of the pco.recorder.
record()
Creates, configures and starts a new recorder instance.
def record(self, number_of_images=1, mode='sequence'):
- number_of_images sets the number of images allocated in the driver. The RAM of the PC is limiting the maximum value.
- mode sets the type of recorder:
- In 'sequence' mode this function is blocking during record. The recorder stops automatically when the number_of_images is reached.
- In 'sequence non blocking' mode this function is non blocking. Status must be checked before reading an image. This mode is used to read images while recording, e.g. thumbnail.
- In 'ring buffer' mode this function is non blocking. Status must be checked before reading an image. Recorder did not stop the recording when the number_of_images is reached. The first image is overwritten from the next image.
- In 'fifo' mode this function is non blocking. Status must be checked before reading an image. When the number_of_images in the fifo is reached, the following images are dropped until images were read from the fifo.
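The 'ring buffer' and 'fifo' semantics described above can be pictured with a small stand-in (plain Python, no camera involved; this assumes nothing about pco internals):

```python
from collections import deque

# Ring buffer: a full buffer overwrites the *oldest* image.
ring = deque(maxlen=3)
for frame in range(5):
    ring.append(frame)        # frames 0 and 1 get overwritten
print(list(ring))             # [2, 3, 4]

# FIFO: a full buffer drops *new* images until space is freed by reading.
fifo, capacity = [], 3
for frame in range(5):
    if len(fifo) < capacity:
        fifo.append(frame)    # frames 3 and 4 are dropped
print(fifo)                   # [0, 1, 2]
```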
The entire camera configuration must be set before calling record(). The set_exposure_time() command is the only exception. This function has no effect on the recorder object and can be called up during the recording.
stop()
Stops the current recording.
def stop(self):
In 'ring buffer' and 'fifo' mode this function must to be called by the user. In 'sequence' and 'sequence non blocking' mode, this function is automatically called up when the number_of_images is reached.
close()

def close(self):
Closes the activated camera and releases the blocked ressources. This function must be called before the application is terminated. Otherwise the resources remain occupied.
This function is called automatically, if the camera object is created by the with statement. An explicit call to close() is no longer necessary.
with pco.Camera() as cam:
    # do some stuff
image()
Returns an image from the recorder. The type of the image is a numpy.ndarray. This array is shaped depending on the resolution and ROI of the image.
def image(self, image_number=0, roi=None):
image_number specifies the number of the image to read. In 'sequence' or 'sequence non blocking' mode the recorder index matches the image number. If image_number is set to 0xFFFFFFFF the last recorded image is copied. This allows e.g. thumbnail while recording.
roi sets the region of interest. Only this region of the image is copied to the return value.
>>> cam.record(number_of_images=1, mode='sequence')
>>> image, meta = cam.image()
>>> type(image)
numpy.ndarray
>>> image.shape
(2160, 2560)
>>> image, metadata = cam.image(roi=(1, 1, 300, 300))
>>> image.shape
(300, 300)
images()
Returns all recorded images from the recorder as list of numpy arrays.
def images(self, roi=None, blocksize=None):
roi sets the region of interest. Only this region of the image is copied to the return value.
blocksize defines the maximum number of images that are returned. This parameter is only useful in 'fifo' mode and under special conditions.
>>> cam.record(number_of_images=20, mode='sequence')
>>> images, metadatas = cam.images()
>>> len(images)
20
>>> for image in images:
...     print('Mean: {:7.2f} DN'.format(image.mean()))
...
Mean: 2147.64 DN
Mean: 2144.61 DN
...
>>> images, metadatas = cam.images(roi=(1, 1, 300, 300))
>>> images[0].shape
(300, 300)
image_average()
Returns the averaged image. This image is calculated from all recorded images in the buffer.
def image_average(self, roi=None):
roi defines the region of interest. Only this region of the image is copied to the return value.
>>> cam.record(number_of_images=100, mode='sequence')
>>> avg = cam.image_average()
>>> avg = cam.image_average(roi=(1, 1, 300, 300))
set_exposure_time()
Sets the exposure time of the camera.
def set_exposure_time(self, exposure_time):
exposure_time must be given as float or integer value in the unit ‘second’. The underlying values for the function sdk.set_delay_exposure_time(0, 'ms', time, timebase) will be calculated automatically. The delay time is set to 0.
>>> set_exposure_time(0.001)
>>> set_exposure_time(1e-3)
wait_for_first_image()
Waits for the first available image in the recorder memory.
def wait_for_first_image(self):
In recorder mode 'sequence non blocking', 'ring buffer' and 'fifo', the function record() returns immediately. It is the responsibility of the user to wait for images from the camera before calling image(), images() or image_average.
configuration
The camera parameters are updated by changing the configuration variable.
cam.configuration = {'exposure time': 10e-3,
                     'roi': (1, 1, 512, 512),
                     'timestamp': 'ascii',
                     'pixel rate': 100_000_000,
                     'trigger': 'auto sequence',
                     'acquire': 'auto',
                     'metadata': 'on',
                     'binning': (1, 1)}
The variable can only be changed before the record() function is called. It’s a dictionary with a certain number of entries. Not all possible elements need to be specified. The following sample code only changes the 'pixel rate' and does not affect any other elements of the configuration.
with pco.Camera() as cam:
    cam.configuration = {'pixel rate': 286_000_000}
    cam.record()
    ...
sdk
The object sdk allows direct access to all underlying functions of the pco.sdk.
>>> cam.sdk.get_temperature()
{'sensor temperature': 7.0, 'camera temperature': 38.2, 'power temperature': 36.7}
All return values form sdk functions are dictionarys. Not all camera settings are currently covered by the camera class. Special settings have to be set directly by calling the respective SDK function.
recorder
The object rec offers direct access to all underlying functions of the pco.recorder. It is not necessary to call a recorder class method directly. All functions are fully covered by the methods of the camera class.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pco/ | CC-MAIN-2021-21 | refinedweb | 1,243 | 53.07 |
October 27, 2018 by Artur
Hello, today I will write a little bit about tooling. 2 weeks ago I thought about starting this blog. I asked my friend who blogs what blog engine I should use. His answer was rather surprising, as he told me I should use a static website and commit blog posts as pull requests to it :O.
So, I started exploring the internet to find the best blog layout ( reactjs ) tool to write my static blog. I was able to dig out the
gatsby starter blog from the hundreds of Chinese repos GitHub is flooded with right now. At the moment I created this blog I had no experience with the Gatsby ecosystem, but it looked promising.
Running this kind of gatsby package required me only to install
gatsby-cli and run command
gatsby develop. Pretty easy? Huh?
I decided to add some tweaks to this simple blog package, as it was really a pure blog with one author. So to add other authors ( who I don't have yet :( ) I added an authors folder. To add you as an author, you need to create a folder with your name and create an
index.js file with this kind of content inside it:
export const Artur = {
  photo: require('./Artur.jpeg'),
  desc: 'GraphQL passionate. Code generation guru. Short code lover. Father. CTO. CEO.',
  name: 'Artur Czemiel',
  email: '[email protected]',
}
and add of course this line to
authors/index.js :
import { Artur } from './Artur'

export const Authors = {
  Artur,
}
Later on you can use it inside your blogpost.
Adding your blog post is pretty easy though. Again you have to create a folder inside pages folder with blog post slug like
my-very-interesting-article. Add an
index.md file to it with this kind of header, which is parsed by the
graymatter package:
---
title: My very interesting article
date: '2018-10-27T13:23:04.284Z'
author: Artur
---
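What graymatter does with that header can be pictured with a tiny stand-in parser (illustrative only — the real package handles far more, and this sketch assumes the exact `---` fencing shown above):

```javascript
// Minimal frontmatter split, in the spirit of gray-matter (not its real API).
function parseFrontmatter(text) {
  const match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(text);
  if (!match) return { data: {}, content: text };
  const data = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, content: match[2] };
}

const post = "---\ntitle: My very interesting article\nauthor: Artur\n---\nHello!";
console.log(parseFrontmatter(post).data.title); // My very interesting article
```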
That’s it. After writing the article you just submit a pull request from your fork. I merge the pull request and publish your article to the website.
Sometimes I am a kinda lazy person. I added a small publish CLI to this project, which automatically, using
opn, opens a browser with prefilled url and title fields for:
hackernews. So it is much easier to share your blog posts from this blog. It lives in
the bin/index.js folder of this blog and uses
yargs and
inquirer and
graymatter which I mentioned before.
After that it opens a window so I can post on reddit. Simple and beautiful!
If you want to support me just star GraphqlBlog repo. The link is in the navigation bar. Of course you can write your own article here and submit a pull request. I promise I will improve the bloggin system to be the best open source blog in the world ; ).
Do you want to try our mock backend for GraphQL apps? It is in beta phase and 100% free.
I'm trying to use vectors in a resource management system I am making for a game and keep getting unhandled exceptions during runtime when I try to add elements to the vector, and can't seem to figure out why.
Here is all the code in which the vectors are giving me problems.
Class Definition (vector is initialized):
Code:
class MeshContainer
{
private:
    vector<Mesh *> MeshList;

public:
    MeshContainer() {}
    ~MeshContainer()
    {
        if(MeshList.size() > 0)
        {
            for(DWORD i=0; i<MeshList.size(); i++)
            {
                if(MeshList[i] != NULL)
                {
                    MeshList[i]->MeshData->Release();
                    MeshList[i] = NULL;
                }
            }
            MeshList.clear();
        }
    }
    void DrawMesh(DWORD dwMeshID);
    int LoadMesh(IDirect3DDevice9* &g_pd3dDevice, int iMeshNum);
};

Here is the Mesh structure for reference:
Code:
struct Mesh
{
    DWORD dwID;
    DWORD dwNumSubsets;
    LPD3DXMESH MeshData;

    Mesh()
    {
        dwID = 0;
        dwNumSubsets = 0;
        MeshData = NULL;
    }
};

And finally, here is the function in which I try to add items to the vector:
Code:
int MeshContainer::LoadMesh(IDirect3DDevice9* &g_pd3dDevice, int iMeshNum)
{
    Mesh *TempMesh = new Mesh();

    switch(iMeshNum)
    {
    case MESH_TEAPOT:
        // Create a Teapot
        if(FAILED(D3DXCreateTeapot(g_pd3dDevice, &TempMesh->MeshData, 0)))
            return false;
        TempMesh->dwNumSubsets = 0;
        TempMesh->dwID = 0;
        break;

    default:
        return false;
        break;
    }

    MeshList.push_back(TempMesh);
    return true;
}

I have performed tests and the MeshList vector is being initialized to something other than NULL; it just doesn't seem to be something that I can add items to.
Using the MSVC++ 2005 EE debugger I am pointed to the size() function in the vector class. I am told that this fails when the push_back() function is called (I looked into it and push_back calls size).
Here is the vector size function in case it provides any help:
Code:
size_type size() const
{    // return length of sequence
    return (_Myfirst == 0 ? 0 : _Mylast - _Myfirst);
}

The debugger tells me that the return statement is causing the exception (of course, since that is the only statement).
When I try to use reserve() in an attempt to make space so that the vector isn't completely empty, the capacity() function, which is called in reserve, also creates an exception. If necessary I can provide more details about exactly what is going on in that instance however it is almost the exact problem the size() function is having.
Can anyone figure out how this is happening? I am initializing objects of this class elsewhere and calling the functions the right way. Oh, and I am including the correct header and using namespace std, which is why I'm not using std::. The entire thing compiles fine.
Thanks for help in advance. | http://cboard.cprogramming.com/cplusplus-programming/76255-vector-problem.html | CC-MAIN-2014-42 | refinedweb | 488 | 58.52 |
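As a sanity check (my own stand-in, with the DirectX types removed and a hypothetical PlainMesh in place of Mesh), the same container pattern runs cleanly in isolation. That suggests the exception comes from calling LoadMesh through a dangling, uninitialized, or corrupted MeshContainer object rather than from vector itself:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// DirectX-free stand-in for the Mesh struct above (hypothetical).
struct PlainMesh {
    unsigned id;
    PlainMesh() : id(0) {}
};

class PlainContainer {
    std::vector<PlainMesh *> list;
public:
    ~PlainContainer() {
        for (std::size_t i = 0; i < list.size(); ++i)
            delete list[i];           // release the owned meshes
    }
    bool load(unsigned id) {
        PlainMesh *m = new PlainMesh();
        m->id = id;
        list.push_back(m);            // the same call that crashes in the post
        return true;
    }
    std::size_t count() const { return list.size(); }
};
```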
Orientation
In both semantic model standards, Topic Maps and RDF/OWL, and in many other NoSQL approaches that try to solve efficiently the problem of how to represent relations and relationships, one major stumbling block rises above all efforts: the namespace. It is a language problem: the babel we have in our civilized world is transferred into our IT systems. But machines do not have to understand our language; we do.
Good news for everyone: there is an alternative way of thinking about modelling data. Hence the AtomicDB Data Model, or as I call it, AIR:
Atomic Information Resource Data Model
The Entity-Attributes 'Silo' Structure
The problem here is that, from a semantic point of view, users need similar diagrams to express business processes, but when we reach the implementation stage, software engineers have to marry business requirements with the technical constraints of the database system, hence the ER diagram you see. Generally speaking this is known as "The Model", a conceptual view of the user on data. The ER version of the model has several limitations, due to the architecture of RDBMS. The main one is that each attribute remains enclosed in the table structure, and in case the same attribute appears in another table, the dataset that it represents has to be repeated. In our example above, the primary key (pid) of "Parts" is repeated as a foreign key (catpid) in "Catalog". The difficulties that arise in data aggregation due to this limitation are substantial.
How to Break Free from the Entity-Attribute-Value Paradigm
The relational and entity-relationship models made a huge impact on the IT world for nearly half a century. But this long period of standardization also meant one thing: everyone had to comply with the rules and requirements of the model. Everyone had to think in terms of Entity-Attribute-Value, or Subject-Predicate-Object as it is known in the RDF semantic model. Programming languages were affected by this monolithic way of thinking too. Although it proved advantageous to program with classes and objects, it created an artificial problem of how to map these onto persistent data structures on disk, also known as the object-relational impedance mismatch problem. Knowledge representation frameworks did not escape either: ontologies expressed in OWL followed the same paradigm of classes, attributes, and values. Serialization methods such as JSON (object-name-value) and XML (element-attribute-value) also follow the same rationale.
The Signified - Sign - Signifier Alternative Paradigm
The aforementioned Entity-Attribute bond and distinction plays its role here too. But more importantly, another concept, 'value', is added, which makes this triplet even harder to handle in our digital world. This is mainly because three perspectives, the conceptual layer, the representation layer, and the physical encoding layer, are mixed in such a way that it is very hard to separate them and work with them at distinct levels of abstraction. The R3DM/S3DM conceptual framework that we discuss in Part 3 is based on the natural process of semiosis, where the signified (concept, entity, attribute) and the signifier (value) are referenced through symbols (signs) at discrete layers. The same philosophy is shared in the architecture of this database management system, and we demonstrate this with the following example.
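As a rough illustration only (my own sketch, not AtomicDB's actual mechanism): separating the layers can be pictured as an interning scheme, where every concept and every value is registered once and referenced everywhere else through an atomic sign (here, an integer):

```python
# Toy three-layer store: signified (concepts), signifier (values), signs (ints).
class SignStore:
    def __init__(self):
        self.sign_of = {}   # thing -> sign
        self.thing_of = {}  # sign -> thing

    def intern(self, thing):
        # Register once; every later reference reuses the same sign.
        if thing not in self.sign_of:
            sign = len(self.thing_of)
            self.sign_of[thing] = sign
            self.thing_of[sign] = thing
        return self.sign_of[thing]

store = SignStore()
facts = set()

# "pid" is interned once and shared by Parts and Catalog alike,
# instead of being duplicated as a foreign-key column.
for entity, attribute, value in [("Parts", "pid", 101),
                                 ("Catalog", "pid", 101)]:
    facts.add((store.intern(entity), store.intern(attribute), store.intern(value)))

print(store.intern("pid"))   # the same sign every time
```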
© 2020 Data Science Central ®
Sometimes you need to resample a spectrum onto a new wavelength grid. Just such a scenario arose and I was asked:
What is the recommended way to resample a 1D spectrum?
I recalled using pysynphot, which is very useful for working with 1D spectra. However, the documentation is still a bit thin. After a bit of exploring, here is an example that John Johnson and I wrote to illustrate our answer to the question. This package takes care to resample while preserving flux. We hope you find this useful! Or leave your alternative answer in the comments below.
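The embedded example itself is not reproduced here, but the core idea, resampling while preserving integrated flux, can be sketched without pysynphot at all. The following is my own minimal stand-in (not the authors' code), assuming contiguous bins given by their edges; it redistributes each input bin's integrated flux into the output bins it overlaps:

```python
def rebin_preserving_flux(old_edges, old_flux, new_edges):
    """Redistribute per-bin integrated fluxes onto a new set of bin edges.

    old_flux[i] is the flux integrated over [old_edges[i], old_edges[i+1]].
    Overlap fractions guarantee the total flux is conserved for new grids
    spanning the old one.
    """
    new_flux = [0.0] * (len(new_edges) - 1)
    for i in range(len(old_flux)):
        lo, hi = old_edges[i], old_edges[i + 1]
        for j in range(len(new_flux)):
            # Overlap of old bin i with new bin j.
            overlap = min(hi, new_edges[j + 1]) - max(lo, new_edges[j])
            if overlap > 0:
                new_flux[j] += old_flux[i] * overlap / (hi - lo)
    return new_flux

old = [1.0, 2.0, 4.0, 1.0]                  # integrated fluxes in 4 bins
coarse = rebin_preserving_flux([0, 1, 2, 3, 4], old, [0, 2, 4])
print(coarse, sum(coarse), sum(old))        # total flux unchanged
```

Note this is exactly the property the pysynphot approach is praised for below: a plain interpolation of flux densities would not, in general, conserve the summed flux.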
For those bleeding-edge python users out there, pysynphot is becoming an astropy affiliated package. You can download this new version from the pysynphot GitHub repository and test it out.
{ 10 comments… read them below or add one }
I don’t get why I would want to do it this complicated, not using some simple interpolation + scaling just using numpy packages? In the above example I would have no idea in the end what comes out….
If I know what units my input spectrum is both resampling and redshifting are straightforward mathematical expressions which can be entered as “pure” Numpy Python code.
Sorry for the self publicity, but for an example of what Christian Herenz said, check out
the program is given as stand alone but it’s python code which you can dig into (comments included).
Points in a spectrum are integrated fluxes over small wavelength bins, rather than flux densities. Pysynphot nicely preserves the integrated flux, while not all standard interpolation schemes do this.
Hi. I am the current developer of the “pysynphot GitHub repository” () mentioned in your blog post. It is currently still in development and not ready for general usage. However, once it is ready, it will utilize the powerful modeling package in astropy (), which will greatly expand the functionality of the official release version. Until that happens, please use the pysynphot officially distributed by STScI at . Thank you.
Hi Pey,
is there any way to use or even just test pysynphot at this point? Building the git source fails immediately on attempts to
from jwst_lib import modeling
which I cannot find anywhere in stsci_python either (or any “modeling” module whatsoever). It looks like it is intended to be replaced by astropy.modeling eventually, but until that works, what jwst_lib is supposed to be used here? Thanks, Derek
This works well, but how would you resample an associated uncertainty array? You would want to add the uncertainties in each bin in quadrature but how could you include that in this snippet using pysynphot? Thanks!
Hi, Joe. Currently, pysynphot does not handle uncertainties. It might in the future, resource permitting.
Hello
Thank you for your nice work !
I would like to decrease the resolution of a template-model spectrum down to the resolution of my observed spectra. Could you explain me the procedure through pysynphot or any alternative ?
Thank you in advance !
Hi Mike,
the code I mentioned above () can do this. It simply creates a new spectrum by averaging the signal every N pixels (where you supply N via the “smooth” parameter). This should be enough for most purposes, but I suspect you can get a more optimal smoothing with scipy.ndimage.filters.gaussian_filter and the likes.
Hi, Mike. If you still wish to use pysynphot to resample your spectrum, please email all the details to help[at]stsci.edu and someone will get back to you shortly. Thank you for your interest. | http://www.astrobetter.com/python-tip-re-sampling-spectra-with-pysynphot/ | CC-MAIN-2015-14 | refinedweb | 577 | 65.22 |
NAME
::clig::String - declare an option with parameters of type string
SYNOPSIS
package require clig namespace import ::clig::* setSpec db String -opt varname usage [-c min max] {[-d default ...] | [-m]}
DESCRIPTION
The String command declares -opt to have zero or more string arguments. varname declares that arguments found for -opt are passed to the caller of the parser in a variable (tcl) or slot (C) of that name. usage is a short text describing -opt. It is used in the generated manual or in a usage-message printed by the parser if necessary. -c specifies the minimum and maximum number of parameters -opt may have. If less than min or more than max are found, the parser prints an error message to stderr and terminates the calling process. Use min==max to request exactly so many arguments. As a special value for max, oo is understood as positive infinity. -d stores a (list of) default value(s) for option -opt. The parser pretends these were found on the command line, if option -opt is not given. WARNING: The number of default values is not checked against the minimum and maximum values given for -c. -m declares that -opt is mandatory. Put another way, the ‘‘option’’ is not really optional but must show up on the command line. If it does not, the parser prints an error message to stderr and exits the calling process. This can not be used together with -d. char**, otherwise it has type char*.
SEE ALSO
clig(1), clig_Commandline(7), clig_Description(7), clig_Double(7), clig_Flag(7), clig_Float(7), clig_Int(7), clig_Long(7), clig_Name(7), clig_Rest(7), clig_Usage(7), clig_Version(7), clig_parseCmdline(7) | http://manpages.ubuntu.com/manpages/dapper/man7/clig_String.7.html | CC-MAIN-2013-20 | refinedweb | 276 | 65.32 |
Introduction
11/18/2004
At the 2004 ACL2 workshop, I gave a heartfelt endorsement of the ACL2 package system, describing it as "not actually that terrible." Some deemed my comments incredulous, perhaps having been burned by the package system before. And so, I have been conscripted to write something about how to use packages successfully.
So here goes something. It's mostly about packages, how to define them, how to use them in your own books, and how to certify books that use packages. I also mention briefly how to use local includes and redundant events to cleanly separate your library's interface from its implementation. This technique was shown to me by Eric Smith, so he deserves the credit here.
This guide is not comprehensive. If you want to go into more details, you should see the following documentation topics, among others.
Table of Contents
- Creating Packages with DEFPKG
- Managing DEFPKG Events with Package Files
- Working With Your Package Interactively
- Certifying Books In Your Package
- Including Books Based On Packages
- Example: Working with Multiple Packages at Once
- Should we use Exports Lists?
- Local include-books With Redundant Events
- Conclusions
- PS - A Note on Generative Programming
Creating Packages with DEFPKG
A new package is created by a defpkg event.
- (defpkg name symbols)
- Name is a string and MUST BE IN CAPS.
- Symbols is a list of unique symbols.
You can also include a documentation string, but I won't try to fool you into believing that anyone uses this.
Picking a Name
The name is just a string and should be in caps. If it is not capitalized, bizarre incompatibilities can occur between various Lisp implementations. If I recall correctly, GCL is fine with lowercase package names, but really you should just make them uppercase to avoid these problems.
To access the symbols from package NAME, e.g., foo, you will generally need to write NAME::foo. As a result, you will want to pick a name which is short enough that you don't mind typing it. You can use hyphens in the name if you'd like to have more than a single word, but of course this is a lot of typing that you are committing to.
Generating the List of Symbols
The symbol list defines what symbols will be accessible from within the package. You certainly should be aware of the following constants:
- *acl2-exports*
- A list of 900+ function names, theorem names, macros, and commands such, which are ACL2 specific.
- *common-lisp-symbols-from-main-lisp-package*
- A list of 900+ function names from common lisp, including a lot of the functions which aren't part of ACL2.
You certainly want to include most of these symbols in your package. But, they have some overlap, so you cannot simply append them together. Instead, you should use the built-in function union-eq, which merges the lists without creating duplicates. The combined list has about 1700 symbols.
Of course, you might not want all of them. A lot of those symbols aren't useful to you; they are functions that are defined in Common Lisp but which don't exist in ACL2. And, they have nice names that you'd like to use, such as "pop", "union", etc.
One nice thing about packages is that you can explicitly exclude these names from your package, so that you are allowed to define them. Hence, it's possible to define "pop", "union", and so forth in your own package, even though you can't define them in the default ACL2 package.
To do this, simply identify the symbols you want to be able to use, and then use a set-difference-eq to remove them from your combined *acl2-exports* and *common-lisp-symbols-from-main-lisp-package* list.
Examples
Here is the command to define a very basic package that doesn't exclude any symbols.
(defpkg "VANILLA" (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*))
Here's a package that wants to have its own push and pop symbols.
(defpkg "STACK" (set-difference-eq (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*) '(push pop)))
Managing DEFPKG Events with Package Files
It is going to become important that we can reuse these defpkg events. We will need to load the defpkg event whenever:
- we want to interactively work with definitions in our package.
- we want to certify a book that is part of our package.
- we want to include a book from our package.
As a result, it seems best to put your defpkg event into its own file. I will call this file name.defpkg. Of course, this is just a Lisp file, the bizarre extension is just to remind us that it contains our package event.
The point of creating a whole file just for the defpkg event is that, by doing so, you avoid the need to replicate the defpkg event anywhere. (Instead, you can just ld the package file.) That means that if you ever want to change what symbols make up your package, you just need to change the package declaration in one place.
Working With Your Package Interactively
Suppose you are working on a library which is in the MYPKG package. Your library is not done yet, so you want to go ahead and do some work on it interactively. All you need to do is first issue the following commands:
(ld "mypkg.defpkg") (in-package "MYPKG")
The ld command will load your defpkg event, and the in-package event will put you into your package.
From that point forward, everything should be pretty normal. The only problem that you will really run into is that you might try to use some symbol that isn't in *acl2-exports*. If you suspect this is the case, you might need to type ACL2:: in front of that symbol.
Now, almost all of the usual functions are already in *acl2-exports*, so this probably won't be a problem. And, if you think that the symbol you are using ought to be in *acl2-exports*, it's an easy thing for Matt to change in the next release. And in the meantime, you can always union-eq such a symbol into your defpkg event, and then you won't need to worry about it in the future.
Certifying Books In Your Package
Of course you would also like to certify books based on your package. This is easy if you are using the standard ACL2 Makefile system (which is pretty convenient anyway, so perhaps you'd like to switch to it, if you aren't using it yet). If we are working with the FOOBAR package, we just need a Makefile that looks like this:
include .../acl2-sources/books/Makefile-generic # declaration of which books to certify BOOKS = mybook1 mybook2 # dependencies between books mybook1.cert: foobar.defpkg # require package at low levels! mybook1.cert: mybook1.lisp mybook2.cert : mybook1.cert mybook2.cert : mybook2.lisp
I won't explain this in much detail. I will mention that you want all of your package's books to ultimately depend on the defpkg event, so that if you ever change the package, you will trigger a full recertification.
The second thing you need to do is copy the following into a file called cert.acl2.
(value :q) (lp) (ld "foobar.defpkg")
Now, all you have to do is have your (in-package "FOOBAR") command at the top of each file that should be part of package FOOBAR, and everything should work as when you are not certifying books.
In short, the only difference between working with your book interactively versus certifying it is that you need to run the ld command for your package file if you are going to work with it interactively. After you're ready to certify, just remove that line and you're good to go.
Including Books Based On Packages
Once books have been certified based on your package, you can include them into other ACL2 sessions. It is important to make a distinction between if you are including the book for interactive use, or if you are including the book for the certification of another book.
Inclusion for Interactive Use
If you simply want to include a package-based book while running ACL2 interactively, all you need to do is run the appropriate include-book command. You don't have to say anything about what packages are involved, and you don't need to load its defpkg events. Of course, to use your package's functions, you will need to qualify them with the package name. In other words, you can't just type union, you'll need to type SETS::union, and so forth.
If you find that you are working with those functions a lot, you might choose to work within the included book's package. To do this, you simply execute the form (in-package "SETS") or whatever your package is named, and then you do not have to quality the names any more. (Of course, now you might need to write ACL2:: and so forth.)
I'll make a few more comments about these sorts of issues when I talk about working with multiple packages at once.
Inclusion for Certification of Other Books
Now imagine that you have some package-based book, we'll say sets.lisp which is part of the SETS package. And, imagine that you want to certify some new book, graph.lisp, which is just part of the ACL2 package. Finally, imagine that graph.lisp is dependent upon sets.lisp.
Now, if you simply try to run (certify-book "graph" 0), you will get the following message:
Error: There is no package with the name SETS.
To fix this problem, we simply need to include the defpkg event for sets before we try to certify the book. There are two obvious places for this to go:
- Inside your cert.acl2 file which the makefile is using, or
- Inside a new graph.defpkg file
Now, if you are going to define your graph.lisp functionality to be in its own GRAPHS package, then this latter approach makes a great deal of sense. But, even if you are going to have GRAPHS be part of ACL2, there's no reason you can't still have a graph.defpkg file which simply loads the sets.defpkg file and doesn't actually create a new package for GRAPHS.
Example: Working with Multiple Packages at Once
I'll now give a slightly more complicated example, which is taken directly from my set theory library. The following is the contents of my sets.defpkg file, (which is actually called package.lisp in version 0.9 (the version that you'll find in the workshop supporting materials)).
(defpkg "INSTANCE" (union-eq '() (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*))) (defpkg "COMPUTED-HINTS" (union-eq '(mfc-ancestors mfc-clause string-for-tilde-@-clause-id-phrase INSTANCE::instance-rewrite) (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*))) (defpkg "SETS" (set-difference-equal (union-eq '(lexorder COMPUTED-HINTS::rewriting-goal-lit COMPUTED-HINTS::rewriting-conc-lit) (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*)) '(union delete find map)))
As you can see, the set theory library is separated into three packages. The INSTANCE package constains a small rewriter which is used to create concrete instances of generic theories which are stored as constants. The COMPUTED-HINTS package contains support functions for creating and applying our style of computed hints. The SETS package is the main package which our users are interested in.
The INSTANCE package is "vanilla" in the sense that it isn't going to need to use any of the reserved function names -- it just wants the sort of "default" environment that we get by unioning the acl2 exports and the main list package symbols. The empty union-eq just lets me add new symbols to that, if I ever want to.
The computed hints package is a little more complicated. I want a few functions (mfc-ancestors, mfc-clause, and string-for-tilde-@-clause-id-phrase) which are not in *acl2-exports* because they are bizarre things and probably very few people want them. But, since I'd like to use them in this package without typing ACL2:: in front each time, I choose to import them explicitly. Finally, I also choose to import the instance-rewrite function from the INSTANCE package.
The sets package is a little more complex. Again I'm interested in lexorder, an obscure function which didn't make it to *acl2-exports*. But, I'm also interested in a couple of functions defined in the COMPUTED-HINTS package, so I go ahead and import those as well. Finally, I want my set library to go ahead and define the functions union, delete, find, and map, so I remove these from the list of imported symbols so that I am free to redefine them.
Now, these three packages are tightly related, and so I can go ahead and define all three defpkg events in this same file. But, I could make this better by creating an instance.defpkg file and a computed-hints.defpkg file separately, and then loading them as appropriate. If I were to do so, then anyone who wanted to include the INSTANCE package by itself would be able to do that, without needing to include the COMPUTED-HINTS and SETS packages. To really make this work, the computed-hints.defpkg file would need to (ld "instance.defpkg"), and similarly the sets.defpkg file would need to (ld "instance.defpkg"). As long as these paths are all relative, it seems like this should be a fine way of going about things.
In any event, after this is done, the only thing anyone needs to do is ld the sets.defpkg file whenever they need access to the defpkg events.
Should we use Exports List?
Imagine defining a constant, SETS::*exports*, that contained a list of function, theorem, and variable names that I expected users to encounter in their proofs. This could then be imported in the end-user's defpkg events. For example:
(ld "./xyz/osets/sets.defpkg") (defpkg "FOOBAR" (union-eq SETS::*exports* (union-eq *acl2-exports* *common-lisp-symbols-from-main-lisp-package*)))
Given this, functions such as union, intersection, and so forth could perhaps be used in FOOBAR without having to explicitly say that they are from the sets package each time. This could save some typing, but it is perhaps a bad idea.
One problem with this is that union-eq is actually not what we want. I lied a little bit when I said that the list of symbols in a defpkg has to be unique: actually the list of symbols must not contain any two symbols which have the same symbol-name. As a result, COMMON-LISP::union and SETS::union are unfortunately incompatible in this list, so FOOBAR is probably not even a valid package! Realistically, we would need to either subtract from FOOBAR's symbol list the exact same functions we excluded from SETS (leading to an ugly duplication of code), or some kind of new functions, e.g., pkg-union, that would perform their equality tests simply using symbol name equality. These might be easy to add to the system, but I don't think they're available right now.
Even if we had this, would we want it? Right now, a user can still explicitly say which theorems and symbols they would like to import. In the example above, COMPUTED-HINTS does this for INSTANCE::instance-rewrite, and SETS does this for rewriting-goal-lit and rewriting-conc-lit from COMPUTED-HINTS. Although this might seem burdensome, it is safe! The client has to explicitly say what they want to take. In contrast, if the sets package publishes a list of exports, and I later add some symbol foo to that list, I might break your package which included all my exports but defined its own, different version of foo.
Local include-books With Redundant Events
This final topic isn't entirely related to packages, but relates to the discussion we had about conflicting versions required by different books and so forth.
Local Include Books
Imagine that you have constructed some large library of definitions and theorems about frobs, and suppose that for some of the properties you proved, you decided to use the arithmetic-3, data-structures, and the ordinals books.
Now, you really don't want all of these three books to be exported by your library. You're interested in frobs, not in arithmetic, data structures, and ordinals. Besides, you only needed these libraries to prove your theorems.
Just as you can make your lemmas local to an encapsulate or to a file, you can make your includes local as well. So, instead of just including the books verbatim, you can locally include them so that when other people include your book, those libraries won't even be seen. This lets us use your library without having to add all of the reasoning of arithmetic-3 and so forth.
Now, suppose someone else writes a theory about widgets, and they use the arithmetic-2 library to prove their properties. So long as they also used local includes, neither library exports any arithmetic properties and you will be able to include frobs and widgets in the same project, without worrying about these sorts of clashes.
The story is not quite so clean if I depend upon some definition. For example, if I include a sets library and use their notion of union to write my function, then my function depends on that definition of union. I don't have a good solution to this, but hopefully definitions will not change much from version to version, and will be kept in their own packages to avoid name clashes. This might be an argument for distributing libraries written as a set of definitions in one file, and a set of theorems in another. That way, people could publically include the definitions, but only locally include the theorems about them. This leads us nicely into our final topic.
Redundancy and Creating Nice Interfaces
Redundant definitions can be combined with local includes to produce some very nice effects. This technique is due to Eric Smith.
Imagine again that we are building our frobs library. We have a lot of definitions and a lot of theorems, and they are interleaved in a haphazard way that allows us to admit them all, verify their guards, prove some theorems, and so forth.
Just as you should think of a theorem in terms of its usage as a rewrite rule, we should think of our libraries in terms of the proof strategies they will engender. This haphazard and interspersed arrangement of definitions, lemmas, and theorems has allowed us to prove our main conjectures, but in the end, even the very order that our theorems appear will affect the "external" rewriting strategy to some degree. Unfortunately, this means that if we decide to change this internal order, our users will potentially be affected in some way.
So, what we would like to do is create a fiction for the user to believe. We create an interface file, and begin it by locally including all of our implementation books. We then redundantly define all of the events we would like to export.
This approach has a number of advantages. We hide the messy details of how we went about our proofs, we hide all of the internal definitions and induction schemes and what not which were needed. Because all the theorems have already been proven, we have no problems with changing the order to anything we would like. We have given ourselves the ability to clean up our implementation and otherwise rearrange it without impacting the reasoning strategy that we export. And, the end user can very rapidly include our file, because it has all of their definitions right there. (We're only talking about a few seconds of savings regardless, but this is still nice.)
This strategy creates an "impenetrable wall" of abstraction. This isn't just a disabled theorem that a user has to manually enable later, this is a theorem that the user never sees and never has the chance to enable. You effectively guarantee that nobody who is using your (unmodified) library can possibly be using your internal theorems, so you can feel free to axe or change them as you please.
Conclusions
Defpkg events really aren't too bad to work with. You do need to use them in a variety of contexts, so you probably want to make a file that contains them.
Once you have a defpkg event in its own file, it's quite easy to work interactively within the package (just ld the package file and then execute your in-package form), and to certify books based on the package (using the makefile system). Certified books can be easily loaded into interactive sessions, but it is messy to load them into other books which you intend to certify. This is really the most messy part of the whole deal, and you might need to explain it to your users somehow.
The defpkg event is pretty flexible, and lets you have nicely named functions like push, pop, union, etc, which wouldn't be allowed in the regular ACL2 package. You have a great deal of control over what symbols you choose to import, and you can work with several packages at once that pick and choose which symbols they would like to have.
Export lists are somewhat dangerous in the sense that they can break packages in the future, and they are somewhat painful to maintain. So, it seems easiest and safest to just let the people who are using your package import the symbols they intend to use directly.
Packages are, of course, intended to alleviate the namespace problem. The use of redundant events and local include books can also assist with this, and can keep your books from exporting more than you want. Ideally, libraries should be structured so that users include a public interface which is minimal and complete, while internally the implementation may include whatever it likes and do whatever is necessary to get the theorems proven.
Taken all together, hopefully these techniques will help us improve the modularity and reusability of our libraries.
PS - A Note on Generative Programming
By the way, a lot of us write a lot of macros that write code to do various and wonderful things. As part of that, we often have to generate our own symbols. This can be quite confusing because you now have to think about what package all of your symbols are in. Anyway, I don't know that there are any great solutions to this. I tend to add an argument :in-package-of to my macros and all of the new functions I create are created in that package. But, your needs may certainly vary. In any event, it's a bit harder, but still possible. | http://www.cs.utexas.edu/users/moore/acl2/contrib/managing-acl2-packages.html | CC-MAIN-2015-06 | refinedweb | 3,874 | 61.26 |
Before we analyze any pieces of code, it is important to realize that a pointer does not contain a value the way a variable such as an int or char does. It cannot be emphasized enough that pointers must be viewed as just that, pointers. They simply point to a location in memory which can be modified directly using that pointer.
Let's use an analogy to drive this point home: you have two jars, with a piece of paper in each. On one of the pieces of paper you write the number 7, and place it in the jar. You then take this jar and place it somewhere in your home. Now on the second piece of paper, you write the location of the first jar and place it in the second jar. This second jar acts as a pointer to the location of the first jar, which contains the number seven.
Syntax
First let's look at the syntax of a basic pointer; this one is of type char:
char* myfirstpointer;
Really the only difference from declaring a variable is the * operator. This operator tells the compiler to create a pointer of type char. In this example the * trails the datatype (char); another option is to place the * before the variable name.
char var,*ptr;
Again, first you state the datatype, then the name of the variable. In this case only ptr would be a pointer; var is still a character because it has no leading *.
So what exactly does this pointer POINT to? Nothing. We have simply created a pointer, ready for pointing, ha! So let's make this pointer POINT to something in our memory.
Value vs. Address - Referencing and De-Referencing
int main()
{
    int x = 7;
    int* pointer = &x;
}
First we make a basic variable of type int and assign the value 7 to it. This 7 is now located somewhere in our memory. Again we declare a pointer as described above, this time of type int. This time, though, we not only declare the pointer but we make it point to something at the same time, just as you can declare a variable and give it a value at the same time. The & operator may look new to some of you: it references the variable x. In other words, it gives the address in memory of x. This is important because we only want the ADDRESS of the variable x, not the actual value.
The best way to see the & operator at work is to output the variable, and the same variable referenced:
#include <iostream>
using namespace std;

int main()
{
    int x = 1337;
    cout << x << endl;  //Output the value of x
    cout << &x << endl; //Output the reference, or address, of x
}
On the first line you should see the value of x, 1337. On the second line you will see the address of x in memory, which probably looks something like 021FF7C. Now that you have seen the & operator at work, let's tie this back in with pointers.
Let's try this piece of code:
#include <iostream>
using namespace std;

int main()
{
    int x = 1337;
    int* pointer = &x;
    cout << pointer << endl;
    cout << &x << endl;
    cout << *pointer << endl;
}
Here we assign variable x to the value of 1337, and assign pointer to the address of x. Finally, you can see that, indeed, pointers only contain addresses. The first line of output should be the assigned address, in this case, the address of x. Just to prove this, the second line outputs the actual address of x. The first two lines should read the same.
Now the third output is the VALUE at the address stored to the pointer. This, of course, is 1337. But I thought pointers only contain addresses and not values. Indeed they do, but this address points to a value in memory. This value can be viewed by DE-referencing the pointers stored address. Just as we can reference values stored to addresses by using the & operator, we can also DE-reference the addresses to values. This de-referencing is done by the * operator. The * operator in front of pointer states that we want to see the value stored to the address pointer holds.
Modifying variables using pointers
The value of x can also be modified through the pointer:
#include <iostream>
using namespace std;

int main()
{
    int x = 1337;
    int* ptr = &x;
    cout << x << endl;
    *ptr = 54;
    cout << x << endl;
}
x is assigned to 1337, and we then assign the address of x to ptr. The first output shows that the value of x is 1337, but then we change the value of x by using ptr.
*ptr = 54;
The value of the address stored to ptr = 54
Again, it is very important to remember that ptr does not contain the value of x, only the address, and that by de-referencing the address we can set the value of x through ptr. This could be said as a sentence to better understand it.
The value at(*) this address(ptr) = 54;
Finally the second output will display that 54 is now the value of x.
Arithmetic operators and pointers - Accessing arrays with pointers
Arithmetic operators can be used on pointers to change the address the pointer points to. This can best be shown by making the pointer point to an array, this time of characters.
char str[11] = "Dark_Nexus";
char* ptr = str;
Because str is an array, just using the variable name supplies the address of the array, so there is no need to reference it with &. The following line would achieve the same goal.
char* ptr = &str[0];
Because str[0] actually contains a value we have to reference it with &. Now ptr, either way, points to the head of the array str. Just as you can move through an array with the [] brackets, you can also move through a pointer in the same way. 0 indicates the head of the array; the last character sits at index 9, and index 10 holds the string's terminating null.
cout << ptr[0] << endl;
cout << ptr[9] << endl;
These two outputs would display the first and last character of the character array str, because ptr points to the memory used by str. You may be asking: why don't you have to de-reference ptr[0] or ptr[9]? In this case you do not have to, because the brackets [] de-reference the address for you. The values of the character array can also be modified in this manner.
ptr[0] = ptr[9];
Because we already know 0 is the head of the array and 9 is the last character, we can assign the value at position 9 to position 0 in the string. No de-referencing is required as, again, the brackets [] do it for you. If you were to output str now, it would read "sark_Nexus"
Now that we have seen how to use a pointer with an array, we can talk about arithmetic operators and pointers.
cout << ptr + 1 << endl;
ptr still points to str, and in this case str has returned to "Dark_Nexus". ptr + 1 indicates a shift (in this case an increment) from the address ptr is pointing to. When arrays are declared, all the memory is allocated at the same time, so each array element sits right next to the other, running from 0 to 10 for this string. ptr only contains the address of the head of the array, so using ptr + 1 moves to the next address (ptr + 2 would mean move two addresses forward). The output would be ark_Nexus. However, ptr still points to the head of the array because we have only added 1 to the address of ptr for the duration of the statement. Once the statement is over, ptr will still point to the head of the array. Now let's say we only want ptr to point to the Nexus portion of the string. This can be achieved by using the += operator.
ptr += 5;
To better understand this operation you could think of the statement as ptr = ptr + 5. "Dark_" is 5 characters long, so we simply have to move 5 characters (or addresses) forward. ptr now points to array slot 5, or N. If you were to output ptr now, it would read Nexus. You guessed it, you can use lots of other arithmetic operators too, here's a list summarizing their effects:
++ (move address forward by one, ptr = ptr + 1)
-- (move address backward by one, ptr = ptr -1)
-= a (move address backward by a, ptr = ptr - a)
These are the basic ways to modify the location in memory a pointer points to, all other arithmetic operators apply. If you were using more than one pointer to point to the same piece of memory, modifying the address one pointer points too will not affect the other pointer's address. However, if the value of the memory was modified, the pointers would reflect that change.
You guessed it again, you can make pointers to objects too!! (along with any sort of datatype). Again, I must stress that a pointer to an object STILL simply points to the address where the object is held in memory. When accessing a member of an object you simply use -> instead of .
struct mystruct { int var1; int var2; } myobject; mystruct* ptr = &myobject; ptr->var1 = 7; ptr->var2 = 493; cout << myobject.var1 << endl << myobject.var2 << endl; cout << ptr->var1 << endl << ptr->var2;
We make a structure called mystruct and then make an object of type mystruct. Finally we make a pointer to the object of type mystruct and assign ptr to the address of myobject using the & operator.
mystruct* ptr = &myobject;
var1 is set to 7 and var2 is set to 493 by using ptr, however you may have noticed that I never de-refrenced ptr. That's because the -> de-references it for us, just as the []'s do for arrays. Lastly, var1 and var2 are outputted by using both the object and the pointer to prove that we have indeed modified the variable.
You would access a class' members the same way as it is an object also.
Managing heap memory - new and delete keywords
In the previous example, the memory our pointers pointed to were part of the stack. This stack is allocated pre-compile, and is also deleted after the program run.
The difference between the stack and the heap is that the stack is allocated before the program executes, where the heap contains memory allocated DURING the program run.
int x; //Allocates 4 bytes in the stack char c; //Allocates 1 byte in the stack
The new keyword is used to allocate memory during the program run. Syntax looks like this:
new char;
new signals to allocate new memory in the heap, char indicates what kind and how much. In this case we have made room for a single character. You could just as easily make room for 20 integers.
new int[20];
Again, new signals to allocate memory in the heap, int indicates of what kind, and [20] defines how much. Remember that all the memory for arrays is allocated at the same time, so they will be placed together in memory. You definitely do not want to just create memory in the heap without being able to manage it though, so how do you manage your newly allocated memory? After the program ends, unlike the stack, the memory in the heap remains, this leads to memory leaks and other nasty things. So never allocate memory in the heap without having a pointer point to it. This can be done when you declare the pointer or later in the program when you wanted to assign another pointer to the same memory, the following illustrates both methods:
int* ptr1 = new int[10];
Declare ptr1, allocate memory in the heap, make ptr1 point to the newly allocated memory.
int* ptr2; ptr2 = ptr1;
Assuming ptr1 still points to the memory allocated in the previous example, you can now assign ptr2 to ptr1, however they are independent of each other, they just both point to the same memory.
I said before that memory placed in the heap is not automatically deleted when the program ends so you have to manually delete memory using the delete keyword.
delete []ptr2;
The delete keyword deletes all memory tied to the pointer. The [] means that you want to delete all of the memory not just the first address (or whatever address) ptr2 is pointing too. If ptr2 did not point to an array, and just a single int, you would not have to use the []'s.
int myint = new int; delete myint; //No brackets for a single peice of data
We don't have to delete pt1 because they both point to the same memory, however when your pointers are idle or don't point to anything, it is a good idea to set them to NULL so that they don't point to random addresses in memory.
ptr1 = NULL; ptr2 = NULL;
memcpy(...) - Application of pointers
Whew, now that we have gotten past the enigma of pointers, let's take a look at an application of them. At the same time we will introduce the memcpy() function
The following function searches through a string for a sequence of characters, once it is found, the position of that sequence in the string is returned as an integer, or -1 if the characters were not found.
Let's say the string we are looking for the character sequence in is "Dark_Nexus" and we are looking for "ex".
int FirstInstanceOf(const char* str,const char* control) { int pos = 0; int len = STRLEN(control); char* buffer = new char[len + 1]; do { memcpy(buffer,str + pos,len); buffer[len] = '\0'; if (CMPSTR(buffer,control)) { delete []buffer; return pos; } } while (str[(len + (++pos)) - 1]); delete []buffer; return -1; }
Let's look at this line by line...
In this function, the keyword const appears in the parameters. This means I won't be able to change the memory that the pointers point too as we are taking them as a constants. This is just safe programming because this function only needs to look, not touch.
char* str is the string which we want to look through
char* control is the string we are looking for within str, or the control
Next, int pos is set to 0. pos will keep track of where we are at in the string, the head of the string being 0. int len is used to store the length of control, we will need to know how many characters are in the control so that we only compare that many characters at once.
The next line allocates memory in the heap of type char. It will make room for len + 1 characters, or the length of our control string + 1. The + 1 is there so we can append a null-terminating character to the end of the string: \0.
Before we look at what the do loop does, let's have a look at that while statement. Remember you can access the value of a pointer with []'s, which is what we do here. str[(len + (++pos)) - 1] . if len = 2 (because control was "ex") and pos starts at 0, it will make sure there is a value (not \0) at str[1] for the first loop. This looks ahead to make sure we don't start checking the wrong memory.
In the loop, the first line uses memcpy(...). memcpy(), or memory copy, takes 3 parameters: destination, source, and how many characters to copy.
memcpy(buffer,str + pos,len);
The destination is buffer, and the source is str + pos, for the first loop pos is 0 the source for the copy will start at str + 0, or the head of the string (for the next loop iteration pos would be 1, so the source for the copy of that iteration would start at str + 1, or "arK_Nexus"). Finally, the count parameter is set to len, in this case len is 2 so it will only copy 2 characters fromstr + pos to buffer.
Example:
-First iteration of the loop
pos = 0;
str + pos = Dark_Nexus;
After memcpy() the memory pointed to by buffer would be "Da"
-Second iteration of the loop
pos = 1 (because of ++p at the end)
str + pos = arK_Nexus
After the memcpy() the memory pointed to by buffer would be "ar"
The next line of code appends a null-terminating character to buffer at position len. In this example len is 2: buffer[2] = '\0'. Remember that when we allocated memory for buffer, it was len + 1, or 3
The next line simply compares buffer against control, if they are the same, then we delete the memory tied to buffer, because we no longer need to use it.
delete []buffer;
And finally return pos, or the position of control in str, starting at 0.
Dark_Nexus
0123456789
So control (ex) is found at position 6, which is what the function returns.
This post has been edited by JackOfAllTrades: 06 July 2011 - 05:22 PM
Reason for edit:: Changed void main() to standard int main() | http://www.dreamincode.net/forums/topic/149780-pointers/ | CC-MAIN-2016-50 | refinedweb | 2,902 | 67.49 |
My favorite coding interview question
Every software engineering interview I have ever participated in has involved a coding exercise. For one position, I would expect three two five separate coding tests. I’ve written previously about why every company asks these questions, and the best way to handle these as a candidate. But what makes a good coding question?
There is very little data out there about effective interviewing. What data does exists seems to suggest interviews are only good for filtering out candidates that do not meet the minimum bar. According to one popular Google internal study, there is no correlation between interview results and on the job performance.
With this in mind, the coding question I ask is simple. I have found that you can hardly ask a question that’s too simple; a substantial number of candidates will have no problem hanging themselves with the smallest amount of rope that you can give them.
The Question
Write a function that takes two parameters, a string and an integer. The function will return another string that is similar to the input string, but with certain characters removed. It’s going to remove characters from consecutive runs of the same character, where the length of the run is greater than the input parameter.
Ex: "aaab", 2 => "aab" Ex: "aabb", 1 => "ab" Ex: "aabbaa", 1 => "aba"
Note: I’m evaluating your answer on the simplicity of your code. The goal is for it to be readable; someone new should be able to walk into this room afterwards and instantly understand what your function is doing.
The Evaluation
I explicitly state the part about what I’m looking for with the candidate. I tell them to use the language they are most comfortable with. Here is what I consider an ideal solution, in Python:
def remove_extra_consecutive(input_str, max_consecutive_chars): output, prev_char, current_char_seen = '', None, 0 for current_char in input_str: if current_char == prev_char: current_char_seen += 1 else: current_char_seen = 0 prev_char = current_char if current_char_seen < max_consecutive_chars: output += current_char return output
While the candidate is writing code, I focus on the following.
- Thought process. How do they get to a solution? Did they talk about their strategy?
- It needs to work. If it doesn’t, I will keep pushing them until it does.
- Are the function and the variables well named?
- Is the code as simple as possible, but not more simple?
- Is the candidate struggling to remember standard library functions? Help them out.
Other Solutions
The most common variant answer I see is conflating the clauses that count the character and append to the output. Typically this results in a bug where the last character in a run is omitted.
def remove_extra_consecutive(input_str, max_consecutive_chars): output, prev_char, current_char_seen = '', None, 1 for current_char in input_str: if current_char == prev_char and current_char_seen < max_consecutive_chars: current_char_seen += 1 output += current_char else: if current_char != prev_char: current_char_seen = 1 output += current_char prev_char = current_char return output
Another variant is using indexes instead of a named variable for the previous character. This can often lead to an index out of bounds bug. It’s also common to forget to add the last character.
def remove_extra_consecutive(input_str, max_consecutive_chars): output, current_char_seen = '', 0 for i in range(len(input_str) - 1): if input_str[i] == input_str[i+1]: current_char_seen += 1 else: current_char_seen = 0 if current_char_seen < max_consecutive_chars: output += input_str[i] if current_char_seen <= max_consecutive_chars: output += input_str[i+1] return output
Finally some candidates try to alter the input string itself, or sometimes loop indexes, which can lead of off by one errors.
def remove_extra_consecutive(str, max_consecutive_chars): for i in range(len(str)): j = i + 1 while j < len(str): if str[i] != str[j]: break if j - i >= max_consecutive_chars: str = str[0:j] + str[j+1:] j += 1 return str
Summary
I like this problem because it has one simple and robust solution, and a number of more complicated and brittle solutions. If a candidate gets through it quickly and correctly, I follow up by asking them about which edge cases they would want to create unit tests for. If it’s in a dynamic language, I ask about how to make the function operate on any iterable. | https://chase-seibert.github.io/blog/2014/10/10/my-favorite-interview-question.html | CC-MAIN-2022-27 | refinedweb | 682 | 53.1 |
Android: loading images from a remote sever, SD card and from the Resources folder
Posted by Dimitri | Jan 26th, 2011 | Filed under Programming
As stated on the title, this post will explain how to load a image from 3 different sources: the SD card, from a remote server and from the Resources folder. Since the methods and code used to load these images from these sources are different from one another, this post is going to be divided into three different parts, one for each location.
Select one of the following links to go to a specific part of the post:
Let’s start by explaining the easiest one: how to load a image from the Resources folder.
The Resources (or the ‘res’) folder is the most simple way to load an image onto the screen. To load an image from the Resources folder, it must be placed inside the drawable subfolder, like this:
Drag and drop the image here.
After the image has been placed into this folder, we use a dedicated Android class that was designed to handle images, called the Bitmap class. The next step is to create an object of that class, and tell the app where the image is being loaded from, in this case it will be the ‘res/drawable’ folder. For that, we need to create a handle to this folder and use a dedicated class that loads images, which is the BitmapFactory class. We will use its methods to open and copy the image information into the instantiated Bitmap object.
Here’s the Canvas class code:
public class CustomView extends View { //create this object to have access to the application's resources private Resources res; //creates an empty bitmap private Bitmap resBMP; public CustomView(Context context) { super(context); //creates a context handler, to link the resource folder to the application context res = context.getResources(); //loads the bitmap that is inside the res/drawable/folder resBMP = BitmapFactory.decodeResource(res, R.drawable.smiley); } //Override this method to render elements on the screen using the Canvas @Override protected void onDraw(Canvas canvas) { //render the resBMP bitmap positioned at 88px horizontally and 35px vertically. canvas.drawBitmap(resBMP, 88, 35, null); //the parent onDraw() method. super.onDraw(canvas); } }
The Activity class will be omitted, because it is going to be pretty standard. As any common Activity, it will just set the CustomView as the current context.
For this example, the image is going to be placed at the root of the SD card. If you are using the Eclipse IDE, it is possible to place the image inside an emulated device in a very simple manner: after the device is loaded and finishes its boot sequence, change to the DDMS perspective. Then, simply drag and drop the image file into the sdcard folder:
Drag and drop the image here.
The code will use practically the same logic as loading a image from the Resources folder explained above, except this time we are not going to need a handle to the application’s resources. Another difference is that we need to check if the file exists before trying to load it and point out the path to where the image file is located. Also, the method from the BitmapFactory class that will be used to decode the file into a Bitmap object is going to be different. So, it makes sense to place this code inside a method at the View class, like this:
public class CustomView extends View { //creates an empty Bitmap that will be used to display the image from the SD card public Bitmap sdcardBMP; //as we inherit the View class, it is necessary to add this constructor public CustomView(Context context) { super(context); sdcardBMP = LoadBMPsdcard("/sdcard/cardimage.gif"); } //this method loads the bitmap from the SD card private Bitmap LoadBMPsdcard(String path) { //creates a 'File' object, to check if the image exists (Thanks to Jbeerdev for the tip) File imageFile = new File(path); //if the file exists if(imageFile.exists()) { //load the bitmap from the given path return BitmapFactory.decodeFile(path); } else { //return the standard 'Image not found' bitmap placed on the res folder return BitmapFactory.decodeResource(getResources(), R.drawable.imagenotfound); } } //Override this method to render elements on the screen using the Canvas @Override protected void onDraw(Canvas canvas) { //render the SD card image canvas.drawBitmap(sdcardBMP, 88, 275, null); //the parent onDraw() method. super.onDraw(canvas); } }
Again, the Activity class will be omitted, because it is going to be pretty standard. As any common Activity, it will just set the CustomView as the current context.
This is surely the most complex way to load an image into the Canvas. The code to do it is more complicated then the above examples, because it isn’t possible to simply load the image on the same thread that the application is being executed from. This is due to the fact that Android applications must respond in 5 seconds, otherwise, they will throw an exception and will be finished by the OS. Since we don’t know the time it will take to download a image, or even if it is available, we can’t simply place the remote image loading code on the application’s thread.
The solution for this problem is to call the method that will get the image from the server from another thread, which leads to the second problem: only the main application thread can update its Canvas. We can avoid this one by instantiating a object from the Handler class that was designed to allow the updating the Canvas from another thread.
There is one last problem to solve: the canvas can’t be recycled (or refreshed, if you prefer), at least not without having permission to control the Canvas’ surface holder. As this is way out of the scope of this post, and for the sake of the following example’s simplicity, the image will be downloaded from the server before the Canvas is drawn onto the screen.
Let’s not forget to add the following line to the Manifest file of the app, so it can access the Internet:
<uses-permission android:</uses-permission>
Add this line right after the one that has the android:versionName on it.
The following code will execute the required steps to load a image from a remote location, which are:
- Instantiate a Bitmap object to store and display the image.
- Create a method to download the image from a remote location.
- Create and start a separated thread.
- Call the method that will download the image inside the recently created thread.
This way, the Activity class should look like this:
public class CanvasExample extends Activity implements Runnable { //set the custom view private CustomView view; //the handler object is used to update the UI thread from another one private Handler handler = new Handler(); //the bitmap to be downloaded public Bitmap downloadedBitmap; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //instantiate a view object view = new CustomView(this); //starts a new thread that will execute the code from the run() method new Thread(this).start(); //set this view as the content of this Activity setContentView(view); } //This method is responsible for downloading the image from a remote location private Bitmap DownloadBMP(String url) throws IOException { //create a URL object using the passed string URL location = new URL(url); //create a InputStream object, to read data from a remote location InputStream is = location.openStream(); //use the BitmapFactory to decode this downloaded stream into a bitmap Bitmap returnedBMP = BitmapFactory.decodeStream(is); //close the InputStream is.close(); //returns the downloaded bitmap return returnedBMP; } //this method must be overridden, as we are implementing the runnable interface @Override public void run() { //update the canvas from this thread handler.post(new Runnable() { @Override public void run() { //try to download the image from a remote location try { downloadedBitmap = DownloadBMP(""); } //if the image couldn't be downloaded, return the standard 'Image not found' bitmap catch (IOException e) { downloadedBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.imagenotfound); } } }); } }
This is how it works: the DownloadBMP() method downloads a image using string passed as a parameter. Both URL and InputStream classes throw an IO Exception, that is why there is the throws statement at the DownloadBMP method declaration.
The first run() method is the code that will run at the thread that was created and started on line 21. The handler object has its post method called so it is possible to update the Canvas from another thread. This method needs a Runnable object as a parameter, which in turn requires another run() method to be implemented.
At this inner run() method implementation, we have a try/catch block that is used to catch the IOException (if necessary) throw by the DownloadBMP() method. If the image exists and it has been downloaded, it is stored into the downloadedBitmap variable. Just in case the exception is thrown, the downloadedBitmap is set as the standard ‘image not found’ bitmap located inside the Resources folder.
That’s it for the Activity. As for the View, it is going to be the same as the above examples, except there is no need to instantiate a Bitmap variable – just call it directly from the Activity class, into the Canvas’ onDraw() method. like this:
//render the downloaded bitmap from the CanvasExample class canvas.drawBitmap( ((CanvasExample)getContext()).downloadedBitmap, 88, 155, null);
Since the downloadedBitmap is a public member of the CanvasExample Activity it is possible to call it directly by getting its context. If the above line appears to be a little strange to you please read the post: Access Activity class from View. This post explains thoroughly how to access Activity public members from its View.
And that’s it. Here’s the source code that features an Activity with a Custom View that render images from all the 3 different locations:
UPDATE 1: To learn how to load an image from a remote location after the View has been created and rendered, check out this post: Android: Creating a button to load images from a remote server.
UPDATE 2: ImageShack changed URL for the image mentioned on this post. Now, it can accessed by either the two following URLs:
- or
-
Hi there, I am trying to display pictures that are stored on the sd card using a gridview. Any suggestions?
Find the Grid2 class at Android 2.1 SDK examples. Change the mThumbIds int array to an Uri array (line 82). Then, add the URI paths of your images from the SD card as the elements of the array, like this:
private Uri[] mThumbIds ={Uri.parse("/sdcard/yourimage.png"),Uri.parse("/sdcard/anotherimage.png")};
Finally, change line 75 from:
imageView.setImageResource(mThumbIds[position]);
to
imageView.setImageURI(mThumbIds[position]);
If you don’t want to use URIs:
Again, find the Grid2 class at Android 2.1 SDK examples. This time, change the mThumbIds int array to a String array (line 82). Then, add the paths of your images from the SD card as the elements of the array, like this:
private String[] mThumbIds ={"/sdcard/yourimage.png","/sdcard/anotherimage.png"};
Finally, change line 75 from:
imageView.setImageResource(mThumbIds[position]);
to
imageView.setImageBitmap(BitmapFactory.decodeFile(mThumbIds[position]));
Really very helpful example.and has been explained very well.
Thanks!
very nice tutorial. Little hard to understand, but I have to try it.
this is not working ??????
can we add something in XML also…. | http://www.41post.com/2744/programming/android-loading-images-from-a-remote-sever-sd-card-and-from-the-resources-folder | CC-MAIN-2020-10 | refinedweb | 1,904 | 50.26 |
I am new to this, so please excuse if I am not getting something obvious here.
Here is the project on Github, if you want to give it a try:
Anyhow, I have a simple Angular 2 service that calls YouTube Api, it looks like this:
Then I have an app.component.ts that looks like this:
And HTML file:
My application doesn't work though, it shows as loading with following error:
Uncaught Error: Can't resolve all parameters for FormControl: (?, ?, ?). metadata_resolver.js:499
<input [FormControl]="search">
should be
<input [formControl]="search"> ^
This needs to be the selector or input name of a directive or component, not the class name.
FormControl should be removed from
imports: [...] in
@NgModule(). Only modules should be listed in
imports of
@NgModule() not individual classes.
There is no need to list
ReactiveFormsModule in
providers. Adding it to
import is enough. | https://codedump.io/share/LM3ue2zOx5iG/1/angular-2-http-service-youtube-api | CC-MAIN-2017-17 | refinedweb | 146 | 62.38 |
I don't think I'm cross threading!
I have a Windows form that starts a new thread to do some work with sockets and asynchronous data. I have events in a static class which are used by the second thread to report what happens to the form. These events are handled by the Windows form, and the windows form has methods which are wired to said events which access a status bar that is on the windows form. The problem is, when one of these events from the 2nd thread is handled, I get an InvalidOperationException on a line that only changes text in the form's status bar saying that a thread other than the one that created the status bar is trying to access it.
Before I post a bunch of code for you to wade through, I'll explain in simpler code what I'm doing just to make sure I've got the right idea.
The static class:
public static AsyncObject
{
public delegate void StartedHandler();
public static event StartedHandler Started;
public static void Start()
{
OnStarted();
}
protected void OnStarted()
{ if(Started != null) Started(); }
}
The windows form:
public class UserInterface : System.Windows.Forms.Form
{
Thread workerThread = new Thread(AsyncObject.Start);
StatusBar statusBar = new StatusBar();
public UserInterface()
{
AsyncObject.Started += new StartedHandler(AsyncObject_Started);
workerThread.Start();
}
private void AsyncObject_Started()
{
statusBar.Text = "The worker thread has started!";
}
}
Keep in mind that this is an oversimplification of what I'm doing. I'm sure there are better ways of doing what I did above, but I'm pretty sure that this is what I need to be able to do.
From what I understand of threading, the original thread would be the one calling the "AsyncObject_Started()" method, but that is where I am getting my error. Thanks in advance for any help, and I can post my *actual* code if this looks right.
Message Board
Articles
Submit Article
Add Blog To Blog Directory
.NET Developer Registry
Software Downloads
Feedback
Win a free License of CryptoLicensing! | http://www.eggheadcafe.com/community/aspnet/2/71725/invalidoperationexception.aspx | crawl-002 | refinedweb | 333 | 63.19 |
SetGeometry: Unable to set geometry
I get the warning message: "setGeometry: Unable to set geometry 75x23+640+280 on QWidgetWindow/'QPushButtonClassWindow'. Resulting geometry: 124x23+640+280 (frame: 8, 31, 8, 8, custom margin: 0, 0, 0, 0, minimum size: 0x0, maximum size: 16777215x16777215)."
This happens with pretty much every widget. The program itself has no problem running. I'm running some tutorials from a book (written in 2007). The code is:
@#include <QApplication>
#include <QPushButton>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QPushButton button( "quit" ); button.show(); QObject::connect( &button, SIGNAL(clicked()), &a, SLOT(quit())); return a.exec();
}@
What is the cause and how do I get rid of this message?
- ankursaxena
In the code , how r you using setGeometry() ??
That's the thing. I'm not using it. The code sample I provided is the whole program.
[quote author="ankursaxena" date="1393408724"]In the code , how r you using setGeometry() ??[/quote]
Same problem here, even without line 11.
Up please ? :/
Information please. What Qt version? What platform? What compiler?
I am using Qt 5.1.2 32bit (with Qt Creator 3.0.1) on Windows 7 64bit. I don't really know what compiler it is, it came with the MinGW installed with Qt. I used the online installer downloaded here :
So I guess it must be g++.
I was also using the MinGW compiler installed with Qt 5.1.2 64-bit..
I'm sure that book from 2007 is a good one, but it's 7 years old, and things for sure have changed in Qt.
If you want it to run in Qt5 without it complaining about geometry, I'm guessing it nowadays needs a hint/default value for that. Maybe try by inserting one geometry call of your own. e.g.:
@ #include <QApplication>
#include <QPushButton>
int main(int argc, char *argv[]) { QApplication a(argc, argv); QPushButton button( "quit" ); button.setGeometry(200,200,200,200); button.show(); QObject::connect( &button, SIGNAL(clicked()), &a, SLOT(quit())); return a.exec(); }@
then it should run without messages :-)
Gotta try that, didn't even think about it. Thanks, I will give feedback :)
- code_fodder
This example is a very old looking way to do it, though it will still work. The problem is the defualt values are going to be "junk" since there is no parent widget to place your button on. I think hskoglund suggestion will certainly fix your issue, but you will make a "floating" button attached to nothing :o
You are probably better off starting with a Qt Widgets application example which you can create in Qt creator:
File --> New Project --> Application --> Qt Widgets application
Then you have a similar startup program, but this time you have a MainWindow on which to put your buttons.
So I tried hskoglund's way, and it worked well, no warning this time :)
I'll try code_fodder's way when I'll start another project with QtGUI.
Thanks to all !
- code_fodder
Well done, remember to mark the post as solved by changing the title to
"[SOLVED] ....title name...." | https://forum.qt.io/topic/38304/setgeometry-unable-to-set-geometry | CC-MAIN-2018-17 | refinedweb | 509 | 65.73 |
t_rcvudata(3nsl) [bsd man page]
t_rcvudata(3NSL) Networking Services Library Functions t_rcvudata(3NSL) NAME
t_rcvudata - receive a data unit SYNOPSIS
#include <xti.h> int t_rcvudata(int fd, struct t_unitdata *unitdata, int *flags); by means of t_open(3NSL) or fcntl(2),. Subse- quent. RETURN VALUES
Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and t_errno is set to indicate an error. VALID STATES
T_IDLE. ERRORSOUTSTATE A t_errno value that this routine can return under different circumstances than its XTI counterpart is TBUFOVFLW. It can be returned even when the maxlen field of the corresponding buffer has been set to zero._open(3NSL), t_rcvuderr(3NSL), t_sndudata(3NSL), attributes(5) SunOS 5.10 7 May 1998 t_rcvudata(3NSL) | https://www.unix.com/man-page/bsd/3NSL/t_rcvudata/ | CC-MAIN-2022-40 | refinedweb | 123 | 50.73 |
NAME
Flush CPU data and/or instruction caches.
SYNOPSIS
#include <zircon/syscalls.h> zx_status_t zx_cache_flush(const void* addr, size_t size, uint32_t options);
DESCRIPTION
zx_cache_flush() flushes CPU caches covering memory in the given
virtual address range. If that range of memory is not readable, then
the thread may fault as it would for a data read.
options is a bitwise OR of:
ZX_CACHE_FLUSH_DATA
Clean (write back) data caches, so previous writes on this CPU are visible in main memory.
ZX_CACHE_FLUSH_INVALIDATE (valid only when combined with ZX_CACHE_FLUSH_DATA)
Clean (write back) data caches and then invalidate data caches, so previous writes on this CPU are visible in main memory and future reads on this CPU see external changes to main memory.
ZX_CACHE_FLUSH_INSN
Synchronize instruction caches with data caches, so previous writes on this CPU are visible to instruction fetches. If this is combined with ZX_CACHE_FLUSH_DATA, then previous writes will be visible to main memory as well as to instruction fetches.
At least one of ZX_CACHE_FLUSH_DATA and ZX_CACHE_FLUSH_INSN must be included in options.
RIGHTS
TODO(fxbug.dev/32253)
RETURN VALUE
zx_cache_flush() returns ZX_OK on success, or an error code on failure.
ERRORS
ZX_ERR_INVALID_ARGS options is invalid.
How do you check that a file exists in another folder?

I know you will have doubts about the title of this topic, so let me explain it further. Start with this scenario: you have a folder A, and inside it three more folders named B, C and D. Folders B and D are empty, but folder C contains a .txt file. Now I want to find out which of the folders B, C or D contains the .txt file. To make my question simpler: how do I check which subfolder of folder A contains the .txt file?
```cpp
#include "checkforfiles_form.h"
#include "ui_checkforfiles_form.h"

#include <QFileInfo>
#include <qdebug.h>

using namespace std;

CheckForFiles_Form::CheckForFiles_Form(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::CheckForFiles_Form)
{
    ui->setupUi(this);
}

CheckForFiles_Form::~CheckForFiles_Form()
{
    delete ui;
}

void CheckForFiles_Form::on_pushButton_clicked()
{
    QFileInfo fileInfo;
    fileInfo.setFile("C:\\Users\\USER\\Desktop\\folderA");
    qDebug() << fileInfo.exists();
}
```
The code above only tells us that folder A really exists at that location. But this is not what I want: I want to search through folder A, go inside it, and check which of its subfolders contains a .txt file. Is it B, C or D? Could someone give me some tips, a reference or a live example to proceed with my task? Thank you.
```cpp
QStringList QDir::entryList(QDir::Filters filters = NoFilter, QDir::SortFlags sort = NoSort) const
```
may be what you are looking for.
For example (if i understood your scenario right)
```cpp
QDir folderA("C:\\Users\\USER\\Desktop\\folderA");
QStringList foldersinA = folderA.entryList(QDir::Dirs | QDir::NoDotAndDotDot);
foreach (QString folder, foldersinA) {
    // entryList() returns bare names, so build the full path relative to folderA
    QDir f(folderA.filePath(folder));
    QStringList txtlist = f.entryList(QStringList() << "*.txt", QDir::Files);
    // do something with the filenames in txtlist
}
```
- SGaist Lifetime Qt Champion
Hi,
Since you want to check several folder, you can use QDirIterator in addition to what @the_ suggested.
On a side note, since you are using Qt, you should use the Unix notation for paths (i.e.
"C:/Users/USER/Desktop/"); that will avoid headaches if you forget to escape your backslashes.
Dear all, this is the solution for this specific problem. Thank you.
```cpp
void CheckForFiles_Form::recursiveAddDir(QDir d, bool recursive)
{
    d.setSorting(QDir::Name);
    QDir::Filters df = QDir::Files | QDir::NoDotAndDotDot;
    if (recursive)
        df |= QDir::Dirs;
    QStringList qsl = d.entryList(df, QDir::Name | QDir::DirsFirst);
    foreach (const QString &entry, qsl) {
        QFileInfo fInfo(d, entry);
        if (fInfo.isDir()) {
            QDir sd(fInfo.absoluteFilePath());
            recursiveAddDir(sd, true);
        } else {
            if (fInfo.completeSuffix() == "txt")
                qDebug() << fInfo.filePath();
        }
    }
}
```
```cpp
void CheckForFiles_Form::on_pushButton_clicked()
{
    QDir r("C:/Users/USER/Desktop/FileA/");
    recursiveAddDir(r, true);
}
```
- SGaist Lifetime Qt Champion
Why not use QDirIterator? You'd only have to do the file listing and check.
#include <gromacs/analysisdata/dataframe.h>
Value type wrapper for non-mutable access to a set of data column values.
Default copy constructor and assignment operator are used and work as intended. Typically new objects of this type are only constructed internally by the library and in classes that are derived from AbstractAnalysisData.
Methods in this class do not throw, but may contain asserts for incorrect usage.
The design of the interfaces is such that all objects of this type should be valid, i.e., header().isValid() should always return true.
Note that it is not possible to change the contents of an initialized object, except by assigning a new object to replace it completely.
Constructs a point set reference from given values.

The first element of the point set should be found from values using the offset in pointSetInfo.
Constructs a point set reference from given values.

The first element in values should correspond to the first column.
Constructs a point set reference to a subset of columns.

Creates a point set that contains columnCount columns starting from firstColumn from points, or a subset if all requested columns are not present in points. If the requested column range and the range in points do not intersect, the result has columnCount() == 0.

firstColumn is relative to the whole data set, i.e., not relative to points.firstColumn().

Mainly intended for internal use.
Returns error in the x coordinate for the frame (if applicable).
Not all data provide error estimates; this typically returns zero in those cases.
Should not be called for invalid frames.
Returns error estimate for a column in this set if applicable.
Currently, this method returns zero if the source data does not specify errors.
Returns zero-based index of the frame.
The return value is >= 0 for valid frames. Should not be called for invalid frames.
Returns whether a column is present in this set.
If present(i) returns false, it depends on the source data whether y(i) and/or dy(i) are defined.
Returns reference container for all values.
First value in the returned container corresponds to firstColumn().
Returns the x coordinate for the frame.
Should not be called for invalid frames.
Returns data value for a column in this set.
In addition to what’s in Anaconda, this lecture deploys the quantecon library:
!pip install --upgrade quantecon
This lecture describes a tax-smoothing problem of a government that faces roll-over risk.
Let’s start with some standard imports:
```python
import quantecon as qe
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Roll-Over Risk

The government chooses a plan $ \{b_{t,t+1}, T_t\}_{t=0}^\infty $ to minimize

$$
E_0 \sum_{t=0}^\infty \beta^t T_t^2
$$

subject to the constraints

$$
T_t + p^t_{t+1} b_{t,t+1} = G_t + b_{t-1,t}
$$

$$
G_t = U_{g,t} z_t
$$

$$
z_{t+1} = A_{22,t} z_t + C_{2,t} w_{t+1}
$$
This is the same set-up as used in this lecture.
We will consider a situation in which the government faces “roll-over risk”.
Specifically, we shut down the government’s ability to borrow in one of the Markov states.
A Dead End
Better Representation of Roll-Over Risk
To force the government to set $ b_{t,t+1} = 0 $, we can instead extend the model to have four Markov states:
- Good today, good yesterday
- Good today, bad yesterday
- Bad today, good yesterday
- Bad today, bad yesterday
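As an illustrative sketch of how such a composite chain can be built (my own construction with made-up transition probabilities, not the lecture's calibration): if today's credit-market state follows a two-state Markov chain, the four composite states above can be tracked with a 4x4 transition matrix in which tomorrow's composite state depends only on today's state:

```python
import numpy as np

# Two-state chain for today's credit-market state: 0 = good, 1 = bad
# (illustrative numbers, not from the lecture)
P = np.array([[0.95, 0.05],
              [0.90, 0.10]])

# Composite states (today, yesterday), in the order used above:
# (g,g), (g,b), (b,g), (b,b)
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

Π = np.zeros((4, 4))
for s, (today, _yesterday) in enumerate(states):
    for s_next, (tomorrow, next_yesterday) in enumerate(states):
        # next period's "yesterday" must equal this period's "today"
        if next_yesterday == today:
            Π[s, s_next] = P[today, tomorrow]

print(Π)  # each row sums to one
```

Each row of the resulting matrix sums to one, so it is a valid transition matrix for the four-state chain.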
```python
# R = ...
lqm = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm.stationary_values();
```
This model is simulated below, using the same process for $ G_t $ as in this lecture.
```python
x0 = np.array([[0, 1, 25]])
T = 300
x, u, w, state = lqm.compute_sequence(x0, ts_length=T)
```

```python
lqm2 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
x, u, w, state = lqm2.compute_sequence(x0, ts_length=T)
```
When reading in XML, ooolib is not namespace sensitive, and this
could cause problems. Let me show how with a simple example.
The first element looks something like this:
<office:document-content
xmlns:mytable="urn:oasis:names:tc:opendocument:xmlns:table:1.0"
...
>
Notice that I've used the prefix mytable instead of table. Also notice
that the mytable points to the same exact identifier as table does.
Now later in my document, I have this:
<mytable:table-cell .....>
This is a perfectly valid ODS document, since the element "table-cell"
has the namespace "urn:oasis:names:tc:opendocument:xmlns:table:1.0".
Open Office will open this document without complaining.
However, ooolib does not look for the element "table-cell" with the
namespace "urn:oasis:names:tc:opendocument:xmlns:table:1.0". Instead, it
looks for a text string "table:table-cell." Imagine that it tries to
operate on my document: it won't find the table-cell, even though it is
there.
Likewise, namespaces can occur in the element above:
<table-row xmlns="urn:oasis:names:tc:opendocument:xmlns:table:1.0"
....
<table-cell
....
The namespace of table-cell is still
"urn:oasis:names:tc:opendocument:xmlns:table:1.0", even though there is
no prefix.
The way to fix this is to enable namespace processing:
parser = xml.parsers.expat.ParserCreate(namespace_separator=' ')
The namespace_separator argument, set to a blank space in my case,
tells the parser to return the name of the namespace, then a space,
followed by the element name. So table-cell would now look like:
urn:oasis:names:tc:opendocument:xmlns:table:1.0 table-cell
No matter what prefix was used to point to the namespace, the table-cell
element will always look like my string above.
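To make this concrete, here is a small self-contained demonstration (my own example, not from ooolib) showing that with namespace_separator enabled the parser reports the same expanded name no matter which prefix the document used:

```python
import xml.parsers.expat

NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"

def element_names(xml_text):
    # Collect every start-element name the parser reports
    names = []
    parser = xml.parsers.expat.ParserCreate(namespace_separator=' ')
    parser.StartElementHandler = lambda name, attrs: names.append(name)
    parser.Parse(xml_text, True)
    return names

with_prefix = '<mytable:table-cell xmlns:mytable="%s"/>' % NS
no_prefix = '<table-cell xmlns="%s"/>' % NS

print(element_names(with_prefix))  # ['urn:...:table:1.0 table-cell']
print(element_names(no_prefix))    # same expanded name, no prefix needed
```

Both documents produce the identical expanded name, which is exactly the property the patch relies on.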
I can fix the code if the developer wants me to.
Paul
I submitted a patch for the bug in which cell annotations in an ODS file
get corrupted when ooolib.py reads in that file.
By mistake, I also included other code which handles invalid files in a
much better way. If ooolib finds an invalid file (invalid XML, a zip
file that is not ODS, or one in which the XML is not valid), the library
will raise an exception.
Paul
There are many options when it comes to captchas, but the most elegant solution is surely Google reCaptcha. The user is only required to click a checkbox and, in some cases, select a few similar images, and that's all.

I know that too often I land on websites whose registration form has some weird characters in the captcha field, and I have to guess that combination a few times because I can't read anything there. So as not to stress users, I will be using the simplest captcha, Google reCaptcha.
I will work on same code as in previous tutorials Laravel 5 Social and Email Authentication and Laravel 5 client side validation with Parsley.js.
All code from this tutorial is on Github.
Frontend implementation
I will first visit the Google reCaptcha site and register a new site there. A good thing is that you can enter multiple domains, so the same captcha placement can work on your development domain and your production domain.

After registering the new site, I am redirected to the captcha keys and a quick guide on how to implement the widget.
reCaptcha Keys
I will store these keys in the .env file.
```
RE_CAP_SITE=6LfAcwkTAAAAAIpZmjqpo-DQ4JWedQ2ywsw4T6qK
RE_CAP_SECRET=6LfAcwkTAAAAAJx5gxqn0S4jguJ-HrJmu7vfpUyI
```
I want this widget to appear after the password confirmation field, before the Register button. I will also include the reCaptcha script, `<script src="https://www.google.com/recaptcha/api.js"></script>`, in the footer section of the page.
```html
<div class="g-recaptcha" data-sitekey="{{ env('RE_CAP_SITE') }}"></div>
<button class="btn btn-lg btn-primary btn-block register-btn" type="submit">Register</button>
```
With this I have completed frontend and it looks pretty good now.
Register form captcha field
Backend implementation
First I need to pull in the reCaptcha PHP package by adding
`"google/recaptcha": "~1.1"` to the composer.json require section and updating the composer dependencies. You may have already pulled this package if you followed the first tutorial.
There are probably 10 different ways how I can code this, but I like reusable approach using traits. This way I can include captcha logic in multiple controllers with simple use statement without the need to inject dependencies.
This trait will be located in app/Traits/CaptchaTrait.php.
```php
<?php

namespace App\Traits;

use Input;
use ReCaptcha\ReCaptcha;

trait CaptchaTrait
{
    public function captchaCheck()
    {
        $response = Input::get('g-recaptcha-response');
        $remoteip = $_SERVER['REMOTE_ADDR'];
        $secret   = env('RE_CAP_SECRET');

        $recaptcha = new ReCaptcha($secret);
        $resp = $recaptcha->verify($response, $remoteip);

        if ($resp->isSuccess()) {
            return 1;
        } else {
            return 0;
        }
    }
}
```
This method captchaCheck does everything: it takes the input from the user, sends a request to Google and returns the state. No need to pass any arguments to the method; a simple `$this->captchaCheck();` will do the job in the controller.
I am using the built-in authentication controllers and I don't want to copy/paste any Illuminate classes and re-implement them. So I found a pretty elegant solution: I will only modify the `validator()` method of `RegisterController`.
```php
/**
 * Get a validator for an incoming registration request.
 *
 * @param  array  $data
 * @return \Illuminate\Contracts\Validation\Validator
 */
protected function validator(array $data)
{
    $data['captcha'] = $this->captchaCheck();

    $validator = Validator::make($data, [
        'first_name'            => 'required',
        'last_name'             => 'required',
        'email'                 => 'required|email|unique:users',
        'password'              => 'required|min:6|max:20',
        'password_confirmation' => 'required|same:password',
        'g-recaptcha-response'  => 'required',
        'captcha'               => 'required|min:1'
    ], [
        'first_name.required'           => 'First Name is required',
        'last_name.required'            => 'Last Name is required',
        'email.required'                => 'Email is required',
        'email.email'                   => 'Email is invalid',
        'password.required'             => 'Password is required',
        'password.min'                  => 'Password needs to have at least 6 characters',
        'password.max'                  => 'Password maximum length is 20 characters',
        'g-recaptcha-response.required' => 'Captcha is required',
        'captcha.min'                   => 'Wrong captcha, please try again.'
    ]);

    return $validator;
}
```
I am calling `captchaCheck()` from the validator method and adding its response to the `$data` array, as if it were a form field. In the validation rule Laravel expects captcha to be at least 1, and since `captchaCheck()` only returns 0 and 1, you see what will happen.
Don't forget to pull in CaptchaTrait with a `use` statement.
You can find the full code used here on Github or you can test the Live Demo here.
# Testing
Testing. For some it's an essential part of their development workflow. For others it's something they know they should do, but for whatever reason it hasn't struck their fancy yet. For others still it's something they ignore completely, hoping the whole concept will go away. But tests are here to stay, and maybe Redwood can change some opinions about testing being awesome and fun.
# Introduction to Testing
If you're already familiar with the ins and outs of testing and just want to know how to do it in Redwood, feel free to skip ahead. Or, keep reading for a refresher. In the following section, we'll build a simple test runner from scratch to help clarify the concepts of testing in our minds.
# Building a Test Runner
The idea of testing is pretty simple: for each "unit" of code you write, you write additional code that exercises that unit and makes sure it works as expected. What's a "unit" of code? That's for you to decide: it could be an entire class, a single function, or even a single line! In general, the smaller the unit, the better. Your tests will stay fast and focused on just one thing, which makes them easy to update when you refactor. The important thing is that you start somewhere and codify your code's functionality in a repeatable, verifiable way.
Let's say we write a function that adds two numbers together:
```javascript
const add = (a, b) => {
  return a + b
}
```
You test this code by writing another piece of code (which usually lives in a separate file and can be run in isolation), just including the functionality from the real codebase that you need for the test to run. For our examples here we'll put the code and its test side-by-side so that everything can be run at once. Our first test will call the
add() function and make sure that it does indeed add two numbers together:
```javascript
const add = (a, b) => {
  return a + b
}

if (add(1, 1) === 2) {
  console.log('pass')
} else {
  console.error('fail')
}
```
Pretty simple, right? The secret is that this simple check is the basis of all testing. Yes, that's it. So no matter how convoluted and theoretical the discussions on testing get, just remember that at the end of the day you're testing whether a condition is true or false.
# Running a Test
You can run that code with Node or just copy/paste it into the web console of a browser. You can also run it in a dedicated web development environment like JSFiddle. Switch to the Javascript tab below to see the code:
Note that you'll see `document.write()` in the JSFiddle examples instead of `console.log`; this is just so that you can actually see something in the Result tab, which is HTML output.
You should see "pass" written to the output. To verify that our test is working as expected, try changing the `+` in the `add()` function to a `-` (effectively turning it into a `subtract()` function) and run the test again. Now you should see "fail".
# Terminology
Let's get to some terminology:
- The entire code block that checks the functionality of `add()` is what's considered a single test
- The specific check that `add(1, 1) === 2` is known as an assertion
- The `add()` function itself is the subject of the test, or the code that is under test
- The value you expect to get (in our example, that's the number `2`) is sometimes called the expected value
- The value you actually get (whatever the output of `add(1, 1)` is) is sometimes called the actual or received value
- The file that contains the test is a test file
- Multiple test files, all run together, are known as a test suite
- You'll generally run your test files and suites with another piece of software. In Redwood that's Jest, and it's known as a test runner
- The amount of code you have that is exercised by tests is referred to as coverage and is usually reported as a percentage. If every single line of code is touched as a result of running your test suite then you have 100% coverage!
This is the basic idea behind all the tests you'll write: when you add code, you'll add another piece of code that uses the first and verifies that the result is what you expect.
Tests can also help drive new development. For example, what happens to our `add()` function if you leave out one of the arguments? We can drive these changes by writing a test of what we want to happen, and then modify the code that's being tested (the subject) to make it satisfy the assertion(s).
# Expecting Errors
So, what does happen if we leave off an argument when calling `add()`? Well, what do we want to happen? We'll answer that question by writing a test for what we expect. For this example let's have it throw an error. We'll write the test first that expects the error:
```javascript
try {
  add(1)
} catch (e) {
  if (e === 'add() requires two arguments') {
    console.log('pass')
  } else {
    console.error('fail')
  }
}
```
This is interesting because we actually expect an error to be thrown, but we don't want that error to stop the test suite in its tracks; we want the error to be raised, we just want to make sure it's exactly what we expect it to be! So we'll surround the code that's going to error in a try/catch block and inspect the error message. If it's what we want, then the test actually passes.
Remember: we're testing for what we want to happen. Usually you think of errors as being "bad" but in this case we want the code to throw an error, so if it does, that's actually good! Raising an error passes the test, not raising the error (or raising the wrong error) is a failure.
Run this test and what happens? (If you previously made a change to `add()` to see the test fail, change it back now):
Where did that come from? Well, our subject `add()` didn't raise any errors (Javascript doesn't care about the number of arguments passed to a function) and so it tried to add `1` to `undefined`, and that's Not A Number. We didn't think about that! Testing is already helping us catch edge cases.
To respond properly to this case we'll make one slight modification: add another "fail" log message if the code somehow gets past the call to `add(1)` without throwing an error:
```javascript
try {
  add(1)
  console.error('fail: no error thrown')
} catch (e) {
  if (e === 'add() requires two arguments') {
    console.log('pass')
  } else {
    console.error('fail: wrong error')
  }
}
```
We also added a little more information to the "fail" messages so we know which one we encountered. Try running that code again and you should see "fail: no error thrown" in the console.
Now we'll actually update `add()` to behave as we expect: by throwing an error if fewer than two arguments are passed.
```javascript
const add = (...nums) => {
  if (nums.length !== 2) {
    throw 'add() requires two arguments'
  }
  return nums[0] + nums[1]
}
```
Javascript doesn't have a simple way to check how many arguments were passed to a function, so we've converted the incoming arguments to an array via spread syntax and then we check the length of that instead.
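To see that argument-counting trick in isolation (a standalone snippet, not part of the test file):

```javascript
// Rest parameters gather all arguments into a real array,
// so its length tells us how many were passed.
const argCount = (...nums) => nums.length

console.log(argCount(1))       // → 1
console.log(argCount(1, 2))    // → 2
console.log(argCount(1, 2, 3)) // → 3
```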
We've covered passing too few arguments. What if we pass too many? We'll leave writing that test as homework, but you should have everything you need, and you won't even need any changes to the `add()` function to make it work!
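If you want to check your work, here's one possible version of that homework test, written in the same hand-rolled style we've been using (it reuses the `add()` and error message defined above):

```javascript
const add = (...nums) => {
  if (nums.length !== 2) {
    throw 'add() requires two arguments'
  }
  return nums[0] + nums[1]
}

// Too many arguments should raise the same error as too few
try {
  add(1, 2, 3)
  console.error('fail: no error thrown')
} catch (e) {
  if (e === 'add() requires two arguments') {
    console.log('pass')
  } else {
    console.error('fail: wrong error')
  }
}
```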
# Our Test Runner Compared to Jest
Our tests are a little verbose (10 lines of code to test that the right number of arguments were passed). Luckily, the test runner that Redwood uses, Jest, provides a simpler syntax for the same assertions. Here's the complete test file, but using Jest's provided helpers:
```javascript
describe('add()', () => {
  it('adds two numbers', () => {
    expect(add(1, 1)).toEqual(2)
  })

  it('throws an error for too few arguments', () => {
    expect(() => add(1)).toThrow('add() requires two arguments')
  })
})
```
Jest lets us be very clear about our subject in the first argument to the `describe()` function, letting us know what we're testing. Note that it's just a string and doesn't have to be exactly the same as the function/class you're testing (but usually is for clarity).

Likewise, each test is given a descriptive name as the first argument to the `it()` functions ("it" being the subject under test). Functions like `expect()` and `toEqual()` make it clear what values we expect to receive when running the test suite. If the expectation fails, Jest will indicate that in the output letting us know the name of the test that failed and what went wrong (the expected and actual values didn't match, or an error was thrown that we didn't expect).
Jest also has a nicer output than our cobbled-together test runner using `console.log`:
Are you convinced? Let's keep going and see what Redwood brings to the table.
# Redwood and Testing
Redwood relies on several packages to do the heavy lifting, but many are wrapped in Redwood's own functionality which makes them even better suited to their individual jobs:
- Jest
- React Testing Library
- Mock Service Worker or msw for short.
Redwood Generators get your test suite bootstrapped. Redwood also includes Storybook, which isn't technically a test suite, but can help in other ways.
Let's explore each one and how they're integrated with Redwood.
# Jest
Jest is Redwood's test runner. By default, starting Jest via `yarn rw test` will start a watch process that monitors your files for changes and re-runs the test(s) that are affected by that changed file (either the test itself, or the subject under test).
# React Testing Library
React Testing Library is an extension of DOM Testing Library, adding functionality specifically for React. React Testing Library lets us render a single component in isolation and test that expected text is present or a certain HTML structure has been built.
# Mock Service Worker
Among other things, Mock Service Worker (msw) lets you simulate the response from API calls. Where this comes into play with Redwood is how the web-side constantly calls to the api-side using GraphQL: rather than make actual GraphQL calls, which would slow down the test suite and put a bunch of unrelated code under test, Redwood uses MSW to intercept GraphQL calls and return a canned response, which you include in your test.
# Storybook
Storybook itself doesn't appear to be related to testing at all—it's for building and styling components in isolation from your main application—but it can serve as a sanity check for an overlooked part of testing: the user interface. Your tests will only be as good as you write them, and testing things like the alignment of text on the page, the inclusion of images, or animation can be very difficult without investing huge amounts of time and effort. These tests are also very brittle since, depending on how they're written, they can break without any code changes at all! Imagine an integration with a CMS that allows a marketing person to make text/style changes. These changes will probably not be covered by your test suite, but could make your site unusable depending on how bad they are.
Storybook can provide a quick way to inspect all visual aspects of your site without the tried-and-true method of having a QA person log in and exercise every possible function. Unfortunately, checking those UI elements is not something that Storybook can automate for you, and so can't be part of a continuous integration system. But it makes it possible to do so, even if it currently requires a human touch.
# Redwood Generators
Redwood's generators will include test files for basic functionality automatically with any Components, Pages, Cells, or Services you generate. These will test very basic functionality, but they're a solid foundation and will not automatically break as soon as you start building out custom features.
# Test Commands
You can use a single command to run your entire suite:
```bash
yarn rw test
```
This will start Jest in "watch" mode which will continually run and monitor the file system for changes. If you change a test or the component that's being tested, Jest will re-run any associated test file. This is handy when you're spending the afternoon writing tests and always want to verify the code you're adding without swapping back and forth to a terminal and pressing `↑` `Enter` to run the last command again.
To start the process without watching, add the `--no-watch` flag:
```bash
yarn rw test --no-watch
```
This one is handy before committing some changes to be sure you didn't inadvertently break something you didn't expect, or before a deploy to production.
You can run only the web- or api-side test suites by including the side as another argument to the command:
```bash
yarn rw test web
yarn rw test api
```
# Testing Components
Let's start with the things you're probably most familiar with if you've done any React work (with or without Redwood): components. The simplest test for a component would be matching against the exact HTML that's rendered by React (this doesn't actually work so don't bother trying):
```javascript
// web/src/components/Article/Article.js

const Article = ({ article }) => {
  return <article>{article.title}</article>
}

// web/src/components/Article/Article.test.js

import { render } from '@redwoodjs/testing'

import Article from 'src/components/Article'

describe('Article', () => {
  it('renders an article', () => {
    expect(render(<Article article={{ title: 'Foobar' }} />))
      .toEqual('<article>Foobar</article>')
  })
})
```
This test (if it worked) would prove that you are indeed rendering an article. But it's also extremely brittle: any change to the component, even adding a `className` attribute for styling, will cause the test to break. That's not ideal, especially when you're just starting out building your components and will constantly be making changes as you improve them.
Why do we keep saying this test won't work? Because as far as we can tell there's no easy way to simply render to a string. `render` actually returns an object that has several functions for testing different parts of the output. Those are what we'll look into in the next section.
# Queries
In most cases you will want to exclude the design elements and structure of your components from your test. Then you're free to redesign the component all you want without also having to make the same changes to your test suite. Let's look at some of the functions that React Testing Library provides (they call them "queries") that let you check for parts of the rendered component, rather than a full string match.
# getByText()
In our <Article> component it seems like we really just want to test that the title of the product is rendered. How and what it looks like aren't really a concern for this test. Let's update the test to just check for the presence of the title itself:
```javascript
// web/src/components/Article/Article.test.js

import { render, screen } from '@redwoodjs/testing'

describe('Article', () => {
  it('renders an article', () => {
    render(<Article article={{ title: 'Foobar' }} />)

    expect(screen.getByText('Foobar')).toBeInTheDocument()
  })
})
```
Note the additional `screen` import. This is a convenience helper from React Testing Library that automatically puts you in the `document.body` context before any of the following checks.
We can use `getByText()` to find text content anywhere in the rendered DOM nodes. `toBeInTheDocument()` is a matcher added to Jest by React Testing Library that returns true if the `getByText()` query finds the given text in the document.
So, the above test in plain English says "if there is any DOM node containing the text 'Foobar' anywhere in the document, return true."
# queryByText()
Why not use `getByText()` for everything? Because it will raise an error if the text is not found in the document. That means if you want to explicitly test that some text is not present, you can't: you'll always get an error.
Consider an update to our <Article> component:
```javascript
// web/src/components/Article/Article.js

import { Link, routes } from '@redwoodjs/router'

const Article = ({ article, summary }) => {
  return (
    <article>
      <h1>{article.title}</h1>
      <div>
        {summary ? article.body.substring(0, 100) + '...' : article.body}
        {summary && <Link to={routes.article({ id: article.id })}>Read more</Link>}
      </div>
    </article>
  )
}

export default Article
```
If we're only displaying the summary of an article then we'll only show the first 100 characters with an ellipsis on the end ("...") and include a link to "Read more" to see the full article. A reasonable test for this component would be that when the `summary` prop is `true` then the "Read more" text should be present. If `summary` is `false` then it should not be present. That's where `queryByText()` comes in (relevant test lines are highlighted):
```javascript
// web/src/components/Article/Article.test.js

import { render, screen } from '@redwoodjs/testing'

import Article from 'src/components/Article'

describe('Article', () => {
  const article = { id: 1, title: 'Foobar', body: 'Lorem ipsum...' }

  it('renders the title of an article', () => {
    render(<Article article={article} />)

    expect(screen.getByText('Foobar')).toBeInTheDocument()
  })

  it('renders a summary version', () => {
    render(<Article article={article} summary={true} />)

    expect(screen.getByText('Read more')).toBeInTheDocument()
  })

  it('renders a full version', () => {
    render(<Article article={article} summary={false} />)

    expect(screen.queryByText('Read more')).not.toBeInTheDocument()
  })
})
```
# getByRole() / queryByRole()
`getByRole()` allows you to look up elements by their "role", which is an ARIA attribute that assists in accessibility features. Many HTML elements have a default role (including `<button>` and `<a>`) but you can also define one yourself with a `role` attribute on an element.
Sometimes it may not be enough to say "this text must be on the page." You may want to test that an actual link is present on the page. Maybe you have a list of users' names and each name should be a link to a detail page. We could test that like so:
it('renders a link with a name', () => { render(<List data={[{ name: 'Rob' }, { name: 'Tom' }]} />) expect(screen.getByRole('link', { name: 'Rob' })).toBeInTheDocument() expect(screen.getByRole('link', { name: 'Tom' })).toBeInTheDocument() })
getByRole() expects the role (
<a> elements have a default role of
link) and then an object with options, one of which is
name which refers to the text content inside the element. Check out the docs for the
*ByRole queries.
If we wanted to eliminate some duplication (and make it easy to expand or change the names in the future):
it('renders a link with a name', () => { const data = [{ name: 'Rob' }, { name: 'Tom' }] render(<List data={data} />) data.forEach((datum) => { expect(screen.getByRole('link', { name: datum.name })).toBeInTheDocument() }) })
But what if we wanted to check the
href of the link itself to be sure it's correct? In that case we can capture the
screen.getByRole() return and run expectations on that as well (the
forEach() loop has been removed here for simplicity):
import { routes } from '@redwoodjs/router' it('renders a link with a name', () => { const data = { id: 1, name: 'Rob' } render(<List data={[data]} />) const element = screen.getByRole('link', { name: data.name }) expect(element).toBeInTheDocument() expect(element).toHaveAttribute('href', routes.user({ id: data.id })) })
Why so many empty lines in the middle of the test?
You may have noticed a pattern of steps beginning to emerge in your tests:
- Set variables or otherwise prepare some code
- render or execute the function under test
- expects to verify output
Most tests will contain at least the last two, but sometimes all three of these parts, and in some communities it's become standard to include a newline between each "section". Remember the acronym SEA: setup, execute, assert.
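As a plain JavaScript sketch (using a hypothetical add() function rather than a component), the three sections might look like this:

```javascript
// Hypothetical pure function standing in for "the thing under test"
const add = (a, b) => a + b

// Setup: prepare inputs
const a = 2
const b = 3

// Execute: run the function under test
const result = add(a, b)

// Assert: verify the output
console.assert(result === 5, 'add() should sum its arguments')
console.log(result) // 5
```

The blank lines between the three sections are doing the same job as the empty lines you saw in the test above.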
# Other Queries/Matchers
There are several other node/text types you can query against with React Testing Library, including
title,
role and
alt attributes, form labels, placeholder text, and more. If you still can't access the node or text you're looking for there's a fallback attribute you can add to any DOM element that can always be found:
data-testid which you can access using
getByTestId,
queryByTestId and others (but it involves including that attribute in your rendered HTML always, not just when running the test suite).
Here's a cheatsheet from React Testing Library with the various permutations of
getBy,
queryBy and their siblings.
The full list of available matchers like
toBeInTheDocument() and
toHaveAttribute() don't seem to have nice docs on the Testing Library site, but you can find them in the README inside the main repo.
In addition to testing for static things like text and attributes, you can also fire events and check that the DOM responds as expected. Read more about user-events, jest-dom and more at the official Testing Library docs site.
# Mocking GraphQL Calls
If you're using GraphQL inside your components, you can mock them to return the exact response you want and then focus on the content of the component being correct based on that data. Returning to our <Article> component, let's make an update where only the
id of the article is passed to the component as a prop and then the component itself is responsible for fetching the content from GraphQL:
Normally we recommend using a cell for exactly this functionality, but for the sake of completeness we're showing how to test when doing GraphQL queries the manual way!
// web/src/components/Article/Article.js import { useQuery } from '@redwoodjs/web' const GET_ARTICLE = gql` query getArticle($id: Int!) { article(id: $id) { id title body } } ` const Article = ({ id }) => { const { data } = useQuery(GET_ARTICLE, { variables: { id } }) if (data) { return ( <article> <h1>{data.article.title}</h1> <div>{data.article.body}</div> </article> ) } else { return 'Loading...' } } export default Article
# mockGraphQLQuery()
Redwood provides the test function
mockGraphQLQuery() for providing the result of a given named GraphQL query. In this case our query is named
getArticle and we can mock that in our test as follows:
// web/src/components/Article/Article.test.js import { render, screen } from '@redwoodjs/testing' import Article from 'src/components/Article' describe('Article', () => { it('renders the title of an article', async () => { mockGraphQLQuery('getArticle', (variables) => { return { article: { id: variables.id, title: 'Foobar', body: 'Lorem ipsum...', } } }) render(<Article id={1} />) expect(await screen.findByText('Foobar')).toBeInTheDocument() }) })
We're using a new query here,
findByText(), which allows us to find things that may not be present in the first render of the component. In our case, when the component first renders, the data hasn't loaded yet, so it will render only "Loading..." which does not include the title of our article. Without it the test would immediately fail, but
findByText() is smart and waits for subsequent renders or a maximum amount of time before giving up.
Note that you need to make the test function
async and put an
await before the
findByText() call. Read more about
findBy*() queries and the higher level
waitFor() utility here.
The function that's given as the second argument to
mockGraphQLQuery will be sent a couple of arguments. The first—and only one we're using here—is
variables which will contain the variables given to the query when
useQuery was called. In this test we passed an
id of
1 to the <Article> component when test rendering, so
variables will contain
{id: 1}. Using this variable in the callback function to
mockGraphQLQuery allows us to reference those same variables in the body of our response. Here we're making sure that the returned article's
id is the same as the one that was requested:
return { article: { id: variables.id, title: 'Foobar', body: 'Lorem ipsum...', } }
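You can see this echoing behavior outside of Redwood by calling such a resolver function directly — a plain JavaScript sketch mirroring the mock above:

```javascript
// The same resolver shape you'd hand to mockGraphQLQuery(), called directly
// here so the example runs standalone
const getArticleMock = (variables) => ({
  article: {
    id: variables.id,
    title: 'Foobar',
    body: 'Lorem ipsum...',
  },
})

// Whatever id the "query" was invoked with is echoed back in the response
const response = getArticleMock({ id: 1 })
console.log(response.article.id) // 1
```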
Along with
variables there is a second argument: an object which you can destructure a couple of properties from. One of them is
ctx which is the context around the GraphQL response. One thing you can do with
ctx is simulate your GraphQL call returning an error:
mockGraphQLQuery('getArticle', (variables, { ctx }) => { ctx.error({ message: 'Error' }) })
You could then test that you show a proper error message in your component:
// web/src/components/Article/Article.js const Article = ({ id }) => { const { data, error } = useQuery(GET_ARTICLE, { variables: { id }, }) if (error) { return <div>Sorry, there was an error</div> } if (data) { // ... } } // web/src/components/Article/Article.test.js it('renders an error message', async () => { mockGraphQLQuery('getArticle', (variables, { ctx }) => { ctx.error({ message: 'Error' }) }) render(<Article id={1} />) expect(await screen.findByText('Sorry, there was an error')).toBeInTheDocument() })
# mockGraphQLMutation()
Similar to how we mocked GraphQL queries, we can mock mutations as well. Read more about GraphQL mocking in our Mocking GraphQL requests docs.
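As a sketch, a mutation mock follows the same resolver shape as a query mock. The mutation name createArticle and its fields here are hypothetical, and we call the resolver directly so the example runs standalone:

```javascript
// Resolver you might register with Redwood's mockGraphQLMutation();
// in a test you would write: mockGraphQLMutation('createArticle', createArticleMock)
const createArticleMock = (variables) => ({
  createArticle: {
    id: 1,
    title: variables.input.title,
  },
})

// The variables given to the mutation flow into the mocked response
const result = createArticleMock({ input: { title: 'Foobar' } })
console.log(result.createArticle.title) // Foobar
```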
# Mocking Auth
Most applications will eventually add Authentication/Authorization to the mix. How do we test that a component behaves a certain way when someone is logged in, or has a certain role?
Consider the following component (that happens to be a page) which displays a "welcome" message if the user is logged in, and a button to log in if they aren't:
// web/src/pages/HomePage/HomePage.js import { useAuth } from '@redwoodjs/auth' const HomePage = () => { const { isAuthenticated, currentUser, logIn } = useAuth() return ( <> <header> { isAuthenticated && <h1>Welcome back {currentUser.name}</h1> } </header> <main> { !isAuthenticated && <button onClick={logIn}>Login</button> } </main> </> ) } export default HomePage
If we didn't do anything special, there would be no user logged in and we could only ever test the not-logged-in state:
// web/src/pages/HomePage/HomePage.test.js import { render, screen } from '@redwoodjs/testing' import HomePage from './HomePage' describe('HomePage', () => { it('renders a login button', () => { render(<HomePage />) expect(screen.getByRole('button', { name: 'Login' })).toBeInTheDocument() }) })
This test is a little more explicit in that it expects an actual
<button> element to exist and that its label (name) is "Login". Being explicit with something as important as the login button can be a good idea, especially if you want to be sure that your site is friendly to screen readers and other assistive browsing devices.
# mockCurrentUser()
How do we test that when a user is logged in, it outputs a message welcoming them, and that the button is not present? Similar to
mockGraphQLQuery() Redwood also provides a
mockCurrentUser() which tells Redwood what to return when the
getCurrentUser() function of
api/src/lib/auth.js is invoked:
// web/src/pages/HomePage/HomePage.test.js import { render, screen, waitFor } from '@redwoodjs/testing' import HomePage from './HomePage' describe('HomePage', () => { it('renders a login button when logged out', () => { render(<HomePage />) expect(screen.getByRole('button', { name: 'Login' })).toBeInTheDocument() }) it('does not render a login button when logged in', async () => { mockCurrentUser({ name: 'Rob' }) render(<HomePage />) await waitFor(() => { expect( screen.queryByRole('button', { name: 'Login' }) ).not.toBeInTheDocument() }) }) it('renders a welcome message when logged in', async () => { mockCurrentUser({ name: 'Rob' }) render(<HomePage />) expect(await screen.findByText('Welcome back Rob')).toBeInTheDocument() }) })
Here we call
mockCurrentUser() before the
render() call. Right now our code only references the
name of the current user, but you would want this object to include everything a real user contains, maybe an email address and a list of roles.
We introduced
waitFor() which waits for a render update before passing/failing the expectation. Although
findByRole() will wait for an update, it will raise an error if the element is not found (similar to
getByRole()). So here we had to switch to
queryByRole(), but that version isn't async, so we added
waitFor() to get the async behavior back.
Figuring out which assertions need to be async and which ones don't can be frustrating, we know. If you get a failing test when using
screen, you'll see the output of the DOM dumped along with the failure message, which helps find what went wrong. You can see exactly what the test saw (or didn't see) in the DOM at the time of the failure.
If you see some text rendering that you're sure shouldn't be there (because maybe you have a conditional around whether or not to display it) this is a good indication that the test isn't waiting for a render update that would cause that conditional to render the opposite output. Change to a
findBy* query or wrap the expect() in a waitFor() and you should be good to go!
You may have noticed above that we created two tests, one for checking the button and one for checking the "welcome" message. This is a best practice in testing: keep your tests as small as possible by only testing one "thing" in each. If you find that you're using the word "and" in the name of your test (like "does not render a login button and renders a welcome message") that's a sign that your test is doing too much.
# Mocking Roles
By including a list of
roles in the object returned from
mockCurrentUser() you are also mocking out calls to
hasRole() in your components so that they respond correctly as to whether
currentUser has an expected role or not.
Given a component that does something like this:
const { currentUser, hasRole } = useAuth() return ( <>{ hasRole('admin') && <button onClick={deleteUser}>Delete User</button> }</> )
You can test both cases (user does and does not have the "admin" role) with two separate mocks:
mockCurrentUser({ roles: ['admin'] }) mockCurrentUser({ roles: [] })
That's it!
# Handling Duplication
We had to duplicate the
mockCurrentUser() call and duplication is usually another sign that things can be refactored. In Jest you can nest
describe blocks and include setup that is shared by the members of that block:
describe('HomePage', () => { describe('logged out', () => { it('renders a login button when logged out', () => { render(<HomePage />) expect(screen.getByRole('button', { name: 'Login' })).toBeInTheDocument() }) }) describe('logged in', () => { beforeEach(() => { mockCurrentUser({ name: 'Rob' }) render(<HomePage />) }) it('does not render a login button when logged in', async () => { await waitFor(() => { expect( screen.queryByRole('button', { name: 'Login' }) ).not.toBeInTheDocument() }) }) it('renders a welcome message when logged in', async () => { expect(await screen.findByText('Welcome back Rob')).toBeInTheDocument() }) }) })
While the primordial developer inside of you probably breathed a sigh of relief seeing this refactor, heed this warning: the more deeply nested your tests become, the harder it is to read through the file and figure out what's in scope and what's not by the time your actual test is invoked. In our test above, if you just focused on the last test, you would have no idea that
currentUser is being mocked. Imagine a test file with dozens of tests and multiple levels of nested
describes—it becomes a chore to scroll through and mentally keep track of what variables are in scope as you look for nested
beforeEach() blocks.
Some schools of thought say you should keep your test files flat (that is, no nesting) which trades ease of readability for duplication: when flat, each test is completely self contained and you know you can rely on just the code inside that test to determine what's in scope. It makes future test modifications easier because each test only relies on the code inside of itself. You may get nervous thinking about changing 10 identical instances of
mockCurrentUser() but that kind of thing is exactly what your IDE is good at!
For what it's worth, your humble author endorses the flat tests style.
# Testing Pages & Layouts
Pages and Layouts are just regular components so all the same techniques apply!
# Testing Cells
Testing Cells is very similar to testing components: something is rendered to the DOM and you generally want to make sure that certain expected elements are present.
Two situations make testing Cells unique:
- A single Cell can export up to four separate components
- There's a GraphQL query taking place
The first situation is really no different than regular component testing: you just test more than one component in your test. For example:
// web/src/components/ArticleCell/ArticleCell.js import Article from 'src/components/Article' export const QUERY = gql` query GetArticle($id: Int!) { article(id: $id) { id title body } } ` export const Loading = () => <div>Loading...</div> export const Empty = () => <div>Empty</div> export const Failure = ({ error }) => <div>Error: {error.message}</div> export const Success = ({ article }) => { return <Article article={article} /> }
Here we're exporting four components and if you created this Cell with the Cell generator then you'll already have four tests that make sure that each component renders without errors:
// web/src/components/ArticleCell/ArticleCell.test.js import { render, screen } from '@redwoodjs/testing' import { Loading, Empty, Failure, Success } from './ArticleCell' import { standard } from './ArticleCell.mock' describe('ArticleCell', () => { it('renders Loading successfully', () => { expect(() => { render(<Loading />) }).not.toThrow() }) it('renders Empty successfully', async () => { expect(() => { render(<Empty />) }).not.toThrow() }) it('renders Failure successfully', async () => { expect(() => { render(<Failure error={new Error('Oh no')} />) }).not.toThrow() }) it('renders Success successfully', async () => { expect(() => { render(<Success article={standard().article} />) }).not.toThrow() }) })
You might think that "rendering without errors" is a pretty lame test, but it's actually a great start. In React something usually renders successfully or fails spectacularly, so here we're making sure that there are no obvious issues with each component.
You can expand on these tests just as you would with a regular component test: by checking that certain text in each component is present.
# Cell Mocks
When the <Success> component is tested, what's this
standard() function that's passed as the
article prop?
If you used the Cell generator, you'll get a
mock.js file along with the cell component and the test file:
// web/src/components/ArticleCell/ArticleCell.mock.js export const standard = () => ({ article: { id: 42, } })
Each mock will start with a
standard() function which has special significance (more on that later). The return of this function is the data you want to be returned from the GraphQL
QUERY defined at the top of your cell.
Something to note is that the structure of the data returned by your
QUERY and the structure of the object returned by the mock is in no way required to be identical as far as Redwood is concerned. You could be querying for an
article but have the mock return an
animal and the test will happily pass. Redwood just intercepts the GraphQL query and returns the mock data. This is something to keep in mind if you make major changes to your
QUERY—be sure to make similar changes to your returned mock data or you could get falsely passing tests!
Why not just include this data inline in the test? We're about to reveal the answer in the next section, but before we do just a little more info about working with these
mock.js files...
Once you start testing more scenarios you can add custom mocks with different names for use in your tests. For example, maybe you have a case where an article has no body, only a title, and you want to be sure that your component still renders correctly. You could create an additional mock that simulates this condition:
// web/src/components/ArticleCell/ArticleCell.mock.js export const standard = () => ({ article: { id: 1, title: 'Foobar', body: 'Lorem ipsum...' } }) export const missingBody = { article: { id: 2, title: 'Barbaz', body: null } }
And then you just reference that new mock in your test:
// web/src/components/ArticleCell/ArticleCell.test.js import { render, screen } from '@redwoodjs/testing' import { Loading, Empty, Failure, Success } from './ArticleCell' import { standard, missingBody } from './ArticleCell.mock' describe('ArticleCell', () => { /// other tests... it('Success renders successfully', async () => { expect(() => { render(<Success article={standard().article} />) }).not.toThrow() }) it('Success renders successfully without a body', async () => { expect(() => { render(<Success article={missingBody.article} />) }).not.toThrow() }) })
Note that this second mock simply returns an object instead of a function. In the simplest case all you need your mock to return is an object. But there are cases where you may want to include logic in your mock, and in these cases you'll appreciate the function container. Especially in the following scenario...
# Testing Components That Include Cells
Consider the case where you have a page which renders a cell inside of it. You write a test for the page (using regular component testing techniques mentioned above). But if the page includes a cell, and a cell wants to run a GraphQL query, what happens when the page is rendered?
This is where the specially named
standard() mock comes into play: the GraphQL query in the cell will be intercepted and the response will be the content of the
standard() mock. This means that no matter how deeply nested your component/cell structure becomes, you can count on every cell in that stack rendering in a predictable way.
And this is where
standard() being a function becomes important. The GraphQL call is intercepted behind the scenes with the same
mockGraphQLQuery() function we learned about earlier. And since it's using that same function, the second argument (the function which runs to return the mocked data) receives the same arguments (
variables and an object with keys like
ctx).
So, all of that is to say that when
standard() is called it will receive the variables and context that goes along with every GraphQL query, and you can make use of that data in the
standard() mock. That means it's possible to, for example, look at the
variables that were passed in and conditionally return a different object.
Perhaps you have a products page that renders either in stock or out of stock products. You could inspect the
status that's passed in via
variables.status and return a different inventory count depending on whether the calling code wants in-stock or out-of-stock items:
// web/src/components/ProductCell/ProductCell.mock.js export const standard = (variables) => { return { products: [ { id: variables.id, name: 'T-shirt', inventory: variables.status === 'instock' ? 100 : 0 } ] } }
Assuming you had a <ProductPage> component:
// web/src/pages/ProductPage/ProductPage.js import ProductsCell from 'src/components/ProductsCell' const ProductPage = ({ status }) => { return ( <div> <h1>{ status === 'instock' ? 'In Stock' : 'Out of Stock' }</h1> <ProductsCell status={status} /> </div> ) } export default ProductPage
Which, in your page test, would let you do something like:
// web/src/pages/ProductPage/ProductPage.test.js import { render, screen } from '@redwoodjs/testing' import ProductPage from './ProductPage' describe('ProductPage', () => { it('renders in stock products', () => { render(<ProductPage status='instock' />) expect(screen.getByText('In Stock')).toBeInTheDocument() }) it('renders out of stock products', () => { render(<ProductPage status='outofstock' />) expect(screen.getByText('Out of Stock')).toBeInTheDocument() }) })
Be aware that if you do this, and continue to use the
standard() mock in your regular cell tests, you'll either need to start passing in
variables yourself:
// web/src/components/ArticleCell/ArticleCell.test.js describe('ArticleCell', () => { /// other tests... test('Success renders successfully', async () => { expect(() => { render(<Success article={standard({ status: 'instock' }).article} />) }).not.toThrow() }) })
Or conditionally check that
variables exists at all before basing any logic on them:
// web/src/components/ProductCell/ProductCell.mock.js export const standard = (variables) => { return { product: { id: variables?.id || 1, name: 'T-shirt', inventory: variables && variables.status === 'instock' ? 100 : 0 } } }
# Testing Services
Until now we've only tested things on the web-side of our app. When we test the api-side that means testing our Services.
In some ways testing a Service feels more "concrete" than testing components—Services deal with hard data coming out of a database or third party API, while components deal with messy things like language, layout, and even design elements.
Services will usually contain most of your business logic which is important to verify for correctness—crediting or debiting the wrong account number on the Services side could put a swift end to your business!
# The Test Database
To simplify Service testing, rather than mess with your development database, Redwood creates a test database that it executes queries against. By default this database lives at the location defined by the
TEST_DATABASE_URL environment variable, falling back to
.redwood/test.db if that variable does not exist.
If you're using Postgres or MySQL locally you'll want to set that env var to your connection string for a test database in those services.
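For example, with a local Postgres instance you might set it in your project's .env file. The connection details below are placeholders, not real values:

```shell
# .env — point Redwood's test runner at a dedicated database (placeholder values)
TEST_DATABASE_URL=postgresql://user:secret@localhost:5432/myapp_test
```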
Does anyone else find it confusing that the software itself is called a "database", but the container that actually holds your data is also called a "database," and you can have multiple databases (containers) within one instance of a database (software)?
When you start your test suite you may notice some output from Prisma talking about migrating the database. Redwood will automatically run
yarn rw prisma migrate dev against your test database to make sure it's up-to-date.
# Writing Service Tests
A Service test can be just as simple as a component test:
// api/src/services/users/users.js export const createUser = ({ input }) => { return db.user.create({ data: input }) } // api/src/services/users/users.test.js import { createUser } from './users' describe('users service', () => { it('creates a user', async () => { const record = await createUser({ input: { name: 'David' } }) expect(record.id).not.toBeNull() expect(record.name).toEqual('David') }) })
This test creates a user and then verifies that it now has an
id and that the name is what we sent in as the
input. Note the use of
async/
await: although the service itself doesn't use
async/
await, when the service is invoked as a GraphQL resolver, the GraphQL provider sees that it's a Promise and waits for it to resolve before returning the response. We don't have that middleman here in the test suite so we need to
await manually.
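Here's the same situation in miniature, with a plain JavaScript stand-in for the service (hypothetical names, and Promise.resolve() standing in for the Prisma call):

```javascript
// A service-style function: it doesn't use async/await itself, it just
// returns a Promise (as Prisma's db calls do)
const createUser = ({ input }) => Promise.resolve({ id: 1, ...input })

const main = async () => {
  // Without `await` we'd be asserting against a pending Promise, not the record
  const record = await createUser({ input: { name: 'David' } })
  console.log(record.name) // David
}

main()
```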
Did a user really get created somewhere? Yes: in the test database!
In theory, it would be possible to mock out the calls to
db to avoid talking to the database completely, but we felt that the juice wouldn't be worth the squeeze—you will end up mocking tons of functionality that is also under active development (Prisma) and you'd constantly be chasing your tail trying to keep up. So we give devs a real database to access and remove a whole class of frustrating bugs and false test passes/failures because of out-of-date mocks.
# Database Seeding
What about testing code that retrieves a record from the database? Easy, just pre-seed the data into the database first, then test the retrieval. Seeding refers to setting some data in the database that some other code requires to be present to get its job done. In a production deployment this could be a list of pre-set tags that users can apply to forum posts. In our tests it refers to data that needs to be present for our actual test to use.
In the following code, the "David" user is the seed. What we're actually testing is the
users() and
user() functions. We verify that the data returned by them matches the structure and content of the seed:
it('retrieves all users', async () => { const data = await createUser({ input: { name: 'David' } }) const list = await users() expect(list.length).toEqual(1) }) it('retrieves a single user', async () => { const data = await createUser({ input: { name: 'David' } }) const record = await user({ id: data.id }) expect(record.id).toEqual(data.id) expect(record.name).toEqual(data.name) })
Notice that the string "David" only appears once (in the seed) and the expectations are comparing against values in
data, not the raw strings again. This is a best practice and makes it easy to update your test data in one place and have the expectations continue to pass without edits.
Did your spidey sense tingle when you saw that exact same seed duplicated in each test? We probably have other tests that check that a user is editable and deletable, both of which would require the same seed again! Even more tingles! When there's obvious duplication like this you should know by now that Redwood is going to try and remove it.
# Scenarios
Redwood created the concept of "scenarios" to cover this common case. A scenario is a set of seed data that you can count on existing at the start of your test and removed again at the end. This means that each test lives in isolation, starts with the exact same database state as every other one, and any changes you make are only around for the length of that one test; they won't cause side-effects in any other.
When you use any of the generators that create a service (scaffold, sdl or service) you'll get a
scenarios.js file alongside the service and test files:
export const standard = defineScenario({ user: { one: { name: 'String', }, two: { name: 'String', } }, })
This scenario creates two user records. The generator can't determine the intent of your fields, it can only tell the datatypes, so strings get prefilled with just 'String'. What's up with the
one and
two keys? Those are friendly names you can use to reference your scenario data in your test.
Let's look at a better example. We'll update the scenario with some additional data and give them a more distinctive name:
export const standard = defineScenario({ user: { anthony: { name: 'Anthony Campolo', email: 'anthony@redwoodjs.com' }, dom: { name: 'Dom Saadi', email: 'dom@redwoodjs.com' } }, })
Note that even though we are creating two users we don't use array syntax and instead just pass several objects. Why will become clear in a moment.
Now in our test we replace the
it() function with
scenario():
scenario('retrieves all users', async (scenario) => { const list = await users() expect(list.length).toEqual(Object.keys(scenario.user).length) }) scenario('retrieves a single user', async (scenario) => { const record = await user({ id: scenario.user.dom.id }) expect(record.id).toEqual(scenario.user.dom.id) })
The
scenario argument passed to the function contains the scenario data after being inserted into the database which means it now contains the real
id that the database assigned the record. Any other fields that contain a database default value will be populated, including DateTime fields like
createdAt. We can reference individual model records by name, like
scenario.user.dom. This is why scenario records aren't created with array syntax: otherwise we'd be referencing them with syntax like
scenario.user[1]. Yuck.
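A sketch of what that populated scenario argument might look like after insertion (the ids here are made up — the database assigns the real ones):

```javascript
// Hypothetical shape of the scenario object passed to your test
const scenario = {
  user: {
    anthony: { id: 1, name: 'Anthony Campolo', email: 'anthony@redwoodjs.com' },
    dom: { id: 2, name: 'Dom Saadi', email: 'dom@redwoodjs.com' },
  },
}

// Friendly names beat positional indexes:
console.log(scenario.user.dom.id) // 2
console.log(Object.keys(scenario.user).length) // 2
```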
# Named Scenarios
You may have noticed that the scenario we used was once again named
standard. This means it's the "default" scenario if you don't specify a different name. This implies that you can create more than one scenario and somehow use it in your tests. And you can:
export const standard = defineScenario({ user: { anthony: { name: 'Anthony Campolo', email: 'anthony@redwoodjs.com' }, dom: { name: 'Dom Saadi', email: 'dom@redwoodjs.com' } }, }) export const incomplete = defineScenario({ user: { david: { name: 'David Thyresson', email: 'dt@redwoodjs.com' }, forrest: { name: '', email: 'forrest@redwoodjs.com' } } })
scenario('incomplete', 'retrieves only incomplete users', async (scenario) => { const list = await users({ complete: false }) expect(list).toMatchObject([scenario.user.forrest]) })
The name of the scenario you want to use is passed as the first argument to
scenario() and now those will be the only records present in the database at the time the test runs. Assume that the
users() function contains some logic to determine whether a user record is "complete" or not. If you pass
{ complete: false } then it should return only those that it determines are not complete, which in this case includes users who have not entered their name yet. We seed the database with the scenario where one user is complete and one is not, then check that the return of
users() only contains the user without the name.
# Multiple Models
You're not limited to only creating a single model type in your scenario, you can populate every table in the database if you want.
export const standard = defineScenario({ product: { shirt: { name: 'T-shirt', inventory: 5 } }, order: { first: { poNumber: 'ABC12345' } }, paymentMethod: { credit: { type: 'Visa', last4: 1234 } } })
And you reference all of these on your
scenario object as you would expect
scenario.product.shirt scenario.order.first scenario.paymentMethod.credit
# Relationships
What if your models have relationships to each other? For example, a blog Comment has a parent Post. Scenarios are passed off to Prisma's create function, which includes the ability to create nested relationship records simultaneously:
export const standard = defineScenario({ comment: { first: { name: 'Tobbe', body: 'But it uses some letters twice', post: { create: { title: 'Every Letter', body: 'The quick brown fox jumped over the lazy dog.' } } } } })
Now you'll have both the comment and the post it's associated to in the database and available to your tests. For example, to test that you are able to create a second comment on this post:
```javascript
scenario('creates a second comment', async (scenario) => {
  const comment = await createComment({
    input: {
      name: 'Billy Bob',
      body: "A tree's bark is worse than its bite",
      postId: scenario.comment.first.postId,
    },
  })

  const list = await comments({ postId: scenario.comment.first.postId })

  expect(list.length).toEqual(Object.keys(scenario.comment).length + 1)
})
```
postId is created by Prisma after creating the nested
post model and associating it back to the
comment.
Why check against
Object.keys(scenario.comment).length + 1 and not just
2? Because if we ever update the scenario to add more records (maybe to support another test), this test will keep working, because it only assumes what it itself did: add one comment to the existing count of comments in the scenario.
# Which Scenarios Are Seeded?
Only the scenarios named for your test are included at the time the test is run. This means that if you have:
posts.test.js
posts.scenarios.js
comments.test.js
comments.scenarios.js
Only the posts scenarios will be present in the database when running
posts.test.js, and only the comments scenarios will be present when running comments.test.js. The
standard scenario will be loaded for each test unless you specify a differently named scenario to use instead.
During the run of any single test, there is only ever one scenario's worth of data present in the database: users.standard or users.incomplete.
# Wrapping Up
So that's the world of testing according to Redwood. Did we miss anything? Can we make it even more awesome? Stop by the community and ask questions, or if you've thought of a way to make this doc even better then open a PR.
Now go out and create (and test!) something amazing! | https://redwoodjs.com/docs/testing | CC-MAIN-2021-17 | refinedweb | 8,365 | 52.6 |
The Arduino IDE is just awful. Period. Writing sketches is awful as well. Sketches make me sick. Luckily, we can build our solutions from the command line, directly from the main() function, even using our favorite code editor, Vim for me. A more in-depth tutorial on this topic can be found here.
Installing Arduino
We need to download the latest Arduino release, 1.8.5 as of this writing. Decompress it wherever you want, then enter the uncompressed folder and install it with:
./install.sh
Note: Uninstall any previous installation first. Older releases will work, but they won't compile my tweaks.
Installing Arduino.mk
For this workflow I'd rather install this tool from the Linux Mint repositories:
sudo apt install arduino-mk
I guess that's all for the needed tools. If anything is missing I'll update this section.
Makefile for our solution
We need to write a Makefile for each of our projects:
ARDUINO_DIR = /your/home/dir/arduino-1.8.5
BOARD_TAG = uno
ARDUINO_PORT = /dev/ttyUSB*
ARDUINO_LIBS =
CFLAGS_STD = -std=c99 -Wall
CXXFLAGS_STD = -std=gnu++11 -Wall -DUNO
include /usr/share/arduino/Arduino.mk
The '-DUNO' flag isn't needed at all, but as I'm using both the Leonardo and Uno boards, I use it to tell them apart as easily as possible.
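As a sketch of what that flag buys (hypothetical code, not from the post), the preprocessor can select board-specific values at compile time; without -DUNO on the compiler command line, the non-Uno branch is used:

```cpp
#include <cstring>

// Select board-specific values at build time. -DUNO comes from
// CXXFLAGS_STD in the Makefile above; the values here are illustrative.
#ifdef UNO
#define BOARD_NAME "Uno"
static const int STATUS_LED = 13; // Uno's built-in LED pin
#else
#define BOARD_NAME "Leonardo"
static const int STATUS_LED = 13; // Leonardo's built-in LED is also pin 13
#endif
```

The same trick works for anything that differs between the boards, such as using Serial versus Serial1.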
Test it!
Before we can upload our executable we must add our user to the dialout group, so that the serial port is accessible to us:
$ sudo adduser <your_user> dialout
Then log out and log in into your session again.
Last step is to write a test program. Let’s call it main.cpp:
#include <Arduino.h>

int main(void)
{
    init(); // initialize the Arduino's environment

    pinMode(LED_BUILTIN, OUTPUT);

    while (1) {
        digitalWrite(LED_BUILTIN, LOW);
        delay(100);
        digitalWrite(LED_BUILTIN, HIGH);
        delay(100);
    }

    return 0;
}
To compile, write in the console:
make
and for uploading it:
make upload
And the serial monitor?
We have two options:
- The arduino.mk includes one: make monitor (to exit ctrl-a + ctrl-k).
- But if you don’t like it, then you might use one of the many serial monitors available in Linux. I like GtkTerm because you can close and open the USB port on demand (for when you’re going to upload the code, for example) without leaving it.
Greetings!
2 responses to “Building a solution from the command line”
For non-devs or simply lazy devs, there is a tool named amake that hides the complicated part of the Arduino IDE command line and has some nice features:
* Code is fully compatible with the Arduino IDE platform, so no more tweaking the code to share it with users of the Arduino IDE.
* It uses the Arduino IDE, so you have no problem adding support for libraries/boards/programmers: just fire up the Arduino IDE and use the default tools to manage libraries, boards and programmers.
* It's a smart wrapper of the Arduino IDE command line interface with some nice features.
For example to verify/compile a sketch is just this:
amake -v [board] [file]
Yes, the parameters are optional; once you compiled it and it worked, you can keep doing just
amake -v
The same for uploading:
amake -u [board] [file] [serial_device]
Even so, it has an auto-detect feature for the serial device, so most of the time you don't need to pass the /dev/ttyUSB0 argument.
Take a peek at:
You can integrate it with other IDEs, for example I use it with Geany.
Cheers.
Thank you for the advice! Before I try it I need to ask you whether this tool can handle multi-file projects.
NetHackWiki:Community Portal
Welcome! Use this page to discuss general topics with NetHackWiki members. If you want to discuss a topic that is specifically about NetHack, please consider discussing it at the forum. Purely technical issues (bugs, feature requests, etc.) may be reported at Technical issues.
Another way to contact NetHackWiki is to leave comments on the talk pages of individual users.
Other NetHack communities include:
- rec.games.roguelike.nethack - the newsgroup, see rgrn
- #nethack and #nethackwiki on irc.freenode.net - see Freenode
Archives of this page: 1, 2, 3, 4
Start a new section on the bottom of this page for each topic.
Contents
- 1 Watchlists missing on new wiki
- 2 We need a new logo
- 3 Problems on this Wiki
- 4 Move announcements at various places
- 5 Unified login (in the remote future)
- 6 Announcing edits on IRC?
- 7 Google and optimization
- 8 Source code syntax highlighting?
- 9 Site name "NetHack Wiki" or "NetHackWiki", important for Google search
- 10 Too many forums?
- 11 Let's audit advice
- 12 Privacy policy
- 13 Printed Nethack Guidebook
- 14 Help update incoming links
- 15 Talk page signature checker
- 16 Renaming the old site back to Wikihack
- 17 Better "Welcome" logo?
- 18 People have trouble with our captchas
- 19 Standard format for the "Messages" section.
- 20 Wikia Links
- 21 "Did you know" section
- 22 Incorrect title on left menu
- 23 Merge "Ask an expert" with Forum?
- 24 Encyclopedia entries
- 25 Improving the special level maps
- 26 Improving our click-through rate
- 27 Neat ideas to copy
- 28 Collapsable sidebar?
- 29 Default font used in nethack?
- 30 Nethack welcome box:
- 31 Role articles
- 32 Password issues
- 33 Putting alternate tilesets in articles
- 34 Handling monster color changes in UnNetHack
- 35 Template:YouTubePlayer
- 36 The Front Page - Can we have the Table of Content section back?
- 37 Medica Ossium
- 38 New page please: Command line arguments
- 39 Tiles
- 40 I want to play on a tournament or server that uses tiles but I can't find one. help?
- 41 This page may need to be updated for NetHack 3.6.0
- 42 Macro implementation: nh 3.6, Windows 10
Watchlists missing on new wiki
If you just came here from Wikia, you may have noticed that your watchlist is empty. Apparently, the MediaWikiAuth extension doesn't import it automatically. However, you can fix the problem easily:
- Go to Special:Watchlist/raw at Wikia. If you're logged in, you should see a textbox listing the contents of your watchlist on the old wiki. Select it all and copy it to your clipboard.
- Go to the corresponding page on the new wiki and paste in what you just copied. Then click "Update watchlist".
Happy hacking! --Ilmari Karonen 01:08, 10 November 2010 (UTC)
PS. It looks like user preferences don't get imported either, even though the documentation says they should be. Alas, I know of no such easy shortcut to fixing that. --Ilmari Karonen 01:18, 10 November 2010 (UTC)
We need a new logo
Since we're no longer at Wikia, the current logo (which spells "wiki@") seems a bit inappropriate. We should pick a new one. To kick things off, here's my quick suggestion based on the old idea (monsters spelling out the name of the site). There's also an SVG version,
although the SVG renderer here makes an awful mess of it. --Ilmari Karonen 02:12, 10 November 2010 (UTC)
- Looking good! :) Could be a bit less rainbow'ish, though, but that's just my opinion. —ZeroOne (talk / @) 09:05, 10 November 2010 (UTC)
Problems on this Wiki
- Section moved to NetHackWiki:Technical issues. --Ilmari Karonen 16:57, 1 December 2010 (UTC)
Move announcements at various places
Done: rgrn and most nethack forums.
Done: LiveJournal NetHack community. --paxed 16:17, 11 November 2010 (UTC)
Most places in [1] still need announcement postings:
- Adom Forum:
- Angband forum:
- Dwarf Fortress forum:
- Kingdom of Loathing forum:
- Temple of Roguelikes:
- dplusplus's blog:
BSDForen.de Forum:
Done: FaceBook: (The NetHack Rules! -group with 860+ members)
I'm growing a bit wary of administrivia. Feel free to beat me to the task. --Tjr 15:19, 11 November 2010 (UTC)
Unified login (in the remote future)
Each of several major NetHack sites has its own user namespace (this wiki, the old wiki, NAO, freenode, NAO forums, other gameservers). In a long term perspective, some form of unified login would be great. Users would benefit from less signup hassle (and less chance of impersonation). For the wiki, it's a great marketing ploy. Especially as we're competing against ourselves at Wikia. Silly as it seems, each signup act drives away visitors.
I realize this is pie-in-the-sky right now, and we have more pressing chores due to The Move.
Colliding usernames will make unified login difficult. We could let Wikiuser play as w:Wikiuser, and Naoplayer edit as n:Naoplayer until people can merge their accounts themselves. Freenode requires some thought. As a first step, I propose adding a warning "You are about to create a username that already exists on NAO/the wiki/freenode/... Please consider choosing a different name unless you are the same person."
See also:
The CentralAuth extension, which Wikimedia uses, has been designed to address just those username collision issues. (That's why it's so complicated; if starting from scratch, it would be much easier to just share the user table between wikis.) However, it's been designed with the assumption that all the wikis are running on the same server(s). I haven't really looked at the code to see if there might be any way to relax that assumption.--Ilmari Karonen 21:29, 11 November 2010 (UTC)
- Ah, scratch that, I misread what the issue was. Still, looking at how CentralAuth handles username conflicts might be useful. --Ilmari Karonen 21:31, 11 November 2010 (UTC)
Announcing edits on IRC?
MediaWiki can be set up to report edits on IRC. It would be kind of nice to have Rodney announce whenever someone edits the wiki, although some checks would probably have to be set up to avoid flooding the channel if someone makes lots of edits in a short time. (Just having it not report edits marked as minor might be a good start.) --Ilmari Karonen 21:53, 11 November 2010 (UTC)
- Please try it out. The usage data will be valueable, if nothing else. (Personally, I'm too old-fashioned to find value in Twitter.)
- Speaking from RSS/Atom experience, Special:RecentChanges isn't the right thing to broadcast - too much traffic. At the very least, offer some filtering. E.g. keywords: I expect a Crawl person will want to follow all Crawl-related changes, and so on. Tjr 15:17, 13 November 2010 (UTC)
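For reference, the MediaWiki side of such a setup is only a few LocalSettings.php lines. This is a hedged sketch based on the 1.16-era $wgRC2UDP* settings; the address, port, and relay details are assumptions. MediaWiki itself only emits a UDP feed of recent changes; a separate relay bot, such as Rodney, has to read that feed and echo it to the IRC channel:

```php
# Sketch only: recent-changes-to-UDP settings; the values are assumptions.
$wgRC2UDPAddress  = '127.0.0.1'; // host where the IRC relay bot listens
$wgRC2UDPPort     = 9390;        // UDP port the relay reads from
$wgRC2UDPOmitBots = true;        // skip bot-flagged edits to cut noise
```

Filtering out minor edits, as suggested above, would have to happen in the relay bot rather than in MediaWiki's feed settings.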
Google and optimization
Google was by far the largest source of traffic to the old site, so our Google ranking is very important. How can we raise it?
- Original and fresh content. When two sites offer identical content, one of them is downranked as a copycat. We need to be different from Wikia, and we should offer Googlebot fresh stuff. Can we please have something like Special:AncientPages, except that it shows only pages that have not been modified since Google spidered them, with the pages most visited by humans sorted first?
Why did our main page see 1722 hits already, but all other articles go virtually unread? The page view counts were much closer on the old wiki. Special:PopularPages, w:c:nethack:Special:Mostvisitedpages. I propose adding a box to our main page that explains briefly what NetHack is, aimed at people stumbling into the wiki.
--Tjr 22:03, 11 November 2010 (UTC)
- The special page idea sounds interesting. I suspect one reason for the main page being so far ahead here is that all the move announcements link to it. People get curious, click the link, go "oh, it's just like the old wiki, just a different skin" and leave. Things ought to even out once the idly curious folks are replaced by people actually interested in looking something up. --Ilmari Karonen 22:26, 11 November 2010 (UTC)
BTW, I think the reason - ranks so high is because MediaWiki uses it as a dummy page name for loading some site JS and CSS code. Although the fact that those hits get counted at all could be considered a bug. (At least, I hope that's it. The other possibility is that there are some broken links somewhere actually sending people to that page.) --Ilmari Karonen 22:32, 11 November 2010 (UTC)
- There are broken links, see above near. Tjr 22:40, 11 November 2010 (UTC)
- Don't Panic! Most of the visitors right now are those who already know the wiki, but have seen the move announcement in RGRN, LJ, whatever. They just pop in, put the new link in their bookmarks, and go on with their lives. --paxed 07:37, 12 November 2010 (UTC)
- Did you know? section from Wikipedia (probably starting off without any age or length limits, basically just a regularly updated list of curious NetHack trivia). It shouldn't be hard to manage, and should serve as a nice hook to lure casual visitors in.
- Ps. I already made some small tweaks to the main page, including greeting logged-in users by name and notifying them of full/new moon like the game does. I also wrote a quick little template for displaying moon phases like NAO does (although my version so far only has one image for each of the 8 phases nethack uses internally). --Ilmari Karonen 08:25, 12 November 2010 (UTC)
- Moon phase does not show up for me.Tjr 17:14, 12 November 2010 (UTC)
- You mean on the main page or at Template:Moon phase? The version I put on the main page only says anything when the moon is new or full, just like in the game. I was thinking of putting (something like) {{moon phase}} on the main page too, but hadn't got around to the that yet. --Ilmari Karonen 19:32, 13 November 2010 (UTC)
- rel="nofollow". I changed most high-ranking incoming links, but virtually all wikis make external links useless. This is the reason for our abysmal Google rank. (Forums fare better.) The only solution I can see is to ask the respective admins to add us to their interwiki table, and then to make those links interwiki links. Of course, we have to link to them with a "good" link first... --Tjr 03:51, 14 November 2010 (UTC)
- The list of incoming links. Admins go to "view deleted". Our former wikihost should not see this.
- SEO stuff
- Valid HTML: there are validation errors on the main page.
Robots.txt needs a sitemap: "Sitemap:"
- See mw:Manual:GenerateSitemap.php. This should probably be run from cron or something. --Ilmari Karonen 10:14, 18 November 2010 (UTC)
- now redirects to nethackwiki.com
- Template:Timeline_of_NetHack generates
- Looks like the EasyTimeline extension was broken before, and some of the broken imagemaps are still cached. Bumping $wgTimelineSettings->epochTimestamp should fix it. --Ilmari Karonen 13:02, 18 November 2010 (UTC)
- Huh? google:link: says "Your search - link: - did not match any documents." to me. --Ilmari Karonen 13:19, 18 November 2010 (UTC)
- It shows up in the Google Webmaster Tools. I've requested a "show as the Googlebot sees it". Please tell me your Google account, and I'll add you to webtools. --Tjr 14:03, 18 November 2010 (UTC)
- shows the offending page. -Tjr 06:55, 20 November 2010 (UTC)
- The posts are still reachable through non-disallowed links via Category:Watercooler and Special:RecentChangesLinked/Category:Watercooler. Also, having sitemaps enabled should help search engines find them. --Ilmari Karonen 13:02, 18 November 2010 (UTC)
- Fixing Forum:Watercooler would be better because that would bestow page rank onto the forums. --Tjr 14:03, 18 November 2010 (UTC)
- Top Google searches need to be articles: net hack, nethack, nethacker, roguelike, wiki hack, slashem, nethack wiki, sokoban level, nethack spoilers, nethack spoiler, cursed armor, game nethack, nethack game, nethack download, download nethack, nethack windows, nethack for windows, windows nethack, nethack tiles, online nethack, nethack online, nethack guide, nethack sokoban, sokoban nethack, nethack pet, dungeon siege class, nethack armor, linux nethack, nethack commands, play nethack, nethack android, android nethack, nethack for android, qt nethack, nethack qt, cursed artifacts, zork zero, psp nethack, nethack for psp, nethack psp, nethack level, nethack.alt.org. I guess the pure download links should redirect to an article about the Devteam site.
- Our top 6 key words and their relevance are, as reported by Google: nethackwiki 100%, edit 63%, nethack 36%, monster 35%, navigate 33%, search 31%.
- Please make {{SITENAME}} report NetHack Wiki. It should match link anchor texts, though...
- Didn't we just go through a whole bunch of work standardizing on "NetHackWiki"? Seems a bit silly to start changing the site name again already. --Ilmari Karonen 13:19, 18 November 2010 (UTC)
- People enter "nethack wiki" and "nethack" into Google. So that's where we need to show up. Google External Keyword Tool. I didn't know better when using NetHackWiki, my fault. Changing it shouldn't be that hard with Special:ReplaceText. --Tjr 14:03, 18 November 2010 (UTC)
- Please hide MediaWiki keywords from bots, e.g. coding them in images/scripts/...: edit, navigate, search, stub, png (in monster template), and the other ones from the top, left, footer, and section edit bar.
- That sounds like a hopeless and futile quest to me. This is a wiki, there will always be wiki-ish things on it. --Ilmari Karonen 13:19, 18 November 2010 (UTC)
I am going to rename the category NetHackWiki into NetHack Wiki.Tjr 11:44, 18 November 2010 (UTC)
- I've bought Google Adwords as a stop-gap measure. Please let me know if you see a way to improve the ads. --Tjr 20:31, 23 November 2010 (UTC)
- Unrelated to the ads, but I just noticed that the main page of this wiki is currently on the seventh position of first page the Google search "nethack wiki". :) —ZeroOne (talk / @) 13:22, 26 November 2010 (UTC)
- Amateurgeek even blogged about the move. I our Google rank sticks even after the various blog updates loose freshness. --Tjr 17:23, 26 November 2010 (UTC)
- At the moment this site is at position 7, after 2 en.wikipedia articles and 5 old site entries. At the moment we appear to be our own worst enemy; shouldn't we start considering removing the old wiki? --IngerAlHaosului 19:57, 12 December 2010 (UTC)
Source code syntax highlighting?
As we can now highlight syntax for several programming languages (Thanks to an extension using geshi), should we change the source code pages to use that? For a short test, see User:Paxed/src geshi test. --paxed 08:48, 17 November 2010 (UTC)
- Yes. --Ilmari Karonen 10:02, 18 November 2010 (UTC)
How would we link to the function definitions in the source files? Geshi only knows the function name, so the links would have to be something like, which would redirect to the correct place in the Source namespace. I already have a list of the functions at User:Paxed/Source Functions, but what would be best format for the function redirects? Source/function_name, or Src/nh343/function_name or what? --paxed 17:35, 18 November 2010 (UTC)
- You mean the text the human user will see? Why not [[Src/feel_cockatrice]]? BTW, your list of source functions is incomplete, and macros are excluded entirely. ---Tjr 20:32, 18 November 2010 (UTC)
- I think something like Source:function_name() (or Source:Nethack 3.4.3/function_name()) might make sense for redirects. --Ilmari Karonen 00:06, 19 November 2010 (UTC)
- BTW, I wonder if it would be better to keep the line numbers hardcoded and somehow kluge GeSHi to skip over them? One nice thing about the way the source files are currently marked up is that adding annotations is really easy and idiot-proof; you can't mess up the line numbering without spending some effort on it. --Ilmari Karonen 00:06, 19 November 2010 (UTC)
- I'm kinda leery about the parenthesis there. Maybe SourceRef:function_name? Also, I've hacked on the hilight extension, see how it looks at User:Paxed/src geshi test2. Biggest problem with it: It currently makes editing page sections impossible. --paxed 08:44, 20 November 2010 (UTC)
- I solved the problem that prevented editing page sections, and I've changed all of the 3.4.3 source files to use this new syntax coloring. --paxed 19:21, 20 November 2010 (UTC)
- Currently the function names link to Source:Ref/function_name, is that good? Comments? If no objections, I'll run a bot to add the redirects... --paxed 15:41, 21 November 2010 (UTC)
Has anyone else noticed that the highlighting seems to make spaces and underscores indistinguishable? I'm using the old MonoBook skin, so that might have something to do with it. -Ion frigate 03:25, 25 November 2010 (UTC)
- Indistinguishable? In what way? I don't see such a thing, even if I switch to monobook. Screenshot? --paxed 21:02, 25 November 2010 (UTC)
- Or rather, underscores seem to show up as spaces; I uploaded a screenshot as Ion frigate 04:02, 26 November 2010 (UTC)
- Sorry, all I can say is that it Works For Me... --paxed 07:11, 26 November 2010 (UTC)
- Perhaps what you're typing in the firefox search field isn't an underscore? Try entering the underscore via the Windows alt-code (underscore is 95). --paxed 07:26, 26 November 2010 (UTC)
- I can replicate the bug if I change the style
#line .de1, #line .de2 { font: 1em/1.2em monospace; }to
font: 1em/1em monospace;. This causes the lowest few pixels of all characters to be cut off, making underscores look like spaces. Perhaps on your browser even the 1.2em line height isn't enough? It might be safer to set the line height to something larger, say 1.4em or more, and, if desired, use negative margins to tighten up the line spacing instead. (For example,
#line .de1, #line .de2 { font: 1em/2em monospace; margin: -0.8em 0; }looks identical to the current style for me, but avoids any risk of characters being cut off.) --Ilmari Karonen 09:55, 26 November 2010 (UTC)
- Apologies if this is a stupid question, but how do I change the font style? My knowledge of firefox/mediawiki is somewhat lacking. -Ion frigate 04:25, 27 November 2010 (UTC)
- Edit User:Ion frigate/vector.css (assuming you're using the Vector skin. You can find the link to each of the skin specific css files in your Preferences -> Appearance -> Custom CSS) put something what Ilmari suggested above. (I tested this with
#line .de1, #line .de2 { font: 1em/1.4em monospace !important; }) --paxed 09:28, 27 November 2010 (UTC)
- Cool, that fixed it. Thanks! -Ion frigate 19:38, 27 November 2010 (UTC)
- Actually, I was talking more about putting that style in MediaWiki:Common.css so it applies to everybody. Which I've just done. --Ilmari Karonen 18:41, 1 December 2010 (UTC)
A lot of GeSHi links seem to point nowhere. I've put the list of 404s found by Googlebot at [[NetHackWiki:Community_Portal/geishi404]]. --Tjr
Site name "NetHack Wiki" or "NetHackWiki", important for Google search
People overwhelmingly search for "nethack" or "nethack wiki", and don't find us. Google has us keyworded and well-ranked for "nethackwiki". There seems to be no connection between "nethackwiki" and either "nethack wiki" or "nethack". (I'm sorry, I was not aware of this disconnect ahead of time.) This is a real problem because Google was the main referrer on the old wiki.
Monthly searches on Google: 60,500 for "nethack", 165,000 for "net+hack", 2,900 for "nethack+wiki", surprisingly same for "wikihack", negligible for "nethackwiki". Source
Our keywords are, as reported by Google Webmaster Tools: nethackwiki 100%, edit 63%, nethack 36%, monster 35%, navigate 33%, search 31%.
The goal is to make us relevant for "nethack", and for "nethack+wiki".
IMHO things can still be fixed at this time, and should be. Global on-site replacement is easy with Special:ReplaceText. I can edit the various other wikis myself. Most visitors won't even notice. But I would hate to annoy those helpful individuals who switched external links with another requested change. (Not very many have done so using NetHackWiki as anchor text.) I don't know how bad the mis-match between link anchor text + domain "nethackwiki" versus page title + contents "nethack wiki" is.
What are your opinions? --Tjr 14:45, 18 November 2010 (UTC)
- 1) "net+hack" we can ignore, they're not looking for us. "nethackwiki" doesn't get more searches because there hasn't been a "nethackwiki". (the old site was called "wikihack", remember?) I'm sure "nethackwiki" will get more searches in the future when people realize that's our name. IMO, using extra spaces will just annoy, maybe create problems/annoyances with some things (category rename, text renames, etc). And most of the links that link to us from reputable roguelike sites will be either "NetHackWiki" or "NetHack Wiki", so eventually google will start ranking us higher than the old site. (domain name match!) --paxed 14:54, 18 November 2010 (UTC)
Too many forums?
Do we really need a "Help Desk" forum and the Talk forum? I'm thinking maybe we could get rid of the help desk one... Would require moving the articles in the other one into the Talk forum. --paxed 15:30, 24 November 2010 (UTC)
- Good idea. --Tjr 15:53, 24 November 2010 (UTC)
- Done using Special:ReplaceText; however, Special:Statistics shows the commands just sitting in the Manual:job queue. --Tjr 21:32, 6 December 2010 (UTC)
- The runJobs-script runs every 3 hours. --paxed 21:35, 6 December 2010 (UTC)
- It might be worth speeding that up a bit. Every 5 minutes or so could be fine, or even every minute: it's really quite fast if there are no jobs to run. Use the --maxjobs option to make sure it won't run too long if there are lots of jobs. --Ilmari Karonen 15:34, 7 December 2010 (UTC)
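A crontab sketch of that suggestion (the install paths and schedule are assumptions; --maxjobs caps how much work a single run can do, as proposed above):

```
# Run the MediaWiki job queue every 5 minutes, at most 300 jobs per run.
# Paths are assumptions; this is a cheap no-op when the queue is empty.
*/5 * * * * php /var/www/nethackwiki/maintenance/runJobs.php --maxjobs=300 >/dev/null 2>&1
```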
Let's audit advice
The wiki seems to contain a lot of bad advice. Most of its bad reputation is due to that. Advice should make clear its assumptions on player skill, conducts, and available in-game resources.
Let's audit all pages for junk.
I suggest starting with special:popularpages, and adding [[Category:AdviceAuditedBy|YourUsername]] to pages that you have checked.
Thoughts? Better ideas? --Tjr 00:50, 29 November 2010 (UTC)
- From a technical viewpoint, a template (e.g. {{Audited|YourUsername|~~~~~}}) would seem better. --Ilmari Karonen 01:06, 29 November 2010 (UTC)
- Not in stock MediaWiki, but I think the DynamicPageList extension should let you do that. --Ilmari Karonen 13:16, 2 December 2010 (UTC)
- While I think it's a great idea for us to have a concerted effort to audit advice found in this wiki, I don't think it's a good idea to have a "Audited by user X" sticker prominently shown on the page, since that has very little meaning to a reader unless he knows exactly who user X is and trusts his opinion. Also, anybody can change that advice, making that "audited sticker" practically worthless once the advice has changed. Of course, the reader can look at the history and see what changes have been made to the advice since it was lasted audited, but he might as well just go back and see who originally wrote the advice if he's going to bother looking that deep into the history of the article.
- IMO, a more low profile list of who audited what that is mainly visible only to the auditors would be a better way of organizing this effort, since it would reduce the chance of the readers having a false confidence in the advice given in the audited article. A simpler way of saying it: Even though an article has been audited, it's still an article on a wiki.
- --Dptr1988 21:07, 24 December 2010 (UTC)
Privacy policy
The footer on all of our pages carries a link to NetHackWiki:Privacy policy, but there's currently no page there. Should we
- a) remove the link, or
- b) write a privacy policy, perhaps based on Wikia's or Wikimedia's policies?
--Ilmari Karonen 13:11, 2 December 2010 (UTC)
- It might be better to have some boilerplate. I don't see the need for a privacy policy, but I'm not a lawyer. (Still, why plaster the entire place with those footer links? Wouldn't links to key navigation pages be better, such as strategy etc?) --Tjr 13:21, 2 December 2010 (UTC)
Printed Nethack Guidebook
What would it take to get a comprehensive printed guide to nethack based on this wiki?
A 500 page tome of all nethack sounds like just about the best book you could ever own. --99.241.123.250 19:16, 2 December 2010 (UTC)
- It looks like it takes only mediawikiwiki:Extension:Collection and the tool to make a custom book. In practical terms, you'd likely want to make the auditing advice project finish first. Also, I'm not sure how much would be lost if links aren't handled gracefully. --Tjr 21:53, 4 December 2010 (UTC)
Help update incoming links
Please help me checking and updating/contacting web sites that link to the old wiki. (I've done some 250 URLs.)
Tasks:
- Find out who is in charge of the site, and check they haven't been bugged before.
- Find all pages linking to the old wiki
- Email them once, politely asking them to update. Append a complete list of bad links.
- Mark the site(s) in the progress list above.
- Thank them if they do update.
Tell them who you are, what the wiki has that is relevant to site's audience, where they can find that relevant content, and why it's relevant to their visitors. Use your own judgement. I usually write:
- The wiki moved here,
- Please update,
- List of affected URLs,
- The old site is a trap to surfers, we can't do anything about it,
- Really appreciate sending people to the active wiki (and Google ranking)
- Why the move
- Who you are (I say "Tjr, admin at the wiki")
More resources: Yahoo site explorer (please don't bug somebody repeatedly!), and link-building howtos 1, 2.
--Tjr 21:53, 4 December 2010 (UTC)
Talk page signature checker
I've added a script to warn users if they edit a talk page or a forum without signing their edit with
--~~~~. You may need to clear your cache before it starts working. Logged-in users can turn it off in their preferences (under "Gadgets") if they find it more annoying than useful. Any feedback or bug reports are very much appreciated.
One known bug is that the script doesn't work on pages in the NetHackWiki namespace, such as here on the Community Portal. The problem is that this namespace contains both discussion and content pages, and it's not trivial to tell them apart. One possible solution would be to simply hardcode a list of pages that should be treated as talk pages. A better one might be to query the MediaWiki API to tell if the page contains __NEWSECTIONLINK__. --Ilmari Karonen 22:01, 7 December 2010 (UTC)
OK, I think I've made it work right on NetHackWiki pages too. The tricky case is when editing an existing section on such a page, since then an API query is needed to load the complete text of the page. --Ilmari Karonen 00:15, 8 December 2010 (UTC)
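The core check such a script performs can be sketched as a small helper (hypothetical code, not the actual gadget): scan the text about to be saved for either raw signature tildes or an already-expanded MediaWiki signature timestamp.

```javascript
// Hypothetical sketch of the signature check; not the actual script
// running on this wiki.
function looksSigned(text) {
  // Raw "~~~~" (three to five tildes) at the end of the edit...
  var tildes = /~{3,5}\s*$/;
  // ...or an expanded signature timestamp like
  // "22:01, 7 December 2010 (UTC)".
  var timestamp = /\d{2}:\d{2}, \d{1,2} [A-Z][a-z]+ \d{4} \(UTC\)\s*$/;
  return tildes.test(text) || timestamp.test(text);
}
```

A real gadget would run something like this on the new text of a talk-page or forum edit before saving, and pop up a warning when it returns false.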
Renaming the old site back to Wikihack
I've thought about it again, and I think Bhaak is right. It's too confusing to have two identically branded sites, and I'm not sure that renaming it to anything other than Wikihack wouldn't get reverted. My original motivation for introducing NetHackWiki was to gently prepare readers for the move, without really thinking through what would happen if Google kept ranking the old site first. With the move notices gone, that point is moot now.
We can't win the fight for the keyword "wikihack" anyway. (We're on position 58.) Let's win "nethackwiki". --Tjr 03:28, 10 December 2010 (UTC)
- Why not suggest it on the Community Portal there? If nobody objects in a week or so, you can assume it has consensus. :) --Ilmari Karonen 04:19, 10 December 2010 (UTC)
- +1, what he said. Competing for "WikiHack" also doesn't make sense, as mostly only those who already know it will look for it. I guess all others will google for "nethack wiki". --Bhaak 13:53, 10 December 2010 (UTC)
Better "Welcome" logo?
The current "Welcome" logo should be redone. It's got ugly edges, and it could match the wiki logo better... --paxed 16:28, 13 December 2010 (UTC)
- OK, I have created an image called File:Welcoming party.png, presented at right. How about that? :) —ZeroOne (talk / @) 23:53, 13 December 2010 (UTC)
- Bhaak, the font probably should be the same but I think only User:Ilmari Karonen can make it look the same as he is the one who made the current logo and thus knows the correct settings. But let's not rush as no decisions have been made yet. The current version can be considered just a mock-up if you wish.
- Tjr, we can discuss the message contents at Template talk:Welcome.
- —ZeroOne (talk / @) 20:22, 14 December 2010 (UTC)
- There's an editable SVG version of the site logo at File:Nethackwiki-logo.svg. The font is DejaVu Sans Mono, which I think is indeed basically identical to Bitstream Vera Sans Mono. I didn't have any particular reason for picking it, except that it happened to be the nicest looking monospaced font I had installed. (Also, the colors I used are a bit different from the standard CGA colors used in the site CSS; you can see them on the third line of this page. In particular, I made the yellow in the logo fairly dark so that the c wouldn't stand out too much, and brightened the blue to make the e more visible.)
- As for the proposed logos, I think both the original design and ZeroOne's new version are nice. If anyone wants to see the original heart design redone in SVG, I could certainly do that, but I think a new design might be a welcome (pun not intended) change. --Ilmari Karonen 13:24, 15 December 2010 (UTC)
People have trouble with our captchas
Reading s:w:awa:Talk:Moved_wikis#wikis_which_need_to_maybe_reconsider_login_procedures, it seems a reasonable fraction of all visitors have problems with the captchas. As a quick fix, I propose rewording it as "Which symbol represents a wand in-game". --Tjr 04:04, 21 December 2010 (UTC)
- I think that a new system using FancyCaptcha with a mix of English words and NetHack terms would be better. Andrew 04:53, 21 December 2010 (UTC)
- ...or perhaps even "Which symbol represents a wand in NetHack?" --Ilmari Karonen 06:40, 21 December 2010 (UTC)
- This is a NetHackWiki after all, so what other symbols would we be asking? Also, it's really really simple to actually look up the symbols on the wiki itself... --paxed 06:55, 21 December 2010 (UTC)
- ...even on the Main Page, in fact. But I do think adding "in NetHack" to the questions would still be a good idea, not so much because there should be any ambiguity without it, but simply because it sounds better IMO. --Ilmari Karonen 07:01, 21 December 2010 (UTC)
- What about "bad behavior"? It promises to reduce spam without inconveniencing users at all, by filtering on the user agent http info. --Tjr 14:46, 21 December 2010 (UTC)
- That can't combat spam created by actual humans (such as spam bids via Amazon's Mechanical Turk). Our captcha doesn't inconvenience people who know anything at all about the subject. And if you know nothing about NetHack, you should be able to look the information up on the wiki. Also, there is no captcha after registering. Frankly, I don't want to remove the captcha, as I believe this has been an overblown reaction from someone who was just bitching because he didn't know the answer and was too lazy to look it up himself. --paxed 15:05, 21 December 2010 (UTC)
- Sorry. I thought of an additional line of defence.
- I think it's an overblown reaction with a small grain of truth in it. --Tjr 15:16, 21 December 2010 (UTC)
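For context, question-style captchas like the one discussed above are normally configured through the ConfirmEdit extension's QuestyCaptcha module, so the proposed rewording would be a small LocalSettings.php change. This is a sketch only: NetHackWiki's actual configuration isn't public, and the file path reflects MediaWiki installs of that era.

```php
// LocalSettings.php (sketch, ~MediaWiki 1.16-era style)
require_once "$IP/extensions/ConfirmEdit/QuestyCaptcha.php";
$wgCaptchaClass = 'QuestyCaptcha';
$wgCaptchaQuestions[] = array(
    'question' => 'Which symbol represents a wand in NetHack?',
    'answer'   => '/',
);
```

Multiple entries can be appended the same way, and ConfirmEdit picks one at random per challenge.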
Standard format for the "Messages" section.
I've noticed a variety of different formats used for lists of NetHack messages, but haven't found a standard format. Is there a standard format that I have missed? If not, is there a good place to discuss this issue?
Most places where I've seen messages being listed, they are usually in a table or "definition list" type of format, both of which are, IMO, interchangeable without losing any information or ease of use. I prefer the "definition list" way of doing it, since it appears to be a little more flexible, the messages stand out more and there is space for a bigger description of the message.
Also, would there be any benefit to using templates to list individual messages? Not only would that allow for a standard representation of messages, it could also allow for easily cross-referencing messages that appear on multiple pages, or other kinds of automated features. IMO, it would be nice to have a single source dedicated to listing all types of messages and fully explaining them, and then have other pages refer to those messages to prevent unnecessary duplication.
--Dptr1988 20:54, 24 December 2010 (UTC)
- Yes, a standard format for messages would be nice. I also like the definition list format, even if it has some minor disadvantages (like the fact that colons in messages can mess it up unless encoded). It would also be nice if we could automatically generate anchors for messages (like MediaWiki generates for section headings) so that one could link to them; I suppose a template could be made to do that. I could look into it a bit later. --Ilmari Karonen 09:51, 25 December 2010 (UTC)
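One possible shape for such a template, sketched below. Neither the template name nor the markup exists on the wiki; the sketch leans on the built-in {{anchorencode:}} parser function, which produces the same anchor encoding MediaWiki uses for section headings.

```wikitext
<!-- Hypothetical Template:Message – renders one game message as a
     definition-list entry with a linkable anchor. -->
<span id="{{anchorencode:{{{1}}}}}"></span>; "{{{1}}}"
: {{{2|''No description yet.''}}}
```

A page would then write {{Message|You feel hungry.|Explanation goes here.}}, and other pages could link directly to the anchor, e.g. [[Some page#You_feel_hungry.]].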
Wikia Links
There are many links that point to help pages and community central pages on Wikia such as the links in NetHackWiki:Policy and NetHackWiki:Administrators. Is this intentional or just temporary? --99.239.146.253 08:36, 25 December 2010 (UTC)
- The latter, mostly. We should fix them to point to corresponding pages here, or rewrite the text they're in so that they're not needed. --Ilmari Karonen 08:52, 25 December 2010 (UTC)
- I removed the Wikia links from NetHackWiki:Administrators, and edited NetHackWiki:Policy to clarify that we're no longer part of Wikia (but left the links in for now, since they still describe sensible wiki behavior rules). --Ilmari Karonen 09:39, 25 December 2010 (UTC)
- A full list of pages with Wikia interwiki links ("[[w:c:something:article]]") other than user and user talk pages is: Ancient_Domains_of_Mystery, Dogley_Dimension, Douglas_Adams, Freenode, Vi, Talk:Main_Page/Archive1, Talk:Role_difficulty, NetHackWiki:Community_Portal, NetHackWiki:Community_Portal/Archive2, NetHackWiki:Community_Portal/Archive4, Forum:Main_page_and_skin_changes.
- Ordinary link syntax can be found with Special:Linksearch. --Tjr 19:33, 27 January 2011 (UTC)
- And the string "ikia.com": Curses_interface, Freenode, Graphical_user_interface, Internet_Relay_Chat, Talk:Default_tileset_scaled_to_32x32, Talk:Main_Page/Archive1, Talk:Nethack.alt.org, NetHackWiki:About, NetHackWiki:Community_Portal, NetHackWiki:Community_Portal/Archive1, NetHackWiki:Community_Portal/Archive3, NetHackWiki:Community_Portal/Archive4, NetHackWiki:Community_Portal/Archive4, NetHackWiki:Technical_issues, NetHackWiki_talk:Featured_articles, MediaWiki:Monobook.js, MediaWiki_talk:Common.js, Template:Gameinfo, Template_talk:PD, Template_talk:Wikipedia, Source_talk:Qt_xpms.h, Forum:Main_page_and_skin_changes, Forum:Number_of_articles. --Tjr 20:39, 27 January 2011 (UTC)
- Most pages on that list are either corrected, extremely old, or legitimately linking to Wikia. The exception is the Forum pages - I'm not sure about the best solution. Perhaps deleting them outright? --Tjr 22:21, 27 January 2011 (UTC)
"Did you know" section
Ilmari Karonen suggested a did you know section, as Wikipedia has it. I think that's a good idea. What would it take to implement it? -Tjr
- The only complaint I can see is that people would not want the main page to spoil them on topics they did not specifically look up. --99.239.146.253 20:57, 27 December 2010 (UTC)
- I think we should be able to minimize that issue by choosing what to show and how to phrase it. The idea isn't to plaster blatant spoilers on the main page, but more just to entice readers to look deeper if they want. So, for example, not "Did you know that eating a lizard corpse can cure stoning?", but "Did you know that there are dozens of ways to be killed by a cockatrice?". --Ilmari Karonen 18:53, 1 January 2011 (UTC)
- Anyway, I went and started collecting possible hooks at NetHackWiki:Did you know?. Please add more. Once we have a bunch, we can start putting them on the main page. --Ilmari Karonen 20:40, 2 January 2011 (UTC)
It seems the correct title of this wiki is NetHackWiki without a space. It is used in that form throughout the wiki, but on the left menu, the link to the main page lists the title as NetHack Wiki with a space. --99.239.146.253 20:38, 27 December 2010 (UTC)
Maybe the link should be "Main page". That is more descriptive anyway. --99.239.146.253 02:27, 2 January 2011 (UTC)
- Hmm, the mis-link was an attempt to gain more Google relevance for the keyword phrase "nethack+wiki". A lot more people search for that than for "nethackwiki". Ideally, we'd move the Main page to nethack_wiki and have some link to it on each page, using "nethack wiki" as anchor text. --Tjr
Merge "Ask an expert" with Forum?
I think the "Ask an expert" -page is kinda useless; maybe we should get the intro from that page and merge it with the Forum? --paxed 09:11, 2 January 2011 (UTC)
- I agree it should be done. But I don't quite see how without flooding the forum with bumped-up 4-year-old threads. This goes especially for the 4 archive pages. --Tjr 11:37, 2 January 2011 (UTC)
Encyclopedia entries
The same encyclopedia entries are used on several pages. I think it would make sense to put the individual encyclopedia entries in one place, e.g. under the encyclopedia template (for example, Template:encyclopedia/ant), and then you could just use {{encyclopedia|ant}} to get it. (Similar to how Template:Monsym works.) --paxed 13:07, 8 January 2011 (UTC)
- Yes, I think this would be a good idea. {{Encyclopedia-redirect}} also appears to address the same problem, but I think that in this case transclusion (like you suggested) is much better than redirection, especially given the way redirection was handled in {{Encyclopedia-redirect}}.
- --Dptr1988 16:21, 8 January 2011 (UTC)
- Agreed. A special case is jabberwock - I propose we keep the complete poem. --Tjr 13:07, 12 January 2011 (UTC)
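Paxed's proposal could look something like the following sketch. The dispatcher markup and parameter handling are assumptions, not existing wiki code:

```wikitext
<!-- Template:Encyclopedia (dispatcher) – transcludes the per-entry
     subpage, so each quote is stored exactly once. -->
{{Encyclopedia/{{{1|}}}}}

<!-- Template:Encyclopedia/ant – would hold the actual encyclopedia text
     for "ant", wrapped in whatever shared formatting the entries use. -->
```

Articles would then simply write {{encyclopedia|ant}}, analogous to {{Monsym}}; special cases like the full jabberwock poem would just live in their own subpage.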
Improving the special level maps
Currently all the special level maps (eg. in Ranger quest, Sokoban) are nothing but <pre>-blocks. I suggest we use the User:Paxed/ReplaceCharsBlock-extension to give them some color, making it easier to see the map features. --paxed 21:14, 8 January 2011 (UTC)
- That's an excellent idea. --Tjr 13:07, 12 January 2011 (UTC)
- There's only one problem: apparently the altar (underscore) will be hidden due to CSS line-height and such. It's the same problem the C source syntax highlighting had, but the same solution doesn't apply... --paxed 15:05, 16 January 2011 (UTC)
- Problem fixed, check out Barbarian quest. Not all articles have been updated to use that yet. --paxed 20:26, 2 February 2011 (UTC)
Improving our click-through rate
Now that the wiki is running well, we need to focus on how fast newbies find us. Google consistently puts us on the first page for the most important query (nethack wiki). Overwhelmingly, we're in positions 6-10, with a click-through rate of 7%. The market-average CTR in those positions ranges from 13% to 7%.
That means there is a lot of room for improvement, and we need to do some testing and tuning. What do you suggest to put in MediaWiki:Description and MediaWiki:Pagetitle-view-mainpage? --Tjr 00:18, 13 January 2011 (UTC)
- I have a suspicion that the main page title, while possibly helpful for our ranking, may also be hurting our click-through rate precisely because it looks like the kind of "keyword spam" that people have learned to semi-consciously ignore. Something short and simple might be better. Maybe just "NetHackWiki, the NetHack wiki"? --Ilmari Karonen 00:50, 13 January 2011 (UTC)
That backfired -- only 5% CTR and 18% fewer clicks. Any other suggestions? --Tjr 12:34, 26 January 2011 (UTC)
Neat ideas to copy
- Dead Rising Wiki has a chat box right on their main page. #Nethack would certainly help engage drive-by visitors.
- Zelda Temple has an eye-catching main page. tmbw and Talisman Online aren't bad either. I really should ask the maker of TheGreatestGameYouWillEverPlay.
- Semantic mediawiki, to extract all kinds of lists of monsters, properties, NAO players, etc.
- meatball:TourBusMap. Could we perhaps get on the bus tour through interesting wikis?
- thetransitioner explains copyleft in detail, we could just link to it.
- A clustermap of visitors to the Master of Orion forum.
Opinions, anybody? --Tjr 18:38, 26 January 2011 (UTC)
- Those chat boxes tend to be filled with profanities and other excrement. wikipedia:Sturgeon's_Law applies. Maybe just make the IRC channel link more prominent. (And I totally forgot about the cgi-irc gateway I was going to check out...)
- I like the tmbw main page, but if we do something like that, it would be best to create a whole new skin for our mediawiki. (And allow users to switch back to whatever skin they prefer)
- If I understand correctly, semantic MW is a bit too involved a patch; it's closer to a MW fork, so it might be more work than it's worth.
- No idea about meatball TourBus, I haven't heard of it (nor meatball) before.
- I'm neutral about the clustrmap.
- --paxed 17:29, 27 January 2011 (UTC)
How about adding a collapsible sidebar to replace the menus Wikia killed? See an example.
I'd like to add the browser tabs I always keep open. Right now I have: Wand, Weapon, Potion, Ring, Minetown#Maps, Magic_marker#Ink_and_charges, Shopkeeper#Shopkeeper_names, armor (with probabilities added), Tool (with prob. added), Passtune solver, Gem#By_color, and a table of good polyforms stating carrying capacity. –Tjr 13:29, 18 May 2011 (UTC)
Default font used in nethack?
All over the wiki (most apparent in the homepage screen shot), the font is much smoother than the one found in my terminal. I was wondering if someone can please tell me the name of this font? —The preceding unsigned comment was added by Matt493 (talk • contribs) 00:35, 9 April 2012.
- It is Courier New. --99.239.147.0 02:47, 9 April 2012 (UTC)
- It varies based on which platform you view the page from; if you're viewing from Windows, it's most likely Courier New. The font used in the logo is DejaVu Sans Mono (incidentally, my preferred font for playing NetHack with), which is a likely choice on Linux. Ais523 11:28, 9 April 2012 (UTC)
Nethack welcome box:
The welcome box on the main menu that displays a welcome message and a rumor seems to be broken (or this feature has not been implemented; if that is the case, I would like to propose that it be done). The rumor, from what I have seen, is always the same: "Acid blobs should be attacked bare-handed." Maybe we can fix it so it uses some of the hooks from the Project:Did you know? page, plus some generic rumors from the oracle that aren't too big on spoilers. Not only would it bring a fresh change for recurring visitors, it would hopefully spark curiosity about the wiki/game in new visitors as well. Matt493 22:54, 10 April 2012 (UTC)
- Template:Random true rumor mentions in a comment that the acid blob rumor is the default if the javascript fails. Looking at the javascript though, I'm not seeing anything to do with the rumor. So it does look more like "never implemented" than broken. -- Qazmlpok 23:19, 10 April 2012 (UTC)
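The missing piece could be implemented with a few lines of JavaScript along these lines. This is a sketch only: the element id and the idea of embedding the rumor list in the page are assumptions, not how Template:Random true rumor actually works.

```javascript
// Pick one entry from a list of rumors embedded in the page.
// randomValue is expected in [0, 1), e.g. from Math.random(); passing it
// in explicitly keeps the selection easy to test deterministically.
function pickRumor(rumors, randomValue) {
  var index = Math.floor(randomValue * rumors.length);
  return rumors[index];
}

// On page load, a gadget could then replace the hard-coded fallback:
//   var box = document.getElementById('random-rumor');
//   if (box) box.textContent = pickRumor(RUMOR_LIST, Math.random());
```

Since the default acid-blob rumor stays in the HTML, readers without JavaScript would see exactly what they see today.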
Role articles
I know this has been said for a while, but the role articles could use some substantial work. They're not bad as they are, but they could be a lot more useful.
In particular, the Strategy section should be split into a couple sections, like Early Game, Midgame, and Late Game. Each section should consider things like weapon choice, general gameplay, and so on.
Should quest information be discussed on the role page, the quest page, or the quest nemesis page? I think we have some of each right now. The others should just link to there.
Other ideas? If we agree on what to actually do, I'd be willing to do some restructuring work. Scorchgeek (talk) 01:23, 1 November 2012 (UTC)
- I think that's a good idea. (The sections might be named "Early game" etc., though, for capitalization consistent with the other parts of the wiki.) Strategy can obviously vary quite a lot between different stages of the game.
- On quest information - I think it'd make sense for nemesis-specific information to be on the nemesis page, perhaps with a short note on the quest page. Other quest information ("there are many vampire bats in the wizard quest", etc.) should be on the main quest page. The role page might mention the quest (something similar to Wizard#Quest, maybe with a note about its difficulty), using {{main}} to link to the quest page for the role in case readers want to know more things about the quest (thus keeping duplication of content to a minimum while still giving the most basic information).
- The strategy sections should probably mention when to do the quest, too, though details about how to do it are probably best placed on the quest page (or nemesis page in the case of nemesis strategy). This can then be linked to.
- In some cases, the Strategy sections might need to mention different strategies; I don't think we should only specify one of them, especially in cases where all of them are rather common (I'm thinking mostly of wizards here – some people prefer using metal armor early and throwing daggers while others try not to use metal armor at all, instead preferring high spellcasting success rates for all of the game).
- A section about how the respective role is "different" from other roles might also be nice – wizards, for example, are different in that they can use magic markers more easily (and other things). --Bcode (talk) 11:24, 10 November 2012 (UTC)
I'm trying it out on Valkyrie. Scorchgeek (talk) 04:19, 2 December 2012 (UTC)
I'm going to start revamping role articles to include better information on how the roles work in SLASH'EM, which is often significantly different from vanilla. I've tried it out on Rogue#SLASH'EM; any feedback would be appreciated. --Prometheus77 (talk) 15:23, 23 April 2013 (UTC)
- Thank you, that's clearly a gap you're filling. (Sadly, I'm totally unqualified wrt slashem, so I can't say anything substantial about the content.) --Tjr (talk) 18:10, 23 April 2013 (UTC)
- Hi, new guy here. Been looking at the wiki for quite some time, decided to help. About the role articles, can we have a set formatting throughout all the articles? I propose the one on Arcs. --ASnail (talk) 7:30, 7/10/2013 (UTC)
Password issues
I used to be a member of the old wiki as Kahran042. So I tried to log in, but it wouldn't let me. I tried getting a new password on Wikia, but that didn't help. Can someone please help me? --24.61.180.68 05:35, 29 December 2012 (UTC)
Did you try the reset password option here? Scorchgeek (talk) 16:30, 29 December 2012 (UTC)
Yes, but it claimed that I didn't have an e-mail address recorded. --24.61.180.68 01:27, 30 December 2012 (UTC)
In that case, is there a reason you can't just create a new account? Scorchgeek (talk) 01:35, 30 December 2012 (UTC)
I just did, but would I be allowed to transfer the stuff on my old user page to a new user page? --Kahran042 R (talk) 07:30, 31 December 2012 (UTC)
- Sure, if you're the same person (and there's no reason to assume you're not), just move your old user page to your new one. This will keep the history intact and create a redirect from your old user page to your new one. You might want to edit your new user page after that to clarify this.
- Alternatively, from Template:News: "30th March 2012 - PSA: Wiki login problems? Cannot reset your password? EMail paxed at alt dot org"
- (used to be on the Main Page, but as new items were added, it moved "below the fold".) —bcode talk | mail 08:05, 31 December 2012 (UTC)
Putting alternate tilesets in articles
Conversation moved to Template_talk:Alternate_tilesets
Handling monster color changes in UnNetHack
Version 5.0 of UnNetHack changes the color of multiple monsters (r1455). My suggested method of handling this in the monsym templates is to do it like it is done with Cthulhu (different symbols in UnNetHack and SLASH'EM):
- Create templates for variants (Template:Monsym/master mind flayer (UnNetHack) and Template:Monsym/master mind flayer (vanilla))
- replace Template:Monsym/master mind flayer with the proper one (or maybe leave a multi-variant one)
- modify Template:Monsym/master mind flayer to look like Template:Monsym/Cthulhu (displays both possibilities)
Is there anybody with a better method of handling this change? Second question - when should this mass edit be done? UnNetHack 5.0 will be used during Junethack and released after the tournament. Bulwersator (talk) 15:32, 29 May 2013 (UTC)
- I think something like Template:Monsym/master mind flayer/UnNetHack/5.0 and Template:Monsym/master mind flayer/NetHack/3.4.3 would be even better, actually. That'd involve some work, but it should be not too difficult to automate.
- Template:Monsym would take a variant parameter and use that variant; it'd also take a version number. I'm not completely sure how to avoid having a duplicated page for every version released, but there's probably some way I could think of.
- Something like mw:Extension:Variables would help avoid specifying all the parameters every time Template:Monsym is used on a page. Perhaps it could be combined with a modification of the templates suggested on NetHackWiki:Next version, too: have a template that marks individual sections as specific to e.g. UnNetHack 5.0 or vanilla 3.4.3, then have that automatically set the variable. (I've tested something remotely similar to that on a local test installation of MediaWiki once, FWIW.) —bcode talk | mail 15:52, 29 May 2013 (UTC)
- Template:Monsymlink already takes parameters Bulwersator (talk) 15:57, 29 May 2013 (UTC)
- It doesn't use them the way I'd like, though; see my description above. It also doesn't take a version parameter and asking for e.g. the UnNetHack symbol for a giant ant will not work (as that never changed from vanilla; making this work without duplication or at least redirects will probably involve some work). (I know rather well that Template:Monsymlink takes a variant parameter; I added it. It's more of a simple hack currently, though, just appending the variant name in parentheses.) —bcode talk | mail 16:06, 29 May 2013 (UTC)
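Putting the suggestions above together, a variant-aware wrapper might dispatch with ParserFunctions. This markup is illustrative only — it doesn't exist on the wiki, and the subpage names follow Bulwersator's proposal:

```wikitext
<!-- Template:Monsym/master mind flayer (sketch) -->
{{#switch: {{{variant|vanilla}}}
 | UnNetHack = {{Monsym/master mind flayer (UnNetHack)}}
 | #default  = {{Monsym/master mind flayer (vanilla)}}
}}
```

Callers that don't care about variants would keep working unchanged, since the parameter defaults to vanilla; monsters whose symbol never diverged (like the giant ant) would still need either a shared subpage or a redirect.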
Template:YouTubePlayer
Can we have the YouTubePlayer template? Also, since this website is not part of wikia, do we have a .CSS page? -WaveDivisionMultiplexer (talk) 05:22, 12 January 2014 (UTC)
The Front Page - Can we have the Table of Content section back?
There used to be a front page that had a sort of "top" TOC with links to Roles/Items/In-Depth/Monsters, etc. What happened to that?
Here is what I mean:
As it is, the front page isn't very useful other than as a welcome page - you have to search for everything which is rather painful. --Raindog308 (talk) 04:56, 11 January 2015 (UTC)
- I added it back :-) Raindog308 (talk) 02:52, 30 January 2015 (UTC)
Medica Ossium
Hey.
I didn't find a way to send a message to someone who could help, so I'm gonna write here. Sorry for that.
I'm here just to tell you I found a creature that wasn't cataloged on this website. I tried to create a new page for it myself, but I've failed.
The creature is named "medica ossium" (same name of the healer's rank).
"Status of the medica ossium (neutral): Level 14 HP 65(65) AC 10."
I took a couple of screenshots of it. [2] [3]
PS: Feel free to send me to hell for doing this. Bye :D
- It's a player monster (or most likely, a doppelganger pretending to be one). Ais523 (talk) 19:48, 9 March 2015 (UTC)
New page please: Command line arguments
Uh-oh! The Style Guide says not to just create a missing page and stick {{stub}} in it. I'm real busy this week ... maybe someone else has time to put the command-line arguments supported by the NetHack binary into wiki-ese? Netzhack (talk) 08:23, 24 April 2015 (UTC)
Tiles
I've noticed that a lot of people have been able to upload tiles, but they're only available in the source code as the little "color by letter" 16x16 squares. How would I go about getting the tiles for the defunct quest monsters for the SLASH'EM racial quests onto the wiki? --Kahran042 (talk) 13:09, 18 June 2015 (UTC)
I want to play on a tournament or server that uses tiles but I can't find one. help?
This page may need to be updated for NetHack 3.6.0
On a page which has one of these 3.6 banners, how do I show that I have checked one of the links to the source?
I can't remove the 3.6 banner because I haven't checked them all, but I'd like to indicate to others that they don't have to check this particular link.
Any ideas?
Fenris (talk) 01:09, 26 May 2016 (UTC)
- Which page? You could always leave a note on the discussion page for starters. MyNameWasTaken (talk) 22:22, 29 June 2016 (UTC)
- Fenris, thanks for getting involved in the wiki!
- You can update in-text 3.4.3 source code references by opening up the NetHack 3.4.3 source code to the line number being referenced, finding the corresponding line in the NetHack 3.6.0 source code, opening the page you are working on for editing, and changing the number in the Refsrc template call to the new line number. The reorganization of the source code has caused some extreme shifts in line numbers, so take a good look at the line's context in the 3.4.3 source before you're sure of the new location. Sometimes the same subheadings are used in both sources, so looking for the nearest subheading in the 3.4.3 source might help you orient yourself in the 3.6.0 source.
- To make it clear that the reference is now to version 3.6.0, change the Refsrc template call to match the new "versioned" format, which you can find at Template:Refsrc. Usually this just means including the name of the folder of the file being referenced (src/allmain.c), and adding the name of the version (version=NetHack 3.6.0). "Category:Pages with unversioned Refsrc templates" has a list of pages that need to be changed to the new syntax, but be aware that some of these are for pages related to patches and variants that reference a source code other than 3.6.0.
- I haven't figured out the correct new syntax for Template:Reffunc, but if I ever do, I plan to post instructions on how to update it.--Cherokee Jack (talk) 13:58, 1 July 2016 (UTC)
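As a concrete before/after example of the update described above (the line numbers here are placeholders, and the exact parameter order should be checked against Template:Refsrc):

```wikitext
<!-- Before: unversioned reference pointing at 3.4.3 line numbers -->
{{refsrc|allmain.c|83}}

<!-- After: versioned reference in the new format -->
{{refsrc|src/allmain.c|83|version=NetHack 3.6.0}}
```

The two changes are exactly the ones listed above: the file gains its folder prefix, and the version parameter names the release being cited.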
Macro implementation: nh 3.6, Windows 10
I'm curious whether (and how) anyone has had success recently with the AutoHotKey macro method described on the wiki's "Macro" page.
In 3.6.0, running under Windows 10, I created the runmeonce and trigger AHK scripts, etc., but there is no NHLauncher.exe executable in the NetHack distribution with which to start the game, and within the NetHack UI (I tried both the "traditional" and Windows graphical versions) CTRL+M does not launch the macro creator interface window.
Any help would be appreciated.
--GetOffThisPlanet (talk) 15:14, 2 December 2018 (UTC) | https://nethackwiki.com/wiki/Community_Portal | CC-MAIN-2019-04 | refinedweb | 9,631 | 72.26 |
This blog discusses features of .NET, covering both Windows and web development.
Whenever it comes to determining whether a business object meets a certain state, I like to create a property or method for this. For instance, consider the property below:
public bool IsCompleted
{
    get { return this.CompletedDate != null; }
}
The IsCompleted property checks whether a completed date, which is a nullable type, has a value assigned to it. If it does, then the item is completed in the system. This may seem like a simple thing, and unnecessary. Additionally, from an agile perspective, it may look like I'm coding for a future situation that may or may not happen.
Though those are all valid points, I disagree. I think this is the perfect way to encapsulate certain behavior and make object states easier to read (is it completed or not, vs. is the completed date null or not null). Plus, it's very little code, and prevents more involved code changes by putting completion checking logic in one place.
For instance, what happens if the status of the object can be closed, and this is now considered a completed item? Furthermore, closing an item does not mark it completed, because a closed item is not technically completed, yet it no longer needs to be visible in the system. Imagine the following:
public bool IsCompleted
{
    get
    {
        return (this.CompletedDate != null) &&
               (this.Status == "Completed" || this.Status == "Closed");
    }
}
This change is easily encapsulated into one property. If this logic is used 100 times in the system, my approach only requires one change, whereas if I didn't use this approach, 100 lines of code would need to change.
The DataTable object has a nice feature when it comes to error handling. It has a HasErrors property that determines whether any errors exist in the table object. Using this, it's possible to loop through the rows in error, use the GetColumnsInError method to find the affected columns, and collect error information with the GetColumnError method, as illustrated below:
public static string[] GetRowErrorInformation(DataTable table)
{
    if (!table.HasErrors)
        return new string[] { };

    List<string> errors = new List<string>();

    foreach (DataRow row in table.GetErrors())
    {
        DataColumn[] columns = row.GetColumnsInError();
        foreach (DataColumn column in columns)
            errors.Add(row.GetColumnError(column));
    }

    return errors.ToArray();
}
In this example, the errors are returned as a string array. If you need a single string, you can simply use string.Concat() to concatenate the values.
If you haven't heard about the series I'm doing, the Calendar Day View project is one where I'm using TDD practices to show the process of creating a custom control. However, there are times when TDD breaks down, not because TDD fails, but because I failed to implement TDD. Now, that's OK; really, the universe isn't going to come crashing down because I didn't create unit tests ahead of time, or because I jumped ahead into the code.
Case in point: I started to create new classes and a new approach for rendering items and calendar items. I also used CodeSmith Studio to generate an AJAX component for me; CodeSmith Studio is awesome for generating a lot of code rather quickly; I just had to invest a moderate portion of time to get the script right, and the dividends pay off in the long run. Thanks CodeSmith!
Anyway, I've started to create all these assets without creating tests or thinking about how the tests will come into play in the AJAX scenario. That's not necessarily bad (unless you are a hard-core TDD person). However, the question is whether what I've developed is correct or not, which unit tests would have helped flesh out. That's really the biggest question around TDD: does it really help you write better code, and do the extra hours put into writing tests really mean fewer hours testing and maintaining later?
So I think I'll take a break, and revisit it another day when I'm more focused on TDD. If you too wonder if unit testing is important, I came across this which might help explain it some:
I had some trouble validating date input using VAB and the PropertyProxyValidator. As I would think would be the norm, for a date that I have specific requirements on, I'll set up a NotNullValidator along with a DateRangeValidator to make sure the date fits within a certain range. But I was getting errors because the PropertyProxyValidator was tied to a TextBox control, which means that the underlying value is a string. Because it's a string, the date rules couldn't be validated against it.
The ValueConvert event fires whenever a server-side validation occurs (which is the mode that it works in), allowing you to convert the value to an alternative format. This allowed me to convert the date from a string to a DateTime reference, and this solved the problem.
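The post doesn't show the handler itself, so here is a minimal sketch of what that conversion might look like. The handler name is hypothetical, and the event-args members are from the Enterprise Library integration assembly as I recall them, so verify against your version:

```csharp
// Hypothetical handler wired to the PropertyProxyValidator's ValueConvert event.
protected void dateValidator_ValueConvert(object sender, ValueConvertEventArgs e)
{
    // The PropertyProxyValidator hands us the raw TextBox string;
    // convert it so the NotNull/DateRange validators see a DateTime.
    DateTime parsed;
    if (DateTime.TryParse((string)e.ValueToConvert, out parsed))
        e.ConvertedValue = parsed;
    else
        e.ConversionErrorMessage = "Please enter a valid date.";
}
```

Setting ConversionErrorMessage reports a validation failure when the string can't be parsed at all, instead of letting the range validator choke on a non-date value.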
There are many resources out there on the web that illustrate exporting data to Excel from an ASP.NET page using the GridView control. This is the most common approach, though the Excel approach is not limited to a GridView control (I got it to work with a ListView that rendered a complex table structure). I found this () as one of them, but it includes an extra tidbit I was having problems with.
For some reason, my default setup threw the exception "GridView does not exist in a form that defines runat=server", or something like that. Overriding the Page's VerifyRenderingInServerForm method alleviates the problem. I don't know why; this method is meant to check whether a control exists in a form, but I don't know why a control rendered via Excel has that very problem, given that the control was within a server form. Anyway, this caused it to work.
LINQ to SQL is an intricate, yet "sensitive", tool in how it works. A LINQ to SQL query is translated into a SQL query that is performed against the database. This process occurs through a series of components that break the expression down into its simplest parts, so that the query can be built up into a SQL implementation.
However, what this means is that you cannot do certain things with LINQ to SQL when a query touches the DataContext: queries against the DataContext are translated through LINQ to SQL, but objects queried without the DataContext fall into the category of LINQ to Objects, so those queries are not translated into SQL (because they don't go against the database).
In my latest findings, I tried to do something like this:
var results = from c in this.Context.Customers
              where (from p in currentConfiguration.Preferences
                     where p.CustomerKey == c.CustomerKey
                     select p.Customer).Contains(c)
              select c;
When I did this, I got the following exception: "Local sequence cannot be used in LINQ to SQL implementation of query operators except the Contains() operator." The reason was that currentConfiguration is a variable passed into a method. It had been loaded with the same context object reference, so that shouldn't have been the issue. As soon as I rewrote it like this:
var results = from c in this.Context.Customers
              where (from p in this.Context.Preferences
                     where p.CustomerKey == c.CustomerKey &&
                           p.CustomConfigurationKey == customConfiguration.CustomConfigurationKey
                     select p.Customer).Contains(c)
              select c;
I no longer got that exception.
So, to start out, I thought a little bit about the initial setup for how I could do some testing on the server side of the control. You have to understand the inner workings of server controls to follow along. At the end of the process, no matter whether the control is a simple control, a composite control, or a control employing data binding or templating, there is a rendering process that converts the contents of the control to its HTML equivalent. For instance, a GridView is rendered as a table, a panel as a span, composite controls as a collection of HTML elements, etc.
The control has a Render method that takes an HtmlTextWriter object, which renders all of the content. With a composite control, if no Render method is defined, then during the rendering phase the base CompositeControl class calls the Render method on each child control. The HtmlTextWriter object simply writes HTML tags, DHTML attributes, and any other content to an underlying stream. What this means is that the constructor of HtmlTextWriter can take an instance of a StringWriter, so all HTML is written to a local writer and easily retrieved through StringWriter.ToString().
To start out with TDD, the following test was created:
[Test]
public void TestRenderingCalendarHourly()
{
    this.FirstTimeVisible = new TimeSpan(8, 0, 0);
    this.LastTimeVisible = new TimeSpan(16, 0, 0);
    this.TimeInterval = CalendarDayViewIncrement.Hour;

    StringWriter textWriter = new StringWriter();
    HtmlTextWriter htmlWriter = new HtmlTextWriter(textWriter);

    this.RenderCalendar(htmlWriter);

    StringReader reader = new StringReader(textWriter.ToString());
    XElement doc = XElement.Load(reader);

    var tdTags = from tdTag in doc.Elements("tr").Elements("td")
                 select tdTag;

    Assert.AreEqual(8, tdTags.Count());
}
Picturing an Outlook calendar, the current view covers a period of time between 8:00 and 4:00 (in the example above). The length of time between each interval is one hour, so every slot in the calendar is an hour. The StringWriter is passed into the HtmlTextWriter, and the contents are read back through a StringReader; textWriter.ToString() returns the HTML that is generated from that method.
Now, at this point, when the test is created, by some TDD practitioners, none of the classes, properties, or methods have been implemented. I personally create the class definition, so it appears in VS intellisense, but most times do not implement the properties and methods (like I said, I'm not always the most faithful TDD practitioner). What this means is that this unit test being developed drives what is being designed. The reason is that the test really drives what you need to do in an application. In a test, you know you need X, Y, and Z pieces of information, and you know you need to perform actions A and B, so the unit test fleshes those out in advance. So this unit test let me know that I needed FirstTimeVisible and LastTimeVisible properties, and a TimeInterval property to note the unit of time between each line. I've also made the decision in the test to use a table tag, and when the first time visible is 8:00 AM, the last is 4:00 PM, and the interval is an hour long, there should be 8 TD tags. Creating the test first doesn't waste time creating objects, properties, or methods you end up not needing in the end, because you work through the solution in advance.
This test led me to create the following:
protected override void Render(HtmlTextWriter writer)
{
    base.AddAttributesToRender(writer);
    writer.RenderBeginTag(HtmlTextWriterTag.Table);
    writer.RenderBeginTag(HtmlTextWriterTag.Thead);
    writer.RenderEndTag(); // thead
    writer.RenderBeginTag(HtmlTextWriterTag.Tbody);

    this.RenderCalendar(writer);

    writer.RenderEndTag(); // tbody
    writer.RenderEndTag(); // table
}

protected virtual void RenderCalendar(HtmlTextWriter writer)
{
    TimeSpan currentTime = this.FirstTimeVisible;
    int index = 0;

    while (currentTime < this.LastTimeVisible)
    {
        CalendarDayViewItemStyle style = (index % 2 == 0)
            ? this.GetItemStyle()
            : this.GetAlternatingStyle();
        this.RenderDayRow(writer, style);

        currentTime = this.AddIncrement(currentTime);
        index++;
    }
}

protected virtual void RenderDayRow(HtmlTextWriter writer, CalendarDayViewItemStyle itemStyle)
{
    writer.RenderBeginTag(HtmlTextWriterTag.Tr);

    writer.AddAttribute(HtmlTextWriterAttribute.Height, itemStyle.Height.ToString());
    if (itemStyle.Width != Unit.Empty)
        writer.AddAttribute(HtmlTextWriterAttribute.Width, itemStyle.Width.ToString());
    ControlRenderingUtility.RenderBorderStyle(writer, itemStyle);
    writer.RenderBeginTag(HtmlTextWriterTag.Td);

    writer.Write("TEST");

    writer.RenderEndTag(); // td
    writer.RenderEndTag(); // tr
}
The Render method sets up the table, the RenderCalendar method performs the inner looping/calculation work, while RenderDayRow renders a row for a specified timeslot (I think I'll be refactoring that name soon, as I think about it). However, my test fails. My test calls the RenderCalendar method, which doesn't write XHTML-compliant content; instead, it leaves out the beginning table tag, and so the LINQ to XML load statement fails, because there isn't a valid root-level element.
I also don't think this is the best structure of code also because of separation of concerns, which is defined by Wikipedia as: "separation of concerns (SoC) is the process of breaking a computer program into distinct features that overlap in functionality as little as possible". In my mind, these methods are too interdependent; RenderDayRow and RenderCalendar are dependent on the table structure setup by Render, while RenderCalendar performs most of the work and expects RenderDayRow to render the entire row.
This will lead me to move the RenderCalendar functionality into the Render method or vice versa; I haven't decided yet. By the way, in Martin Fowler's Refactoring book, this is one of his defined refactorings, but I don't remember what the name of the Refactoring is! Anyway, I hope that maybe this is helpful for you to understand that TDD starts out with creating unit tests, even failed ones, and defines the class structures from these tests. This creates more useful code. Also, other techniques associated with TDD, such as refactoring, help make code better.
I've come up with some preliminary design specs of the initial functionality that I want in this control. I've included some of the interactions that can happen in the control, and the basic view (once I figure out how to attach an image, I'll make that available on this post). I want to recreate the Outlook style single day view functionality, while making it functional in a rich AJAX sense. With a TDD approach, I'll create a series of unit tests to test the server-side functionality, and I'll have to come up with some sort of client-side test functionality. To test the client-side functionality, I've seen others develop their own utilities for testing (test harnesses) to test the client-side rendering.
I'm really excited about a new technology coming out nicknamed Ivonna. This framework allows you to test ASP.NET pages at a server-level, which is something that wasn't always easily done before. I can't wait for this to be fully developed. If you haven't heard of TypeMock Isolator, TypeMock is a very powerful tool that allows you to mock objects, the results returned from it's properties and methods, and even affect the path that the code takes. It's really dynamic in how it works, and what makes it really great is that it isn't dependent on interfaces; it can mock any object, unlike other mock frameworks.
With that said, I'm also glad that I should be able to utilize ExactMagic TestMatrix to perform my tests; TestMatrix allows me to compile and run my tests directly in Visual Studio, without having to run NUnit. It also provides code coverage to show me what code wasn't executed through my unit tests. This makes unit testing a lot easier.
In addition, a calendar often works with scheduling items; I do already have a framework in place for scheduling events, which is very similar in nature. I'm going to try to reuse this code, instead of recreating new code, and we'll see as we get into it. I may try to minimize the cost by creating a little bit of abstraction to hide those details. I hope that my posts will be informative and help you in some way understand TDD, agile, and control development.
To start off requirements gathering, I mentioned a couple of sketches, but really it should start from a series of stories. Some of the stories for this project include:
However, some sources recommend a more formal story approach, as shown below:
You can find some examples of stories here:
I'm not going to go to the point of performing an estimation, because I am not able to put consistent hours into this, which would make it harder to track; but each story is assigned a level of difficulty and a priority. The priority is assigned by the customer, and the difficulty or risk level is assigned by the developer. Together, these determine what gets worked on first.
I really like some of the components you see in computer magazines. They have a visually appealing, yet functional approach to being able to schedule work items and tasks, similar to what you can do in Outlook. I haven't done this yet; it's not coded at all; rather, I'm going to try to blog my progress as I develop this control, and I'm going to try to leverage the newer technologies.
Note that not all my blog posts will be related to this, as I do other things that I like to blog about, so I'll prefix these posts with the name of the component (undecided at the moment). But I hope to put some effort into it using ASP.NET, leveraging the ASP.NET AJAX framework, and following TDD practices as much as possible. Note two things: it's hard to leverage TDD in a web environment, and I'm still trying to learn TDD, so my TDD practices and methodologies still leave something to be desired.
I hope that you may find this beneficial, and that we all learn something about .NET.
Customers are an important part of Agile development. A portion of the Agile Manifesto reads: "Business people and developers must work together daily throughout the project." This is a key part of the software development process, and I'm starting to see why. As developers, we have to make assumptions about certain aspects of the business process; however, the fewer assumptions we make and the more we have backed up by fact from the customer, the better. Communication is key to a healthy application. Having the customer directly involved in the software development process helps reduce the number of assumptions about the business process.
Assumptions aren't always bad, but sometimes they can be. And when assumptions are built on top of other assumptions, that can cause a trickling effect. Having the customer available to talk to at any moment weeds out the faulty assumptions about the business process. It also strengthens the design because, as we know, customers are UI-driven, so they can see the development of the product as it is being performed, which also helps to eliminate rework.
If you are a customer and you are reading this, I must warn you to make the development of an agile product your top priority. I know that every business user usually has a full schedule, and that some users want to be able to pass along the information and then not be involved in any of the future details. It may seem like the direct communication, which may be frequent, is an annoyance. But communication, however frequent, is the key to developing a good software product, and all of those correspondences are an attempt by us developers to make sure we stay on track. So I implore business users to take any questions that come from an agile project seriously, because the answers will directly impact the future of the application.
I'm going to stray from LINQ to SQL to talk a little bit about software development in general. In software development, developers are trying to achieve a goal for the customer they are working with. During requirements gathering, you find out all the particular details and aspects of the goal. When developing the architecture, you wonder what the best way is to get to the goal without making it too complicated or time-consuming, while making sure that the appropriate functionality is present. Hopefully, security, scalability, and maintainability are factors in heading in the direction of that goal. It is during this phase that you make certain choices about how the software should work, feel, and look, all for the purpose of achieving the goal that the customer needs and that your team needs (in the aspects of security, performance, scalability, maintainability, time consumption, etc.).
Software development is often about staying the course, because as we developers think about the problem, we tend to think we have a better answer for the small piece of the architecture at hand, or we reach for the easier option. Easier is never a good solution because it often strays from maintainability. For instance, it's easier to hard-code values in an application, but for maintainability it's good to put constant data in a central place. It may be easier to embed SQL code in the UI for a challenging query, but it's better, more maintainable, and more performant to put it in a proc. It may be better to code your piece of the application a different way, but from the perspective of maintainability, it's better if the code is more uniform than not.
In application development, you have to stay the course sometimes and stick to the game plan. It's often for the good of the application.
All material is copyrighted by its respective authors. Site design and layout
is copyrighted by DotNetSlackers.
This is for a personal hobby project of mine. I'm working on a console-based choose-your-own-adventure with free movement between areas using grid coordinates, e.g. [0,0,0], and I'm working on static monster encounters and the combat portion of it. I would like to do random dice rolls with damage based on the attack/defense of the player and monster, as well as hit rate. I've already got some starter code I worked on, but there is still a fair bit I'm not able to grasp.
Several of my questions:
Can I use nested if/else statements to continue the fight step by step until the monster's or player's HP is <= 0, and how do I return the value of the damage done to a player's global variable? I would like to pre-set the stats for each individual monster, possibly with a Combat() function which then loads a monster function, e.g. Monster001() or Zombie(), Skeleton(), etc.
How do I calculate in hit %? I'm using rand() for the dice rolls, but how do I incorporate a % chance that the hit is successful?
How do I fetch the value of the total damage from each random die to subtract it from player/monster HP?
Then afterwards, how do I assign pre-decided EXP per monster to the player in the event of victory, or send the player to a death message if he fails? I'm using nested switches for the actual menu options in each area and using goto only for actions and the error/default case of the switch. As well, each area has an AREA### goto for certain moves between areas that go against the flow of the code.
At some point in the future I'd also like to assign equipment to an inventory system using strings, and an equip feature which adds stats to the player's stats pre-combat as well.
Here is my current damage algorithm code.
Code:
#include "stdafx.h"
#include <iostream>
#include <cstdlib>
#include <time.h>

using namespace std;

/* These constants define our upper and our lower bounds. */
int Test = 0;
int MonsterLevel = 1;
int MonsterHitPoints = 15;
int MonsterAttack = 8;
int MonsterDefense = 3;
int PlayerLevel = 1;
int PlayerHitPoints = 25;
int PlayerAttack = 10;
int PlayerDefense = 5;
int LowMonster = 2;
int HighMonster = MonsterAttack - PlayerDefense;
int LowPlayer = 2;
int HighPlayer = PlayerAttack - MonsterDefense;

int main()
{
    /* Variables to hold random values for the first and the second die on each roll. */
    int first_die, sec_die;

    /* Declare variable to hold seconds on clock. */
    time_t seconds;

    /* Get value from system clock and place in seconds variable. */
    time(&seconds);

    /* Convert seconds to an unsigned integer. */
    srand((unsigned int) seconds);

    /* Get first and second random numbers. */
    if ((MonsterHitPoints >= 1) && (PlayerHitPoints >= 1))
    {
        cout << "Monsters & Player Can Fight HP ok.\n";

        first_die = rand() % (HighPlayer - LowPlayer + 1) + LowPlayer;
        sec_die = rand() % (HighPlayer - LowPlayer + 1) + LowPlayer;
        cout << "Players attack is (" << first_die << ", " << sec_die << ")" << endl << endl;

        first_die = rand() % (HighMonster - LowMonster + 1) + LowMonster;
        sec_die = rand() % (HighMonster - LowMonster + 1) + LowMonster;
        cout << "Monsters attack is (" << first_die << ", " << sec_die << ")" << endl << endl;
    }
    /* Note: the original had a stray ';' after this condition, which made the
       block below run unconditionally, and used && where || was intended. */
    else if ((MonsterHitPoints <= 0) || (PlayerHitPoints <= 0))
    {
        cout << "Unable to fight! Player or Monster HP are 0.\n";
        cin >> Test;
    }

    return 0;
}
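Not part of the original post, but here is one minimal sketch of how the hit-percentage and total-damage questions above could fit together. All stat numbers and function names are illustrative, and the fight loop replaces deep if/else nesting with a while loop:

```cpp
#include <cstdlib>

// Sum of two dice, each uniform in [low, high].
int rollDamage(int low, int high)
{
    return (rand() % (high - low + 1) + low)
         + (rand() % (high - low + 1) + low);
}

// rand() % 100 yields 0..99, so a roll below hitPercent
// succeeds roughly hitPercent% of the time.
bool attackHits(int hitPercent)
{
    return (rand() % 100) < hitPercent;
}

// Plays one fight to the end; returns true if the player wins.
// The player swings first each round, then the monster, until one
// side drops to 0 HP or below.
bool simulateFight(int playerHP, int monsterHP,
                   int playerLow, int playerHigh, int playerHitPct,
                   int monsterLow, int monsterHigh, int monsterHitPct)
{
    while (playerHP > 0 && monsterHP > 0)
    {
        if (attackHits(playerHitPct))
            monsterHP -= rollDamage(playerLow, playerHigh); // total of both dice
        if (monsterHP <= 0)
            break;                                          // victory: award EXP here
        if (attackHits(monsterHitPct))
            playerHP -= rollDamage(monsterLow, monsterHigh);
    }
    return playerHP > 0;                                    // false: show the death message
}
```

A main() would seed once with srand((unsigned int) time(0)) and then call, say, simulateFight(25, 15, 2, 5, 80, 2, 3, 70) for an 80%/70% hit split; the boolean result is where the EXP award or death message branches.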
Details
- Type:
Bug
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: 2.3, 2.3.1
-
- Component/s: core/index
- Labels:None
- Lucene Fields:New
Description

and here's a typical exception hit when opening a searcher:

    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:636)
    at org.apache.lucene.index.DirectoryIndexReader.open(DirectoryIndexReader.java:63)
    at org.apache.lucene.index.IndexReader.open(IndexReader.java:209)
    at org.apache.lucene.index.IndexReader.open(IndexReader.java:173)
    at org.apache.lucene.search.IndexSearcher.<init>(IndexSearcher.java:48)
Sometimes, adding -Xbatch (forces up front compilation) or -Xint
(disables compilation) to the java command line works around the
issue.
We've seen the failure on a number of different OS's.
From Mark Miller on the developer's mailing list:
Here's a couple examples of that exclude method syntax (had to use it recently with eclipse):
-XX:CompileCommand=exclude,org/apache/lucene/index/IndexReader\$1,doBody
-XX:CompileCommand=exclude,org/eclipse/core/internal/dtree/DataTreeNode,forwardDeltaWith
I've also been struck by this bug, with Lucene 2.3.2. I'd been running for a while with JRE 1.6.0_05 when I noticed it, so I downgraded to JRE 1.6.0_02 to try and work around it, but no luck.
Could a bugged index created with JRE 1.6.0_05 be causing addIndexesNoOptimize to trigger this bug, even with JRE 1.6.0_02?
Thanks.
Could a bugged index created with JRE 1.6.0_05 be causing addIndexesNoOptimize to trigger this bug, even with JRE 1.6.0_02
Unfortunately, yes. Once the corruption enters the index, then no matter which JRE you are using, you will hit that exception.
In your case, I can see that indeed segment _2y9, which is pre-existing when you call addIndexesNoOptimize, is the corrupt segment.
In general, you can use CheckIndex to see if you have any latent corruption.
I'm afraid you either have to run CheckIndex -fix to remove that segment (and possibly others that are also corrupt) from your index, or, create a new index.
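For reference, CheckIndex ships inside the Lucene core jar and can be run from the command line; the jar name and index path below are illustrative:

```shell
# Read-only check of the index first:
java -cp lucene-core-2.3.2.jar org.apache.lucene.index.CheckIndex /path/to/index

# Only after backing the index up: drop any corrupt segments.
# Documents in the removed segments are lost.
java -cp lucene-core-2.3.2.jar org.apache.lucene.index.CheckIndex /path/to/index -fix
```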
This bug is very frustrating!
Can you describe how you built up this index? EG was this bulk created (open a single writer, add all the docs, close it), or, created with many separate instances of IndexWriter over time? Were documents added via add/updateDocument or via addIndexes*? Do you run the JRE with any "interesting" command-line options? I'd really like to narrow down the "typical" cases when this bug strikes if we can....
Actually, I'm not convinced it is the same bug... We kept getting complete JVM crashes; I just assumed it was the same (I wouldn't be surprised if it was related though).
Another datapoint from Ian Lea:
My job () still
fails with java version 1.6.0_06 (build 1.6.0_06-b02), downloaded
today, with both lucene 2.3.1 and 2.3.2.
For me, downgrading to 1.6.0_03-b05 fixed things.
I finally managed to reproduce this JVM bug, except, my case happens
while merging term vectors (mergeVectors) not stored fields as all
other cases seem to be.
I'm running JRE 1.6.0_05 on a Debian Linux box.
In my case, which just uses a modified contrib/benchmark to add 2000
wikipedia docs to a large index, I got to the point (when index was 19
GB) where every single time I run the benchmark alg, it hits an
exception. Often the exception looks like this:
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: read past EOF
java.io.IOException: read past EOF
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:146)
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
    at org.apache.lucene.store.IndexInput.readInt(IndexInput.java:68)
    at org.apache.lucene.store.IndexInput.readLong(IndexInput.java:91)
    at org.apache.lucene.index.TermVectorsReader.get(TermVectorsReader.java:345)
    at org.apache.lucene.index.SegmentReader.getTermFreqVectors(SegmentReader.java:992)
    at org.apache.lucene.index.SegmentMerger.mergeVectors(SegmentMerger.java:441)
I then added the same check that we now have for mergeFields, i.e., to verify that the size of the index file (_X.tvx) matches the number of documents merged. Sometimes, however, I see this different exception:
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.lang.ArrayIndexOutOfBoundsException: 9375
java.lang.ArrayIndexOutOfBoundsException: 9375
    at org.apache.lucene.store.BufferedIndexOutput.writeByte(BufferedIndexOutput.java:36)
    at org.apache.lucene.store.IndexOutput.writeVInt(IndexOutput.java:71)
    at org.apache.lucene.index.TermVectorsWriter.addAllDocVectors(TermVectorsWriter.java:76)
    at org.apache.lucene.index.SegmentMerger.mergeVectors(SegmentMerger.java:443)
where the particular array index would vary all over the place. This
is VERY odd because that array is the buffer in BufferedIndexOutput
and is always allocated to 16384 bytes so 9375 (and all others I saw)
is not out of bounds.
JRE 1.5.0_08 always runs fine. Likewise running JRE 1.6.0_05 with
-Xint also runs fine. However, JRE 1.6.0_05 with -Xbatch still hits
exceptions.
So then I started testing "trivial" modifications to the Java source
code in the mergeVectors, and found, insanely, that this simple diff
completely stopped the exceptions:
      } else {
-       termVectorsWriter.addAllDocVectors(reader.getTermFreqVectors(docNum));
+       // NOTE: it's very important to first assign
+       // to vectors then pass it to
+       // termVectorsWriter.addAllDocVectors; see
+       // LUCENE-1282
+       TermFreqVector[] vectors = reader.getTermFreqVectors(docNum);
+       termVectorsWriter.addAllDocVectors(vectors);
(Ie, just forcing an assignment to a local variable).
It's crazy that such a trivial mod actually makes a difference in this
JRE bug (I would have expected it to be optimized away fairly early on
in compilation), but, I'm quite sure that diff resolves at least the
exceptions I've been seeing. So I plan to commit this JRE bug
workaround to 2.4 & 2.3 branch.
I still haven't been able to hit the JRE bug when merging stored
fields, but, I'm still making that same corresponding mod to
mergeFields.
See, that complex code even confuses the JVM
Awesome job coming up with this workaround! (crosses fingers for stored fields)
Hi,
Great work on tracking this down; it looks like a very nasty bug. Has it been reported to Sun yet? It seems like the kind of bug that could manifest itself in other places too, so it's important to get a real fix.
Has it been reported to Sun yet? It seems like the kind of bug that could manifest itself in other places too, so important to get a real fix.
Not yet, but I intend to. I'm trying to whittle it down. I agree the bug is nasty and could strike again at any time. The AIOOB exceptions I was hitting were truly bizarre.
One of the classic problems between -client and -server mode is the way the CPU registers are used. Is it possible that some of the fields are suffering from concurrency issues? I was wondering if, say, BufferedIndexOutput.buffer* may need to be marked volatile?
In my 100% reproducible case of this JRE bug, I'm using only 1 thread, so I don't think a volatile should be necessary here.
But I like your idea to try -client vs -server – I will test that & post back. The more data we can gather the better... I did find it interesting that -Xbatch did NOT resolve it, but has for at least one of the above users.
I'm wondering if it has something to do with writing to large (> 32 bit) files. In my test case, the index keeps kicking off a large merge (produces 2.7 GB segment) and it's that merge that trips the bug.
In my 100% reproducible case of this JRE bug, I'm using only 1 thread, so I don't think a volatile should be necessary here.
Woops: I am, however, using the default ConcurrentMergeScheduler, so this very-large merge runs in its own thread. Still, it's only that one thread that's accessing this code/state, so by the spec volatile should not be necessary.
OK: running with -client prevents the bug.
Running with SerialMergeScheduler still shows the bug.
I'm going to try to make a standalone test that just runs this one merge....
Using the 19 GB index I have that consistently reproduces this hotspot bug, I boiled the bug down to a very small testcase that no longer involves Lucene.
However, this occurence of the bug is slightly different: for me, by specifying -Xbatch to java command line, the bug consistently happens. It only rarely happens without -Xbatch. Nonetheless, I'm hopeful that if Sun fixes this one test case properly, it will fix all the odd exceptions we've been seeing from this code.
I opened the bug 4 days ago (5/15) with, but have yet to hear if it's been accepted as a real bug.
if others could try out the code below on their Linux boxes, using 1.6.0_04/05 of Sun's java, specifying -Xbatch, to see if the bug can be reproduced, that'd be great.
Here's the bug I opened:
Date Created: Thu May 15 11:53:15 MDT 2008 Type: bug Customer Name: Michael McCandless Customer Email: mail@mikemccandless.com SDN ID: mail@mikemccandless.com status: Waiting Category: hotspot Subcategory: runtime_system Company: IBM release: 6 hardware: x86 OSversion: linux priority: 4 Synopsis: Simple code runs incorrectly with -Xbatch Description: FULL PRODUCT VERSION : java version "1.6.0_06" Java(TM) SE Runtime Environment (build 1.6.0_06-b02) Java HotSpot(TM) Server VM (build 10.0-b22, mixed mode) FULL OS VERSION : Linux 2.6.22.1 #7 SMP PREEMPT Tue Mar 18 18:22:09 EDT 2008 i686 GNU/Linux A DESCRIPTION OF THE PROBLEM : On the Apache Lucene project, we've now had 4 users hit by an apparent JRE bug. When this bug strikes, it silently corrupts the search index, which is very costly to the user (makes the index unusable). Details are here: I can reliably reproduce the bug, but only on a very large (19 GB) search index. But I narrowed down one variant of the bug to attached test case. THE PROBLEM WAS REPRODUCIBLE WITH -Xint FLAG: No THE PROBLEM WAS REPRODUCIBLE WITH -server FLAG: Yes STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : Compile and run the attached code (Crash.java), with -Xbatch and it should fail (ie, throw the RuntimeException, incorrectly). It should pass without -Xbatch. EXPECTED VERSUS ACTUAL BEHAVIOR : Expected is no RuntimeException should be thrown. Actual is it is thrown. REPRODUCIBILITY : This bug can be reproduced always. 
---------- BEGIN SOURCE ---------- public class Crash { public static void main(String[] args) { new Crash().crash(); } private Object alwaysNull; final void crash() throws Throwable { for (int r = 0; r < 3; r++) { for (int docNum = 0; docNum < 10000;) { if (r < 2) { for(int j=0;j<3000;j++) docNum++; } else { docNum++; doNothing(getNothing()); if (alwaysNull != null) { throw new RuntimeException("BUG: checkAbort is always null: r=" + r + " of 3; docNum=" + docNum); } } } } } Object getNothing() { return this; } int x; void doNothing(Object o) { x++; } } ---------- END SOURCE ---------- CUSTOMER SUBMITTED WORKAROUND : Don't specify -Xbatch. You can also tweak the code to have it pass the test. Reducing the 10000 or 3000 low enough makes it pass. Changing the doNothing(...) line to assign the result of getNothing() to an intermediate variable first, also passes (this is the approach we plan to use for Lucene). Removing the x++ also passes. workaround: comments: (company - IBM , email - mail@mikemccandless.com)
Great work finding a reduced test-case. I tried the sample application with JDK6u4 32-bit and I can add the following:
- It always happens with -Xbatch -server when the code is compiled with javac.
- It happens sometimes (sometimes after 3 attempts, sometimes after 10, etc.) with -server when the code is compiled with javac.
- I am unable to reproduce it when compiling with the eclipse compiler (running from the command-line to avoid any other differences)
Btw, if I increase the number of iterations to 1000000 for docNum and 300000 for j I can reproduce it every time without -Xbatch.
It's worth noting that jdk 6u10 beta b24 (released today) and openjdk6 in Fedora 9 are also affected by the problem shown in the test-case.
This bug is spooky. I tried another workaround, which is to just increment an unused variable, instead of the above diff but at the same spot. That then causes the JRE to reliably crash (SEGV). I'm attaching the hs_err log.
Sun has not yet "accepted" my bug. If/when they do, I'll attach this error log to it.
OK I've committed the workaround & bug detection to trunk (2.4) and 2.3 branch.
At this point I think that's all we can do here; we are now waiting on Sun to fix the JRE bug.
We've seen this bug (rarely) when indexing quite huge amounts of data.
Just to add some datapoints, attached is crashtest
, using the above Crash.java to test all java VMs I have currently available.
crashtest.log
contains the output.
Tests were run on a loaded EM64T dual core machine with fedora 9/x86_64, all VMs are 64bit. The openjdk is a build from yesterdays public repository contents, build using gcc 4.3 (trivial patches to make it build were added).
Some scary solaris (SunOS 5.10 Generic_120011-14 sun4u sparc SUNW,UltraAX-i2) results as well:
/usr/jdk/jdk1.6.0_04 (java full version "1.6.0_04-b12"):
: 0/200 failed: PASS
-server: 0/200 failed: PASS
-client: 0/200 failed: PASS
-Xbatch: 0/200 failed: PASS
-Xint: 0/200 failed: PASS
/usr/jdk/jdk1.6.0_04 (java full version "1.6.0_04-b12"):
-d64: 0/200 failed: PASS
-server -d64: 0/200 failed: PASS
-client -d64: 0/200 failed: PASS
-Xbatch -d64: 0/200 failed: PASS
-Xint -d64: 0/200 failed: PASS
Here is the bug at Sun:
Sun has posted their evaluation on the bug above and accepted it as High priority.
Can anyone comment as to whether the JRE 1.6.04+ bug affects any earlier versions of Lucene? (say, 2.0.. which we're still using) .
I was just reviewing this issue and noticed Michael mentioned this behaviour shows in both the ConcurrentMergeScheduler and the SerialMergeScheduler. AIUI,. the SerialMergeScheduler is effectively the 'old' way of previous versions of Lucene, so I'm just starting to think about what affect 1.6.04 might have on earlier versions? (this bug is only marked as affecting 2.3+).
The reason I ask is that we're just about to upgrade to 1.6.04 -server in some of our production machines.. (reason why not going to 1.6.06 is we only started our development test cycle months ago and stuck with .04 until next cycle).
Can anyone comment as to whether the JRE 1.6.04+ bug affects any earlier versions of Lucene? (say, 2.0.. which we're still using).
As far as I know, this corruption only happens on Lucene 2.3+. The changes to Lucene that tickled this JRE bug were bulk-merging of stored fields:
which landed in 2.3, and also bulk-merging of term vectors:
As can be seen in the Sun database a fix for this has been committed to OpenJDK and they're looking into backporting it into Java 6 Update 10.
The latest build of JDK 6 Update 10 (b28) includes the fix for this. It can be downloaded from:
In the summary of changes, you can see that it refers to a bug that requests the integration of a new HotSpot build that includes the fix for this:
I have also verified that the test-case now passes on my machine.
Indeed, I can confirm that JDK 6 Update 10 (b28) fixes my 19 GB test case that reliably crashes with earlier JDK 6 versions.
I'll resolve this as fixed, and send an email to users.
Another workaround might be to use '-client' instead of the default '-server' (for server class machines). This affects a few things, not least this switch:
-XX:CompileThreshold=10000 Number of method invocations/branches before compiling [-client: 1,500]
-server implies a 10000 value. I have personally observed similar behaviour like problems like the above with -server, and usually -client ends up 'solving' them.
I'm sure there was also a way to mark a method to not jit compile too (rather than resort to -Xint which disables i for everything), but now I cant' find what that syntax is at all. | https://issues.apache.org/jira/browse/LUCENE-1282 | CC-MAIN-2016-30 | refinedweb | 2,775 | 66.64 |
Abstract:
This article describes what Debian users can do back for the community: report bugs they find. Explained is how to do this and why one should report bugs.
These developers, however, are all volunteers. In contrast with Red Hat and Suse developers, where a company (RH and Suse) employs a number of developers, Debian developers do not get paid. And this means they do not have unlimited time resources. "OK", you might ask, "but what has that to do with me?" As a user you can help these developers by summiting bugs you found.
Debian packages can have two classes of bugs. One class is a real software bug. Since the Debian developer is often not the writer of the software itself (he just made a Debian package for it), he will sometimes tries to solve them but often send them to the software author.
The second class of bugs, consist of bugs in the Debian package or bugs in the installation setup for the software in he package on the Debian system. And these bugs are to be solved by the Debian developer. And finding these bugs is a very time consuming business.
Of these the system crash is easily found, though it might be more difficult to solve the bug. But the second type of bug is much harder to find. The reason is that the author/developer cannot test the software for all possible output. An example: consider a calculator program. The author can test several behavioral aspects: 1+1 must give 2, 2*5 must give 10, etc. But he cannot test all possible summations and multiplications. He will not test 3456733256677*77782882355.
But a user will. Users do things with (and to) the software the author had never anticipated. Since the amount of users exceed the number of software authors and Debian developers they are expected to stumble upon far more bugs. But all these bugs will not be severe bugs. You're system will not crash, your data will not be corrupted. In most cases these bugs will not even cause inconvenience, as in many times they can be circumvented.
And as a member of the community you almost have the moral obligation to post the bug to the Debian developer, so that the software can be made even more stable. And this article is a plea for doing just this. (Of course you will not find many bugs on a Debian system :)
Consider you found a bug in the program dia (my favorite diagram editor). Let's go through the process of submitting a bug to this package. (The actual bug found was not a Debian bug, but bug in the real software, so expect the Debian developer will forward it to the authors.)
I type (at the prompt. I did not find a nice GUI to this program.):
Let us do a proper job and not give one of these categories, but the actual package. For this we terminate this reportbug session with ^C (ctrl-C). We need to find the package which contains the executable "dia". We do this by:
With the last command we see that the executable is contained in the package dia (if you're not sure, check it with: "dpkg -l dia"). Note that whereis gives four files. The first file is a library. The last one is a directory and the middle two are executables. The package dia only supplies the second dia executable, and the origin of the first executable is unknown to me.
Now that we know the package with the bug, we can also quickly check where this package was downloaded (ftp/http) or taken (CD/floppy) from:
With this we see that the current version (0.86-helix1) was installed from HelixCode (To install HelixGnome, type 'echo "#HelixGnome Update\ndeb tions/debian unstable main" >> /etc/apt/sources.list; apt-get update; apt-get install task-helix-gnome'). This bug should this not be sended to the Debian developer but to the HelixGnome Debian packager instead which is not done with the reportbug tool. For the sake of this article, let's assume that version 0.83-2 is installed which is the Debian 2.2 package for dia and (in my case) was downloaded from a Dutch FTP mirror.
Ok, so we know the bug is in the dia-0.83-2.deb package which was downloaded from a Debian FTP site. We now continue with submitting the bug. If you're not online you can add the '-b' option, so the program will not search the Debian Bug Tracking System (BTS). By checking BTS you can check whether the same bug was not already submitted before. So checking BTS is highly preferred.
After entering the package name and consulting BTS, it will check package dependencies. Checking dependencies is important. The software depends on libraries and bugs might have their origin in a version conflict. Actually, this is one major source for bugs. No user input is needed for this check.
The next question it will ask, is you to provide a brief description of the bug. This description will be used as a title and must be complete and short. Later on the bug can be described in high detail. In my case this title is "dia file format incorrectly uses dia namespace". Details and the explanation will follow later.
Now you must give a qualification of the bug. Five classes are available:
Choose an appropriate severity. Normal is the default qualification, and most bugs in Debian 2.2 will have this severity. This is because Debian uses several extensive test cycles in which the complete system is tested before the distribution is released to the public. Note also that you can submit wishes for new features with reportbug, though these are clearly not bugs.
After choosing the classification an editor will be started with all information which was gathered until now:
This is the time when you can elaborate on the title you entered earlier. Between the "Severity: normal" and "-- System Information" lines you can add more details and conditions when the bug occurred. Try to reproduce the same bug and closely describe the steps you took to achieve to bug. This helps the developers trace the bug to a part of the malfunctioning programming code. In more complex situations you also might want to give the expected output or behavior.
Finally, the programs ask you if you want to bug be emailed to the bug list. Sending it, will end the process for now. You just did something back for the community.
You can trace the bug's status by visiting the Debian Bug Track System and choosing the package to which you filed a bug. (Do not expect you're bug to show up in the list within 24 hours.) And then you wait. And hopefully the bug gets fixed.
It is to bad that there is no graphical user interface to reportbug yet, but now any Debian user can submit bugs, independent of system functionality. And an interface is easily build nowadays. Sure hope to see one soon!
2001-01-27, generated by lfparser version 2.8 | http://www.linuxfocus.org/English/September2000/article171.shtml | CC-MAIN-2015-14 | refinedweb | 1,201 | 74.69 |
In this tutorial, we’ll build a REST API to manage users and roles using Firebase and Node.js. In addition, we’ll see how to use the API to control which users can access specific resources.
Introduction
Almost every app requires some level of authorization system. In some cases, validating a username/password set against our Users table is enough, but often we need a more fine-grained permissions model that allows certain users to access certain resources and restricts them from others. Building a system to support the latter is not trivial and can be very time-consuming. In this tutorial, we’ll learn how to build a role-based auth API using Firebase, which will help us get up and running quickly.
Role-based Auth
In this authorization model, access is granted to roles instead of specific users, and a user can have one or more roles depending on how you design your permission model. Resources, on the other hand, require certain roles in order for a user to access them.
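The core check can be sketched in a few lines of TypeScript (the names here are illustrative, not part of any Firebase API): a resource declares which roles may access it, and a user passes if one of their roles matches.

```typescript
type Role = 'admin' | 'manager' | 'user';

// Access is granted to roles, not to individual users: a user holding
// any one of the roles a resource requires is allowed through.
function canAccess(userRoles: Role[], requiredRoles: Role[]): boolean {
  return userRoles.some(role => requiredRoles.includes(role));
}

console.log(canAccess(['manager'], ['admin', 'manager'])); // true
console.log(canAccess(['user'], ['admin', 'manager']));    // false
```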
Firebase
Firebase Authentication
In a nutshell, Firebase Authentication is an extensible token-based auth system and provides out-of-the-box integrations with the most common providers such as Google, Facebook, and Twitter, among others.
It enables us to use custom claims which we’ll leverage to build a flexible role-based API.
We can set any JSON value into the claims (e.g., { role: 'admin' } or { role: 'manager' }).
Once set, custom claims will be included in the token that Firebase generates, and we can read the value to control access.
It also comes with a very generous free quota, which in most cases will be more than enough.
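To illustrate what that looks like, here is a rough sketch of the payload the server sees after decoding a token for a user whose custom claims were set to { role: 'manager' } (simplified; real decoded tokens carry more standard fields such as iss, aud, and exp):

```typescript
// Simplified shape of a decoded Firebase ID token; the custom claim
// (role) is merged into the payload alongside the standard fields.
interface DecodedToken {
  uid: string;
  email?: string;
  role?: string; // our custom claim
}

const decoded: DecodedToken = {
  uid: 'abc123',                // example values
  email: 'jane@example.com',
  role: 'manager'
};

// The API can read the claim straight off the token to decide access.
console.log(decoded.role); // 'manager'
```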
Firebase Functions
Functions are a fully-managed serverless platform service. We just need to write our code in Node.js and deploy it. Firebase takes care of scaling the infrastructure on demand, server configuration, and more. In our case, we’ll use it to build our API and expose it via HTTP to the web.
Firebase allows us to set express.js apps as handlers for different paths. For example, you can create an Express app and hook it to /mypath, and all requests coming to this route will be handled by the app configured there.
From within the context of a function, you have access to the whole Firebase Authentication API, using the Admin SDK.
This is how we’ll create the user API.
What We’ll Build
So before we get started, let’s take a look at what we’ll build. We are going to create a REST API with the following endpoints:

- POST /users — create a user
- GET /users — list all users
- GET /users/:id — get a user by its ID
- PATCH /users/:id — update a user
- DELETE /users/:id — delete a user
Each of these endpoints will handle authentication, validate authorization, perform the corresponding operation, and finally return a meaningful HTTP code.
We’ll create the authentication and authorization functions required to validate the token and check if the claims contain the required role to execute the operation.
Building the API
In order to build the API, we’ll need:
- A Firebase project
- The firebase-tools CLI installed
First, log in to Firebase:
firebase login
Next, initialize a Functions project:
firebase init

? Which Firebase CLI features do you want to set up for this folder? ... (O) Functions: Configure and deploy Cloud Functions
? Select a default Firebase project for this directory: {your-project}
? What language would you like to use to write Cloud Functions? TypeScript
? Do you want to use TSLint to catch probable bugs and enforce style? Yes
? Do you want to install dependencies with npm now? Yes
At this point, you will have a Functions folder, with minimum setup to create Firebase Functions.
At src/index.ts there’s a helloWorld example, which you can uncomment to validate that your function works. Then you can cd functions and run npm run serve. This command will transpile the code and start the local server.
You can check the results at {your-project}/us-central1/helloWorld. Notice the function is exposed on the path matching its exported name in index.ts: helloWorld.
Creating a Firebase HTTP Function
Now let’s code our API. We are going to create an HTTP Firebase function and hook it to the /api path.
First, install Express: npm install express.
In src/index.ts we will:

- Initialize the firebase-admin SDK module with admin.initializeApp()
- Set an Express app as the handler of our api HTTPS endpoint
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import * as express from 'express';

admin.initializeApp();

const app = express();

export const api = functions.https.onRequest(app);
Now, all requests going to /api will be handled by the app instance.
The next thing we’ll do is configure the app instance to support CORS and add the JSON body-parser middleware. This way we can make requests from any URL and parse JSON-formatted requests.
We’ll first install the required dependencies:
npm install --save cors body-parser
npm install --save-dev @types/cors
And then:
//...
import * as cors from 'cors';
import * as bodyParser from 'body-parser';
//...
const app = express();
app.use(bodyParser.json());
app.use(cors({ origin: true }));

export const api = functions.https.onRequest(app);
Finally, we will configure the routes that the app will handle.
//...
import { routesConfig } from './users/routes-config';
//...
app.use(cors({ origin: true }));
routesConfig(app)

export const api = functions.https.onRequest(app);
Firebase Functions allows us to set an Express app as the handler, and any path after the one you set up at functions.https.onRequest(app) (in this case, api) will also be handled by the app. This allows us to write specific endpoints such as api/users and set a handler for each HTTP verb, which we’ll do next.
Let’s create the file src/users/routes-config.ts. Here, we’ll set a create handler at POST '/users':
import { Application } from "express";
import { create } from "./controller";

export function routesConfig(app: Application) {
    app.post('/users', create);
}
Now, we’ll create the src/users/controller.ts file. In this function, we first validate that all fields are in the request body; next, we create the user and set the custom claims.
We are just passing { role } to setCustomUserClaims; the other fields are already set by Firebase.
If no errors occur, we return a 201 code with the uid of the created user.
import { Request, Response } from "express";
import * as admin from 'firebase-admin'

export async function create(req: Request, res: Response) {
    try {
        const { displayName, password, email, role } = req.body

        if (!displayName || !password || !email || !role) {
            return res.status(400).send({ message: 'Missing fields' })
        }

        const { uid } = await admin.auth().createUser({
            displayName,
            password,
            email
        })
        await admin.auth().setCustomUserClaims(uid, { role })

        return res.status(201).send({ uid })
    } catch (err) {
        return handleError(res, err)
    }
}

function handleError(res: Response, err: any) {
    return res.status(500).send({ message: `${err.code} - ${err.message}` });
}
Now, let’s secure the handler by adding authorization. To do that, we’ll add a couple of handlers to our create endpoint. With express.js, you can set a chain of handlers that will be executed in order. Within a handler, you can execute code and pass control to the next() handler, or return a response. What we’ll do is first authenticate the user and then validate whether they are authorized to execute the operation. If the user doesn’t have the required role, we’ll return a 403.
In the file src/users/routes-config.ts:
//...
import { isAuthenticated } from "../auth/authenticated";
import { isAuthorized } from "../auth/authorized";

export function routesConfig(app: Application) {
    app.post('/users',
        isAuthenticated,
        isAuthorized({ hasRole: ['admin', 'manager'] }),
        create
    );
}
Let’s create the file src/auth/authenticated.ts.
In this function, we’ll validate the presence of the authorization bearer token in the request header. Then we’ll decode it with admin.auth().verifyIdToken() and persist the user’s uid, role, and email in the res.locals variable, which we’ll later use to validate authorization.

If the token is invalid, we return a 401 response to the client:
import { Request, Response } from "express";
import * as admin from 'firebase-admin'

export async function isAuthenticated(req: Request, res: Response, next: Function) {
    const { authorization } = req.headers

    if (!authorization)
        return res.status(401).send({ message: 'Unauthorized' });

    if (!authorization.startsWith('Bearer'))
        return res.status(401).send({ message: 'Unauthorized' });

    const split = authorization.split('Bearer ')
    if (split.length !== 2)
        return res.status(401).send({ message: 'Unauthorized' });

    const token = split[1]

    try {
        const decodedToken: admin.auth.DecodedIdToken = await admin.auth().verifyIdToken(token);
        console.log("decodedToken", JSON.stringify(decodedToken))
        res.locals = { ...res.locals, uid: decodedToken.uid, role: decodedToken.role, email: decodedToken.email }
        return next();
    } catch (err) {
        console.error(`${err.code} - ${err.message}`)
        return res.status(401).send({ message: 'Unauthorized' });
    }
}
Now, let’s create a src/auth/authorized.ts file.
In this handler, we extract the user’s info from res.locals (set previously) and validate that it has the role required to execute the operation. In the case where the operation allows the same user to execute it, we validate that the ID in the request params matches the one in the auth token.
import { Request, Response } from "express";

export function isAuthorized(opts: { hasRole: Array<'admin' | 'manager' | 'user'>, allowSameUser?: boolean }) {
    return (req: Request, res: Response, next: Function) => {
        const { role, email, uid } = res.locals
        const { id } = req.params

        if (opts.allowSameUser && id && uid === id)
            return next();

        if (!role)
            return res.status(403).send();

        if (opts.hasRole.includes(role))
            return next();

        return res.status(403).send();
    }
}
With these two methods, we’ll be able to authenticate requests and authorize them based on the role in the incoming token. That’s great, but since Firebase doesn’t let us set custom claims from the project console, we won’t be able to execute any of these endpoints yet. To work around this, we can create a root user from the Firebase Authentication console and set an email comparison in the code. Now, when firing requests from this user, we’ll be able to execute all operations.
//...
const { role, email, uid } = res.locals
const { id } = req.params

if (email === '[email protected]')
    return next();
//...
Now, let’s add the rest of the CRUD operations to src/users/routes-config.ts.

For operations that get or update a single user, where the :id param is sent, we also allow the same user to execute the operation.
export function routesConfig(app: Application) {
    //..
    // lists all users
    app.get('/users', [
        isAuthenticated,
        isAuthorized({ hasRole: ['admin', 'manager'] }),
        all
    ]);
    // get :id user
    app.get('/users/:id', [
        isAuthenticated,
        isAuthorized({ hasRole: ['admin', 'manager'], allowSameUser: true }),
        get
    ]);
    // updates :id user
    app.patch('/users/:id', [
        isAuthenticated,
        isAuthorized({ hasRole: ['admin', 'manager'], allowSameUser: true }),
        patch
    ]);
    // deletes :id user
    app.delete('/users/:id', [
        isAuthenticated,
        isAuthorized({ hasRole: ['admin', 'manager'] }),
        remove
    ]);
}
And in src/users/controller.ts we leverage the Admin SDK to interact with Firebase Authentication and perform the respective operations. As we did previously in the create operation, we return a meaningful HTTP code for each operation.

For the update operation, we validate that all fields are present and override customClaims with those sent in the request:
//..
export async function all(req: Request, res: Response) {
    try {
        const listUsers = await admin.auth().listUsers()
        const users = listUsers.users.map(user => {
            const customClaims = (user.customClaims || { role: '' }) as { role?: string }
            const role = customClaims.role ? customClaims.role : ''
            return {
                uid: user.uid,
                email: user.email,
                displayName: user.displayName,
                role,
                lastSignInTime: user.metadata.lastSignInTime,
                creationTime: user.metadata.creationTime
            }
        })
        return res.status(200).send({ users })
    } catch (err) {
        return handleError(res, err)
    }
}

export async function get(req: Request, res: Response) {
    try {
        const { id } = req.params
        const user = await admin.auth().getUser(id)
        return res.status(200).send({ user })
    } catch (err) {
        return handleError(res, err)
    }
}

export async function patch(req: Request, res: Response) {
    try {
        const { id } = req.params
        const { displayName, password, email, role } = req.body

        if (!id || !displayName || !password || !email || !role) {
            return res.status(400).send({ message: 'Missing fields' })
        }

        const user = await admin.auth().updateUser(id, { displayName, password, email })
        await admin.auth().setCustomUserClaims(id, { role })

        return res.status(204).send({ user })
    } catch (err) {
        return handleError(res, err)
    }
}

export async function remove(req: Request, res: Response) {
    try {
        const { id } = req.params
        await admin.auth().deleteUser(id)
        return res.status(204).send({})
    } catch (err) {
        return handleError(res, err)
    }
}
//...
Now we can run the function locally. To do that, you first need to set up the service account key to be able to connect with the Auth API locally. Then run:
npm run serve
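One common way to provide that key locally (assuming you have downloaded a service-account JSON file from the Firebase console; the path below is just an example) is the GOOGLE_APPLICATION_CREDENTIALS environment variable, which the Admin SDK picks up automatically:

```shell
# Example only: point the Admin SDK at your downloaded service-account key
# before starting the local server.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
npm run serve
```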
Deploy the API
Great! Now that we have written our role-based API, we can deploy it to the web and start using it. Deploying with Firebase is super easy: we just need to run firebase deploy. Once the deploy is complete, we can access our API at the published URL.
You can check the API URL at{your-project}/functions/list.
In my case, it is.
Consuming the API
Once our API is deployed, we have several ways to use it—in this tutorial, I’ll cover how to use it via Postman or from an Angular app.
If we enter the List All Users URL (/api/users) in any browser, we’ll get the following:

The reason for this is that when sending the request from a browser, we perform a GET request without auth headers. This means our API is actually working as expected!
Our API is secured via tokens. In order to generate such a token, we need to call Firebase’s Client SDK and log in with a valid user/password credential. When successful, Firebase sends a token back in the response, which we can then add to the header of any subsequent request we want to perform.
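As a minimal sketch (authHeaders is our own helper, not part of any SDK), the header the API’s isAuthenticated middleware expects looks like this:

```typescript
// Build the Authorization header expected by the API: "Bearer <idToken>".
// Any HTTP client (fetch, HttpClient, Postman) can attach these headers.
function authHeaders(idToken: string): Record<string, string> {
  return { Authorization: `Bearer ${idToken}` };
}

const headers = authHeaders('eyJhbGciOi...'); // token returned by the sign-in call
console.log(headers.Authorization); // 'Bearer eyJhbGciOi...'
```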
From an Angular App
In this tutorial, I’ll just go over the important pieces to consume the API from an Angular app. The full repository can be accessed here, and if you need a step-by-step tutorial on how to create an Angular app and configure @angular/fire to use, it you can check this post.
So, back to signing in, we’ll have a <form> to let the user enter an email and password.
//...
<form [formGroup]="form">
    <div class="form-group">
        <label>Email address</label>
        <input type="email" formControlName="email" class="form-control" placeholder="Enter email">
    </div>
    <div class="form-group">
        <label>Password</label>
        <input type="password" formControlName="password" class="form-control" placeholder="Password">
    </div>
</form>
//...
And in the component class, we inject the AngularFireAuth service:
//...
form: FormGroup = new FormGroup({
    email: new FormControl(''),
    password: new FormControl('')
})

constructor(
    private afAuth: AngularFireAuth
) { }

async signIn() {
    try {
        const { email, password } = this.form.value
        await this.afAuth.auth.signInWithEmailAndPassword(email, password)
    } catch (err) {
        console.log(err)
    }
}
//..
At this point, we can sign in to our Firebase project.
And when we inspect the network requests in the DevTools, we can see that Firebase returns a token after verifying our user and password.
This token is the one we will send in the headers of our requests to the API we’ve built. One way to add the token to all requests is using an HttpInterceptor.
This file shows how to get the token from AngularFireAuth and add it to the request headers. We then provide the interceptor in the AppModule.
http-interceptors/auth-token.interceptor.ts
@Injectable({ providedIn: 'root' })
export class AuthTokenHttpInterceptor implements HttpInterceptor {
    constructor(
        private auth: AngularFireAuth
    ) { }

    intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
        return this.auth.idToken.pipe(
            take(1),
            switchMap(idToken => {
                let clone = req.clone()
                if (idToken) {
                    clone = clone.clone({ headers: req.headers.set('Authorization', 'Bearer ' + idToken) });
                }
                return next.handle(clone)
            })
        )
    }
}

export const AuthTokenHttpInterceptorProvider = {
    provide: HTTP_INTERCEPTORS,
    useClass: AuthTokenHttpInterceptor,
    multi: true
}
app.module.ts
@NgModule({
    //..
    providers: [
        AuthTokenHttpInterceptorProvider
    ]
    //...
})
export class AppModule { }
Once the interceptor is set, we can make requests to our API with httpClient. For example, here’s a UsersService where we call list all users, get a user by its ID, and create a user.
//…
export type CreateUserRequest = { displayName: string, password: string, email: string, role: string }

@Injectable({ providedIn: 'root' })
export class UserService {
    private baseUrl = '{your-functions-url}/api/users'

    constructor(
        private http: HttpClient
    ) { }

    get users$(): Observable<User[]> {
        return this.http.get<{ users: User[] }>(`${this.baseUrl}`).pipe(
            map(result => {
                return result.users
            })
        )
    }

    user$(id: string): Observable<User> {
        return this.http.get<{ user: User }>(`${this.baseUrl}/${id}`).pipe(
            map(result => {
                return result.user
            })
        )
    }

    create(user: CreateUserRequest) {
        return this.http.post(`${this.baseUrl}`, user)
    }
}
Now, we can call the API to get the user by its ID and list all users from a component like this:
//...
<ul *
  <li class="list-group-item d-flex justify-content-between align-items-center">
    <div>
      <h5 class="mb-1">{{user.displayName}}</h5>
      <small>{{user.email}}</small>
    </div>
    <span class="badge badge-primary badge-pill">{{user.role?.toUpperCase()}}</span>
  </li>
</ul>

<ul *
  <li *
    <div>
      <h5 class="mb-1">{{user.displayName}}</h5>
      <small class="d-block">{{user.email}}</small>
      <small class="d-block">{{user.uid}}</small>
    </div>
    <span class="badge badge-primary badge-pill">{{user.role?.toUpperCase()}}</span>
  </li>
</ul>
//...
//...
users$: Observable<User[]>
user$: Observable<User>

constructor(
  private userService: UserService,
  private userForm: UserFormService,
  private modal: NgbModal,
  private afAuth: AngularFireAuth
) { }

ngOnInit() {
  this.users$ = this.userService.users$
  this.user$ = this.afAuth.user.pipe(
    filter(user => !!user),
    switchMap(user => this.userService.user$(user.uid))
  )
}
//...
And here’s the result.
Notice that if we sign in with a user with
role=user, only the Me section will be rendered.
And we’ll get a 403 on the network inspector.
From Postman
Postman is a tool to build and make requests to APIs. This way, we can simulate that we are calling our API from any client app or a different service.
What we’ll demo is how to send a request to list all users.
Once we open the tool, we set the URL to {your-project}.cloudfunctions.net/api/users:
Next, on the Authorization tab, we choose Bearer Token and set the value we extracted from DevTools previously.
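If you want to sanity-check the token before pasting it into Postman, remember that a Firebase ID token is a standard JWT: its middle, dot-separated segment is base64url-encoded JSON. As an illustration, sketched here as plain Node.js JavaScript rather than Angular code, and using a hand-made token instead of a real Firebase one, you can decode the claims like this:

```javascript
// Decode a JWT payload (header.payload.signature) without verifying it.
// For debugging only -- never trust an unverified payload for authorization.
function decodeJwtPayload(token) {
  const payloadB64 = token.split('.')[1];
  const json = Buffer.from(payloadB64, 'base64url').toString('utf8');
  return JSON.parse(json);
}

// Hand-crafted example token (not a real Firebase token):
const payload = Buffer.from('{"role": "admin", "uid": "abc123"}').toString('base64url');
const token = `header.${payload}.signature`;
const claims = decodeJwtPayload(token); // -> { role: 'admin', uid: 'abc123' }
```

Running this on a real token would show the uid and any custom claims such as role. The signature is not checked here, so this is only for inspection, never for making authorization decisions.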
Conclusion
Congratulations! You’ve made it through the whole tutorial and now you’ve learned to create a user role-based API on Firebase.
We’ve also covered how to consume it from an Angular app and Postman.
Let’s recap the most important things:
- Firebase allows you to get quickly up and running with an enterprise-level auth API, which you can extend later on.
- Almost every project requires authorization—if you need to control access using a role-based model, Firebase Authentication lets you get started very quickly.
- The role-based model controls access by validating the role of the user requesting a resource, rather than the identity of specific users.
- Using an Express.js app in a Firebase Function, we can create a REST API and set handlers to authenticate and authorize requests.
- Leveraging built-in custom claims, you can create a role-based auth API and secure your app.
You can read further about Firebase auth here. And if you want to leverage the roles we have defined, you can use @angular/fire helpers.
Understanding the basics
Is Firebase Auth a REST API?
Firebase Auth is a service that allows your app to sign up and authenticate users against multiple providers such as Google, Facebook, Twitter, GitHub, and more. Firebase Auth provides SDKs that integrate easily with web, Android, and iOS. Firebase Auth can also be consumed as a REST API.
What is Firebase used for?
Firebase is a suite of cloud products that helps you build a serverless mobile or web app very quickly. It provides most of the common services involved in building any app (database, authentication, storage, hosting).
How do I get Firebase Auth API?
You can create a project with your Google account at firebase.google.com. Once the project is created, you can turn on Firebase Auth and start using it in your app.
Which is better, Firebase or AWS?
Firebase is a Google-backed product, one that Google keeps growing by adding more and more features. AWS Amplify is a similar product, mostly targeted at mobile apps. Both are great products, with Firebase being the older product with more features.
Is Firebase easy to use?
Firebase is a fully managed service with which you can get started very easily and not worry about infrastructure when you need to scale up. There's a lot of great documentation and blog posts with examples to quickly learn how it works.
Is Firebase good for large databases?
Firebase has two databases: Realtime Database and Firestore. Both are NoSQL databases with similar features and different pricing models. Firestore supports better querying features and both databases are designed so that querying latency is not affected by the database size. | https://www.toptal.com/firebase/role-based-firebase-authentication | CC-MAIN-2019-39 | refinedweb | 3,402 | 59.3 |
In this post I will write code in Python to export data from PostgreSQL to an Excel spreadsheet.
There is a baffling selection of reporting software out there with very sophisticated functionality and users can put together reports impressive enough to satisfy any manager or board. However, many people put pragmatics over aesthetics and will say "can't I just get the data in a spreadsheet?". So let's see how we can do that.
For this project I will be using psycopg2 for the database access and openpyxl for the spreadsheet creation. I have already written a few posts on using the psycopg2 PostgreSQL interface for both creating databases and manipulating data. This post will use the database and data created in those earlier posts but you can easily use it with your own database.
openpyxl
openpyxl is "A Python library to read/write Excel 2010 xlsx/xlsm files", and is simple to use but powerful. You can install it using pip/pip3 with this command:
Installing openpyxl
pip(3) install openpyxl
Here are a few useful links:
documentation at readthedocs.io
a handy tutorial on medium.com
The Project
This project consists of a simple module containing one function to take a connection and a query string as arguments and then run the query. The results will be written to an Excel file, the headings and file name of which are also function arguments.
The source code for this project consists of the following two files:
- pgtoexcel.py
- pgtoexcel_test.py
which can be downloaded as a zip or cloned/downloaded from Github. This project shares a file, pgconnection.py, with the previous post so you might like to save the files for this post in the same folder and share pgconnection.py.
Source Code Links
Now let's look at the code...
pgtoexcel.py
import psycopg2
import openpyxl
from openpyxl.styles import Font


def export_to_excel(connection, query_string, headings, filepath):

    """
    Exports data from PostgreSQL to an Excel spreadsheet using psycopg2.

    Arguments:
    connection   - an open psycopg2 connection (this function does not close the connection)
    query_string - SQL to get data
    headings     - list of strings to use as column headings
    filepath     - path and filename of the Excel file

    psycopg2 and file handling errors bubble up to calling code.
    """

    cursor = connection.cursor()
    cursor.execute(query_string)
    data = cursor.fetchall()
    cursor.close()

    wb = openpyxl.Workbook()
    sheet = wb.get_active_sheet()

    sheet.row_dimensions[1].font = Font(bold = True)

    # Spreadsheet row and column indexes start at 1
    # so we use "start = 1" in enumerate so
    # we don't need to add 1 to the indexes.
    for colno, heading in enumerate(headings, start = 1):
        sheet.cell(row = 1, column = colno).value = heading

    # This time we use "start = 2" to skip the heading row.
    for rowno, row in enumerate(data, start = 2):
        for colno, cell_value in enumerate(row, start = 1):
            sheet.cell(row = rowno, column = colno).value = cell_value

    wb.save(filepath)
Firstly we need to import psycopg2 and openpyxl, as well as Font from openpyxl.styles.
The export_to_excel function takes a connection, which needs to be open and will be left open. This enables calling code to re-use connections or a connection pool. It also takes a query string which would typically be in the form "SELECT...FROM". The other arguments are a list of strings to use as column headings and the path to save the spreadsheet to.
Within export_to_excel we first grab the data using the method already described in this post.
Now we get to the spreadsheet stuff, firstly creating a new Workbook and getting the current worksheet. Then we set the font of the first row to bold as it will contain the column headings. We use index [1] because spreadsheet row and column indexes start at 1, not 0.
Next we iterate the column headings list using the enumerate function with a start argument of 1 rather than the default 0. This is used to index the columns where we set the cell text.
Next we iterate the rows and columns of data using a nested loop, starting the row iteration at 2 to skip the headings row. Within the loop we simply set the spreadsheet cell values to the data items.
Then all we need to do is save the spreadsheet to the specified file path. There is no exception handling in this function so any exceptions will bubble up to the calling code where they can better be dealt with.
As a quick summary of using openpyxl to create a spreadsheet:
- Call openpyxl.Workbook() to create a workbook
- Call get_active_sheet() on the workbook to get a sheet
- Set sheet.cell([row number], [column number]).value for each cell you want to write to
- Call save(filepath) on the workbook
Now we can try out the function.
pgtoexcel_test.py
import psycopg2

import pgconnection
import pgtoexcel


def main():

    print("-----------------------")
    print("| codedrome.com       |")
    print("| PostgreSQL to Excel |")
    print("-----------------------")

    try:

        conn = pgconnection.get_connection("codeinpython")

        query_string = "SELECT galleryname, gallerydescription, phototitle, photodescription FROM galleriesphotos"
        filepath = "galleriesphotos.xlsx"

        pgtoexcel.export_to_excel(conn,
                                  query_string,
                                  ("Gallery Name", "Gallery Description", "Photo Title", "Photo Description"),
                                  filepath)

    except Exception as e:

        print(type(e))
        print(e)


main()
This code uses the pgconnection.get_connection function from earlier posts but you can substitute your own database connection here. We also have query_string and filepath variables which again you can change if you wish. Then all we need to do is call pgtoexcel.export_to_excel.
The exception handling here is minimal but as pgtoexcel.export_to_excel tries to connect to a database and then write to the file system there is plenty that can go wrong so you might like to include two (or more) exception handlers here.
My usual approach in these situations is to include exception handling like this before deliberately breaking the code (eg. with an invalid database password) to see what sort of exceptions it throws up, and then expanding the exception handling into something more specialized.
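As a sketch of what that more specialized handling could look like, here is a hedged example. The exception types below are illustrative stand-ins chosen so the sketch runs anywhere; in the real program you would catch psycopg2.Error subclasses for database failures:

```python
def run_report(export_fn, *args):
    """Call an export function, mapping failures to friendly messages.

    export_fn is any callable like pgtoexcel.export_to_excel. The
    exception types below are stand-ins; swap in psycopg2.Error
    for real database failures.
    """
    try:
        export_fn(*args)
    # Note: ConnectionError subclasses OSError, so it must come first.
    except ConnectionError as e:   # stand-in for a database/connection failure
        return f"database problem: {e}"
    except OSError as e:           # bad paths, permissions, full disks
        return f"file problem: {e}"
    except Exception as e:         # anything unexpected
        return f"unexpected {type(e).__name__}: {e}"
    return "report written"
```

Once you have seen which concrete exceptions your broken test runs raise, you can replace these stand-ins with the real types and tailor the messages for each case.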
We now have enough to run, so run this:
Running the Program
python3.7 pgtoexcel_test.py
The console output is far too boring to bother showing but if you open the folder where you saved the source code you'll find a new spreadsheet has been created with the headings and data.
Apart from the bold column headings I have made no attempt to format the spreadsheet so it is pretty raw. However, there is plenty of scope for enhancement of both formatting and content, up to very complex spreadsheet-based reporting. | https://www.codedrome.com/exporting-postgresql-data-to-excel-with-python/ | CC-MAIN-2020-34 | refinedweb | 1,079 | 63.9 |
cd /usr/ports/audio/picard/ && make install clean
pkg install picard
Number of commits found: 79
audio/picard: Update to 2.5.6
Changes:
audio/picard: Update to 2.5.5
Changes:
Update PyQt5 to 5.15.2, sip to 5.5.0, py-qtbuilder to 1.6.0 and py-qt5-sip to
12.8.1
PR: 251764
Exp-run by: antoine
audio/picard: Update to 2.5.1
- Raise required version of audio/py-mutagen to 1.37
- Use DISTVERSION in WRKSRC to make it easier to test pre-release versions
audio/picard: Add missing runtime dependency devel/py-dateutil
Reported by: arved (via private mail)
MFH: 2020Q4
audio/picard: Update to 2.4.4
Changes:
audio/picard: Update to 2.4.2
Changes:
Update audio/picard to 2.3
Update audio/picard to 2.2.3
Changes:
audio/picard: Update to 2.2.2
PR: 241681
Submitted by: Adam Jimerson <vendion@gmail.com>
Update audio/picard to 2.2.1
audio/picard: Update to 2.1.3
Changes:
audio/picard: Update to 2.1.2
audio/picard: Update to 2.1
Changes: 2.0.4
- Be more specific with audio/picard-plugins minimum version requirement
Update to 2.0.3
Update to 2.0.2
Update audio/picard to 2.0.1
This is a major version update that switches from using PyQt4 and Python 2.7
to PyQt5 and Python 3.5+
Update audio/picard-plugins to 20180707 snapshot from the 2.0 branch
Use PY_FLAVOR for dependencies.
FLAVOR is the current port's flavor, it should not be used outside of
this scope.
Sponsored by: Absolight
sip is needed as a runtime dependency
Traceback (most recent call last):
File "/usr/local/bin/picard", line 2, in <module>
from picard.tagger import main; main('/usr/local/share/locale', True)
File "/usr/local/lib/python2.7/site-packages/picard/tagger.py", line 22, in
<module>
import sip
ImportError: No module named sip
Update to 1.4.2 [1]
Fix LICENSE
Update WWW
Plugins are now in a separate port (audio/picard-plugins). They are
maintained in a separate repository and no longer shipped with the picard
source.
PR: 223354 [1]
Submitted by: Greg V <greg@unrelenting.technology> , iXsystems, and RootBSD
Last updated:2021-01-14 20:39:03 | https://www.freshports.org/audio/picard/ | CC-MAIN-2021-04 | refinedweb | 415 | 53.78 |
Sep 22, 2009 04:09 PM|LINK
Hi all,
I have a need to have some of my web pages accessible from a PDA or cell phone. I want to use one of the device emulators for testing. So I started messing around but I cannot get anything to work. Here is what I did:
1. I created a new ASP.Net Web Site in VS 2008.
2. Added a "Hello World" label control to Default.aspx
3. Clicked Tools, Device Emulator Manager.
4. Right-clicked on Windows Mobile 6 Professional Square Emulator and clicked Connect
5. Right-clicked on Windows Mobile 6 Professional Square Emulator again and clicked Cradle
6. An emulator appears. I also get another screen that allows me to change settings. I close this screen.
7. I enter this url:. This web site comes up correctly so I assume my emulator is correct?
8. Now I enter my url which is:
9. I get this error message:
Cannot connect with current connection settings. To change your connection settings, tap Settings.
What am I doing wrong? I did click on the Settings link but I have no idea what to do next. Also, the web page displays correctly when I run it on my PC.
Thanks,
Bob
Sep 23, 2009 05:17 AM|LINK
Hi,
If you want to build a website for mobile devices you should use mobile web forms. In your project add a new item and choose "Mobile web form". Probably msn automatically redirects to the mobile version of the site.
Here are some links also:
Hope this helps!
Sep 23, 2009 12:54 PM|LINK
Hi,
It should available if you're creating a website. It's not available if you're creating a web application.
Sep 23, 2009 01:09 PM|LINK
Hello ASP.NET website.
ii) The website will be created with default web form “Default.aspx”, keep the name as it is.
iii) Add a Mobile Web Form to the website using “Add New Item -> Mobile Web Form”. Name the mobile web form to “M.aspx”
Now browse
It should work with above.
Suggestion:
As you are starting with mobile web development, please read this it will help you a lot with customizing your mobile pages as per PDA screens and capabilities supported by different PDA's.
Thanks
Sep 23, 2009 09:13 PM|LINK
Ok, I downloaded the zip files from your site. I used the ASP.Net Web Site VB file and unzipped it in the Templates/ItemTemplates/ Visual Basic folder like the Readme said.
I then went back to my VS 2008 ASP.Net web site and did the Add New Item and this time the Mobile Web Form was there like you said. I named it M.aspx and added a label with some text. If I run this in my web site then the web page displays properly. But if I Connect / Cradle my device emulator and then enter:, I get the same error as I posted before.
This is very frustrating. It seems like it would be easier to use these emulators.
Sep 24, 2009 04:47 AM|LINK
Hi
1) Can you send me your source code sample?
2) Try to test on a different emulator listed here, go for "Mobile Interactive Testing Environment".
Thanks
Sep 24, 2009 05:06 AM|LINK
Hello,
1. ASP.NET Mobile Controls are obsolete since years. It seems that Microsoft will mark the System.Web.UI.MobileControls namespace as obsolete in Microsoft Visual Studio 2010/.NET Framework 4.0. cf. Instead of ASP.NET Mobile Web Forms I would recommend to use Standard ASP.NET Forms and the controls in System.Web.UI.WebControls namespace.
2. It is well known, and this forum contains many threads about it, that the Microsoft Device Emulator cannot connect to the development server under localhost. Search this forum or try a search engine.
17 replies
Last post Sep 25, 2009 05:10 AM by SKT_01 | http://forums.asp.net/t/1473591.aspx/1 | CC-MAIN-2013-20 | refinedweb | 664 | 77.33 |
This is Part 2 of Q&A. Read first part here – JUnit (Java Unit Testing) interview questions and answers – Part1.
Question1: What is Junit?
Answer: Java + unit testing = Junit
- Junit is open source testing framework developed for unit testing java code and is now the default framework for testing Java development.
- It has been developed by Erich Gamma and Kent Beck.
- It is an application programming interface for developing test cases in java which is part of the XUnit Family.
- It helps the developer to write and execute repeatable automated tests.
- Eclipse IDE comes with both Junit and its plug-in for working with Junit.
- Junit also has been ported to various other languages like PHP, Python, C++ etc.
Question2: Who should use Junit, developers or testers?
Answer: Used by developers to implement unit tests in Java. Junit is designed for unit testing, which is really a coding process, not a testing process. But many testers or QA engineers are also required to use Junit for unit testing.
Question3: Why do you use Junit to test your code?
Answer: Junit provides a framework to achieve all the following-:
- Test early and often automated testing.
- Junit tests can be integrated with the build so regression testing can be done at unit level.
- Test Code reusage.
- Also when there is a transfer Junit tests act as a document for the unit tests.
Question4: How do you install Junit?
Answer: Let’s see the installation of Junit on windows platform:
1. Download the latest version of Junit from
2. Uncompress the zip to a directory of your choice.
3. Add JUnit to the classpath:
set CLASSPATH=%CLASSPATH%;%JUNIT_HOME%junit.jar
4. Test the installation by running the sample tests that come along with Junit, located in the installation directory.
Question5: What are unit tests?
Answer: A unit test is nothing more than the code wrapper around the application code that can be executed to view pass – fail results.
Question6: When are Unit Tests written in Development Cycle?
Answer: Tests are written before the code during development in order to help coders write the best code. Test-driven development is the ideal way to create a bug free code. When all tests pass, you know you are done! Whenever a test fails or a bug is reported, we must first write the necessary unit test(s) to expose the bug(s), and then fix them. This makes it almost impossible for that particular bug to resurface later.
Question7: How to write a simple Junit test class?
Answer: Define a public class with public void methods annotated with @Test (JUnit 4), each of which exercises the code under test with assertions. A minimal example:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    @Test
    public void testAdd() {
        assertEquals(5, 2 + 3);
    }
}

In JUnit 3 you instead extend junit.framework.TestCase and name the test methods so that they start with "test".
Question8: What Is Junit TestCase?
Answer: JUnit TestCase, i.e. junit.framework.TestCase, is the base class that JUnit 3-style test classes extend. It provides the assert methods along with the setUp() and tearDown() hooks, and each of its public test methods is run as an individual test.
Question9: What Is Junit TestSuite?
Answer: JUnit TestSuite is a container class under package junit.framework.TestSuite. It allows us to group multiple test cases into a collection and run them together. (JUnit 4.4 does not support TestSuite class now).
Question10: What is Junit Test Fixture?
Answer: A test fixture is a fixed state of a set of objects used as a baseline for running tests, so that every test starts from a known, repeatable environment. JUnit creates the fixture with setUp() and cleans it up with tearDown() (or, in JUnit 4, with methods annotated @Before and @After), which run before and after each test method.
Question11: What happens if a Junit test method is declared as “private”?
Answer: If a Junit test method is declared as “private”, the compilation will pass ok. But the execution will fail. This is because Junit requires that all test methods must be declared as “public”.
Question12:”.
Question13: Why not just use system.out.println () for Unit Testing?
Answer: Debugging the code using System.out.println() will lead to manual scanning of the whole output every time the program is run to ensure the code is doing the expected operations. Moreover, in the long run, it takes less time to code JUnit methods and test them on our files.
Question14: The methods Get () and Set () should be tested for which conditions?
Answer:.
Question15: For which conditions, the methods Get () and Set () can be left out for testing?
Answer: You should do this test to check if a property has already been set (in the constructor) at the point you wish to call getX(). In this case you must test the constructor, and not the getX() method. This kind of test is especially useful if you have multiple constructors.
Question16:. If you have two groups of tests that you think you’d like to execute separately from one another, it is wise to place them in separate test classes.
Question17: How do you test a “protected” method?
Answer: A protected method can only be accessed within the same package where the class is defined. So, testing a protected method of a target class means we need to define your test class in the same package as the target class.
Question18: How do you test a “private” method?
Answer: A private method only be accessed within the same class. So there is no way to test a “private” method of a target class from any test class. A way out is that you can perform unit testing manually or can change your method from “private” to “protected”.
Question19: Do you need to write a main () method compulsorily in a Junit test case class?
Answer: No. But developers still sometimes write a main() method in a JUnit test case class to call a JUnit test runner and run all tests defined in the class, like:
public static void main(String[] args) {
    junit.textui.TestRunner.run(Calculator.class);
}
Since you can call a JUnit runner to run a test case class as a system command, explicit main() for a Junit test case is not recommended. junit.textui.TestRunner.run() method takes the test class name as its argument. This method automatically finds all class methods whose name starts with test. Thus it will result in below mentioned findings:
- testCreateLogFile()
- testExists()
- testGetChildList()
It will execute each of the 3 methods in unpredictable sequence (hence test case methods should be independent of each other) and give the result in console.
Question20: What happens if a test method throws an exception?
Answer: If you write a test method that throws an exception by itself or by the method being tested, the JUnit runner will declare that test as fail.
The example test below is designed to let the test fail by throwing the uncaught IndexOutOfBoundsException exception:
import org.junit.*;
import java.util.*;

public class UnexpectedExceptionTest2 {

    // throw any unexpected exception
    @Test
    public void testGet() throws Exception {
        ArrayList emptyList = new ArrayList();
        Exception anyException = null;

        // don't catch any exception
        Object o = emptyList.get(1);
    }
}
If you run this test, it will fail:
OUTPUT:
There was 1 failure:
testGet(UnexpectedExceptionTest2)
java.lang.IndexOutOfBoundsException: Index: 1, Size: 0
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at UnexpectedExceptionTest2.testGet(UnexpectedExceptionTest2.ja
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java
at org.junit.internal.runners.MethodRoadie.runTestMethod(Method
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.j
at org.junit.internal.runners.MethodRoadie.runBeforesThenTestTh
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.jav
at org.junit.internal.runners.JUnit4ClassRunner.invokeTestMetho
at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUni
at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4Cla
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassR
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoa
at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4Class
at org.junit.internal.runners.CompositeRunner.runChildren(Compo
at org.junit.internal.runners.CompositeRunner.run(CompositeRunn
Tests run: 1, Failures: 1
Question21: When objects are garbage collected after a test is executed?
Answer: By design, JUnit builds the whole tree of test instances up front and the runner holds references to them until the entire run completes, so objects stored in instance fields are not garbage collected until the end of the run. Setting such fields to null in tearDown() allows them to be reclaimed earlier, which matters for large test suites.
Question22: What is Java “assert” statement?
Answer: Java assertions allow the developer to put “assert” statements in Java source code to help unit testing and debugging.
An “assert” statement has the following format:
assert boolean_expression : string_expression;
When this statement is executed:
- If boolean_expression evaluates to true, the statement will pass normally.
- If boolean_expression evaluates to false, the statement will fail with an “AssertionError” exception.
Helper methods that help to determine whether the methods being tested are performing correctly:
- assertEquals([String message], expected, actual) –any kind of object can be used for testing equality –native types and objects, also specify tolerance when checking for equality in case of floating point numbers.
- assertNull([String message], java.lang.Object object) –asserts that a given object is null
- assertNotNull([String message], java.lang.Object object) –asserts that a given object is not null
- assertTrue([String message], Boolean condition) –Asserts that the given Boolean condition is true
- assertFalse([String message], Boolean condition) –Asserts that the given Boolean condition is false
- fail([String message]) –Fails the test immediately with the optional message
Example-:
public void testSum() {
    int num1 = 15;
    int num2 = 50;
    int total = 35;
    int sum = 0;

    sum = num1 + num2;

    assertEquals(sum, total);
}
Before every JUnit test I run certain things need to be set up. Some properties need to be loaded, a database connection needs to be made, a separate J2SE application needs to be started, etc. When every single test has finished the database connection can be killed and J2SE application shut down.
I can accomplish this by using @BeforeClass and @AfterClass annotations in a test suite, but that limits me to only being able to run tests inside the suite. If I want to run an individual test outside of a suite it won't run the suite setup and teardown methods. Likewise, if I want to run an individual test method (through an IDE) it won't run the setup and teardown from the suite.
Is there a way to setup JUnit tests so that, no matter how they're run, through a suite, or a test case, or an individual method, they always run a setup method only once before running anything, and a teardown only once after every test has been executed? Having all of the test cases extend an abstract class with a static initializer solves the setup problem, but not teardown.
I was able to accomplish what I needed using two separate methods. I'd prefer if just one method worked, but this will do...
I have a custom RunListener class that implements testRunFinished:
public class MyRunListener extends RunListener { @Override public void testRunFinished(Result result) throws Exception { //Do whatever teardown needs to be done } }
I then have a custom BlockJUnit4ClassRunner class with the following run() method:
private static boolean listenerAdded = false; @Override public void run(RunNotifier notifier) { //listenerAdded required or else the listener will be added once for every test case, and testRunFinished will be run multiple times if(!listenerAdded) { listenerAdded = true; notifier.addListener(new MyRunListener()); } super.run(notifier); }
An abstract test case class uses the annotation:
@RunWith(MyRunner.class)
I was hoping that I could also implement testRunStarted in MyRunListener but it didn't work. Apparently adding the listener like this in MyRunner, the listener isn't added until after the testRunStarted method would have been run, so it isn't being executed. The workaround for this is, as mentioned in the original question, to use a static initilizer in AbstractTestCase:
@RunWith(MyRunner.class) public class AbstractTestCase { private static boolean setupDone = false; static { if(!setupDone) { setupDone = true; //Do whatever setup needs to be done } } ...
If anyone knows of a way to add MyRunListener that will allow the use of testRunStarted that would be nice, but in the meantime this works. | https://codedump.io/share/kQrsvqx4E17a/1/run-junit-setup-and-teardown-exactly-once-for-any-tests-run | CC-MAIN-2018-05 | refinedweb | 426 | 50.77 |
In this article, you’ll learn how to:
- Structure an HTML file to act as the template of a single-page web application
- Use Cascading Style Sheets (CSS) to style the presentation of an application
- Use native JavaScript to add interactivity to an application
- Use JavaScript to make HTTP AJAX requests to the REST API you developed in Part 3 of this series
You can get all of the code you’ll see in this tutorial at the link below:
Download Code: Click here to download the code you'll use to learn about Python REST APIs with Flask, Connexion, and SQLAlchemy in this tutorial.
Who This Article Is For
Part 1 of this series guided you through building a REST API, and Part 2 showed you how to connect that REST API to a database. In Part 3, you added relationships to the REST API and the supporting database.
This article is about presenting that REST API to a user as a browser-based web application. This combination gives you both front-end and back-end abilities, which is a useful and powerful skill set.
Creating Single-Page Applications
In Part 3 of this series, you added relationships to the REST API and database to represent notes associated with people. In other words, you created a kind of mini-blog. The web applications you built in part 3 showed you one way to present and interact with the REST API. You navigated between three Single-Page Applications (SPA) to access different parts of the REST API.
While you could have combined that functionality into a single SPA, that approach would make the concepts of styling and interactivity more complex, without much added value. For that reason, each page was a complete, standalone SPA.
In this article, you’ll focus on the People SPA, which presents the list of people in the database and provides an editor feature to create new people and update or delete existing ones. The Home and Notes pages are conceptually similar.
What Existing Frameworks Are There?
There are existing libraries that provide inbuilt and robust functionality for creating SPA systems. For example, the Bootstrap library provides a popular framework of styles for creating consistent and good-looking web applications. It has JavaScript extensions that add interactivity to the styled DOM elements.
There are also powerful web application frameworks, like React and Angular, that give you complete web application development systems. These are useful when you want to create large, multi-page SPAs that would be cumbersome to build from scratch.
Why Build Your Own?
With the availability of tools like those listed above, why would you choose to create an SPA from scratch? Take Bootstrap, for instance. You can use it to create SPAs that look excellent, and you can certainly use it with your JavaScript code!
The problem is that Bootstrap has a steep learning curve you’ll need to climb if you want to use it well. It also adds a lot of Bootstrap-specific attributes to the DOM elements defined in your HTML content. Likewise, tools like React and Angular also have significant learning curves you’ll need to overcome. However, there’s still a place for web applications that don’t rely on tools like these.
Often, when you’re building a web application, you want to build a proof-of-concept first to see if the application is at all useful. You’ll want to get this up and running quickly, so it can be faster for you to roll your own prototype and upgrade it later. Since you won’t invest much time in the prototype, it won’t be too costly to start over and create a new application with a supported, fully-featured framework.
There’s a gap between what you’re going to develop with the People app in this article and what you could build with a complete framework. It’s up to you to decide where the tipping point is between providing the functionality yourself or adopting a framework.
Parts of Single-Page Applications
There are a few main forms of interactivity in traditional web-based systems. You can navigate between pages and submit a page with new information. You can fill out forms containing input fields, radio buttons, checkboxes, and more. When you perform these activities, the webserver responds by sending new files to your browser. Then, your browser renders the content again.
Single-page applications break this pattern by loading everything they need first. Then, any interactivity or navigation is handled by JavaScript or by calls to the server behind the scenes. These activities update the page content dynamically.
There are three major components of a single-page application:
- HTML provides the content of a web page, or what is rendered by your browser.
- CSS provides the presentation, or style, of a web page. It defines how the content of the page should look when rendered by your browser.
- JavaScript provides the interactivity of a web page. It also handles communication with the back-end server.
Next, you’ll take a closer look at each of these major components.
HTML
HTML is a text file delivered to your browser that provides the primary content and structure for a single-page application. This structure includes the definitions for
id and
class attributes, which are used by CSS to style the content and JavaScript to interact with the structure. Your browser parses the HTML file to create the Document Object Model (DOM), which it uses to render the content to the display.
The markup within an HTML file includes tags, like paragraph tags
<p>...</p> and header tags
<h1>...</h1>. These tags become elements within the DOM as your browser parses the HTML and renders it to the display. The HTML file also contains links to external resources that your browser will load as it parses the HTML. For the SPA you’re building in this article, these external resources are CSS and JavaScript files.
CSS
Cascading Style Sheets (CSS) are files that contain styling information that will be applied to whatever DOM structure is rendered from an HTML file. In this way, the content of a web page can be separated from its presentation.
In CSS, the style for a DOM structure is determined by selectors. A selector is just a method of matching a style to elements within the DOM. For example, the
p selector in the code block below applies styling information to all paragraph elements:
p { font-weight: bold; background-color: cyan; }
The above style will apply to all paragraph elements in the DOM. The text will appear as bold and have a background color of cyan.
The cascading part of CSS means that styles defined later, or in a CSS file loaded after another, will take precedence over any previously defined style. For example, you can define a second paragraph style after the style above:
p { font-weight: bold; background-color: cyan; } p { background-color: cornflower; }
This new style definition would modify the existing style so that all paragraph elements in the DOM will have a background color of
cornflower. This overrides the
background-color of the previous style, but it leaves the
font-weight setting intact. You could also define the new paragraph style in a CSS file of its own.
The
id and
class attributes let you apply a style to specific individual elements in the DOM. For example, the HTML to render a new DOM might look like this:
<p> This is some introductory text </p> <p class="panel"> This is some text contained within a panel </p>
This will create two paragraph elements within the DOM. The first has no
class attribute, but the second has a
class attribute of
panel. Then, you can create a CSS style like this:
p { font-weight: bold; width: 80%; margin-left: auto; margin-right: auto; background-color: lightgrey; } .panel { border: 1px solid darkgrey; border-radius: 4px; padding: 10px; background-color: lightskyblue; }
Here, you define a style for any elements that have the
panel attribute. When your browser renders the DOM, the two paragraph elements should look like this:
Both paragraph elements have the first style definition applied to them because the
p selector selects them both. But only the second paragraph has the
.panel style applied to it because it’s the only element with the class attribute
panel that matches that selector. The second paragraph gets new styling information from the
.panel style, and overrides the
background-color style defined in the
p style.
JavaScript
JavaScript provides all of the interactive features for an SPA, as well as dynamic communication with the REST API provided by the server. It also performs all of the updates to the DOM, allowing an SPA to act much like a full Graphical User Interface (GUI) application like Word or Excel.
As JavaScript has evolved, it’s become easier and more consistent to work with the DOM provided by modern browsers. You’ll be using a few conventions, like namespaces and separation of concerns, to help keep your JavaScript code from conflicting with other libraries you might include.
Note: You’ll be creating Single-Page Applications using native JavaScript. In particular, you’ll use the ES2017 version, which works with many modern browsers, but could be problematic if your goal is to support older browser versions.
Modules and Namespaces
You might already know about namespaces in Python and why they’re valuable. In short, namespaces give you a way to keep the names in your program unique to prevent conflicts. For example, if you wanted to use
log() from both the
math and
cmath modules, then your code might look something like this:
>>> import math >>> import cmath >>> math.log(10) 2.302585092994046 >>> cmath.log(10) (2.302585092994046+0j)
The Python code above imports both the
math and
cmath modules, then calls
log(10) from each module. The first call returns a real number and the second returns a complex number, which
cmath has functions for. Each instance of
log() is unique to its own namespace (
math or
cmath), meaning the calls to
log() don’t conflict with each other.
Modern JavaScript has the ability to import modules and assign namespaces to those modules. This is useful if you need to import other JavaScript libraries where there might be a name conflict.
If you look at the end of the
people.js file, then you’ll see this:
301 // Create the MVC components 302 const model = new Model(); 303 const view = new View(); 304 const controller = new Controller(model, view); 305 306 // Export the MVC components as the default 307 export default { 308 model, 309 view, 310 controller 311 };
The code above creates the three components of the MVC system, which you’ll see later on in this article. The default export from the module is a JavaScript literal object. You import this module at the bottom of the
people.html file:
50 <script type="module"> 51 // Give the imported MVC components a namespace 52 import * as MVC from "/static/js/people.js"; 53 54 // Create an intentional global variable referencing the import 55 window.mvc = MVC; 56 </script>
Here’s how this code works:
Line 50 uses
type="module"to tell the system that the file is a module and not just a JavaScript file.
Line 52 imports the default object from
people.jsand assigns it the name
MVC. This creates a namespace called
MVC. You can give the imported object any name that doesn’t conflict with other JavaScript libraries you might not have control over.
Line 55 creates a global variable, which is a convenient step. You can use this to inspect the
mvcobject with a JavaScript debugger and look at
model,
view, and
controller.
Note: Because
MVC is an imported module and not just an included file, JavaScript will default to strict mode, which has some advantages over non-strict mode. One of the biggest is that you can’t use undefined variables.
Without strict mode turned on, this is perfectly legal:
var myName = "Hello"; myNane = "Hello World";
Do you see the error? The first line creates a variable called
myName and assigns the literal string
"Hello" to it. The second line looks like it changes the contents of the variable to
"Hello World", but that isn’t the case!
In the second line,
"Hello World" is assigned to the variable name
myNane, which is misspelled with an
n. In non-strict JavaScript, this creates two variables:
- The correct variable
myName
- The unintended typo version
myNane
Imagine if these two lines of JavaScript code were separated by many others. This could create a run-time bug that’s difficult to find! When you use strict mode, you eliminate errors like this one by raising an exception should your code attempt to use an undeclared variable.
Naming Conventions
For the most part, the JavaScript code you’re using here is in camel case. This naming convention is widely used in the JavaScript community, so the code examples reflect that. However, your Python code will use snake case, which is more conventional in the Python community.
This difference in naming can be confusing where your JavaScript code interacts with Python code, and especially where shared variables enter the REST API interface. Be sure to keep these differences in mind as you write your code.
Separation of Concerns
The code that drives an SPA can be complicated. You can use the Model–View–Controller (MVC) architectural pattern to simplify things by creating a separation of concerns. The Home, People, and Notes SPAs use the following MVC pattern:
The Model provides all access to the server REST API. Anything presented on the display comes from the model. Any changes to the data go through the model and back to the REST API.
The View controls all display handling and DOM updates. The view is the only part of the SPA that interacts with the DOM and causes the browser to render and re-render any changes to the display.
The Controller handles all user interaction and any user data entered, like click events. Because the controller reacts to user input, it also interacts with the Model and View based on that user input.
Here’s a visual representation of the MVC concept as implemented in the SPA code:
In the illustration above, the Controller has a strong connection with both the Model and the View. Again, this is because any user interaction the Controller handles might require reaching out to the REST API to get or update data. It may even require updating the display.
The dotted line that goes from the Model to the Controller indicates a weak connection. Because calls to the REST API are asynchronous, the data that the Model provides to the Controller returns at a later time.
Creating the People SPA
Your mini-blog demonstration app has pages for Home, People, and Notes. Each of these pages is a complete, standalone SPA. They all use the same design and structure, so even though you’re focusing on the People application here, you’ll understand how to construct all of them.
People HTML
The Python Flask web framework provides the Jinja2 templating engine, which you’ll use for the People SPA. There are parts of the SPA that are common to all three pages, so each page uses the Jinja2 template inheritance feature to share those common elements.
You’ll provide the HTML content for the People SPA in two files:
parent.html and
people.html files. You can get the code for these files at the link below:
Download Code: Click here to download the code you'll use to learn about Python REST APIs with Flask, Connexion, and SQLAlchemy in this tutorial.
Here’s what your
parent.html will look like:
1 <!DOCTYPE html> 2 <html lang="en"> 3 <head> 4 <meta charset="UTF-8"> 5 {% block head %} 6 <title>{% block title %}{% endblock %} Page</title> 7 {% endblock %} 8 </head> 9 <body> 10 <div class="navigation"> 11 <span class="buttons"> 12 <a href="/">Home</a> 13 <a href="/people">People</a> 14 </span> 15 <span class="page_name"> 16 <div></div> 17 </span> 18 <span class="spacer"></span> 19 </div> 20 21 {% block body %} 22 {% endblock %} 23 </body> 24 25 {% block javascript %} 26 {% endblock %} 27 28 </html>
parent.html has a few major elements:
- Line 1 sets the document type as
<!DOCTYPE html>. All new HTML pages begin with this declaration. Modern browsers know this means to use the HTML 5 standard, while older browsers will fall back to the latest standard they can support.
- Line 4 tells the browser to use UTF-8 encoding.
- Lines 10 to 19 define the navigation bar.
- Lines 21 and 22 are Jinja2 block markers, which will be replaced by content in
people.html.
- Lines 25 and 26 are Jinja2 block markers that act as a placeholder for JavaScript code.
The
people.html file will inherit the
parent.html code. You can expand the code block below to see the whole file:
1 {% extends "parent.html" %} 2 {% block title %}People{% endblock %} 3 {% block head %} 4 {% endblock %} 5 {% block page_name %}Person Create/Update/Delete Page{% endblock %} 6 7 {% block body %} 8 <div class="container"> 9 <input id="url_person_id" type="hidden" value="{{ person_id }}" /> 10 <div class="section editor"> 11 <div> 12 <span>Person ID:</span> 13 <span id="person_id"></span> 14 </div> 15 <label for="fname">First Name 16 <input id="fname" type="text" /> 17 </label> 18 <br /> 19 <label for="lname">Last Name 20 <input id="lname" type="text" /> 21 </label> 22 <br /> 23 <button id="create">Create</button> 24 <button id="update">Update</button> 25 <button id="delete">Delete</button> 26 <button id="reset">Reset</button> 27 </div> 28 <div class="people"> 29 <table> 30 <caption>People</caption> 31 <thead> 32 <tr> 33 <th>Creation/Update Timestamp</th> 34 <th>Person</th> 35 </tr> 36 </thead> 37 </table> 38 </div> 39 <div class="error"> 40 </div> 41 </div> 42 <div class="error"> 43 </div> 44 45 {% endblock %}
people.html has just two major differences:
- Line 1 tells Jinja2 that this template inherits from the
parent.htmltemplate.
- Lines 7 to 45 create the body of the page. This includes the editing section and an empty table to present the list of people. This is the content inserted in the
{% block body %}{% endblock %}section of the
parent.htmlfile.
The HTML page generated by
parent.html and
people.html contains no styling information. Instead, the page is rendered in the default style of whatever browser you use to view it. Here’s what your app looks like when rendered by the Chrome browser:
It doesn’t look much like a Single-Page Application! Let’s see what you can do about that.
People CSS
To style the People SPA, you first need to add the
normalize.css style sheet. This will make sure that all browsers consistently render elements more closely to HTML 5 standards. The specific CSS for the People SPA is supplied by two style sheets:
parent.css, which you pull in with
parent.html
people.css, which you pull in with
people.html
You can get the code for these stylesheets at the link below:
Download Code: Click here to download the code you'll use to learn about Python REST APIs with Flask, Connexion, and SQLAlchemy in this tutorial.
You’ll add both
normalize.css and
parent.css to the
<head>...</head> section of
parent.html:
1 <head> 2 <meta charset="UTF-8"> 3 {% block head %} 4 <title>{% block title %}{% endblock %} Page</title> 5 <link rel="stylesheet" href=""> 6 <link rel="stylesheet" href="/static/css/parent.css"> 7 {% endblock %} 8 </head>
Here’s what these new lines do:
- Line 5 gets
normalize.cssfrom a content delivery network (CDN), so you don’t have to download it yourself.
- Line 6 gets
parent.cssfrom your app’s
staticfolder.
For the most part,
parent.css sets the styles for the navigation and error elements. It also changes the default font to Google’s Roboto font using these lines:
5 @import url(); 6 7 body, .ui-btn { 8 font-family: Roboto; 9 }
You pull in the Roboto font from a Google CDN. Then, you apply this font to all elements in the SPA body that also have a class of
.ui-btn.
Likewise,
people.css contains styling information specific to the HTML elements that create the People SPA. You add
people.css to the
people.html file inside the Jinja2
{% block head %} section:
3 {% block head %} 4 {{ super() }} 5 <link rel="stylesheet" href="/static/css/people.css"> 6 {% endblock %}
The file contains a few new lines:
- Line 2 has a call to {{ super() }}. This tells Jinja2 to include anything that exists in the
{% block head %}section of
parent.html.
- Line 3 pulls in the
people.cssfile from your app’s static folder.
After you include the stylesheets, your People SPA will look more like this:
The People SPA is looking better, but it’s still incomplete. Where are the rows of people data in the table? All the buttons in the editor section are enabled, so why don’t they do anything? You’ll fix these issues in the next section with some JavaScript.
People JavaScript
You’ll pull JavaScript files into the People SPA just like you did with the CSS files. You’ll add the following bit of code to the bottom of
people.html file:
48 {% block javascript %} 49 {{ super() }} 50 <script type="module"> 51 // Give the imported MVC components a namespace 52 import * as MVC from "/static/js/people.js"; 53 54 // Create an intentional global variable referencing the import 55 window.mvc = MVC; 56 </script> 57 {% endblock %}
Notice the
type="module" declaration on the opening
<script> tag in line 50. This tells the system that the script is a JavaScript module. The ES6
import syntax will be used to pull the exported parts of the code into the browser context.
The People MVC
All of the SPA pages use a variation of the MVC pattern. Here’s an example implementation in JavaScript:
1 // Create the MVC components 2 const model = new Model(); 3 const view = new View(); 4 const controller = new Controller(model, view); 5 6 // Export the MVC components as the default 7 export default { 8 model, 9 view, 10 controller 11 };
This code doesn’t do anything just yet, but you can use it to see the following elements of MVC structure and implementation:
- Line 2 creates an instance of the Model class and assigns it to
model.
- Line 3 creates an instance of the View class and assigns it to
view.
- Line 4 creates an instance of the Controller class and assigns it to
controller. Note that you pass both
modeland
viewto the constructor. This is how the controller gets a link to the
modeland
viewinstance variables.
- Lines 7 to 11 export a JavaScript literal object as the default export.
Because you pull in
people.js at the bottom of
people.html, the JavaScript is executed after your browser creates the SPA DOM elements. This means that JavaScript can safely access the elements on the page and begin to interact with the DOM.
Again, the code above doesn’t do anything just yet. To make it work, you’ll need to define your Model, View, and Controller.
People Model
The Model is responsible for communicating with the REST API provided by the Flask server. Any data that comes from the database, and any data the SPA changes or creates, must go through the Model. All communication with the REST API is done using HTTP AJAX calls initiated by JavaScript.
Modern JavaScript provides
fetch(), which you can use to make AJAX calls. The code for your Model class implements one AJAX method to read the REST API URL endpoint
/api/people and get all the people in the database:
1 class Model { 2 async read() { 3 let options = { 4 method: "GET", 5 cache: "no-cache", 6 headers: { 7 "Content-Type": "application/json" 8 "accepts": "application/json" 9 } 10 }; 11 // Call the REST endpoint and wait for data 12 let response = await fetch(`/api/people`, options); 13 let data = await response.json(); 14 return data; 15 } 16 }
Here’s how this code works:
Line 1 defines the class
Model. This is what will be exported later as part of the
mvcobject.
Line 2 begins the definition of an asynchronous method called
read(). The
asynckeyword in front of
read()tells JavaScript that this method performs asynchronous work.
Lines 3 to 9 create an
optionsobject with parameters for the HTTP call, like the method and what the call expects for data.
Line 12 uses
fetch()to make an asynchronous HTTP call to the
/api/peopleURL REST endpoint provided by the server. The keyword
awaitin front of
fetch()tells JavaScript to wait asynchronously for the call to complete. When this is finished, the results are assigned to
response.
Line 13 asynchronously converts the JSON string in the response to a JavaScript object and assigns it to
data.
Line 14 returns the data to the caller.
Essentially, this code tells JavaScript to make a
GET HTTP request to
/api/people, and that the caller is expecting a
Content-Type of
application/json and
json data. Recall that a
GET HTTP call equates to
Read in a CRUD-oriented system.
Based on the Connexion configuration defined in
swagger.yml, this HTTP call will call
def read_all(). This function is defined in
people.py and queries the SQLite database to build a list of people to return to the caller. You can get the code for all of these files at the link below:
Download Code: Click here to download the code you'll use to learn about Python REST APIs with Flask, Connexion, and SQLAlchemy in this tutorial.
In the browser, JavaScript executes in a single thread and is intended to respond to user actions. Because of this, it’s a bad idea to block JavaScript execution that’s waiting for something to complete, like an HTTP request to a server.
What if the request went out across a very slow network, or the server itself was down and would never respond? If JavaScript were to block and wait for the HTTP request to complete in these kinds of conditions, then it might finish in seconds, minutes, or perhaps not at all. While JavaScript is blocked, nothing else in the browser would react to user actions!
To prevent this blocking behavior, HTTP requests are executed asynchronously. This means that an HTTP request returns to the event loop immediately before the request completes. The event loop exists in any JavaScript application that runs in the browser. The loop continuously waits for an event to complete so it can run the code associated with that event.
When you place the
await keyword before
fetch(), you tell the event loop where to return when the HTTP request completes. At that point, the request is complete and any data returned by the call is assigned to
response. Then,
controller calls
this.model.read() to receive the data returned by the method. This creates a weak link with the
controller, as the
model doesn’t know anything about what called it, just what it returned to that caller.
People View
this.view is responsible for interacting with the DOM, which is shown by the display. It can change, add, and delete items from the DOM, which are then re-rendered to the display. The
controller makes calls to the view’s methods to update the display. The
View is another JavaScript class with methods the controller can call.
Below is a slightly simplified version of the People SPA’s
View class:
1 class View { 2 constructor() { 3 this.table = document.querySelector(".people table"); 4 this.person_id = document.getElementById("person_id"); 5 this.fname = document.getElementById("fname"); 6 this.lname = document.getElementById("lname"); 7 } 8 9 reset() { 10 this.person_id.textContent = ""; 11 this.lname.value = ""; 12 this.fname.value = ""; 13 this.fname.focus(); 14 } } 36 }
Here’s how this code works:
Line 1 begins the class definition.
Lines 2 to 7 define the class constructor, much like the
def __init__(self):definition in a Python class. The constructor is getting elements from the DOM and creating alias variables to use in other parts of the class. The
this.in front of those variable names is much like
self.in Python. It designates the current instance of the class when used.
Lines 9 to 14 define
reset(), which you’ll use to set the page back to a default state.
Lines 16 to 36 define
buildTable(), which builds the table of people based on the
peopledata passed to it.
The alias variables are created to cache the DOM objects returned by calls to
document.getElementByID() and
document.querySelector(), which are relatively expensive JavaScript operations. This allows quick use of the variables in the other methods of the class.
Let’s take a closer look at
build_table(), which is the second method in the
View }
Here’s how the function works:
- Line 16 creates the method and passes the
peoplevariable as a parameter.
- Lines 21 to 27 iterate over the
peopledata using JavaScript arrow functions to create a function that builds up the table rows in the
htmlvariable.
- Lines 29 to 31 remove any
<tbody>elements in the table if they exist.
- Line 33 creates a new
tbodyelement in the table.
- Line 34 inserts the
htmlstring previously created into the
tbodyelement as HTML.
This function dynamically builds the table in the People SPA from the data passed to it, which is the list of people that came from the
/api/people/ REST API call. This data is used along with JavaScript template strings to generate the table rows to insert into the table.
People Controller
The Controller is the central clearinghouse of the MVC implementation, as it coordinates the activity of both
model and
view. As such, the code to define it is a little more complicated. Here’s a simplified version:
1 class Controller { 2 constructor(model, view) { 3 this.model = model; 4 this.view = view; 5 6 this.initialize(); 7 } 8 async initialize() { 9 await this.initializeTable(); 10 } 11 async initializeTable() { 12 try { 13 let urlPersonId = parseInt(document.getElementById("url_person_id").value), 14 people = await this.model.read(); 15 16 this.view.buildTable(people); 17 18 // Did we navigate here with a person selected? 19 if (urlPersonId) { 20 let person = await this.model.readOne(urlPersonId); 21 this.view.updateEditor(person); 22 this.view.setButtonState(this.view.EXISTING_NOTE); 23 24 // Otherwise, nope, so leave the editor blank 25 } else { 26 this.view.reset(); 27 this.view.setButtonState(this.view.NEW_NOTE); 28 } 29 this.initializeTableEvents(); 30 } catch (err) { 31 this.view.errorMessage(err); 32 } 33 } 34 initializeCreateEvent() { 35 document.getElementById("create").addEventListener("click", async (evt) => { 36 let fname = document.getElementById("fname").value, 37 lname = document.getElementById("lname").value; 38 39 evt.preventDefault(); 40 try { 41 await this.model.create({ 42 fname: fname, 43 lname: lname 44 }); 45 await this.initializeTable(); 46 } catch(err) { 47 this.view.errorMessage(err); 48 } 49 }); 50 } 51 }
Here’s how it works:
Line 1 begins the definition of the Controller class.
Lines 2 to 7 define the class constructor and create the instance variables
this.modeland
this.viewwith their respective parameters. It also calls
this.initialize()to set up the event handling and build the initial table of people.
Lines 8 to 10 define
initialize()and mark it as an asynchronous method. It calls
this.initializeTable()asynchronously and waits for it to complete. This simplified version only includes this one call, but the full version of the code contains other initialization methods used for the rest of the event handling set up.
Line 11 defines
initializeTable()as an asynchronous method. This is necessary because it calls
model.read(), which is also asynchronous.
Line 13 declares and initializes the
urlPersonIdvariable with the value of the HTML hidden input
url_person_id.
Line 14 calls
this.model.read()and asynchronously waits for it to return with people data.
Line 16 calls
this.view.buildTable(people)to fill the HTML table with people data.
Lines 19 to 28 determine how to update the editor portion of the page.
Line 29 calls
this.initializeTableEvents()to install the event handling for the HTML table.
Line 31 calls
this.view.errorMessage(err)to display errors should they occur.
Lines 34 to 49 install a click event handler on the create button. This calls
this.model.create(...)to create a new person using the REST API, and updates the HTML table with new data.
The bulk of the
controller code is like this, setting event handlers for all the expected events on the People SPA page. The controller continues creating functions in those event handlers to orchestrate calls to
this.model and
this.view, so that they perform the right actions when those events occur.
When your code is complete, your People SPA page will look like this:
The content, styling, and functionality are all complete!
Conclusion
You’ve covered a great deal of new ground and should be proud of what you’ve learned! It can be tricky to jump back and forth between Python and JavaScript to create a complete Single-Page Application.
If you keep your content (HTML), presentation (CSS), and interaction (JavaScript) separate, then you can substantially reduce the complexity. You can also make JavaScript coding more manageable by using the MVC pattern to further break down the complexity of user interaction.
You’ve seen how using these tools and ideas can help you create reasonably complex Single-Page Applications. Now you’re better equipped to make decisions about whether to build an app this way, or take the plunge into a larger framework!
You can get all of the code you saw in this tutorial at the link below:
Download Code: Click here to download the code you'll use to learn about Python REST APIs with Flask, Connexion, and SQLAlchemy in this tutorial. | https://realpython.com/flask-connexion-rest-api-part-4/ | CC-MAIN-2019-47 | refinedweb | 5,680 | 64.2 |
Created on 2014-07-17 22:25 by dw, last changed 2015-03-04 21:04 by dw. This issue is now closed.
This is a followup to the thread at , discussing the existing behaviour of BytesIO copying its source object, and how this regresses compared to cStringIO.StringI.
The goal of posting the patch on list was to try and stimulate discussion around the approach. The patch itself obviously isn't ready for review, and I'm not in a position to dedicate time to it just now (although in a few weeks I'd love to give it full attention!).
Ignoring this quick implementation, are there any general comments around the approach?
My only concern is that it might keep large objects alive in a non-intuitive way in certain circumstances, though I can't think of any obvious ones immediately.
Also interested in comments on the second half of that thread: "a natural extension of this is to do something very similar on the write side: instead of generating a temporary private heap allocation, generate (and freely resize) a private PyBytes object until it is exposed to the user, at which point, _getvalue() returns it, and converts its into an IO_SHARED buffer."
There are quite a few interactions with making that work correctly, in particular:
* How BytesIO would implement the buffers interface without causing the under-construction Bytes to become readonly
* Avoiding redundant copies and resizes -- we can't simply tack 25% slack on the end of the Bytes and then truncate it during getvalue() without likely triggering a copy and move, however with careful measurement of allocator behavior there are various tradeoffs that could be made - e.g. obmalloc won't move a <500 byte allocation if it shrinks by <25%. glibc malloc's rules are a bit more complex though.
Could also add a private _PyBytes_SetSize() API to allow truncation to the final size during getvalue() without informing the allocator. Then we'd simply overallocate by up to 10% or 1-2kb, and write off the loss of the slack space.
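A rough sketch of the sizing rule proposed above (the 10% and 1-2KB figures come straight from the preceding paragraphs; the function name and exact cap are mine, purely for illustration):

```python
def overallocate(requested: int, slack: float = 0.10, cap: int = 2048) -> int:
    """Return a capacity with bounded slack, so getvalue() can truncate
    in place without asking the allocator to move the buffer."""
    return requested + min(int(requested * slack), cap)

print(overallocate(100))      # 110: small buffers get the full 10% slack
print(overallocate(1 << 20))  # 1050624: large buffers are capped at 2KB extra
```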
Notably, this approach completely differs from the one documented in .. it's not clear to me which is better.
Submitted contributor agreement. Please consider the demo patch licensed under the Apache 2 licence.
Be careful what happens when the original object is mutable:
>>> b = bytearray(b"abc")
>>> bio = io.BytesIO(b)
>>> b[:] = b"defghi"
>>> bio.getvalue()
b'abc'
I don't know what your patch does in this case.
Good catch :( There doesn't seem to be way a to ask for an immutable buffer, so perhaps it could just be a little more selective. I think the majority of use cases would still be covered if the sharing behaviour was restricted only to BytesType.
In that case "Py_buffer initialdata" could become a PyObject*, saving a small amount of memory, and allowing reuse of the struct member if BytesIO was also modified to directly write into a private BytesObject
Even if there is no way to explicitly request a RO buffer, the Py_buffer struct that you get back actually tells you if it's read-only or not. Shouldn't that be enough to enable this optimisation?
Whether or not implementors of the buffer protocol set this flag correctly is another question, but if not then they need fixing on their side anyway. (And in the vast majority of cases, the implementor will be either CPython or NumPy.)
Also, generally speaking, I think such an optimisation would be nice, even if it only catches some common cases (and doesn't break the others :). It could still copy data if necessary, but try to avoid it if possible.
This version is tidied up enough that I think it could be reviewed.
Changes are:
* Defer `buf' allocation until __init__, rather than __new__ as was previously done. Now upon completion, BytesIO.__new__ returns a valid, closed BytesIO, whereas previously a valid, empty, open BytesIO was returned. Is this interface change permissible?
* Move __init__ guts into a "reinit()", for sharing with __setstate__, which also previously caused an unnecessary copy. Additionally gather up various methods for deallocating buffers into a single "reset()" function, called by reinit(), _dealloc(), and _close()
* Per Stefan's suggested approach, reinit() now explicitly checks for a read-only buffer, falling back to silently performing a copy if the returned buffer is read-write. That seems vastly preferable to throwing an exception, which would probably be another interface change.
* Call `unshare()` any place the buffer is about to be modified. If the buffer needs to be made private, it also accepts a size hint indicating how much less/more space the subsequent operation needs, to avoid a redundant reallocation after the unsharing.
Outstanding issues:
* I don't understand why buf_size is a size_t, and I'm certain the casting in unshare() is incorrect somehow. Is it simply to avoid signed overflow?
New patch also calls unshare() during getbuffer()
I'm not sure the "read only buffer" test is strong enough: having a readonly view is not a guarantee that the data in the view cannot be changed through some other means, i.e. it is read-only, not immutable.
Pretty sure this approach is broken. What about the alternative approach of specializing for Bytes?
> Pretty sure this approach is broken. What about the alternative approach of specializing for Bytes?
That would certainly sound good enough, to optimize the common case.
Also, it would be nice if you could add some tests to the patch (e.g. to stress the bytearray case). Thank you!
As for whether the "checking for a readonly view" approach is broken, I don't know: that part of the buffer API is still mysterious to me. Stefan, would you have some insight?
I think checking for a readonly view is fine. The protocol is this:
1) Use the PyBUF_WRITABLE flag in the request. Then the provider must
either have a writable buffer or else deny the request entirely.
2) Omit the PyBUF_WRITABLE flag in the request. Then the provider can
return a writable or a readonly buffer, but must set the readonly flag
correctly AND export the same type of buffer to ALL consumers.
It is not possible to ask for a readonly buffer explicitly, but the
readonly flag in the Py_Buffer struct should always be set correctly.
It is hard to guess the original intention of the PEP-3118 authors, but
in practice "readonly" means "immutable" here. IMO a buffer provider would
be seriously broken if a readonly buffer is mutated in any way.
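This contract is observable from pure Python through memoryview, which may help as a quick illustration (not part of the patch): an immutable bytes object exports a readonly buffer, while a bytearray exports a writable one.

```python
# readonly flag as reported by the buffer provider, seen via memoryview
ro = memoryview(b"abc")              # bytes: provider exports readonly
rw = memoryview(bytearray(b"abc"))   # bytearray: provider exports writable

flags = (ro.readonly, rw.readonly)   # -> (True, False)
```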
The original wording in the PEP is this:
readonly
--------
an integer variable to hold whether or not the memory is readonly. 1
means the memory is readonly, zero means the memory is writable.
To me this means that a hypothetical compiler that could figure
out at compile time that the readonly flag is set would be allowed
to put the buffer contents into the read-only data section.
Stefan,
Thanks for digging here. As much as I'd love to follow this interpretation, it simply doesn't match existing buffer implementations, including within the standard library.
For example, mmap.mmap(..., flags=mmap.MAP_SHARED, prot=mmap.PROT_READ) will produce a read-only buffer, yet mutability is entirely at the whim of the operating system. In this case, "immutability" may be apparent for years, until some machine has memory pressure, causing the shared mapping to be flushed, and refreshed from (say, incoherent NFS storage) on next access.
I thought it would be worth auditing some of the most popular types of buffer just to check your interpretation, and this was the first, most obvious candidate.
Any thoughts? I'm leaning heavily toward the Bytes specialization approach
I'm not sure how much work it would be, or even if it could be made sufficient to solve our problem, but what about extending the buffers interface to include a "int stable" flag, defaulting to 0?
It seems though, that it would just be making the already arcane buffers interface even more arcane simply for the benefit of our specific use case
I'm sure many exporters aren't setting the right flags; on the other hand
we already hash memoryviews based on readonly buffers, assuming they are
immutable.
Hi Stefan,
How does this approach in reinit() look? We first ask for a writable buffer, and if the object obliges, immediately copy it. Otherwise if it refused, ask for a read-only buffer, and this time expect that it will never change.
This still does not catch the case of mmap.mmap. I am not sure how to deal with mmap.mmap. There is no way for it to export PROT_READ as a read-only buffer without permitting mutation, so the only options seem to be either a) remove buffer support from mmap, or b) blacklist it in bytesio(!).
Antoine, I have padded out the unit tests a little. test_memoryio.py seems the best place for them. Also modified test_sizeof(), although to the way this test is designed seems inherently brittle to begin with. Now it is also sensitive to changes in Py_buffer struct.
Various other changes:
* __new__ once again returns a valid, open, empty BytesIO, since the alternative breaks pickling.
* reinit() preserves existing BytesIO state until it knows it can succeed, which fixes another of the pickle tests.
* setstate() had CHECK_CLOSED() re-added, again for the pickle tests.
Probably the patch guts could be rearranged again, since the definition of the functions is no longer as clear as it was in cow3.patch.
See also issue15381.
There's also the following code in numpy's getbuffer method:
/*
* If a read-only buffer is requested on a read-write array, we return a
* read-write buffer, which is dubious behavior. But that's why this call
* is guarded by PyArray_ISWRITEABLE rather than (flags &
* PyBUF_WRITEABLE).
*/
if (PyArray_ISWRITEABLE(self)) {
if (array_might_be_written(self) < 0) {
goto fail;
}
}
... which seems to imply that mmap is not the only one with "dubious behaviour" (?).
Actually we have an extra safety net in memory_hash() apart from
the readonly check: We also check if the underlying object is
hashable.
This might be applicable here, too. Unfortunately mmap objects
*are* hashable, leading to some funny results:
>>> import mmap
>>> with open("hello.txt", "wb") as f:
... f.write(b"xxxxx\n")
...
6
>>> f = open("hello.txt", "r+b")
>>> mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
>>> x = memoryview(mm)
>>> hash(mm)
-9223363309538046107
>>> hash(x)
-3925142568057840789
>>> x.tolist()
[120, 120, 120, 120, 120, 10]
>>>
>>> with open("hello.txt", "wb") as g:
... g.write(b"yyy\n")
...
4
>>> hash(mm)
-9223363309538046107
>>> hash(x)
-3925142568057840789
>>> x.tolist()
[121, 121, 121, 10, 0, 0]
memoryview (rightfully) assumes that hashable objects are immutable
and caches the first hash.
I'm not sure why mmap objects are hashable, it looks like a bug
to me.
I think the mmap behavior is probably worse than the NumPy example.
I assume that in the example the exporter sets view.readonly=0.
mmap objects set view.readonly=1 and can still be mutated.
Stefan, I like your new idea. If there isn't some backwards compatibility argument about mmap.mmap being hashable, then it could be considered a bug, and fixed in the same hypothetical future release that includes this BytesIO change. The only cost now is that to test for hashability, we must hash the object, which causes every byte in it to be touched (aka. almost 50% the cost of a copy)
If however we can't fix mmap.mmap due to the interface change (I think that's a silly idea -- Python has never been about letting the user shoot themselves in the foot), then the specialized-for-Bytes approach is almost as good (and perhaps even better, since the resulting concept and structure layout is more aligned with Serhiy's patch in issue15381).
tl;dr:
a) mmap.mmap can be fixed - use hashability as a strong test for immutability (instead of an ugly heuristic involving buffer flags)
- undecided: is calling hash(obj) to check for immutability too costly?
b) mmap.mmap can't be fixed - use the Bytes specialization approach.
I don't like the idea of trying to hash the object. It may be a time-consuming operation, while the result will be thrown away.
I think restricting the optimization to bytes objects is fine. We can whitelist other types, such as memoryview.
This new patch abandons the buffer interface and specializes for Bytes per the comments on this issue.
Anyone care to glance at least at the general structure?
Tests could probably use a little more work.
Microbenchmark seems fine, at least for construction. It doesn't seem likely this patch would introduce severe performance troubles elsewhere, but I'd like to try it out with some heavy BytesIO consumers (any suggestions? Some popular template engine?)
cpython] ./python.exe -m timeit -s 'import i' 'i.readlines()'
lines: 54471
100 loops, best of 3: 13.3 msec per loop
[23:52:55 eldil!58 cpython] ./python-nocow -m timeit -s 'import i' 'i.readlines()'
lines: 54471
10 loops, best of 3: 19.6 msec per loop
[23:52:59 eldil!59 cpython] cat i.py
import io
word = b'word'
line = (word * int(79/len(word))) + b'\n'
ar = line * int((4 * 1048576) / len(line))
def readlines():
    return len(list(io.BytesIO(ar)))
print('lines: %s' % (readlines(),))
> It doesn't seem likely this patch would introduce severe performance troubles elsewhere, but I'd like to try it out with some heavy BytesIO consumers (any suggestions? Some popular template engine?)
I don't have any specific suggestions, but you could try the benchmark suite here:
Hey Antoine,
Thanks for the link. I'm having trouble getting reproducible results at present, and running out of ideas as to what might be causing it. Even after totally isolating a CPU for e.g. django_v2 and with frequency scaling disabled, numbers still jump around for the same binary by as much as 3%.
I could not detect any significant change between runs of the old and new binary that could not be described as noise, given the isolation issues above.
> Even after totally isolating a CPU for e.g. django_v2 and with frequency scaling disabled, numbers still jump around for the same binary by as much as 3%.
That's expected. If the difference doesn't go above 5-10%, then you IMO can pretty much consider your patch didn't have any impact on those benchmarks.
Newest patch incorporates Antoine's review comments. The final benchmark results are below. Just curious, what causes e.g. telco to differ up to 7% between runs? That's really huge
Report on Linux k2 3.14-1-amd64 #1 SMP Debian 3.14.9-1 (2014-06-30) x86_64
Total CPU cores: 4
### call_method_slots ###
Min: 0.329869 -> 0.340487: 1.03x slower
Avg: 0.330512 -> 0.341786: 1.03x slower
Significant (t=-216.69)
Stddev: 0.00067 -> 0.00060: 1.1111x smaller
### call_method_unknown ###
Min: 0.351167 -> 0.343961: 1.02x faster
Avg: 0.351731 -> 0.344580: 1.02x faster
Significant (t=238.89)
Stddev: 0.00033 -> 0.00040: 1.2271x larger
### call_simple ###
Min: 0.257487 -> 0.277366: 1.08x slower
Avg: 0.257942 -> 0.277809: 1.08x slower
Significant (t=-845.64)
Stddev: 0.00029 -> 0.00029: 1.0126x smaller
### etree_generate ###
Min: 0.377985 -> 0.365952: 1.03x faster
Avg: 0.381797 -> 0.369452: 1.03x faster
Significant (t=31.15)
Stddev: 0.00314 -> 0.00241: 1.3017x smaller
### etree_iterparse ###
Min: 0.545668 -> 0.565437: 1.04x slower
Avg: 0.554188 -> 0.576807: 1.04x slower
Significant (t=-17.00)
Stddev: 0.00925 -> 0.00956: 1.0340x larger
### etree_process ###
Min: 0.294158 -> 0.286617: 1.03x faster
Avg: 0.296354 -> 0.288877: 1.03x faster
Significant (t=36.22)
Stddev: 0.00149 -> 0.00143: 1.0435x smaller
### fastpickle ###
Min: 0.458961 -> 0.475828: 1.04x slower
Avg: 0.460226 -> 0.481228: 1.05x slower
Significant (t=-109.38)
Stddev: 0.00082 -> 0.00173: 2.1051x larger
### nqueens ###
Min: 0.305883 -> 0.295858: 1.03x faster
Avg: 0.308085 -> 0.297755: 1.03x faster
Significant (t=90.22)
Stddev: 0.00077 -> 0.00085: 1.0942x larger
### silent_logging ###
Min: 0.074152 -> 0.075818: 1.02x slower
Avg: 0.074345 -> 0.076005: 1.02x slower
Significant (t=-96.29)
Stddev: 0.00013 -> 0.00012: 1.0975x smaller
### spectral_norm ###
Min: 0.355738 -> 0.364419: 1.02x slower
Avg: 0.356691 -> 0.365764: 1.03x slower
Significant (t=-126.23)
Stddev: 0.00054 -> 0.00047: 1.1533x smaller
### telco ###
Min: 0.012152 -> 0.013038: 1.07x slower
Avg: 0.012264 -> 0.013157: 1.07x slower
Significant (t=-83.98)
Stddev: 0.00008 -> 0.00007: 1.0653x smaller
The following not significant results are hidden, use -v to show them:
2to3, call_method, chaos, django_v2, etree_parse, fannkuch, fastunpickle, float, formatted_logging, go, hexiom2, iterative_count, json_dump, json_dump_v2, json_load, mako, mako_v2, meteor_contest, nbody, normal_startup, pathlib, pickle_dict, pickle_list, pidigits, raytrace, regex_compile, regex_effbot, regex_v8, richards, simple_logging, startup_nosite, threaded_count, tornado_http, unpack_sequence, unpickle_list.
> Just curious, what causes e.g. telco to differ up to 7% between runs? That's really huge.
telco.py always varies a lot between runs (up to 10%), even in the
big version "telco.py full":
Using the average of 10 runs, I can't really see a slowdown.
So I wonder why the benchmark suite says that the telco slowdown is significant. :)
I suspect it's all covered now, but is there anything else I can help with to get this patch pushed along its merry way?
New changeset 79a5fbe2c78f by Antoine Pitrou in branch 'default':
Issue #22003: When initialized from a bytes object, io.BytesIO() now
The latest patch is good indeed. Thank you very much!
Shouldn't this fix be mentioned in ?
Attached trivial patch for whatsnew.rst.
New changeset 7ae156f07a90 by Berker Peksag in branch 'default':
Add a whatsnew entry for issue #22003.
> This new patch abandons the buffer interface and specializes for Bytes per the comments on this issue.
Why does it abandon buffer interface? Because of the following?
> Thanks for digging here. As much as I'd love to follow this interpretation, it simply doesn't match existing buffer implementations, including within the standard library.
Shouldn't existing buffer implementations be fixed then, and this feature made to use the buffer interface instead of specializing for Bytes? If so, is there at least any information on this in the comments, so that one wouldn't wonder why there is specialization instead of relying on the buffer interface?
Hi Piotr,
There wasn't an obvious fix that didn't involve changing the buffer interface itself. There is presently ambiguity in the interface regarding the difference between a "read only" buffer and an "immutable" buffer, which is crucial for its use in this case.
Fixing the interface, followed by every buffer interface user, is a significantly more complicated task than simply optimizing for the most common case, as done here. FWIW I still think this work is worth doing, though I personally don't have time to approach it just now.
We could have (and possibly should) approach fixing e.g. mmap.mmap() hashability, possibly causing user code regressions, but even if such cases were fixed it still wouldn't be enough to rely on for the optimization implemented here.
Google Colab Tips for Power Users. In this post, I will share the features that I've discovered from basic usage and from their official talks.
1. Scratchpad Notebook
It’s a pretty common scenario that we have a bunch of cluttered untitled notebooks created when we try out temporary stuff on colab.
To solve this, you can bookmark the link given below. It will open a special scratch notebook and any changes you make to that notebook are not saved to your main account.
2. Timing Execution of Cell
It’s pretty common that we manually calculate the difference between start and end times of a piece of code to gauge the time taken.
Colab provides an inbuilt feature to do this. After a cell is executed, just hover over the cell run icon and you will get an estimate of the execution time taken.
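For reference, the manual measurement this hover feature replaces is just a pair of time.time() calls (plain Python, nothing Colab-specific):

```python
import time

start = time.time()
total = sum(range(10**6))      # the work being timed
elapsed = time.time() - start  # seconds taken, previously printed by hand
```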
3. Run part of a cell
You can also run only a part of the cell by selecting it and pressing the Runtime > Run Selection button or using the keyboard shortcut Ctrl + Shift + Enter.
4. Jupyter Notebook Keyboard Shortcuts
If you are familiar with keyboard shortcuts from Jupyter Notebook, they don’t work directly in Colab. But I found a mental model to map between them.
Just add Ctrl + M before whatever keyboard shortcut you were using in Jupyter. This rule of thumb works for the majority of common use-cases.
Below are some notable exceptions to this rule for which either the shortcut is changed completely or kept the same.
5. Jump to Class Definition
Similar to an IDE, you can go to a class definition by pressing Ctrl and then clicking a class name. For example, here we view the class definition of the Dense layer in Keras by pressing Ctrl and then clicking the Dense class name.
6. Open Notebooks from GitHub
The Google Colab team provides an official chrome extension to open notebooks on GitHub directly on colab. You can install it from here.
After installation, click the colab icon on any GitHub notebook to open it directly.
Alternatively, you can also manually open any GitHub notebook by replacing github.com with colab.research.google.com/github.
An even easier way is to replace github.com with githubtocolab.com. It will redirect you to a colab notebook.
7. Run Flask apps from Colab
With a library called flask-ngrok, you can easily expose a Flask web app running on colab to demo prototypes. First, you need to install flask and flask-ngrok.
!pip install flask-ngrok flask==0.12.2
Then, you just need to pass your flask app object to the run_with_ngrok function and it will expose a ngrok endpoint when the server is started.
from flask import Flask
from flask_ngrok import run_with_ngrok

app = Flask(__name__)
run_with_ngrok(app)

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
You can try this out from the package author’s official example on Colab.
8. Switch between Tensorflow versions
You can easily switch between Tensorflow 1 and Tensorflow 2 using this magic flag.
To switch to Tensorflow 1.15.2, use this command:
%tensorflow_version 1.x
To switch to Tensorflow 2.2, run this command:
%tensorflow_version 2.x
You will need to restart the runtime for the effect to take place. Colab recommends using the pre-installed Tensorflow version instead of installing it from pip for performance reasons.
9. Tensorboard Integration
Colab also provides a magic command to use Tensorboard directly from the notebook. You just need to set the logs directory location using the --logdir flag. You can learn to use it from the official notebook.
%load_ext tensorboard
%tensorboard --logdir logs
10. Gauge resource limits
Colab provides the following specs for their free and pro versions. Based on your use case, you can switch to the pro version at $10/month if you need a better runtime, GPU, and memory.
You can view the GPU you have been assigned by running the following command
!nvidia-smi
For information on the CPU, you can run this command
!cat /proc/cpuinfo
Similarly, you can view the RAM capacity by running
import psutil

ram_gb = psutil.virtual_memory().total / 1e9
print(ram_gb)
11. Use interactive shell
There is no built-in interactive terminal in Colab, but you can use the bash command to try out shell commands interactively. Just run this command and you will get an interactive input.
!bash
Now, you can run any shell command in the given input box.
To quit from the shell, just type exit in the input box.
12. Current memory and storage usage
Colab provides an indicator of RAM and disk usage. If you hover over the indicator, you will get a popup with the current usage and the total capacity.
13. “Open in Colab” Badge
You can add an "Open in Colab" badge to your README.md or jupyter notebooks using the following markdown code.
In the markdown code, we’re loading an SVG image and then linking it to a colab notebook.
[]()
14. Interactive Tables for Pandas
Colab provides a notebook extension to add interactive sorting and filtering capabilities to pandas dataframes. To use it, run the following code.
%load_ext google.colab.data_table
You can see the regular pandas dataframe and the interactive dataframe after loading the extension below.
15. Setup Conda environment
If you use miniconda as your python environment manager, you can set it up on colab by running these commands at the top of your notebook.

# Download Miniconda installation script
!wget
# Make it executable
!chmod +x Miniconda3-latest-Linux-x86_64.sh
# Start installation in silent mode
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
# Make conda packages available in current environment
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
After the cell is executed, you can use conda to install packages as usual.
!conda install -y flask
Alternatively, you can use condacolab package to install it easily.
pip install condacolab
Then, run these python commands to install miniconda.
import condacolab
condacolab.install_miniconda()
16. Manage Colab Notebooks from Command Line
You can use a library called colab-cli to easily create and sync colab notebooks with your local notebooks.
17. Run background tasks
There are use-cases when we need to start some web server or background tasks before we can execute our regular program.
To run background tasks, use the nohup command followed by your regular shell command and add & to the end to run it in the background. This makes sure that you can run cells afterward in the notebook without your background task blocking it.
!nohup bash ping.sh &
18. Notify on Training Completion
If you’re running a long task such as training a model, you can setup Colab to send a desktop notification once it’s completed.
To enable that, go to Tools ⮕ Settings ⮕ Site and enable the Show desktop notifications checkbox.
You will get a popup to enable browser notification. Just accept it and colab will notify you on task completion even if you are on another tab, window or application.
19. Run javascript code
You can run javascript code by using the %%javascript magic command.
20. Run VSCode on Colab
You can run a full-fledged VSCode editor on Colab by following the method I have explained in another article.
21. Custom snippets
You can save your own collections of useful snippets and access them easily in any colab notebook.
- Create a colab notebook called snippets.ipynb. To add each of your snippets, create a markdown cell with the name of the snippet as a header and, below it, a code cell with the snippet code.
- Copy the link of this notebook from the browser tab.
- Click Tools > Settings in your menu bar to open the preferences of colab.
- Paste the link into the Custom snippet notebook URL textbox and click save.
- Now, the snippets are available in any colab notebook you use. Just click the <> icon on the sidebar, search for your snippet name and click Insert. The code will be inserted into a new cell.
22. Run JupyterLab on Google Colab
You can start a JupyterLab instance on colab by running the following commands in a cell.
!pip install jupyterlab pyngrok -q
# Run jupyterlab in the background
!nohup jupyter lab --ip=0.0.0.0 &
# Get ngrok URL mapped to port 8888
from pyngrok import ngrok
print(ngrok.connect(8888))
Once executed, click the printed ngrok URL to access the JupyterLab interface.
23. Run R programs in Google Colab
You can use the R programming language in Google Colab by going to. It will open a new notebook with R set as the kernel instead of Python.
References
- Timothy Novikoff, “Making the most of Colab (TF Dev Summit ‘20)”
- Gal Oshri, “What’s new in TensorBoard (TF Dev Summit ‘19)”
Background
N factorial (also denoted N!) means computing 1*2*3*...*N and is a classical problem used in computer science to illustrate different programming patterns. In this post I will show how one can use Java 8's Streams to calculate factorials. But first, I will show two other ways that were previously used before Java 8 appeared.
Recursion
From our computer science classes, we all remember the classical way of computing factorial(). The following method illustrates the concept:

public long factorial(int n) {
    if (n > 20) throw new IllegalArgumentException(n + " is out of range");
    return (1 > n) ? 1 : n * factorial(n - 1);
}

Because long overflows for n > 20, we need to throw an exception to avoid faulty return values. If we are within the valid input range, we check for the basic case where n is 1 or less, for which factorial is 1; otherwise we recurse by returning n multiplied with factorial(n - 1). Eventually, we will reach factorial(1) and the recursion stops.
Imperative
You can also use the standard imperative way of doing it, using an intermediate value that is updated in a loop, like this:

public long factorial(int n) {
    if (n > 20) throw new IllegalArgumentException(n + " is out of range");
    long product = 1;
    for (int i = 2; i <= n; i++) {
        product *= i;
    }
    return product;
}

Look at the loop and you might be surprised to see that we start from 2. We could as well have started from 1, but then again, multiplying with 1 always gives back the same result, doesn't it? So we optimize away that case.
Streams
Using Java 8's stream methods we can implement factorial() in another way, as depicted here:

public long factorial(int n) {
    if (n > 20) throw new IllegalArgumentException(n + " is out of range");
    return LongStream.rangeClosed(2, n).reduce(1, (a, b) -> a * b);
}

Using the LongStream.rangeClosed(2, n) method we create a Stream of longs with the content 2, 3, ..., n. Then we take this Stream and successively apply the reduction (a, b) -> a * b, meaning that for each pair a and b we multiply them and return the result. The result then carries over to a for the next round. The value "1" used in the reduce method is the identity value, i.e. the value that is used as the starting value for a in the first iteration.
This way, we abstract away the implementation and instead focus on what to do. For example, the Stream could be made parallel and that way we could calculate the value using several threads. Perhaps not so useful in this particular example where n < 20, but certainly for other applications with longer iteration chains.
Consider using Streams when iterating over values!
I am running sikulix from cli using `-r` and it works great. However every time when I run the script, it always starts a new IDE and takes 5+ seconds for initialization. Is there any way to reuse current running IDE instance when I run script?
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Solved:
- 2018-01-31
- Last query:
- 2018-01-31
- Last reply:
- 2018-01-31
when you use runsikulix.cmd -r, no new IDE is started; it just loads and runs the script.
the startup delay is due to the preparation of the Jython environment and has to be accepted.
you can try with the experimental runserver
http://
further down the page
or this
http://
Thanks for your answer. How do you set a relative path to the script server?
And on Windows?
And how can I pass arguments and get logs? (-C when I run as command)
--- runserver
as mentioned: experimental.
You either have to get on the road with the mentioned page or you have to leave it alone.
parameters are not supported (use option files instead)
logs can be redirected to a file (see docs)
--- about running scripts from within scripts
using this feature combined with a hotkey setup you might build your own "runserver".
I appreciate your quick reply. It seems like I have to run scripts from within scripts.
Could you elaborate on building my own "runserver" with a hotkey setup? How can I choose the script name, send parameters to the included scripts, and get console output?
yes, a middle-sized challenge ;-)
workflow idea with hotkey:
-- trigger script:
- accepts scriptname and parameters
- writes a line to a file with this information
- issues hotkey
-- serverscript
-- waits for hotkey
-- reads the one-line file
-- builds the run command and runs it
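A rough sketch of the trigger side of this hotkey idea (the file name, the one-line format, and the ctrl-alt-c combo are my assumptions, not a fixed SikuliX convention):

```python
# Hypothetical trigger script: write "scriptname param1 param2 ..." as one
# line to an agreed file, then press the hotkey the server script waits for.

def make_trigger_line(script, params):
    return " ".join([script] + list(params))

def trigger(script, params, path="trigger.txt"):
    with open(path, "w") as f:
        f.write(make_trigger_line(script, params))
    # in a real SikuliX script you would now issue the hotkey, e.g.:
    # type("c", KeyModifier.ALT + KeyModifier.CTRL)
```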
workflow idea file only:
-- trigger
create a one-line-file with scriptname and parameters
-- serverscript
-- wait for file to exist
-- reads the one-line file
-- delete the file
-- builds the run command and runs it
there are of course other options for the trigger.
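The server side of the file-only idea could look roughly like this (all names and the command layout are assumptions; the thread only describes the steps):

```python
import os
import time

TRIGGER_FILE = "trigger.txt"  # agreed location, an assumption

def wait_for_trigger(poll=0.1):
    # wait for the one-line file to exist, read it, then delete it
    while not os.path.exists(TRIGGER_FILE):
        time.sleep(poll)
    with open(TRIGGER_FILE) as f:
        parts = f.read().split()
    os.remove(TRIGGER_FILE)
    return parts[0], parts[1:]  # scriptname, parameters

def build_command(script, params):
    # build the run command the server script would then execute
    return ["runsikulix", "-r", script] + list(params)
```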
That makes sense. How can I make the script not exit and keep listening for the hotkey? Should I use an infinite Delay() loop?
yes, a wait loop.
here is a relatively complete example
import time

### hotkey setup section ###

# variants to end the script (ctrl-alt-x and ctrl-alt-q)
def quit(event):
    print "handler quit ctrl-alt-x"
    global running
    running = False

Env.addHotkey("x", KeyModifier.ALT + KeyModifier.CTRL, quit)
Env.addHotkey("q", KeyModifier.ALT + KeyModifier.CTRL, quit)

# basic hotkey definition with a related handler name
def handlerC(event):
    print "handlerC: seconds since start:", int((time.time() - start))

Env.addHotkey("c", KeyModifier.ALT + KeyModifier.CTRL, handlerC)

# at hotkey press a function will be called whose name is currently
# held by variable todo
def handlerC1(event):
    print "handlerC1: seconds since start:", int((time.time() - start))

def handlerV(event):
    todo(event)

todo = handlerC # default at start of script
Env.addHotkey("v", KeyModifier.ALT + KeyModifier.CTRL, handlerV)

# at hotkey press the function handlerParam will be called
# with a parameter value currently held by global variable start
def handlerParam(event, begin):
    print "handlerParam: seconds since start:", int((time.time() - begin))

def handlerB(event):
    handlerParam(event, start)

Env.addHotkey("b", KeyModifier.ALT + KeyModifier.CTRL, handlerB)

### main workflow start ###

start = time.time() # a global variable used in the handlers
count = 0
running = True

while running: # will end the loop if running is False
    wait(1) # some timeconsuming stuff here
    # one can always check in between and leave the loop
    if not running: break
    wait(1) # some timeconsuming stuff here
    # changes the handler for hotkey v after about 20 seconds
    count += 1
    if count > 10: todo = handlerC1

# here might be some postprocessing before script finally ends
# you might remove your hotkeys before, to avoid interference by the handlers
print "PostProcessing"
Thank you for your great script. It still doesn't make sense how to trigger hot key or passing over param to central script from other application. However I bet that's my job to figure it out :)
Thanks again for your help, and writing great library!
Thanks RaiMan, that solved my question.
Since I am not sure about your final intention, something to think about:
If it is more interactively, what you plan to do (sitting there, watching the workflow and doing some actions to trigger scripts), then the hotkey solution might be appropriate.
Just implement the file read/write and run trigger in the hotkey handler.
If it is more about self-running automation, then the file only solution might be more appropriate.
Generally I guess, you have delegated different tasks to different scripts and now want to generate some workflows from these building blocks. If this is true, then the appropriate solution would be to implement the features in functions inside the scripts, import the scripts and call the functions from a main script, that represents the workflow(s). This way you have everything within the Jython scope and no problems with startup of scripts.
I have a side question. Do you have plan to add argument/output feature to the experimental server?
RE: "implement the features in functions inside the scripts, import the scripts and call the functions from a main script, that represents the workflow(s)"
I am also considering to write main script, but that still requires initial start up time.
Sorry, no.
This version 1.1.x experimental server is a dead-end (hence not in the official docs).
But if it is on the same machine, what you are doing, then it should not be a problem, to implement this handling as mentioned before in your scripts.
The mentioned file-only method could even be implemented based on the import feature with the trigger file being an appropriate script
trigger.script
scriptname = foobar
parameters = [..., ...]
someOther.script
import trigger
runScript(
--- I am also considering to write main script, but that still requires initial start up time.
... yes of course, but only once a day or longer time ;-)
Gotcha. That means wait loop to watch the files and start scripts if any change detected, right?
Yes.
Good luck.
I tried runScript and it worked well. The only problem I have is that `exit(result)` in sub-scripts causes error. Is there any workaround to return value to main script gracefully?
please info about the error.
Maybe I can fix it.
A workaround would be to write something to a file in the sub and read it in the main.
Maybe it's my fault. I tried to exit with string value.
yes, not allowed - must be a number
Thanks RaiMan, that solved my question.
My system - Windows 10, OsX 10.13
Sikuli - 1.1.1 | https://answers.launchpad.net/sikuli/+question/663387 | CC-MAIN-2019-04 | refinedweb | 1,079 | 71.34 |
Consistently, one of the more popular stocks people enter into their stock options watchlist at Stock Options Channel is DuPont (Symbol: DD). So this week we highlight one interesting put contract, and one interesting call contract, from the July expiration for DD. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $68 strike, which has a bid at the time of this writing of $1.18. Collecting that bid as the premium represents a 1.7% return against the $68 commitment, or a 13.8% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Selling a put does not give an investor access to DD's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. So unless DuPont sees its shares decline 1.6% and the contract is exercised (resulting in a cost basis of $66.82 per share before broker commissions, subtracting the $1.18 from $68), the only upside to the put seller is from collecting that premium for the 13.8% annualized rate of return.
Worth considering, is that the annualized 13.8% figure actually exceeds the 2.8% annualized dividend paid by DuPont by 11%, based on the current share price of $69.17. And yet, if an investor was to buy the stock at the going market price in order to collect the dividend, there is greater downside because the stock would have to fall 1.62% to reach the $68uly expiration, for shareholders of DuPont (Symbol: DD) looking to boost their income beyond the stock's 2.8% annualized dividend yield. Selling the covered call at the $70 strike and collecting the premium based on the $1.19 bid, annualizes to an additional 13.7% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 16.5% annualized rate in the scenario where the stock is not called away. Any upside above $70 would be lost if the stock rises there and is called away, but DD shares would have to climb 1.3% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 3% return from this trading level, in addition to any dividends collected before the stock was called.
The chart below shows the trailing twelve month trading history for DuPont, highlighting in green where the $68 strike is located relative to that history, and highlighting the $70 $69.17) to be 20%.
In mid-afternoon trading on Monday, the put volume among S&P 500 components was 696,859 contracts, with call volume at 1.23 DD. | https://www.nasdaq.com/articles/one-put-one-call-option-know-about-dupont-2015-06-08 | CC-MAIN-2019-39 | refinedweb | 466 | 63.29 |
This page is likely outdated (last edited on 06 Sep 2005). Visit the new documentation for updated content.
OLE DB
Info
Provides a System.Data.OleDb like provider for Mono using GDA as the data access layer.
Exists in namespace System.Data.OleDb and assembly System.Data
Created by Rodrigo Moya (the maintainer for GDA)
- LibGDA has providers for:
- MySQL
- PostgreSQL
- XML
- ODBC (via unixODBC)
- Oracle
- Interbase
- Sybase and Microsoft SQL Server via FreeTDS
- IBM DB2 Universal Database
- SQL Lite
- MS Access via MDB Tools
Does not support trusted connections
- Bugs with Mono or the data provider should be reported in Mono’s Bugzilla here. If you do not have Bugzilla user account, it is free and easy to create one here.
Current Status
Not actively maintained. Please use another managed provider, System.Data.Odbc, or GDA# instead.
The OleDb provider is working with libgda. The C-Sharp bindings to libgda currently work - meaning they can compile, run, and you can connect to a PostgreSQL database via libgda via the C-Sharp bindings to libgda.
Basic functionality (execution of commands, data retrieval, transactions, etc) are now working.
An inital implementation of GetSchemaTable() for the OleDbDataReader has been checked into cvs. GetSchemaTable() isn’t correct for OleDb, but the foundation is there.
Action plan
Does not work on Widnows because libgda does not work on Windows. Some people have tried it, but I could not verify this myself.
Only works on Linux with GNOME 2.x
Need to make the OleDb provider compatible with the OleDb provider in Microsoft .NET
Testing OleDb with libgda’s PostgreSQL provider
Prerequisites
Requires a working mono and mcs
Requires Linux because the OleDb provider uses libgda and libgda only works on Linux.
Connection String Format
- Connection String format:
"Provider=providerName;..."
providerName is the name of the Provider you use, such as, PostgreSQL, MySQL, etc. The elipsis … means that the connection parameters are dependent upon the provider being used and are passed to libgda for connecting. Such paramters, can be: Database, User ID, Password, Server, etc…
- See the test TestOleDb.cs found at mcs/class/System.Data/System.Data.OleDb
C# Example
using System; using System.Data; using System.Data.OleDb; public class Test { public static void Main(string[] args) { // there is a libgda PostgreSQL provider string connectionString = "Provider=PostgreSQL;" + "Addr=127.0.0.1;" + "Database=test;" + "User ID=postgres;" + "Password=fun2db"; IDbConnection dbcon; dbcon = new OleDb =
- Build:
mcs TestExample.cs -r:System.Data.dll
- Running the Example:
mono TestExample.exe | http://www.mono-project.com/archived/ole_db/ | CC-MAIN-2016-22 | refinedweb | 415 | 59.09 |
Talk:Key:contact
- Key:phone
- Key:url and Key:website (see also Proposed_features/External_links)
User:Emka 19:10, 10 June 2009
- Yes. I use 'phone' rather than 'contact:phone' because that's what most other mappers do. This is no coincidence. It's simpler. I generally disapprove of most proposals to introduce "namespaces". While they have the feel of something nice and rational and organised, that comes at a price. Tags are supposed to be simple. We're not developing a programming language here. New mappers have to learn to type these things in and remember them. The 'contact:' prefix offers very little real benefit but makes a tag much less simple.
- (Copied from my diary I realised I should post this here)
- -- Harry Wood 10:21, 25 August 2011 (BST)
Types of phone numbers
What do you think about tagging various types of phone numbers? Like mobile phone (or cell phone), fixed phone or sip phone for example. I thought:
- contact:phone:fixed=1234
- contact:phone:mobile=1234
- contact:phone:sip=1234@567.com
--Dirk86 15:12, 4 March 2010 (UTC)
- I like it, especially mobile phones are in wide use and are often given together with a fixed phone number.--Scai 09:20, 22 September 2010 (BST)
- Drowning in colon characters. How about good old simple tags like phone=* ...and maybe mobilephone=* ? -- Harry Wood 11:38, 22 September 2010 (BST)
More than one phone number
When a business has more than one phone number, what's the best way to capture this?
- adding a contact:phone for each number
- including all phone numbers in contact:phone
Mafeu 12:58, 26 October 2010 (BST)
- you can't add several identical tags, so probably something like contact:phone=phone1;phone2;phone3 --Richlv 19:45, 17 December 2010 (UTC)
Webcam?
A webcam shall be a means to contact somebody? What are you planning, jumping up and down in front of the webcam, hoping to catch some attention? Lulu-Ann
- I confirm. Webcams are normaly not a communication-channel--CMartin (talk) 17:37, 26 August 2014 (UTC)
additional forms of communication
i think we could add some more forms of communication:
- contact:skype
- contact:twitter
- contact:facebook
what's your opinion?
- I agree. Gallaecio 19:52, 27 July 2011 (BST)
- I certainly agree for the Facebook page. Must have. These are now often used instead of a Website home page. --Neil Dewhurst, Lyon France (talk) 08:31, 18 September 2013 (UTC)
Contact:name?
Many small shop is known by owners name, I think contact:name would be the best way to tag. --BáthoryPéter 11:14, 28 July 2011 (BST)
Deprecate this tag family
Stats (taginfo) and usage (editors presets) show that the old tags without the prefix "contact:" remain more popular even after two years of coexistence. After a discussion on the tagging list, I suggest to deprecate this tag serie on the wiki and recommand the old but simple keys.--Pieren 22:13, 1 May 2012 (BST)
- AGREE! for the reasons given in the discussion above -- Harry Wood 01:41, 12 September 2012 (BST)
- I think it is useful to maintain the namespace setaside, but NOT for general mapping use; rather as a reserved namespace to support content transformation, that is "contact:phone" as a synonym for "phone" for data conversion purposes but not for user mapping purposes. To this end, suggest creating a bot to revise instances of "contact:phone" to "phone" where "phone" is not currently in use, and to highlight where both are present for manual resolution. This would also mean retiring this page and creating a few wiki redirects. --Ceyockey (talk) 15:39, 3 April 2015 (UTC)
- So what was decided? Did this even go through a voting process or so? Dhiegov (talk) 12:10, 10 February 2019 (UTC)
- The situation is largely unchanged: Keys without a contact prefix remain far more commonly used (and are being added in greater numbers, too, so it's not just existing tagging), but there are mappers who really like the contact prefix and would oppose a deprecation. As far as I know, no one has attempted to put the issue to a vote. --Tordanik 20:52, 14 February 2019 (UTC)
- I made the first step and explicitly described at Wiki page that alternative is considered as preferable by mappers Mateusz Konieczny (talk) 12:57, 19 September 2019 (UTC)
format inconsistency with the key “phone”:50, 20 June 2013 (UTC)
Review websites
I have seen a few mappers adding contact:[ yelp | tripadvisor | foursquare ] to businesses. IMHO these are not means of contact, instead these are review website. While I personally think that we do not need them in OSM at all, they certainly do not belong in the contact:* namespace. --Polarbear w (talk) 21:03, 15 September 2017 (UTC)
- In a discussion on the Tagging list, adding those websites was discouraged by most contributors. They are not designed as means of contact to the businesses, and focus on individual reviews. They are seen as an instrument of search engine optimizers and spammers. --Polarbear w (talk) 19:50, 9 October 2017 (UTC)
- Personally, id put 99% of the Facebook links I've seen in that category also. Even if you can technically contact the business through Facebook, its main purpose is to make the business look overly good and they use of a lot of the same SEO/spam tactics. Would something like Yelp qualify also? --Adamant1 (talk) 17:31, 24 January 2019 (UTC)
contact:website vs website
Is there any difference in meaning between contact:website and website tag? Mateusz Konieczny (talk) 18:52, 9 October 2017 (UTC)
- Not in my opinion. It was just the attempt by some mappers to group "phone, fax, email, website" under a common key prefix. --Polarbear w (talk) 19:37, 9 October 2017 (UTC)
- It might be splitting hairs, but I think there is a difference. To me, visiting a website doesn't qualify as "contact." Anymore then it would be if I stand outside of a business and look at the hours on their door (Maybe that's more to do with it being a bad tag though and not them having different meanings per say).--Adamant1 (talk) 17:26, 24 January 2019 (UTC)
- Totally agree with Adamant1. --The knife (talk) 22:23, 30 July 2019 (UTC)
- I also agree that contact:website makes tagging more complex, because it eludes to the differentiation of websites which include contact possibilities and those that don't. IMHO we should discourage the use of contact:website for this reason. --Dieterdreist (talk) 15:16, 19 September 2019 (UTC)
I've seen a few people around that have deleted the older more widely accepted tags like phone=* and replaced them with these. It wasn't on a mass scale or anything, one person in particular did it a lot though, but there should still be something on this page about how its inappropriate replace the old tags with these ones. As they can coexist. Hopeful it will help the situation at least a little. I don't think banner at the top is sufficient enough. --Adamant1 (talk) 07:46, 28 December 2018 (UTC)
- While it's unfortunate that we have two synonymous keys and I wish we could just finally decide which set of keys to use (it's been 10 years!), deleting them like that isn't really acceptable and a recipe for edit wars. Feel free to add something to the page. --Tordanik 20:14, 14 January 2019 (UTC)
contact:youtube?
Is it just me, or is it quite absurd to describe it as a contact method? Mateusz Konieczny (talk) 13:22, 19 May 2020 (UTC)
- Just ignore it Mateusz, the "contact"-prefix is not about sense. Sooner or later it will disappear, if we simply do not use it. --Dieterdreist (talk) 14:42, 19 May 2020 (UTC)
- contact:website in theory can work and link to a contact form - though it is basically never used in that way Mateusz Konieczny (talk) 16:05, 19 May 2020 (UTC)
- It still makes sense in this respect. The extent and nature of contact:*=* isn't well-defined. You can comment on videos, reply to Community posts, (these 2 alone already make contact:youtube=* more directly contact=*-fiting than most contact:website=* tags you observe) and there will be a list of email address and links in the About section. Keys like contact:webcam=* (cf Talk:Key:contact#Webcam.3F could be worse than contact:website=* - there's usually not even any contact channel listed. We simply need to clarify how to use contact:*=*, *:website=* and *:url=*, etc. -- Kovposch (talk) 10:34, 20 May 2020 (UTC)
Addresses?
These tags IMHO are not about Addresses, let’s remove the group or find a better one.—-Dieterdreist (talk) 17:42, 1 October 2020 (UTC)
emergency phone number
Is there any way to include emergency numbers? Eg. the opening_hours of a vet are Mo-Fr 08:00-16:00, but in case of emergency you can call under phone number X. (Mabye contact:emergencyphone ?) --TBKMrt (talk) 07:45, 2 December 2020 (UTC)
- I am not aware of any. There is emergency_telephone_code=* which has 90% values of "112" and might eventually apply, although the term "code" seems strange? Also it is not documented in the wiki. Some usage also for emergency:phone=* (4191 uses, undocumented but eventually suitable) and much less for emergency_phone=* (337 instances, undocumented but from the values it looks as if it could be suitable for your scope). From this short lookup it seems emergency:phone=* is the best available option. The contact prefix should be avoided unless you want to make everyone's life harder by using multiple keys with the same meaning. For completely there is also "emergency:contact:phone=*" with 115 uses and contact:phone:emergency with 106 uses, i.e. both combined are at 5% of emergency:phone=*. --Dieterdreist (talk) 09:03, 2 December 2020 (UTC)
- @Dieterdreist: I really did not expect an answer that quickly so thanks for the quick reply!
- @ emergency_telephone_code: I also thought about that code does not really sound as if it would fit the rest of contact:*
- @ emergency:phone: I saw that one, but I honestly don't really like it since it's awfully close to emergency=phone and this is something completelly else. I thought about using the emergency key in general, but the key as such seems to be more in use for public emergencies (eg. fire_hydrant, life_ring, phone, siren).
- Because of the existing mixes of contact, phone and emergency I would more tend to use emergency_phone as key even it does not have a huge ammount of uses. The reason simply is that it's name is short and descriptive and it would fit the rest of the contact key. So phone = contact:phone and emergency_phone = contact:emergency_phone. But after all I don't really mind the way how it is included.
- Usecase would be this vet that has a public phone number for cases of emergency.
- --TBKMrt (talk) 16:20, 4 December 2020 (UTC)
- I don't know if this would help in any way. I checked/searched taginfo. If you want use a mix of contact & phone & emergency i found two schemes which people starts to use:
- => 115x emergency:contact:phone:
- => 106x contact:phone:emergency:
- Additionally you could look to the values which are used for these both versions and look to the taginfo chronology tab to see if there is organic growth --MalgiK (talk) 17:00, 4 December 2020 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Key:contact | CC-MAIN-2021-04 | refinedweb | 1,925 | 61.06 |
Scrapy - Importing Excel .csv as start_url
So I'm building a scraper that imports a .csv excel file which has one row of ~2,400 websites (each website is in its own column) and using these as the start_url. I keep getting this error saying that I am passing in a list and not a string. I think this may be caused by the fact that my list basically just has one reallllllly long list in it that represents the row. How can I overcome this and basically put each website from my .csv as its own seperate string within the list?
raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__) exceptions.TypeError: Request url must be str or unicode, got list: import scrapy from scrapy.selector import HtmlXPathSelector from scrapy.http import HtmlResponse from tutorial.items import DanishItem from scrapy.http import Request import csv with open('websites.csv', 'rbU') as csv_file: data = csv.reader(csv_file) scrapurls = [] for row in data: scrapurls.append(row) class DanishSpider(scrapy.Spider): name = "dmoz" allowed_domains = [] start_urls = scrapurls def parse(self, response): for sel in response.xpath('//link[@rel="icon" or @rel="shortcut icon"]'): item = DanishItem() item['website'] = response item['favicon'] = sel.xpath('./@href').extract() yield item
Thanks!
Joey
Just generating a list for
start_urls does not work as it is clearly written in Scrapy documentation.
From documentation:.
I would rather do it in this way:
def get_urls_from_csv(): with open('websites.csv', 'rbU') as csv_file: data = csv.reader(csv_file) scrapurls = [] for row in data: scrapurls.append(row) return scrapurls class DanishSpider(scrapy.Spider): ... def start_requests(self): return [scrapy.http.Request(url=start_url) for start_url in get_urls_from_csv()]
Scrapy - Importing Excel .csv as start_url, So I'm building a scraper that imports a .csv excel file which has one row of ~ 2,400 websites (each website is in its own column) and using these as the start_url. class scrapy.exporters.CsvItemExporter (file, include_headers_line = True, join_multivalued = ',', ** kwargs) [source] ¶ Exports items in CSV format to the given file-like object. If the fields_to_export attribute is set, it will be used to define the CSV columns and their order. The export_empty_fields attribute has no effect on this exporter
Try opening the .csv file inside the class (not outside as you have done before) and append the start_urls. This solution worked for me. Hope this helps :-)
class DanishSpider(scrapy.Spider): name = "dmoz" allowed_domains = [] start_urls = [] f = open('websites.csv'), 'r') for i in f: u = i.split('\n') start_urls.append(u[0])
how to output a CSV file for each start_url? : scrapy, When I provide a list of start_urls my spider crawls them and outputs the data all into 1 csv file. newline='') as output_file: wr = csv.writer(output_file, dialect=' excel') for data in cleaned_list: import scrapy class ToScrapeSpiderEU(scrapy. Scrapy – Importar Excel .csv como start_url Así que estoy construyendo un raspador que importa un file . csv excel que tiene una fila de ~ 2,400 sitios web (cada website está en su propia columna) y los usa como el start_url.
I find the following useful when in need:
import csv import scrapy class DanishSpider(scrapy.Spider): name = "rei" with open("output.csv","r") as f: reader = csv.DictReader(f) start_urls = [item['Link'] for item in reader] def parse(self, response): yield {"link":response.url}
Scrapy import websites from CSV file : scrapy, I dont know how to import csv file that Scrapy can read website URLs from first column start_urls=[] allowed_domains=[] df=pd.read_excel("xyz.xlsx") for url in � When FEED_EXPORT_FIELDS is empty or None (default), Scrapy uses the fields defined in item objects yielded by your spider. If an exporter requires a fixed set of fields (this is the case for CSV export format) and FEED_EXPORT_FIELDS is empty or None, then Scrapy tries to infer field names from the exported data - currently it uses field names
for row in data: scrapurls.append(row)
row is a list [column1, column2, ..]
So I think you need to extract the columns, and append to your start_urls.
for row in data: # if all the column is the url str for column in row: scrapurls.append(column)
How to scrape Alibaba.com product data using Scrapy, Tutorial to build a scrapy spider to crawl Alibaba.com search results and extract that spider is allowed to crawl; start_urls is the urls which the spider will start crawling when it Export Product Data into JSON or CSV using Scrapy Rating, Date, Author etc from Product Reviews into an Excel Spreadsheet. scrapy crawl myspider -o data.json scrapy crawl myspider -o data.csv scrapy crawl myspider -o data.xml Scrapy has its built-in tool to generate json, csv, xml and other serialization formats . If you want to specify either relative or absolute path of the produced file or set other properties from command line you can do it as well.
Try this way also,
filee = open("filename.csv","r+") # Removing the \n 'new line' from the url r=[i for i in filee] start_urls=[r[j].replace('\n','') for j in range(len(r))]
How to save scraped data as a CSV file using Scrapy, Here's a basic Scrapy spider I built: import scrapy. class SpiderSpider(scrapy. Spider):. name = 'spider'. allowed_domains = ['books.toscrape.com']. start_urls� Scrap
Look into Scrapy web-scraping framework. There is also aiohttp which is based on AsyncIO. Gathering scraping results. I think you don't actually need an Excel writer here since you are only writing simple text data - you are not concerned with advanced data types or workbook style and formatting. Use a CSV writer - Python has a built-in csv module.
First, we import scrapy. Then, a class is created inheriting ‘Spider’ from Scrapy. That class has 3 variables and a method. The variables are the spider’s name, the allowed_domains and the start_URL. Pretty self-explanatory.
Finally, export the dataframe to a CSV file which we named quoted.csv in this case. Don't forget to close the chrome driver using driver.close(). Adittional resources. 1. finding elements You'll notice that I used the find_elements_by_class method in this walkthrough. This is not the only way to find elements.
- Update your error log please
- i always get: KeyError: 'Link'
- If your csv file doesn't have a column header with the name
Link, you should get that error.
- thanks @SIM you helped me! | http://thetopsites.net/article/56229351.shtml | CC-MAIN-2020-45 | refinedweb | 1,058 | 67.86 |
The function remove_edge removes an edge given by one of the twin halfedges that forms it, from a given arrangement. Once the edge is removed, if the vertices associated with its endpoints become isolated, they are removed as well. The call remove_edge(arr, e) is equivalent to the call arr.remove_edge (e, true, true). However, this free function requires that Traits be a model of the refined concept ArrangementXMonotoneTraits_2, which requires merge operations on -monotone curves. If one of the end-vertices of the given edge becomes redundant after the edge is removed (see remove_vertex() for the definition of a redundant vertex), it is removed, and its incident edges are merged. If the edge-removal operation causes two faces to merge, the merged face is returned. Otherwise, the face to which the edge was incident before the removal is returned.
#include <CGAL/Arrangement_2.h> | http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Arrangement_2_ref/Function_remove_edge.html | crawl-001 | refinedweb | 144 | 53.92 |
Let us start with an introduction to
the GAME OF LIFE problem. As per wiki, the next generation of the
universe is determined by four rules:

Any live cell with fewer than two live neighbours dies,
as if by loneliness.
Any live cell with more than three live neighbours
dies, as if by overcrowding.
Any live cell with two or three live neighbours lives,
unchanged, to the next generation.
Any dead cell with exactly three live neighbours comes to life.
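The four rules collapse into a single next-state function of two inputs: whether the cell is currently alive, and how many of its eight neighbours are alive. Here is a minimal sketch of that function (in Python, used purely as illustrative pseudocode for the C# logic built later in this article):

```python
def next_state(is_alive, live_neighbours):
    """Apply the four Game of Life rules to a single cell."""
    if is_alive:
        # Rules 1 and 2: loneliness (<2) or overcrowding (>3) kill the cell;
        # Rule 3: two or three live neighbours keep it alive.
        return live_neighbours in (2, 3)
    # Rule 4: a dead cell with exactly three live neighbours comes to life.
    return live_neighbours == 3
```

Every later piece of the design (neighbour counting, grid growth, threading) exists only to feed this function the right inputs for every cell.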
The inputs below represent the cells in the universe as X or -. X is a live
cell; - is a dead cell (no cell). The inputs give the initial pattern of
cells in the universe. The output is the state of the system at the next
tick (one run of the application of all the rules), represented in the
same format.
Block pattern: Input A / Output A
Boat pattern: Input B / Output B
Blinker pattern: Input C / Output C
Toad pattern: Input D / Output D
(The figure for each pattern shows the initial cells and the resulting next tick.)
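These patterns are easy to verify with a throwaway simulation. The sketch below (Python, using a set of live coordinates rather than the grid classes built later, so the universe is effectively unbounded) steps a pattern one tick. Note how the toad's next generation contains live cells on rows that did not exist in the input, which is exactly the auto-growth case discussed next:

```python
from collections import Counter

def step(live):
    """One tick over a set of (row, col) live-cell coordinates."""
    # counts[cell] = number of live neighbours of that cell
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}            # still life
blinker = {(0, 0), (0, 1), (0, 2)}                   # horizontal line of three
toad = {(0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2)}
```

Running `step` on these sets reproduces the figures: the block is unchanged, the blinker flips between horizontal and vertical with period 2, and the toad's next phase occupies rows above and below its original two rows.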
It is important to note that in Output D, two new rows have been added due to Rule #4, i.e. any dead cell with exactly three live neighbours comes
to life. Thus, the next state will include two new auto-grown rows.
This looks interesting, and it seems that implementing the logic in a structured language will not be very difficult. However, there are a couple of catches here.
Implementing such a solution in an object-oriented manner is quite a challenging task. However, we will come up with a design which obeys object-oriented principles.
The first idea which comes to mind is to keep separate grids for the initial generation and the next generation. In the beginning, we will have two grids, say Input Grid and Output Grid. Input Grid is the initial state of the Game of Life to start with; Output Grid will contain the next generation of Input Grid. We need to apply the rules to each cell in Input Grid and write that cell's next generation into Output Grid. In cases where row or column growth is required, the new rows or columns will be added to Output Grid.
Please note that the state of Input Grid will not be updated to get the next generation, so there is no run-time consideration of cell state changes. In other words, consider Input Grid as a grid with frozen cells, while Output Grid keeps changing until all cells in Input Grid are evaluated. Please see the figure below:
Once a generation is complete, we swap the Output Grid into the Input Grid, and the process continues.
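The freeze-and-swap cycle can be sketched as follows (again in Python, with fixed-size grids for brevity; the row/column growth case is handled separately below). All reads go against the frozen input grid, all writes against the output grid, and the two references are exchanged once the generation is complete:

```python
def count_live_neighbours(grid, r, c):
    rows, cols = len(grid), len(grid[0])
    return sum(grid[r + dr][c + dc]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0)
               and 0 <= r + dr < rows and 0 <= c + dc < cols)

def tick(input_grid, output_grid):
    """Compute the next generation: read only input_grid, write only output_grid."""
    for r, row in enumerate(input_grid):
        for c, alive in enumerate(row):
            n = count_live_neighbours(input_grid, r, c)
            output_grid[r][c] = n == 3 or (alive and n == 2)

def run(grid, generations):
    input_grid = [row[:] for row in grid]
    output_grid = [row[:] for row in grid]
    for _ in range(generations):
        tick(input_grid, output_grid)
        # The swap: last generation's output becomes the next frozen input.
        input_grid, output_grid = output_grid, input_grid
    return input_grid
```

Because `tick` overwrites every output cell, the stale values left in the swapped-back grid never leak into a later generation.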
We need to implement a two-dimensional matrix with every cell containing one of two boolean values: live or dead. This two-dimensional structure should be able to grow on any side, e.g. a new row could be added at the top as well as the bottom, and a new column could be added before the first column or after the last one. This suggests a list, since we need not refer to a cell by its index but merely enumerate from the first item to the last.
From the above discussion, it is clear that we are moving towards utilizing the list collection classes in C#. We have a custom class for a cell with a boolean property IsAlive. This will make the implementation more extensible.
Further, to hold a list of cells we have a custom object called Row, which contains a list of Cells. Therefore, our grid will contain a list of such Rows, each of which in turn contains its list of Cells.
Let us have a look at how will our Grid look like:
public class Grid
{
// List of Rows in Grid
public List<Row> GridObj { set; get; }
// other code ...
}
Each row is represented as follows:
public class Row
{
//List of Cells
public List<Cell> Cells { get; set; }
// other code ...
}
Finally each Cell will look like this:
public class Cell
{
//A Cell can be alive or dead based on this property - true = alive and false = dead
public Boolean IsAlive { get; set; }
// other code ...
}
This is a very important point about simultaneously updating all cells in the Game of Life grid. At first sight this seems trivial. Let us analyze the questions below before moving on:
The answer to Question 1 is rather simple. It does not require the CPU to execute the cell update operations at literally the same instant; it only requires that all tasks can be done in parallel in different threads, since a cell's next generation does not depend on the next generation of any other cell. So here we need to implement threading to apply a simultaneous update to all cells.
The answer to the second question is a bit tricky. As discussed above, a cell's next generation does not depend upon the next generation of other cells. This is true: we always read from Input Grid (which is frozen) and write the result into the Output Grid cell, so an Output Grid cell's state is independent of the state of the other cells in Output Grid. However, in some situations the Output Grid can grow; a new row or column can be added to it. This will cause issues in parallel operations. How? Think about a situation where Thread1 is updating the first row of the grid, but another thread, Thread2, adds a new row to the Output Grid before Thread1 has completed its operation. When Thread1 then updates a cell in the first row, the result will be incorrect: what was the first row before Thread2 executed has now become the second row, yet the changes have been applied to the new first row. Thus, there are two kinds of tasks required on the Output Grid: first, a task for changing the status of all existing cells, and second, a task for expanding the Output Grid if the growth rule is satisfied.
I will explain about these task in detail when discussing about code.
Here
is the list of classes used to implement Game of Life:
public Game(int rows, int columns)
{
if (rows <= 0 || columns <= 0) throw new ArgumentOutOfRangeException("Row
and Column size must be greater than zero");
_inputGrid = new Grid(rows, columns);
_outputGrid = new Grid(rows, columns);
}
objLifeGame.MaxGenerations = maxGenerations;
Game objLifeGame = new Game(3, 3);
objLifeGame.ToggleGridCell(0, 1);
objLifeGame.ToggleGridCell(1, 1);
// 1. Task for changing all existing Cell Status
private Task EvaluateCellTask;
// 2. Task for expanding output gird if respective rule satisfies
private Task EvaluateGridGrowthTask;
public struct CoOrdinates
{
public int X;
public int Y;
public CoOrdinates(int x, int y)
{
X = x;
Y = y;
}
}
public static void ChangeCellsState(Grid inputGrid, Grid outputGrid, CoOrdinates coOrdinates)
private static int CountAliveNeighbours(Grid grid, CoOrdinates coOrdinates)
private static int IsAliveNeighbour(Grid grid, CoOrdinates baseCoOrdinates, CoOrdinates offSetCoOrdinates)
private static Boolean IsAliveInNextState(Cell cell, int liveNeighbourCount)
public static void ChangeGridState(Grid inputGrid, Grid outputGrid)
private static void CheckColumnGrowth(Grid inputGrid, Grid outputGrid, int colId)
private static void CheckRowGrowth(Grid inputGrid, Grid outputGrid, int rowId)
// Dictionary to hold list of reachable cells co-ordinates for specified cell type
public static Dictionary<CellTypeEnum, List<CoOrdinates>> ReachableCellDictionary;
/// <summary>
/// Cell types are unique types of cell in grid of any size
/// Every cell type has distinct reachable djacent cells which can be traversed
/// </summary>
public enum CellTypeEnum
{
TopLeftCorner,
TopRightCorner,
BottomLeftCorner,
BottomRightCorner,
TopSide,
BottomSide,
LeftSide,
RightSide,
Center,
OuterTopSide,
OuterRightSide,
OuterBottomSide,
OuterLeftSide,
None
}
The below figure shows all cell types available in 4x4 Grid. Cell type for any other grid can be used in similar manner.
GridHelper:
This is a static helper class to perform operations like displaying the grid
and Copy source grid to destination.
/// <summary>
/// Display the grid
/// </summary>
public static void Display(Grid grid)
/// <summary>
/// Deep copy Copy Source grid to target grid
/// </summary>
/// <param name="sourceGrid"></param>
/// <param name="targetGrid"></param>
public static void Copy(Grid sourceGrid, Grid targetGrid)
/// <summary>
/// Set target grid schema similar to source grid schema
/// </summary>
/// <param name="sourceGrid"></param>
/// <param name="targetGrid"></param>
private static void MatchSchema(Grid sourceGrid, Grid targetGrid)
/// <summary>
/// Assign Source grid cell values to target grid
/// </summary>
/// <param name="sourceGrid"></param>
/// <param name="targetGrid"></param>
private static void AssignCellValues(Grid sourceGrid, Grid targetGrid)
This is the first version of the article. Reader comments are welcome.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
public static void Display(Grid grid)
{
Console.Clear();
foreach (Row row in grid.GridObj)
{
foreach (Cell cell in row.Cells)
{
Console.Write(cell.ToString());
}
Console.WriteLine();
}
Thread.Sleep(500);
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/329881/Game-of-life-Code-solution-in-Csharp | CC-MAIN-2016-18 | refinedweb | 1,391 | 59.84 |
davexunit pushed a commit to branch wip-container in repository guix. commit 51532e175ae941ddc362f4dd92c1e73953b6bdba Author: David Thompson <address@hidden> Date: Tue Jun 2 08:48:16 2015 -0400 gnu: build: Add Linux container module. * gnu/build/linux-container.scm: New file. * gnu-system.am (GNU_SYSTEM_MODULES): Add it. * .dir-locals.el: Add Scheme indent rules for 'call-with-clone', 'with-clone', 'call-with-container', and 'container-excursion'. --- .dir-locals.el | 5 + gnu-system.am | 1 + gnu/build/linux-container.scm | 271 +++++++++++++++++++++++++++++++++++++++++ 3 files changed, 277 insertions(+), 0 deletions(-) diff --git a/.dir-locals.el b/.dir-locals.el index cbcb120..65e1c6d 100644 --- a/.dir-locals.el +++ b/.dir-locals.el @@ -59,6 +59,11 @@ (eval . (put 'run-with-state 'scheme-indent-function 1)) (eval . (put 'wrap-program 'scheme-indent-function 1)) + (eval . (put 'call-with-clone 'scheme-indent-function 1)) + (eval . (put 'with-clone 'scheme-indent-function 1)) + (eval . (put 'call-with-container 'scheme-indent-function 1)) + (eval . (put 'container-excursion 'scheme-indent-function 1)) + ;; Recognize '~', '+', and '$', as used for gexps, as quotation symbols. ;; This notably allows '(' in Paredit to not insert a space when the ;; preceding symbol is one of these. 
diff --git a/gnu-system.am b/gnu-system.am index a3c56a8..d625d9c 100644 --- a/gnu-system.am +++ b/gnu-system.am @@ -357,6 +357,7 @@ GNU_SYSTEM_MODULES = \ gnu/build/file-systems.scm \ gnu/build/install.scm \ gnu/build/linux-boot.scm \ + gnu/build/linux-container.scm \ gnu/build/linux-initrd.scm \ gnu/build/linux-modules.scm \ gnu/build/vm.scm diff --git a/gnu/build/linux-container.scm b/gnu/build/linux-container.scm new file mode 100644 index 0000000..3eaba3b --- /dev/null +++ b/gnu/build/linux-container.scm @@ -0,0 +1,271 @@ +;;; GNU Guix --- Functional package management for GNU +;;; Copyright © 2015 David Thompson build linux-container) + #:use-module (ice-9 format) + #:use-module (ice-9 match) + #:use-module (srfi srfi-98) + #:use-module (guix utils) + #:use-module (guix build utils) + #:use-module (guix build syscalls) + #:export (%namespaces + run-container + call-with-container + container-excursion)) + +(define %namespaces + '(mnt pid ipc uts user net)) + +(define (call-with-clean-exit thunk) + "Apply THUNK, but exit with a status code of 1 if it fails." + (dynamic-wind + (const #t) + thunk + (lambda () + (primitive-exit 1)))) + +(define (mount-flags->bit-mask flags) + "Return the number suitable for the 'flags' argument of 'mount' that +corresponds to the symbols listed in FLAGS." + (let loop ((flags flags)) + (match flags + (('read-only rest ...) + (logior MS_RDONLY (loop rest))) + (('bind-mount rest ...) + (logior MS_BIND (loop rest))) + (('no-suid rest ...) + (logior MS_NOSUID (loop rest))) + (('no-dev rest ...) + (logior MS_NODEV (loop rest))) + (('no-exec rest ...) + (logior MS_NOEXEC (loop rest))) + (() + 0)))) + +(define* (mount-file-system spec root) + "Mount the file system described by SPEC under ROOT. SPEC must have the +form: + + (DEVICE TITLE MOUNT-POINT TYPE (FLAGS ...) OPTIONS CHECK?) + +DEVICE, MOUNT-POINT, and TYPE must be strings; OPTIONS can be a string or #f; +FLAGS must be a list of symbols. CHECK? is ignored." 
+ (match spec + ((source title mount-point type (flags ...) options _) + (let ((mount-point (string-append root mount-point)) + (flags (mount-flags->bit-mask flags))) + (mkdir-p mount-point) + (mount source mount-point type flags options) + + ;; For read-only bind mounts, an extra remount is needed, as per + ;; <>, which still applies to Linux 4.0. + (when (and (= MS_BIND (logand flags MS_BIND)) + (= MS_RDONLY (logand flags MS_RDONLY))) + (let ((flags (logior MS_BIND MS_REMOUNT MS_RDONLY))) + (mount source mount-point type flags #f))))))) + +(define (purify-environment) + "Unset all environment variables." + (for-each unsetenv + (match (get-environment-variables) + (((names . _) ...) names)))) + +;; The container setup procedure closely resembles that of the Docker +;; specification: +;; +(define (mount-file-systems root mounts) + "Mount the essential file systems and the those in the MOUNTS list relative +to ROOT, then make ROOT the new root directory for the process." + (define (scope dir) + (string-append root dir)) + + (define (bind-mount src dest) + (mount src dest "none" MS_BIND)) + + ;; Like mount, but creates the mount point if it doesn't exist. + (define* (mount* source target type #:optional (flags 0) options + #:key (update-mtab? #f)) + (mkdir-p target) + (mount source target type flags options #:update-mtab? update-mtab?)) + + (purify-environment) + + ;; The container's file system is completely ephemeral, sans directories + ;; bind-mounted from the host. + (mount "none" root "tmpfs") + + ;; Create essential file systems. + (mount* "none" (scope "/proc") "proc" + (logior MS_NOEXEC MS_NOSUID MS_NODEV)) + (mount* "none" (scope "/sys") "sysfs" + (logior MS_NOEXEC MS_NOSUID MS_NODEV MS_RDONLY)) + (mount* "none" (scope "/dev") "tmpfs" + (logior MS_NOEXEC MS_STRICTATIME) + "mode=755") + + ;; Create essential device nodes via bind-mounting them from the + ;; host, because a process within a user namespace cannot create + ;; device nodes. 
+ (for-each (lambda (device) + (when (file-exists? device) + ;; Create the mount point file. + (call-with-output-file (scope device) + (const #t)) + (bind-mount device (scope device)))) + '("/dev/null" + "/dev/zero" + "/dev/full" + "/dev/random" + "/dev/urandom" + "/dev/tty" + "/dev/ptmx" + "/dev/fuse")) + + ;; Setup standard input/output/error. + (symlink "/proc/self/fd" (scope "/dev/fd")) + (symlink "/proc/self/fd/0" (scope "/dev/stdin")) + (symlink "/proc/self/fd/1" (scope "/dev/stdout")) + (symlink "/proc/self/fd/2" (scope "/dev/stderr")) + + ;; Mount user-specified file systems. + (for-each (lambda (spec) + (mount-file-system spec root)) + mounts) + + ;; Jail the process inside the container's root file system. + (let ((put-old (string-append root "/real-root"))) + (mkdir put-old) + (pivot-root root put-old) + (chdir "/") + (umount "real-root" MNT_DETACH) + (rmdir "real-root"))) + +(define (initialize-user-namespace pid) + "Configure the user namespace for PID." + (define proc-dir + (string-append "/proc/" (number->string pid))) + + (define (scope file) + (string-append proc-dir file)) + + (let* ((uid (getuid)) + (gid (getgid)) + ;; Only root can map more than a single uid/gid. + (uid-range (if (zero? uid) 65536 1)) + (gid-range (if (zero? gid) 65536 1))) + + ;; Map the user/group that created the container to the root user + ;; within the container. + (call-with-output-file (scope "/setgroups") + (lambda (port) + (display "deny" port))) + (call-with-output-file (scope "/uid_map") + (lambda (port) + (format port "0 ~d ~d" uid uid-range))) + (call-with-output-file (scope "/gid_map") + (lambda (port) + (format port "0 ~d ~d" gid gid-range))))) + +(define (namespaces->bit-mask namespaces) + "Return the number suitable for the 'flags' argument of 'clone' that +corresponds to the symbols in NAMESPACES." 
+ (apply logior SIGCHLD + (map (match-lambda + ('mnt CLONE_NEWNS) + ('uts CLONE_NEWUTS) + ('ipc CLONE_NEWIPC) + ('user CLONE_NEWUSER) + ('pid CLONE_NEWPID) + ('net CLONE_NEWNET)) + namespaces))) + +(define (run-container root mounts namespaces thunk) + "Run THUNK in a new container process and return its PID. ROOT specifies +the root directory for the container. MOUNTS is a list of file system specs +that specify the mapping of host file systems into the container. NAMESPACES +is a list of symbols that correspond to the possible Linux namespaces: mnt, +ipc, uts, user, and net." + ;; The parent process must initialize the user namespace for the child + ;; before it can boot. To negotiate this, a pipe is used such that the + ;; child process blocks until the parent writes to it. + (match (pipe) + ((in . out) + (let ((flags (namespaces->bit-mask namespaces))) + (match (clone flags) + (0 + (call-with-clean-exit + (lambda () + (close out) + ;; Wait for parent to set things up. + (read in) + (mount-file-systems root mounts) + (thunk)))) + (pid + (initialize-user-namespace pid) + ;; TODO: Initialize cgroups. + (close in) + (write 'ready out) + (close out) + pid)))))) + +(define* (call-with-container mounts thunk #:key (namespaces %namespaces)) + "Run THUNK in a new container process and return its exit status. +MOUNTS is a list of file system specs that specify the mapping of host file +systems into the container. NAMESPACES is a list of symbols corresponding to +the identifiers for Linux namespaces: mnt, ipc, uts, pid, user, and net. By +default, all namespaces are used." + (call-with-temporary-directory + (lambda (root) + (let ((pid (run-container root mounts namespaces thunk))) + ;; Catch SIGINT and kill the container process. + (sigaction SIGINT + (lambda (signum) + (false-if-exception + (kill pid SIGKILL)))) + + (match (waitpid pid) + ((_ . 
status) status)))))) + +(define (container-excursion pid thunk) + "Run THUNK as a child process within the namespaces of process PID." + (define (namespace-file pid namespace) + (string-append "/proc/" pid "/ns/" namespace)) + + (let ((pid (number->string pid))) + (match (primitive-fork) + (0 + (call-with-clean-exit + (lambda () + (for-each (lambda (ns) + (call-with-input-file (namespace-file "self" ns) + (lambda (current-ns-port) + (call-with-input-file (namespace-file pid ns) + (lambda (new-ns-port) + ;; Joining the namespace that the process + ;; already belongs to would throw an error. + (unless (= (port->fdes new-ns-port) + (port->fdes current-ns-port)) + (setns (port->fdes new-ns-port) 0))))))) + ;; It's important that the mount namespace is joined last, + ;; otherwise the /proc mount point would no longer be + ;; accessible. + '("ipc" "net" "pid" "uts" "user" "mnt")) + (purify-environment) + (chdir "/") + (thunk)))) + (child-pid (waitpid child-pid))))) | https://lists.gnu.org/archive/html/guix-commits/2015-07/msg00043.html | CC-MAIN-2019-35 | refinedweb | 1,429 | 56.45 |
hi,
one of our customer is facing an issue with jiffies wrap up.
on a 32 bit machine, the variable jiffies count upto 472 days.
the customer's server was up for 472 days ('uptime') and to reproduce
the same, i tried to change the variable HZ in linux-2.6..23.9/include/asm-i386/param.h
from 100 to 10000.
after which i rebuilt the kernel with following steps:
# make oldconfig
# make modules_install
# make install
Now when i boot from this newly built kernel, i wrote a small kernel module
to read the jiffies and HZ global variable,which is as follows:
[root@localhost drivers]# cat get_jiffies.c
#include <linux/init.h>
#include <linux/module.h>
#include <asm/current.h>
#include <linux/sched.h>
#include <linux/time.h>
#include <linux/jiffies.h>
static int __init jiffies_init(void)
{
unsigned long j,z;
j = z = 0;
j = jiffies;
z = HZ;
printk(KERN_ALERT "jiffies value is %lu\n",j);
printk(KERN_ALERT "jiffies value in seconds %lu\n",(jiffies/HZ));
printk(KERN_ALERT "HZ value is %lu\n",z);
return 0;
}
static void __exit jiffies_exit(void)
{
printk(KERN_ALERT "Goodbye, world!\n");
}
module_init(jiffies_init);
module_exit(jiffies_exit);
MODULE_LICENSE("GPL");
[root@localhost drivers]# insmod get_jiffies.ko
[root@localhost drivers]# dmesg
jiffies value is 372939
jiffies value in seconds 1491
HZ value is 250 <====
why this HZ variable is shown as 250 ?
i am a newbie in kernel programming and i might be doing something really stupid as well
:oops:
~amit | http://www.linuxforums.org/forum/kernel/how-modify-read-tick-rate-hz-print-112088.html | CC-MAIN-2014-49 | refinedweb | 244 | 66.54 |
New release
OpenXava 4.4 goes for Java 7
The notion of enterprise Java developers wanting a speedy web framework solution, with no added gripes to their daily development life isn't a new one. Several options are now available allowing developers to create a UI on the fly. One such example of a rapid-development platform is OpenXava, allowing developers to create an AJAX application, purely from writing Java or Groovy domain classes. Nothing else is needed.
Don't think it's that simple? Check out the code below or look at this quickstart guide:
org.openxava.school.model package; javax.persistence import *.; org.openxava.annotations import *.; @ Entity public class Teacher { @ Id @ Column (length = 5) @ Required private String id; @ Column (length = 40) @ Required private String name; public String getId () { return id; } public void setId (String id) { this.id = id; } public String getName () { return name; } public void setName (String name) { this.name = name; } }
The latest version, OpenXava 4.4 is out and there's been some big changes this time round such as full Java 7 support. Other new bonuses mostly relate to database controls such as the ability to total a column when foldering and filtering collections. You can also filter by range in lists and collections.
In addition, OpenXava 4.4 now supports HtmlUnit 2.9 as well as jQuery and jQuery UI 1.5.2 and 1.8.12 respectively. LifeRay 4.1 is no longer supported.
OpenXava is ideally suited for enterprises needing AJAX database solutions. If you like what you see, check out the Changelog. You can download the latest version over at OpenXava's website. If you're still sceptical of how quick it is with the correct tools, here's how to create an application with the trio of OpenXava, Eclipse and Tomcat. | http://jaxenter.com/openxava-4-4-goes-for-java-7-41721.html | CC-MAIN-2014-15 | refinedweb | 300 | 59.3 |
Automated IP Configuration for React Native DevelopmentReact
There are three types of application builds we need to do for React Native development: debug, debug on target device, and production. As React Native is currently set up, you need to edit your AppDelegate.m file to switch between these three.
If you have done any kind of development beyond running your app in an iOS simulator, you are familiar with this bit of the AppDelegate.m file for your project:
The localhost URL in “OPTION 1” is fine until you try to debug your application on a physical device. Then the URL has to be changed to the IP address of your development host machine before you build and run. Depending on the version of React Native you are using in your project, you may have to change that URL back to localhost to debug in the simulator again.
When you want to build a production version of your app, you need to change your product scheme to Production and edit your AppDelegate.m to enable OPTION 2.
It’s a bit messier when you have a laptop and use it while out and about so it’s IP address changes dynamically. Similarly, if you are working with a team of people, each of them have systems with differing IP addresses and everyone editing the AppDelegate.m file is sure to cause conflicts and hassles.
Surely there is a better way.
Andrew Phillipo posted his solution on GitHub here:. The solution presented here is Inspired by his effort, and should automate and fix this issue with Native React.
In this gist, I have made a couple of improvements to Andrew’s Run Script:
First, there was only one PLISTCMD in his version, the one on line 5 in my version. When I tried to do a build with his Run Script, I got an error, “SERVER_IP” does not exist. In my version, I first attempt to ADD the SERVER_IP value (lines 3-4) and then attempt to set the SERVER_IP value (lines 5-6) in the project’s Info.plist file. Note the “|| true” at the end of lines 4 and 6 – this prevents the script from aborting on the failure of either PListBuddy command. The command line to determine your development machine’s current IP is within the $() on lines 3 and 5 – you can copy and paste this into a terminal to verify it works:
$ ifconfig | grep inet\ | tail -1 | cut -d " " -f 2
Second, I replaced the entire OPTION 1 vs OPTION 2 logic in AppDelegate.m with preprocessor directives to generate the proper option based upon your actual IP address. See this (second) gist:
Third, I fixed the logic in RTWebSocketExecutor.m in React Native to use the IP address we set in the Run Script. The code for setting up the WebSocket is buried in the React Native core source, and it is hardcoded to localhost:8081. This breaks debugging in chrome while executing on a physical device. My fix solves this issue. See this (third) gist:
Step by step instructions
Until the changes are made to React Native, you can follow these instructions to fix your projects to benefit from these proposed changes.
We will be editing the two files indicated on the left. We will be adding a Run Script in the Build Phases.
Step 1: Add Run Script
Click on the plus sign indicated in the screenshot above. Select “New Run Script Phase” from the popup menu.
The Run Script appears at the end of the Build Phase items list:
Click on the arrow to open it.
In the code block, we’re going to copy from the first gist I presented and paste it verbatim.
Step 2: Edit AppDelegate.m
For your convenience, the code to add to AppDelegate.m is in the second gist above.
Note: I used #if 0 … #endif to remove the original code. I left it in the source file for reference.
Also note the #warning for DEBUG DEVICE is hit – this is because I have selected my iPhone as the build target.
If I choose one of the simulators as target, the #warning for DEBUG SIMULATOR is hit:
To build a production version, I select my phone as target again and then I select Product -> Scheme -> Edit Scheme:
On the dialog that pops up, select Run on the left and set Build Configuration to Release:
In AppDelegate.m, the #warning for PRODUCTION DEVICE is hit.
Step 3: Fix the hardcoded URL in RCTWebSocketExecutor.m
Note that I commented out the original return statement and get the proper IP from the plist file, similar logic as was added to AppDelegate.m. The code for this init function is in the third gist above.
Conclusion
These trivial changes should improve your workflow when using React Native. It is especially effective if you develop on more than one machine or are part of a team. You will no longer be required to frequently edit the AppDelegate.m file for the three build scenarios. The number of merge conflicts you experience should be fewer since this file won’t be edited to contain per-user custom IP addresses.
If you like this work and want to see it become part of React Native, let the team know by commenting on the issue here:.
Image “There’s no place like 127.0.0.1” at beginning of article via Torley under cc license. | https://moduscreate.com/blog/automated-ip-configuration-for-react-native-development/ | CC-MAIN-2017-51 | refinedweb | 911 | 71.14 |
When I learned C using Microsoft Visual Studio, it didn't allow me to create an array with a non-constant size. I had to either put a value like
int arr[5];
#define size 5
int arr[size];
#include <stdio.h>
int main()
{
printf("Enter a value: ");
int x;
scanf("%d", &x);
int arr2[x];
for (int i = 0; i < x; i++)
{
arr2[i] = i;
printf("Array at %d is %d.\n", i, arr2[i]);
}
return 0;
}
This is valid C. It is referred to as a variable length array (VLA). This feature was added to the language as part of the C99 standard.
MSVC is well known for not supporting many C99 and later features, including VLAs. | https://codedump.io/share/2B6lsAdRQbIJ/1/c---invalid-syntax-allowed-by-compiler | CC-MAIN-2017-43 | refinedweb | 119 | 72.87 |
0
Trying to write a very simple function that boxes in a hello world print out. I have been stuck here for hours now and would appreciate some help as the frustration is killing me.
The print out should look like this.
//////////////////////
/ Hello World /
/ Hello World /
//////////////////////
This is my current code
#include <stdio.h> void print_hello(int n); void print_char(int *); void main() { print_hello(10); print_hello(5); print_char(1); } void print_hello(int n, int *) { int i; int j; for(i=1 ; i<=n ; i++) { printf("Hello World\n"); printf("\n\n"); } for (int j = 1; j<=i ; j++) { printf ("*"); } } }
If this will not take you long, please help as I have been here for hours, and would like to move on now.
Edited by Nick Evan: Added code-tags | https://www.daniweb.com/programming/software-development/threads/272868/function-will-not-work | CC-MAIN-2017-17 | refinedweb | 129 | 86.23 |
Win a copy of Barcodes with iOS this week in the iOS forum or Core Java for the Impatient in the Java 8 forum!
Originally posted by Eddie Vanda: Hi Nikhilesh, I think that your main problems is that you don't update your calendar object. The most efficient algorithm for getting seconds would be to get the milliscount from System, divide by 1000. and take modulo 60. I just newed the calendar object each time to get the current time. I tried to simplify things a bit as follows:
import java.util.*;
import javax.swing.*;
import javax.swing.event.*;
import java.awt.*;
import java.awt.event.*;
public class DigClock implements ActionListener {
static final JTextField jtf = new JTextField (5);
public void actionPerformed (ActionEvent ae) {
GregorianCalendar calendar = new GregorianCalendar ();
int time = calendar.get(Calendar.SECOND);
jtf.setText (String.valueOf (time));
}
public static void main(String[] args) {
DigClock myTask = new DigClock();
javax.swing.Timer myTimer = new javax.swing.Timer(400,myTask);
JFrame myFrame = new JFrame();
Container c = myFrame.getContentPane ();
c.add (jtf);
myFrame.pack ();
myFrame.setVisible (true);
myTimer.start ();
}
} | http://www.coderanch.com/t/337775/GUI/java/refresh-system-time | CC-MAIN-2015-11 | refinedweb | 177 | 50.12 |
Search
MediaWiki's full-text search is split between back-end and front-end classes, allowing for the backend to be extended with plugins.
Front end
Work for MediaWiki 1.13
Some work is going on now (March 2008) to improve the front-end for MediaWiki 1.13.
- Thumbnails
- done Hits that return 'Image:' pages now include a thumbnail and basic image metadata instead of raw text extracts.
- Thumbnails for media gallery pages and categories might be great!
- Text extracts
- These need to be prettified... a lot :)
- Categories and other metadata?
- Be nice to add some stuff while keeping it clean
- Search refinement field
- Recommend moving this up to the top, like the limited one LuceneSearch extension had.
- The namespace checklist is currently awful, should be cleaned up in some nice way.
- ??
Back end
MediaWiki currently ships with two functional search backends; one for MySQL 4.x and later, and the other for PostgreSQL. Custom classes can also be loaded, and $wgSearchType set to the desired class name.
SearchEngine
This is an abstract base class, with stubs for full-text searches in the title and text fields, and a standard implementation for doing exact-title matches.
SearchMySQL4
The SearchMySQL and SearchMySQL4 classes implement boolean full-text searches using MySQL's fulltext indexing for MyISAM tables. (In old versions of MediaWiki there was a SearchMySQL3 class as well, which did some extra work to do boolean queries.)
Work for MediaWiki 1.13
Put some notes on my blog...
SearchPostgres
I'm not actually sure how SearchPostgres and SearchTsearch2 classes relate. Is one of them obsolete, or... ?
Extensions
MWSearch
The MWSearch extension provides a SearchEngine subclass which contacts Wikimedia's Lucene-based search server. This replaces the older LuceneSearch extension which reimplemented the entire Special:Search page. | http://www.elinux.org/Search | CC-MAIN-2013-20 | refinedweb | 295 | 66.54 |
The most startling thing about developing with Django on the Google App Engine is how similar the simple things are. On the App Engine a concise front page view might look like this:
```python
from django.shortcuts import render_to_response
from django.template import RequestContext

from snippet.models import Snippet

def index(request):
    recent = Snippet.all().order("-entry_time")[:5]
    return render_to_response(
        'snippets/index.html',
        {'recent': recent},
        RequestContext(request, {}))
```
Translating this to a normal Django view involves only one change:
`Snippet.all().order("-entry_time")[:5]` becomes `Snippet.all().order_by("-entry_time")[:5]`. That is the comforting side of the Google App Engine: in most cases someone used to Django can simply start coding with occasional glimpses at the documentation. However, that doesn't really tell the whole story. Fortunately, I'm in a storytelling mood, so we'll delve a bit deeper than the superficial "It's the Same Damn Thing" angle[1].
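To see how mechanical that change is, here is a toy in-memory stand-in for both call chains. These stubs are mine, for illustration only; they are not the real Django or GAE query classes:

```python
# Toy stand-ins for the two query APIs: minimal classes that mimic the
# method names, sorting plain dicts instead of querying a datastore.
class GAEStyleQuery:
    def __init__(self, rows):
        self._rows = rows

    def order(self, prop):                      # GAE datastore spelling
        reverse = prop.startswith("-")
        key = prop.lstrip("-")
        return sorted(self._rows, key=lambda r: r[key], reverse=reverse)


class DjangoStyleQuery:
    def __init__(self, rows):
        self._rows = rows

    def order_by(self, prop):                   # Django ORM spelling
        reverse = prop.startswith("-")
        key = prop.lstrip("-")
        return sorted(self._rows, key=lambda r: r[key], reverse=reverse)


rows = [{"entry_time": 1}, {"entry_time": 3}, {"entry_time": 2}]
gae_recent = GAEStyleQuery(rows).order("-entry_time")[:2]
django_recent = DjangoStyleQuery(rows).order_by("-entry_time")[:2]
assert gae_recent == django_recent == [{"entry_time": 3}, {"entry_time": 2}]
```

Both produce the same ordered slice; only the method name differs, which is why the port is a one-word edit.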
Actually, it turns out that really addressing the differences here involves telling two stories: the first is a short one about the Google App Engine platform, and the second is a longer one looking at Django on the Google App Engine.
The Google App Engine Platform
The platform is the real attraction of the App Engine. This is because Google has made it amazingly simple to develop and deploy applications. If the Google App Engine had instead been free access to world-class dedicated servers, I don't think it would be nearly as appealing to developers[2] as the present incarnation.
Easy Deployment
Quite simply, the platform handles all the messiness of deployment for you (what was my PostgreSQL username again, and what did I name the tables, and why did SSH stop accepting my key for automatic login anyway?). It's difficult to believe how easy the deployment process is. You fill in a simple YAML file, and then run a simple command at the command line. Or, you can use an even simpler GUI to handle the deployment. The upload and syncing may take a minute or two, but your involvement is all of five seconds long, and utterly painless.
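The YAML file in question is app.yaml. A minimal one for a Python app of that era looked roughly like this; the application id and handler script name below are placeholders, not taken from any real project:

```yaml
application: mysnippets   # your registered application id
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: main.py         # a handler script that boots Django
```

After that, `appcfg.py update .` from the project directory pushes the app live.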
BigTable
Not only does GAE free you from most of the difficulties of deployment, it also shields you from many of the details of managing scaling. You won't be setting up servers or sharding your database; instead you'll just pay more money to Google. Or, at least that is the dream scenario of using BigTable instead of a cluster of relational databases. Exactly how well the scaling will work out has yet to be seen, and the keys to successful scaling (or at minimum the keys to failure) will still be firmly in the hands of the developers and their application design choices.
This will be an interesting area to watch once several GAE applications become successful and start placing higher demands on the platform. Also--as a brief aside--there is definitely an open niche for a few rocking tutorials on how to design models for BigTable.
Google Accounts & Mail
Another benefit, although probably far less important in the long run, is the ability to integrate with Google Accounts and send email via Google's mail servers. For applications oriented towards technical users, the former is a great feature, since most people in that group will already have a Google account[3]. In less technical user groups, I suspect the penetration of Google accounts is much lower, but it will still be a helpful option.
The ability to send emails with Google's servers is a nice little benefit as well, and sufficiently necessary for many webapps that using GAE without the ability would have been rather difficult. Other than saying "I'm glad I won't have to set up Postfix" there isn't too much more to say on that account.
Now that we've looked at the Google App Engine platform briefly, let's move on and look at the joys and sorrows of deploying Django on the Google App Engine.
Django on the Google App Engine
The previous segment began by saying the platform is the real attraction of the App Engine, but inevitably the source of its gifts is also the source of its inconveniences and frustrations. Django and GAE are not bad bedfellows, but GAE does have the tendency to steal the sheets sometimes.
The larger a framework grows the less agile it becomes. People often bemoan this point about Ruby on Rails with glum frowns and comments like "It works really well when you do what it wants you to." Django has largely avoided that fate because of its design goal of having interchangeable components (use your own templating system, use your own ORM, etc.), but it is not immune to the underlying problem: standardization makes it easy to repeat, but hard to adapt.
Even though it's possible to use only a subset of Django, doing so invalidates your existing experience with the pieces that are being replaced. Thus Django developers coming over to the GAE have to adjust their eyes to grok the new environment, even though it is filled with mostly familiar sights.
Models and BigTable
The biggest change for Django developers is the model framework: Django's ORM has been replaced with something of Google's design. This is a necessary change because the models are no longer existing in a relational database but are instead existing in BigTable, but this is also the place where attempts to port an application from Django@Elsewhere to Django@GoogleAppEngine will run into the most resistance.
For an example, lets look at part of the
Snippet model I use in my simple syntax highlighting app on GAE.
from appengine_django.models import BaseModel from google.appengine.ext import db CHOICES = ('Scheme','Python','Ruby') # etc, etc class Snippet(BaseModel): title = db.StringProperty(required=True) highlighter = db.StringProperty(required=True, default='Python',choices=CHOICES) content = db.TextProperty(required=True) url_hash = db.StringProperty() parent_hash = db.StringProperty()
Instead of storing a direct link to the parent Snippet, I am storing the parent's hash. This means I can build a link to the parent Snippet without doing an additional query. But how would I get the parent's title? Well, one way to do it would be like this:
from django.shortcuts import render_to_response from models import Snippet def view_snip(request, hash): snip = Snippet.all().filter("url_hash =", hash) parent = Snippet.all().filter("url_hash =", snip.parrent_hash) return render_to_response( 'snippets/snippet_detail.html', {'object':snip,'parent_title':parent.title}, RequestContext(request, {}))
But doing it that way is the relational database way, and since we're using BigTable, it turns out that is now know as the wrong way. Instead, we 'retrieve' the parent's name by preemptively storing it in the child. Thus we would change the model to look like this:
class Snippet(BaseModel): title = db.StringProperty(required=True) highlighter = db.StringProperty(required=True, default='Python',choices=CHOICES) content = db.TextProperty(required=True) url_hash = db.StringProperty() parent_hash = db.StringProperty() parent_title = db.StringProperty()
And we would do the necessary fetching once and only once when we created the Snippet.
# assuming parent_hash variable exists in environment new_snip = form.save(commit=False) parent = Snippet.all().filter("url_hash =", parent_hash) snip.parent_hash = parent_hash snip.parent_title = parent.title
This feels a little awkward for someone used to relational databases--I mean what the hell, why are you caching data in the model itself?--but this is how BigTable is best utilized: duplicate data to avoid extra lookups. For data that changes very frequently, then it may become necessary to fetch that data frequently (at which point you would start using a different kind of caching: memcached), but often simple data redundency is sufficiently flexible to substantially reduce the quantity of performed lookups.
Certainly, this data redundancy also brings with it certain costs. For example, what do you do if you change the title of a Snippet? Well, you have to change the title for the Snippet itself, and for each of its children snippets. The code might look something like this:
def update_title(hash, title): parent = Snippet.all().filter("url_hash =", hash) parent.title = title parent.put() for snip in Snippet.all().filter("parent_hash =", hash): snip.parent_title = title snip.put()
Which is, in all fairness, pretty ugly. The cost and benefit of this kind of data duplication is going to depend entirely on how much your data is duplicated (if the average Snippet has zero children, then it has no performance cost, although there is a programmer cost for writing the extra code), and how often you make changes (if a Snippet can't change its title, all the sudden the inconveniences become irrelevant).
I think the most important thing to remember here is that different caching and design patterns will be necessary to use BigTable effectively. This is the least cosmetic difference caused by using Django on Google App Engine, and one that needs to be carefully considered while designing an application to run on GAE.
Django.contrib.* and Middleware
Another chunk of Django functionality that is not available on Google App Engine is many of the middlewares and
django.contrib apps. The authentication and sessions frameworks in particular will be missed. As one would expect, Google does provide workable--and in some places perhaps more useful--replacements, but its yet another piece of Django that has been replaced and will have to be relearned by switchers to GAE.
Unlike the mismatch between BigTable and relational databases, the difference here is between details instead of concepts. A ported app will need to have chunks rewritten, but the apps themselves--for the most part--won't need to be redesigned.
Django Helper Project
There are many links and vague but optimistic praises for the Django Helper project which aims to abstract away differences between Django@Elsewhere and Django@GAE. In the end, I think its a somewhat misguided--although well intentioned and at times helpful--effort. The problem is that creating a layer of abstraction on top of two different things will lead to the minimal subset of both. Imagine trying to create a library that abstracted away whether you were using Git or Mercurial for your repository: you'd lose out on the unique features of both, and end up with something less compelling than either on its own.
My experience is that the helper doesn't quite work as advertised in a variety of ways, and it makes more sense to do what you can with Django, and then use the Google framework components to supplement Django, rather than pouring a third blend of the two into the koolaid.
Ending Thoughts
Really, I had a very positive experience developing with the Google App Engine, and I'm looking forward to refining my existing project, and to trying out new projects on it as well. It eliminates the pain of deploying webapps, which is usually the most difficult--or at least most capital intense--part of pushing out a new project. Before the AppEngine I had two choices for pushing out projects with a small budget (for the most part consisting of small personal projects):
- put it on a VPS and pay more $20-$40 per month, or
- put it on a shared host and accept often abysmal performance.
Now, Google App Engine has given me a third option with most of the benefits of the first two, and for that I am grateful. Whether the barrier to web applications needed to be lowered even further is a discussion for another day, but GAE is certainly an exciting experiment to observe and to participate in.
If you have tried developing with GAE, what were your impressions? Do you think it will be a viable platform for webapps?
One of my students recently delivered a speech whose title was 世界のみんなは同じだ or All The World's People Are the Same. Its a pleasant thought, usually one we arrive at somewhere between being paralyzed by lack of detail, and being paralyzed by excessive detail, but not one that seems particularly true. The same goes for the distinction betweeen Django on Google App Engine (DOGAE?) and Django in Others Places (DIOP?): it may seem the same, but it isn't.↩
And that is who the Google App Engine really seems to be aimed at. Small groups of developers who are lacking either the capital, the desire or the technical prowess to handle the deployment and scaling of their web application.↩
Well, except for the ones who have decided Google is becoming a terrifying creature to be avoided. I'm pretty sure those people won't be that happy with using a Google App Engine application anyway...↩
Reply to this entry | http://lethain.com/entry/2008/jun/17/overview-of-using-django-on-the-google-app-engine/ | crawl-002 | refinedweb | 2,099 | 53.21 |
is, naturally, a patch which makes the creation of probe points
possible; it is called Linux
kernel markers. This patch has been under development for some years.
Its path into the mainline has been relatively rough, but there are signs
that the worst of the roadblocks have been overcome. So perhaps a quick
look at this patch is called for.
With kernel markers, the placement of a probe point is easy:
#include <linux/marker.h>
trace_mark(name, format_string, ...);
The name is a unique identifier which is used to access the probe;
the documentation recommends a subsystem_event format, describing
the subsystem in which the probe is found and the event which is being
traced. For example: in a part of the patch which instruments the block subsystem, a
probe placed in elv_insert(), which inserts a request into its
proper location in the queue, is named blk_request_insert. The
format string describes the remaining arguments, each of which will be some
variable of interest at the time the trace point is hit.
Code which wants to hook into a trace point must call:
int marker_probe_register(const char *name, const char *format,
marker_probe_func *probe, void *pdata);
Here, name is the name of the trace point, format is the
format string describing the expected parameters from the trace point (it
must match the format string provided when the trace point was
established), probe() is the function to call when the trace point
is hit, and pdata is a private data value to pass to
probe(). The probe() function will have this prototype:
void (*probe)(const struct __mark_marker *mdata, void *pdata,
const char *format, ...);
The mdata structure includes the name of the trace point, if need
be, along with a formatted version of the arguments. The arguments
themselves are passed after the format string.
Registration of a marker does not, yet, set up the probe()
function to be called. First, the marker must be armed with:
int marker_arm(const char *name);
Once the marker has been armed, probe() will be called every time
execution arrives at the given trace point.
When probe points are no longer of interest, they can be shut down with:
int marker_disarm(const char *name);
void marker_probe_unregister(const char *name);
Calls to marker_arm() will nest - if a given marker has been armed
three times, then three marker_disarm() calls will be required to
turn it off again.
Internally, there are a lot of details to the management of markers. The
code at the actual trace point, in the end, looks much like one would
expect:
if (marker_is_armed) {
preempt_disable();
(*probe)(...);
preempt_enable();
}
In reality, it is not quite so simple. Getting marker support into the
kernel requires that the runtime impact of kernel markers be as close to
zero as possible, especially when the marker is not armed. A common use
case for markers is to investigate performance problems on systems running
in production, so they have to be present in production kernels without
causing performance problems themselves. Adding a test-and-jump operation
to a kernel hot path will always be a hard sell; the cache effects of
referencing a set of global marker state variables could also be
significant.
To get around this problem, the marker code comes with a separate patch
called immediate values. In
the architecture-independent implementation, an immediate value just looks
like any other shared variable. The purpose of immediate values, though,
is to provide variables with the assumption that they will be frequently
read but infrequently changed, and that the read operations must have the
lowest impact possible. So, in an architecture-specific implementation
(which only exists for i386 at the moment), changing an immediate value
actually patches any code which reads the value.
To say that the details of doing this sort of patching safely are ugly
would be to understate the point. But Mathieu Desnoyers has dealt with
those details, and nobody else need look at the resulting code.
Through
the use of immediate values, the code inserted by trace_mark() can
query the setting of a trace point without generating a memory reference at
all; instead, that setting is stored directly in the inserted code. So
there will be no potential for an expensive cache miss at the probe point.
The
patch also provides an immediate_if() construct which is intended
to allow jumps to be patched directly into the code, eliminating the test
altogether, but that functionality has not yet been implemented. Even
without this feature, immediate values allow the creation of trace points
whose runtime impact is very nearly zero, eliminating the most common
objection to their existence.
If and when this code is merged, the way will be clear for the creation of
a set of well-defined trace points for utilities like SystemTap and LTTng. That, in turn, could make the
internal operations of the kernel more visible to system administrators and
others who are not necessarily well versed in how the kernel works. This
sort of tracing ability has been on many users' wish lists for some time;
they might just be, finally, getting close to having that wish fulfilled.
Kernel markers
Posted Aug 16, 2007 15:58 UTC (Thu) by compudj (subscriber, #43335)
[Link]
Being the developer behind the Linux Kernel Markers, I would like to add some clarifications:
The argument "For example, the rate of change of the kernel code base would make the maintenance of a large set of probe points difficult, especially given that developers working on many parts of the code might not be particularly aware of - or concerned about - those points." should be put in context. Actually, if we think of tracing as being provided not only to Linux users, but also to kernel developers, embedded developers, etc, it makes sense to have an intrumentation set which follows the kernel source as closely as possible.
SystemTap's approach of creating an external tapset that have to be ported to each new kernel is correct for users of distribution kernels, but requires much more effort when trying to adapt the tapsets to a rapidly changing code base.
One of the advantages of putting markers at important locations in the kernel code is that they follow the code flow and use the underlying revision control system of the kernel. Moreover, the developers who are the most likely to know what trace points must be inserted and where are probably the maintainers of a subsystem themself. Therefore, it makes sense for them to be the final deciders on wether or not a trace point belongs to a kernel code path.
Second point, the immediate values optimized versions are currently implemented for i386 and powerpc.
Thanks for this thorough coverage of the state of the markers/immediate values.
Mathieu
Posted Aug 16, 2007 16:22 UTC (Thu) by davecb (subscriber, #1574)
[Link]
A decent explanation of that was Brian Cantrill's comment
at
I speculate that developers of particular kernel areas
would be active in creating and maintaining high-level and
correlated trace information, and end users could take
advantage of them, but being able to trace on, for example,
entry to an arbitrary function would be th next-most-used
functionality.
--dave
Posted Oct 2, 2007 19:35 UTC (Tue) by fuhchee (subscriber, #40059)
[Link]
Indeed. It may be interesting to point out the systemtap script interface
to the markers - for those who don't want to write their instrumentation
in C. This one just traces the first argument. (It's already working,
which makes sense since we prototyped markers with/before Mathieu.)
probe kernel.mark("name") { println ($arg1) }
We look forward to the inclusion of markers in linux, and their gradual
utilization throughout the code base.
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/245671/ | crawl-002 | refinedweb | 1,286 | 54.86 |
Subject: Re: [boost] [test]enable_if, etc. : was : ...multiprecision types ... for unit tests?
From: Christopher Kormanyos (e_float_at_[hidden])
Date: 2013-06-13 14:37:54
> Christopher Kormanyos <e_float <at> yahoo.com> writes:
>> * OK for GCC 4.7.2
>> * Errors for VC10 (ambiguous symbols), AKA VisualStudio 2010
>> * Errors for VC11 (ambiguous symbols), AKA VisualStudio 2012
>Sorry. I've misinterpreted MSVC output. I do see the error now. And... this
>is really weird one. I did not see something like this in a long time. What
>it comes to can be illustrated in this example:
<snip code sample>
>I'm sure it can be simplified further by removing specifics of
>multiprecision library. And the offending line is ... the template
>instantiation, in unrelated namespace HAS NOTHING TO DO with enable_if at
>all.
So if I understand, we are actually dealing with a compiler
issue here regarding the proper resolution of namespaces.
Is that what you are saying?
<snip>
>Any hints are welcome.
>Gennadiy
I think it would be best to qualify enable_if and disable_if
with the boost namespace in multiprecision. John, what
is you opinion?
But you are right in principle, it seems to be a compiler
issue. I apologize for wrongly involving your code in
this test case.
Gennadiy, do you really want to go the hard road and
use symbols inject enable_if, etc. in namespaces in your code.
It just seems like playing with fire since this symbol has such
a clear meaning in C++11. It's your decision, but it would
scare me enough to create my_enable_if, etc.
But again, it seems as though your code is right and
MSVC is wrong. Just a tough waiting game on a compiler
issue like that.
Have we cleared this one up now? Should I investigate
anything further? John, should we decorate multiprecision?
No Hurry here, it's something for 1.55.
Sincerely, Chris.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2013/06/204577.php | CC-MAIN-2021-43 | refinedweb | 336 | 70.29 |
Clever trick, curious on how safe it is for platform neutralityPosted Friday, 31 August, 2012 - 18:30 by chronosifter in
Having had to do a lot of NextPowerOfTwo calls in a proof of concept I found the performance to be abysmal using the traditional methods. So I resorted to the IEEE specification that C# relies upon and came up with the following:
[StructLayout(LayoutKind.Explicit)] struct IntFloatUnion { [FieldOffset(0)] public float Float; [FieldOffset(0)] public int Int; public int Sign { get { return +1 | (Int >> 31); } } public int Exponent { get { return ((Int >> 23) & 0xFF) - 127; } } public IntFloatUnion(float f) { Int = 0; Float = f; } }
And a similar structure for double precision.
Then the following function:
public static float NextPowerOfTwo(float n) { IntFloatUnion union = new IntFloatUnion(n); if (union.Sign < 0) throw new ArgumentOutOfRangeException("n", "Must be positive."); int exp = union.Exponent; if (union.Float - exp > 0) exp++; return (float)(1 << exp); }
Since this relies upon the bit structure of a float (and double in that case), I'm curious whether or not this sort of thing would be safe to use across multiple platforms without additional work to correct for platform differences?
Re: Clever trick, curious on how safe it is for platform ...
You might encounter some endianness issues, but other than that it should work just fine.
Re: Clever trick, curious on how safe it is for platform ...
When I did research regarding the half-precision float type, it turned out that endianness is not really anything to worry about when you use .Net/Mono (little-endian everywhere). Just add some line:
if(NextPow(7)!=8 throw...
and just use it until that throw pops ;) | http://www.opentk.com/node/3143 | CC-MAIN-2014-35 | refinedweb | 274 | 62.78 |
C++ Strings
A free video tutorial from Tim Buchalka's Learn Programming Academy
Professional Programmers and Teachers - 1.24M students
53 courses
1,593,222 students
Lecture description
In this video we learn about the sthe std::string class in C++
Learn more from the full course
Beginning C++ Programming - From Beginner to Beyond
Obtain Modern C++ Object-Oriented Programming (OOP) and STL skills. C++14 and C++17 covered. C++20 info see below.
45:59:04 of on-demand video • Updated July 2022
In this video, we'll learn about c++ strings. Standard string is a class in the c++ standard template library or stl. We could do an entire course on just the scl, and that course would be very long and complex. So in this video, I'll only talk about the major elements of the c++ string class. In order to use c++ plus strings, you must include the string header file. Strings are in the standard namespace. So in order to use them without using namespace standard, you must prefix them with standard and the scope resolution operator. This is also true for the standard string methods that work with c++ strings. Like c-style strings, c++ strings are stored contiguously in memory. However, unlike c-style strings which are fixed in size, c++ strings are dynamic and can grow and shrink as needed at runtime. C++ strings work with the stream insertion and extraction operators just like most other types in c++. The c++ string class provides a rich set of methods or functions that allow us to manipulate strings easily. Chances are that if you need to do something with the string that functionality is already there for you without having to rewrite it from scratch. C++ strings also work with most of the operators that we're used to for assigning, comparing and so forth. This is a huge advantage over c-style strings since c-style strings don't work well with those operators. Even though c++ strings are preferred in most cases sometimes you need to use c-style strings. Maybe you're interfacing with a library that's been optimized for c-style strings. Well, in this use case, you can still use c++ strings and take advantage of them. And when you need to you can easily convert the c++ string into a c-style string and back again. Like vectors, c++ strings are safer since they provide methods that can bounce check and allow you to find errors in your code so you can fix them before your program goes into production. Let's see how we can declare and initialize c++ strings. 
In all the examples in this video, I'm assuming that the string header file has been included and that we're using the standard namespace. Here you can see six examples of declaration and initialization of c++ strings. There are other ways as well using constructor and assignment syntax. But I'm mainly using the initializer syntax in this video. In the first example, we declare S1 as a string. Notice that the string type is lowercase. Unlike c-style strings, c++ strings are always initialized. In this case, S1 is initialized to an empty string. No garbage and memory to have to worry about. In the second example, I'm declaring and initializing S2 to the string Frank. Notice that frank is a c-style literal, that's okay. It will be converted to a c++ string. In the third example, S3 is initialized from S2, so a copy of S2 is created. S2 and S3 will both be Frank, but different Franks in different areas of memory. In the fourth example, I'm declaring and initializing S4 from Frank. But I'm only using the first three characters of the string Frank. So S4 will be initialized to the string fra. In the fifth example, I'm initializing S5 from S3 which is Frank. But notice the two integers that follow the S3 and the initializer. The first integer is the starting index and the second is the length. So in this case, we initialize S5 with the two characters that index 0 and 1 from S3. So S5 will be initialized to fr. Finally, we can initialize the string to a specific number of a specific character. In this case, three x's. Note that this uses constructor syntax with the parentheses instead of the curlies. Now that we've declared some strings, let's see how we can assign other values to them. With c++ strings, you can use the assignment operator. This feels much more natural than having to use the stream copy function like we would have to in c-style strings. In this example, I've declared S1 and it's empty. Then I can assign the c-style literal c++ rocks to S1. 
Pretty cool and pretty easy. S1 will grow dynamically as needed. In the second example, I've declared S2 and initialized it to hello. Then I assign S1 to S2. In this case, S2 will no longer contain hello. It will contain a copy of S1, c++ rocks. Let's see how we can concatenate strings together. Concatenation of strings just means building up a string from two other strings. We can use the plus operator to concatenate c++ strings. In this example, I created two strings part one which is c++ and part two which is powerful. Then I have an empty string sentence. Notice that I'm assigning two sentence the concatenated result of part one plus a space plus part two plus a space plus language. If I displayed sentence now, it would display c++ is a powerful language. Notice that the last example on the slide will not compile. This is because we have two c-style literals. And you can't concatenate c-style literals. It only works for c++ strings. A combination of c++ strings and c-style strings is okay though as we saw in the previous assignments. Just like we did with vectors, we can use the same operators to access string elements. In this case, the elements of a string are characters. So we can use the subscript operator as well as the at method. Remember, the app method performs bounce checking. So if you go over bounds, you'll get an exception which you can fix. Let's see how we can display screen characters one at a time. In this example, we have a string S1 initialized to Frank. We can use the range based for loop to display the string characters. In this case, f-r-a-n-k and the null character will be displayed. Pretty much what you expected, right. Notice that the type of the loop variable is char in this case. What do you think will happen if we change that to integer. In this case, I've changed it to integer. Is this what you expected. We told the compiler to use an integer and that's exactly what it's doing. 
So instead of displaying the character value of each element in the string, it's displaying the integer value that represents those characters. So in this case, 70 114 97 110 107 and 0 which represent f r a n k and of course the null character. These are the ascii codes for those characters. Comparing c++ strings couldn't be any easier or more intuitive. We use the same equality and relational operators that we've been using all along. We're comparing two string objects, so they'll be compared character by character, and their character values will be compared lexically. So a capital a is less than a capital z, and a capital a is less than a lowercase a. That's because the capital letters come before the lowercase letters in the ascii table. We can't use these operators on two c-style literals, but we can use them in the following cases. If we have two c++ strings, if we have one c++ string and a c-style literal or if we have one c++ string and one c-style string. Let's see some examples. Here we're defining five c++ string variables, S1 through S5. And then we perform some comparison operations and see the results. Of course, you would normally use these Boolean expressions in an if statement or looping conditional expressions. In the first example, we check to see if S1 is equal to S5. This is true since they both contain the string apple. S1 equals S2 is false since S1 is apple and S2 is banana. How about S1 not equal to S2. This is true since apple is not equal to banana. In the case of S1 less than S2, this is also true since apple comes before banana lexically in the ascii table. S2 greater than S1 is also true since banana comes before apple lexically. Notice that banana has an uppercase b whereas apple has a lowercase a. S4 less than S5 is false since apple with a lowercase does not come before apple with an upper case. And then finally, A1 equal apple is true because they're the same. Notice in this case, apple is a c-style string literal. 
The c++ string class has a rich set of very useful methods, too many methods to cover in detail in this video. I encourage you to study the c++ string class since it's going to be a class that you'll use often, and it's important that you know what it provides, so you don't reinvent the wheel when you need to solve a problem. The substring method extracts a substring from a c++ string. It doesn't change the string. It simply returns the substring and you could do whatever you want with it. In this case, I'm simply displaying it. But you can easily assign it to a string variable. Here, I've initialized S1 to this is a string. The first example takes a substring of this string starting at index 0 and including exactly 4 characters. If there are less than 4 characters left in the original string then all the remaining characters are included. In this case, the substring is the first word in the string, this. In the second example, we return the substring starting at index five and include two characters. That's the substring is, IS. Finally the last example starts at index 10 and includes four characters, this will return the substring test. Let's see how we can search a string for another. The c++ string class has a very handy method named find. Find works with characters and strings. It expects a string or character and returns the index or position of the beginning of that string or character in the original string. So if we have a string S1 that's initialized to this is a test and we want to find the string this, we'd get back a 0 since this starts at index 0. In the second example we're looking for the string is. In this case, it would return 2 since the first is starts at index 2. In the third example, we're finding the string test, and we get back a 10. In the fourth example, we're searching for a single character, the lowercase e, which is found at index 11. 
In the fifth example, we use a variant of the method that also allows the index where you want to start the search from. In this case, I want to find the is substring again. But I want to start at index 4. So this time it finds the is that's located at index 5. Finally, what happens if the string or character we want to find just isn't there. Well, in this case the method returns an end position, which means no position information available. You can check for this value in an if statement. And if true, you know what you were searching for wasn't there. Very easy, very powerful. There's also an r find method that starts searching from the end of the string to the beginning of the string. We can also remove characters from a c++ string using the erase and clear methods. For the erase method, you provide the starting index and how many characters to delete. The clear method deletes all the characters in the string so the string becomes the empty string. We've seen a lot of string methods and you can see how powerful this class is. Let's look at one more useful method and one more useful operator that are commonly used. The method is the length method. It returns the number of characters currently in the string object. In this example, S1 is Frank. So s1.length will return a 5. This is so easy and something that's impossible to do with c cell strings since they don't contain size information. The operator i wanted to cover is the compound concatenation assignment operator. In this case, S1 is Frank. And I can say S1 plus equals James. And James will be concatenated to Frank and the entire result string will be assigned back to S1. This is really handy and works very much the same way that the compound assignment operators worked with integers and doubles and so forth. There are also many more methods in the c++ string class for you to discover as you study c++. Okay, there's one more thing I'd like to talk about before we end this video, input with c++ strings. 
C++ strings work great with input and output streams. As you've seen, inserting c++ string variables to an output stream like cout is pretty easy and works just like we've been doing all along. Extracting a c++ string from an input stream like cin also works the same way we expect. However, there's one issue that's also true for c-style strings. Suppose we've defined S1 as a c++ string and we extract a string from cin as usual. Now suppose I type in hello space there. When I display S1, I will only see hello. The there was not extracted. This is because the extraction operator stops when it sees whitespace. In many cases, we want to read an entire line of text up to when the user presses enter. For example, I want the string to be hello there. Suppose I asked you to enter your full name. I want to be able to read William Smith, not just William. In this case, we can use the getlined function. The getline function has a couple of variants. The first variant expects two elements inside the parentheses. The first element is the input stream. In this case, we're using cin which defaults to the keyboard. The second element is the name of the c++ string where you want the text that the user enters stored. That's it. Very easy. In the example, I'm saying getline cin S1. Now everything the user types is stored into S1. Getline stops reading when it sees the new line. It doesn't include the new line in the string it just discards it. The other variant of getline has another element in the parentheses. The first two are the same as before, the input stream and the c++ string variable name. The third is called the delimiter. This is the character that you want getline to stop reading input at. So as long as the user doesn't enter this character, everything will be stored in the string variable. Once the delimiter is seen, it's not included in the string variable and it's discarded. In the last example, I'm using a lowercase x as the delimiter. 
So if I type "this is x", then the string stored in s1 will be "this is " (note it keeps the space before the delimiter) and the x is discarded. Well, we've covered a lot of material in this video, and there's much more in the string class to learn. But this gives you a good starting point so you can use the C++ string class effectively. Also, you've now been introduced to object-oriented programming with both vectors and strings. Pretty soon, we'll be developing our own classes, which is pretty cool. That completes this video. Please play with the string class: create examples; assign, delete, and display strings; and try out some of the methods in this video. It won't take long before you're really comfortable working with C++ strings.
Is it possible to get the current method name in C# similar like you can get the current function name in C/C++ using __FUNCTION__?
I tried to do this once with reflection but I couldn't figure out how to do it.
OK looks like StackFrame will do what I need. Just wondering how slow it will be since I want to use it for logging purposes. Something like:
public void Foo()
{
    StartMethod(new StackFrame());
    ...
    ...
}
StartMethod will then log whatever is passed in as the stack frame.
52 minutes ago, BitFlipper wrote
OK looks like StackFrame will do what I need. Just wondering how slow it will be since I want to use it for logging purposes. Something like:
StartMethod will then log whatever is passed in as the stack frame.
It's not speedy. Heck, it's more expensive than throwing exceptions. I'd suggest passing the name in as a string and hoping people use it right.
If you must go this route, you don't have to pass it in at all though. Just do this in the logger:
StackTrace stackTrace = new StackTrace();
MethodBase methodBase = stackTrace.GetFrame(1).GetMethod();
CallerMemberName?
Slow, but useful if you wrap it in an #if DEBUG ... #endif directive. (DEBUG is automatically defined when building with the DEBUG configuration. This can be changed in the "Build" tab of Project Properties, via the "Define DEBUG constant" checkbox in the corresponding build configuration.)
So when you compile without DEBUG, this won't eat away your performance.
@BitFlipper: New in C# 5.0 (Caller Information):

void Log(string message, [CallerMemberName] string memberName = "")
{
    Console.WriteLine(string.Format("{0}: {1}", memberName, message));
}
The reflection route won't always work, due to inlining. The CallerMemberNameAttribute is more reliable, but it doesn't give you the current method, but rather the calling method. Should still work for what you're trying to do, obviously, but you specifically asked about getting the current method name. I'd also point out that the CallerMemberName approach can be misused, though that's probably not something you're worried about.
Actually, it turns out StackFrame isn't that slow, less than 1 ms. And since I can't use C# 5.0, I'll stick with that for now.
But it is good to know this feature was added to 5.0.
BTW this is all my own code so I have full control over what happens where. However I need this because hand-coding each method is going to take a long time, vs simply copy/pasting the same thing everywhere.
1 day ago, BitFlipper wrote
vs simply copy/pasting the same thing everywhere.
Whoa There!
If copy and pasting is the answer, it's probably time to write a function and call that instead.
OK, I'm possibly missing something here, but wouldn't:
string method = string.Format("{0}.{1}", MethodBase.GetCurrentMethod().DeclaringType.FullName, MethodBase.GetCurrentMethod().Name);
Work?
@BitFlipper: If you have a logger, you shouldn't expose this glue outside the logger, make it transparent. So you need the caller not the current function name.
void Foo() {
this.Log("something happened")
}
void Log(string message) {
LogInternal(string.Format("[timestamp] callerName message", ...));
}
If you need an unguessable random string (for a session cookie or access token, for example), it can be tempting to reach for a random UUID, which looks like this:
88cf3e49-e28e-4c0e-b95f-6a68a785a89d
This is a 128-bit value formatted as 36 hexadecimal digits separated by hyphens. In Java and most other programming languages, these are very simple to generate:
import java.util.UUID;

String id = UUID.randomUUID().toString();
Under the hood this uses a cryptographically secure pseudorandom number generator (CSPRNG), so the IDs generated are unpredictable and, for practical purposes, unique. However, there are some downsides to using random UUIDs that make them less useful than they first appear. In this note I will describe them and what I suggest using instead.
How random is a UUID anyway?
As stated on the Wikipedia entry, of the 128 bits in a random UUID, 6 are fixed variant and version bits, leaving 122 bits of actual randomness. 122 bits is still quite a lot of randomness, but is it actually enough? If you are generating OAuth 2 access tokens, then the spec says no:
The probability of an attacker guessing generated tokens (and other
credentials not intended for handling by end-users) MUST be less than
or equal to 2^(-128) and SHOULD be less than or equal to 2^(-160).
Well, even if the attacker only makes a single guess, the probability of guessing a 122-bit random value can never be less than 2^(-122), so strictly speaking a random UUID is in violation of the spec. But does it really matter?
To work out how long it would take an attacker to guess a valid token/cookie, we need to consider a number of factors:
- How quickly can the attacker make guesses? An attacker that can try millions of candidate tokens per second can find a match much faster than somebody who can only try a hundred. We will call this rate (in guesses per second) A.
- How many bits of randomness are in each token? A 128-bit random token is more difficult to guess than a 64-bit token. We will label the number of random bits B.
- How many tokens are valid at any given time? If you have issued a million active session cookies then an attacker can try and guess any of them, making their job easier than if there was just one. Such batch attacks are often overlooked. We will label the number of valid tokens in circulation at any one time S.
Given these parameters, OWASP give a formula for calculating how long it will take an attacker to guess a random token as:

(2^B + 1) / (2 × A × S) seconds
Let’s plug in some numbers and see what we get. But what are reasonable numbers? Well, for security we usually want to push the numbers well beyond what we think is actually possible to be really sure. So what is actually possible now?
When we consider how fast a well-resourced attacker could guess a token, we can use the Bitcoin hash rate as a reasonable upper-bound approximation. A lot of people are investing a lot of time and money into generating random hashes, and we can view this as roughly equivalent to our attacker's task. When I looked back in February (you can see how long my blog queue is!), the maximum rate was around 24,293,141,000,000,000,000 hashes per second, or around 2^64.
That’s a pretty extreme number. It’s fairly unlikely that anyone would direct that amount of resource at breaking your site’s session tokens, and you can use rate limiting and other tactics. But it is worth considering the extremes. After all, this is clearly possible with current technology and will only improve over time.
How many tokens might we have in circulation at any time? Again, it is helpful to consider extremes. Let's say your widely successful service issues access tokens to every IoT (Internet of Things) device on the planet, at a rate of 1 million tokens per second. As you have a deep instinctive trust in the security of these devices (what could go wrong?), you give each token a 2-year validity period. At a peak you will then have around 63 trillion (2^46) tokens in circulation.
If we plug these figures into the equation from before, we can see how long our 122-bit UUID will hold out:
A = 2^64
B = 122
S = 2^46
That comes out as … 2048 seconds. Or a bit less than 35 minutes. Oh.
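As a quick check (my own sketch, not from the original post), the arithmetic can be reproduced with BigInteger, dropping the "+ 1" term, which is negligible here:

```java
import java.math.BigInteger;

public class GuessTime {
    // seconds ≈ 2^B / (2 * A * S) with A = 2^64 guesses/sec and S = 2^46 live tokens
    static BigInteger seconds(int bits) {
        BigInteger a = BigInteger.TWO.pow(64);   // attacker guess rate
        BigInteger s = BigInteger.TWO.pow(46);   // tokens in circulation
        return BigInteger.TWO.pow(bits)
                .divide(BigInteger.TWO.multiply(a).multiply(s));
    }

    public static void main(String[] args) {
        System.out.println(seconds(122)); // 2048 seconds for a random UUID
        System.out.println(seconds(160)); // 562949953421312 seconds ≈ 17.9 million years
    }
}
```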
OK, so those extreme numbers look pretty terrifying, but they are also quite extreme. The Bitcoin community spend enormous sums of money (certainly in the tens of millions of dollars) annually to produce that kind of output. Also, testing each guess most likely requires actually making a request to one of your servers, so you are quite likely to notice that level of traffic – say by your servers melting a hole through the bottom of your datacentre. If you think you are likely to attract this kind of attention then you might want to carefully consider which side of the Mossad/not-Mossad threat divide you live on and maybe check your phone isn’t a piece of Uranium.
All this is to say that if you have deployed random UUIDs in production, don’t panic! While I would recommend that you move to something better (see below) at some point, plugging more likely numbers into the equation should reassure you that you are unlikely to be at risk immediately. An attacker would still have to invest considerable time and money into launching such an attack.
Other nits
The borderline acceptable level of entropy in a random UUID is my main concern with them, but there are others too. In the standard string form, they are quite inefficient. The dash-separated hexadecimal format takes 36 characters to represent 16 bytes of data. That’s a 125% expansion, which is pretty terrible. Base64-encoding would instead use just 24 characters, and just 22 if we remove the padding, resulting in just 37.5% expansion.
Finally, a specific criticism of Java's random UUID implementation is that internally it uses a single shared SecureRandom instance to generate the random bits. Depending on the backend configured, this can acquire a lock which can become heavily contended if you are generating large numbers of random tokens, especially if you are using the system blocking entropy source (don't do that, use /dev/urandom). By rolling your own token generation you can use a thread-local or pool of SecureRandom instances to avoid such contention. (NB – the NativePRNG uses a shared static lock internally, so this doesn't help in that case, but it also holds the lock for shorter critical sections so is less prone to the problem.)
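A minimal sketch of the thread-local approach just mentioned (illustrative only; the class and method names here are my own, not from the post):

```java
import java.security.SecureRandom;

public class Tokens {
    // One SecureRandom per thread avoids lock contention on a single shared instance
    private static final ThreadLocal<SecureRandom> RANDOM =
        ThreadLocal.withInitial(SecureRandom::new);

    public static byte[] randomBytes(int n) {
        byte[] buf = new byte[n];
        RANDOM.get().nextBytes(buf);
        return buf;
    }

    public static void main(String[] args) {
        System.out.println(Tokens.randomBytes(20).length); // 20
    }
}
```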
What should we use instead?
My recommendation is to use a 160-bit (20 byte) random value that is then URL-safe base64-encoded. The URL-safe base64 variant can be used pretty much anywhere, and is reasonably compact. In Java:
import java.security.SecureRandom;
import java.util.Base64;

public class SecureRandomString {
    private static final SecureRandom random = new SecureRandom();
    private static final Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();

    public static String generate() {
        byte[] buffer = new byte[20];
        random.nextBytes(buffer);
        return encoder.encodeToString(buffer);
    }
}
This produces output values like the following:
Xl3S2itovd5CDS7cKSNvml4_ODA
This is both shorter than a UUID and also more secure having 160 bits of entropy. I can also make the SecureRandom into a ThreadLocal if I want.
So how long would it take our extreme attacker to find a 160-bit random token? Around 17.9 million years. By tweaking the format of our tokens just a little we can move from worrying about attacker capabilities and resources to inner peace and happiness. It is so far beyond the realm of possible that we can stop worrying.
Why not go further? Why not 256 bits? That would push the attack costs into even more absurd territory. I find that 160 bits is a sweet spot of excellent security while still having a compact string representation.
One thought on “Moving away from UUIDs”
I received a couple of pieces of feedback indirectly. I’m not sure if they want to be named, so I will post their comments anonymously. To summarise they make two points:
1. The comparison to Bitcoin hash rate is irrelevant as the attacker must make an online attack, and we can apply rate limiting to thwart that.
2. A more likely threat is a timing attack when the token/session id is looked up in a prefix tree index. (See for a good background on timing attacks).
My responses to these would be:
1. Firstly, there are potential offline attack vectors. For instance, some applications or frameworks log a hash of the session id in access logs to correlate requests from the same session. Anyone with read-only access to the logs can then mount an offline brute force attack to recover session ids.
Secondly, I would make a defense-in-depth argument. You can easily eliminate the threat entirely for basically no cost by increasing the entropy, without needing to rely on rate limiting, so just do it. Then add rate limiting as well.
2. I’m not sure how many databases actually use prefix tree (trie) indexes rather than the more common B-trees, but it is certainly possible. Even without a trie index, timing attacks are possible due to the common use of memcmp for string comparisons in databases. Such timing attacks themselves require tens or hundreds of thousands of requests (to eliminate noise), so are also somewhat mitigated by rate limiting.
Paragon Initiative Enterprises have a great write up of how to address these timing attacks using “split tokens”: . Another approach is to append a MAC tag to each token and to verify it (in constant time!) before hitting the database at all – an approach I recommended in | https://neilmadden.blog/2018/08/30/moving-away-from-uuids/ | CC-MAIN-2019-04 | refinedweb | 1,664 | 62.98 |
I am trying to create a project where I can recalculate my data based on the user input. Below is my flow in my Dataiku project:
Data - this dataset I have uploaded through a CSV file

webapp_input - this dataset I am loading from a webapp (created with HTML, JS and a Python backend)

Output - this dataset I am creating using a Python recipe
Problem statement :
Whenever I add more data to webapp_input through the Dataiku webapp, I have to run the Python recipe manually to re-build the "output" dataset.

But I am trying to trigger it through the webapp itself. As soon as I update the data in the UI, it should be stored in the webapp_input dataset (which is working fine) and re-build the "output" dataset.

I have tried the following Python scripts for this.
import pandas as pd
import numpy as np
import dataiku
import json
import time
import dataikuapi

client = dataiku.api_client()

def jobrun():
    project = client.get_project('Testing')
    definition = {
        "type": "NON_RECURSIVE_FORCED_BUILD",
        "outputs": [{
            "id": "output",
            "partition": "NP"
        }]
    }
    job = project.start_job(definition)
    state = ''
    while state != 'DONE' and state != 'FAILED' and state != 'ABORTED':
        time.sleep(1)
        state = job.get_status()['baseStatus']['state']
Even though the dataset is present in the project, it's giving us the below error message.
DataikuException: java.lang.IllegalArgumentException: Dataset not found or not buildable: Testing.output
I have tried a second option:

from dataiku.scenario import Scenario

client = dataiku.api_client()
scenario = Scenario()
scenario.build_dataset("output")
None of the options is working. Would you be able to help me with solving this issue, or suggest any other approach for my requirement?
Thanks | https://community.dataiku.com/t5/Using-Dataiku/re-build-a-dataset-from-dataiku-webapp/m-p/2907/highlight/true | CC-MAIN-2021-49 | refinedweb | 248 | 68.97 |
2012-10-01 20:42:16 8 Comments
I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ',').
So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated!
Example data:
from pandas import DataFrame
import numpy as np

a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])

b = DataFrame([{'var1': 'a', 'var2': 1},
               {'var1': 'b', 'var2': 1},
               {'var1': 'c', 'var2': 1},
               {'var1': 'd', 'var2': 2},
               {'var1': 'e', 'var2': 2},
               {'var1': 'f', 'var2': 2}])
I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do:
def fun(row):
    letters = row['var1']
    letters = letters.split(',')
    out = np.array([row] * len(letters))
    out['var1'] = letters

a['idx'] = range(a.shape[0])
z = a.groupby('idx')
z.transform(fun)
@Naga Kiran 2018-10-24 16:29:52
There is a possibility to split and explode the dataframe without changing the structure of the dataframe.
Input:
Out:
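A minimal sketch of this split-and-explode idea, using the a frame from the question (this assumes pandas ≥ 0.25 for DataFrame.explode, and is an illustration rather than the answer's original snippet):

```python
import pandas as pd

a = pd.DataFrame([{'var1': 'a,b,c', 'var2': 1},
                  {'var1': 'd,e,f', 'var2': 2}])

# Turn the CSV strings into lists, then give each list element its own row.
b = (a.assign(var1=a['var1'].str.split(','))
      .explode('var1')
      .reset_index(drop=True))

print(b)
#   var1  var2
# 0    a     1
# 1    b     1
# 2    c     1
# 3    d     2
# 4    e     2
# 5    f     2
```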
@MaxU 2016-11-06 13:12:51
UPDATE2: more generic vectorized function, which will work for multiple normal and multiple list columns
Demo:
Multiple list columns - all list columns must have the same # of elements in each row:
preserving original index values:
Setup:
CSV column:
using this little trick we can convert a CSV-like column to a list column:
UPDATE: generic vectorized approach (will work also for multiple columns):
Original DF:
Solution:
first let's convert CSV strings to lists:
Now we can do this:
OLD answer:
Inspired by @AFinkelstein solution, i wanted to make it bit more generalized which could be applied to DF with more than two columns and as fast, well almost, as fast as AFinkelstein's solution):
@Wen-Ben 2017-09-01 16:45:34
dude, if you can open a discussion in Git pandas , I think we do need a build in function like this !!! I have seen so many question about unlistify and unnesting in SO for pandas
@Jaskaran Singh Puri 2018-08-21 13:52:46
how to use this for multiple columns. Like if I have comma separated data in 2 columns and want to do it in sequence?
@MaxU 2018-08-21 15:14:49
@JaskaranSinghPuri, you want to convert all CSV columns to lists first.
@Guido 2018-11-28 14:12:52
Unfortunately, it doesn't work if your list elements are tuples. But after converting the entire tuple to a string, it works like a charm!
@krassowski 2019-01-23 16:53:20
This solution appears to be the fastest one except for the case there are many very short lists, see stackoverflow.com/a/54318064 for benchmarks.
@krassowski 2019-01-22 23:45:09
I have been struggling with out-of-memory experience using various way to explode my lists so I prepared some benchmarks to help me decide which answers to upvote. I tested five scenarios with varying proportions of the list length to the number of lists. Sharing the results below:
Time: (less is better, click to view large version)
Peak memory usage: (less is better)
Conclusions:
Full details (functions and benchmarking code) are in this GitHub gist. Please note that the benchmark problem was simplified and did not include splitting of strings into the list - which most solutions performed in a similar fashion.
@MaxU 2019-01-23 17:03:07
Nice comparison! Do you mind to post a code, that you used for plotting the benchmarks ?
@krassowski 2019-01-23 17:14:45
Please see this link: gist.github.com/krassowski/0259a2cd2ba774ccd9f69bbcc3187fbf (already included in the answer) - IMO it would be a bit too long to paste it all here.
@piRSquared 2018-08-08 17:23:44
TL;DR
Demonstration
Let's create a new dataframe d that has lists
General Comments
I'll use np.arange with repeat to produce dataframe index positions that I can use with iloc.
Why don't I use loc?

Because the index may not be unique and using loc will return every row that matches a queried index.
Why don't you use the values attribute and slice that?

When calling values, if the entirety of the dataframe is in one cohesive "block", Pandas will return a view of the array that is the "block". Otherwise Pandas will have to cobble together a new array. When cobbling, that array must be of a uniform dtype. Often that means returning an array with dtype that is object. By using iloc instead of slicing the values attribute, I alleviate myself from having to deal with that.
Why do you use assign?

When I use assign with the same column name that I'm exploding, I overwrite the existing column and maintain its position in the dataframe.
Why are the index values repeated?

By virtue of using iloc on repeated positions, the resulting index shows the same repeated pattern. One repeat for each element of the list or string.

This can be reset with reset_index(drop=True)
For Strings
I don't want to have to split the strings prematurely. So instead I count the occurrences of the sep argument, assuming that if I were to split, the length of the resulting list would be one more than the number of separators.

I then use that sep to join the strings, then split.
For Lists
Similar to strings, except I don't need to count occurrences of sep because it's already split.

I use NumPy's concatenate to jam the lists together.
@cgels 2018-06-05 23:42:09
The string function split can take an optional boolean argument 'expand'.
Here is a solution using this argument:
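A sketch of the expand-based approach, applied to the question's a frame (my illustration of the idea, not necessarily the author's original code):

```python
import pandas as pd

a = pd.DataFrame([{'var1': 'a,b,c', 'var2': 1},
                  {'var1': 'd,e,f', 'var2': 2}])

# expand=True returns one column per split piece; stack() then melts those
# columns into rows with a MultiIndex whose last level we can drop.
s = a['var1'].str.split(',', expand=True).stack()
s.index = s.index.droplevel(-1)   # align with a's original index
s.name = 'var1'

# Joining on the (now duplicated) index repeats the other columns per piece.
b = a.drop(columns='var1').join(s)[['var1', 'var2']].reset_index(drop=True)
print(b)   # rows: a/1, b/1, c/1, d/2, e/2, f/2
```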
@Dennis Golomazov 2018-01-05 20:16:01
Based on the excellent @DMulligan's solution, here is a generic vectorized (no loops) function which splits a column of a dataframe into multiple rows, and merges it back into the original dataframe. It also uses a great generic change_column_order function from this answer.
Example:
Note that it preserves the original index and order of the columns. It also works with dataframes which have non-sequential index.
@Evan 2018-02-01 03:07:31
this cracked this one for me, nice work: stackoverflow.com/a/48554655/6672746
@Ted Petrou 2017-11-04 17:34:22
Here is a fairly straightforward method that uses the split method from the pandas str accessor and then uses NumPy to flatten each row into a single array.

The corresponding values are retrieved by repeating the non-split column the correct number of times with np.repeat.
@Michael Dorner 2018-06-21 14:08:05
That could be a very beautiful answer. Unfortunately, it does not scale for lots of columns, does it?
@Ankit Maheshwari 2017-06-18 10:27:08
Another solution that uses python copy package
@Daniel Himmelstein 2016-10-09 17:57:42
Here's a function I wrote for this common task. It's more efficient than the Series/stack methods. Column order and names are retained.
With this function, the original question is as simple as:
@bold 2016-12-12 00:09:38
Really efficient & great help for my problem !
@Harry M 2018-07-24 19:43:13
This worked the best for me!
@Derryn Webster-Knife 2016-06-19 15:42:16
Just used jiln's excellent answer from above, but needed to expand to split multiple columns. Thought I would share.
@inodb 2015-06-24 21:01:57
Similar question as: pandas: How do I split text in a column into multiple rows?
You could do:
@Jesse 2017-06-04 07:13:49
It works after adding one more line to rename the series:
s.name = 'var1'
@jlln 2015-04-21 09:02:49
I came up with a solution for dataframes with arbitrary numbers of columns (while still only separating one column's entries at a time).
@KWubbufetowicz 2016-06-22 18:51:20
nice but sadly slow because of this todict() conversion :(
@Pavel 2015-03-17 21:07:42
I have come up with the following solution to this problem:
@DMulligan 2015-01-28 00:28:46
After painful experimentation to find something faster than the accepted answer, I got this to work. It ran around 100x faster on the dataset I tried it on.
If someone knows a way to make this more elegant, by all means please modify my code. I couldn't find a way that works without setting the other columns you want to keep as the index and then resetting the index and re-naming the columns, but I'd imagine there's something else that works.
@cyril 2017-04-15 00:06:43
This solution worked significantly faster and appears to use less memory,
@Dennis Golomazov 2018-01-05 19:20:50
This is a nice vectorized pandas solution, I was looking for that. Thanks!
@user5359531 2018-08-23 22:10:07
When I try this on my own dataset, I keep getting TypeError: object of type 'float' has no len() at the very first step (DataFrame(df.var1.str.split(',').tolist()))
@Flair 2018-10-01 23:00:20
@user5359531 your dataset probably has some NaN in that column, so the replacement is
b = DataFrame(a.var1.str.split(',').values.tolist(), index=a.var2).stack()
@Chang She 2012-10-01 21:15:03
How about something like this:

pd.concat([Series(row['var2'], row['var1'].split(','))
           for _, row in a.iterrows()]).reset_index()
Then you just have to rename the columns
@Vincent 2012-10-02 00:22:15
Looks like this is going to work. Thanks for your help! In general, though, is there a preferred approach to Split-Apply-Combine where Apply returns a dataframe of arbitrary size (but consistent for all chunks), and Combine just vstacks the returned DFs?
@Chang She 2012-10-02 01:43:48
GroupBy.apply should work (I just tried it against master). However, in this case you don't really need to go through the extra step of grouping since you're generating the data by row right?
@Vincent 2012-10-02 03:00:45
Yes, that's right. Thanks for the tip. iterrows is nice.
@horatio1701d 2014-06-25 20:20:25
Hey guys. Sorry to jump into this so late but wondering if there is not a better solution to this. I'm trying to experiment with iterrows for the first time since that seems like the ticket for this. I'm also confused by the solution proposed. What does the "_" represent? Can you possibly explain how the solution works? --Thank you
@horatio1701d 2014-06-25 21:54:00
Can the solution be extended to more than two columns?
@horatio1701d 2014-09-03 18:06:26
Is there an implementation with the new API enhancements that might make this a little more performant? This implementation takes far too long to be practical on my large datasets.
@MaxU 2017-02-02 22:30:29
please check this vectorized approach...
@Ando Jurai 2017-05-30 11:43:48
@horatio1701d, _ is a placeholder meaning that you will not take the output into account. Here the output is the unpacking of iterrows() for each line, hence both the index and the content of the line. Hence you ignore the index and use the content of the line. The solution works by using a trick: the repeated content is actually passed as "data", and the second parameter is a list representing the index of the Series. I tried it out because I was a little puzzled too, but this is clever, while not so efficient computationally-wise.
@krassowski 2019-01-23 16:55:28
This approach seems to be very memory efficient, especially for big lists, see stackoverflow.com/a/54318064 for benchmarks. | https://tutel.me/c/programming/questions/12680754/split+explode+pandas+dataframe+string+entry+to+separate+rows | CC-MAIN-2019-13 | refinedweb | 2,150 | 69.52 |
Angular 9/8 Ajax Get and Post Requests Example
In this post, we'll create a simple example with Angular 9/8 and HttpClient that sends Ajax Get and Post requests to fetch and post data from/to a backend server.
The server can be either your own server or a third-party server.
In our case, we'll be using a third-party server.
We assume you already have a project ready and Angular CLI installed.
You can also simply use the online Stackblitz IDE if you just want to experiment with the code and don't want to set up a development environment for Angular yet!
What is Ajax?
Ajax stands for Asynchronous JavaScript and XML. It is used to request data from the server without a full-page refresh, and to use the result, which was originally XML, to re-render a part of the page.
Nowadays, Ajax refers to any asynchronous request sent to a server from a JavaScript. Mostly the response is JSON, or HTML fragments.
Ajax was the first step into building modern single page apps or SPAs.
Modern libraries and frameworks, like Angular, make building SPAs simpler.
Http GET and POST Requests?
The GET method of HTTP is used to retrieve a resource from a server, while POST is used to create or update data on the server.
Angular HttpClient?
HttpClient is a built-in service for sending HTTP requests in Angular. It's built on top of the XMLHttpRequest interface available in modern and legacy web browsers.
Importing HttpClientModule

HttpClientModule is the module that exports the HttpClient service, so you'll need to import it in your project.
Open the src/app/app.module.ts file and update it as follows:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, HttpClientModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
We simply need to import HttpClientModule from @angular/common/http and add it to the imports array of NgModule.
Generate an Angular Service
In your terminal, simply execute the following command from inside your project's folder:
$ ng generate service http
You'll get the files for your service with some basic code.
Go to the src/app/http.service.ts file and import HttpClient:
import { HttpClient } from '@angular/common/http';
Next, inject HttpClient using the constructor of your HTTP service:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class HttpService {

  constructor(private httpClient: HttpClient) { }
}
That's it! We are ready to send get and post requests in our app.
Sending an Ajax GET Request
Let's start with defining a service method for sending a get request to the server to fetch some data:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class HttpService {

  apiUrl = "/api/endpoint";

  constructor(private httpClient: HttpClient) { }

  sendGetRequest() {
    return this.httpClient.get(this.apiUrl);
  }
}
Let's suppose our endpoint returns the following data:
[
  { id: '1', name: 'Product 1' },
  { id: '2', name: 'Product 2' }
]
Sending an Ajax POST Request
Next, let's define a method for sending a post request:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class HttpService {

  apiUrl = "/api/endpoint";

  constructor(private httpClient: HttpClient) { }

  sendPostRequest(data: Object): Observable<Object> {
    return this.httpClient.post(this.apiUrl, data);
  }
}
You next need to inject HttpService in your component(s) and call the methods, but to actually send the requests to the server you need to subscribe to the RxJS observables returned from the defined methods. For example:
this.httpService.sendGetRequest().subscribe((responseBody) => {
  console.log(responseBody);
});
This will work provided that you have injected your service as httpService in your component's constructor.
After refactoring:
This site is a digital habitat of Sameer, a freelance web developer working from Pune.More
10 Responses
1
Wally
January 4th, 2009 at 11:58 am
this works and a bit shorter.
private function check_security_status() {
return ($this->_isHttps === false ||
$this->_isAdmin === false ||
$this->_totalLoginsPerDay > 100);
}
2
Matthew Weier O'Phinney
January 5th, 2009 at 9:49 am.
3
Sameer Borate’s Blog: Refactoring 1: Consolidating Conditional Expressions : Dragonfly Networks
January 6th, 2009 at 3:58 am
[...] has posted the first article in his “Refactoring” series today - a look at boiling down conditional expressions to [...]
4
Bugz
January 6th, 2009 at 4:45 am.
5
Les
January 6th, 2009 at 5:22 am.
6
lloyd27
January 6th, 2009 at 10:23 am..
7
Joseph Gutierrez
January 7th, 2009 at 12:16 pm
Where are your unit tests? Any refactorings require unit tests. If you want to make sure you don’t change the behavior (logic) of your conditionals you need some unit tests.
sameer
January 7th, 2009 at 9:15 pm
I understand the importance of unit tests as mentioned in my earlier post ‘Refactoring: An introduction for PHP programmers’ but have left it out on purpose.
9
Joseph Gutierrez
January 8th, 2009 at 8:56 am
“…The important part is to test your code after refactoring.”
In this quote from your initial post do you write tests first, then refactor? Or do you write code then tests?
sameer
January 8th, 2009 at 10:10 am
Always write tests first and then refactor. | http://www.codediesel.com/php/refactoring-1-consolidate-conditional-expression/ | crawl-002 | refinedweb | 258 | 64.61 |
We’re all familiar these days with the AJAX buzz word – but while it’s relatively simple to use these browser technologies to implement a specific feature in an individual page, it is much more complex to build an entire application according to the Ajax paradigm.
Ajax pushes a new approach to programming with a lot of work being done on the client and subsequently the need for a much better organization of the code on the client side. Presentation patterns such as MVC and MVVM are gaining momentum in the JavaScript space; Separation of Concerns (SoC) is the inspiring principle of hot new Unobtrusive JavaScript methodology. Meanwhile developer’s tools for packaging, logging, unit testing, intercepting, validating JavaScript code are flourishing—even frameworks for providing forms of software contracts. Ajax contrasts with some of the well-established frameworks for Web development—specifically ASP.NET Web Forms. Not so much because you can’t do good Ajax with Web Forms, but because you’re constrained to certain forms of Ajax programming.
Especially with the advent of ASP.NET, the world of Web programming has been commoditized. Frameworks offer a thick layer of abstraction over basic HTML and HTTP interaction. And all of it works on the assumption that the browser sends an HTML form to get back an HTML page. It is relatively easy to change the paradigm for a single feature in a single page. It may be quite hard to extend the paradigm to the whole application. Why? Because the world of Ajax programming has not been commoditized. Yet.
Today, you have a lot of great tools for Ajax programming, but we are still waiting for the killer framework that makes the Ajax paradigm available to everyone. ASP.NET MVC 3 is taking steps in the right direction and so are some commercial products. Beyond that, there’s a virtually endless list of tools and libraries you have to combine together to build your ideal recipe.
This article offers a brief overview of a few aspects you might want to take into account for building successful applications.
The First Law of Ajax Development
If I have to summarize in a short sentence the essence of Ajax development I’d say “Organize your JavaScript code”. The poor, neglected JavaScript language is emerging as a surprisingly powerful language, and more and more people are changing their sentiment about the it. Like it or not, Ajax programming will be JavaScript programming. Whether you use a rich dialect such as jQuery, or write your own library, you need programming discipline to nearly the same extent as you apply to your server development. With Ajax, you are writing a share of your presentation layer with JavaScript.
Scattered JavaScript code is a thing of the past. It may still work; you may still be able to take the project home, but that’s not the way to go. And nonetheless that’s not the way to show. Get some serious JavaScript training, get a great book, search the Web, experiment—you must become a serious JavaScript developer. You can’t keep on writing Web code if you don’t know what an immediate function is or what kind of peril you’re facing by blissfully ignoring the impact of hoisting in JavaScript functions.
You need to refresh your patterns of JavaScript development, pick up your tools for logging, debugging and, yes, unit testing. A painful lesson I learned myself is that polluting the global namespace can only damage the environment. Properly applied, the Module pattern will take you to emulate namespaces which would make it simple, clean and clear to manage global variables and functions. Sure, many libraries out there offer namespaces—the Microsoft Ajax Client library being the first in line. You don’t need libraries to apply the Module pattern and it will make it so easy and safe to track the relative root URL when you have a lot of JSON calls and an application that can be hosted on the root of the site as well as a virtual application.
Unobtrusive JavaScript
An interesting corollary to the first law of Ajax development is unobtrusive JavaScript. In its simplest form, unobtrusive JavaScript couldn’t be simpler: just take out of HTML elements any attributes that refer to script code. Stop using onclick attributes on buttons and onsubmit attributes on HTML forms. Hey, wait a moment—do you realize what this means? ASP.NET Web Forms has hard-coded onclicks and onsubmits in several places. ASP.NET partial rendering is built in fully intrusive way.
Recently released, ASP.NET MVC 3 makes a point of enabling unobtrusive JavaScript in its Ajax controllers. It couldn’t be simpler—just enable a Boolean flag and the framework will do the rest. Well done, but this is simply a way to stop things from getting worse.
Unobtrusive JavaScript is a wider and deeper concept. It aims at getting a neat separation between content (HTML document), presentation (CSS style sheets) and behavior (script code). Years of ASP.NET programming taught us to build the page visually in a smart WYSIWYG editor. We bent HTML tags to any sort of use that could serve our purposes. We used TABLE for any sort of misdeed whereas the tag was created (and should be used) only to render a table of records.
A page should come back to a plain sequence of document tags—headers, P, and the excellent new tags coming up in HTML 5 such as section, footer, article. The page should be able to present its content in a readable way as plain HTML. This is content separated from everything else. Next, you add CSS and can choose different CSS styles on, say, a per-device basis. Next up, you add JavaScript. Both CSS and JavaScript should be linked in a single place and should be easy to switch them on and off. As a software developer, you probably use everyday forms of dependency injection. How is CSS and JavaScript code different from services you inject into a HTML page?
The Second Law of Ajax Development
With Ajax, the client code gains the ability to bypass the browser and can handle the connection itself. Which HTTP endpoints should Ajax clients call? They can’t certainly be the same HTTP endpoints you would call via the browser’s address bar or page links. Here’s the second law of Ajax development: give your applications an Ajax specific service layer.
In software architecture, the Service Layer pattern is defined as “a layer of services that establishes a set of available operations and coordinates the application's response in each operation.” The definition is adapted from Martin Fowler’s excellent book “Patterns of Enterprise Application Architecture.” The service layer is the next stop on the way to the back end of an application after a user action.
In an Ajax context, the service layer is a collection of HTTP endpoints that Ajax clients will call. It contains URLs callable from clicks and taps of the end user. An Ajax request doesn’t likely need an entire HTML page; it probably wants just a bunch of data, presumably XML or JSON. You need to have these endpoints readymade in a well done Ajax application.
The Third Law of Ajax Development
Most of the time, these endpoints are under your direct control as they configure as the application layer of your system. Sometimes, however, the Ajax presentation layer needs to call into external, cross-domain endpoints. Cross-domain calls are not supported by browsers regardless of what the servers actually hosting the endpoints may think.
JSONP is an approach that today represents the best possible compromise between Ajax calls and cross-domain sites. The W3C has something in the works to make legal, authorized cross-domain calls seamless. The W3C released a working draft of a protocol known as Cross-Origin Resource Sharing (CORS) according to which clients place a request with a particular header and server reply with an ad hoc response header. Lacking the response header, browsers should refuse the response. This behavior is expected to be incorporated into the XMLHttpRequest object. To be precise, most browsers already implement this with the notable exception of Internet Explorer, including upcoming version 9. IE does have its own solution for cross-domain calls but it is based on a proprietary object. Waiting for CORS to become a standard, JSONP or JSON with Padding is today the most reliable and common way to implement cross-domain calls for sites that support the protocol.
In addition to location, HTTP endpoints for Ajax calls may also implement long-running tasks. Typically, you might want to be able to monitor and perhaps interrupt these tasks if they hang. You will learn on your own that no standard exist for this and not even frameworks. The reason is that the problem is open and solutions seem to be constrained to a project. However, if the endpoint is aware of being long-running and therefore monitorable and interruptible, something can be done with a bit of creativity and infrastructure. For example, you can identify each long-running Ajax task with a GUID and spawn a parallel timer that periodically checks the status of the task using the GUID. This requires you have a server-side persistent infrastructure where the running task saves its state that polling services can read.
For functions that are available only to authenticated users a cookie is created at the successful completion of the authentication process. The cookie has its own expiration and for the duration of it the user is allowed to access protected pages without having to retype credentials. If the cookie expires, on the next access the user will be automatically redirected to the login page and gently asked to reenter user name and password.
This pattern may get you problems in an Ajax scenario. Your page places an Ajax call and the server detects that the authentication cookie has expired. The server returns a HTTP 302 status which redirects the user to the login page. However, in an Ajax scenario it’s the XMLHttpRequest object—not the browser—that handles the request. XMLHttpRequest correctly handles the redirect and goes to the login page. Unfortunately, the original issuer of the Ajax call will get back the markup of the login page instead of the expected bunch of data. As a result, the login page will likely be inserted in any DOM location where the original response was expected.
There are a couple of approaches you can take to solve the problem. One consists in intercepting the authentication phase of the request and checking whether the ongoing request is an Ajax request. The other consists in adding code to the actual login page to detect the Ajax call and return only a compatible chunk of HTML.
Summary
Writing Ajax applications is not the picnic that the availability of powerful and rich libraries and framework may suggest. Writing Ajax applications is hard because we have tools but not common and consolidated strategies. Quite clearly, it’s the ASP.NET paradigm that is on a dead track and we are trying to develop practices that one day someone will incorporate into a new super framework. When this happens it will be the beginning of the era of commoditization for Ajax development. | https://www.developerfusion.com/article/94426/lessons-learned-writing-ajax-applications/ | CC-MAIN-2020-34 | refinedweb | 1,898 | 61.97 |
As you can tell from my recent articles, I‘ve been working a lot with Bootstrap. I've been learning how to extend Bootstrap with my own customizations. In this article, I'll show you how to create a custom product selection system like the one shown in Figure 1. To build this system, you'll use the Bootstrap panel classes, button groups, and glyphs. Along with these classes, you'll write a little jQuery to keep a running total of the selected products and make the glyphs change on the button groups. Finally, a little CSS is used to enhance the look of your product selection system.
Create a Product Box
To start, create a product box using the Bootstrap panel classes. Figure 2 shows an example of a single product box. The description and price of the product are contained in the panel-body class within an HTML div element. Each of these are in their own span element and some CSS is applied to these span elements.
To style the product description, the price, and the size of the panel body, you need the CSS shown in the following code snippet:
.desc { display: block; font-weight: bold; font-size: 1.25em; } .price { font-size: 1.25em; } .panel-body { min-height: 7em; }
In the div with the Panel-Footer class is something that looks like a button (Figure 2), but has a glyph and some words next to it. As you can see from the code in Listing 1, you build this using the Bootstrap class btn-group and a data- attribute called data-toggle=“buttons”. This class gives you the appearance of a button, but you place a check box inside it to give the ability to toggle the button between a checked and an unchecked state. This toggling makes it ideal for showing a user those products they have selected for purchase (Figure 3) and those they haven't (Figure 2).
The nice thing about check boxes expressed this way is that you can tell that the label is a part of the check box. These buttons are also much easier to press on a mobile device than a normal check box. With just a little bit of jQuery, you can toggle the color and the glyph on these buttons.
When the user clicks on the button/check box to select a product, you need to do three things: change the glyph used, change the color of the button, and change the words on the button. The HTML to express the selected state of the button is shown in the following code snippet.
<div class="btn-group" data- <label class="btn btn-primary active"> <span class="glyphicon glyphicon-ok-circle"> </span> <input id="chk1" type="checkbox" name="products" checked="checked" /> <label for="chk1">Selected</label> </label> </div>
You add the Active class to the label element, change the glyph to glyphicon-ok-circle, add checked=“checked” to the check box, and change the words in the label to “Selected”. After setting all of these, your button will look like the one shown in Figure 3.
You need to change each button dynamically in response to a user clicking on the button. For that, you'll need to use jQuery, as discussed in the next section. By the way, feel free to use whatever icons you want. Font Awesome () has a lot of different CSS glyphs that you can use instead of the limited options in Bootstrap.
Change Glyphs Dynamically Using jQuery
Bootstrap takes care of toggling the Active class on the label elements for you because of the btn-group class and the data-toggle=“button” attribute. However, you need to write jQuery to change the glyphs, the color, and the text on the buttons. Listing 2 shows the jQuery you need to write in order to connect to the change event of the check boxes, determine whether the check box is checked or unchecked, and add or remove CSS classes as appropriate.
The jQuery in Listing 2 is very straight-forward. When the user clicks on the check box, you determine if the check box is checked or unchecked. Based on this setting, you either add or remove the appropriate classes from the span element that is just prior to the check box control. You modify the btn- class on the label element that is the parent element to the check box. Finally, you change the text on the next element, which is the label after the check box.
Make the jQuery Generic
Although the code shown in Listing 2 works, it leaves a little to be desired. First off, if you wish to change the glyph you're using, you have to change it in a couple of places. The same goes for the class you use for the btn- classes. A nicer approach would be to create a JavaScript object into which you can define the properties for each item that you wish to replace. The following code snippet is a JavaScript object definition that you might create.
var checkOptions = { id: "", checkedGlyph: "glyphicon-ok-circle", uncheckedGlyph: "glyphicon-unchecked", checkedBtnClass: "btn-success", uncheckedBtnClass: "btn-primary", checkedText: "Selected", uncheckedText: "Not Selected" };
Once you have this data setup, let's break out the lines of code in the if statement and the else statement in Listing 2 into two separate functions that'll use this JavaScript object. The first function is called setChecked() and is used to assign the appropriate values when the user changes the check box to a checked state.
function setChecked(ctl) { $(ctl).prev() .removeClass(checkOptions.uncheckedGlyph) .addClass(checkOptions.checkedGlyph); $(ctl).parent() .removeClass(checkOptions.uncheckedBtnClass) .addClass(checkOptions.checkedBtnClass); $($(ctl).next()).text(checkOptions.checkedText); }
The second function is called setUnchecked() and is used to assign the appropriate values when the user changes the check box to an unchecked state. You pass in the check box that the user clicked on to each of these functions.
function setUnchecked(ctl) { $(ctl).prev() .removeClass(checkOptions.checkedGlyph) .addClass(checkOptions.uncheckedGlyph); $(ctl).parent() .removeClass(checkOptions.checkedBtnClass) .addClass(checkOptions.uncheckedBtnClass); $($(ctl).next()).text(checkOptions.uncheckedText); }
You now rewrite the code within the $(document).ready() function to call the setChecked() and setUnchecked() functions, as shown in Listing 3. There are a couple more changes to make within this new jQuery code. Instead of using a simple jQuery selector to search for input[type=‘checkbox’], you should make the selector a little more targeted in case you have other check boxes on your page that aren't product select buttons. Fill in the checkOptions.id property with the identity of a div element where all of your product boxes are located. For example, if you wrap a div around the HTML that contains all your product panels, and give that div an ID of “products”, then you modify the checkOptions.id to be “#products”. To further qualify the group of check boxes to connect to, you target those contained within the .btn-group class. These two options together ensure that this code is only run against those check boxes that are your product selection buttons.
One last item in Listing 3 is detecting those check boxes that are set to checked in the HTML when the page is loaded and making sure the appropriate glyphs, the text, and the button's color are set. This is accomplished using the jQuery selector $(checkOptions.id + " .btn-group input:checked") to retrieve a list of all checked check boxes. Call the setChecked() function passing in this list of check boxes so the appropriate attributes can be set.
Calculate Total
Each time a user selects or un-selects a product, you'll want to update the total amount of selected products in the upper-right hand corner of the page (Figure 1). First, get the total amount by selecting the text within the text box with the id=“total”. The price of the product has currency symbols that need to be stripped so you only have the numeric value in order to add or subtract from the total. Second, determine whether or not the product was checked or unchecked. Based on this, you'll either add or subtract the price of the product from the total displayed. Third, format the new total as a currency value and redisplay it back in the total area in the upper right-hand corner of the page. All of this logic is in the calculateTotal() function shown in Listing 4.
When the page is loaded, you need to do two things, as shown in Listing 5. First, connect to the change event of the check boxes in order to run the calculateTotal() function to add or subtract from the total. Second, calculate the total of any pre-checked items by calling the calculateTotal() function for each check box that has checked=“checked” set when the page loads.
Additional Numeric Functions
There are two functions called by the calculateTotal() function: stripCurrency(), which is shown in Listing 6, and formatCurrency(), shown in Listing 7.
The Strip Currency Function
The stripCurrency function has two optional parameters: symbol and separator. If you don't pass them, they default to the US currency symbol ($) and a comma (,) respectively. Use the replace function on the string to remove the symbol, the separator, and spaces from the value passed in. This leaves you with a numeric value that you can then use to calculate a total.
Format Currency Function
The formatCurrency() function has four optional parameters: decimal, decpoint, separator, and symbol. Similar to the stripCurrency() function, these parameters default to US currency values: two places after the decimal, period, dollar sign, and comma respectively. Use the toFixed() and split() functions to break apart the pieces so you can add the separator (a comma), and add the currency symbol to the front of the numeric value.
Summary
In this article, you learned to use the Bootstrap panel class, a little bit of jQuery, and some CSS to create a product selection system. Use these product selection panels when you need a small set of items from which a user can select or unselect. In addition, you learned how to work with some numeric formatting functions, and to calculate totals from the selected items. As you can see, using Bootstrap for some functionality and adding just a little bit of your own jQuery can make creating new components very simple.
Listing 1: A simple panel to contain product information
<div class="panel panel-warning"> <div class="panel-body"> <span class="desc">Introduction to CSS/CSS 3</span> <span class="price">$10.00</span> </div> <div class="panel-footer text-center"> <div class="btn-group" data- <label class="btn btn-primary"> <span class="glyphicon glyphicon-unchecked"></span> <input id="chk1" type="checkbox" name="products" /> <label for="chk1">Not Selected</label> </label> </div> </div> </div>
Listing 2: Simple, hard-coded jQuery to toggle the buttons
<script> $(document).ready(function () { $("input[type='checkbox']").change(function () { if ($(this).prop('checked')) { $(this).prev().removeClass('glyphicon-unchecked') .addClass('glyphicon-ok-circle'); $(this).parent().removeClass('btn-primary') .addClass('btn-success'); $(this).next().text("Selected"); } else { $(this).prev().removeClass('glyphicon-ok-circle') .addClass('glyphicon-unchecked'); $(this).parent().removeClass('btn-success') .addClass('btn-primary'); $(this).next().text("Not Selected"); } }); }); </script>
Listing 3: Call the two new methods to set checked and unchecked states.
$(document).ready(function () { // Connect to 'change' event in order to toggle glyphs $(checkOptions.id + " .btn-group input[type='checkbox']") .change(function () { if ($(this).prop("checked")) { setChecked($(this)); } else { setUnchecked($(this)); } }); // Detect checkboxes that are checked and toggle glyphs var checked = $(checkOptions.id + " .btn-group input:checked"); setChecked(checked); });
Listing 4: Calculate the total each time the user selects or un-selects a product.
function calculateTotal(ctl) { // Get the total amount var total = $("#total").text(); // Strip currency symbols and thousands separator total = stripCurrency(total); // Get the price from within this panel var price = $(ctl).closest(".panel").find(".price").text(); // Strip currency symbols and thousands separator price = stripCurrency(price); if ($(ctl).prop("checked")) { // Add to total total = parseFloat(total) + parseFloat(price); } else { // Subtract from total total = parseFloat(total) - parseFloat(price); } // Format the total and place into HTML $("#total").text(formatCurrency(total)); }
Listing 5: When the page is loaded, calculate the total of any products selected.
$(document).ready(function () { // Connect to 'change' event to get price data $(checkOptions.id + ".btn-group input[type='checkbox']") .change(function () { calculateTotal($(this)); }); // Get checkboxes that are checked var checked = $(checkOptions.id + ".btn-group input:checked"); // Add all 'checked' values to get total for (var i = 0; i < checked.length; i++) { calculateTotal($(checked[i])); } });
Listing 6: This function strips any non-numeric characters from a string.
function stripCurrency(value, symbol, separator) { symbol = (typeof symbol == 'undefined' ? '$' : symbol); separator = (typeof separator == 'undefined' ? ',' : separator); value = value.replace(symbol, "") .replace(separator, "") .replace(" ", ""); return value; }
Listing 7: This function formats a decimal value in any currency format.
function formatCurrency(value, decimals, decpoint, symbol, separator) { decimals = (typeof decimals == 'undefined' ? 2 : decimals); decpoint = (typeof decpoint == 'undefined' ? '.' : decpoint); symbol = (typeof symbol == 'undefined' ? '$' : symbol); separator = (typeof separator == 'undefined' ? ',' : separator); var parts = value.toFixed(decimals) .toString() .split(decpoint); parts[0] = parts[0] .replace(/\B(?=(\d{3})+(?!\d))/g, separator); return (symbol + parts.join(decpoint)).toLocaleString(); } | https://www.codemag.com/Article/1505041/Extending-Bootstrap-A-Product-Selection-System | CC-MAIN-2020-40 | refinedweb | 2,192 | 53.71 |
Suppose there is a binary tree. We will run a preorder depth first search on the root of a binary tree.
At each node in this traversal, the output will be D number of dashes (Here D is the depth of this node), after that we display the value of this node. As we know if the depth of a node is D, the depth of its immediate child is D+1 and the depth of the root node is 0.
Another thing we have to keep in mind that if a node has only one child, that child is guaranteed to be the left child. So, if the output S of this traversal is given, then recover the tree and return its root.
So, if the input is like "1-2--3--4-5--6--7", then the output will be
To solve this, we will follow these steps −
Define one stack st
i := 0, n := size of S
lvl := 0, num := 0
while i < n, do −
for initialize lvl := 0, when S[i] is same as '-', update (increase lvl by 1), (increase i by 1), do −
do nothing
num := 0
while (i < n and S[i] is not equal to '-'), do −
num := num * 10 + (S[i] - '0')
(increase i by 1)
while size of st > lvl, do −
delete element from st
temp = create a new Tree Node with num value
if not st is empty and not left of top element of st is null, then −
left of top element of st := temp
otherwise when not st is empty, then −
right of top element of st := temp
insert temp into st
while size of st > 1, do −
delete element from st
return (if st is empty, then NULL, otherwise top element of st)
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h> using namespace std; class TreeNode{ public: int val; TreeNode *left, *right; TreeNode(int data){ val = data; left = NULL; right = NULL; } }; void inord(TreeNode *root){ if(root != NULL){ inord(root->left); cout << root->val << " "; inord(root->right); } } class Solution { public: TreeNode* recoverFromPreorder(string S) { stack<TreeNode*> st; int i = 0; int n = S.size(); int lvl = 0; int num = 0; while (i < n) { for (lvl = 0; S[i] == '-'; lvl++, i++) ; num = 0; while (i < n && S[i] != '-') { num = num * 10 + (S[i] - '0'); i++; } while (st.size() > lvl) st.pop(); TreeNode* temp = new TreeNode(num); if (!st.empty() && !st.top()->left) { st.top()->left = temp; } else if (!st.empty()) { st.top()->right = temp; } st.push(temp); } while (st.size() > 1) st.pop(); return st.empty() ? NULL : st.top(); } }; main(){ Solution ob; TreeNode *root = ob.recoverFromPreorder("1-2--3--4-5--6--7"); inord(root); }
"1-2--3--4-5--6--7"
3 2 4 1 6 5 7 | https://www.tutorialspoint.com/recover-a-tree-from-preorder-traversal-in-cplusplus | CC-MAIN-2022-21 | refinedweb | 466 | 76.15 |
Java String Exercises: Get a substring of a given string between two specified positions
Java String: Exercise-27 with Solution
Write a Java program to get a substring of a given string between two specified positions.
Pictorial Presentation:
Sample Solution:
Java Code:
public class Exercise27 { public static void main(String[] args) { String str = "The quick brown fox jumps over the lazy dog."; // Get a substring of the above string starting from // index 10 and ending at index 26. String new_str = str.substring(10, 26); // Display the two strings for comparison. System.out.println("old = " + str); System.out.println("new = " + new_str); } }
Sample Output:
old = The quick brown fox jumps over the lazy dog. new = brown fox jumps
Flowchart:
Java Code Editor:
Improve this sample solution and post your code through Disqus
Previous: Write a Java program to check whether a given string starts with the contents of another string.
Next: Write a Java program to create a character array containing the contents of a string.
What is the difficulty level of this exercise?
New Content: Composer: Dependency manager for PHP, R Programming | https://www.w3resource.com/java-exercises/string/java-string-exercise-27.php | CC-MAIN-2019-18 | refinedweb | 181 | 55.74 |
8.11. Bidirectional Recurrent Neural Networks
So far we have assumed that our goal is to model the next word given what we have seen so far, e.g. in the context of a time series or of a language model. While this is a typical scenario, it is not the only one we might encounter. To illustrate the issue, consider the following three fill-in-the-blank tasks:
In [1]:
# I am _____ # I am _____ very hungry. # I am _____ very hungry, I could eat half a pig.
Depending on the amount of information available, we might fill in the blanks with very different words such as ‘happy’, ‘not’, and ‘very’. Clearly the end of the phrase (if available) conveys significant information about which word to pick. A sequence model that is incapable of taking advantage of this will perform poorly on related tasks. For instance, longer-range context is equally vital for named entity recognition (e.g. to recognize whether Green refers to Mr. Green or to the color). To get some inspiration for addressing the problem, let's take a detour to graphical models.
8.11.1. Dynamic Programming
This section serves to illustrate the problem. The specific technical details do not matter for understanding the deep learning counterpart, but they help motivate why one might use deep learning and why one might pick specific architectures. These insights will come in handy later in the context of natural language processing.
If we want to solve the problem using graphical models we could for instance design a latent variable model as follows: we assume that there exists some latent variable \(h_t\) which governs the emissions \(x_t\) that we observe via \(p(x_t|h_t)\). Moreover, the transitions \(h_t \to h_{t+1}\) are given by some state transition probability \(p(h_t|h_{t-1})\). The graphical model then looks as follows:
Fig. 8.15 Hidden Markov Model.
For a sequence of \(T\) observations we have thus the following joint probability distribution over observed and hidden states:

\[p(x_1, \ldots, x_T, h_1, \ldots, h_T) = p(h_1)\, p(x_1 \mid h_1) \prod_{t=2}^{T} p(h_t \mid h_{t-1})\, p(x_t \mid h_t).\]
Now assume that we observe all \(x_i\) with the exception of some \(x_j\) and it is our goal to compute \(p(x_j|x^{-j})\). To accomplish this we need to sum over all possible choices of \(h = (h_1, \ldots, h_T)\). In case \(h_i\) can take on \(k\) distinct values this means that we need to sum over \(k^T\) terms - mission impossible! Fortunately there's an elegant solution for this: dynamic programming. To see how it works consider summing over the first two hidden variables \(h_1\) and \(h_2\). This yields:

\[\pi_2(h_2) = \sum_{h_1} p(h_1)\, p(x_1 \mid h_1)\, p(h_2 \mid h_1) \quad \text{and} \quad \pi_3(h_3) = \sum_{h_2} \pi_2(h_2)\, p(x_2 \mid h_2)\, p(h_3 \mid h_2).\]
In general we have the forward recursion

\[\pi_{t+1}(h_{t+1}) = \sum_{h_t} \pi_t(h_t)\, p(x_t \mid h_t)\, p(h_{t+1} \mid h_t).\]
The recursion is initialized as \(\pi_1(h_1) = p(h_1)\). In abstract terms this can be written as \(\pi_{t+1} = f(\pi_t, x_t)\), where \(f\) is some learned function. This looks very much like the update equation in the hidden variable models we discussed so far in the context of RNNs. Entirely analogously to the forward recursion we can also start a backwards recursion. This yields:
We can thus write the backward recursion as

\[\rho_{t-1}(h_{t-1}) = \sum_{h_t} p(h_t \mid h_{t-1})\, p(x_t \mid h_t)\, \rho_t(h_t),\]
with initialization \(\rho_T(h_T) = 1\). These two recursions allow us to sum over \(T\) variables in \(O(kT)\) (linear) time over all values of \((h_1, \ldots, h_T)\) rather than in exponential time. This is one of the great benefits of probabilistic inference with graphical models. It is a very special instance of the Generalized Distributive Law proposed in 2000 by Aji and McEliece. Combining both forward and backward passes we are able to compute

\[p(x_j \mid x^{-j}) \propto \sum_{h_j} \pi_j(h_j)\, \rho_j(h_j)\, p(x_j \mid h_j).\]
Note that in abstract terms the backward recursion can be written as \(\rho_{t-1} = g(\rho_t, x_t)\), where \(g\) is some learned function. Again, this looks very much like an update equation, just running backwards unlike what we’ve seen so far in RNNs. And, indeed, HMMs benefit from knowing future data when it is available. Signal processing scientists distinguish between the two cases of knowing and not knowing future observations as filtering vs. smoothing. See e.g. the introductory chapter of the book by Doucet, de Freitas and Gordon, 2001 on Sequential Monte Carlo algorithms for more detail.
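To make the two recursions concrete, here is a minimal numpy sketch of forward-backward smoothing for a discrete HMM. It is a generic illustration rather than code from this chapter, and it uses the standard \(\alpha\)/\(\beta\) messages, which fold the current emission into the forward variable (slightly different bookkeeping from \(\pi_t\) above):

```python
import numpy as np

def forward_backward(pi0, A, B, obs):
    """Smoothed posteriors p(h_t | x_1..x_T) for a discrete HMM.

    pi0 : (k,)   initial distribution p(h_1)
    A   : (k, k) transitions, A[i, j] = p(h_{t+1}=j | h_t=i)
    B   : (k, m) emissions,   B[i, o] = p(x_t=o | h_t=i)
    obs : length-T list of observed symbol indices
    """
    T, k = len(obs), len(pi0)
    alpha = np.zeros((T, k))               # forward messages
    beta = np.ones((T, k))                 # backward messages, rho_T = 1
    alpha[0] = pi0 * B[:, obs[0]]
    for t in range(1, T):                  # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):         # backward recursion
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    post = alpha * beta                    # combine both passes
    return post / post.sum(axis=1, keepdims=True)

pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])    # state 0 mostly emits symbol 0
posteriors = forward_backward(pi0, A, B, [0, 0, 1])
```

Each sweep costs time linear in \(T\), instead of the \(k^T\) terms of the naive sum.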
8.11.2. Bidirectional Model
If we want to have a mechanism in RNNs that offers comparable look-ahead ability as in HMMs we need to modify the recurrent net design we’ve seen so far. Fortunately this is easy (conceptually). Instead of running an RNN only in forward mode starting from the first symbol we start another one from the last symbol running back to front. Bidirectional recurrent neural networks add a hidden layer that passes information in a backward direction to more flexibly process such information. The figure below illustrates the architecture of a bidirectional recurrent neural network with a single hidden layer.
Fig. 8.16 Architecture of a bidirectional recurrent neural network.
In fact, this is not too dissimilar to the forward and backward recurrences we encountered above. The main distinction is that in the previous case these equations had a specific statistical meaning. Now they're devoid of such easily accessible interpretation and we can just treat them as generic functions. This transition epitomizes many of the principles guiding the design of modern deep networks - use the type of functional dependencies common to classical statistical models and use them in a generic form.
8.11.2.1. Definition
Bidirectional RNNs were introduced by Schuster and Paliwal, 1997. For a detailed discussion of the various architectures see also the paper by Graves and Schmidhuber, 2005. Let's look at the specifics of such a network. For a given time step \(t\), the mini-batch input is \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs: \(d\)) and the hidden layer activation function is \(\phi\). In the bidirectional architecture, we assume that the forward and backward hidden states for this time step are \(\overrightarrow{\mathbf{H}}_t \in \mathbb{R}^{n \times h}\) and \(\overleftarrow{\mathbf{H}}_t \in \mathbb{R}^{n \times h}\) respectively, where \(h\) indicates the number of hidden units. We compute the forward and backward hidden state updates as follows:

\[\overrightarrow{\mathbf{H}}_t = \phi(\mathbf{X}_t \mathbf{W}_{xh}^{(f)} + \overrightarrow{\mathbf{H}}_{t-1} \mathbf{W}_{hh}^{(f)} + \mathbf{b}_h^{(f)}), \qquad \overleftarrow{\mathbf{H}}_t = \phi(\mathbf{X}_t \mathbf{W}_{xh}^{(b)} + \overleftarrow{\mathbf{H}}_{t+1} \mathbf{W}_{hh}^{(b)} + \mathbf{b}_h^{(b)}).\]
Here, the weight parameters \(\mathbf{W}_{xh}^{(f)} \in \mathbb{R}^{d \times h}\), \(\mathbf{W}_{hh}^{(f)} \in \mathbb{R}^{h \times h}\), \(\mathbf{W}_{xh}^{(b)} \in \mathbb{R}^{d \times h}\), and \(\mathbf{W}_{hh}^{(b)} \in \mathbb{R}^{h \times h}\), together with the bias parameters \(\mathbf{b}_h^{(f)} \in \mathbb{R}^{1 \times h}\) and \(\mathbf{b}_h^{(b)} \in \mathbb{R}^{1 \times h}\), are all model parameters.
Then we concatenate the forward and backward hidden states \(\overrightarrow{\mathbf{H}}_t\) and \(\overleftarrow{\mathbf{H}}_t\) to obtain the hidden state \(\mathbf{H}_t \in \mathbb{R}^{n \times 2h}\) and input it to the output layer. In deep bidirectional RNNs the information is passed on as input to the next bidirectional layer. Lastly, the output layer computes the output \(\mathbf{O}_t \in \mathbb{R}^{n \times q}\) (number of outputs: \(q\)):

\[\mathbf{O}_t = \mathbf{H}_t \mathbf{W}_{hq} + \mathbf{b}_q.\]
Here, the weight parameter \(\mathbf{W}_{hq} \in \mathbb{R}^{2h \times q}\) and bias parameter \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\) are the model parameters of the output layer. The two directions can have different numbers of hidden units.
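To check the shapes, the two directional updates and the concatenation can be written out directly in numpy. This is a hand-rolled sketch with arbitrary parameter values, not the book's library code:

```python
import numpy as np

def birnn_forward(X, Wxf, Whf, bf, Wxb, Whb, bb):
    # X: (T, n, d) input sequence; returns (T, n, 2h) concatenated states.
    T, n, _ = X.shape
    h = Whf.shape[0]
    H_fwd = np.zeros((T, n, h))
    H_bwd = np.zeros((T, n, h))
    state = np.zeros((n, h))
    for t in range(T):                       # left-to-right pass
        state = np.tanh(X[t] @ Wxf + state @ Whf + bf)
        H_fwd[t] = state
    state = np.zeros((n, h))
    for t in reversed(range(T)):             # right-to-left pass
        state = np.tanh(X[t] @ Wxb + state @ Whb + bb)
        H_bwd[t] = state
    return np.concatenate([H_fwd, H_bwd], axis=-1)

rng = np.random.RandomState(0)
T, n, d, h = 5, 2, 3, 4
X = rng.randn(T, n, d)
Wxf, Whf = 0.5 * rng.randn(d, h), 0.5 * rng.randn(h, h)
Wxb, Whb = 0.5 * rng.randn(d, h), 0.5 * rng.randn(h, h)
bf, bb = np.zeros(h), np.zeros(h)
H = birnn_forward(X, Wxf, Whf, bf, Wxb, Whb, bb)   # shape (5, 2, 8)
```

Note that the backward half of \(\mathbf{H}_1\) depends on the entire future of the sequence, while the forward half at that step sees only the first input.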
8.11.2.2. Computational Cost and Applications
One of the key features of a bidirectional RNN is that information from both ends of the sequence is used to estimate the output. That is, we use information from future and past observations to predict the current one (a smoothing scenario). In the case of language models this isn’t quite what we want. After all, we don’t have the luxury of knowing the next to next symbol when predicting the next one. Hence, if we were to use a bidirectional RNN naively we wouldn’t get very good accuracy: during training we have past and future data to estimate the present. During test time we only have past data and thus poor accuracy (we will illustrate this in an experiment below).
To add insult to injury bidirectional RNNs are also exceedingly slow. The main reason for this is that they require both a forward and a backward pass and that the backward pass is dependent on the outcomes of the forward pass. Hence gradients will have a very long dependency chain.
In practice bidirectional layers are used very sparingly and only for a narrow set of applications, such as filling in missing words, annotating tokens (e.g. for named entity recognition), or encoding sequences wholesale as a step in a sequence processing pipeline (e.g. for machine translation). In short, handle with care!
8.11.2.3. Training a BLSTM for the Wrong Application
If we were to ignore all advice regarding the fact that bidirectional LSTMs use past and future data and simply apply it to language models we will get estimates with acceptable perplexity. Nonetheless the ability of the model to predict future symbols is severely compromised as the example below illustrates. Despite reasonable perplexity numbers it only generates gibberish even after many iterations. We include the code below as a cautionary example against using them in the wrong context.
import sys
sys.path.insert(0, '..')

import d2l
from mxnet import nd
from mxnet.gluon import rnn

corpus_indices, vocab = d2l.load_data_time_machine()

num_inputs, num_hiddens, num_layers, num_outputs = len(vocab), 256, 2, len(vocab)
ctx = d2l.try_gpu()
num_epochs, num_steps, batch_size, lr, clipping_theta = 500, 35, 32, 1, 1
prefixes = ['traveller', 'time traveller']

lstm_layer = rnn.LSTM(hidden_size=num_hiddens, num_layers=num_layers,
                      bidirectional=True)

perplexity 19.273547, time 11.44 sec
epoch 250, perplexity 1.304388, time 11.86 sec
epoch 375, perplexity 1.182964, time 11.52 sec
epoch 500, perplexity 1.166502, time 11.53 sec
The output is clearly unsatisfactory for the reasons described above. For a discussion of more effective uses of bidirectional models see e.g. the later section on Sentiment Classification.
8.11.3. Summary
- In bidirectional recurrent neural networks, the hidden state for each time step is simultaneously determined by the data before and after the current time step.
- Bidirectional RNNs bear a striking resemblance to the forward-backward algorithm in graphical models.
- Bidirectional RNNs are mostly useful for sequence embedding and the estimation of observations given bidirectional context.
- Bidirectional RNNs are very costly to train due to long gradient chains.
8.11.4. Exercises
- If the different directions use a different number of hidden units, how will the shape of \(\mathbf{H}_t\) change?
- Design a bidirectional recurrent neural network with multiple hidden layers.
- Implement a sequence classification algorithm using bidirectional RNNs. Hint - use the RNN to embed each word and then aggregate (average) all embedded outputs before sending the output into an MLP for classification. For instance, if we have \((\mathbf{o}_1, \mathbf{o}_2, \mathbf{o}_3)\) we compute \(\bar{\mathbf{o}} = \frac{1}{3} \sum_i \mathbf{o}_i\) first and then use the latter for sentiment classification.
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
A simple character-level RNN to generate new bits of text based on text from a novel.
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch

import torch
import torch.nn.functional as F
from torchtext import data
from torchtext import datasets
import time
import random
import unidecode
import string
import re

torch.backends.cudnn.deterministic = True
Sebastian Raschka CPython 3.7.1 IPython 7.4.0 torch 1.0.1.post2
RANDOM_SEED = 123
torch.manual_seed(RANDOM_SEED)

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

TEXT_PORTION_SIZE = 200

NUM_ITER = 20000
LEARNING_RATE = 0.005
EMBEDDING_DIM = 100
HIDDEN_DIM = 100
NUM_HIDDEN = 1
!wget
--2019-04-26 04:03:36--
Resolving ()... 152.19.134.47, 2610:28:3090:3000:0:bad:cafe:47
Connecting to ()|152.19.134.47|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 804335 (785K) [text/plain]
Saving to: ‘98-0.txt.11’

98-0.txt.11   100%[===================>] 785.48K  1.68MB/s  in 0.5s

2019-04-26 04:03:36 (1.68 MB/s) - ‘98-0.txt.11’ saved [804335/804335]
Convert all characters into ASCII characters provided by string.printable:
string.printable
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>[email protected][\\]^_`{|}~ \t\n\r\x0b\x0c'
with open('./98-0.txt', 'r') as f:
    textfile = f.read()

# convert special characters
textfile = unidecode.unidecode(textfile)

# strip extra whitespaces
textfile = re.sub(' +',' ', textfile)

TEXT_LENGTH = len(textfile)
print(f'Number of characters in text: {TEXT_LENGTH}')
Number of characters in text: 776058
Divide the text into smaller portions:
random.seed(RANDOM_SEED)

def random_portion(textfile):
    start_index = random.randint(0, TEXT_LENGTH - TEXT_PORTION_SIZE)
    end_index = start_index + TEXT_PORTION_SIZE + 1
    return textfile[start_index:end_index]

print(random_portion(textfile))
left his saw sticking in the firewood he was cutting, set it in motion again; the women who had left on a door-step the little pot of hot ashes, at which she had been trying to soften the pain in her
Define a function to convert characters into tensors of integers (type long):
def char_to_tensor(text):
    lst = [string.printable.index(c) for c in text]
    tensor = torch.tensor(lst).long()
    return tensor

print(char_to_tensor('abcDEF'))
tensor([10, 11, 12, 39, 40, 41])
Putting it together to make a function that draws random batches for training:
def draw_random_sample(textfile):
    text_long = char_to_tensor(random_portion(textfile))
    inputs = text_long[:-1]
    targets = text_long[1:]
    return inputs, targets
draw_random_sample(textfile)
(tensor([94,]), tensor(, 10]))
class RNN(torch.nn.Module):
    def __init__(self, input_size, embed_size, hidden_size, output_size, num_layers):
        super(RNN, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        # embed to embed_size, the input size the GRU below declares
        # (the original cell used hidden_size here, which only works
        # because EMBEDDING_DIM == HIDDEN_DIM in this notebook)
        self.embed = torch.nn.Embedding(input_size, embed_size)
        self.gru = torch.nn.GRU(input_size=embed_size,
                                hidden_size=hidden_size,
                                num_layers=num_layers)
        self.fc = torch.nn.Linear(hidden_size, output_size)
        self.init_hidden = torch.nn.Parameter(torch.zeros(
            num_layers, 1, hidden_size))

    def forward(self, features, hidden):
        embedded = self.embed(features.view(1, -1))
        output, hidden = self.gru(embedded.view(1, 1, -1), hidden)
        output = self.fc(output.view(1, -1))
        return output, hidden

    def init_zero_state(self):
        init_hidden = torch.zeros(self.num_layers, 1, self.hidden_size).to(DEVICE)
        return init_hidden
torch.manual_seed(RANDOM_SEED)
model = RNN(len(string.printable), EMBEDDING_DIM, HIDDEN_DIM,
            len(string.printable), NUM_HIDDEN)
model = model.to(DEVICE)

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
def evaluate(model, prime_str='A', predict_len=100, temperature=0.8):
    ## based on
    ## blob/master/char-rnn-generation/char-rnn-generation.ipynb
    hidden = model.init_zero_state()
    prime_input = char_to_tensor(prime_str)
    predicted = prime_str

    # Use priming string to "build up" hidden state
    for p in range(len(prime_str) - 1):
        _, hidden = model(prime_input[p].to(DEVICE), hidden.to(DEVICE))
    inp = prime_input[-1]

    for p in range(predict_len):
        output, hidden = model(inp.to(DEVICE), hidden.to(DEVICE))

        # Sample from the network as a multinomial distribution
        output_dist = output.data.view(-1).div(temperature).exp()
        top_i = torch.multinomial(output_dist, 1)[0]

        # Add predicted character to string and use as next input
        predicted_char = string.printable[top_i]
        predicted += predicted_char
        inp = char_to_tensor(predicted_char)

    return predicted
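The temperature logic inside evaluate is easy to isolate: dividing the logits by a temperature before exponentiating reshapes the sampling distribution. A standalone numpy sketch (not part of the original notebook):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Divide logits by the temperature before normalizing:
    # T < 1 sharpens the distribution, T > 1 flattens it toward uniform.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
hot = [sample_with_temperature(logits, 10.0, rng) for _ in range(1000)]
# Low temperature almost always picks the arg-max index 0;
# high temperature spreads samples across all indices.
```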
start_time = time.time()
for iteration in range(NUM_ITER):

    ### FORWARD AND BACK PROP
    hidden = model.init_zero_state()
    optimizer.zero_grad()
    loss = 0.
    inputs, targets = draw_random_sample(textfile)
    inputs, targets = inputs.to(DEVICE), targets.to(DEVICE)
    for c in range(TEXT_PORTION_SIZE):
        outputs, hidden = model(inputs[c], hidden)
        loss += F.cross_entropy(outputs, targets[c].view(1))

    loss /= TEXT_PORTION_SIZE
    loss.backward()

    ### UPDATE MODEL PARAMETERS
    optimizer.step()

    ### LOGGING
    with torch.set_grad_enabled(False):
        if iteration % 1000 == 0:
            print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')
            print(f'Iteration {iteration} | Loss {loss.item():.2f}\n\n')
            print(evaluate(model, 'Th', 200), '\n')
            print(50*'=')
Time elapsed: 0.00 min Iteration 0 | Loss 4.58 Th4izvh?=lw2ZaCV_}xEt5y.gA^+r[email protected]<$.1KRQe/c\ {a5A55Dun}_*czf.o6Hmy$l"[email protected]fi{7rjKsvnEMJ mr`PaKygiE+VSbR#RF|SC^g^CZK,aenDc)t.O_ D^(M]1w'^Wd_HDws\>_2)iavp?*c-npOvoQE>i L ================================================== Time elapsed: 2.63 min Iteration 1000 | Loss 1.81 Th Prost into he forn a wock, abrould with his lother the star a caide with the Jue turnd face. Breaknay when and and of or, street were work have the long is on the proseing bove wabres. Throk a mean h ================================================== Time elapsed: 5.29 min Iteration 2000 | Loss 1.72 Ther face. And civery ire head shook the lange's was note my booked she cray. The grance for that the with Lerary swere were, and for young to-the wank the tanger brother whas at a for the requestone-st ================================================== Time elapsed: 7.91 min Iteration 3000 | Loss 1.73 Thou my menal known a purntatieful a might Frent fargefuch by sour that reforned after as as a mists and the countice of the Founk "I among him your for the you glason in?" "I constrance yhabuing a ================================================== Time elapsed: 10.55 min Iteration 4000 | Loss 1.77 The seeantelition pricomer; I have had the passess bestious had be patriender one up thow, such the even the line and that ins show was somen of his openey, but fine had a raghter? "I! And at a sifulra ================================================== Time elapsed: 13.17 min Iteration 5000 | Loss 1.46 The Bask tree. "The intame!" "Neothing and fam and if you brow lisert, to the mouther desk to an to the Gells that immered of the indence an aftionation bud, undering to went remark down off work; doe! ================================================== Time elapsed: 15.83 min Iteration 6000 | Loss 1.64 The Pross. 
What of moon, and worth her knitting, and is he see myself the was seeper on prisoner her been on him our, and yet in the poors; is stooness of a morned this things more, were benthell name, ================================================== Time elapsed: 18.47 min Iteration 7000 | Loss 1.64 The here an the ferty care it was of the streach. As Miss Pross Borring of her surfounds of comprelegual saken which his returnes, shall in Heaved the arrows of the retore, then for Defarge. Jark, he wa ================================================== Time elapsed: 21.12 min Iteration 8000 | Loss 1.66 Thur and the decients than any. Monsiever such put her cite out over the cermarded and in herce then the repariey who grance the stalled be of the own and conversicted way of his anterom cold the cirse ================================================== Time elapsed: 23.77 min Iteration 9000 | Loss 1.53 Thrat to his man that extenss of the said her and had and world at it, she had was as breat--how had asseet triatile of the pationed, and that worked he works of one and nobainly, and out of that at the ================================================== Time elapsed: 26.43 min Iteration 10000 | Loss 1.30 Ther his moth wooten a new blood, a sile, the lactriden nother were noter, who had from his father to prettorers his fation. Then. He are is him a sloke it soits in him woired to the paper women, maning ================================================== Time elapsed: 29.07 min Iteration 11000 | Loss 1.73 The eighs while Miss Pross was saying a could they last the done by, and pressed to the been hackeful hight in mending, and to the done into-raid to have little faming shall now, with the said to go of ================================================== Time elapsed: 31.72 min Iteration 12000 | Loss 1.68 The here. It were would done. "It alread, I was say?" 
seen in not in the culles the sunded Miss, sure to be there were the would close he dark see radfe taken it is instend me had done-all I spy so str ================================================== Time elapsed: 34.35 min Iteration 13000 | Loss 1.42 The part, but, at had were tosen in it are of a proined serverently passing the fars, and the friended that a fiffer the knouttial backle and day, list, and from to could my deting; and very dark of the ================================================== Time elapsed: 37.00 min Iteration 14000 | Loss 1.78 Thre it was a days. "You and deperianned of the moved there way, and a socions of the proppiouse and this must a dively? "Yest!" (And in the care befon this there," asked nother, to the two in this ex ================================================== Time elapsed: 39.63 min Iteration 15000 | Loss 1.54 Tho a man in all looking the mannen were trangs; he more at man and in the had believe the sick of their an than the man the prioned in a golderate scattered no stup, and look, all thoused shall law sca ================================================== Time elapsed: 42.28 min Iteration 16000 | Loss 1.52 Thrishe forth, his have like him and words of it is a peeched in the eyes farge what it went exciect the deing and the mittions. The mounged the repalling's citines of mineurmt you not thinks, Charlee ================================================== Time elapsed: 44.93 min Iteration 17000 | Loss 1.58 Thrithest the prisonened I staid be, short, and not morright door with with the mitting to my worthud no paid it." "He do I as a more through a passed and go more. No and me, the far bold to fears and ================================================== Time elapsed: 47.56 min Iteration 18000 | Loss 1.49 Tho you would fro in his intides rather sation and chocal went in the things, asked the have hand of the distened did of the cately roar chifulures. 
What the His of a not his have pourty the took this l ================================================== Time elapsed: 50.19 min Iteration 19000 | Loss 1.78 Thragges," said some of a puncher in the Gabody old, was a Fants tall to know of the complight--seat more inten asse interancame my any went med Courable hands in that he behing make no will never see t ==================================================
Numpy-based NIST SPH audio-file reader
Project description
Numpy-based NIST SPH audio-file reader. This is for use with NIST SPH audio-files, the most likely use being extracting the TEDLIUM_release2 audio into formats that standard tools can easily process.
Note that this library doesn’t require any external tools such as vox or gstreamer. It just loads the data into a numpy array and then lets you dump it back out to wave files.
Note that the library does not support files with embedded-shorten-* encodings, only the base ulaw encoding. You will need to convert such files with:
sph2pipe file.sph file-raw.sph
to allow them to be loaded.
Usage
from sphfile import SPHFile

sph = SPHFile('TEDLIUM_release2/test/sph/JamesCameron_2010.sph')

# Note that the following loads the whole file into ram
print(sph.format)

# write out a wav file with content from 111.29 to 123.57 seconds
sph.write_wav('test.wav', 111.29, 123.57)
Requirements
- numpy
License
MIT License (c) 2017 Mike C. Fletcher
History
- 1.0.3 – Allow for other header keys during header format parsing
- 1.0.2 – Use signed integers for 2 and 4-byte sample_n_bytes
- 1.0.1 – Fix to allow for files that have non-sample-multiple bytes in the data section
- 1.0.0 – Initial release
Difference between revisions of "Command-line shell"
Latest revision as of 16:32, 13 January 2018
- Nash — Nash is a system shell, inspired by plan9 rc, that makes it easy to create reliable and safe scripts taking advantages of operating systems namespaces (on linux and plan9) in an idiomatic way.
- rc — Command interpreter for Plan 9 that provides similar facilities to UNIX’s Bourne shell, with some small additions and less idiosyncratic syntax.
- xonsh — A retrocompatible shell based on the python interpreter.
Changing your default shell
After installing one of the shells above, you can set it as the default login shell with:

chsh -s full-path-to-shell

where full-path-to-shell is the full path as given by chsh -l.
If you now log out and log in again, you will be greeted by the other shell.
Problem with movieclip height - bhargavi reddy, Apr 13, 2012 4:18 AM
I am facing a problem with a movieclip's height. I created a movieclip that has 4 labels, 1 textinput, 1 textarea, and 1 button (all of these are components). The height of the movieclip is 280. I added this movieclip to another movieclip to display it on the stage. At that point the height of the movieclip is 337.5. I want to know why the height changed. Anyone please help me.
and my code is as follows:
import flash.display.MovieClip;
var mc:MovieClip = new MovieClip();
addChild(mc);
for(var i:int=0;i<2;i++)
{
var cell;
if(i == 0)
{
cell = new email_cell();
}
if(i == 1)
{
cell = new details_cell();
}
cell.x = 20;
cell.y = mc.height;
mc.addChild(cell);
trace(cell.height);
trace(mc.height);
}
Both email_cell and details_cell height is changed.
1. Re: Problem with movieclip height - Ned Murphy, Apr 13, 2012 4:38 AM (in response to bhargavi reddy)
Are you saying that the line trace(cell.height); indicates a value greater than what you believe the height is (280)?
What value of height does it trace if you just manually add an instance to the stage?
2. Re: Problem with movieclip height - bhargavi reddy, Apr 13, 2012 4:43 AM (in response to Ned Murphy)
Yeah, I manually added an instance to the stage.
3. Re: Problem with movieclip height - bhargavi reddy, Apr 13, 2012 4:46 AM (in response to bhargavi reddy)
According to my code, I got the output as follows:
337.5 - cell.height for i=0
337.5 - mc.height
197.5 - cell.height for i=1
535 - total mc height (cell height for i=0 + cell height for i=1)
4. Re: Problem with movieclip height - Ned Murphy, Apr 13, 2012 4:57 AM (in response to bhargavi reddy)
If the manually placed instance still measures 337.5 then chances are that is its true height. If you can show a screenshot of the offending movieclip, I can probably recreate it to see if I get the same result, and then maybe determine which object within it might be creating height that you can't see.
5. Re: Problem with movieclip height - bhargavi reddy, Apr 13, 2012 5:05 AM (in response to Ned Murphy)
6. Re: Problem with movieclip height - Ned Murphy, Apr 13, 2012 6:10 AM (in response to bhargavi reddy) [1 person found this helpful]
Here's what I have concluded... the label elements are what are causing the problem, the one inside the button component being the contributor to the 337.5 value. If you were to remove that button I am guessing you would still see something around the 311 height value due to the label above the button.
I cannot answer why this is the case, but that's what it is. The labels apparently carry some unseen baggage. I generally avoid using components for a few reasons... they tend to weigh too much byte-wise, some can be a pain to manage, and then they go and pull stuff like this.
What I did to investigate this is to place each component on a separate layer in the movieclip and made each layer a guide layer. Then I gradually changed them back and watched the height values reported by your code.
Your options for a solution...
1) don't use labels or button components... use textfields and button symbols (ones you create yourself)
2) create the backgrounds as movieclips and use those to find the height.
3) assign some property to the movieclips for their height values and use those instead of the height you read as a property.
7. Re: Problem with movieclip height - bhargavi reddy, Apr 15, 2012 10:35 PM (in response to Ned Murphy)
Thank you so much for your answer, that helps me a lot.
8. Re: Problem with movieclip height - Ned Murphy, Apr 16, 2012 4:19 AM (in response to bhargavi reddy)
You're welcome
Scala on Heroku
Posted by Adam
The sixth official language on the Heroku polyglot platform is Scala, available in public beta on the Cedar stack starting today.
Scala deftly blends object-oriented programming with functional programming. It offers an approachable syntax for Java and C developers, the power of a functional language like Erlang or Clojure, and the conciseness and programmer-friendliness normally found in scripting languages such as Ruby or Python. It has found traction with big-scale companies like Twitter and Foursquare, plus many others. Perhaps most notably, Scala offers a path forward for Java developers who seek a more modern programming language.
More on those points in a moment. But first, let's see it in action.
Scala on Heroku in Two Minutes
Create a directory. Start with this sourcefile:
src/main/scala/Web.scala
import org.jboss.netty.handler.codec.http.{HttpRequest, HttpResponse}
import com.twitter.finagle.builder.ServerBuilder
import com.twitter.finagle.http.{Http, Response}
import com.twitter.finagle.Service
import com.twitter.util.Future
import java.net.InetSocketAddress
import util.Properties

object Web {
  def main(args: Array[String]) {
    val port = Properties.envOrElse("PORT", "8080").toInt
    println("Starting on port:"+port)
    ServerBuilder()
      .codec(Http())
      .name("hello-server")
      .bindTo(new InetSocketAddress(port))
      .build(new Hello)
  }
}

class Hello extends Service[HttpRequest, HttpResponse] {
  def apply(req: HttpRequest): Future[HttpResponse] = {
    val response = Response()
    response.setStatusCode(200)
    response.setContentString("Hello from Scala!")
    Future(response)
  }
}
Add the following files to declare dependencies and build with sbt, the simple build tool for Scala:
project/build.properties
sbt.version=0.11.0
build.sbt
import com.typesafe.startscript.StartScriptPlugin

seq(StartScriptPlugin.startScriptForClassesSettings: _*)

name := "hello"

version := "1.0"

scalaVersion := "2.8.1"

resolvers += "twitter-repo" at ""

libraryDependencies ++= Seq("com.twitter" % "finagle-core" % "1.9.0",
                            "com.twitter" % "finagle-http" % "1.9.0")
Declare how the app runs with a start script plugin and Procfile:
project/build.sbt
resolvers += Classpaths.typesafeResolver

addSbtPlugin("com.typesafe.startscript" % "xsbt-start-script-plugin" % "0.3.0")
Procfile
web: target/start Web
Commit to Git:
$ git init
$ git add .
$ git commit -m init
Create an app on the Cedar stack and deploy:
$ heroku create --stack cedar
Creating warm-frost-1289... done, stack is cedar | git@heroku.com:warm-frost-1289.git
Git remote heroku added
$ git push heroku master
Counting objects: 14, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (14/14), 1.51 KiB, done.
Total 14 (delta 1), reused 0 (delta 0)
-----> Heroku receiving push
-----> Scala app detected
-----> Building app with sbt v0.11.0
-----> Running: sbt clean compile stage
       Getting net.java.dev.jna jna 3.2.3 ...
       ...
       [success] Total time: 0 s, completed Sep 26, 2011 8:41:10 PM
-----> Discovering process types
       Procfile declares types -> web
-----> Compiled slug size is 43.1MB
-----> Launching... done, v3 deployed to Heroku
Then view your app on the web!
$ curl
Hello from Scala!
Dev Center: Getting Started with Scala on Heroku/Cedar
Language and Community
Scala is designed as an evolution of Java that addresses the verbosity of Java syntax and adds many powerful language features such as type inference and functional orientation. Java developers who have made the switch to Scala often say that it brings fun back to developing on the JVM. Boilerplate and ceremony are replaced with elegant constructs, to express intent in fewer lines of code. Developers get all the benefits of the JVM — including the huge ecosystem of libraries and tools, and a robust and performant runtime — with a language tailored to developer happiness and productivity.
Scala is strongly- and statically-typed, like Java (and unlike Erlang and Clojure). Its type inference has much in common with Haskell.
Yet, Scala achieves much of the ease of use of a dynamically-typed language (such as Ruby or Python). Though there are many well-established options for dynamically-typed open source languages, Scala is one of the few with compile-time type safety which is also both practical and pleasant to use. The static vs dynamic typing debate rages on, but if you're in the type-safe camp, Scala is an obvious choice.
Language creator Martin Odersky's academic background shines through in the feel of the language and the community. But the language's design balances academic influence with approachability and pragmatism. The result is that Scala takes many of the best ideas from the computer science research world, and makes them practical in an applied setting.
Members of the Scala community tend to be forward-thinking, expert-level Java programmers; or developers from functional backgrounds (such as Haskell or ML) who see an opportunity to apply the patterns they love in a commercially viable environment.
There is some debate about whether Scala is too hard to learn or too complex. One answer is that the language is still young enough that learning resources aren't yet fully-baked, although Twitter's Scala School is one good resource for beginners. But perhaps Scala is simply a sharper tool than Java: in the hands of experts it's a powerful tool, but copy-paste developers may find themselves with self-inflicted wounds.
Scala Days is the primary Scala conference, although the language is well-represented at cross-community conferences like Strange Loop.
The language community has blossomed, and is now in the process of accumulating more and more mainstream adoption. Community members are enthusiastic about the language's potential, making for an environment that welcomes and encourages newcomers.
Open Source Projects
Open source is thriving in the Scala world. The Lift web framework is a well-known early mover, but the last two years have seen an explosion of new projects showcasing Scala's strengths.
Finagle is a networking library coming out of the Twitter engineering department. It's not a web framework in the sense of Rails or Django, but rather a toolkit for creating network clients and servers. The server builder is in some ways reminiscent of the Node.js stdlib for creating servers, but much more feature-full: fault-tolerance, backpressure (rate-limiting defense against attacks), and service discovery to name a few. The web is increasingly a world of connected services, and Finagle (and Scala) are a natural fit for that new order.
Spark runs on Mesos (a good example of hooking into the existing JVM ecosystem) to do in-memory dataset processing, such as this impressive demo of loading all of Wikipedia into memory for lightning-fast searches. Two other notable projects are Akka (concurrency middleware) and Play! (web framework), which we'll look at shortly.
The Path Forward for Java?
Some Java developers have been envious of modern, agile, web-friendly languages like Ruby or Python — but they don't want to give up type safety, the Java library ecosystem, or the JVM. Leaders in the Java community are aware of this stagnation problem and see alternate JVM languages as the path forward. Scala is the front-runner candidate on this, with support from influential people like Bruce Eckel, Dick Wall and Carl Quinn of the Java Posse, and Bill Venners.
Scala is a natural successor to Java for a few reasons. Its basic syntax is familiar, in contrast with Erlang and Clojure: two other functional, concurrency-focused languages which many developers find inscrutable. Another reason is that Scala's functional and object-oriented mix allows new developers to build programs in an OO model to start with. Over time, they can learn functional techniques and blend them in where appropriate.
Working with Java libraries from Scala is trivial and practical. You can not only call Java libraries from Scala, but go the other way — provide Scala libraries for Java developers to call. Akka is one example of this.
There's obvious overlap here between Scala as a reboot of the Java language and toolchain, and the Play! web framework as a reboot of Java web frameworks. Indeed, these trends are converging, with Play! 2.0 putting Scala front-and-center. The fact that Play! can be used in a natural way from both Java and Scala is another testament to JVM interoperability. Play 2.0 will even use sbt as the builder and have native Akka support.
Typesafe and Akka
Typesafe is a new company emerging as a leader in Scala, with language creator Martin Odersky and Akka framework creator Jonas Bonér as co-founders. Their open-source product is the Typesafe Stack, a commercially-supported distribution of Scala and Akka.
Akka is an event-driven middleware framework with emphasis on concurrency and scale-out. Akka uses the actor model with features such as supervision hierarchies and futures.
The Heroku team worked closely with Typesafe on bringing Scala to our platform. This collaboration produced items like the
xsbt-start-script-plugin, and coordination around the release of sbt 0.11.
Havoc Pennington of Typesafe built WebWords, an excellent real-world demonstration of using Akka's concurrency capabilities to scrape and process web pages. Try it out, then dig in on the sourcecode and his epic Dev Center article explaining the app's architecture in detail. Havoc also gave an educational talk at Dreamforce about Akka, Scala, and Play!.
Typesafe: we enjoyed working with you, and look forward to more productive collaboration in the future. Thanks!
Conclusion
Scala's explosive growth over the past two years is great news for both Java developers and for functional programming. Scala on Heroku, combined with powerful toolsets like Finagle and Akka, are a great fit for the emerging future of connected web services.
Further reading:
- Getting Started with Scala on Heroku/Cedar
- Scaling Out with Scala and Akka on Heroku
- Twitter Scala School
- Planet Scala
- Book: Programming in Scala by Martin Odersky, Lex Spoon, and Bill Venners
- Book: Programming Scala by Dean Wampler and Alex Payne
- Video: Scala, Akka, and Play!: An Introduction in the Cloud
Special thanks to Havoc Pennington, Jeff Smick, Steve Jenson, James Ward, Bruce Eckel, and Alex Payne for alpha-testing and help with this post. | https://blog.heroku.com/archives/2011/10/3/scala | CC-MAIN-2015-32 | refinedweb | 1,668 | 56.96 |
I've been working on Windows Mobile applications lately, one of which I'm building in WTL. I chose WTL over MFC or the .NET Compact Framework because of speed, size and dependency limitations of the latter two.
I started out with an SDI (single document interface) WTL wizard-based application, added some control-derived windows and some dialog windows (form views) and only then realized I had to find a way to make the SDI framework dynamically load and unload child windows the same way that you can with MFC or .NET applications — to be honest it's not a whole lot easier to do in MFC.
MDI (multiple document interface) is certainly an option, usually, but the WTL MDI framework doesn't support Windows Mobile/CE. As with most things, there are work-arounds (here's one such example). But even if MDI can be stretched to work, it irks me to have to make all that effort to make one architecture do something that another (SDI) ought to do itself.
This article demonstrates two techniques which can be used to do just that: dynamically switch between views in an SDI application. I'm sure there are other ways to do it but these are the two that I use. Above, I've included full source and a working example for you. I hope this article saves at least one other person the trouble of having to figure this out for themselves.
SetWindowLongPtr
SetWindowLong
The default WTL wizard-built SDI application has a single client, CWindow derived, window or "view" within a single parent "frame" which usually descends from CFrameWindowImpl.
CWindow
CFrameWindowImpl
{For a fuller discussion of WTL's organization and usage, please see Michael Dunn's excellent series here on CodeProject — particularly "WTL for MFC Programmers, Part II - WTL GUI Base Classes".}
This design philosophy is essentially continued in MDI where there is an owning child frame for each view (that is, it's one-to-one vs one-to-many).
To be clear, WTL doesn't implement anything like MFC's Document/View model so where above and elsewhere I refer to a "view", I mean simply a child window (i.e. a window, dialog or wrapped child control) within the application's main frame.
In many cases, the best way to gain the ability to switch between views is to go with the flow and simply build your application as an MDI application. That said, there are cases where MDI, as mentioned above, isn't available or desirable. Getting back to SDI, the main frame stores a handle (HWND) to its child view in a public variable called m_hWndClient.
m_hWndClient
Knowing how the frame stores a reference to its child view, you might at first be tempted to solve the problem by re-assigning the new child window to the frame's m_hWndClient and then updating the layout.
// Somewhere within your frame class -- Doesn't work!!!
this->m_hWndClient = m_hWndNewView; // m_hWndNewView is a handle to the
// view I want to switch to
UpdateLayout();
Unfortunately this doesn't work, principally because the frame doesn't know about the window whose handle you've just given it.
The trick, if there is one, to solving this problem is to understand that Windows implicitly references the first "pane" within the frame window that is not a control bar — which happens to be the child view. This is also, I should mention, why you need to do the same thing in both MFC and in WTL in order to switch views.
In MFC, this pane is identified as AFX_IDW_PANE_FIRST. If you poke around inside ATL (atlres.h), you'll find a similarly named definition called ATL_IDW_PANE_FIRST. But both have the same value of "0xE900".
As I hinted above, you can either destroy the current child "view," create the new view and then re-assign the new view's handle (implicitly setting the first pane ID) — Technique #1; or you can explicitly change the IDs of the two views so that the current view's ID is no longer ATL_IDW_PANE_FIRST and then assign this ID to the new view using some direct windows calls. (Which I'll show you how to do in a just a bit.)
Interestingly, Technique #1 doesn't require switching the IDs as described — so I'm guessing that when you create the second view, which definitely has a different internal ID, that the frame or windows re-assigns the ID to "0xE900". If you don't switch the IDs but just create the second view with the frame as its parent HWND, the frame will continue to reference the first view as its child as long as it exists. I'll leave it to someone wiser in the ways of Windows to explain further.
I first saw this technique used in Chris Sell's White Paper "WTL Makes UI Programming a Joy - Part 2" (which you can find on. Look for a function called TogglePrintPreview() in the BitmapView example).
TogglePrintPreview()
I'll use a slightly different version so that my example code matches the demo and source I've provided. I've also extended it so that I can support an arbitrary number of views. The steps can be broken down roughly into:
m_hWndClient
PreTranslateMessage
// View is just an enum that makes it more convenient to address views.
// There's a defined VIEW enum for each view/dialog class you want
// to be able to switch to.
//
// You could accomplish the same thing with simple integers, member windows
// handles, or whatever else distinguishes the requested and current views.
enum VIEW {BASIC, DIALOG, EDIT, NONE};
// Member views
CBasicView m_view; // Basic view derived from wizard
CEditView m_edit; // Basic dialog derived from wizard
CBasicDialog m_dlg; // Basic edit control view derived from wizard
...
void SwitchView(VIEW view)
{
// Pointers to old and new views
CWindow *pOldView, *pNewView;
// Get current window/view
pOldView = GetCurrentView(); // Defined below
// Get/create requested view
pNewView = GetNewView(view); // Defined below
// Check if requested view is current view or default
if(!pOldView || !pNewView || (pOldView == pNewView))
return; // Nothing to do
// Show/Hide
pOldView->ShowWindow(SW_HIDE); // Hide the old
pNewView->ShowWindow(SW_SHOW); // Show the new window
// Delete the old view
pOldView->DestroyWindow();
// Ask frame to update client
UpdateLayout();
}
GetCurrentView() is a helper function that compares m_hWndClient to each view's handle and then returns the matching view, cast to a CWindow*. Like so:
GetCurrentView()
CWindow*
// Helper method to get current view ~ MFC GetActiveView() not available!
CWindow* GetCurrentView()
{
if(!m_hWndClient)
return NULL;
if(m_hWndClient == m_view.m_hWnd)
return (CWindow*)&m_view;
else if(m_hWndClient == m_dlg.m_hWnd)
return (CWindow*)&m_dlg;
else if(m_hWndClient == m_edit.m_hWnd)
return (CWindow*)&m_edit;
else
return NULL;
}
GetNewView(VIEW view) is a helper function that returns the requested view, cast to a CWindow*. In the process, it creates the view object if necessary and also assigns its handle to the frame's m_hWndClient. Like so:
GetNewView(VIEW view)
// Helper method to get/create new view
CWindow* GetNewView(VIEW view)
{
CWindow* newView = NULL;
// Now set requested view
switch(view)
{
case BASIC:
// If doesn't exist, create it and set reference to frame's
// m_hWndClient
if(m_view.m_hWnd == NULL)
m_view.Create(m_hWnd);
m_hWndClient = m_view.m_hWnd;
newView = (CWindow*)&m_view;
break;
case DIALOG:
if(m_dlg.m_hWnd == NULL)
m_dlg.Create(m_hWnd);
m_hWndClient = m_dlg.m_hWnd;
newView = (CWindow*)&m_dlg;
break;
case EDIT:
if(m_edit.m_hWnd == NULL)
m_edit.Create(m_hWnd);
m_hWndClient = m_edit.m_hWnd;
newView = (CWindow*)&m_edit;
break;
}
return newView;
}
SwitchView(VIEW view)
As I mentioned above, you should also consider updating the frame's PreTranslateMessage override to ensure that the views get a chance to execute their own PreTranslateMessage on messages. PreTranslateMessage essentially allows the frame and/or your view to preview messages and do something with them before they get translated and dispatched. (Return TRUE to prevent the message being translated and dispatched.)
TRUE
Most applications don't override PreTranslateMessage unless they need to do some special message handling, such as when they're subclassing a lot of controls. That said, the WTL wizard will automatically generate PreTranslateMessage functions in your CWindowImpl and CDialogImpl views and will also add the code necessary to route messages to them from the main frame's PreTranslateMessage, another reason I considered it mandatory to ensure messages were routed to my views from the main frame.
CWindowImpl
CDialogImpl
Here's how I've modified the frame's PreTranslateMessage to give my views a chance to look at the messages:
// Implemented in CMainFrame
virtual BOOL PreTranslateMessage(MSG* pMsg)
{
if(CFrameWindowImpl<CMainFrame>::PreTranslateMessage(pMsg))
return TRUE;
if(m_hWndClient != NULL)
{
// Call PreTranslateMessage for the current view
CWindow* pCurrentView = GetCurrentView(); // Get the current view
// (cast as a CWindow*) ~ function shown above
if(m_view.m_hWnd == pCurrentView->m_hWnd)
return m_view.PreTranslateMessage(pMsg);
else if(m_dlg.m_hWnd == pCurrentView->m_hWnd)
return m_dlg.PreTranslateMessage(pMsg);
else if(m_edit.m_hWnd == pCurrentView->m_hWnd)
return m_edit.PreTranslateMessage(pMsg);
}
return FALSE;
}
Here I first ensure that the frame has a valid child handle, and then I call the same GetCurrentView() function described above to return a CWindow*. I then use that CWindow*'s HWND member to compare to each of my view's. I do it this way because I need the view in order to call the view's own PreTranslateMessage. I can't use the CWindow to call it because it doesn't implement PreTranslateMessage.
It goes without saying that you don't need to include message routing to any views that don't implement PreTranslateMessage.
There are probably more elegant ways to do this, such as through run-time type information (RTTI), templates or other forms of inheritance, containment, etc.. Keep in mind that RTTI in particular can be an expensive way to solve this because it will traverse the inheritance hierarchy for each object, for each message. Given the number of messages that'll pass through PreTranslateMessage and the fact that I wanted to focus on the core problem I'm trying to solve, I'll have to leave more elegant solutions to the reader as a follow-on exercise.
PreTranslateMessage, by the way, is the sole method in the CMessageFilter interface — and a method which the main frame implements. It isn't, however, in the inheritance hierarchy of either CWindowImpl or CDialogImpl. Meaning that it isn't implicitly available in either and it isn't required of implementors of either.
CMessageFilter
This discussion will be much shorter as most of the preparation has already been done. All that's required at this point is a simple change to the SwitchView method to persist the views between switches instead of destroying them. If you refer back to SwitchView above, replace:
SwitchView
// Delete the old view
pOldView->DestroyWindow();
...with...
// Change the current view's ID so it isn't the first child w/in the frame
pOldView->SetWindowLongPtr(GWL_ID, 0);
// Make the new view the frame's first pane/child
pNewView->SetWindowLongPtr(GWL_ID, ATL_IDW_PANE_FIRST);
... that's it! As discussed above, the frame uses the first pane ID in order to update its client view so you need to change the current view's GWL_ID to something other than ATL_IDW_PANE_FIRST and then change the new view's GWL_ID to ATL_IDW_PANE_FIRST.
GWL_ID
ATL_IDW_PANE_FIRST
VIEW
SwitchView(BASIC)
SwitchView(EDIT)
enum VIEW {}
CMyView m_myView
GetNewView(VIEW)
void SwitchView(VIEW view, BOOL bPreserve = FALSE) // Destroy by default
{
...
if(bPreserve)
{
// Use Technique #2
}
else
{
// Use Technique #1
}
}
Please consider this but a starting point. There are any number of improvements that can be made to the code accompanying this article but which I deemed non-essential to my main subject or which time and space precluded exploring further. Some specific improvements I might make include:
CWindow* CPocketMDFrame::GetCurrentView()
{
if(!m_hWndClient)
return NONE;
if((m_pView) && (m_hWndClient == m_pView->m_hWnd))
return (CWindow*)m_pView;
...
}
This article is copyrighted material, (c) 2007 by Tim Brooks. This article has been researched and written with the intention of helping others benefit from my knowledge and experience just as I've benefited from the knowledge and experience of countless others. If you would like to translate this article please email me to let me know. I would like to know about derivation of this article and also be able to reference said translations here and elsewhere.
The demo code accompanying this article is released to the public domain. This article, however, is not public domain. If you use the code in your own application, I'd appreciate an email telling me about it — but I don't require it. Finally, attribution in your own source code would be appreciated but is likewise not required.
September, 17,1083: Cannot open include file: 'atlapp.h': No such file or directory d:\sdiapp\sdimultiview_src\sdimultiview\sdimultiview\stdafx.h 14 1 SDIMultiView
WTL::CTabView
Tim Brooks wrote:I also think, going that route, you'd probably want to resize the window to remove tab titles, etc....
WTL::CTabView::ShowTabControl(false)
Tim Brooks wrote:BTW .... I looked over your articles. Great Stuff!!!
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://codeproject.freetls.fastly.net/Articles/20566/Switch-Views-in-a-WTL-SDI-Application?msg=2235845 | CC-MAIN-2021-39 | refinedweb | 2,171 | 61.16 |
In this second episode of Tablet Python, I'm going to tell you about an easy way to make resource modules. A resource module is a Python module which you can import for loading resources such as images, fonts, or whatever you want.
For example:
import graphics
img = gtk.Image()
img.set_from_pixbuf(graphics.icon)
Implementation
In episode #1 we talked about writing relocatable software. We are going to apply the same technique for resource modules, since they will be relocatable as well.
Our resource modules are Python packages, i.e. a subdirectory with a __init__.py file in it. The name of the directory will be the module's name. Be careful to only use valid characters (characters valid for Python variables) in the directory name and the filenames of the resource files (e.g., my_icon.png is OK, while my-icon.png would be not!).
Let's assume the following filesystem structure for our example:
graphics/
+ __init__.py
+ icon.png
+ logo.png
+ background.png
test.py
The package directory will also contain the resource files. In order to access them, we'll have to put them into the module namespace. Write the following into graphics/__init__.py:
import os
import gtk
_RESOURCE_PATH = os.path.dirname(__file__)
def _load_resources():
# Find all .png files in the resource directory. This construct is called
# "list comprehension" and will be covered in detail in episode #3.
# This returns a list of the names of all files in the resource directory
# ending with ".png".
resources = [ f for f in os.listdir(_RESOURCE_PATH)
if f.endswith(".png") ]
# load resources into module namespace
for r in resources:
# the filename without extension will be the name to access the
# resource, so we strip off the extension
name = os.path.splitext(r)[0]
# this is the full path to the resource file
path = os.path.join(_RESOURCE_PATH, r)
# Now we can load the resource into the module namespace.
# globals() gives us the dictionary of the module-global variables,
# and we can easily extend it by assigning new keys.
globals()[name] = gtk.gdk.pixbuf_new_from_file(path)
#end for
# load the resources when importing the module
_load_resources()
That's all. Let's test it with a tiny test program test.py:
import gtk
import graphics
win = gtk.Window(gtk.WINDOW_TOPLEVEL)
img = gtk.Image()
img.set_from_pixbuf(graphics.logo)
win.add(img)
win.show_all()
gtk.main()
You may put as many resource files as you want into the resource directory and simply access them by their name. The files are loaded only once when the module gets imported the first time. Subsequent import statements reuse the module without reloading the resources. I'm going to talk about the "singleton" nature of Python modules in a later episode.
A more sophisticated example of resource modules can be found in the theming engine of MediaBox (the subdirectory called theme). | http://pycage.blogspot.com/2007/ | CC-MAIN-2018-39 | refinedweb | 473 | 59.7 |
Usability
Usability
Just tag your strings to make them translatable. Use simple ttag-cli tool for translations extraction
Integration
Integration
Can be easily integrated with almost any workflow as it uses babel-plugin for strings extraction. Can be easily used with the typescript.
Performance
Performance
Allows you to place translations in to the sources on a build step
Based on GNU gettext
Gettext is a simple localization format with the rich ecosystem. Ttag has support for plurals, contexts, translator comments and much more.
Simple use case:Simple use case:
import { t } from "ttag"; t`This string will be translated`;
Plurals:Plurals:
import { ngettext, msgid } from "ttag"; ngettext(msgid`${n} banana`, `${n} bananas`, n);
Contexts:Contexts:
import { c } from "ttag"; c('email').t`this text will be in email context`;
JSX:JSX:
import { jt } from "ttag"; jt`can use ${<JSXElement/>} inside the translations`;
Command line utility that is used for translations extraction and different .po files manipulations. Works with js, jsx, ts, tsx files out of the box.
Simple translations extraction to .po file:Simple translations extraction to .po file:
ttag extract index.js
Update .po file with new translations:Update .po file with new translations:
ttag update out.po index.js
Create a new file with all strings replaced with translations from .po file:Create a new file with all strings replaced with translations from .po file:
ttag replace out.po index.js index-translated.js
Who's Using This?
This project is used by all these people | https://ttag.js.org/index.html | CC-MAIN-2021-25 | refinedweb | 248 | 59.5 |
Andreas Röhler <address@hidden> writes: > Thierry Volpiatto wrote: >> Thierry Volpiatto <address@hidden> writes: >> >>> Hi, >>> >>> Christian Wittern <address@hidden> writes: >>> >>>> Hi there, >>>> >>>> Here is the problem I am trying to solve: >>>> >>>> I have a large list of items which I want to access. The items are in >>>> sequential order, but many are missing in between, like: >>>> >>>> (1 8 17 23 25 34 45 47 50) [in reality, there is a value associated >>>> with this, but I took it out for simplicity] >>>> >>>> Now when I am trying to access with a key that is not in the list, I >>>> want to have the one with the closest smaller key returned, so for 6 >>>> and 7 this would be 1, but for 8 and 9 this would be 8. >>>> >>>> Since the list will have thousands of elements, I do not want to simply >>>> loop through it but am looking for better ways to do this in Emacs lisp. >>>> Any ideas how to achieve this? >>> ,---- >>> | (defun closest-elm-in-seq (n seq) >>> | (let ((pair (loop with elm = n with last-elm >>> | for i in seq >>> | if (and last-elm (< last-elm elm) (> i elm)) return >>> (list last-elm i) >>> | do (setq last-elm i)))) >>> | (if (< (- n (car pair)) (- (cadr pair) n)) >>> | (car pair) (cadr pair)))) >>> `---- >>> >>> That return the closest, but not the smaller closest, but it should be >>> easy to adapt. >> >> Case where your element is member of list, return it: >> >> ,---- >> | (defun closest-elm-in-seq (n seq) >> | (let ((pair (loop with elm = n with last-elm >> | for i in seq >> | if (eq i elm) return (list i) >> | else if (and last-elm (< last-elm elm) (> i elm)) return >> (list last-elm i) >> | do (setq last-elm i)))) >> | (if (> (length pair) 1) >> | (if (< (- n (car pair)) (- (cadr pair) n)) >> | (car pair) (cadr pair)) >> | (car pair)))) >> `---- >> For the smallest just return the car... >> > > if n is member of the seq, maybe equal-operator too > > (<= last-elm elm) > > is correct? 
No, in this case: if (eq i elm) return (list i) ==> (i) ; which is n and finally (car pair) ==> n > Thanks BTW, very interesting > > Andreas > > > -- Thierry Volpiatto Gpg key: | http://lists.gnu.org/archive/html/help-gnu-emacs/2010-03/msg00099.html | CC-MAIN-2014-41 | refinedweb | 353 | 54.73 |
Manpage of SET_TID_ADDRESS
SET_TID_ADDRESSSection: Linux Programmer's Manual (2)
Updated: 2014-07-08
Index
NAMEset_tid_address - set pointer to thread ID
SYNOPSIS
#include <linux/unistd.h>long set_tid_address(int *tidptr);
DESCRIPTIONFor each thread, the kernel maintains two attributes (addresses) called set_child_tidand clear_child_tid. These two attributes contain the value NULL by default.
- set_child_tid
- If a thread is started using clone(2) with the CLONE_CHILD_SETTIDflag, set_child_tidis set to the value passed in the ctidargument of that system call.
- When set_child_tidis set, the very first thing the new thread does is to write its thread ID at this address.
- clear_child_tid
- If a thread is started using clone(2) with the CLONE_CHILD_CLEARTIDflag, clear_child_tidis set to the value passed in the ctidargument of that system call.
The system call set_tid_address() sets the clear_child_tidvalue for the calling thread to tidptr.
When a thread whose clear_child_tidis not NULL terminates, then, if the thread is sharing memory with other threads, then 0 is written at the address specified in clear_child_tid VALUEset_tid_address() always returns the caller's thread ID.
ERRORSset_tid_address() always succeeds.
VERSIONSThis call is present since Linux 2.5.48. Details as given here are valid since Linux 2.5.49.
CONFORMING TOThis system call is Linux-specific.
SEE ALSOclone(2), futex(2), gettid(2)
Index
This document was created by man2html, using the manual pages.
Time: 16:30:06 GMT, December 12, 2016 Click Here! | https://www.linux.com/manpage/man2/set_tid_address.2.html | CC-MAIN-2017-09 | refinedweb | 227 | 56.45 |
Will do; this is on my list of things to look at very soon. Right now
I'm focusing on making some general performance improvements.
- James
Woody Anderson wrote:
> so this is my current solution:
>
> Reader reader = response.getReader();
> String docStr = StreamUtils.getString( reader );
>
> docStr = docStr.replace( "",
>
> "" );
> Parser parser = ABDERA.getParser();
> Document<Entry> doc = parser.parse( new StringReader( docStr ) );
>
> which works but is clearly a complete hack.
> if it becomes easier to alias this namespace or handle this with an
> extension factory, please let me know.
> -w
>
> On 10/31/07, Woody Anderson <woody@xoopit.com> wrote:
>> ok. i tried to do this, but it seems i'm not understanding something
>> with regard to the getElementWrapper method.. and now the mimetype
>> (per Element?)
>>
>> public class AliasNamespacesFactory implements ExtensionFactory {
>> HashSet<String> _namespaces = new HashSet<String>();
>> public AliasNamespacesFactory( Set<String> namespaces ) {
>> _namespaces.addAll( namespaces );
>> }
>>
>> public <T extends Element> T getElementWrapper(Element internal) {
>> // ????????
>> }
>>
>> public <T extends Base> String getMimeType(T base) {
>> // ??????????
>> }
>>
>> public String[] getNamespaces() {
>> return _namespaces.toArray( new String[ _namespaces.size() ] );
>> }
>>
>> public boolean handlesNamespace(String namespace) {
>> return _namespaces.contains( namespace );
>> }
>> }
>>
>>
>> do i have to have a if/else to instantiate FOMEntry, FOMLink, etc.. ?
>> most of the constructors are protected, which makes that sort of a non-starter.
>>
>> FOMFactory seems to have a lot of methods, but these take objects that
>> i don't have at my disposal. And these seem much more geared toward
>> creation for population, not for cloning or refacing an existing
>> "Element".
>>
>> do i need to basically recreate the FOMFactory for myself? and
>> subsequently all of the objects?
>>
>> on a complete hack approach:
>> if i ask the ClientResponse for a reader and put the entire response
>> in a string replacing the namespace with the "correct" namespace. i
>> could just ask for the default parser and continue right?
>> i mean this would be ugly in the extreme, but it sounds only slightly
>> worse than recreating the entire FOM factory just to change broken-ns
>> to atom-ns.
>>
>> and the only other way i see to get in front of the system is to front
>> a stream reader via. createXMLStreamReader, but that seems tied up in
>> axiom.. which is an extra helping of confusing.
>>
>> thoughts?
>> -w
>>
>> On 10/31/07, James M Snell <jasnell@gmail.com> wrote:
>>> Ok, this is actually somewhat difficult right now but definitely
>>> possible. What you'd need to do is create a custom ExtensionFactory
>>> implementation (that does not extend AbstractExtensionFactory) and use
>>> that to create the appropriate objects. The getElementWrapper method is
>>> not required to return an instance of an actual ElementWrapper subclass.
>>>
>>> It would take a bit of work to do, because the internal element passed
>>> in to getElementWrapper would need to be used to create a new instance
>>> of the appropriate FOM impl.
>>>
>>> It's entirely possible that this could be made easier.
>>>
>>> - James
>>>
>>> Woody Anderson wrote:
>>>> hello,
>>>>
>>>> i'm getting an errant namespace in responses from various servers.
>>>>
>>>> e.g.
>>>> <?xml version="1.0" encoding="utf-8"?>
>>>> <entry xmlns="">
>>>> <title xmlns="">Example Title</title>
>>>> <summary xmlns="">Example Text</summary>
>>>> <content xmlns="" mode="xml">
>>>> <div xmlns="">Example Text</div>
>>>> </content>
>>>> <id xmlns="">urn:lj:livejournal.com:atom1:username:number</id>
>>>> <link xmlns="" type="application/x.atom+xml"
>>>>>>> href="" title="Example
>>>> Title"/>
>>>> <link xmlns="" type="text/html"
>>>>>>>
>>>> </entry>
>>>>
>>>> i want to handle this as though it were a correctly namespaced entry.
>>>> all types of elements from this server comeback with this namespace,
>>>> so i need this for each of the model elements.
>>>>
>>>> I'm a bit confused about how i do this for all elements. I've been
>>>> looking at the AbstractExtensionFactory and ExtensionFactory docs,
>>>> but it is unclear what getElementWrapper()
>>>> is supposed to do to make it all "work".
>>>>
>>>> i would hope that it's fairly simple to consume this bogus namespace.
>>>>
>>>> i found an old example doing something *similar* that extended
>>>> FOMExtensionFactory (which no longer exists..) to handle an Atom03
>>>> feed. This example doesn't work anymore and was still pretty confusing
>>>> as it seemed to work for Feed only.
>>>>
>>>> is there a simple way to handle this errant ns?
>>>> thanks
>>>> -w
>>>>
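One way to "get in front of the system" without rebuilding the FOM factory is to normalize the broken namespace before Abdera ever parses the document. The sketch below uses only the JDK's DOM API (no Abdera types) and blanket-renames every element in the empty namespace into the Atom namespace — note this is a workaround of my own, not the ExtensionFactory route James describes, the class name `EmptyNsFixer` is invented, and renaming *everything* may be too aggressive if the feed embeds XHTML content:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class EmptyNsFixer {
    static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    // Move every element that has no namespace into the Atom namespace.
    // Document.renameNode may return a replacement Node, so recurse on its result.
    static void fix(Document doc, Node node) {
        if (node.getNodeType() == Node.ELEMENT_NODE
                && (node.getNamespaceURI() == null || node.getNamespaceURI().isEmpty())) {
            node = doc.renameNode(node, ATOM_NS, node.getLocalName());
        }
        NodeList kids = node.getChildNodes();
        for (int i = 0; i < kids.getLength(); i++) {
            fix(doc, kids.item(i));
        }
    }

    public static Document parseAndFix(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // required, or namespace URIs are never populated
        Document doc = f.newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        fix(doc, doc.getDocumentElement());
        return doc;
    }
}
```

After the fix, the DOM tree can be re-serialized and handed to Abdera as a correctly namespaced entry, sidestepping the factory machinery entirely.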
Create Shader Not Rendering
On 05/03/2013 at 15:45, xxxxxxxx wrote:
I have a problem where I have code that creates a shader and loads a file into the color and alpha channels, then activates the alpha channel — but then nothing will render in either the viewport or the Picture Viewer. If I disable the alpha I can render again, but I don't know why I can't render with both. Any ideas? Thanks!
Using R14 Studio
import c4d
from c4d import documents
#Welcome to the world of Python

def main():
    #replace with your path
    file = "/Volumes/Textures/Leaves/Leaf1.tif"

    def insertobj():
        obj = c4d.BaseObject(c4d.Oplane)
        doc.InsertObject(obj)
        c4d.EventAdd()

    doc = documents.GetActiveDocument()
    mat = c4d.BaseMaterial(c4d.Mmaterial)
    sha = c4d.BaseList2D(c4d.Xbitmap)
    sha[c4d.BITMAPSHADER_FILENAME] = file
    mat.InsertShader(sha)
    mat[c4d.MATERIAL_COLOR_SHADER] = sha
    mat[c4d.MATERIAL_USE_ALPHA] = True
    mat[c4d.MATERIAL_ALPHA_SHADER] = sha
    mat.Message(c4d.MSG_UPDATE)
    mat.Update(True, True)
    doc.InsertMaterial(mat)

    insertobj()
    mat = doc.SearchMaterial("Mat")
    obj = doc.SearchObject("Plane")
    doc.SetActiveObject(obj)
    doc.SetActiveMaterial(mat)
    c4d.CallCommand(12169)

if __name__=='__main__':
    main()
On 05/03/2013 at 16:18, xxxxxxxx wrote:
you cannot assign the same instance of a shader to multiple channels. that is a limitation of
the c4d material system. you have to create a copy first.
mat[c4d.MATERIAL_ALPHA_SHADER] = sha.GetClone()
edit: note that you might run into more problems when you try to copy more complex
shader setups this way, due to the unpredictable way C4DAtom.GetClone() works.
copying shaders which themselves contain object (instance) references will not always
return the expected results.
On 06/03/2013 at 07:22, xxxxxxxx wrote:
The shader also has to be inserted into the material's shader-list. Use BaseMaterial.InsertShader()
for this. This has to be done for every single clone of the shader as well. Do not invoke it twice
per shader.
Best,
-Niklas
On 06/03/2013 at 08:18, xxxxxxxx wrote:
Thanks for your responses, that makes sense. However, when I add the .GetClone() on the end it doesn't load the image into the alpha for me — is there more that has to be done?
I tried what Niklas said about inserting the shader again with something like this:
mat[c4d.MATERIAL_ALPHA_SHADER] = sha.GetClone()
mat.InsertShader( sha.GetClone() )
Edit: ok I got it, I think it was a newb thing. I assigned the result of sha.GetClone() to a variable first, and then it worked. Thanks again guys!
sha2 = sha.GetClone()
mat[c4d.MATERIAL_ALPHA_SHADER] = sha2
mat.InsertShader( sha2 )
Here is Assignment07:
public class Assignment07 {
    public static void main(String []args) {
        Asn07Employees emps = new Asn07Employees(ids, hours, rate);
        Asn07Employees allEmps[] = new Asn07Employees[7];
        int ids[] = { 5658845, 4520125, 7895122, 8777541, 8451277, 1302850, 7580489 };
        int hours[] = { 12, 15, 7, 16, -1, 20, 15 };
        float rate[] = { 6.5f, 12.5f, 1.5f, 10, 16.5f, 20, 32.5f };

        for(int i = 0; i < allSt.length; i++) {
            allEmps[i] = new Asn07Employees(names[i], scores[i]);
        }
        for(Asn07Employees oneEmps : allEmps)
            print(oneSt.toString());

        emps.calculateWages();
        System.out.println(emps);
        System.out.println("Id\t\tHours\t\tRate\t\tWages"
            + "\n--\t\t-----\t\t----\t\t-----");
    }

    public static void print(String s) {
        System.out.println(s);
    }
}
Here is Asn07Employees:
public class Asn07Employees {
    private int ids[];
    private int hours[];
    private float payrate[];
    private float wages[] = new float[7];

    public Asn07Employees(int _ids[], int _hours[], float _payrate[]) {
        ids = _ids;
        hours = _hours;
        payrate = _payrate;
    }

    public String toString() {
        String rv = " ";
        return rv;
    }

    public void setIds(int _ids) {
        ids = _ids;
    }

    public int getIds() {
        return ids;
    }

    public void setHours(int _hours) {
        hours = _hours;
    }

    public int getHours() {
        return hours;
    }

    public void calculateWages() {
    }
}
I am having trouble getting the arrays for ids, hours, and payrate to work across both classes.
I must calculate the wages for each id (hours * payrate) and display all arrays in the output.
I know it has something to do with my Asn07Employees class, because the compiler says an int is required but it keeps finding an int[].
I've been at it for quite some time now, solving the problem may be easy for you but I am still trying to understand Java. | https://www.daniweb.com/programming/software-development/threads/358045/help-with-array | CC-MAIN-2017-43 | refinedweb | 265 | 57.71 |
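For what it's worth, the core wage calculation can be sketched independently of the assignment's class layout. The version below keeps the three parallel arrays together and computes hours × rate per employee. The class and method names are invented for illustration, and the -1 hours entry from the assignment data is passed through unvalidated:

```java
public class WageSketch {

    // One wage per employee: wages[i] = hours[i] * rate[i].
    public static float[] computeWages(int[] hours, float[] rate) {
        float[] wages = new float[hours.length];
        for (int i = 0; i < hours.length; i++) {
            wages[i] = hours[i] * rate[i];
        }
        return wages;
    }

    public static void main(String[] args) {
        int[] ids = { 5658845, 4520125, 7895122, 8777541, 8451277, 1302850, 7580489 };
        int[] hours = { 12, 15, 7, 16, -1, 20, 15 };
        float[] rate = { 6.5f, 12.5f, 1.5f, 10, 16.5f, 20, 32.5f };

        float[] wages = computeWages(hours, rate);

        // Print the header before the rows, not after.
        System.out.println("Id\t\tHours\tRate\tWages");
        System.out.println("--\t\t-----\t----\t-----");
        for (int i = 0; i < ids.length; i++) {
            System.out.println(ids[i] + "\t" + hours[i] + "\t" + rate[i] + "\t" + wages[i]);
        }
    }
}
```

Two things in the posted Assignment07 to compare against: emps is constructed from ids/hours/rate *before* those locals are declared (Java requires the declarations first), and setters like setIds(int) receive a single int while the fields are int[] — that mismatch is exactly the "int required but found int[]" error.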