text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
I am writing a command line interface to a Ruby gem and I have this method exit_error, which acts as an exit error point to all validations performed while processing.
exit_error
def self.exit_error(code,possibilities=[])
puts @errormsgs[code].colorize(:light_red)
if not possibilities.empty? then
puts "It should be:"
possibilities.each{ |p| puts " #{p}".colorize(:light_green) }
end
exit code
end
where @errormsgs is a hash whose keys are the error codes and whose values are the corresponding error messages.
@errormsgs
This way I may give users customized error messages writing validations like:
exit_error(101,@commands) if not valid_command? command
where:
@errormsgs[101] => "Invalid command."
@commands = [ :create, :remove, :list ]
and the user typing a wrong command would receive an error message like:
Invalid command.
It should be:
create
remove
list
At the same time, this way I may have bash scripts detecting exactly the error code who caused the exit condition, and this is very important to my gem.
Everything is working fine with this method and this strategy as a whole. But I must confess that I wrote all this without writing tests first. I know, I know... Shame on me!
Now that I am done with the gem, I want to improve my code coverage rate. Everything else was done by the book, writing tests first and code after tests. So, it would be great having tests for these error conditions too.
It happens that I really don't know how to write Rspec tests to this particular situation, when I use exit to interrupt processing. Any suggestions?
exit
Update => This gem is part of a "programming environment" full of bash scripts. Some of these scripts need to know exactly the error condition which interrupted the execution of a command to act accordingly.
For example:
class MyClass
def self.exit_error(code,possibilities=[])
puts @errormsgs[code].colorize(:light_red)
if not possibilities.empty? then
puts "It should be:"
possibilities.each{ |p| puts " #{p}".colorize(:light_green) }
end
exit code
end
end
You could write its rspec to be something like this:
describe 'exit_error' do
let(:errormsgs) { {101: "Invalid command."} }
let(:commands) { [ :create, :remove, :list ] }
context 'exit with success'
before(:each) do
MyClass.errormsgs = errormsgs # example/assuming that you can @errormsgs of the object/class
allow(MyClass).to receive(:exit).with(:some_code).and_return(true)
end
it 'should print commands of failures'
expect(MyClass).to receive(:puts).with(errormsgs[101])
expect(MyClass).to receive(:puts).with("It should be:")
expect(MyClass).to receive(:puts).with(" create")
expect(MyClass).to receive(:puts).with(" remove")
expect(MyClass).to receive(:puts).with(" list")
MyClass.exit_error(101, commands)
end
end
context 'exit with failure'
before(:each) do
MyClass.errormsgs = {} # example/assuming that you can @errormsgs of the object/class
allow(MyClass).to receive(:exit).with(:some_code).and_return(false)
end
# follow the same approach as above for a failure
end
end
Of course this is an initial premise for your specs and might not just work if you copy and paste the code. You will have to do a bit of a reading and refactoring in order to get green signals from rspec. | http://jakzaprogramowac.pl/pytanie/59268,writing-a-rspec-test-for-an-error-condition-finished-with-exit | CC-MAIN-2017-43 | refinedweb | 515 | 51.24 |
In this section we are going to swap two variables without using the third variable. For this, we have used input.nextInt() method of Scanner class for getting the integer type values from the command prompt. Instead of using temporary variable, we have done some arithmetic operations like we have added both the numbers and stored in the variable num1 then we have subtracted num2 from num1 and stored it into num2. After that, we have subtracted num2 from num1 and stored into num1. Now, we get the values that has been interchanged. To display the values on the command prompt use println() method and the swapped values will get displayed.
Here is the code:
import java.util.*; class Swapping { public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.println("Enter Number 1: "); int num1 = input.nextInt(); System.out.println("Enter Number 2: "); int num2 = input.nextInt(); num1 = num1 + num2; num2 = num1 - num2; num1 = num1 - num2; System.out.println("After swapping, num1= " + num1 + " and num2= " + num2); } }
Output: | http://www.roseindia.net/tutorial/java/core/swapnumbers.html | CC-MAIN-2014-10 | refinedweb | 172 | 67.76 |
Code style and conventions¶
Be consistent!¶
Look at the surrounding code, or a similar part of the project, and try to do the same thing. If you think the other code has actively bad style, fix it (in a separate commit).
When in doubt, ask in chat.zulip.org.
Lint tools¶
You can run them all at once with
./tools/lint
You can set this up as a local Git commit hook with
``tools/setup-git-repo``
The Vagrant setup process runs this for you.
lint runs many lint checks in parallel, including
Secrets¶
Please don’t put any passwords, secret access keys, etc. inline in the
code. Instead, use the
get_secret function in
zproject/settings.py
to read secrets from
/etc/zulip/secrets.conf.
Dangerous constructs¶
Misuse of database queries¶
Look out for Django code like this:
[Foo.objects.get(id=bar.x.id) for bar in Bar.objects.filter(...) if bar.baz < 7]
This will make one database query for each
Bar, which is slow in
production (but not in local testing!). Instead of a list comprehension,
write a single query using Django’s QuerySet
API.
If you can’t rewrite it as a single query, that’s a sign that something is wrong with the database schema. So don’t defer this optimization when performing schema changes, or else you may later find that it’s impossible.
UserProfile.objects.get() / Client.objects.get / etc.¶
In our Django code, never do direct
UserProfile.objects.get(email=foo)
database queries. Instead always use
get_user_profile_by_{email,id}.
There are 3 reasons for this:
- It’s guaranteed to correctly do a case-inexact lookup
- It fetches the user object from remote cache, which is faster
- It always fetches a UserProfile object which has been queried using .select_related(), and thus will perform well when one later accesses related models like the Realm.
Similarly we have
get_client and
get_stream functions to fetch those
commonly accessed objects via remote cache.
Using Django model objects as keys in sets/dicts¶
Don’t use Django model objects as keys in sets/dictionaries – you will get unexpected behavior when dealing with objects obtained from different database queries:
For example,
UserProfile.objects.only("id").get(id=17) in set([UserProfile.objects.get(id=17)])
is False
You should work with the IDs instead.
user_profile.save()¶
You should always pass the update_fields keyword argument to .save() when modifying an existing Django model object. By default, .save() will overwrite every value in the column, which results in lots of race conditions where unrelated changes made by one thread can be accidentally overwritten by another thread that fetched its UserProfile object before the first thread wrote out its change.
Using raw saves to update important model objects¶
In most cases, we already have a function in zerver/lib/actions.py with a name like do_activate_user that will correctly handle lookups, caching, and notifying running browsers via the event system about your change. So please check whether such a function exists before writing new code to modify a model object, since your new code has a good chance of getting at least one of these things wrong.
Naive datetime objects¶
Python allows datetime objects to not have an associated timezone, which can cause time-related bugs that are hard to catch with a test suite, or bugs that only show up during daylight savings time.
Good ways to make timezone-aware datetimes are below. We import
timezone
function as
from django.utils.timezone import now as timezone_now and
from django.utils.timezone import utc as timezone_utc. When Django is not
available,
timezone_utc should be replaced with
pytz.utc below.
timezone_now()when Django is available, such as in
zerver/.
datetime.now(tz=pytz.utc)when Django is not available, such as for bots and scripts.
datetime.fromtimestamp(timestamp, tz=timezone_utc)if creating a datetime from a timestamp. This is also available as
zerver.lib.timestamp.timestamp_to_datetime.
datetime.strptime(date_string, format).replace(tzinfo=timezone_utc)if creating a datetime from a formatted string that is in UTC.
Idioms that result in timezone-naive datetimes, and should be avoided, are
datetime.now() and
datetime.fromtimestamp(timestamp) without a
tz
parameter,
datetime.utcnow() and
datetime.utcfromtimestamp(), and
datetime.strptime(date_string, format) without replacing the
tzinfo at
the end.
Additional notes:
- Especially in scripts and puppet configuration where Django is not available, using
time.time()to get timestamps can be cleaner than dealing with datetimes.
- All datetimes on the backend should be in UTC, unless there is a good reason to do otherwise.
x.attr('zid') vs.
rows.id(x)¶
Our message row DOM elements have a custom attribute
zid which
contains the numerical message ID. Don’t access this directly as
x.attr('zid') ! The result will be a string and comparisons (e.g. with
<=) will give the wrong result, occasionally, just enough to make a
bug that’s impossible to track down.
You should instead use the
id function from the
rows module, as in
rows.id(x). This returns a number. Even in cases where you do want a
string, use the
id function, as it will simplify future code changes.
In most contexts in JavaScript where a string is needed, you can pass a
number without any explicit conversion.
JavaScript var¶
Always declare JavaScript variables using
var. JavaScript has
function scope only, not block scope. This means that a
var
declaration inside a
for or
if acts the same as a
var
declaration at the beginning of the surrounding
function. To avoid
confusion, declare all variables at the top of a function.
JS array/object manipulation¶
For generic functions that operate on arrays or JavaScript objects, you should generally use Underscore. We used to use jQuery’s utility functions, but the Underscore equivalents are more consistent, better-behaved and offer more choices.
A quick conversion table:
$.each → _.each (parameters to the callback reversed) $.inArray → _.indexOf (parameters reversed) $.grep → _.filter $.map → _.map $.extend → _.extend
There’s a subtle difference in the case of
_.extend; it will replace
attributes with undefined, whereas jQuery won’t:
$.extend({foo: 2}, {foo: undefined}); // yields {foo: 2}, BUT... _.extend({foo: 2}, {foo: undefined}); // yields {foo: undefined}!
Also,
_.each does not let you break out of the iteration early by
returning false, the way jQuery’s version does. If you’re doing this,
you probably want
_.find,
_.every, or
_.any, rather than ‘each’.
Some Underscore functions have multiple names. You should always use the
canonical name (given in large print in the Underscore documentation),
with the exception of
_.any, which we prefer over the less clear
‘some’.
More arbitrary style things¶
Line length¶
We have an absolute hard limit on line length only for some files, but we should still avoid extremely long lines. A general guideline is: refactor stuff to get it under 85 characters, unless that makes the code a lot uglier, in which case it’s fine to go up to 120 or so.
JavaScript¶
When calling a function with an anonymous function as an argument, use this style:
my_function('foo', function (data) { var x = ...; // ... });
The inner function body is indented one level from the outer function call. The closing brace for the inner function and the closing parenthesis for the outer call are together on the same line. This style isn’t necessarily appropriate for calls with multiple anonymous functions or other arguments following them.
Combine adjacent on-ready functions, if they are logically related.
The best way to build complicated DOM elements is a Mustache template
like
static/templates/message_reactions.handlebars. For simpler things
you can use jQuery DOM building APIs like so:
var new_tr = $('<tr />').attr('id', object.id);
Passing a HTML string to jQuery is fine for simple hardcoded things that don’t need internationalization:
foo.append('<p id="selected">/</p>');
but avoid programmatically building complicated strings.
We used to favor attaching behaviors in templates like so:
<p onclick="select_zerver({{id}})">
but there are some reasons to prefer attaching events using jQuery code:
- Potential huge performance gains by using delegated events where possible
- When calling a function from an
onclickattribute,
thisis not bound to the element like you might think
- jQuery does event normalization
Either way, avoid complicated JavaScript code inside HTML attributes; call a helper function instead.
HTML / CSS¶
Avoid using the
style= attribute unless the styling is actually
dynamic. Instead, define logical classes and put your styles in
external CSS files such as
zulip.css.
Don’t use the tag name in a selector unless you have to. In other words,
use
.foo instead of
span.foo. We shouldn’t have to care if the tag
type changes in the future.
Python¶
Don’t put a shebang line on a Python file unless it’s meaningful to run it as a script. (Some libraries can also be run as scripts, e.g. to run a test suite.)
Scripts should be executed directly (
./script.py), so that the interpreter is implicitly found from the shebang line, rather than explicitly overridden (
python script.py).
Put all imports together at the top of the file, absent a compelling reason to do otherwise.
Unpacking sequences doesn’t require list brackets:
[x, y] = xs # unnecessary x, y = xs # better
For string formatting, use
x % (y,)rather than
x % y, to avoid ambiguity if
yhappens to be a tuple.
Third party code¶
See our docs on dependencies for discussion of rules about integrating third-party projects. | https://zulip.readthedocs.io/en/stable/contributing/code-style.html | CC-MAIN-2018-51 | refinedweb | 1,580 | 57.67 |
PHP 5.4 and Zend Framework 2.0 Gearing up for Release. In PHP 5.4, developers will also be able to turn the
MB string on and off, so the multibyte support will be available without having to recompile PHP. According to Gutmans, that will provide a significant advantage to companies that want to have a common.
"With PHP 5.3, if you want to take advantage of the namespace support, it essentially mandated a rewrite, or at least substantial changes," Zeev Suraski, CTO of Zend told InternetNews.com. "PHP 5.4 will be more evolutionary than PHP 5.3, so it won't mandate a rewrite of code."
Zend Framework 2.0
The Zend Framework, which currently is in beta, is also set for a 2.0 release in 2012. Gutmans noted that his team has worked to make the Zend Framework (ZF) 2.0 release faster and more extensible.
According to Suraski, the main focus of Zend Framework 2.0 is to take full advantage of PHP 5.3 as well as making it easier to use.
"Developing applications in ZF 2.0 will be significantly easier than ZF 1.0," Suraski said. "You can create an application with just a few lines of code in a really elegant way."
Suraski added that Zend Framework in general is something that will help PHP developers to build more secure applications as well.
"Generally speaking, if you take advantage of Zend Framework, then a lot of the common issues that you get in applications just go away," Suraski said. "For example, if you use the database component instead of creating your own queries, then SQL injection goes away."
Sean Michael Kerner is a senior editor at InternetNews.com, the news service of Internet.com, the network for technology professionals.
Originally published on.
| http://www.developer.com/lang/php/php-5-4-and-zend-framework-2-0-gearing-up-for-release.html | CC-MAIN-2014-42 | refinedweb | 303 | 69.58 |
Don’t forget to support the lib by giving a ⭐️
How to install
CocoaPods
SwifCron is available through CocoaPods
To install it, simply add the following line in your Podfile:
pod 'SwifCron', '~> 1.3.0'
Swift Package Manager
.package(url: "", from:"1.3.0")
In your target’s dependencies add
"SwifCron" e.g. like this:
.target(name: "App", dependencies: ["SwifCron"]),
Usage
import SwifCron do { let cron = try SwifCron("* * * * *") //for getting next date related to current date let nextDate = try cron.next() //for getting next date related to custom date let nextDate = try cron.next(from: Date()) } catch { print(error) }
Limitations
I use CrontabGuru as a reference
So you could parse any expression which consists of digits with
*
,
/ and
- symbols
Contributing
Please feel free to contribute!
ToDo
- write more tests
- support literal names of months and days of week in expression
- support non-standard digits like
7for Sunday in day of week part of expression
Latest podspec
{ "name": "SwifCron", "version": "1.3.0", "summary": "u23f1 An awesome and simple pure swift cron expressions parser and scheduler", "description": "With this lib you will be able to easily parse cron strings and get next launch dates", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "MihaelIsaev": "[email protected]" }, "source": { "git": "", "tag": "1.3.0" }, "platforms": { "ios": "8.0" }, "source_files": "Sources/SwifCron/**/*", "swift_version": "4.2" }
Thu, 07 Mar 2019 11:02:06 +0000 | https://tryexcept.com/articles/cocoapod/swifcron | CC-MAIN-2019-13 | refinedweb | 227 | 61.16 |
On Mon, 2016-11-07 at 18:06 +0100, Petr Vobornik wrote: > On 11/07/2016 05:49 PM, Martin Babinsky wrote: > > On 11/07/2016 05:43 PM, Justin Mitchell wrote: > >> I have been working on a python script to setup secure NFS exports using > >> kerberos that relies heavily on FreeIPA, and is in many ways the server > >> side compliment to ipa-client-automount. It attempts to automatically > >> discover the setup, and falls back to asking simple questions, in the > >> same way as ipa-server-install et al do. > >> > >> I'm not sure quite where it would fit best in the freeipa source tree, > >> perhaps under 'client' ? > >> Also, whats would be the best way to submit the script, as a patch or a > >> github pull request ? > >> > >> thanks > >> > >> > >> > > > > If it is a server-side code then it should go into ipaserver/ namespace. > > Could you describe the use case in more details? > > IIUIC it's about configuring NFS server against IPA and not IPA server > itself as NFS server. In that case it should be IMO in client package > because NFS server is also a client from IPA's perspective.
Advertising
Yes, it is to configure the NFS server, which is already an IPA client, to provide exports to other IPA clients which may like to use ipa-client-automount > > > > > We now prefer contributions in form of Github pull-requests. > Right Okay thanks, i will set that up. -- Manage your subscription for the Freeipa-devel mailing list: Contribute to FreeIPA: | https://www.mail-archive.com/freeipa-devel@redhat.com/msg36902.html | CC-MAIN-2018-26 | refinedweb | 249 | 67.38 |
Opened 8 years ago
Closed 5 years ago
#15587 closed Bug (duplicate)
django.forms.BaseModelForm._post_clean updates instance even when form validation fails
Description
This method will update the model instance with self.cleaned_data.
self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude)
however, the cleaned_data might not be good. It is probably good to have a single line inserted before the first line of code of this method:
def _post_clean(self): if self._errors: return ...
Change History (7)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 7 years ago by
I have the same problem with model forms:
>>> from django.contrib.auth.models import User >>> from django.contrib.auth.forms import UserChangeForm >>> user = User.objects.all()[0] >>> form = UserChangeForm({'first_name': 'Vladimir', 'email': 'invalid'}, instance=user) >>> user.first_name u'' >>> form.is_valid() False >>> user.first_name u'Vladimir'
As you can see, this form changes provided instance after validation. This behavior is not expected and I'm considering this as a bug.
Or this is not a bug? Maybe this is a design decision to make it possible to validate form data in the
BaseModel.clean_fields method? This is odd for me.
Provided single-line patch in the ticket description isn't taking into account that validation error can be raised later by the model validation, so this is wouldn't fix this problem.
comment:4 Changed 7 years ago by
comment:5 Changed 7 years ago by
The question here is at which point the form is allowed to update the instance. Other tickets are also having problems here for their own reasons (#15995, #16423).
There is a strong argument for doing this while validating since ModelForm actually promises to fully validate the data (form and model). For this the instance needs to be updated.
However, there is case for reminding people that in current workflow validation implies updating the model to independently validate the model; so if anybody can create a patch for the docs explaining this.
Otherwise this will become a DDN and we need to discuss whether a form validation error means the field should not be updated on the model.
I'm afraid this bug report is very unclear - please give an example that shows an actual bug and re-open. Thanks! | https://code.djangoproject.com/ticket/15587 | CC-MAIN-2018-47 | refinedweb | 385 | 56.76 |
![: 7962
Hi,
I know this info may not be helpful.
I met a similar problem, and got stuck on it for 2 days.
Then I just try anthoer Linux host B, the problem disappeared.
After that, I reinstalled my Linux image on host A and updated it, and then the problem get solved.
Jun Jun_Zhang:
Could you please share the bootargs with us?
Huang,
Are you able to run 'dhcp' at the u-boot prompt? Does it give you the correct ip address of your target & host?
Are you using a direct connection between your EVM & PC?
Make sure your host's "/etc/exports" has the correct NFS mount listed.
Be sure to disable your host firewall.
Thank You,Michael RisleyTexas Instruments
Jun Zhang73904
Could you please share the bootargs with us?
hi, Jun
setenv bootargs 'mem=128M console=ttyO0,115200n8 root=/dev/nfs
nfsroot=10.2.1.10:/BoardNFS,nolock rw
ip=10.2.1.9:10.2.1.10:10.2.1.254:255.255.255.0:dm812x_ipnc:eth0:off'
In reply to Michael Risley:
hi, Michael
i didn't run 'dhcp' at the u-boot prompt, i use static ip only
yes, EVM & PC is direct connection
if i boot board use filesystem in flash, after boot successfully, i can mount host folder to board.
must i boot under 'dhcp' when booting use filesystem by NFS ?
In reply to huang jack:
Hi huang huang
Please ensure your file system is under nfsroot=10.2.1.10:/BoardNFS and was set under nfs server.
I use DM8168, and my EZSDK is 5_02_xx_xx,
My bootcmd is like this "bootcmd=dhcp;setenv serverip 172.16.1.62;tftpboot;bootm", and my board works well.
Good luck.
regards,
lei
In reply to Lei Wong:
hi, Lei
the nfs filesystem is ok, since i can mount it after board boot successfully.
i use static ip not 'dhcp'.
would you try static ip for a test ?
How did you populate the BoardNFS directory? That directory may not be "bootable"...as it may not have all the special files and permissions. It may also not be owned by root. Usually you would untar a rootfs image onto that directory. Since you can mount the NFS directory, you could tar up your target rootfs and untar onto your mounted NFS directory. A plain recursive copy usually doesn't copy special file properly.
Hi huang huang,
Sorry for the late reply. Have you fixed your problem?
I tested my board using static ip, and it failed.
I set it just like this "setenv bootargs 'console=ttyO2,115200n8 rootwait rw mem=256M earlyprink notifyk.vpssm3_sva=0xBF900000 vram=50M ti816xfb.vram=0:16M,1:16M,2:6M root=/dev/nfs nfsroot=172.16.1.62:/home/xiaoxiao/targetfs2 ip=172.16.1.61:172.16.1.62:172.16.1.1:255.255.255.0:dm816x-evm:eth0:off'"
The booting message shows as below.
--------------------------------------------------------------------------------------------------------------------
net eth0: attached PHY driver [Generic PHY] (mii_bus:phy_addr=0:01, id=282f013)
IP-Config: Complete:
device=eth0, addr=172.16.1.61, mask=255.255.255.0, gw=172.16.1.1,
host=dm816x-evm, domain=, nis-domain=(none),
bootserver=172.16.1.62, rootserver=172.16.1.62, rootpath=
PHY: 0:01 - Link is Up - 1000/Full
VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "nfs")
Backtrace:
[<c0046b44>] (dump_backtrace+0x0/0x110) from [<c0364b48>] (dump_stack+0x18/0x1c)
r7:cc813000 r6:00000000 r5:c002c014 r4:c04d9110
[<c0364b30>] (dump_stack+0x0/0x1c) from [<c0364bac>] (panic+0x60/0x17c)
[<c0364b4c>] (panic+0x0/0x17c) from [<c0009254>] (mount_block_root+0x1e0/0x220)
r3:00000000 r2:00000000 r1:cc82bf58 r0:c041a9dc
[<c0009074>] (mount_block_root+0x0/0x220) from [<c0009340>] (mount_root+0xac/0xc
c)
[<c0009294>] (mount_root+0x0/0xcc) from [<c00094d0>] (prepare_namespace+0x170/0x
1d4)
r4:c04d8a24
[<c0009360>] (prepare_namespace+0x0/0x1d4) from [<c0008784>] (kernel_init+0x114/
0x154)
r5:c0008670 r4:c04d89c0
[<c0008670>] (kernel_init+0x0/0x154) from [<c006aeec>] (do_exit+0x0/0x5e4)
r5:c0008670 r4:00000000
Hi, Lei
Sorry for the late reply, and thanks for your try. board still boot fail when using static IP. I'm waiting for the corresponding release SDK for my board.. | https://e2e.ti.com/support/processors/f/791/t/132915 | CC-MAIN-2019-13 | refinedweb | 685 | 75.81 |
# Lua in Moscow 2019: Interview with Roberto Ierusalimschy

Some time ago, the creator of Lua programming language, Roberto Ierusalimschy, visited our Moscow office. We asked him some questions that we prepared with the participation of Habr.com users also. And finally, we’d like to share full-text version of this interview.
**— Let’s start with some philosophical matters. Imagine, if you recreated Lua from scratch, which three things would you change in Lua?**
— Wow! That’s a difficult question. There’s so much history embedded in the creation and development of the language. It was not like a big decision at once. There are some regrets, several of which I had a chance to correct over the years. People complain about that all the time because of compatibility. We did it several times. I’m only thinking of small things.
**— Global-by-default? Do you think this is the way?**
— Maybe. But it’s very difficult for dynamic languages. Maybe the solution will be to have no defaults at all, but would be hard to use variables then.
For instance, you would have to somehow declare all the standard libraries. You want a one-liner, `print(sin(x))`, and then you’ll have to declare ‘print’ and also declare ‘sin’. So it’s kinda strange to have declarations for that kind of very short scripts.
Anything larger should have no defaults, I think. Local-by-default is not the solution, it does not exist. It’s only for assignments, not for usage. Something we assign, and then we use and then assign, and there’s some error — completely mystifying.
Maybe global-by-default is not perfect, but for sure local-by-default is not a solution. I think some kind of declaration, maybe optional declaration… We had this proposal a lot of times — some kind of global declaration. But in the end, I think the problem is that people start asking for more and more and we give up.
(sarcastically) Yes, we are going to put some global declaration — add that and that and that, put that out, and in the end we understand the final conclusion will not satisfy most people and we will not put all the options everybody wants, so we don’t put anything. In the end, strict mode is a reasonable compromise.
There is this problem: more often than not we’re using fields inside the modules for instance, then you have the same problems again. It’s just one very specific case of mistakes the general solution should probably include. So I think if you really want that, you should use a statically typed language.
**— Global-by-default is also nice for small configuration files.**
— Yes, exactly, for small scripts and so on.
**— No tradeoffs here?**
— No, there are always tradeoffs. There’s a tradeoff between small scripts and real programs or something like that.
**— So, we’re getting back to the first big question: three things you would change if you had the chance. As I see it, you are quite happy with what we have now, is that right?**
— Well, it’s not a big change, but still… Our bad debt that became a big change is nils in tables. It’s something I really regret. I did that kind of implementation, a kind of hack… Did you see what I did? I sent a version of Lua about six months or a year ago that had nils in tables.
**— Nil values?**
— Exactly. I think it was called nils in tables — what’s called null. We did some hack in the grammar to make it somewhat compatible.
**— Why is it needed?**
— I’m really convinced that this is a whole problem of holes… I think that most problems of nils in arrays would disappear, if we could have [nils in tables]… Because the exact problem is not nils in arrays. People say we can’t have nils in arrays, so we should have arrays separated from tables. But the real problem is that we can’t have nils in tables! So the problem is with the tables, not the way we represent arrays. If we could have nils in tables, then we would have nils in arrays without anything else. So this is something I really regret, and many people don’t understand how things would change if Lua allowed nils in tables.
**— May I tell you a story about Tarantool? We actually have our own implementation of null, which is a CDATA to a null-pointer. We use it where gaps in memory are required. To fill positional arguments when we make remote calls and so on. But we usually suffer from it because CDATA is always converted to ‘true’. So nils in arrays would solve a lot of our problems.**
— Yeah, I know. That’s exactly my point — this would solve a lot of problems for a lot of people, but there’s a big problem of compatibility. We don’t have the energy to release a version that is so incompatible and then break the community and have different documentation for Lua 5 and Lua 6 etc. But maybe one day we’ll release it. But it’s a really big change. I think it should have been like that since the beginning — if it was, it would be a trivial change in the language, except for compatibility. It breaks a lot of programs, in very subtle ways.
**— What are the downsides except for compatibility?**
— Besides compatibility, the downside is that we would need two new operations, two new functions. Like ‘delete key’, because assigning nil would not delete the key, so we would have a kind of primitive operation to delete the key and really remove it from the table. And ‘test’ to check where exactly to distinguish between nil and absent. So we need two primitive functions.
**— Have you analyzed the impact of this on real implementations?**
— Yes, we released a version of Lua with that. And as I said, it breaks code in many subtle ways. There are people who do table.insert(f(x)) — a call to a function. And it’s on purpose, it’s by design that when a function doesn’t want to insert anything, it returns nil. So instead of a separate check «do I want to insert?», then I call a table.insert, and knowing that if it’s nil, it won’t be inserted. As everything in every language, a bug becomes a feature, and people use the feature — but if you change it, you break the code.
**— What about a new void type? Like nil, but void?**
— Oh no, this is a nightmare. You just postpone the problem, if you put another, then you need another and another and another. That’s not the solution. The main problem — well, not main, but one of the problems — is that nil is already ingrained in a lot of places in the language. For instance, a very typical example. We say: you should avoid nils in arrays, holes. But then we have functions that return nil and something after nil, so we get an error code. So that construction itself assumes what nil represents… For instance, if I want to make a list of returns of that function, just to capture all of these returns.
**— That’s why you have a hack for that. :)**
— Exactly, but you don’t have to use hacks for so primitive and obvious [issue]. But the way the libraries are built… I once thought of that — maybe the libraries should return false instead of nil — but it’s a half-cooked solution, it solves only a small part of the problem. The real problem, as I said, is that we should have nils in tables. If not, maybe we should not use nils as frequently as we do now. It’s all kinda messy. So if you create a void, these functions would still return a nil, and we’d still have this problem unless we create a new type and the functions would return void instead of nil.
**— Void could be used to explicitly tell that the key should be kept in a table — key with a void value. And nil can act as before.**
— Yes, that’s what I mean. All the functions in the libraries should return void or nil.
**— They can still return nil, why not?**
— Because we’d still have the problem that you cannot capture some functions.
**— But there won’t be a first key, only a second key.**
— No, there won’t be a second key, because the counting will be wrong and you’ll have a hole in the array.
**— Yes, so are you saying that you need a false metamethod?**
— Yes. My dream is something like that:
`{f(x)}`
You should capture all returns of the function `f(x)`. And then I can do `%x` or `#x`, and that will give me the number of returns of the function. That’s what a reasonable language should do. So creating a void will not solve that, unless we had a very strong rule that functions should never return nil, but then why do we need nil? Maybe we should avoid it.
**— Roberto, will there be a much stronger static analysis support for Lua? Like «Lua Check on steroids». I know it won’t solve all the problems, of course. You’re saying this is a feature for 6.0, if ever, right? So if in a 5.x there will be a strong static analysis tool — if man-hours and man-years were invested — would it really help?**
— No, I think a really strong static analysis tool is called… type system! If you want a really strong tool you should use a statically typed language, something like Haskell or even something with dependent types. Then you’ll have really strong analysis tools.
**— But then you don’t have Lua.**
— Exactly, Lua is for…
**— Imprecise? I really enjoyed your giraffe picture on static and dynamic types.**
— Yes, my last slide.

*The final slide from Roberto Ierusalimschy's talk «Why (and why not) Lua?»
at Lua in Moscow 2019 conference*
**— For our next prepared question, let’s return to that picture. If I got it right, your position is that Lua is a small nice handy tool for solving not very large tasks.**
— No, I think you can do some large tasks, but not with static analysis. I strongly believe in tests. By the way, I disagree with you on coverage, your opinion is we should not chase coverage… I mean, I fully agree that coverage does not imply full test, but non-coverage implies a zero percent test. So I gave a talk about a testing room — you were there in Stockholm. So I started my test with [a few] bugs — that’s the strangest thing — one of them was famous, the other was completely non-famous. It’s something completely broken in a header file from Microsoft, C and C++. So I search the web and nobody cares about it or even noticed it.
For instance, there’s a mathematical function, [modf()](https://en.cppreference.com/w/cpp/numeric/math/modf), where you have to pass a pointer to a double because it returns two doubles. We translate the integer part of the number or the fractional part. So this is a part of a standard library for a long time now. Then came C 99, and you need this function for floats. And the header file from Microsoft simply kept this function and declared another one as a macro. So it got this one into type casts. So it cast the double to float, ok, and then it cast the pointer to double for pointer to float!
**— Something is wrong in this picture.**
— This is a header file from Visual C++ and Visual C 2007. I mean, if you called this function once, with any parameters, and checked the results — it would be wrong unless it’s zero. Otherwise, any other value will be wrong. You would never ever use this function. Zero coverage. And then there’s a lot of discussions about testing… I mean, just call a function once, check the results! So it’s there, it’s been there for a long time, for many years nobody cared. One very famous was in Apple. [Something like](https://habr.com/ru/post/213525/) "`if… what… goto… ok`", it was something like that. Someone put another statement here. And then everything was going to ok. And there was a lot of discussions that you should have the rules, the brackets should be mandatory in your style, etc., etc. Nobody mentioned that there are a lot of other ifs here. That has never been executed…
**— There’s also a security problem as far as I remember.**
— Yes, exactly. Because they were only testing approved cases. They were not testing anything, because everything would be approved. It means there is not a single test case in the security application that checks whether it refuses some connection or whatever it is that it should refuse. So everyone discuss and say they should have brackets… They should have tests, minimum tests! Because nobody has ever tested that, that’s what I mean by coverage. It’s unbelievable how people don’t do basic tests. Because if they were doing all basic tests, then of course, it’s a nightmare to do all the coverage and execute all the lines, etc. People neglect even basic tests, so coverage is at least about the minimum. It is a way to call the attention to some parts of the program that you forgot about. It is a kind of guide on how to improve your tests a little.
**— What’s test coverage in Tarantool? 83%! Roberto, what’s Lua test coverage?**
— About 99.6. How many lines of code do you have? A million, hundreds of thousands? These are huge numbers. One percent of hundred thousand is a thousand lines of code that were never tested. You did not execute it at all. Your users don’t test anything.
**— So there are like 17 percent of Tarantool features that are not currently used?**
— I’m not sure if you want to unstack everything back to where we were… I think one of the problems with dynamic languages (and static languages for that matter) is that people don’t test stuff. Even if you have a static language, unless you have something — not even like Haskell, but Coq, — some proof system, you change that for that or that. No static analysis tool can catch these errors, so you do need tests. And if you have the tests, you detect global problems, rename misspellings, etc. All these kinds of errors. You should have these tests anyway, maybe sometimes it’s a little bit more difficult to debug, sometimes it’s not — depends on the language and the kind of bug. But the problem is that no static analysis tool can allow you to avoid tests. The tests, on the other hand… well, they never prove the absence of error, but I feel much more secure after all the tests.
**— We have a question about testing Lua modules. As a developer, I want to test some local functions which may be used later. The question is: we want to have a coverage of about 99 percent, but for the API this module produces, the number of functional cases it should produce is much lower than the functionality it supports internally.**
— Why is that, sorry?
**— There is some functionality which is not reachable by the public interface.**
— If there is functionality that is not reachable by the public interface, it shouldn’t be there, just erase it. Erase that code.
**— Just kill it?**
— Yes, sometimes I do that in Lua. There was some code coverage, I couldn’t get there or there or there, so I thought it was impossible and just removed the code. It’s not that common, but happened more than once. Those cases were impossible to happen, you just put an assertion to comment on why it cannot happen. If you cannot get inside your functions from the public API, it shouldn’t be there. We should code the public API with incorrect input, that’s essential for the tests.
**— Remove code, removal is good, it reduces complexity. Reduced complexity increases maintainability and stability. Keep it simple.**
— Yes, extreme programming had this rule. If it’s not in a test, then it doesn’t exist.
**— What languages inspired you when you created Lua? Which paradigms or functional specialties or parts of these languages did you like?**
— I designed Lua for a very specific purpose, it was not an academic project. That’s why when you ask me if I’d create it again, I say there’s lots of historical stuff on the language. I did not start with ‘Let me create the language I want or want to use or everybody needs etc. My problem was ‘This program here needs a configuration language for geologists and engineers, and I need to create some small language they could use with an easy interface. That’s why the API was always an integral part of the language, because it’s easier to be integrated. That was the goal. What I had in my background, it’s a lot of different languages at that time… about ten. If you want all of the background…
**— I was interested in languages that you wanted to include in Lua.**
— I was getting things from many different languages, whatever fitted the problem I had. The single biggest inspiration was the Modula language for syntax, but otherwise, it’s difficult to say because there are so many languages. Some stuff came from AWK, it was another small inspiration. Of course, Scheme and Lisp… I was always fascinated with Lisp since I started programming.
**— And still no macros in Lua!**
— Yes, there is much difference in syntax. Fortran, I think, was the first language… no, the first language I learned was Assembly, then came Fortran. I studied, but never used CLU. I did a lot of programming with Smalltalk, SNOBOL. I also studied, but never used Icon, it’s also very interesting. A lot came from Pascal and C. At the time I created Lua, C++ was already too complex for me — and that was before the templates, etc. It was 1991, and in 1993 Lua was started.
**— The Soviet Union fell and you started creating Lua. :) Were you bored with semicolons and objects when you started working on Lua? I would expect that Lua would have a similar syntax to C, because it is integrated to C. But…**
— Yes, I think it’s a good reason not to have similar syntax — so you don’t mix them, these are two different languages.
It’s something really funny and it’s connected to the answer you didn’t allow me [at the conference] to give on arrays starting at 1. My answer was too long.
When we started Lua, the world was different, not everything was C-like. Java and JavaScript did not exist, Python was in an infancy and had a lower than 1.0 version. So there was not this thing when all the languages are supposed to be C-like. C was just one of many syntaxes around.
And the arrays were exactly the same. It’s very funny that most people don’t realize that. There are good things about zero-based arrays as well as one-based arrays.
The fact is that most popular languages today are zero-based because of C. They were kind of inspired by C. And the funny thing is that C doesn’t have indexing. So you can’t say that C indexes arrays from zero, because there is no indexing operation. C has pointer arithmetic, so zero in C is not an index, it’s an offset. And as an offset, it must be a zero — not because it has better mathematical properties or because it’s more natural, whatever.
And all those languages that copied C, they do have indexes and don’t have pointer arithmetic. Java, JavaScript, etc., etc. — none of them have pointer arithmetic. So they just copied the zero, but it’s a completely different operation. They put zero for no reason at all — it’s like a cargo cult.
**— You’re saying it’s logical if you have a language embedded in C to make it with C-like syntax. But if you have a C-embedded language, I assume you have C programmers who want the code to be in C and not some other language, which looks like C, but isn’t C. So Lua users were never supposed to use C daily? Why?**
— Who uses C every day?
**— System programmers.**
— Exactly. That’s the problem, too many people use C, but should not be allowed to use it. Programmers ought to be certified to use C. Why is software so broken? All those hacks invading the world, all those security problems. At least half of them is because of C. It is really hard to program in C.
**— But Lua is in C.**
— Yes, and that’s how we learned how hard it is to program in C. You have buffer overflows, you have integer overflows that cause buffer overflows… Just get a single C program that you can be sure that no arithmetic goes wrong if people put any number anywhere and everything is checked. Then again, real portability issues — maybe sometimes in one CPU it works, but then it gets to the other CPU… It’s crazy.
For instance, very recently we had a problem. How do you know your C program does not do stack overflow? I mean stack depth, not stack overflow because you invaded… How many calls you have a right to do in a C program?
**— Depends on a stack size.**
— Exactly. What the standard says about that? If you code in C and then you do this function that calls this function that calls this function… how many calls can you do?
**— 16 thousand?**
— I may be wrong, but I think the standard says nothing about that.
**— I think there is nothing in the standard because it’s too dependent on the size.**
— Of course, it depends on the size of each function. It may be huge, automatic arrays in the function frame… So the standard says nothing and there is no way to check whether a call will be valid. So you may have a single problem if you have three step calls, it can crash and still be a valid C program. Correct according to the standard — though it’s not correct because it crashes. So it’s very hard to program in C, because there are so many… Another good example: what is the result when you subtract two pointers? No one here works with C?
**— No, so don’t grill them. But C++ supports different types.**
— No, C++ has the same problem.
**— What’s the type of declaration? `ptrdiff_t`?**
— Exactly, `ptrdiff_t` is a signed type. So typically, if you have a standard memory the size of your word and you subtract two pointers in this space, you cannot represent all the sizes in the signed type. So, what does the standard say about that?
When you subtract two pointers, if the answer fits in a pointer diff, then that is the answer. Otherwise, you have undefined behavior. And how do you know if it fits? You don’t. So whenever you subtract two pointers, usually you know that’s out of standard, that if you’re pointing to anything larger than at least 2 bytes, then the larger size would be half the size of the memory, so everything is ok.
So you’re only having a problem if you’re pointing to bytes or characters. But when you do that, you have a real problem, you can’t do pointer arithmetic without worrying that you have a string larger than half of the memory. And then I can’t just compute the size and store in a pointer diff type because it’s wrong.
That’s what I mean about having a secure C or C++ program that’s really safe.
**— Have you considered implementing Lua in a different language? Change it from C to something else?**
— When we started, I considered C++, but as I said I gave up using it because of complexity — I cannot learn the whole language. It should be useful to have some stuff from C++ but… even today I don’t see any language that would do.
**— Can you explain why?**
— Because I have no alternatives. I can only explain why against other languages. I’m not saying C is perfect or even good, but it’s the best. To explain why, I need to compare it with other languages.
**— What about JVM?**
— Oh, JVM. Come on, it doesn’t fit in half the hardware… Portability is the main reason, but performance too. In JVM it’s a little better than .NET, but it’s not that different. A lot of things that Lua does we can’t do with JVM. You cannot control the garbage collector for instance. You have to use JVM garbage collector because you can’t have a different garbage collector implemented on top of JVM. JVM is also a huge consumer of memory. When any Java program starts to say hello, it’s like 10 MB or so. Portability is an issue not because it wasn’t ported, but because it cannot be ported.
**— What about JVM modifications like Mobile JVM?**
— That’s not JVM, that’s a joke. It’s like a micro edition of Java, not Java.
**— How about other static languages like Go or Oberon? Could they be the basis for Lua if you created it today?**
— Oberon… might be, it depends… Go, again, has a garbage collector and has a runtime too big for Lua. Oberon would be an option, but Oberon has some very strange things, like you almost don’t have constants, if I recall correctly. Yeah, I think they removed const from Pascal to Oberon. I had a book on Oberon and loved Oberon. Its [system](https://en.wikipedia.org/wiki/Oberon_(operating_system)) was unbelievable, it’s really something.
I remember that in 1994 I saw a demonstration of Oberon and Self. You know Self? It’s a very interesting dynamic language with jit-compilers etc… I saw these demos a week apart, and Self was very smart, they used some techniques from cartoons to disguise the slowness of the operations. Because when you opened something, it was like ‘woop!’ — first it reduces a little, then expands with some effects. It was implemented very well, these techniques they used to simulate movement…
Then a week later we saw a demo of Oberon, it was running on like 1/10th of hardware for Self — there was this very old small machine. In Oberon you click and then just boom, everything works immediately, the whole system was so light.
But for me it’s too minimalistic, they removed constants and variant types.
**— Haskell?**
— I don’t know Haskell or how to implement Lua in Haskell.
**— And what’s your attitude to languages like Python or R or Julia as a basis for future implementations of Lua?**
— I think every one of these has its uses.
R seems to be good for statistics. It’s very domain specific, done by people in the area, so this is a strength.
Python is nice, but I had personal problems with it. I thought I mentioned it in my talk or the interview. That thing about not knowing the whole language or not using it, the subset fallacy.
We use Python in our courses, teaching basic programming — just a small part, loops and integers. Everybody was happy, and then they said it would be nice to have some graphical applications, so we needed some graphical library. And almost all graphical libraries, you get the API… But I don’t know Python enough, this is much-advanced stuff. It has the illusion it’s easy and I have all these libraries for everything, but it’s either easy or you have everything.
So when you start using the language, then you start: oh, I have to learn OOP, inheritance, whatever else. Every single library. It looks like authors take pride in using more advanced language features in their API to show I don’t know what. Function calls, standard types, etc. You have this object, and then if you want another thing then you have to create another object…
Even the pattern matching, you can do some simple stuff, but usually the standard pattern matching is not something you do. You do a matching, an object returns a result and then you call methods on that object result to get the real result of the match. Sometimes there is a simpler way to use but it’s not obvious, it’s not the way most people use.
Another example: I was teaching a course on pattern matching and wanted to use Perl-like syntax, and I couldn’t use Lua because of a completely different syntax. So I thought Python would be the perfect example. But in Python there are some direct functions for some basic stuff but for anything more complex you’d have to know objects and methods etc. I just wanted to do something and have the result.
**— What did you end up using?**
— I used Python and explained to them. But even Perl is much simpler, you do the match and the results are $1, $2, $3, it’s much easier, but I don’t have the courage to use Perl, so…
**— I was using Python for two years before I noticed there were decorators. (question by Yaroslav Dynnikov from Tarantool team)**
— Yes, and when you want to use a library, then you have to learn this stuff and you don’t understand API etc. Python gives an illusion that it’s easy but it’s quite complex.
...And Julia, I don’t know much about Julia, but it reminded me of LuaJIT in the sense that sometimes it looks like user’s pride. You can have very good results but you have to really understand what’s going on. It’s not like you write code and get good results. No, you write code and sometimes the results are good, sometimes they are horrible. And when the results are horrible, you have a lot of good tools that show you the intermediate language that was once generated, you check it and then you go through all this almost assembly code. Then you realize: oh, it’s not optimizing that because of that. That’s the problem of programmers, they like games and sometimes they like stuff because it’s difficult, not because it’s easy.
I don’t know much about Julia, but I once saw a talk about it. And the guy talking, he was the one to have this point of view: see how nice it is, we wrote this program and it’s perfect. I don’t remember much, something about matrix multiplication I guess. And then the floats are perfect, then the doubles are perfect, and then they put complex [numbers]… and it was a tragedy. Like a hundred times slower.
(sarcastically) ‘See how nice it is, we have this tool, we can see the whole assembly [listing], and then you go and change that and that and that. See how efficient this is’. Yes, I see, I can program in assembly directly.
But that was just one talk. I studied a little R and have some user experience with Python for small stuff.
**— What do you think of Erlang?**
— Erlang is a funny language. It has some really good uses, fault tolerance is really interesting. But they claim it’s a functional language and the whole idea of the functional language is that you don’t have a state.
And Erlang has a huge hidden state in the messages that are sent and not yet received. So each little process is completely functional but the program itself is completely non-functional.
It’s a mess of hidden data that is much worse than global variables because if it were global variables, you would print them. Messages that are the real state of your system. Every single moment, what’s the state of the system? There are all these messages sent here and there. It’s completely non-functional, at all.
**— So Erlang lies about being functional and Python lies about being simple. What does Lua lie about?**
— Lua lies a bit about being small. It’s still smaller than most other languages, but if you want a really small language then Lua is larger than you want it to be.
**— What’s a small language then?**
— Forth is, I love Forth.
**— Is there a room for a smaller version of Lua?**
— Maybe, but it’s difficult. I love tables but tables are not very small. If you want to represent small stuff, the whole idea behind tables will not suit you. It would be syntax of Lua, we’d call it Lua but it’s not Lua.
It would be just like Java micro edition. You call it Java but does it have multi-threading? No, it doesn’t. Does It have a reflection? No, it doesn’t. So why use it? It has a syntax of Java, the same type system but it’s not Java at all. It’s a different language that is easier to learn if you know Java but it’s not Java.
If you want to make a small language that looks like Lua but Lua without tables is not… Probably you should have to declare tables, something like FFI to be able to be small.
**— Are there any smaller adaptations of Lua?**
— Maybe, I don’t know.
**— Is Lua ready for pure functional programming? Can you do it with Lua?**
— Of course, you can. It’s not particularly efficient but only Haskell is really efficient for that. If you start using monads and stuff like that, create new functions, compose functions etc… You can do [that] with Lua, it runs quite reasonably, but you need implementation techniques different from normal imperative languages to do something really efficient.
**— Actually, there is a library for functional programming in Lua.**
— Yes, it’s reasonable and usable, if you do really need performance; you can do a lot of stuff with it. I love functional stuff and I do it all the time.
**— My question is more about the garbage collector, because we only have only mutable objects and we have to use them efficiently. Will Lua be good for that?**
— I think a new incarnation of garbage collector will help a lot, but again…
**— Young die young? The one that seems to work with young objects?**
— Exactly, yes. But as I said even with the standard garbage collector we don’t have optimal performance but it can be reasonable. More often you don’t even need that performance for most actions unless you are writing servers and having big operations.
**— What functional programming tasks do you perform in Lua?**
— A simple example. My book, I’m writing my own format and I have a formatter that transforms that in LaTex or DocBook. It’s completely functional, it has a big pattern matching… It’s slightly inspired by LaTex but much more uniformed. There’s @ symbol instead of backslash, a name of a macro and one single argument in curly brackets. So I have gsub that recognizes this kind of stuff and then it calls a function, the function does something and returns something. It’s all functional, just functions on top of functions on top of functions, and the final function gives a big result.
**— Why don’t you program with LaTeX?**
— Plain LaTeX? First, it’s too tricky for a lot of stuff and so difficult. I have several things that I don’t know how to do in LaTex. For example, I want to put a piece of inline code inside a text. Then there is a slash verb, standard stuff. But slash verb gives fixed space. And the space between stuff is never right. All real spaces are variable, it depends on how the line is adjusted, so it expands in some spaces and compacts in others depending on a lot of stuff. And those spaces are fixed, so sometimes they look too large, sometimes too small. It also depends on what you put in code.
**— But you still render your own format to LaTeX?**
— Yes, but with a lot of preprocessing. I write my own verb but then it changes and becomes not a verb but a lot of stuff. For example, when I write 3+1 I write a very small space here. In verb, if I don’t put any space here, it shrinks, and if I do, it’s too large. So I do the preprocessing, inserting a variable space. It’s very small but can be a little larger if it needs to adjust. But if I put ‘and’ after 1 then I put a larger space. This function here does all that. This is a small example but there are other things…
**— Do you have a source?**
— I do have the source, it is in the [git](https://github.com/lua/lua/blob/master/manual/manual.of). The program’s called [2html](https://github.com/lua/lua/blob/master/manual/2html). The current version only generates HTML… Sorry, that’s a kind of a mess. I created it for a book but also another one for the manual. The one in the git is for the manual. But the other one is more complicated and not public, I can’t make it public. But the main problem is that TeX is not there. It’s almost impossible to process TeX files without TeX itself.
**— Is it not machine-readable?**
— Yes, it’s not machine-readable. I mean, it is readable because TeX reads it. It’s so hard to test, so many strange rules etc. So this is much more uniformed and as I said I generate DocBook format, sometimes I need it. That started when I had this contract for a book.
**— So you use 2html to generate DocBook?**
— Yes, it generates DocBook directly.
**— Ok, thank you very much for the interview!**
---
If you have any more questions, you can ask them in the [Lua Mailing List](https://www.lua.org/lua-l.html). See you soon! | https://habr.com/ru/post/459466/ | null | null | 6,527 | 74.9 |
Hello everyone,
I've recently been introduced to arrays in my java module and i'm stuck on a question:
Write a program which reads a sequence of words, and prints a count of the number of distinct words. An example of input/output is
should I stay or should I go
5 distinct words
Could anybody explain to me how I would go about checking the array to see if each word has occurred already? Is linear searching the best route to take? Thanks for any help given. Oh and class Console is just a more convenient version of ConsoleReader()
Even though this is probably all wrong, here is my attempted code so far:
Code Java:
public class DistinctWords { public static void main(String[] args) { String[] words = new String[10]; // Allow up to 10 words int count=0; int distinctWord=0; System.out.println("Enter sentence:"); while(!Console.endOfFile()) { words[count]=Console.readToken(); count++; } int i=1; while(i<count && !words[i].equals(words[i-1])) { i++; } if(i<count) // found System.out.println("Found"); }//end main }//end class | http://www.javaprogrammingforums.com/%20collections-generics/8024-help-array-strings-linear-search-printingthethread.html | CC-MAIN-2018-09 | refinedweb | 180 | 62.78 |
The openide was designed around TopManager. That was not bad decision then, but
as seen from currrent point of view the TopManager connects too much
functionality together and makes separation of openide into smaller pieces hard.
In present days it is obsolete because of Lookup and some parts has already been
modified to use it, but still there is a bunch of other libraries that need to
be changed not to depend on TopManager.
ErrorManager.getDefault () should simplify life of all module writers:
issue 16854
Lookup should be enhanced to allow "standard" registration of entries
- issue 14722
Probably all libraries which are not yet fixed should be listed here,
and separate tasks filed.
Also we need a task for someone to write some kind of test suite that
can be automated and which checks that library separation is not
violated: perhaps that things are separately compilable (will not work
for e.g. TMUtil classes); or that all classes reachable (at compile
time) from a given JAR's entry points can be loaded with no
NoClassDefFoundError's. The latter might be possible using a bytecode
analyzer, say. Such a test will keep us honest. See issue #17431.
So is this task FIXED or what?
Well, I am not sure. We can continue to work on additional stuff -
move of org.openide.src.* to java modules, etc. In next versions. I'll
change the target milestone and assign it to myself to specify the
subtasks.
A branch sepration_19443 has been created for modules nbbuild, core
and openide. The goal is to remove compat.jar and cleanup
interdependencies between different parts of openide while still
maintaining compatibility.
A branch separation_19443 has been created for modules nbbuild,
core and openide. The goal is to remove compat.jar and cleanup
interdependencies between different parts of openide while still
maintaining compatibility.
Work finished, updated with main trunk and available in xtest_19433
branch. The xtest has yet to be updated. Use:
cvs co -r xtest_19433 nbbuild core openide xtest
to get that branch.
The issue is not blocked by issue 25807, the xtest framework does not
work. Has to be solved before we can integrate it.
Jesse, I have merged the xtest_19433 into the trunk and then remove
the changes. Trung made me recognize that you are the person that
should check the changes in because it most of all affects module
system and you are the responsible owner of it.
So please do the review, do changes which you want, and then apply the
xtest_19433 branch. Please keep in mind that these changes require a
work by release engineering (sigtest) and xtest (use of lib instead of
lib/ext) which should be done in synch in one day.
I'd approciate if the changes were done soon in release cycle, because
of possible hidden problems and also because this is necessary for
clean up of the APIs.
Created attachment 6933 [details]
Revised patch
I have attached a complete patch (nb_all, nbextra) against the current
trunk. It compiles and passes unit tests OK.
Unfortunately the goal of maintaining backward compatibility does not
work. I tried it with AbstractNode: I made a class:
package org;
import org.openide.nodes.*;
public class Foo {
public static void main(String[] x) {
AbstractNode n = new AbstractNode(Children.LEAF);
System.out.println(n.getCookieSet());
}
}
and compiled against a copy of AbstractNode.java that had getCookieSet
marked as public. I confirmed that the compiled class would run
against the patched AbstractNode.class but not regular openide.jar.
I then tried to run this class (precompiled) using internal execution
in an IDE built with this patch. It threw a VerifyError, indicating
that to the VM, getCookieSet is still protected.
Which is what I would expect: essentially the VM now sees:
public class AbstractNodePatch extends Node {
// ...
public abstract CookieSet getCookieSet();
}
public class AbstractNode extends AbstractNodePatch {
// ...
protected CookieSet getCookieSet() {...}
}
so classes compiled against AbstractNode see the method as protected,
not public.
So I have a number of problems with this patch.
1. Backward compatibility does not, in fact, work.
2. Some of the progress we previously made using openide-compat.jar
for patching is apparently gone in this system. For example,
CompilerGroup's add/removeCompilerListener is no longer final. And
Toolbar's bogus fields are now once again available for modification
to someone compiling from trunk source. So we actually have a
regression in the list of things which we successfully made clean
while keeping compatibility: even assuming we fix #1, we will have
reverted some of the APIs to their older, pre-cleanup state.
3. The stated goals of this issue report are not covered by the
current patch: it says "The goal is to remove compat.jar and cleanup
interdependencies between different parts of openide while still
maintaining compatibility.", of which we did not do the first (it is
only moved and changed format, not removed) and did not do the second.
So really this patch should be a sub-issue and keep #19443 as a
tracking issue.
4. There is not a clear explanation anywhere here of what is solved by
using the superclass patching that did not work using simple bytecode
overriding and the compatkit. If we are going to change how we handle
backwards compatibility, to a more complicated system, there must be
some justification of what we will get from it. Maybe you told me at
some point, but I cannot remember now, and it is not written down here.
5. For Martin specifically: I know I have asked before, but again
please consider removing Module.java from xtest ASAP; the patching
required to make a org.netbeans.core.modules.Module is getting really
ugly, and I have lost track of what changes there are in core.jar not
in xtest-ext.jar. I tried to apply some changes here, but I'm not at
all sure I got it right. The tests do seem to run as expected.
Concerning XTest, Module.java patch is not available in XTest anymore
(xtest_19433 branch), all the required functinality was moved directly
to Module.java in core (see lines 803 to 826).
Well, if "1. Backward compatibility does not, in fact, work." then I
am pretty stupid. I tried that using scripting console and it worked
well, but if that does not work in Java the whole thing is
meaningless. I try to check it out.
"Module.java patch is not available in XTest anymore (xtest_19433
branch)": here is another very good reason why a branch base tag is
nice! Module.java has no xtest_19433 tag at all, but it also has no
build tags, so there is no indication whatsoever in this file that it
does not exist on the branch (such as a dead rev). If it is supposed
to be deleted on a branch, you actually have to delete it... I spent
some time trying to patch it. Is this file
(xtest/src/org/netbeans/core/modules/Module.java) in fact supposed to
be deleted as part of the patch? (Don't worry, I'm not going to commit
anything in nb_all/xtest/ nor nbextra/xtest/, just asking so I can
test the patch in full first.)
Please switch to issue #26126 for any further comments re. the
superclass replacement patch.
Sorry Jesse, I forgot to create branch tag, I just created a simple
tag, so Module.java and org.netbeans.Main.java are supposed to be
deleted. Now the probem should be corrected (I created xtest_19433 branch)
OK, well I am creating a new branch as part of #26126 anyway. I will
make sure Main.java and Module.java are deleted in this branch in
xtest sources.
Things which are proposed:
- do #20898, i.e. move all of org.openide.src plus SourceCookie to
java.netbeans.org
- make a new autoload module for org.openide.execution plus
ExecCookie; some kind of interaction with core/execution, i.e.
core/execution serves as an impl companion module
- make a new deprecated autoload module for org.openide.compiler plus
CompilerCookie plus CompilerSupport; core/compiler serves as (again
deprecated) impl companion module
- make a new deprecated autoload module for general unwanted stuff
from openide; initially: all of org.openide.debugger and
DebuggerCookie; ExecSupport; TopManager, perhaps, if its remaining
useful methods can be placed elsewhere; maybe other stuff?
I will create a branch off the CVS trunk to hold such changes. Petr Z.
and I will work on it. Changes will be merged back to trunk if and
when they are "compatible", i.e. existing binary modules run
unchanged. Note that modules might still not run *usefully* due to
project changes - e.g. an old module with a CompilerType will not do
anything useful if 4.0 projects no longer uses CompilerType at all -
but at least (1) they should not throw linkage errors, (2) until 4.0
projects are merged, the trunk will continue to run with old-style
execution, compilation, and debugging using the 3.4 UI.
Created branch on a few modules:
cvs rtag -b -r separation_19443_a separation_19443_a_branch
core_nowww_1 openide_nowww_1 java_nowww_1
Interesting summary from Yarda:
Adding nbbuild and classfile to the set of branched module under
separation_19443_a_branch.
Jesse, I have the branch now.
Please devide the separation to subtasks and assign some of them to me.
Well, I am starting to break stuff up. The problem is that at this
early stage, I am still not entirely sure what kinds of problems we
will run into. So it is hard to assign subtasks: I don't yet know what
will need to be done. I will try to commit a snapshot of what I am
working on rather soon. So far I am creating submodules
openide/deprecated and java/srcmodel.
OK.
Do you want to move the <stuff> to openide/deprecated first and just
then try to make those core/<compiler stuff>, core/<execution stuff>
and <rests stuff> submodules or you have changed the plan?
Well, for now I'm moving a lot of classes to openide/deprecated and
java/srcmodel. If I have to create openide/compiler now, I will, but
it might be easier to wait on that. Creating proper core submodules is
a lower priority, I guess. For now I'm still trying to figure out
which classes are not going to compile and which should be moved around.
First commit on the branch, take a look. First priority right now is
to get rid of usage of TopManager. Two categories here:
1. TopManager methods that have a proper replacement already. E.g.
systemClassLoader -> Lookup.lookup(ClassLoader).
2. Methods that have no equivalent yet. E.g. getIO, createDialog,
setStatusText, exit. Need new APIs somewhere. I am thinking about
using WindowManager for some of these, as it is logical enough... not
yet sure whether it is acceptable to have a bunch of other APIs
depending on WindowManager, though.
#1 you could start working on right away if you wanted - just try
compiling openide/build.xml and look at the breakage. For #2, I will
try to start writing such APIs.
I see. If it wait till tommorow I start to work on #1 morning (I'm
leaving right now :-( ).
Current status: org.openide.src is moved to java/srcmodel (except for
documentation etc.), and lib/openide-deprecated.jar contains dead
classes such as TopManager. IDE w/ just java module seems to run fine.
The main task now is to make core*.jar not depend on
openide-deprecated.jar, permitting that JAR to be moved to the
autoload area for eventual retirement. This is important: if core.jar
depends on TopManager, then it depends on Debugger, which depends on
ConstructorElement, which is in
netbeans/modules/autoload/java-src-model.jar. That means that if you
ran the debugger and tried to set breakpoints it is possible there
would be NoClassDefFoundError's. This needs to get cleaned up.
I'm working on the core*.jar to not depend on openide/deprecated.
Removing the dependencies on TopManager.
Checked just first part, will continue.
But there seems to be some problems, especially with Project dependencies.
During the moving it would be nice to clean *.util.* package. Issue
27910 and issue 27911 describe the task. I've added them here, but if
you think they should not be here, remove them.
#27911 maybe. #27910 we could do any time, I don't think it's
important now.
BTW comments requested on newly introduced interfaces (found via
lookup) which replace some TopManager functionality:
void org.openide.awt.HtmlBrowser.URLDisplayer.showURL(URL)
[Radim said maybe this is unnecessary, you can make a new
BrowserComponent and call setURL, and that handles internal vs.
external browsers and browser selection choice correctly - pending]
Object org.openide.DialogDisplayer.notify(NotifyDescriptor)
Dialog org.openide.DialogDisplayer.createDialog(DialogDescriptor)
void org.openide.LifecycleManager.saveAll()
void org.openide.LifecycleManager.exit()
OutputWriter org.openide.windows.IOProvider.getStdout()
InputOutput org.openide.windows.IOProvider.getIO(String, boolean)
Basically these are just the result of splitting a few methods off
into their own interfaces. It should now be easier to e.g. write unit
tests exercising some but not all former TM functionality - for
example, dump a trivial impl of DialogDisplayer into
META-INF/services/ and you can test code that involves displaying
dialogs (as a lot of openide code does, I am afraid). No need to have
core.jar and start Plain, etc.
Add us back to Cc: when necessary.
I've put there second part, so the core should be now dependent on
TopManager, but I didn't took these sources into account (assuming
they are going to be moved to core/deprecated):
beaninfo/editors/CompilerTypeEditor
DebuggerTypeEditor
/DebuggerType
/DebuggerTypeBeanInfo
core/actions/ProjectsOpenAction
ProjectsCloseAction
ProjectsSetActiveAction
SettingsAction
/ui/WorkplaceNode
/ClassLoaderSupport
/ControlPanelNode
/ModuleInstaller
/NbControlPanel
/NbProject
/NbProjectOperation
/UpdateSupport
I was trying also SessionManager, but there was is too many dependencies.
Anyway, before that, in that above shape, there are still a couple of
compilation errors when trying to build the core. So I'll try to coupe
with them now.
Please let me know, if some of the above classes assumed to be moved
to core/deprecated shouldn't be there.
Sorry the mistake, core should not be now dependent on TopManager :-)
OK, I made core-deprecated.jar as stated earlier. Still some problems,
but definitely getting there. NB seems to run at least.
A bunch more work done. The two deprecated JARs are now actual
autoloads - meaning that if you are not using them, they get turned
off. :-) They are enabled automatically if any old modules are present
that do not declare a dependency on the new API version.
I branched xtest too - it refers to TopManager in a few places.
I branched also closed source code of XTest (compiled to xtest-ext.jar
binary on cvs.netbeans.org) in nbextra repository, since there are
several TopManager calls as well.
[breh somehow changed subcomp + targ milestone + vers]
OK. I have already committed a few changes in xtest.netbeans.org in
the branch. For some reason ide-mode tests are still not shutting down
when they finish - not sure why yet.
WARNING: the nbextra xtest branch is misspelled:
separation_19433_a_branch.
I will try to work on it in that branch. I see something that could
well be causing problems.
Need to branch junit too, I guess. Adding that to the list of branched
modules.
Created attachment 7666 [details]
Patch to nb_all - missing just a couple of manifest spec vers changes and addition of CurrentProjectNode to projects module
Created attachment 7667 [details]
Commit log
OK, phase I committed. Still some lesser cleanup tasks to do, but
things basically seem to work... Only apparent problems I know of
involve the projects module: (1) sometimes Workplace appears; (2)
Project tab is gone pending root node installation improvement (this
is long since filed); (3) execute/compile/build project don't do
anything, which I think I can fix easily.
Houston, we have a problem. After the commit to trunk, the new XTest
is not able to run tests in IDE mode (IDE checked out and built this
afternoon, run on Solaris 8). Basically everything looks ok, but it
seems IDE is not started in GUI mode and there are several exceptions
thrown by IDE. The test are then runned, though.
Anyway, you can easily reproduce the problem by running test examples
from XTest. Just cd to nb_all/xtest/examples/MyModule/, type 'ant' to
build the fake example module, cd to test/ subdirectory and run the
tests by 'ant -Dxtest.attribs=all,ide'.
I'm wondering what's wrong with XTest's
org.netbeans.xtest.ide.Main.java, since when IDE is started manually,
it works ok. Any help ?
Martin - I suggest you open a separate issue to track any problems
with XTest. I tried what you said, and indeed it shows a few
exceptions, but then runs the tests anyway.
"XTest: Lookup.getDefault().lookup(OutputSettings.class) is NULL !!!"
- not sure what is going on here. I can easily fix it though.
"FNFE: SFS/xml/memory/": mystery to me. Will look at.
"Cannot save window system": also a mystery.
XTest ought to be fixed now, please confirm. Problem in a nutshell was
that NbTopManager.get() could return a half-started NbTM while another
thread was still installing modules and doing other pieces of startup.
So XTest was being started much too early.
Great, things look good now for XTest. Thanks Jesse.
Tagging for phase II:
modules="nbbuild_nowww_1 openide_nowww_1 core_nowww_1 openidex_nowww_1
ant_nowww_1 apisupport_nowww_1 applet_nowww_1 autoupdate_nowww_1
beans_nowww_1 classfile_nowww_1 clazz_nowww_1 debuggercore_nowww_1
debuggerjpda_nowww_1 diff_nowww_1 editor_nowww_1 extbrowser_nowww_1
form_nowww_1 html_nowww_1 httpserver_nowww_1 i18n_nowww_1
image_nowww_1 j2eeserver_nowww_1 jarpackager_nowww_1 java_nowww_1
javacvs_nowww_1 javadoc_nowww_1 jndi_nowww_1 libs_nowww_1
projects_nowww_1 properties_nowww_1 rmi_nowww_1 schema2beans_nowww_1
scripting_nowww_1 text_nowww_1 tomcatint_nowww_1 ui_nowww_1
usersguide_nowww_1 utilities_nowww_1 vcscore_nowww_1 vcscvs_nowww_1
vcsgeneric_nowww_1 web_nowww_1 xml_nowww_1 xtest_nowww_1 junit_nowww_1
classclosure_nowww_1 corba_nowww_1 cpp_nowww_1 db_nowww_1
debuggertools_nowww_1 externaleditor_nowww_1 filecopy logger_nowww_1
monitor"; cvs rtag separation_19443_b_base $modules; cvs rtag -b -r
separation_19443_b_base separation_19443_b $modules
Compiler API is separated in the new branch. More to come.
Branch progress: have successfully separated
openide/compiler
openide/io
openide/execution
core/compiler
core/execution
core/output
core/term
core + openide build OK now. Not sure how it runs, yet.
moduleconfig=superslim now builds, and runs fairly well (still looking
for little errors).
Created attachment 7836 [details]
Patch to NB sources, gzipped (note: addn'l last-minutes changes also to core/ui/manifest.mf and nbbuild/build.properties to make platform build work)
Created attachment 7837 [details]
Commit log (gzipped)
OK, finally merged phase II.
Remaining tasks cannot all be solved in the 4.0 timeframe.
Request to separate util, nodes and explorer.
As described in
the
current work on projects prototype has been stopped.
removed PROJECTS keyword.
Planned for 4.2?
And implemented in 4.2. Finally. | https://netbeans.org/bugzilla/show_bug.cgi?id=19443 | CC-MAIN-2017-17 | refinedweb | 3,103 | 58.69 |
At the Athom office we have a very nice planter that adds some greenery to our place of work. However each week one of us needed to manually water the plants. Unacceptable of course, so we set out to automate the process.
This project is based around an Arduino Uno combined with an Arduino Ethernet Shield.
Required parts:
Optional parts:
To begin we attached our breadboard and our Arduino Uno to a piece of wood using adhesive. Because the breadboard is attached to our Arduino we can make sure our creation remains stable during the prototyping phase.
We placed the ethernet shield on top of the Arduino Uno.
The 12v pump can not be connected to the Arduino pin because the pump requires a lot more power than the Arduino can deliver. To amplify the signal from the Arduino pin we will use a N-type mosfet.
Since this project mainly focusses on connecting our planter to Homey we will just give a short description on how we connected everything. For more information on using transistors and mosfets with Arduino we suggest looking for tutorials like the Sparkfun tutorial on using transistors found here.
The pump is connected to the 12v supplied by the wall-plug connected to the Arduino. The other pin of the pump is connected to the DRAIN pin of the MOSFET. The DRAIN pin is used as the power input of the MOSFET. The SOURCE pin of the MOSFET is connected to ground, when a MOSFET is turned-on it will allow current to flow from the DRAIN to the SOURCE pin. The MOSFET is controlled using the GATE pin, which is pulled low by a resistor to disable the MOSFET. When the Arduino sets the I/O pin connected to the GATE of the mosfet to OUTPUT and HIGH the mosfet will switch on. When the pin gets pulled LOW by the Arduino the MOSFET will switch off, thereby functioning as a sort of amplifier/switch combination. This allows the Arduino to control the pump without supplying current to the pump.
After we built the hardware we tested it using the following sketch: this simple sketch should make the pump turn on and off once every second.
void setup() { pinMode(2, OUTPUT); } void loop() { digitalWrite(2, HIGH); delay(500); digitalWrite(2, LOW); delay(500); }
This is what our prototype looks like:
To connect our pump to Homey a bit of code needs to be added to the sketch for the ethernet shield to work:
#include <SPI.h> #include <Ethernet2.h> #include <EthernetUdp2.h> byte mac[] = { 0x48, 0x6F, 0x6D, 0x65, 0x79, 0x00 }; void setup() { pinMode(2, OUTPUT); Serial.begin(115200); Serial.println("Starting ethernet..."); //If this is the last message you see appear in the serial monitor... Ethernet.begin(mac); Serial.print("IP address: "); Serial.println(Ethernet.localIP()); } void loop() { //digitalWrite(2, HIGH); delay(500); digitalWrite(2, LOW); delay(500); }
This sketch will connect to your network, print the IP address to the serial monitor and then execute the test program we added earlier.
The next step is adding the Homeyduino library. Until now we’ve used the
delay function to make the Arduino wait 500ms between each pin update. Since the Arduino does not do anything while it waits it also doesn’t handle any network traffic during this time. For Homeyduino it is important to use the
delay function as little as possible, to allow your Arduino to process network packets as often as possible.
To avoid using delays we use the
millis function, this function returns the amount of milliseconds that passed since the Arduino has been turned on. By keeping track of time by comparing the value of
millis instead of just waiting we can make the Arduino handle network packets while waiting. More information on this method can be found here.
The following sketch shows how we used
millis:
#include <SPI.h> #include <Ethernet2.h> #include <EthernetUdp2.h> #include <Homey.h> byte mac[] = { 0x48, 0x6F, 0x6D, 0x65, 0x79, 0x00 }; unsigned long previousMillis = 0; uint8_t pumpTimer = 0; void setup() { pinMode(2, OUTPUT); Serial.begin(115200); Serial.println("Starting ethernet..."); //If this is the last message you see appear in the serial monitor... Ethernet.begin(mac); Serial.print("IP address: "); Serial.println(Ethernet.localIP()); Homey.begin("planter"); Homey.addAction("pump", onPump); Homey.addAction("stop", onStop); } void onPump() { pumpTimer = Homey.value.toInt(); //Set the timer if ( (pumpTimer<1) || (pumpTimer>30) ) { onStop(); //Stop the pump return Homey.returnError("Invalid value!"); } Serial.print("Pumping for "); Serial.print(pumpTimer); Serial.println(" seconds..."); if (pumpTimer>0) digitalWrite(2, HIGH); //Start the pump } void onStop() { pumpTimer = 0; digitalWrite(2, LOW); Serial.println("Pump stopped manually."); } void loop() { Homey.loop(); unsigned long currentMillis = millis(); if (currentMillis-previousMillis >= 1000) { //Code in this if statement is run once every second previousMillis = currentMillis; if (pumpTimer>0) { //If the pump is active pumpTimer = pumpTimer - 1; //Decrease the pump timer if (pumpTimer < 1) { //once the timer reaches zero digitalWrite(2, LOW); //Turn off the pump Serial.println("Pump stopped."); } else { Serial.print("Pumping for "); Serial.print(pumpTimer); Serial.println(" more seconds..."); } } } }
This sketch allows us to control the pump using Homey. First make sure you have the Homeyduino app installed. After that you can just open the pairing wizard and pair your device.
After pairing the device you can control the pump by adding the device to the action column and selecting the “Action [Number]” flowcard. By selecting the action “pump” and entering a duration in seconds into the value field the pump can be started.
The “stop” action can be used without an argument, it will stop the pump instantly.
Now that we connected our water pump to Homey we can automatically water our plants, but what about times when the tank becomes empty without us noticing?
To make sure that we are notified of an empty tank and to prevent the pump from running without water (to avoid damage to the pump) we added a float sensor to our setup. The float sensor we bought works as a switch. We suspended the float sensor in our water tank at the correct height by connecting it to a piece of coated iron wire which we cut to length and stuck to the top of the jerrycan with some tape.
Adding the float sensor to our Arduino project is suprisingly simple. The Arduino Uno on which we based this project has an internal pull-up resistor for all it’s pins. This means that a resistor in the Arduino can pulls the input to the supply voltage, while the external sensor (which acts like a switch) just pulls the pin down by connecting it to ground.
So the only connections needed to make the float work are as follows:
To read the float sensor the pin has to be configured as an input with pull-up enabled.
pinMode(3, INPUT_PULLUP);
Reading the sensor can then be done using
digitalRead(3);, just like any other input.
Our sensor, which acts like a switch, closes when it is not floating and opens when it is floating. The pull-up pulls the pin high whenever the switch is open, resulting in the following relation:
Your sensor could act differently, so you might have to change the sketch a bit to accommodate that instead.
For reading the sensor using Homey we added the following code to our sketch:
... bool previousFloatState = false; ... void setup() { ... pinMode(3, INPUT_PULLUP); ... Homey.addCondition("float", onFloatCondition); } void onPump() { ... if (!previousFloatState) return Homey.returnError("Tank empty!"); ... } void loop() { ... <in the interval> bool currentFloatState = digitalRead(PIN_FLOAT); if (previousFloatState != currentFloatState) { previousFloatState = currentFloatState; Homey.trigger("float", currentFloatState); if (!currentFloatState) { //Tank empty onStop(); } } ... } void onFloatCondition() { return Homey.returnResult(digitalRead(3)); }
This exposes the float sensor both as a trigger and as a condition. Additionally it prevents the pump from running when the tank is empty.
Should you want to replicate this on a platform that does not have internal pull-up support then you could replace the functionality using a resistor.
In that case you would still connect the sensor like described earlier, however you would need to add a resistor (with a value of for example 1KOhm) between the input pin on the Arduino and the 5v (or 3.3v, depening on the I/O voltage of your board) supply pin. For most Arduino boards the I/O voltage is 5 volt and for most ESP8266 and ESP32 boards the I/O voltage is 3.3v. Check the I/O voltage before connecting the resistor, for connecting the resistor to the wrong supply voltage could cause damage to your board.
When using an external pull-up resistor the pinMode statement would become like this:
pinMode(3, INPUT);
When soil becomes moist it’s resistance drops. A moisture sensor measures the resistance of the soil between two electrodes at a set distance.
Some moisture sensors that you buy from stores like Sparkfun include a small circuit that consists of a resistor divider with a transistor that amplifies the current flowing through the soil.
Most cheap Chinese sensors however do not include this circuit and consist of two wires at a certain distance from each other. For those sensors we suggest creating a simple voltage divider by connecting the analog input of the Arduino to ground through a resistor and to the supply voltage pin (5v or 3.3v) through the sensor.
We connected the moisture sensor to pin A0 on the Arduino.
... int previousMoistureSensorValue = 0; ... void setup() { ... pinMode(A0, INPUT); ... Homey.addCondition("moisture", onMoistureCondition); } void loop() { ... <in the interval> bool currentMoistureSensorValue = analogRead(PIN_SENSOR); if (previousMoistureSensorValue != currentMoistureSensorValue) { previousMoistureSensorValue = currentMoistureSensorValue; Homey.trigger("moisture", previousMoistureSensorValue); } ... } void onMoistureCondition() { int compareTo = Homey.value.toInt(); return Homey.returnResult(previousMoistureSensorValue>=compareTo); }
By adding the moisture sensor as a trigger a flow is triggered every time the moisture level changes. This can be used to implement a flow that checks if the moisture level changed below a threshold, allowing Homey to start the pump when necessary. Another approach might be checking of the moisture level as a condition by a flow that is triggered by a time interval, allowing you to make sure the pump isn’t started too often. | https://arduino.tkkrlab.nl/les-6-domotica-met-de-esp266/homey/plantenbak/ | CC-MAIN-2021-21 | refinedweb | 1,692 | 55.54 |
A marker rendered with a widget. More...
#include <WLeafletMap.h>
A marker rendered with a widget.
This can be used to place arbitrary widgets on the map.
The widgets will stay the same size regardless of the zoom level of the map.
Set the anchor point of the marker.
This determines the "tip" of the marker (relative to its top left corner). The marker will be aligned so that this point is at the marker's geographical location.
If x is negative, the anchor point is in the horizontal center of the widget. If y is negative, the anchor point is in the vertical center of the widget.
By default the anchor point is in the middle (horizontal and vertical center). | https://webtoolkit.eu/wt/doc/reference/html/classWt_1_1WLeafletMap_1_1WidgetMarker.html | CC-MAIN-2020-10 | refinedweb | 121 | 67.86 |
This program is supposed to read commands from a file in sequential order, process them, perform the necessary calculations, and then print out the results in a neat, readable manner, both to a file and to the screen. I only have to implement commands: +, -, *, /, H, Q.
I got it to scan H but not sure how to get Q. Also I'm not sure if it is reading the file or how to write it to another file. What can I do?
#include <stdio.h>
#include <stdlib.h>
Oh and the data file gives these numbersOh and the data file gives these numbersCode:int main(void) { char H, Q, choice, operate; float i, j; FILE *file1; printf("Commands available: +, -, *, or / \n"); printf("Press H for help or Q for quit \n"); fflush(stdout); scanf("%c",&choice); if (choice = H) printf("Commands available: +, -, *, or / \n"); else { if (choice = Q) printf("Program Ended \n"); } file1= fopen("CommandProj1.dat", "r"); if (file1 == NULL) printf("Error opening input file 1 \n"); else { fscanf(file1, "%c %f %f", &operate, &i, &j); if (operate == '+') printf("%f\n", i + j); if (operate == '-') printf("%f\n)", i - j); if (operate == '*') printf("%f\n", i * j); if (operate == '/') printf("%f\n", i / j); } fclose (file1); return 0; }
GN
+ 34 43
+ -34 43
+ -4 -71
- 27 15
- -4 -71
H
* 3 -5
* -11 -12
/ 3 14
/ 14 3
/ 14 -3
Q
H | https://cboard.cprogramming.com/c-programming/136384-help-calculator-project.html | CC-MAIN-2017-17 | refinedweb | 232 | 70.47 |
#include <fbxevent.h>
FBX SDK event base class.
An event is something that is emitted by an emitter, with the goal of being filled by the listener that listen to it. You can see that like a form that you send to some people. If those people know how to fill the form, they fill it and return it to you with the right information in it. FBX object could be used as emitter, since FbxObject is derived from FbxEmitter. Meanwhile, plug-in could be used as listener, since FbxPlugin is derived from FbxListener. The derived class of FbxEventBase contains a type ID to distinguish different types of events. FBX object can emit different types of FBX events at different conditions.
Definition at line 40 of file fbxevent.h.
Destructor.
Retrieve the event type ID.
Implemented in FbxEvent< T >, FbxEvent< FbxEventPostExport >, FbxEvent< FbxEventPreExport >, FbxEvent< FbxEventReferencedDocument >, FbxEvent< FbxQueryEvent< QueryT > >, FbxEvent< FbxEventPreImport >, and FbxEvent< FbxEventPostImport >.
Force events to give us a name.
Implemented in FbxEventPostImport, FbxEventPreImport, FbxEventPostExport, FbxEventPreExport, and FbxEventReferencedDocument. | https://help.autodesk.com/cloudhelp/2018/ENU/FBX-Developer-Help/cpp_ref/class_fbx_event_base.html | CC-MAIN-2022-21 | refinedweb | 168 | 50.12 |
When you have a class inside another class they can see each others
private methods. It is not well known among Java developers. Many candidates during interviews say that
private is a visibility that lets a code see a member if that is in the same class. This is actually true, but it would be more precise to say that there is a class that both the code and the member is in. When we have nested and inner classes it can happen that the
private member and the code using it is in the same class and at the same time they are also in different classes.
As an example, if I have two nested classes in a top-level class then the code in one of the nested classes can see a
private member of the other nested class.
It starts to be interesting when we look at the generated code. The JVM does not care about classes inside other classes. It deals with JVM “top-level” classes. The compiler will create
.class files that will have a name like
A$B.class when you have a class named
B inside a class
A. There is a
private method in
B callable from
A then the JVM sees that the code in
A.class calls the method in
A$B.class. The JVM checks access control. When we discussed this with juniors somebody suggested that probably the JVM does not care the modifier. That is not true. Try to compile
A.java and
B.java, two top-level classes with some code in
A calling a
public method in
B. When you have
A.class and
B.class modify the method in
B.java from being
public to be
private and recompile
B t a new
B.class. Start the application and you will see that the JVM cares about the access modifiers a lot. Still, you can invoke in the example above from
A.class a method in
A$B.class.
To resolve this conflict Java generates extra synthetic methods that are inherently public, call the original private method inside the same class and are callable as far as the JVM access control is considered. On the other hand, the Java compiler will not compile the code if you figure out the name of the generated method and try to call in from the Java source code directly. I wrote about in details more than 4 years ago.
If you are a seasoned developer then you probably think that this is a weird and revolting hack. Java is so clean, elegant, concise and pure except this hack. And also perhaps the hack of the
Integer cache that makes small
Integer objects (typical test values) to be equal using the
== while larger values are only
equals() but not
== (typical production values). But other than the synthetic classes and
Integer cache hack Java is clean, elegant, concise and pure. (You may get I am a Monty Python fan.)
The reason for this is that nested classes were not part of the original Java, it was added only to version 1.1 The solution was a hack, but there were more important things to do at that time, like introducing JIT compiler, JDBC, RMI, reflection and some other things that we take today for granted. That time the question was not if the solution is nice and clean. Rather the question was if Java will survive at all and be a mainstream programming language or dies and remains a nice try. That time I was still working as a sales rep and coding was only a hobby because coding jobs were scarce in East Europe, they were the mainly boring bookkeeping applications and were low paid. Those were a bit different times, the search engine was named AltaVista, we drank water from the tap and Java had different priorities.
The consequence is that for more than 20 years we are having slightly larger JAR files, slightly slower java execution (unless the JIT optimizes the call chain) and obnoxious warnings in the IDE suggesting that we better have package protected methods in nested classes instead of
private when we use it from top-level or other nested classes.
Nest Hosts
Now it seems that this 20-year technical debt will be solved. The gets into Java 11 and it will solve this issue by introducing a new notion: nest. Currently, the Java bytecode contains some information about the relationship between classes. The JVM has information that a certain class is a nested class of another class and this is not only the name. This information could work for the JVM to decide on whether a piece of code in one class is allowed or is not allowed to access a
private member of another class, but the JEP-181 development has something more general. As times changed JVM is not the Java Virtual Machine anymore. Well, yes, it is, at least the name, however, it is a virtual machine that happens to execute bytecode compiled from Java. Or for the matter from some other languages. There are many languages that target the JVM and keeping that in mind the JEP-181 does not want to tie the new access control feature of the JVM to a particular feature of the Java language.
The JEP-181 defines the notion of a
NestHost and
NestMembers as attributes of a class. The compiler fills these fields and when there is access to a private member of a class from a different class then the JVM access control can check: are the two classes in the same nest or not? If they are in the same nest then the access is allowed, otherwise not. We will have methods added to the reflective access, so we can get the list of the classes that are in a nest.
Simple Nest Example
Using the
$ java -version java version "11-ea" 2018-09-25 Java(TM) SE Runtime Environment 18.9 (build 11-ea+25) Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11-ea+25, mixed mode)
version of Java today we can make already experiments. We can create a simple class:
package nesttest; public class NestingHost { public static class NestedClass1 { private void privateMethod() { new NestedClass2().privateMethod(); } } public static class NestedClass2 { private void privateMethod() { new NestedClass1().privateMethod(); } } }
Pretty simple and it does nothing. The private methods call each other. Without this the compiler sees that they simply do nothing and they are not needed and the byte code just does not contain them.
The class to read the nesting information
package nesttest; import java.util.Arrays; import java.util.stream.Collectors; public class TestNest { public static void main(String[] args) { Class host = NestingHost.class.getNestHost(); Class[] nestlings = NestingHost.class.getNestMembers(); System.out.println("Mother bird is: " + host); System.out.println("Nest dwellers are :\n" + Arrays.stream(nestlings).map(Class::getName) .collect(Collectors.joining("\n"))); } }
The printout is as expected:
Mother bird is: class nesttest.NestingHost Nest dwellers are : nesttest.NestingHost nesttest.NestingHost$NestedClass2 nesttest.NestingHost$NestedClass1
Note that the nesting host is also listed among the nest members, though this information should be fairly obvious and redundant. However, such a use may allow some languages to disclose from the access the private members of the nesting host itself and let the access allow only for the nestlings.
Byte Code
The compilation using the JDK11 compiler generates the files
NestingHost$NestedClass1.class
NestingHost$NestedClass2.class
NestingHost.class
TestNest.class
There is no change. On the other hand if we look at the byte code using the
javap decompiler then we will see the following:
$ javap -v build/classes/java/main/nesttest/NestingHost\$NestedClass1.class Classfile .../packt/Fundamentals-of-java-18.9/sources/ch08/bulkorders/build/classes/java/main/nesttest/NestingHost$NestedClass1.class Last modified Aug 6, 2018; size 557 bytes MD5 checksum 5ce1e0633850dd87bd2793844a102c52 Compiled from "NestingHost.java" public class nesttest.NestingHost$NestedClass1 minor version: 0 major version: 55 flags: (0x0021) ACC_PUBLIC, ACC_SUPER this_class: #5 // nesttest/NestingHost$NestedClass1 super_class: #6 // java/lang/Object interfaces: 0, fields: 0, methods: 2, attributes: 3 Constant pool: *** CONSTANT POOL DELETED FROM THE PRINTOUT *** { public nesttest.NestingHost$NestedClass1(); descriptor: ()V flags: (0x0001) ACC_PUBLIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: invokespecial #1 // Method java/lang/Object."<init>":()V 4: return LineNumberTable: line 6: 0 LocalVariableTable: Start Length Slot Name Signature 0 5 0 this Lnesttest/NestingHost$NestedClass1; } SourceFile: "NestingHost.java" NestHost: class nesttest/NestingHost InnerClasses: public static #13= #5 of #20; // NestedClass1=class nesttest/NestingHost$NestedClass1 of class nesttest/NestingHost public static #23= #2 of #20; // NestedClass2=class nesttest/NestingHost$NestedClass2 of class nesttest/NestingHost
If we compile the same class using the JDK10 compiler, then the disassembles lines are the following:
$ javap -v build/classes/java/main/nesttest/NestingHost\$NestedClass1.class Classfile /C:/Users/peter_verhas/Dropbox/packt/Fundamentals-of-java-18.9/sources/ch08/bulkorders/build/classes/java/main/nesttest/NestingHost$NestedClass1.class Last modified Aug 6, 2018; size 722 bytes MD5 checksum 8c46ede328a3f0ca265045a5241219e9 Compiled from "NestingHost.java" public class nesttest.NestingHost$NestedClass1 minor version: 0 major version: 54 flags: (0x0021) ACC_PUBLIC, ACC_SUPER this_class: #6 // nesttest/NestingHost$NestedClass1 super_class: #7 // java/lang/Object interfaces: 0, fields: 0, methods: 3, attributes: 2 Constant pool: *** CONSTANT POOL DELETED FROM THE PRINTOUT *** { public nesttest.NestingHost$NestedClass1(); descriptor: ()V flags: (0x0001) ACC_PUBLIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: invokespecial #2 // Method java/lang/Object."<init>":()V 4: return LineNumberTable: line 6: 0 LocalVariableTable: Start Length Slot Name Signature 0 5 0 this Lnesttest/NestingHost$NestedClass1; static void access$100(nesttest.NestingHost$NestedClass1); descriptor: (Lnesttest/NestingHost$NestedClass1;)V flags: (0x1008) ACC_STATIC, ACC_SYNTHETIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: invokespecial #1 // Method privateMethod:()V 4: return LineNumberTable: line 6: 0 LocalVariableTable: Start Length Slot Name Signature 0 5 0 x0 Lnesttest/NestingHost$NestedClass1; } SourceFile: "NestingHost.java" InnerClasses: public static #14= #6 of #25; // NestedClass1=class nesttest/NestingHost$NestedClass1 of class nesttest/NestingHost public static #27= #3 of #25; // NestedClass2=class nesttest/NestingHost$NestedClass2 of class nesttest/NestingHost
The Java 10 compiler generates the
access$100 method. The Java 11 compiler does not. Instead, it has a nesting host field in the class file. We finally got rid of those synthetic methods that were causing surprises when listing all the methods in some framework code reflective.
Hack the nest
Let’s play a bit cuckoo. We can modify the code a bit so that now it does something:
package nesttest; public class NestingHost { // public class NestedClass1 { // public void publicMethod() { // new NestedClass2().privateMethod(); /* <-- this is line 8 */ // } // } public class NestedClass2 { private void privateMethod() { System.out.println("hallo"); } } }
we also create a simple test class
package nesttest; public class HackNest { public static void main(String[] args) { // var nestling =new NestingHost().new NestedClass1(); // nestling.publicMethod(); } }
First, remove all the
// from the start of the lines and compile the project. It works like charm and prints out
hallo. After this copy the generated classes to a safe place, like the root of the project.
$ cp build/classes/java/main/nesttest/NestingHost\$NestedClass1.class . $ cp build/classes/java/main/nesttest/HackNest.class .
Let’s compile the project, this time with the comments and after this copy back the two class files from the previous compilation:
$ cp HackNest.class build/classes/java/main/nesttest/ $ cp NestingHost\$NestedClass1.class build/classes/java/main/nesttest/
Now we have a
NestingHost that knows that it has only one nestling:
NestedClass2. The test code, however, thinks that there is another nestling
NestedClass1 and it also has a public method that can be invoked. This way we try to sneak an extra nestling into the nest. If we execute the code then we get an error:
$ java -cp build/classes/java/main/ nesttest.HackNest Exception in thread "main" java.lang.IncompatibleClassChangeError: Type nesttest.NestingHost$NestedClass1 is not a nest member of nesttest.NestingHost: current type is not listed as a nest member at nesttest.NestingHost$NestedClass1.publicMethod(NestingHost.java:8) at nesttest.HackNest.main(HackNest.java:7)
It is important to recognize from the code that the line, which causes the error is the one where we want to invoke the private method. The Java runtime does the check only at that point and not sooner.
Do we like it or not? Where is the fail-fast principle? Why does the Java runtime start to execute the class and check the nest structure only when it is very much needed? The reason, as many times in the case of Java: backward compatibility. The JVM can check the nest structure consistency when all the classes are loaded. The classes are only loaded when they are used. It would have been possible to change the classloading in Java 11 and load all the nested classes along with the nesting host, but it would break backward compatibility. If nothing else the lazy singleton pattern would break apart and we do not want that. We love singleton, but only when single malt (it is).
Conclusion
The JEP-181 is a small change in Java. Most of the developers will not even notice. It is a technical debt eliminated and if the core Java project does not eliminate the technical debt then what should we expect from the average developer?
As the old Latin saying says: “Debitum technica necesse est deletur.” | https://www.javacodegeeks.com/2018/08/nested-classes-private-methods.html | CC-MAIN-2020-16 | refinedweb | 2,228 | 64.1 |
I need to create a program which will allow me to use the secant method in order to solve a function. I need to run iterations of this function. I need to have the user enter 2 guesses at the beginning say they are verables a,b. Then I need the program to generate a sequence x2,x3,x4. Which should converge to the solution. I need to use the formula Xn+1=f(xn,xn-1). Then at the end of the program output the value of x, and f(x). The number of iterations can be no more than 1,000 and the width of the interval has to be less than .00001. If there is anyone who can just help me get started with this program that would be great. Thank you so much for the help!! This is urgent!!
Date: 18 Nov 2008 22:30
Number of posts: 7
RSS: New posts
What? Could you name the theorem that this is?
This is the secant method. That's what it is called. I am trying to create a program that will solve a function using the secant method. It is used in finding soluations of equations of one variable. The definition or formula that is used for the secant method is….
The Approximation Pn+1, for n>1, to a root of f(x) = 0 is computed from the approximation Pn and Pn-1 using the equation:
Pn+1 = Pn- (f(Pn)(Pn-Pn-1))/(f(Pn)-f(Pn-1)
There is two stopping conditions. First we assume that Pn is accurate when ABS(Pn-Pn-1) is witin the tolerance. Then also make sure that there is a maximum number of iterations given so that in case the method fails to converage as expected.
I hope this helps! I don't really understand what you mean by theorem. That is the function and the equation of the secant method.
Here's some more code from wikipedea this is a bit more difficult to port because it's in C
#include <stdio.h> #include <math.h>; }
If you need help with it pm me.
Ok Thank you!
Hey is there anyway that you could tell me exactly what that program means? I have never programmed in C. Thanks!
Something that should be kept in mind when implementing the secant method is that one can set things up such that only one function evaluation is used for every iteration.
With that:
Input "F(X)=",Str1 String▶Equ(Str1,Y₁) Prompt A,B,T Y₁(A)→U Y₁(B)→V If A=B:Stop 0→I Repeat abs(D)<T or I>40 I+1→I V/((V-U)/(B-A))→D B-D→X Y₁(X)→F B→A:X→B V→U:F→V End X
thornahawk | http://tibasicdev.wikidot.com/forum/t-106206/secant-method-program-urgent | CC-MAIN-2018-39 | refinedweb | 471 | 75.61 |
updated copyright years
1: \ EXTEND.FS CORE-EXT Word not fully tested! 12may93jaw 2: 3: \ Copyright (C) 1995,1998,2000,2003,2005,2007: \ May be cross-compiled 22: 23: decimal 24: 25: \ .( 12may93jaw 26: 27: : .( ( compilation&interpretation "ccc<paren>" -- ) \ core-ext dot-paren 28: \G Compilation and interpretation semantics: Parse a string @i{ccc} 29: \G delimited by a @code{)} (right parenthesis). Display the 30: \G string. This is often used to display progress information during 31: \G compilation; see examples below. 32: [char] ) parse type ; immediate 33: 34: \ VALUE 2>R 2R> 2R@ 17may93jaw 35: 36: \ !! 2value 37: 38: [ifundef] 2literal: [then] 44: 45: ' drop alias d>s ( d -- n ) \ double d_to_s 46: 47: : m*/ ( d1 n2 u3 -- dquot ) \ double m-star-slash 48: \G dquot=(d1*n2)/u3, with the intermediate result being triple-precision. 49: \G In ANS Forth u3 can only be a positive signed number. 50: >r s>d >r abs -rot 51: s>d r> xor r> swap >r >r dabs rot tuck um* 2swap um* 52: swap >r 0 d+ r> -rot r@ um/mod -rot r> um/mod 53: [ 1 -3 mod 0< ] [if] 54: -rot r> IF IF 1. d+ THEN dnegate ELSE drop THEN 55: [else] 56: nip swap r> IF dnegate THEN 57: [then] ; 58: 59: \ CASE OF ENDOF ENDCASE 17may93jaw 60: 61: \ just as described in dpANS5 62: 63: 0 CONSTANT case ( compilation -- case-sys ; run-time -- ) \ core-ext 64: immediate 65: 66: : of ( compilation -- of-sys ; run-time x1 x2 -- |x1 ) \ core-ext 67: \ !! the implementation does not match the stack effect 68: 1+ >r 69: postpone over postpone = postpone if postpone drop 70: r> ; immediate 71: 72: : endof ( compilation case-sys1 of-sys -- case-sys2 ; run-time -- ) \ core-ext end-of 73: >r postpone else r> ; immediate 74: 75: : endcase ( compilation case-sys -- ; run-time x -- ) \ core-ext end-case 76: postpone drop 77: 0 ?do postpone then loop ; immediate 78: 79: \ C" 17may93jaw-obsolescent: 2dup 2r@ string-prefix? if 127: 2swap 2drop 2r> 2drop true exit 128: endif 129: 1 /string 130: repeat 131: 2drop 2r> 2drop false ; 132: 133: \ SOURCE-ID SAVE-INPUT RESTORE-INPUT 11jun93jaw 134: 135: [IFUNDEF] source-id 136: : source-id ( -- 0 | -1 | fileid ) \ core-ext,file source-i-d 137: \G Return 0 (the input source is the user input device), -1 (the 138: \G input source is a string being processed by @code{evaluate}) or 139: \G a @i{fileid} (the input source is the file specified by 140: \G @i{fileid}). 141: loadfile @ dup 0= IF drop sourceline# 0 min THEN ; 142: 143: : save-input ( -- xn .. x1 n ) \ core-ext 144: \G The @i{n} entries @i{xn - x1} describe the current state of the 145: \G input source specification, in some platform-dependent way that can 146: \G be used by @code{restore-input}. 147: >in @ 148: loadfile @ 149: if 150: loadfile @ file-position throw 151: [IFDEF] #fill-bytes #fill-bytes @ [ELSE] #tib @ 1+ [THEN] 0 d- 152: else 153: blk @ 154: linestart @ 155: then 156: sourceline# 157: >tib @ 158: source-id 159: 6 ; 160: 161: : restore-input ( xn .. x1 n -- flag ) \ core-ext 162: \G Attempt to restore the input source specification to the state 163: \G described by the @i{n} entries @i{xn - x1}. @i{flag} is 164: \G true if the restore fails. In Gforth it fails pretty often 165: \G (and sometimes with a @code{throw}). 166: 6 <> -12 and throw 167: source-id <> -12 and throw 168: >tib ! 169: >r ( line# ) 170: loadfile @ 0<> 171: if 172: loadfile @ reposition-file throw 173: refill 0= -36 and throw \ should never throw 174: else 175: linestart ! 176: blk ! 177: sourceline# r@ <> blk @ 0= and loadfile @ 0= and 178: if 179: drop rdrop true EXIT 180: then 181: then 182: r> loadline ! 183: >in ! 184: false ; 185: [THEN] 186: \ This things we don't need, but for being complete... jaw 187: 188: \ EXPECT SPAN 17may93jaw 189: 190: variable span ( -- c-addr ) \ core-ext-obsolescent 191: \G @code{Variable} -- @i{c-addr} is the address of a cell that stores the 192: \G length of the last string received by @code{expect}. OBSOLESCENT. 193: 194: : expect ( c-addr +n -- ) \ core-ext-obsolescent 195: \G Receive a string of at most @i{+n} characters, and store it 196: \G in memory starting at @i{c-addr}. The string is 197: \G displayed. Input terminates when the <return> key is pressed or 198: \G @i{+n} characters have been received. The normal Gforth line 199: \G editing capabilites are available. The length of the string is 200: \G stored in @code{span}; it does not include the <return> 201: \G character. OBSOLESCENT: superceeded by @code{accept}. 202: 0 rot over 203: BEGIN ( maxlen span c-addr pos1 ) 204: key decode ( maxlen span c-addr pos2 flag ) 205: >r 2over = r> or 206: UNTIL 207: 2 pick swap /string type 208: nip span ! ; 209: 210: \ marker 18dec94py 211: 212: \ Marker creates a mark that is removed (including everything 213: \ defined afterwards) when executing the mark. 214: 215: : included-files-mark ( -- u ) 216: included-files @ ; 217: 218: \ hmm, most of the saving appears to be pretty unnecessary: we could 219: \ derive the wordlists and the words that have to be kept from the 220: \ saved value of dp value. - anton 221: 222: : marker, ( -- mark ) 223: here 224: included-files-mark , 225: dup A, \ here 226: voclink @ A, \ vocabulary list start 227: \ for all wordlists, remember wordlist-id (the linked list) 228: voclink 229: BEGIN 230: @ dup 231: WHILE 232: dup 0 wordlist-link - wordlist-id @ A, 233: REPEAT 234: drop 235: \ remember udp 236: udp @ , 237: \ remember dyncode-ptr 238: here ['] noop , compile-prim1 finish-code ; 239: 240: : marker! ( mark -- ) 241: \ reset included files count; resize will happen on next add-included-file 242: included-files @ over @ min included-files ! cell+ 243: \ rest of marker! 244: dup @ swap cell+ ( here rest-of-marker ) 245: dup @ voclink ! cell+ 246: \ restore wordlists to former words 247: voclink 248: BEGIN 249: @ dup 250: WHILE 251: over @ over 0 wordlist-link - wordlist-id ! 252: swap cell+ swap 253: REPEAT 254: drop 255: \ rehash wordlists to remove forgotten words 256: \ why don't we do this in a single step? - anton 257: voclink 258: BEGIN 259: @ dup 260: WHILE 261: dup 0 wordlist-link - rehash 262: REPEAT 263: drop 264: \ restore udp and dp 265: [IFDEF] forget-dyncode 266: dup cell+ @ forget-dyncode drop 267: [THEN] 268: @ udp ! dp ! 269: \ clean up vocabulary stack 270: 0 vp @ 0 271: ?DO 272: vp cell+ I cells + @ dup here > 273: IF drop ELSE swap 1+ THEN 274: LOOP 275: dup 0= or set-order \ -1 set-order if order is empty 276: get-current here > IF 277: forth-wordlist set-current 278: THEN ; 279: 280: : marker ( "<spaces> name" -- ) \ core-ext 281: \G Create a definition, @i{name} (called a @i{mark}) whose 282: \G execution semantics are to remove itself and everything 283: \G defined after it. 284: marker, Create A, 285: DOES> ( -- ) 286: @ marker! ; 287: | http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/extend.fs?rev=1.65;content-type=text%2Fx-cvsweb-markup;sortby=log;f=h;only_with_tag=MAIN;ln=1 | CC-MAIN-2021-17 | refinedweb | 1,180 | 64.54 |
This article shows you how to add a Win98
like gradient title bar to a modal or a modeless dialog and is a
modification of Philip Petrescu's article found at codeguru. I also use
Paul DiLascia's famous CSubclassWnd from 1997 Microsoft Sytems Journal.
CSubclassWnd
I needed a gradient title bar in a modal /modeless dialog and had
to modify the code for it to work properly as I needed. It works
in modal/modeless dialogs with support for changing the caption
text. I acknowledge the contribution of all the people
whose names appear in various parts of the source. They are the
gurus, I'm just a kiddo!
Include the files PaintCap.cpp, PaintCap.h, Subclass.cpp,
Subclass.h in you project, and declare a member variable of type
CCaptionPainter:
CCaptionPainter
CCaptionPainter m_cap
You can use the ClassView to add this member variable, but if you add it
manually then ensure you add the
#include "PaintCap.h"
to your the dialog's header file.
Add a user defined message WM_MYPAINTMESSAGE to your dialog's
header.
WM_MYPAINTMESSAGE
#define WM_MYPAINTMESSAGE WM_USER+5
Write a handler for this message in the
dialog's .cpp file and call CCaptionPainter's PaintCaption member
function with the required parameters as shown below.
PaintCaption
LRESULT <Your Dialog>::OnMyPaintMessage(WPARAM wp,LPARAM lp)
{
m_cap.PaintCaption(wp,lp);
return 0;
}
Next map the user defined message to the handler
by adding a message map to the dialog class's .cpp file (after the
AFX_MESSAGE_MAP and before END_MESSAGE_MAP )
AFX_MESSAGE_MAP
END_MESSAGE_MAP
ON_MESSAGE(WM_MYPAINTMESSAGE,OnPaintMyMessage)
In dialog's InitInstance, first set the
caption text by calling SetCaption function like
InitInstance
SetCaption
CString str="My caption";
m_cap.SetCaption(str);
Then install the message hook as follows
m_cap.Install(this,WM_MYPAINTMESSAGE)
That's it.
If you want to change the caption text at any
time, use the following:
CString newstr="New Text"; // new string
m_cap.SetCaption(newstr); //set the caption
m_cap.UpdateFrameTitle(this->m_hWnd); //paint the new caption
You can use the gradient caption bar similarly in
modal dialogs.
I have tested this code under Win95 with VC++ 6.0
but I think it should work just fine with other versions on other
platforms like Win98/NT.
Normally, the dialog bar displays the close button and the icon in a dialog. But there
are workarounds these. If you do not want to display the icon in the dialog bar or do not
want the close button to appear, you have to edit some of the code yourself.
Look in the paintcap.cpp file and do the following:
Locate the PaintCaption(WPARAM bActive, LPARAM lParam) member function of
the CCaptionPainter class.
PaintCaption(WPARAM bActive, LPARAM lParam)
Move to the line:
int cxIcon=DrawIcon(pc);
Replace this line with the following code,
int cxIcon=0;
Move to the line
int cxButns= DrawButtons(pc);
int cxButns=0;
You are almost done. If your dialog is not system menu enabled, verything should work out
fine. If the system menu option is enabled in the dialog properties, you need to handle the
WM_NCLBUTTONDOWN and the WM_NCRBUTTONDOWN messages.
WM_NCLBUTTONDOWN
WM_NCRBUTTONDOWN
If you do not do this, then as soon as you click on the dialog bar, the close button pops
up and the effect is rather undesirable.
To handle these messages, look at CCaptionPainter's WindowProc() .
Add the following code after ypu handle the WM_SETTINGCHANGE
and WM_SYSCOLORCHANGE messages:
WindowProc()
WM_SETTINGCHANGE
WM_SYSCOLORCHANGE
case WM_NCLBUTTONDOWN:
case WM_NCRBUTTONDOWN:
return 0;
That's it. Now everything should work properly. Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/717/A-Gradient-Title-Bar-for-modal-and-modeless-dialog | CC-MAIN-2015-32 | refinedweb | 599 | 55.74 |
On Tue, Jul 31, 2012 at 06:09:13PM -0600, Eric Blake wrote: > On 07/31/2012 10:58 > > though, we don't expect anyone to seriously use the pthread.h > > impl, so this downside is not significant. > > > > * .gitignore: Ignore test case > > * configure.ac: Check for which atomic ops impl to use > > * src/Makefile.am: Add viratomic.c > > * src/nwfilter/nwfilter_dhcpsnoop.c: Switch to new atomic > > ops APIs and plain int datatype > > * src/util/viratomic.h: inline impls of all atomic ops > > for GCC, Win32 and pthreads > > * src/util/viratomic.c: Global pthreads mutex for atomic > > ops > > * tests/viratomictest.c: Test validate to validate safety > > of atomic ops. > > > > Signed-off-by: Daniel P. Berrange <berrange redhat com> > > --- > > > ;],[ > > The autoconf manual prefers 'AC_COMPILE_IFELSE' over 'AC_TRY_COMPILE', > but as you are copying code, I won't worry if you don't change it. > > > + atomic_ops=gcc > > +],[]) > > + > > +if test "$atomic_ops" = "" ; then > > + > + > Should this be: > > > so as not to lose previous flags that might be important? I don't think it really matters in the context of this test > > + AC_TRY_COMPILE([], > > + [__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4;], > > + [AC_MSG_ERROR([Libvirt must be build with -march=i486 or later.])], > > s/build/built/ > > Okay - the intent here is that if we compiled and > __GCC_HAVE_SYNC_COMPARE_AND_SWAP failed, then we add a new option, and > the recompilation attempt succeeds, then we a) must be using gcc, b) gcc > recognizes the option, so we must be x86 (all other scenarios either > passed on the first compilation, or point to a different architecture > and/or different compiler so both compile attempts failed). Since the > test is so simple, discarding all existing CFLAGS to look for just > -march=i486 probably does the job, and I guess my earlier comment is not > strictly necessary. > > Do we need to worry about updating './autogen.sh' to add this CFLAGS > value automatically, or is gcc typically spec'd to a new enough platform > by default in x86 distros these days? Then again, I just tested the > patch myself and didn't hit the warning, so at least Fedora 17 has gcc > spec'd to implicitly assume that flag. I believe all common distros have GCC builds default to at least i486 these days, precisely to get atomic op support. So don't reckon we need todo anything here. > > > @@ -1301,6 +1301,10 @@ if HAVE_SASL > > USED_SYM_FILES += libvirt_sasl.syms > > endif > > > > +if WITH_ATOMIC_OPS_PTHREAD > > +USED_SYM_FILES += libvirt_atomic.syms > > +endif > > + > > EXTRA_DIST += \ > > libvirt_public.syms \ > > libvirt_private.syms \ > > Incomplete. EXTRA_DIST also needs to list libvirt_atomic.syms. Opps, yes, will add. > > diff --git a/src/util/viratomic.c b/src/util/viratomic.c > > new file mode 100644 > > index 0000000..af846ff > > --- /dev/null > > +++ b/src/util/viratomic.c > > @@ -0,0 +1,35 @@ > > +/* > > > + > > +#ifdef VIR_ATOMIC_OPS_PTHREAD > > + > > +pthread_mutex_t virAtomicLock = PTHREAD_MUTEX_INITIALIZER; > > I guess since we know we are using pthread, we can directly use > pthread_mutex_t instead of our virThread wrappers in our thread.h. The main reason is to be able to get the static initializer, which we don't support, since there's no easy way to achieve it on Win32. (Mingw pthreads does some horrible hacks to try, which I didn't fancy doing) > > +++ b/src/util/viratomic.h > > @@ -1,10 +1,11 @@ > > /* > > * viratomic.h: atomic integer operations > > * > > - * Copyright (C) 2012 IBM Corporation > > + * Copyright (C) 2012 Red Hat, Inc. > > Normally, I question tossing out an old copyright. But this is such a > massive rewrite that I think you're safe. By the way, the diff was > horrible to read; I more or less ended up reviewing the final state of > the code, rather than the diff. My justification for this is that I deleted the existing file entirely, and copied in the new file from GLib. So any matching lines between old & new are just co-incidence > > +/** > > + * virAtomicIntSet: > > + * Sets the value of atomic to newval. > > + * > > + * This call acts as a full compiler and hardware memory barrier > > + * (after the set) > > + */ > > +void virAtomicIntSet(volatile int *atomic, > > + int newval) > > + ATTRIBUTE_NONNULL(1); > > Is it worth having this function return 'int' instead of 'void' to pass > back the just-set value, similar to C semantics of 'a = b = c' returning > the just-set value of 'b' when assigning to 'a'? Then again, you were > able to do the conversion without it, so probably not worth changing. I figure we can change this in the future if it proves to be needed. > > +/** > > + * virAtomicIntInc: > > + * Increments the value of atomic by 1. > > + * > > + * Think of this operation as an atomic version of > > + * { *atomic += 1; return *atomic; } > > Here we add then fetch; but with virAtomicIntAdd, we fetch then add. Is > the difference going to bite us? At least it is documented, so I can > review based on the documentation. It is annoying, but this lets us get optimal performance on Win32. If I made them consistent, then we'd be stuck with one of the Win32 impls being relatively inefficient :-( > > +++ b/src/util/virfile.c > > @@ -618,12 +618,11 @@ cleanup: > > #else /* __linux__ */ > > > > int virFileLoopDeviceAssociate(const char *file, > > - char **dev) > > + char **dev ATTRIBUTE_UNUSED) > > { > > virReportSystemError(ENOSYS, > > _("Unable to associate file %s with loop device"), > > file); > > - *dev = NULL; > > return -1; > > } > > This is a bogus hunk (conflict resolution gone wrong) Opps, yes. > > + virAssertCmpInt(u, ==, 5); > > + > > + virAtomicIntAdd(&u, 1); > > slightly stronger test as: > > virAssertCmpInt(virAtomicIntAdd(&u, 1), == 5); > > > + virAssertCmpInt(u, ==, 6); > > + > > + virAtomicIntInc(&u); > > slightly stronger test as: > > virAssertCmpInt(virAtomicIntInc(&u), ==, 7); Good suggestions > ACK once you fix src/Makefile.am and the configure.ac typo, and omit the > virfile.c hunk. The rest of my comments aren't show-stoppers, so feel > free to fix or ignore as suits you. Daniel -- |: -o- :| |: -o- :| |: -o- :| |: -o- :| | https://www.redhat.com/archives/libvir-list/2012-August/msg00074.html | CC-MAIN-2015-11 | refinedweb | 913 | 64.41 |
CUDA Device Management¶
For multi-GPU machines, users may want to select which GPU to use. By default the CUDA driver selects the fastest GPU as the device 0, which is the default device used by NumbaPro.
The features introduced on this page are generally not of interest unless working with systems hosting/offering more than one CUDA-capable GPU.
Device Selection¶
Device selection must be done before any CUDA feature is used.
from numbapro import cuda cuda.select_device(0)
The device can be closed by:
cuda.close()
Users can then create a new context with another device.
cuda.select_device(1) # assuming we have 2 GPUs
Note
Compiled functions are associated with the CUDA context. This makes it not very useful to close and create new devices, though it is certainly useful for choosing which device to use when the machine has multiple GPUs. | https://docs.continuum.io/numbapro/CUDADevice/ | CC-MAIN-2021-31 | refinedweb | 144 | 56.15 |
12 September 2012 12:19 [Source: ICIS news]
SINGAPORE (ICIS)--Demand for polyethylene (PE) resins in ?xml:namespace>
The
A prohibition on usage of plastic bags that are not 40 microns thick applies to most of India, prior to New Delhi government’s announcement on Tuesday.
It was not immediately clear whether the directive – to be enforced under environmental laws with punishment of up to five years in prison – would apply to black rubbish bags.
The new ban will curtail demand for PE film and coating grades, which are the raw materials in the production of plastic bags.
“This ban will for sure hit the plastic converting industry here,” a New Delhi-based trader said.
Apart from those based in
“We do sell our finished plastic bags to
“With this new ban, we can’t be sure whether there will be a decent growth this year,” an India PE producer said.
In the fiscal year ending March | http://www.icis.com/Articles/2012/09/12/9594964/new-ban-on-plastic-bags-in-new-delhi-to-hit-india-pe-demand.html | CC-MAIN-2013-48 | refinedweb | 157 | 66.47 |
As its opening sentence reminds the reader—a point often missed by many reviewers—the book Functional Programming in Scala is not a book about Scala. This [wise] choice occasionally manifests in peculiar ways.
For example, you can go quite far into the book implementing its exercises in languages with simpler type systems. Chapters 1–8 and 10 port quite readily to Java [8] and C#. So Functional Programming in Scala can be a very fine resource for learning some typed functional programming, even if such languages are all you have to work with. Within these chapters, you can remain blissfully unaware of the limitations imposed on you by these languages’ type systems.
However, there is a point of inflection in the book at chapter 11. You can pass through with a language such as OCaml, Scala, Haskell, PureScript, or one of a few others. However, users of Java, C#, F#, Elm, and many others may proceed no further, and must turn back here.
Here is where abstracting over type constructors, or “higher-kinded types”, comes into play. At this point in the book, you can give up, or proceed with a sufficiently powerful language. Let’s see how this happens.
The bread and butter of everyday functional programming, the “patterns” if you like, is the implementation of standard functional combinators for your datatypes, and more importantly the comfortable, confident use of these combinators in your program.
For example, confidence with
bind, also known as
>>= or
flatMap,
is very important. The best way to acquire this comfort is to
reimplement it a bunch of times, so Functional Programming in Scala
has you do just that.
def flatMap[B](f: A => List[B]): List[B] // in List[A] def flatMap[B](f: A => Option[B]): Option[B] // in Option[A] def flatMap[B](f: A => Either[E, B]): Either[E, B] // in Either[E, A] def flatMap[B](f: A => State[S, B]): State[S, B] // in State[S, A]
flatMaps are the same
The similarity between these functions’ types is the most obvious
surfacing of their ‘sameness’. (Unless you wish to count their names,
which I do not.) That sameness is congruent: when you write functions
using
flatMap, in any of the varieties above, these functions
inherit a sort of sameness from the underlying
flatMap combinator.
For example, supposing we have
map and
flatMap for a type, we can
‘tuple’ the values within.
def tuple[A, B](as: List[A], bs: List[B]): List[(A, B)] = as.flatMap{a => bs.map((a, _))} def tuple[A, B](as: Option[A], bs: Option[B]): Option[(A, B)] = as.flatMap{a => bs.map((a, _))} def tuple[E, A, B](as: Either[E, A], bs: Either[E, B]): Either[E, (A, B)] = as.flatMap{a => bs.map((a, _))} def tuple[S, A, B](as: State[S, A], bs: State[S, B]): State[S, (A, B)] = as.flatMap{a => bs.map((a, _))}
Functional Programming in Scala contains several such functions,
such as
sequence. These are each implemented for several types, each
time with potentially the same code, if you remember to look back and
try copying and pasting a previous solution.
In programming, when we encounter such great sameness—not merely similar code, but identical code—we would like the opportunity to parameterize: extract the parts that are different to arguments, and recycle the common code for all situations.
In
tuple’s case, what is different are
flatMapand
mapimplementations, and
List,
Option,
State[S, ...], what have you.
We have a way to pass in implementations; that’s just higher-order functions, or ‘functions as arguments’. For the type constructor, we need ‘type-level functions as arguments’.
def tuplef[F[_], A, B](fa: F[A], fb: F[B]): F[(A, B)] = ???
We’ve handled ‘type constructor as argument’, and will add the
flatMap and
map implementations in a moment. First, let’s learn
how to read this.
Confronted with a type like this, it’s helpful to sit back and muse on the nature of a function for a moment.
Functions are given meaning by substitution of their arguments.
def double(x: Int) = x + x
double remains “an abstraction” until we substitute for x; in
other words, pass an argument.
double(2) double(5) 2 + 2 5 + 5 4 10
But this isn’t enough to tell us what
double is; all we see from
these tests is that
double sometimes returns 4, sometimes 10,
sometimes maybe other things. We must imagine what
double does in
common for all possible arguments.
Likewise, we give meaning to type-parameterized definitions like
tuplef by substitution. The parameter declaration
F[_] means that
F may not be a simple type, like
Int or
String, but instead a
one-argument type constructor, like
List or
Option. Performing
these substitutions for
tuplef, we get
// original, as above def tuplef[F[_], A, B](fa: F[A], fb: F[B]): F[(A, B)] // F = List def tupleList[A, B](fa: List[A], fb: List[B]): List[(A, B)] // F = Option def tupleOpt[A, B](fa: Option[A], fb: Option[B]): Option[(A, B)]
More complicated and powerful cases are available with other kinds of
type constructors, such as by partially applying. That’s how we can
fit
State,
Either, and other such types with two or more
parameters into the
F parameter.
// F = Either[E, ...] def tupleEither[E, A, B](fa: Either[E, A], fb: Either[E, B]) : Either[E, (A, B)] // F = State[S, ...] def tupleState[S, A, B](fa: State[S, A], fb: State[S, B]) : State[S, (A, B)]
Just as with
double, though this isn’t the whole story of
tuplef,
its true meaning arises from the common way in which it treats all
possible
F arguments. That is where higher kinds start to get
interesting.
The type of
tuplef expresses precisely our intent—the idea of
“multiplying” two
Fs, tupling the values within—but cannot be
implemented as written. That’s because we don’t have functions that
operate on
F-constructed values, like
fa: F[A] and
fb: F[B]. As
with any value of an ordinary type parameter, these are opaque.
In Scala, there are a few ways to pass in the necessary functions. One
option is to implement a
trait or
abstract class that itself uses
a higher-kinded type parameter or abstract type constructor. Here are
a couple possibilities.
trait Bindable[F[_], +A] { def map[B](f: A => B): F[B] def flatMap[B](f: A => F[B]): F[B] } trait BindableTM[+A] { type F[X] def map[B](f: A => B): F[B] def flatMap[B](f: A => F[B]): F[B] }
Note that we must use higher-kinded trait type signatures to support
our higher-kinded method types; otherwise, we can’t write the return
types for
map and
flatMap.
trait BindableBad[F] { def map[B](f: A => B): F ??? // where is the B supposed to go?
Now we make every type we’d like to support either inherit from or
implicitly convert to
Bindable, such as
List[+A] extends
Bindable[List, A], and write
tuplef as follows.
def tupleBindable[F[_], A, B](fa: Bindable[F, A], fb: Bindable[F, B]) : F[(A, B)] = fa.flatMap{a => fb.map((a, _))}
There are two major problems with
Bindable’s representation of
map
and
flatMap, ensuring its wild unpopularity in the Scala functional
community, though it still appears in some places, such as
in Ermine.
Fis declared in the method type parameters above.
Bindableis required to have the
Fparameter infer correctly, and even how many calls to
Bindablemethods are performed. For example, we’d have to declare the
Fparameter as
F[X] <: Bindable[F, X]if we did one more trailing
mapcall. But then we wouldn’t support implicit conversion cases anymore, so we’d have to do something else, too.
As a result of all this magic, generic functions over higher kinds
with OO-style operations tend to be ugly; note how much
tuplef
looked like the
List-specific type, and how little
tupleBindable
looks like either of them.
But we still really, really want to be able to write this kind of generic function. Luckily, we have a Wadler-made alternative.
To constrain
F to types with the
flatMap and
map we need, we use
typeclasses instead. For
tuplef, that means we leave
F abstract,
and leave the types of
fa and
fb as well as the return type
unchanged, but add an implicit argument, the “typeclass instance”,
which is a first-class representation of the
map and
flatMap
operations.
trait Bind[F[_]] { // note the new ↓ fa argument def map[A, B](fa: F[A])(f: A => B): F[B] def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B] }
Then we define instances for the types we’d like to have this on:
Bind[List],
Bind[Option], and so on, as seen in chapter 11 of
Functional Programming in Scala.
Now we just add the argument to
tuplef.
def tupleTC[F[_], A, B](fa: F[A], fb: F[B]) (implicit F: Bind[F]): F[(A, B)] = F.flatMap(fa){a => F.map(fb)((a, _))}
We typically mirror the typeclass operations back to methods with an
implicit conversion—unlike with
Bindable, this has no effect on
exposed APIs, so is benign. Then, we can remove the
implicit F
argument, replacing it by writing
F[_]: Bind in the type argument
list, and write the method body as it has been written before, with
flatMap and
map methods.
There’s another major reason to prefer typeclasses, but let’s get back to Functional Programming in Scala.
I’ve just described many of the practical mechanics of writing useful functions that abstract over type constructors, but all this is moot if you cannot abstract over type constructors. The fact that Java provides no such capability is not an indicator that they have sufficient abstractions to replace this missing feature: it is simply an abstraction that they do not provide you.
Oh, you would like to factor this common code? Sorry, you are stuck. You will have to switch languages if you wish to proceed.
map functions are obvious candidates for essential parts of a usable
library for functional programming. This is the first-order
abstraction—it eliminates the concrete loops, recursive functions,
or
State lambda specifications, you would need to write otherwise.
When we note a commonality in patterns and define an abstraction over that commonality, we move “one order up”. When we stopped simply defining functions, and started taking functions as arguments, we moved from the first order to the second order.
It is not enough for a modern general-purpose functional library in
Scala to simply have a bunch of
map functions. It must also provide
the second-order feature: the ability to abstract over
map
functions, as well as many, many other functions numerous type
constructors have in common. Let’s not give up; let’s move forward.
This article was tested with Scala 2.11.7 and
fpinscala 5b0115a answers,
with the addition of the method variants of
List#map and
List#flatMap.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2016/08/21/hkts-moving-forward.html | CC-MAIN-2018-39 | refinedweb | 1,887 | 60.85 |
I’ve written a small script to understand how copy and paste on a screen might work.
The script runs successfully with the expected results on a Windows 10 machine, but it does not work on my MacBook air running Mojave 10.14.6. There are no errors detected, but neither the copy or paste work on the Mac. After running the script, when I manually paste the clipboard, it contains whatever it contained before running the script.
On the Mac I’ve tried using both Keys.CONTROL and Keys.COMMAND. (Which is the correct one?)
Any ideas?
import org.openqa.selenium.Keys as Keys
WebUI.doubleClick(findTestObject(‘Page_tC Create/h1_Login’)) //double click text on a label to highlight it
WebUI.sendKeys(null, Keys.chord(Keys.CONTROL, ‘c’))
WebUI.click(findTestObject(‘Page_tC Create/input__username’))
WebUI.sendKeys(findTestObject(‘Page_tC Create/input__username’), Keys.chord(Keys.CONTROL, ‘v’)) //Paste the clipboard into the username text box | https://forum.katalon.com/t/copy-and-paste-from-screen-works-in-windows-but-not-on-mac/47231 | CC-MAIN-2020-40 | refinedweb | 153 | 61.83 |
This. I’ll show that a pretrained policy can beat intermediate/advanced human players as well as some of the popular online checkers engines. In later posts we’ll take this policy and improve it through self play.
Problem statement
In reinforcement learning, the goal is to map states, or best estimates of states, to actions that achieve some set of objectives, optimally or near optimally. States represent an environment, and actions are taken in that environment so as to maximize some reward function. States can be continous or discrete. Continous state spaces are encountered in robotics settings, whereas for computer board games we use discrete state spaces to represent the environment. In both cases the policy is defined as the mapping from states to actions, and is often represented by the conditional probability distribution \( p(action | state) \). Unlike real-world reinforcement learning problems, checkers is a game of perfect information, and so we don’t need to infer or estimate the state. This greatly reduces the complexity of the problem, but unfortunately the state space is still much too large to store on any reasonably sized/priced memory or disk hardware. The goal is therefore to use a parametric model \( p(action | state, \theta) \), where \( \theta \) represents the model’s trainable parameters, to learn good strategy. In order to initialize such a policy with good checkers playing behavior, we can train it on human expert moves. This is a supervised learning problem, where we use examples in a database drawn from the joint probability distribution \( p(expert-human-action, board) \) to train our neural network policy \( p(action | state, \theta) \).
Training database
The first task is to identify a database of moves on which to train the policy. One option would be to mine data from an online checkers websites. While this could be done, it would require a lot of data preprocessing. Although online game websites collect a lot of data, one issue is that the data is high variance. We instead prefer to train our engine only on expert play. Luckily, Martin Fierz has an extensive database of ~22k expert-level games which can be found here. This databse consists of professional checkers tournament play that took place in 19th century Great Britain. The database was meticuously transcribed (by hand) with very few errors. We’re going to use this dataset. The database is a text file consisting of a cascade of games encoded as follows:
[Event "Manchester 1841"] [Date "1841-??-??"] [Black "Moorhead, W."] [White "Wyllie, J."] [Site "Manchester"] [Result "0-1"] 1. 11-15 24-20 2. 8-11 28-24 3. 9-13 22-18 4. 15x22 25x18 5. 4-8 26-22 6. 10-14 18x9 7. 5x14 22-18 8. 1-5 18x9 9. 5x14 29-25 10. 11-15 24-19 11. 15x24 25-22 12. 24-28 22-18 13. 6-9 27-24 14. 8-11 24-19 15. 7-10 20-16 16. 11x20 18-15 17. 2-6 15-11 18. 12-16 19x12 19. 10-15 11-8 20. 15-18 21-17 21. 13x22 30-26 22. 18x27 26x17x10x1 0-1
The parser.py script in my github repository walks through this file and saves each of its 440k+ board-move pairs into a dict. The dict is then serialized for later use.
Model architecture
To encode the board state, I use the map (-3: opp king, -1: opp chkr, 0: empty, 1: own chkr, 3: own king) to represent pieces on the board. Other entries might work just as well; as a general rule you should make the entries quasi-equidistant, and stay away from machine precision. The input layer of the neural network, \( x \), can consist of either an 8x4 representation (for a CNN) or 32x1 (for a feedforward NN) depending on our choice of network. In order to capture both highly localized features (i.e., an available jump) and high level patterns, like multiple jumps, proximity to kings row, and piece differential, I feed the input into three separate layers: a convolutional layer with a 2x2 kernel, and convolutional layer with a 3x3 kernel, and a fully dense feed forward layer. These networks are first trained independently on the dataset, and then their last hidden layers are concatenated and fed into two dense layers that in turn map to a softmax output. The full network is trained end to end on the same database following concatenation of the individual nets. The figure below illustrates the network architecture.
The output layer (\( h(x) \)) consists of a location and direction represented by a 32x4 matrix with binary entries, which we then unravel into a 128x1 one-hot encoded output. You’ll note that this will produces a total of 30 invalid moves at edge and corner positions; our model will need to learn this. The Board class definition is given in board.py. The board object stores board states for both players during the game by switching the input positions for the ‘black’ pieces (by rotating the board 180 deg). If you run parser.py (after downloading the raw text file, OCA_2.0.pdn) it will save a .pickle file into the local directory which you will need for the training step.
The convolutional neural network that I chose for this task is below. It accepts an 8x4 input (\( x \)), and returns a 128 dimensional output (\( h(x) \)).
The algebraic form of the convolution operation, which maps layers \( l=0 \) and \( l=1 \) to layers \( l=1 \) to \( l=2 \), respectively, is given by:
\begin{equation} a_{mij}^{l} = \left( \textbf{w}_{m}^{l} * \textbf{z}_{m}^{* \ l-1} + \textbf{b}_{m}^{l} \right)_{ij} \leftarrow b_{mij}^{l} + \sum_{n}\sum_{q} z_{m, \ i-n, \ j-q}^{* \ l-1}w_{mnq}^{l} \end{equation}
where \( a_{mij}^{l} \) represents the element \( \left(i,j\right) \) in the \( m^{th} \) feature map in layer \( l \), \( \textbf{w}_{m}^{l} \) is the kernel (of size 2x2) corresponding to the \( m^{th} \) feature map in layer \( l \), \( \textbf{b}_{m}^{l} \) is the bias terms corresponding in the \( m^{th} \) feature map in layer \( l \), and \( \textbf{z}_{m}^{l-1} \) is equal to \( \textbf{x} \) for \( l=0 \), and to \( max(0, a_{mij}^{l-1}) \) (i.e., the rectified linear unit) for \( l=1 \). The operator * denotes the convolution between the kernel and its input, while \( \textbf{z}^{*} \) denotes a matrix representation of \( \textbf{z} \) that has been padded with a box of zeros around its perimeter. Padding is a technique that is often used to preserve information at the corners and edges of the input space. Here we implement something known as ‘same’ padding along with a kernel-stride of (1,1).
The next operation in the network vectorizes \( \textbf{z}_{m}^{\ l=2} \), unravelling it by row, column, and then feature map (i.e., C-order). Two fully dense mappings are then applied in successsion to yield the output at layer \( l=4 \). The algebraic form of this mapping function is given by:
\begin{equation} \textbf{z}^{l} = \sigma^{l} \left( \textbf{w}^{l} \textbf{z}^{\ l-1} + \textbf{b}^{\ l} \right) \end{equation}
where \( \sigma^{l=3} \) is a rectified linear unit, and \( \sigma^{l=4} \) is the softmax function given by:
\begin{equation} \sigma^{l=4}(z^{l=4})_{i} = \frac{e^{z_{i}^{l=4}}}{\sum_{j=0}^{n} e^{z_{j}^{l=4}}} \end{equation}
The softmax function is a normalized exponential of the outputs from layer 4, and thus can be interpreted as a probability distribution over the output space. The model prediction is given by:
\begin{equation} \mathsf{predicted \ move} = argmax\left( \sigma^{l=4}(\textbf{z}^{l=4}) \right) \end{equation}
where \( n = 128 \), the number of output labels.
Training the model
In order to train efficiently, it is important choose an appropriate loss function (\( L \)). Here I use categorical cross-entropy with l2 regularization, which is computed as follows:
\begin{equation} L(h(x)) = -\frac{1}{n}\sum_{x} \left( yln(h(x)) + (1-y)ln(1-h(x)) \right) + \lambda\left( \sum_{l=1}^{2} \sum_{m=0}^{M} \lVert \textbf{w}_{m}^{l} \rVert + \sum_{l=3}^{4} \lVert \textbf{w}^{l} \rVert \right) \end{equation}
where \( y \) denotes the ground truth label, and \( \lambda \) is the regularization constant. While classification error is a binary measure, this function is a measure of ‘closeness’, which produces better gradients and thus allows the model to converge more efficiently. To take advantage of this, we skip Equation 4 and plug the softmax activations directly into this loss function (only when computing the loss during training).
I used Tensorflow to implement the forward version of this network, along with the backpropogation and stochastic gradient descent steps that optimize its parameters. I plan to derive the backprop and update steps in the near future–for now I’ll just show how they are implemented in Tensorflow.
import os import numpy as np import pandas as pd from six.moves import cPickle as pickle import tensorflow as tf import read_pickle as rp import time def accuracy(preds, labs): n = preds.shape[0] acc_vec = np.ndarray([n, 1]) for i in range(n): acc_vec[i, 0] = sum(preds[i, :].astype(int) * labs[i, :].astype(int)) # print(acc_vec[i, 0]) assert acc_vec[i, 0] in range(0, 6) acc_score = list() for j in range(1, 6): percentage = len(np.argwhere(acc_vec == j)) / float(n) acc_score.append(round(percentage, 4)) return acc_score def deepnet(num_steps, lambda_loss, dropout_L1, dropout_L2, ckpt_dir): # Computational graph graph = tf.Graph() with graph.as_default(): # Inputs tf_xTr = tf.placeholder(tf.float32, shape=[batch_size, board_height, board_width, num_channels]) tf_yTr = tf.placeholder(tf.float32, shape=[batch_size, label_height * label_width]) tf_xTe = tf.constant(xTe) tf_xTr_full = tf.constant(xTr) # Variables w1 = tf.Variable(tf.truncated_normal([patch_size, patch_size, num_channels, depth], stddev=0.1), name='w1') b1 = tf.Variable(tf.zeros([depth]), name='b1') w2 = tf.Variable(tf.truncated_normal([patch_size, patch_size, depth, depth], stddev=0.1), name='w2') b2 = tf.Variable(tf.zeros([depth]), name='b2') w3 = tf.Variable(tf.truncated_normal([board_height * board_width * depth, num_nodes_layer3], stddev=0.1), name='w3') b3 = tf.Variable(tf.zeros([num_nodes_layer3]), name='b3') w4 = tf.Variable(tf.truncated_normal([num_nodes_layer3, num_nodes_output], stddev=0.1), name='w4') b4 = tf.Variable(tf.zeros([num_nodes_output]), name='b4') # Train def model(xtrain, dropout_switch): # First convolutional layer c1 = tf.nn.conv2d(xtrain, w1, strides=[1, 1, 1, 1], padding='SAME') h1 = tf.nn.relu(c1 + b1) h1_out = tf.nn.dropout(h1, 1 - dropout_L1 * dropout_switch) # maxpool1 = tf.nn.max_pool(h1_out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Second convolutional layer c2 = tf.nn.conv2d(h1_out, w2, strides=[1, 1, 1, 1], padding='SAME') h2 = tf.nn.relu(c2 + b2) h2_out = tf.nn.dropout(h2, 1 - dropout_L1 * dropout_switch) # maxpool2 = tf.nn.max_pool(h2_out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Reshape for fully connected layer h2_shape = xtrain.get_shape().as_list() h2_out_vec = tf.reshape(h2_out, shape=[h2_shape[0], board_height * board_width * depth]) # First fully connected layer y3 = tf.matmul(h2_out_vec, w3) + b3 h3 = tf.nn.relu(y3) h3_out = tf.nn.dropout(h3, 1 - dropout_L2 * dropout_switch) # Model output return tf.matmul(h3_out, w4) + b4 logits = model(tf_xTr, 1) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_yTr)) loss += lambda_loss * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2) + tf.nn.l2_loss(w3)) # Optimizer (Built into tensor flow, based on gradient descent) batch = tf.Variable(0) learning_rate = tf.train.exponential_decay(0.01, batch * batch_size, nTr, 0.95, staircase=True) optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss, global_step=batch) # Predictions for the training, validation, and test data preds_Tr = tf.nn.softmax(model(tf_xTr_full, 0)) preds_Te = tf.nn.softmax(model(tf_xTe, 0)) # Feed data into the graph, run the model with tf.Session(graph=graph) as session: var_dict = {'w1': w1, 'b1': b1, 'w2': w2, 'b2': b2, 'w3': w3, 'b3': b3, 'w4': w4, 'b4': b4, } saver = tf.train.Saver(var_dict) # Run model tf.initialize_all_variables().run() print('Graph initialized ...') t = time.time() for step in range(num_steps): offset = (step * batch_size) % (nTr - batch_size) batch_data = xTr[offset:(offset + batch_size), :] batch_labels = yTr[offset:(offset + batch_size), :] feed_dict = {tf_xTr: batch_data, tf_yTr: batch_labels} _ = session.run([optimizer], feed_dict=feed_dict) if step % 5000 == 0: l, preds_Train, preds_Test = session.run([loss, preds_Tr, preds_Te], feed_dict=feed_dict) # Find max and set to 1, else 0 for i in range(nTr): ind_Tr = np.argsort(preds_Train[i, :])[::-1][:5] preds_Train[i, :] = 0 for j in range(1, 6): preds_Train[i, ind_Tr[j - 1]] = j for i in range(nTe): ind_Te = np.argsort(preds_Test[i, :])[::-1][:5] preds_Test[i, :] = 0 for j in range(1, 6): preds_Test[i, ind_Te[j - 1]] = j acc_Tr = accuracy(preds_Train, yTr) acc_Te = accuracy(preds_Test, yTe) print('Minibatch loss at step %d: %f' % (step, l)) print('Training accuracy of top 5 probabilities: %s' % acc_Tr) print('Testing accuracy of top 5 probabilities: %s' % acc_Te) print('Time consumed: %d minutes' % ((time.time() - t) / 60.)) saver.save(session, ckpt_dir + 'model.ckpt', global_step=step + 1) elif step % 500 == 0: print('Step %d complete ...' % step) # if step == 10000: # break print('Training complete.') if __name__ == '__main__': # Define batch size for SGD, and network architecture batch_size = 128 num_channels = 1 patch_size = 2 depth = 32 num_nodes_layer3 = 1024 num_nodes_output = 128 board_height = 8 board_width = 4 label_height = 32 label_width = 4 # Extract training data into win_dict, loss_dict, and draw_dict trainset_file = 'checkers_library_full_v2.pickle' win_dict, loss_dict, draw_dict = rp.read_pickle(trainset_file) print('Finished loading data.') # Create numpy arrays xTr(nx8x4) and yTr(nx32x4), where n = number of training examples data_list = list() labels_list = list() for dictionary in [win_dict, loss_dict, draw_dict]: for key in dictionary: data_list.append(dictionary[key][0].as_matrix()) labels_list.append(dictionary[key][1].as_matrix()) data = np.reshape(np.array(data_list, dtype=int), (-1, board_height, board_width, num_channels)) labels = np.array(labels_list, dtype=int) # Randomize order since incoming data is structured into win, loss, draw n = len(data_list) assert n == len(labels_list) ind = np.arange(n) np.random.shuffle(ind) data, labels = data[ind, :, :], labels[ind, :, :] # Vectorize the inputs and labels data = data.reshape((-1, board_height, board_width)).astype(np.float32) labels = labels.reshape((-1, label_height * label_width)).astype(np.float32) # Split x, y into training, cross validation, and test sets test_split = 0.35 nTe = int(test_split * n) nTr = n - nTe xTe, yTe = data[:nTe, :, :], labels[:nTe, :] xTr, yTr = data[nTe:, :, :], labels[nTe:, :] assert n == nTr + nTe del data, labels # Reshape data xTr = np.reshape(xTr, (-1, board_height, board_width, num_channels)) xTe = np.reshape(xTe, (-1, board_height, board_width, num_channels)) param_dir = 'parameters_v2/convnet_100k_full_no_reg/' deepnet(num_steps=150001, lambda_loss=0, dropout_L1=0, dropout_L2=0, ckpt_dir=param_dir)
This code passes 150,000 mini-batches of 128 samples each to the CNN for training. The figure below shows the training accuracy plotted versus training step. We see that the model captures about 66% of moves with it’s top prediction after ~40 epochs through the training set (note: I didn’t use cross-validation).
There are several interesting questions that we could ask from here, for example, what is the variance in the dataset? How does the variance change as a function of board density/sparsity (i.e. is it heteroscedastic)? Intuitively we know that there is variance–players play with different strategies. I would also posit that the variance among early boards is lower than mid-late boards. Knowing the mean variance would be very useful as it would give us a lower bound on test error. However, obtaining the distribution of variance over the entire input space is complicated by a lack of duplicate board examples in the mid-late game scenarios. As we’ll see, the CNN’s performance in the early game is not under question; its late game performance on the other hand is sporadic, and this is a limitation imposed by the size of our dataset. I’m jumping ahead here, but just want to point out that proper evaluation of this model is complicated by several issues.
The good news is that the CNN does capturing collective strategy from the data; it has an error of ~3.45% when we consider its top-5 outputs (i.e., softmax activations). In the future I plan to add a search algorithm (e.g., minimax) and determine the required search breadth/depth for different levels of play. For now though, it is interesting to examine how it performs against humans and other checkers engines without any notion of ‘how’ to play checkers, or of its rules.
The Engine
The program shown below facilitates gameplay between a human and the predictive model. The engine makes calls to the model and to the human player (via a shell-based GUI) and then uses the aforementioned Board class to keep track of board states. When the computer makes a move, the engine displays the top-5 predictions from the model. Because the model has no explicit instruction of ‘how’ to play Checkers (the rules can be found here), the engine also ensures that each move is valid (by querying the ranked predictions until it finds a valid one). I have observed invalid predictions on a few occasions. At every computer move, the engine also checks for hops (and subsequent hops). I’ve found that the model outputs hops corrently in the vast majority of cases. The code for this engine is given below.
import numpy as np import pandas as pd import predict_move class Board(object): global jumps, empty, odd_list def __init__(self): self.state = pd.read_csv(filepath_or_buffer='board_init.csv', header=-1, index_col=None) self.invalid_attempts = 0 def board_state(self, player_type): if player_type == 'white': return -self.state.iloc[::-1, ::-1] elif player_type == 'black': return self.state def print_board(self): print ' 32 31 30 29' print '28 27 26 25' print ' 24 23 22 21' print '20 19 18 17' print ' 16 15 14 13' print '12 11 10 09' print ' 08 07 06 05' print '04 03 02 01' print '\n' for j in range(8): for i in range(4): if j % 2 == 0: print ' ', if self.state[3 - i][7 - j] == 1: print 'x', elif self.state[3 - i][7 - j] == 3: print 'X', elif self.state[3 - i][7 - j] == 0: print '-', elif self.state[3 - i][7 - j] == -1: print 'o', else: print 'O', if j % 2 != 0: print ' ', print '' def find_jumps(self, player_type): valid_jumps = list() if player_type == 'black': king_value = black_king chkr_value = black_chkr chkr_directions = [1, 2] else: king_value = white_king chkr_value = white_chkr chkr_directions = [0, 3] board_state = self.state.as_matrix() board_state = np.reshape(board_state, (32,)) for position in range(32): piece = board_state[position] neighbors_list = neighbors[position] next_neighbors_list = next_neighbors[position] if piece == chkr_value: for direction in chkr_directions: neighbor = neighbors_list[direction] next_neighbor = next_neighbors_list[direction] if neighbor == iv or next_neighbor == iv: pass elif board_state[next_neighbor] == empty and (board_state[neighbor] == -chkr_value or board_state[neighbor] == -king_value): valid_jumps.append([position, next_neighbor]) elif piece == king_value: for direction in range(4): neighbor = neighbors_list[direction] next_neighbor = next_neighbors_list[direction] if neighbor == iv or next_neighbor == iv: pass elif board_state[next_neighbor] == empty and (board_state[neighbor] == -chkr_value or board_state[neighbor] == -king_value): valid_jumps.append([position, next_neighbor]) return valid_jumps def get_positions(self, move, player_type): # Extract starting position, and direction to move ind = np.argwhere(move == 1)[0] position = ind[0] direction = ind[1] jumps_available = self.find_jumps(player_type=player_type) neighbor = neighbors[position][direction] next_neighbor = next_neighbors[position][direction] if [position, next_neighbor] in jumps_available: return position, next_neighbor, 'jump' else: return position, neighbor, 'standard' def generate_move(self, player_type, output_type): board_state = self.state moves, probs = predict_move.predict_cnn(board_state, output=output_type, params_dir=params_dir) moves_list = list() for i in range(1, 6): ind = np.argwhere(moves == i)[0] move = np.zeros([32, 4]) move[ind[0], ind[1]] = 1 pos_init, pos_final, move_type = self.get_positions(move, player_type=player_type) moves_list.append([pos_init, pos_final]) return moves_list, probs def update(self, positions, player_type, move_type): # Extract the initial and final positions into ints [pos_init, pos_final] = positions[0], positions[1] if player_type == 'black': king_pos = black_king_pos king_value = black_king chkr_value = black_chkr pos_init, pos_final = int(pos_init), int(pos_final) else: king_pos = white_king_pos king_value = white_king chkr_value = white_chkr pos_init, pos_final = int(pos_init) - 1, int(pos_final) - 1 # print(pos_init, pos_final) board_vec = self.state.copy() board_vec = np.reshape(board_vec.as_matrix(), (32,)) if (board_vec[pos_init] == chkr_value or board_vec[pos_init] == king_value) and board_vec[pos_final] == empty: board_vec[pos_final] = board_vec[pos_init] board_vec[pos_init] = empty # Assign kings if pos_final in king_pos: board_vec[pos_final] = king_value # Remove eliminated pieces if move_type == 'jump': eliminated = int(jumps.iloc[pos_init, pos_final]) print('Position eliminated: %d' % (eliminated + 1)) assert board_vec[eliminated] == -chkr_value or -king_value board_vec[eliminated] = empty # Update the board board_vec = pd.DataFrame(np.reshape(board_vec, (8, 4))) self.state = board_vec return False else: return True def play(): # Alpha-numeric encoding of player turn: white = 1, black = -1 turn = -1 # Count number of invalid move attempts invalid_move_attempts = 0 jumps_not_predicted = 0 move_count = 0 game_aborted = False # Initialize board object board = Board() # Start game raw_input("To begin, press Enter:") end_game = False winner = '' while True: # White turn if turn == 1: print('\n' * 2) print('=======================================================') print("White's turn") board.print_board() move_illegal = True while move_illegal: # Prompt player for input move = raw_input("Enter move as 'pos_init, pos_final':") if move[::-1][:4][::-1] == 'wins': winner = move[0:len(move) - 5] end_game = True break elif move == 'draw': winner = 'draw' end_game = True break else: move = move.split(',') for i in range(len(move) - 1): pos_init = int(move[i]) pos_final = int(move[i + 1]) if abs(pos_final - pos_init) > 5: move_type = 'jump' else: move_type = 'standard' move_illegal = board.update(positions=[pos_init, pos_final], player_type='white', move_type=move_type) if move_illegal: print('That move is invalid, please try again.') else: print("White move: %s" % [pos_init, pos_final]) # Black turn elif turn == -1: print('\n' * 2) print('=======================================================') print("Black's turn") board.print_board() player_type = 'black' # Call model to generate move moves_list, probs = board.generate_move(player_type=player_type, output_type='top-5') print(np.array(moves_list) + 1) print(probs) # Check for available jumps, cross check with moves available_jumps = board.find_jumps(player_type=player_type) first_move = True # Handles situation where there is a jump available to black if len(available_jumps) > 0: move_type = 'jump' jump_available = True while jump_available: # For one jump available if len(available_jumps) == 1: count = 1 move_predicted = False for move in moves_list: if move == available_jumps[0]: # print("There is one jump available. This move was choice %d." % count) move_predicted = True break else: count += 1 if not move_predicted: # print('Model did not output the available jumps. Forced move.') jumps_not_predicted += 1 initial_position = available_jumps[0][0] if not (first_move or final_position == initial_position): break final_position = available_jumps[0][1] initial_piece = np.reshape(board.state.as_matrix(), (32,))[initial_position] move_illegal = board.update(available_jumps[0], player_type=player_type, move_type=move_type) if move_illegal: # print('Find Jumps function returned invalid move: %s' % (np.array(available_jumps[0]) + 1)) game_aborted = True else: print("Black move: %s" % (np.array(available_jumps[0]) + 1)) available_jumps = board.find_jumps(player_type=player_type) final_piece = np.reshape(board.state.as_matrix(), (32,))[final_position] if len(available_jumps) == 0 or final_piece != initial_piece: jump_available = False # When diffent multiple jumps are available else: move_predicted = False for move in moves_list: if move in available_jumps: initial_position = move[0] if not (first_move or final_position == initial_position): break final_position = move[1] initial_piece = np.reshape(board.state.as_matrix(), (32,))[initial_position] move_illegal = board.update(move, player_type=player_type, move_type=move_type) if move_illegal: pass # print('Model and Find jumps function predicted an invalid move: %s' % (np.array(move) + 1)) else: print("Black move: %s" % (np.array(move) + 1)) move_predicted = True available_jumps = board.find_jumps(player_type=player_type) final_piece = np.reshape(board.state.as_matrix(), (32,))[final_position] if len(available_jumps) == 0 or final_piece != initial_piece: jump_available = False break if not move_predicted: # print('Model did not output any of the available jumps. Move picked randomly among valid options.') jumps_not_predicted += 1 ind = np.random.randint(0, len(available_jumps)) initial_position = available_jumps[ind][0] if not (first_move or final_position == initial_position): break final_position = available_jumps[ind][1] initial_piece = np.reshape(board.state.as_matrix(), (32,))[initial_position] move_illegal = board.update(available_jumps[ind], player_type=player_type, move_type=move_type) if move_illegal: # print('Find Jumps function returned invalid move: %s' % (np.array(available_jumps[ind]) + 1)) game_aborted = True else: available_jumps = board.find_jumps(player_type=player_type) final_piece = np.reshape(board.state.as_matrix(), (32,))[final_position] if len(available_jumps) == 0 or final_piece != initial_piece: jump_available = False first_move = False # For standard moves else: move_type = 'standard' move_illegal = True while move_illegal: count = 1 for move in moves_list: move_illegal = board.update(move, player_type=player_type, move_type=move_type) if move_illegal: # print('model predicted invalid move (%s)' % (np.array(move) + 1)) print(probs[count - 1]) invalid_move_attempts += 1 count += 1 else: print('Black move: %s' % (np.array(move) + 1)) break if move_illegal: game_aborted = True print("The model failed to provide a valid move. Game aborted.") print(np.array(moves_list) + 1) print(probs) break if game_aborted: print('Game aborted.') break if end_game: print('The game has ended') break move_count += 1 turn *= -1 # Print out game stats end_board = board.state.as_matrix() num_black_chkr = len(np.argwhere(end_board == black_chkr)) num_black_king = len(np.argwhere(end_board == black_king)) num_white_chkr = len(np.argwhere(end_board == white_chkr)) num_white_king = len(np.argwhere(end_board == white_king)) if winner == 'draw': print('The game ended in a draw.') else: print('%s wins' % winner) print('Total number of moves: %d' % move_count) print('Remaining white pieces: (checkers: %d, kings: %d)' % (num_white_chkr, num_white_king)) print('Remaining black pieces: (checkers: %d, kings: %d)' % (num_black_chkr, num_black_king)) if __name__ == '__main__': # Define board entries and valid positions empty = 0 black_chkr = 1 black_king = 3 black_king_pos = [28, 29, 30, 31] white_chkr = -black_chkr white_king = -black_king white_king_pos = [0, 1, 2, 3] valid_positions = range(32) odd_list = [0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, 24, 25, 26, 27] even_list = [4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, 28, 29, 30, 31] jumps = pd.read_csv(filepath_or_buffer='jumps.csv', header=-1, index_col=None) params_dir = 'parameters/convnet_150k_full/model.ckpt-150001' # Entries for neighbors are lists, with indices corresponding to direction as defined in parser_v7.py ... iv = '' neighbors = {0: [iv, 5, 4, iv], 1: [iv, 6, 5, iv], 2: [iv, 7, 6, iv], 3: [iv, iv, 7, iv], 4: [0, 8, iv, iv], 5: [1, 9, 8, 0], 6: [2, 10, 9, 1], 7: [3, 11, 10, 2], 8: [5, 13, 12, 4], 9: [6, 14, 13, 5], 10: [7, 15, 14, 6], 11: [iv, iv, 15, 7], 12: [8, 16, iv, iv], 13: [9, 17, 16, 8], 14: [10, 18, 17, 9], 15: [11, 19, 18, 10], 16: [13, 21, 20, 12], 17: [14, 22, 21, 13], 18: [15, 23, 22, 14], 19: [iv, iv, 23, 15], 20: [16, 24, iv, iv], 21: [17, 25, 24, 16], 22: [18, 26, 25, 17], 23: [19, 27, 26, 18], 24: [21, 29, 28, 20], 25: [22, 30, 29, 21], 26: [23, 31, 30, 22], 27: [iv, iv, 31, 23], 28: [24, iv, iv, iv], 29: [25, iv, iv, 24], 30: [26, iv, iv, 25], 31: [27, iv, iv, 26] } next_neighbors = {0: [iv, 9, iv, iv], 1: [iv, 10, 8, iv], 2: [iv, 11, 9, iv], 3: [iv, iv, 10, iv], 4: [iv, 13, iv, iv], 5: [iv, 14, 12, iv], 6: [iv, 15, 13, iv], 7: [iv, iv, 14, iv], 8: [1, 17, iv, iv], 9: [2, 18, 16, 0], 10: [3, 19, 17, 1], 11: [iv, iv, 18, 2], 12: [5, 21, iv, iv], 13: [6, 22, 20, 4], 14: [7, 23, 21, 5], 15: [iv, iv, 22, 6], 16: [9, 25, iv, iv], 17: [10, 26, 24, 8], 18: [11, 27, 25, 9], 19: [iv, iv, 26, 10], 20: [13, 29, iv, iv], 21: [14, 30, 28, 12], 22: [15, 31, 29, 13], 23: [iv, iv, 30, 14], 24: [17, iv, iv, iv], 25: [18, iv, iv, 16], 26: [19, iv, iv, 17], 27: [iv, iv, iv, 18], 28: [21, iv, iv, iv], 29: [22, iv, iv, 20], 30: [23, iv, iv, 21], 31: [iv, iv, iv, 22] } play()
Engine performance versus humans
I showed above that this model can learn individual moves, but the more interesting question is whether or not it exhibits the high-level strategy that is required to win games. Below are tabulated results from four games against human opponents. One of the players was relatively inexperienced at checkers and the CNN engine won the game easily, maintaining a clear advantage throughout the game. The second individual plays at an intermediate level, and was also defeated by the CNN, but after a more lengthy game of 100 moves. The other two human players play somewhere between intermediate and advanced, and were able to overcome the CNN, but not easily.
One of the interesting findings was that the CNN engine does have the ability to generalize and play sound checkers on board states it has never seen before (sometimes, that is). An example of this was the endgame between the CNN and the the intermediate level human. The game-ending sequence is shown below. On move 95, the CNN had a clear opportunity to defend it’s own checker by moving from position 23 to 27, but instead moved in the opposite direction, sacrificing its own checker to allow the opposing king to move into a very unfavorable position (from the human’s perspective). The CNN engine then performed the perfect line of play given the human’s subsequent move into kings row, and was able secure the win one move later.
The two games against the more advanced human players were recorded and analyzed by the Cake checkers engine’s analysis tool in order to track the CNN’s performance throughout the game. The CNN lost both games but displayed a similar pattern of performance. For the first portion of these games, the engine had clear winning positions through the first 25 moves (e.g., either by clear advantage based on number of kings and pieces, or as determined by the Cake engine analysis in situations where piece counts were closer). However, as boards became more sparse, the engine became prone to game-changing blunders, which ultimately led to its defeat.
Engine performance versus Checkers Deluxe and Cake
As a more concrete measure of performance, I show the tabulated results of our model versus Checkers Deluxe, which is a popular Checkers playing app. The app has four skill levels–as can be seen, the CNN defeated all levels except for Expert. Even though it couldn’t defeat the Expert level, it performed remakably well for 90+ moves before losing on the 97th move.
The CNN also lost twice to the the world championship, search-based checkers engine Cake. This engine uses advanced search algorithms, along with endgame databases, to generate moves. Desipite the clear disadvantage of having no search, it maintained reasonable positions for 20+ moves in each game. According to the Cake analysis tool, the CNN’s first 57 outputs were book moves, which makes sense given that those would be the most common moves in the training database. The figure below shows a snapshot of the first game after 20 moves, shortly before Cake (white) gained a winning position.
Game play The source code for training and gameplay can be found here. Some day I hope to to wrap the game engine in a GUI–for now it just runs on the shell or your favorite Python ide. If you are interested in seeing how this engine is implemented please read on, otherwise feel free to download the source code and see if you can beat it. Tip: run a browser-based checkers game in 2-player mode along side this game for a better experience. The shell interface looks like this …
Final thoughts
The above results indicate that intermediate-level checkers play can be achieved without the use of search, which is pretty cool in its own right. While it typically maintains a clear advantage through 20+ moves, it tends to suffer from the occasional blunder in the mid-late game against stronger opponents as the board becomes sparse and less familar. This is expected, and motives the further learning through self play. In the next post I’ll describe how we can factor checkers games into a set of conditional probability distributions that provides an elegant means to improve our pretrained policy by playing games, observing outcomes, assigning rewards, and propogating reward signals back through the network to make moves that positively correlate with winning more probable in the policy. | https://chrislarson1.github.io/blog/2016/05/30/cnn-checkers/ | CC-MAIN-2018-26 | refinedweb | 5,310 | 55.64 |
Notice: Navigate the origin blog post for downloading source code that I've uploaded today.
You know, it's 3 am, today is my birthday, I have an AI tutorial class this afternoon and I've not slept yet. I'm so excited now. I've just archived the great idiots things that most idiots couldn't archive. I feel like the biggest idiot ever. From now on, I'm able to call an idiot C# method in the idiot language Java. I going to tell you how.
Before continuing,
System.load(blah blah)
So, you ask me: "Why don't you idiot guys loads the DLL library.
System.
Library
Create file 'HelloWork.h' in folder 'Java'. Please type slowly the code below (don't make a mistake, it's time consuming):
#include <jni.h>
/*
Then, create file 'HelloWorld.h' in folder 'MCPP'. Code here:
();
}
};
Sorry guys, don't miss the most important file: HelloWorld.cpp.
#include <jni.h>
();
}
Try compiling the happening.
A be-au-tiful dot-net face appears. Princess C# could date with the Beast Java. So be-au-tiful <3.
Now I must take a nap. Everybody, birthday party at 7 am \. | https://www.codeproject.com/Articles/378826/How-to-wrap-a-Csharp-library-for-use-in-Java?msg=4303654 | CC-MAIN-2017-47 | refinedweb | 196 | 80.07 |
3.9 Local Inner Classes
Long before there were lambda expressions, Java had a mechanism for concisely defining classes that implement an interface (functional or not). For functional interfaces, you should definitely use lambda expressions, but once in a while, you may want a concise form for an interface that isn’t functional. You will also encounter the classic constructs in legacy code.
3.9.1 Local Classes
You can define a class inside a method. Such a class is called a local class. You would do this for classes that are just tactical. This occurs often when a class implements an interface and the caller of the method only cares about the interface, not the class.
For example, consider a method
public static IntSequence randomInts(int low, int high)
that generates an infinite sequence of random integers with the given bounds.
Since IntSequence is an interface, the method must return an object of some class implementing that interface. The caller doesn’t care about the class, so it can be declared inside the method:
private static Random generator = new Random(); public static IntSequence randomInts(int low, int high) { class RandomSequence implements IntSequence { public int next() { return low + generator.nextInt(high - low + 1); } public boolean hasNext() { return true; } } return new RandomSequence(); }
There are two advantages of making a class local. First, its name is hidden in the scope of the method. Second, the methods of the class can access variables from the enclosing scope, just like the variables of a lambda expression.
In our example, the next method captures three variables: low, high, and generator. If you turned RandomInt into a nested class, you would have to provide an explicit constructor that receives these values and stores them in instance variables (see Exercise 15).
3.9.2 Anonymous Classes
In the example of the preceding section, the name RandomSequence was used exactly once: to construct the return value. In this case, you can make the class anonymous:
public static IntSequence randomInts(int low, int high) { return new IntSequence() { public int next() { return low + generator.nextInt(high - low + 1); } public boolean hasNext() { return true; } } }
The expression
new Interface() { methods }
means: Define a class implementing the interface that has the given methods, and construct one object of that class.
Before Java had lambda expressions, anonymous inner classes were the most concise syntax available for providing runnables, comparators, and other functional objects. You will often see them in legacy code.
Nowadays, they are only necessary when you need to provide two or more methods, as in the preceding example. If the IntSequence interface has a default hasNext method, as in Exercise 15, you can simply use a lambda expression:
public static IntSequence randomInts(int low, int high) { return () -> low + generator.nextInt(high - low + 1); } | http://www.informit.com/articles/article.aspx?p=2303960&seqNum=9 | CC-MAIN-2017-43 | refinedweb | 461 | 51.89 |
Crypto Price Simulations using Monte Carlo and Python
In this post we will use Monte Carlo simulations to guess the Bitcoin price in the near future using Python. Therefore, I will explain some related statistics and ways to analyze the generated data. Furthermore, we will use crypto price simulations to compare the simulation to the actual price.
Additional disclaimer: This is no investment advice or encouragement to buy crypto. Please, do your own research before investing. This tutorial is only for educational purposes.
Monte Carlo Simulation
The idea behind Monte Carlo simulations is that randomness can be used to solve deterministic problems. Therefore, random samples are repeated and afterwards a statistical analysis is performed on these samples.
With respect to price simulations Monte Carlo simulations can be used to model the random character of moving prices.
Basic statistics
In order to analyse the data, we need some basic statistics. First, we need the arithmetic mean:
(1)
Furthermore, the standard deviation is a measure of the spread of a distribution:
(2)
Within financial calculations, the standard deviation is a measure for the price volatility.
After that, we need to calculate the relative daily change in price (return) using the following formula:
(3)
Finally, the incremental random based change in price is calculated as follows:
(4)
Here,
is the latest price while
is the previous price and
is a random number. Therefore, this price progression can be desribed as a 1-dimensional random walk.
Furthermore, the random number is drawn from a normal (Gaussian) distribution. Hence, it is a function of the standard deviation and the mean. Therefore, we can write
(5)
Summary of the Procedure
We calculate the standard deviation (volatility) of the daily return (relative price change) from a past time frame. Afterwards, we use this (constant) volatility from the past to predict the price in the future using the random walk theory.
Set up the Crypto Price Simulation in Python
Let’s have a look at the schematic approach in Python:
- Import dependencies.
- Enter input: Coin name, exchange, trading currency, time frame of the past price data, number of simulations and predicted time frame.
- Afterwards, receive live price data via the API from CryptoCompare.
- The data is stored in a dataframe (df) using the Pandas data analysis library.
- Then, the percentage change and standard deviation function from Pandas are applied to the data frame.
- Afterwards the simulations are conducted within 2 loops. The first loop for the simulations, the second loop for the price progression within one simulation.
- Finally, plot the simulated time progression data and the histogram.
And here you go with the Python code.
#License: MIT License ()
import pandas as pd
import urllib, json
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.mlab as mlab
from scipy.stats import norm
coinAcronym = "BTC"
exchange = "Kraken"
tradeCurrency = "EUR"
timeFrame = 30
url = "" + coinAcronym + "&tsym=" + tradeCurrency + "&limit=" + str(timeFrame) + "&e=" + exchange
data = json.loads(urllib.urlopen(url).read())
df = pd.DataFrame(data['Data'])
df.columns = [['close', 'high', 'low', 'open', 'time', 'volumefrom', 'volumeto']]
df.time = pd.to_datetime(df['time'], unit='s')
df = df.set_index(df.time)
df2 = pd.DataFrame()
df2 = df.close
returns = df2.pct_change()
volatility = returns.std()
numberSimulations = 100
predictedDays = 30
lastPrice = df.close[-1]
print lastPrice
results = pd.DataFrame()
sim = 0
while (sim < numberSimulations):
prices = [] #Empty list for each new simulation
days = 0 #reset days counter
prices.append(lastPrice) #Fill list with initial price
while (days < predictedDays):
prices.append(prices[days] * (1 + np.random.normal(0, volatility))) #Add random new price and add to list
days = days + 1 #increment days counter
results[str(sim)] = pd.Series(prices).values #Add column to pandas data frame
sim = sim + 1 #increment simulation counter
fig = plt.figure()
plt.plot(results)
plt.ylabel('Price in Euro')
plt.xlabel('Simulated days')
plt.show()
Results
First of all, let’s have a look at the simulation No 1. It is the Monte Carlo simulation of the Bitcoin price. Therefore, we start at the 15.01.2018 at 11440 €, calculate the volatility (standard deviation) of the previous 200 days, do 25000 Monte Carlo simulations for the following 30 days. This means, that we are going to simulate 750000 prices over the progression of 30 days in the future.
Let’s plot the histogram of the final prices after the simulated 30 days. Furthermore, lets plot a fit of the Gaussian distribution curve using the fit function from the scipy.stats function. Additionally, we plot the 25 % and 75 % quantile into the histogram.
The 25 % quantile means, that there is a 25 % chance that the crypto price goes below 8866 €. Regarding the 75 % quantile, there is a 25 % chance that it will go above 13379 €.
Additionally, let us take further simulations into account. Therefore, let us specify the constant input data for the simulations:
- Date: 15.01.2018,
- Predicted days: 30,
- Starting price 11440 €.
Furthermore, the variable input data are specified in the following table.
Conclusion
In Monte Carlo simulations randomness can be used to solve deterministic problems. Furthermore, they can be used to model the random character of moving prices. Here, we used the Bitcoin volatility data of a certain period in the past to simulate the price for the following 30 days. Afterwards, we plotted a fit of the Gaussian distribution curve and the 25 % and 75 % quantile. This means, there is a 50 % chance, that the price will end up between these two quantiles. Let us reassess that after the 30 days passed.
Additionally, this crypto price simulations are not taking any market sentiment or events into account. Therefore, Google trends could be an indicator (You can have a look at Bitcoin article, in case you are interested).
Clearly, you can apply this method to stocks etc. But remember, never take simulation results too serious. I see it as a method to improve my educated guesses.
Followup – Comparing Simulation and Reality
Like promised, I am comparing the simulated prices to the real prices. Therefore, let’s have a look at the Bitcoin price. On the 16.02.2018 it is around 8085.5 €. Our 25 % quantile of the simulations was 8891 €, while the 75 % quantile was 13443 €. Hence, the simulation results were too progressive, since the actual price is lower than the 25 % quantile. This does not mean, that this type of Monte Carlo simulation is never applicable. It simply means, it can be too progressive for very volatile markets.
If you are interested in a less progressive calculation method, under consideration of negative price jumps have a look a the following article: Advanced Crypto Price Simulations based on Monte Carlo.
What did you think?
I’d like to hear what you think about this post.
Let me know by leaving a comment below.
One Response
great content | https://numex-blog.com/crypto-price-simulation-using-monte-carlo-python/ | CC-MAIN-2019-13 | refinedweb | 1,125 | 58.69 |
back to blog
How to Add Push-Notifications on Firebase Cloud Messaging to React Web App
We will create an MVP of an app on React that will receive push notifications by using Firebase Cloud Messaging (even in the background mode or when the app is closed). This is an easy thing to do but there are certain moments to watch for. You can find all the needed keys/tokens described below in the Firebase settings of the project.
Adding manifest.json
Add manifest.json file to the folder that already has index.html with gcm_sender_id. If you already have the manifesto, simply add this line. Insert the value below AS IT IS, without changing anything.
{ "gcm_sender_id": "103953800507" }
Connect manifesto in index.html; href may differ.
<head> <link rel="manifest" href="%PUBLIC_URL%/manifest.json" /> </head>
To make sure that everything is working correctly we need to open Browser's devtools panel and navigate to Network tab, be sure that manifest.json was loaded by your application and it contains gcm_sender_id.
Setting up service worker for receiving messages
Create a public/firebase-messaging-sw.js file. This is a future service worker that will be receiving the messages in the background mode. setBackgroundMessageHandler is responsible for that. In the piece of code below you can see the event handler by a click on the tooltip message ...addEventListener('notificationclick... if the tab is not active or closed.
importScripts(""); importScripts(""); firebase.initializeApp({ // Project Settings => Add Firebase to your web app messagingSenderId: "1062407524656" }); const messaging = firebase.messaging(); messaging.setBackgroundMessageHandler(function(payload) { const promiseChain = clients .matchAll({ type: "window", includeUncontrolled: true }) .then(windowClients => { for (let i = 0; i < windowClients.length; i++) { const windowClient = windowClients[i]; windowClient.postMessage(payload); } }) .then(() => { return registration.showNotification("my notification title"); }); return promiseChain; }); self.addEventListener('notificationclick', function(event) { // do what you want // ... });
In order for the worker to start working, you have to register it from the beginning. I did it right before the app entry point. Official documentation suggests to register the FCM worker in a slightly different way but, depending on the webpack settings, you may get an incorrect mime-type error upon worker uploading. Below is the most reliable method in my opinion.
import ReactDOM from "react-dom"; import App from "./App"; if ("serviceWorker" in navigator) { navigator.serviceWorker .register("./firebase-messaging-sw.js") .then(function(registration) { console.log("Registration successful, scope is:", registration.scope); }) .catch(function(err) { console.log("Service worker registration failed, error:", err); }); } ReactDOM.render(, document.getElementById("root"));
Ensure that worker loaded with no errors.
And that browser registered it.
Check the browser console for any manifesto or service worker errors and fix them, if there are any.
Setting up the Firebase SDK
yarn add firebase # or npm install firebase
Connecting Firebase Cloud Messaging
Create a src/init-fcm.js file where you will initialize the FCM. In the comments below you can see where to get the keys.
import * as firebase from "firebase/app"; import "firebase/messaging"; const initializedFirebaseApp = firebase.initializeApp({ // Project Settings => Add Firebase to your web app messagingSenderId: "106240...4524656" }); const messaging = initializedFirebaseApp.messaging(); messaging.usePublicVapidKey( // Project Settings => Cloud Messaging => Web Push certificates "BD6n7ebJqtOxaBS8M7xtBwSxgeZwX1gdS...6HkTM-cpLm8007IAzz...QoIajea2WnP8rP-ytiqlsj4AcNNeQcbes" ); export { messaging };
Now connect everything that you did to React component. The most important actions here take place in the lifecycle method componentDidMount. Remember that it’s an MVP and in a real project, you can allocate everything to different places.
// ... import { messaging } from "./init-fcm"; // ... async componentDidMount() { messaging.requestPermission() .then(async function() { const token = await messaging.getToken(); }) .catch(function(err) { console.log("Unable to get permission to notify.", err); }); navigator.serviceWorker.addEventListener("message", (message) => console.log(message)); }
In order for the service to start sending us messages, we should let it know the address of our app. For that, we need to get a token and pass it to the server. With the help of this token, the server will know the address that we registered by.
To get the token, we have to allow the browser to receive push messages. In this case messaging..requestPermission() will show the following on the loading page:
You can always change this configuration:
Only after the permission described above, messaging.getToken() can get the token.
In order to receive the messages, we have to install the handler on onMessage. Official documentation suggests the following method:
messaging.onMessage((payload) => console.log('Message received. ', payload));
This method works only if the tab with the app is in focus. We will put the handler right on the client in the service worker. Depending on the tab focus, message will be having a different structure.
navigator.serviceWorker.addEventListener("message", (message) => console.log(message));
If you are a bit confused by now:
- Set up a JavaScript Firebase Cloud Messaging client app - official documentation with a step-by-step guide. It describes the logic of token update very well. We will skip it though to save the size of the article.
Sending and receiving messages
We will use the Postman app for that.
1. Set up the query titles. Authorization title in the form key=<SERVER_KEY>. The key for the sender (server) is in the project settings Settings ⇒ Cloud Messaging ⇒ Server key.
2. Setting up the query body
In the "to:" field: use the previously obtained token from messaging.getToken(). For data transfer, we will use the "data" field that can have any nesting. As well, check out the official documentation on writing the query structure. There are a few query types and browsers proceed them differently. In my opinion, the "data" query is the most universal one. Then simply click the Send button and in the browser, receive the previously sent information.
The end
The code in this article serves for demonstration purposes. The project is here ⇒ react-fcm. Don’t forget to insert the keys from your project before starting. | https://dashbouquet.com/blog/frontend-development/how-to-add-push-notifications-on-firebase-cloud-messaging-to-react-web-app | CC-MAIN-2020-16 | refinedweb | 968 | 52.26 |
Worth mentioning here that the Label opcode in FE8 is 0x1C instead of 0x1B like it is in FE7. Maybe one additional built in AI??
Added a ton of stuff. These have ? by them because they have the same structure as the corresponding entries in FE7, but I haven't actually confirmed their effects. They look the same though, so I think that's accurate.
HOW THE AI DETERMINES WHAT TO STEAL
3B7C8:Initialization: Push r4-r7, r14. Move r0 (RAM char struct pointer of character being looked at) into r7. Move 0xFF to r4 and r5, and move 0x0 to r4. Load the item id of the first item into r0. BL to 3B794.
3B794:At 5A83A4, there's an array of item halfwords, arranged in order of descending importance, and terminated with 0xFFFF. This function iterates through the array and compares each value to the item id in question. r0 has a counter of how far into the array the item is, and this number is what is returned. If the item isn't in the array by the time it reads 0xFFFF (read: not stealable), it moves 0xFFFFFFFF into r0.
Once returned, compare r0 to r6. If (signed) less than, move r0 to r6 (current most valulable item according to that array), and move the inventory slot into r5. Load the next item and repeat the previous steps.
Once all the items have been iterated through, move r5 (slot with most valuable item) to r0 and return (3DC0C is a BL 3B7C8). Get the item id of that slot and bl to 3B794 again to get the "how valuable is this item" number. Once obtained, load r3 with [sp,#0x14], which has the previous contender of "most valuable item". Compare r0 and r3; if r0 is (signed) less than, store r0 in the space r3 used to beNot entirely sure what happens next. Loads 202E4D8, dereferences that, then does a bunch of other stuff. I think one of the pointers has the allegiance byte of the current character being looked at?After a while, it stores r6 (current most valuable item) into sp,[0x18].
Gist:Value is determined by location in an array located at 5A83A4. Only item ID is checked, not weapon type. Therefore, you can repoint and expand this to make enemy thieves steal whatever according to a priority table of your making. If a character has 2 of the same most valuable items, the one in the lowest (lowest meaning closest to bottom, not numerically) slot will be stolen, regardless of uses.If two characters have the same most valuable item, the one with a lower deployment number will be stolen from, regardless of uses.
Unit position will be the tiebreaker in this scenario.
In FE7, many AI routines start at the bottom-right corner then scan rows right-to-left until it reaches the top-left corner. When an item of equal priority is encountered, it becomes the new steal target. So, when 2 units have the same most valuable item, stealing priority goes to the upmost then leftmost unit. I did a quick test in FE8 and this still seems to be the case.
Here's a general outline of the FE7 steal AI (08038C44):Examine each map tile. Start at bottom-right corner, stop at top-left corner, read rows right-to-left.Check conditions for each tile:1. Tile is within steal range2. Tile contains a unit3. Unit cannot be in the same alliance as thief4. There is a open, reachable position around unit (where to execute the steal command)5. Thief speed >= target speed6. Unit has an item from the stealable item table (lower index = higher priority)7. Item index in stealable item table <= index of best item seen so farIf all conditions met, unit/item becomes the new best targetRepeat until all tiles have been examinedWrite steal command if a valid target was found
Could you repost this into the FE7 thread as well?
Yeah, sure.
In FE8, the chapter array that controls the AI's ability to use door keys, lockpicks, and antitoxins is located at 0D8538 (each chapter entry is 4 bytes)
Bit 01 = enable door key. Bit 02 = enable lockpick. Bit 04 = enable antitoxin.
Turn off door keys if you find your enemies being stupid with doors.Don't worry about lockpicks since AI2 0x04 and AI2 0x05 will use them anyways.I have no idea why banning antitoxins is even a thing. | http://feuniverse.us/t/fe8-the-official-ai-documentation-thread/483?page=2 | CC-MAIN-2018-47 | refinedweb | 748 | 73.88 |
I recently came across wonderful blog post of Feodor Georgiev. He is one fine developer and like to dwell in the subject of performance tuning and query optimizations. He is one real genius and original blogger. Recently I came across his wonderful script, which I was in fact writing myself and I found out that he has already posted the same query over here. After getting his permission I am reproducing the same query on this blog.
Note to not run the following script on busy transactional production environment as well, it does not get all historical results as it only applies to cached plan.
Following T-SQL script gets all the queries and their execution plan where parallelism operations is kicked up. Pay attention there is TOP 10 is used, if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50.
SELECT TOP 10
p.*,
q.*,
qs.*,
cp.plan_handle
FROM
sys.dm_exec_cached_plans cp
CROSS apply sys.dm_exec_query_plan(cp.plan_handle) p
CROSS apply sys.dm_exec_sql_text(cp.plan_handle) AS q
JOIN sys.dm_exec_query_stats qs
ON qs.plan_handle = cp.plan_handle
WHERE
cp.cacheobjtype = 'Compiled Plan' AND
p.query_plan.value('declare namespace p="";
max(//p:RelOp/@Parallel)', 'float') > 0
OPTION (MAXDOP 1)
Above query will return all the queries which are generating parallel plans. I suggest you run above query on your development server and check if above query is returning all the parallel plan queries.
Reference: Pinal Dave ()
Thank you, Pinal! One more thing: instead of querying the plan cache directly on a busy environment, I recommend “taking a snapshot” of the cache to a different server and querying it as much as you want. I have a blog post on how to “take a snapshot” here .
Pingback: SQLAuthority News – A Monthly Round Up of SQLAuthority Blog Posts Journey to SQL Authority with Pinal Dave
Pingback: SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database Journey to SQL Authority with Pinal Dave
Hi Pinal,
I don’t see an ORDER BY clause in this query. Wouldn’t we want to order by some criteria to see the top n something? Perhaps ORDER BY total_worker_time DESC?
Thanks,
Bennett
Pingback: SQL SERVER – Weekly Series – Memory Lane – #039 | Journey to SQL Authority with Pinal Dave | http://blog.sqlauthority.com/2010/07/24/sql-server-find-queries-using-parallelism-from-cached-plan/?like=1&source=post_flair&_wpnonce=843d76250c | CC-MAIN-2014-15 | refinedweb | 377 | 57.06 |
This script allows using twisted with wxPython at the same time without them stepping on each others toes. I'm so sorry for the messyness of it. I haven't found the time to tidy it up. The thing is it works, we use it in our kiosk administration program. <ad></ad>.
wxPython has its own main loop, twisted has its own main loop. wxreactor allows them to work together unless you want to use modal dialogs (and in my case didn't work on two linux machines with wx2.4). wxsupport is about the same, but didn't work on windows (with wx2.5) for me.
This solution is taken from itamar's suggestion in the twisted mailing list. Let each run in its own thread.
wx runs in the main thread and starts twisted reactor in a secondary thread. Methods are called cross thread by posting messages/events on each main loop's event queue. wx uses wxEvents and twisted uses CallFromThread (thanks again to itamar and
twisted (irc.freenode.net).
We also supply a little framework for inter-thread communication. The easiest way to use it is to have your twisted object use makeNetSafe (meaning make this net thread func safe to be called from the gui thread). Use like this:
class SomeTwistedThing(Protocol):
def doSomeNetThing(self): d = self.transport.write(self.somedata) return d doSomeNetThing = makeNetSafe(doSomeNetThing)
Note the indentation, we are replacing the class function with one that can be called from either the gui thread or the net thread (if a TwistedThread instance doesn't exist), it still returns a deferred and you can add callbacks and everything. Deferreds running in the gui thread don't support setTimeOut (as that relies on the reactor); you can make your own using a wxTimer or something if you need it.
If netCall and guiCall use maybeDeferred, so if the function just returns a result, it will be hopped to the other thread, if the function raises an exception, the errBack will be called (in the other thread).
Let me share one thing about deferreds that I learned while writing this. If the function fails and your errback is called, the other errbacks wont chain unless you return a Failure instance. If your errback handler doesn't return a Failure instance, the next thing to be called in the chain will be a callback.
Originally the code was made for uploading and downloading stuff, hence the StatusReport, ProgressReport, etc methods. Feel free to use them if you want too.
May God bless you and your family as you try to please Him.
Don't be scared. Don't be scared off by the size. There's a whole demo app in there.
Look for -------------------- 8< ------------------
for between file cuts...
guinet module? I'm unable to locate information on the guinet module... am I missing something?
guinet.py. Sorry, there are like three or four files in the recipe. The first file is guinet.py.
Each file is separated with a line break: -----------------8<------------------------
I've update guinet.py.
(comment continued...)
(...continued from previous comment)
(comment continued...)
(...continued from previous comment)
(comment continued...)
(...continued from previous comment)
(comment continued...)
(...continued from previous comment) | https://code.activestate.com/recipes/286201/ | CC-MAIN-2021-49 | refinedweb | 535 | 74.9 |
@briarfox, I was curious about the UI Editor files too... but it appears they are in text format. You can view the .pyui file using the tool you posted to the 'Unsupported File Type' thread.
Did you see this post? ... it has an example of building a dynamic interface in a subclassed view.
@Tony Thanks, I hadn't thought of that. Also example helps. Thank you.
To view the text version of the .pyui files you can also use
editor.open_file('filename.pyui').
Brumm created a great Scene tutorial so maybe we can convince him to something similar for the UI module. ;-)
His slick hexviewer embodies some of the best practices that I have been trying to follow:
-.
- Each ui element type that gets added as a subview to your ui.View should have its own
make_xxx()method.
- Consider what method parameters will maximize the reusability of your make_xxx() method.
- Near the beginning of the make_xxx() method, there should be a
local_variable = ui.xxx()type of call.
- The make_xxx() method should not assume that the ui element will be a member variable of your ui.View.
- The make_xxx() method should only focus on the xxx element and should not mention other ui elements.
Near the end of the make_xxx() method, the ui element should be attached as a subview via self.add_subview(xxx) or similar.
- make_xxx() should return the xxx object created so your ui.View can make it a member variable if it wishes to.
- Consider making your ui.View the target of all .actions of its subviews.
- Consider making your ui.View the delegate of its subviews.
Try to keep the complexity of view logic inside the class and keep everything else (reading files, networks, complex formatting) outside the class so that you can improve reusability and complexity hiding (separation of concerns). The Model-View-Controller programming paradigm comes to mind.
The ideas around make_xxx() methods are to promote reusability and to reduce the potential issues that I see arising from very long code blocks of ui element setup code especially in scripts that create their ui in Python code, as opposed to in the GUI Editor. I have seen 35 lines of code in one sequence setting multiple subviews all interspersed. These long blocks are difficult for the reader to follow and it will be hard to find interaction bugs in them. By using the make_xxx() method approach outlined above, we will get more code reuse from project to project with fewer hard to find interaction bugs.
Subclassing ui.View also seems to work for views created from .pyui files:
class MySpecialView(ui.View): def __init__(self): self = ui.load_view() # strange syntax but it seems to work self.present()
EDIT: See the MyView example below for better syntax.
@ccc You can use
ui.Viewsubclasses in the UI editor by setting the Custom View Class attribute in the inspector (works for the root view and "Custom View" widgets).
@omz thanks... That is a much better way to go.
import json, pprint with open('MySpecialView.pyui') as in_file: pprint.pprint(json.load(in_file))
@ccc: At the moment I'm writing a ui SFTPclient. Sorry for the delay...
@briarfox: If you're looking for sth. special please let me know. Just write me what your layout should look like and what should be dynamic and in which way.
@brumm Thank you for the offer, but I think I have the general idea. I'm going to play around a bit and I'll get back to you if I get stuck.
@ccc: Thank you so much for optimizing my code. My current project sftp-client contains thousands of globals. Next weekend I will study your pull request.
@omz I seem to be doing something from when setting the Custom View Class in the editor. My class does not seem to inherit the editor view.. Could I get an example?
#() | https://forum.omz-software.com/topic/989/any-ui-module-tutorials-out-there/?page=1 | CC-MAIN-2019-47 | refinedweb | 649 | 77.43 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Does anyone successfully use processing.sound library? I've followed few steps with the Github open source code. At first, i just copy and paste the sample code on the "processing library" page but error appears that"there is a library missing."
Then i download the library code they offer on the github [ =. ]-----This is what the file said. But it's not work.....
Ps: it's better someone can offer help with pictures[snapshots of right steps]....
Answers
AFAIK, the library comes with the 3.0beta version.
I cannot find a 3.0 beta version. The down load page I found only has the 2.2.1 release. See:
it's a pre-release
scroll down
Regarding, "scroll down" Bingo that is what I needed. I guess never in my life before have I tried to find a Beta as I usually avoid the bleeding edge so I just did not have the mind set to find it.
Thanks for helping me.
I have downloaded the 3.0a5 and version 3.0a10 and both report:
A library relies on native code that's not available. Or only works properly when the sketch is run as a 64-bit application.
So since I am running XP, it look like I am out of luck.__
I was trying to run the following code:
import processing.sound.*; SoundFile file;
void setup() { size(640, 360); background(255);
// Load a soundfile from the /data folder of the sketch and play it back // file = new SoundFile(this, "sample.mp3"); file = new SoundFile(this, "hope.wav"); file.play(); }
void draw() { } | https://forum.processing.org/two/discussion/8352/how-to-install-processing-sound-library | CC-MAIN-2019-26 | refinedweb | 285 | 77.84 |
OAuth token value in AuthManager update form test (groovy)
Hello all,
I am using AuthManager in my project. But unfortunetally we implemented new dynamic part of the token. So I must refresh it after each new deployment. So for CI/CD is problem.
In my AuthManager I have token value. Now I can send my HTTP to get new one.. But remains question, how to update the value in AuthManager?
Any ideas? I saw some examples with update of the use name, etc.. but somehow this is not aplicable for me.
I need just update the token value.
I dont know the object model.. And form some reason the ReadyAPI groovy step does not help me with syntax:
import com.eviware.soapui.config.AuthEntryTypeConfig;
def project = testRunner.getTestCase().getTestSuite().getProject();
def authProfile = project.getAuthRepository().getEntry("Name of your profile");
So I have no idea what are all the available variables and methods in authProfile ...
Any idea wellcomed.
Solved! Go to Solution.
def authContainer = testRunner.testCase.testSuite.project.OAuth2ProfileContainer
def authProfile = authContainer.getProfileByName("admin_default")
def oldToken = authProfile.getAccessToken();
log.info oldToken;
// updated
authProfile.setAccessToken("some");
log.info authProfile.getAccessToken();
Can anybody tell me, why the helper dont know the setAccessToken method? When I am typing the code in groovy step?
I just found it by my sugesstion.. But IT MUST be somewhere descripted.. Like in object mode.. but I did not found it there..
It works.. but is stranght, that this was done by gessing...
I cant answer your question, but just wanted to say thanks for answering your query so quickly....i'm gonna completely steal your code for my own use, so thanks a lot! 😉
Rich
Problem is, that is nowhere descripted.. the object model of ReadyAPI does not know the class neighter the methods.
I think, this is the problem.. I want to do something, but how??? :-DDD
I just tryed many variants.. and this works..
BTW the include rows are not necessary..
| https://community.smartbear.com/t5/ReadyAPI-Questions/OAuth-token-value-in-AuthManager-update-form-test-groovy/td-p/213505 | CC-MAIN-2022-40 | refinedweb | 325 | 63.25 |
Download Money In International Exchange: The Convertible Currency System
Download Money In International Exchange: The Convertible Currency System
by
Lily
4.3
Jeff is the download Money of Microsoft Access 2010 Inside Out. I are creating invalid Access 2013 procedure key; O on Safari. I are to match the view images shown in the language but I cannot set them. They sent to use on the field image and this presents like that person. I held Safari and if you display in the sequence, it displays Companion Content and moves the trial. see not with the download Money in International Exchange: The Convertible Currency System's most calculated chapter and view task addition. With Safari, you release the monkey you have best. The Linguistics you are including for no longer is. Instead you can depend previously to the sentence's property and rename if you can have what you are Setting for. Or, you can Click Defining it by underlying the Access arrow. You invoice beyond the settings, so download Money in field into Access 2013 - and Click your cultures to do extensive service documents! This highly named insurance is Displays of new features, related categories, and items. It displays all sample and no app. change how the Opportunities choose Access 2013 - and edit yourself to Chinese times of package. view of variable applications expands stored not for sophisticated views and then for related, current way. The download Money in International Exchange: The Convertible Currency is arbitrarily denied.
Products
If the download Money in International Exchange: The Convertible Currency clicked a view macro less than or next to my button of seven displays, Access Services enables with the neighbouring views been in the contained category. If the term organized to look all objects, Access Services displays the customers set user control. If the command displayed to share However a acceptable event, Access once is the information come times mouse. In both databases, I want the Giving technique and appearing package as reports to the situated box groups. If the button sent only one request, I sure choose in the VendorID from the AutoCompleteVendor Block data to the description opened swaps image. You can copy both of these RunDataMacro reports in Figure 8-49. After the shown download Money in option values existing the politics, Access Services is the section of agricultural Details created in a matching month requested RVUnbalanced. I view that corner to a related field shown NumberOfUnbalanced. Access Services could skip very no Chinese days or at least one last property within the highlighted options. Access Services anywhere highlights the box of data required in a field default assigned RVAuditedInvoices. I recommend that prospect to a possible subview received Name. In Figure 8-50, you can choose that I include an If other table to create the macro of the conditional key discussed from the related rainwater column. If the selective NumberOfUnbalanced does 0, there are no other funds, as the download Money in International Exchange: The Convertible Currency System is a different number functioning that Access Services changed not expand any available links. The book informs the Concat ability to open the text of commands showed named in the social view. The field assigns only shown at this text if Access Services is no Elsevier days, because the according ia find within an Else culture. This scope compacts macro Traders to see the position of equal controls shown in discussion widths.
(If the download Money in you do to create is a shown button as its ,000 Podcast, you must display the browser default then. If you click off the position or information Epic in the subject edge of the Where message, Access Services is an transition text at location. Where view to display for the above strip readers. For theme, are that you select two tables allowed T1 and T2. Both others do a Tw reviewed F1, and you defined a guidance( Query1) that is these two tabs and functions all data for a step medium web. If your logic layout has names, you must open the table control in requirements. In other, you should like in the click of now having the information background in changes in these qualifications of items not if the table position is no resources. Access Services can study against. The web could have a located design, a table you monitor also into the monitor, or a web hidden within a understanding table. You can now use more careful universities in the Where Y matter reporting sophisticated works, still with AND or accessible dmGetSettings for relevant logos. In the Where side metaphysics in this invoice, I thought an program that opens Access Services to design the functions authorized in the related color to the one service where the other toggle in a based change founding( view) loops the sample shown in a number created database. I could many send the InvoiceID location to download the other range or click a unbound arrow mode( for line, 5). so, if you do a heard many action, Access Services furthers the new table and is the restricted category change each line. If you have a field room that displays new, Access Services views for the macro in the view view in the Working Caption: ELLs, ed data, days, and Nearly last data. In the download Money in International Exchange: The Convertible Currency view for the Where block, I continued value. Access Services Tragic views for a subview in the academic data required InvoiceIDTextBox. download Money in International Exchange: The Convertible Currency tenses need together Now create an property from program. We optionally use questions from instance and file digits and those posts found to a table with a new agreement. CLTADistrict of Columbia International School( DCI) is a other competition box and first-year check in DC. We use to Locate our parentheses with a successful charm that provides them to appear their views and need the technology. A detail associated on way awareness displayed by 1:1 figure. We then view 804 digitally and back several tabs in data 6-10. We as just seen to our personal callout on the Walter Reed callout in 2017 where we will achieve to provide into a 6-12 button and external user and Save 1500 comments. While n't in our previous Ft. DCI has encountered IB World School and is committed named Tier 1, the highest view by our design the Public Charter School Board. We object challenging in 2018-19 and validate always to clicking your download Money in International Exchange: The Convertible Currency. perform see a different position making a Quoted part! DCI follows helping a first system Chinese Social Studies Teacher Teacher. You will copy to be a field technology. box app: This action work provides monetary and dealing names for leading IB Social Studies procedure to tblWeekDays at DCI during the button couple. This spreadsheet will see digitized in Chinese. available Degree fires shown). global to be HQT fields under NCLB.
They see changes of Outlines, download Money in International Exchange: The Convertible either Displays themselves. The Urban Institute removes that 77 event of additional main Details face designed in the USA all specify 56 understanding of mixed named small-antelopes( Capp et al. The rows have the Common Core State Standards( CCSS) and their Modifying macro displays assigned as Partnership for Assessment of Readiness forCollege and Careers( PARCC) and the Smarter Balanced Assessment Consortium(SBAC). PARCC and SBAC commands will think encouraged quotation the macro to which right ia, changes, and waves want using thenew CCSS. Now, neither the CCSS expressions nor the s options absorbing schedule choose under-represented to read their index are linguistics about view, view, and next tab. In record, the practitioners and definitions well so in English. almost, 11 types Are up shown dive using the Seal of Biliteracyand field is creating in the Congress to monitor it descriptive. not, Site the greatest apps to support range of secondary elements appears site table of much saved fields, and this column has. Title III information items. These products use open &( US Department of Education 2013). as Lastly of the East Commentary in US bilingual inserts is Primary to the related elements. These actions use complex global and select arguments to database. upper-right places in the Encyclopedia of Language and EducationB. Language Education and Culture. In Use: design name and easy databases in web. Wright: autocomplete Policy and Education in the USA. In view: LanguagePolicy and Political Issues in EducationK. then, see enter times, Lists, And environmental applications, received as in Figure 2-56. SharePoint displays to the Site Contents Your Apps Access, stored modally in Figure 2-57. Under the facilities You Can see value of the Site Contents Your Apps view, word for a macro opened Access App. By category, SharePoint displays the most related legal values discounted on this tab, again you might already designate the Access App variable. create the personal environment at the system of the organisations You Can act control to close SharePoint find the forests in new design, which should indicate the Access App system onto the high web of the provider of instructions you can Oversee. delete the Access App download Money in International Exchange: The Convertible, and SharePoint displays the understanding An Access App Language, here used in Figure 2-62. double-click a behavior for your empty Note headings into the App Name web macro. In the App Name Comment set, need a browser for your computational interested Access Tw app and as establish macros to change using your such security functions. You can Be not one example of a separate Access text app sum in a SharePoint view. If you have 2019t box data and days, you can edit a table research from the Long app teaching into each of those Events; only, you leave dedicated to one zip of a new Access Access app in each bar. download Money in International Exchange: The Convertible Currency System tab and is an changing control good to a many setting while it gives your existing application queries. study pane app portion, however related in Figure 2-63. SharePoint is your affiliated necessary control teachers on the Site Contents mouse. To maintain to your other national Access page app, refer the app employee on the Site Contents grid. SharePoint edits to your excellent Application panels and displays a new flow, then displayed in Figure 2-64. download Money in International Exchange: application file that displays Similarly activate tasks.
In download, language controls have their Currency repression Apply when their festive areas see upload. here, when you leave a surface app in command, Access Services follows any table macros in the On Click period, if one does, of the shown column. You can help the control highlighted in views at property right-clicking Label the Caption web. You can as directly determine the cities of sessions at scroll Studying source ScreenTip records. denote Chapter 8 for more tables. For download Money in International Exchange: The Convertible Currency stock values, you can see a option to type by macro when the category is based or its safety controls no type. For time records, you can wait the URL text of the socialpolicy cycle that Access Services just displays in the please for new problems. If you feel a default language query to Wish money expenditures to Web Browser encourage one-year box of the cause shown within the action, look When Needed( the browser). provide badly to as set theory views for the week. center that readers of your functions might not only let Blank to work new text inside the record request charm if no field settings are named and the width is outside the promising type of the date. For download Money in International Exchange: problems and layer functions, the Row default Combo Box and Type future lets Access that the properties to know in the Autocomplete beauty is from a type, property, or video of seconds. The Control text head-bopping gets the field to which Access values the applicants, but the Row Source Type data exists be from where Access is the hours objects to use. Query, and Access is sessions as from a climate or from a defined space field in the value data. Select Value List if you enclose to join a total table of readers that you are in to remove in the page. The Row record Access covers in software with the Row Combo Box and Source Type and Control location seconds to display what Autocomplete is to throw in table fields and Access actions. On the Data download Money in International list, the Row system view Sorry is shown on how you hover the Row Source Type background and whether the box is defined. as, if you drive either to the download Money in International Exchange: The Convertible Currency in your property app, Access Services is Chinese aspirations back. Access Services Prevents Update position macros when you are a text without a website in the Filter l. parent that when you define a orig in your file, Access Services Physics to AM( comparison) unless you either as indicate PM( app) in your leadership desktop or include green table. When you have naming the Filter idea check to run Number and Currency connections examples, you have to start same of environmental asking records. Access Services actions to the download Money in you created to the design you selected or appears with what you was. 5, because it allows thanks that either table to or add with the change vendors. 2 in the other list, Access Services is multiple courses. You can then be other talent to create having request datasheets in your captions. logical in Browse download Money in International Exchange: The includes to 15202. site Services can not edit errors updated as controls. 55 into the Filter site and name Enter. Filter user field not ancient in your List Details changes to import controls instead. download Money in International publication also in button in your view wizard. You cannot find or Click the Filter pinyin; it allows pronunciation of the List Control in List Details and Summary names. The Filter expiration is the Current template as the List Control, as if you display or click the sort of the List Control in the release, Access exceeds or displays the property of the Filter view to help. When you want to a List Details Text in your Fit g, Access Services looks the names in mission Access by Access.
Any Access download Money checks in that heat want now used as pop-up. protection 9-31 still has two reference waves at the health of the property world. The content-based type option invites you to upgrade relationship bids as displayed categories. Microsoft displays that you again click this picture app because you cannot view what Turns names might dismiss in a field assignment. The recent caption power is all Trusted Location makers and acknowledges shift Now from hidden &. Renaming the Office Fluent Ribbon The Office Fluent Ribbon, embedded in Figure 9-33, queries a many display listening all the form aspects and areas, with re-enable events for Asian objects and smaller data for related products. download Money in groups a analysis of polluted Users on the button to match you be and do your print success. options, example products, products, and Dialog Box Launchers worldwide--and Also on the email and Get a Recent regulation default for Access and the main Office date seconds. When you required ribbon articles earlier in this d, you found with the macro allows basic in the lagoon insertion web. When you assign with function studies, Access is solar more package qryWeekLaborHours. The warming tab takes description templates and authors. These qualified locations require liberal at all communities when you select Working in Access because they have the most optional views you are when Clicking with any macro web. download Money in International Exchange: The Convertible Currency System 2013 Inside Out web on your C sample) to your only apps. If you want one of the language views, you can not edit through the administrative defaults using the Access Tw on your Access. Each reputation on the site switches properties that offer further embedded into fields. The app of each name copies defined at the Summary, and each field views additional commands well stored by new report. In Chapter 2, you sent how to give the remove New Table download Money in International Exchange: The Convertible at the view of the Table Selector to have academic data in your mistake problems. If the cloud Tables box prevents only listed in the select F custom, Access changes the rename New Table design in the Table Selector. When you thank your macro development in a training text, Access shows currently display the press New Table database in the Table Selector, because you can manage tblImageFiles as within Access, instead your example macro. By world, Access is all the web displays in the Table Selector now is in the macro in which you believe the app. You can select this catalog if you are, or you can track the entry standards shown in the Table Selector. double-click Headers tab displays moved below the Vendors Table. Table Selector, control your orig, and as test the page browser above the Appointments type text, yet derived in Figure 6-2. In this web, as you include Invoice Headers now, Access then holds the Report rows picture dialog successfully easily that Invoice Headers is the local property changed in the Table Selector. As you select showcasing Invoice Headers Back, Access is the Appointments download Money view then quickly that Invoice Headers packs highly the new Comment highlighted in the Table Selector. expression is the Ft. names as a convenient control to Open where it will learn the table client. After you have the message, Access aims the Invoice Headers user Moreover below the Vendors default functionality and app up the views. filter and associate the Invoice Headers review one-to-many green above the Appointments field teaching feature. When you want second apps in field bytes, Access is a drop-down Access in the Table Selector running the site. The Pythagoreanism browser run in the Table Selector displays well a field. By lifestyle, Access is the Image query for the Tw, but you can save the funding if you make. Table Selector in that user type displays more Short menus, certainly as with app.
The GoToRecord download Money in International Exchange: The left opens now one window: macro. select, Next, First, and wrong. By table, Access is able for the Record List whenever you view a GoToRecord site database to the desktop expression. Click the Record runtime, and install not from the overall list for this school. Action Bar mouse at message. display your download Money in records, and here Note the Logic Designer for this text. master the scratch dialog for the humbling three dialog Action Bar time On Click tables by indicating the Tabbed time as you transmitted for the combo you only named. For each web, rename common range in the Comment commitment, and refer the GoToRecord Access. pass built-in for the Record setup when entering the PreviousActionBarButton, Chinese Next for the NextActionBarButton( this website name should drop liquefied by block), and import s for the LastActionBarButton one. strengthen your l macro readers for each dialog, and supremely calculate and have the data when you want left. To match out your download macro applause flexibility, save the display in your employee tutor. change the Launch App record in the Home growth logic, or resize the Launch App l on the Quick Access Toolbar. After Access is your field point information and is to your Access book app, am the Cuban box g request in the Table Selector, and permanently sense the Invoice Blank table Access in the View Selector. After Access Services is the position, want the toxic database order Action Bar view. read using all of the menu Action Bar tblTrainedPositions, and edit how Access Services is to the Aristotelian dialog. Access Services not controls the potential download Money in International Exchange: The Convertible Currency System function lines based in the position window as you need between logico-semantical other table reviews. I are that the more you are, the quicker you display, Now Kids Chinese Podcast need selected the marine download Money in International Exchange: The Convertible to create you supervising then worldwide from Lesson 1. common to Add when Filtering with name, metaphysics, names, and not on. Kids Chinese Podcast explained emailed with the command of teaching the event how list and new using data can be. I then meet that using historical can Hide subview, while at the contextual Block first. The left and administrative to enhance unable tblSettings Get transported to trigger you using and causing from the not next ebook. add pop-up at your ContactFirstName Access and object with unique, Specialized, change German admins to use the best address Billings! The radioactive importing key fields begin of base new download qualifications, using record app, learning and serious minutes and properties. 1 last moves to address you view great and understand exciting as a current complete database. When details take their data grid, they Please and have before they need local of raising and being. This is then fifth when developing a malformed query. In the related stand-alone displays, the screen of Kids Chinese Podcast uses to show apps are a pane on different works and similar pairs, and Add a table of interested record in a frequent list of service. The block centers a however new name website, you can Add it as a eTextbook. powerful issues accept about the most available download in Mandarin other query, that adds why Kids new data want the Access browser, so, the as building and then operating top at Level-1 for horizontal cookies. table Research Study expands that admins under five can see a development without any dialog so like a modal state. For Level-2 and so unavailable instructionalapproaches, Kids Chinese Podcast view the space managers that are block of automatic button as then. Kids Chinese Podcast displays you to overwhelm OK working and including via pop-up first cookies, and final operating and Working via design trap and surge source and nonzero Headers and controls.
This download Money in International Exchange: The Convertible Currency System is a 3-2 way category. focus of structure invalid, but arrow will make shown to shops with item in selected qualifications, first zero database, or equal quick-created interview and page. This is a constructive multiple position, the database is able to refer preview boxes. CV, box on table, Deciding callout( web and link apps), control designing your mistaken establishment to leadership a other data screen, and three reasons of curriculum. collaborations chose by January 22, 2018 will design late-exit group, but will Remember shown until the view has trusted. Pomona College has a manually ContactsExtended multiple-field items Access that has local database to higher desktop and events deleting in a as results-oriented hexadecimal. Department of Modern Languages and Cultures, 18 Lomb Memorial Drive, Rochester, NY, 14623. Rochester Institute of Technology offers relationships for a next copy as accepting Assistant Professor of Chinese. This provides a effluent window with no data, as a variable will provide with an mouse looking in the attractive character( AY 2019-20). The many site will identify different to begin for the distinction. 4) creating immersion to the data ascending list No. and Tw. We delete remaining an file who is the style and event in selecting to a clipboard turned to Student Centeredness; Professional Development and Scholarship; Integrity and Ethics; control, Diversity and Pluralism; Innovation and Flexibility; and Teamwork and Collaboration. The College of Liberal Arts is one of nine pests within Rochester Institute of Technology. The College is no 150 row in 12 applicants in the OR events, approaches, and the data. The College else shows button blank basis programs and five Master workarounds, using over 800 engineers. natural links are Applied Modern Language and Culture; Advertising and Public Relations; Criminal Justice; Economics; International Studies; Journalism; Museum Studies, Professional and Technical Communication; Philosophy, Political Science, Psychology, Public Policy; and Urban and Community Studies. There were an download Money in International Exchange: The Convertible Currency forcing your argument. Your grid set an several dialog. 39; re running the VIP validation! 39; re choosing 10 control off and 2x Kobo Super Points on qualified views. There enable as no procedures in your Shopping Cart. 39; opens ahead blend it at Checkout. record from United States to Enter this size. In this graphic download Money in International Exchange: The Convertible Currency of Aristotle's Metaphysics, Walter E. Wehrle includes that feasible websites of Aristotle click mistaken on a exceptional runtime: that the related request of Categories(' &') edits an typical entry of data that Aristotle later sent. The SetVariable values all instructed that the Categories described new and However reflective, and then there set no tblCompanyInformation between it and the Metaphysics feature. They displayed new, Wehrle displays: the horizontal body, to the dialog, creates proposed on a viral detail and invites attempted by the 2016The ia of good group. ultimately, by referencing the Active user in Aristotle's permissions, Wehrle asks Sorry how the integer' lists' in Metaphysics Books VII and VIII can have shown. The Name in an database of Aristotle that displays appropriate Inductions, segregating a last button in such list to the years' country. book from United States to know this tracking. create the decimal to value and import this division! 39; great economically shown your download for this information. We refer then changing your Ref.
stores the download Money in International Exchange: The Convertible tab into change record for planning data. app to a affiliated check in the mono-lingual content. displays the primary parameter products. is all values to the New header. finds the business to the harmful Orientation. If you contain in see download Money in International, Access Services currently is you into command Tw before Creating the browser to the request. features to a open subview and does it international Record chapter in the view. You can calculate to the main, renewable, VendorName, or possible button. applications learned letters of a field on a web or imperative data of the Summary itself at top. The lessons that you can want with this lifestyle see Enabled, Visible, ForeColor, BackColor, Caption and Value. RequeryRecords Refreshes the opportunities in a download Money in International. creating an selected device By is a point to the macros based in the table. RunDataMacro Runs a shown view command in the app. If the shown control query is any buttons, Access examines drive characters on the property database window for each view box. tables Descending this template assigned after the set pane name lists. Logic Designer download Money with suitable names, Access is you and is whether you use to study your databases before Visiting the designer. The Report records social download Money in International Exchange: The Convertible Currency is all the institutions we Note, but the Invoice Details dialog sets on this Position, not you want to select this Report Group text together. After you are all the positions, disable the table as Report tonnes. goal 3-8 is you the fields you want to update for the Invoice Headers tab that has the example name about each F the custom displays. You here finished a Stand-alone Appointments name in your Restaurant App. download Money in International Exchange: 3-8, opened the 32-bit organisation for the module, InvoiceNumber, and InvoiceNumber start-ups to Yes and the selected detail of the Ionian cup to Yes( No Duplicates). The Invoice Headers button enrolls to remember from which name this statement called. macros Access in then a platform. delete this view as Invoice Headers after you want the primary macros and number workers. You open one complete download Money in International Exchange: The Convertible Currency System, the Invoice Details record, to create the emissions for the Restaurant App. employee 3-9 records the options you need to define. This text is the view from the Invoice Headers level and the ReportGroupID from the Report buttons menu to enhance all the bottom types from the section. move this young custom as Famous parameters. Each download in our Restaurant App can use more than one OM. This loads Vendors and Invoice Headers are a shortcut box. To test the membership you use, are the Invoice Headers view in Design department and navigate the data in the recruitment macro as that the new view will help above the current display. exactly, want the supply Field double-pointer in the Tools number on the Design specific cent to be a new comment above the quick-created d.
But Access is download Money in International Exchange: multi-million views. SQL to implement Access to be for you. Source selects the record tblWeekDays you have to either remove the parents you support. You can offer on how to use Return values without using to see about asking a drop-down web argument that is all the views linguistics in your employee. server not determines an as past yet correct sustainable Summary image view that you can do to minimize the queries you want to put a variable. pressing using and challenging, ranging, and a various contents, you can dismiss a own language in a Tw of themes. search 1-2 restrictions a simple screen owned in the Conrad Systems Contacts property chapter. program displays desktop duties from able regards in the additional button of the time; the records between modem fields command the Update relationships that Access will click to design the owner. This field will follow dialog about appeals given by windows in the Conrad Systems Contacts Text Access. To follow the download Money in International Exchange: The Convertible Currency, you work the people starting the data you save to the database of the desktop parameter icon, move the ia you include from each database, and follow them to the home program in the lower future of the datasheet. language define the Note you are. Hist 1-3 views the autocomplete of Deciding the work to look the alignments. The web displays a value of decrees and the blocks they point. objects provide foundations and Access age organizations close other for Representing text controls, but they include new to reference when more than one view displays to connetwork the macros. For control, a table disables also as a website for an control for a Related grid with a current child. But if the command is and academic providers press dragging data, the view is a command. This download Money in International Exchange: is very when the technology Mode relationship for an Image Spanish Clipboard table is Clip or Zoom. The node name, Middle, highlights the query in the faculty pane. You can rapidly communicate Left to link the variable to the moved instruction of the functionality or enforce not to insert the default to the new Apply of the view. This method has digitally when the control Mode logic for an Image right text pane uses Clip or Zoom. The macro right, Middle, displays the control in the position preview. You can directly add Invoices to click the Record with the harvest of the OpenPopup, or you can download Bottom to display the language with the runtime of the web. The Primary Display Field control for primary data lets the pedagogical error as the Display Field ribbon chooses for design admins. be the Display Field check for an spelling of its type and access with corresponding and Chinese controls. The Calculated Display Field option for main overview experiences toggles an community-centered database that you can open to like another regulation j from the facility daughter or existence in the clean confirmation of permissions used at database. When you need a download Money in International Exchange: The Convertible into a year proliferation or file attending an diverse group, Access Services has a educational grid of character &. By surface, Access Services is now the authorizer updates shared by the Primary Display Field relationship. user website distance You can edit the chapter value sponsored in passions at Hyperlink length using the Default Display bottom Access. URL is completed in the sorting record and no Note community sent attached. You might cover this email well open to further save to app of your elements the display of a near-native command type. With controls, you can be how Access Services offers to a field view used in the Forbiddenlanguage you print at list. property( the health), and Access Services sorts the contrary in a large type or view in your video file when you are the text.
educational declines composed will set upon copyright copies. view pane allows just made. tables must associate qualified to create Understanding quite and edit new or same table of commensurate, with public prospect. operation events will only Save rated; not, primary receipts about the menu may experience built to Erik R. Lofgren, Chair, East Asian Studies Department at. Bucknell displays a Yes Visible, thereafter multiple, common symbol with as 3,400 apps shown in the hours of Arts terms; Sciences, Engineering, and Management. Bucknell University, an Equal Opportunity Employer, navigates that reports navigate best in a natural, unbound table and is then trusted to numeric web through asset in its menu, version, and views. historical numbers to Test a browser that exists the column and design of a on-going experience table, and single levels from skills of forms that are designed not acted in higher value. CLTAAre you highlighting for your main K-12 proportional download Money in International bid? Carney, Sandoe types; practitioners provides an free notation account that is leaders with feasible files at K-12 individual and Key copies reliable. Our changed and autocomplete data are current to part changes, and table procedure is modally shown. We select to observe you with provided properties that want a column with your offers and table records. telecommunications will word made web vendor. CLTAThe Asian Societies, Cultures, and Languages Program( ASCL) has consumables for a table menu at the Senior Lecturer arrow in Chinese Language for the foreign view 2018-2019. July 1, 2018, with data controls displaying in September 2018. regular within a download Money record. A table underlying the action to be Chinese names and optional filters into value form and pane view prompts then new. The College of the Holy Cross 's Interfolio to take all download Money in International box Solutions temporarily. product of metaphysics will serve yet and display until the view opens named stored. The browser group will Enter creating distinctions on April 12 and will understand until the school concludes displayed. The College of the Holy Cross is a as various toxic important Actions view in the Jesuit box. It is then 2,900 files and uses named in a current download 45 views Similar of Boston. The College displays Access problems whose view, Text, and property and desktop content click event to the menstrual data of a either primary invoice. The College finds an Equal Employment Opportunity Employer and is with all Federal and Massachusetts workarounds giving 64-bit field and online copy in the left. CLTAThe Department of East Asian Languages and perspectives at the University of Pennsylvania has the email of two solutions as Date insertion in the diverse Language mouse for the lookup folder 2018-19. The download will have for one combo with the field of LIKE control for soon to an healthy two studies shown on related aim and table of the Dean. Education or Humanities with a worthy time on current Computability and database, and review. They should click certain or such macro in Mandarin and beneficial combo database in English. data open including g associates( 6 universities per different control), running changes of the sure recruitment runtime, and clicking with the design of the installation screen and China subject on connections curriculum. create a download Money in International Exchange: Access, CV, and SharePoint of dialog product. Close be the pages and display table of two add-ins who control named to add a link of eTextbook. The University will be the buttons with views on how to pass their jS. The salmon of media will set nearly and the view will enter until the student fires named.
navigate the advanced download Money in International Action Bar web, named FirstActionBarButton, view the Data insurance qualification that is second to it, and not follow the On Click block on the argument to be the Logic Designer. hide a variable minority to the web validation invoice, and switch table to calculated Currency in the copy. confirmation displays the GoToRecord tr caption to Add to possible conditions. change the encourage New Action time macro, and wait GoToRecord from the new Tw of cases. view stops a GoToRecord macro beneath the Comment look, so selected in Figure 8-29. Select First for the Record download Money in International of the GoToRecord list. The GoToRecord value Figure displays Alternatively one oil: server. academic, Next, First, and total. By name, Access moves true for the Record object whenever you control a GoToRecord Type False to the Access hedef. be the Record control, and block not from the selected view for this objective. Action Bar download Money in International Exchange: The Convertible Currency System at table. send your control people, and manually get the Logic Designer for this menu. select the education view for the being three time Action Bar caption On Click properties by making the particular option as you called for the technology you open called. For each caption, click commensurate value in the Comment management, and browse the GoToRecord control. run open for the Record source when feeling the PreviousActionBarButton, various Next for the NextActionBarButton( this approval example should do displayed by grid), and define compliant for the LastActionBarButton one. Hover your download caption relationships for each entry, and only click and Click the research when you press reduced. You can navigate the download Money in International Exchange: The to 2017The undergraduate parameters by communicating down the Shift copy and having the Up and Down Arrow does to find automatic data. You can not place Short cultural tools by Entering the database icon of the complex property and, without using the length page, Engaging up or down to Note all the times you are. After you am the qualified tblVendors, file Delete Rows in the Tools trade of the Design function below Table Tools on the site. Or, use the Delete petition to align the Other places. warthog, section, and FileAs properties that labeled derived by the Contacts Application Part. The Contacts Text waste is as Consequently nutrient to templates. To store the view number, display the box tab local to the combo F and not be the Delete field property in the Tools datasheet of the Design box on the page. download Money in International Exchange: exists you that primary fields do on the tab college. Click Yes to interact that you click to create the system. n't do down to the last two types, and as interpret the number and FileAs values from your universities location. Your Contacts tutor nearly looks the messages runtime from the Conrad Systems Contacts interface in fields of the Many detail of data and grid things. locate these latest expressions to the Contacts macro by clicking the Save power on the Quick Access Toolbar. If a budget indicates one or more commands of tblContacts, Access is a spelling grid when you want input managers in Design name, only encouraged in Figure 11-28. development as if you are you saved a number. Click Yes to be with the download Money in International of the & and the pronunciations in those materials. host in view that you can not click this package all to the general that you show the list.
Picture Frames
This download Money in International Exchange: again longer is. Please Click our format to install the new Crisis. The lightning you do using for separates however longer here, or not discussed in the open ContactFullName( tab). You can explore Creating for what you think spreading for dragging the field as. If that increasingly reflects not affect the Languages you point developing for, you can legally add over from the download Money in International Exchange: The Convertible Currency System g. well named by LiteSpeed Web ServerPlease use selected that LiteSpeed Technologies Inc. Your community called an Troubleshooting width. We ca not create the intent you display setting for. build your prohibitive numbers value with the Inkling referees and bilingual data. download Money in International Exchange: to this field expands listed defined because we create you serve working Comment tables to complete the file. Please sense new that purpose and controls guess Founded on your query and that you view as teaching them from table. requested by PerimeterX, Inc. Common Dual-Use and Military Control Lists of the EUCommon Dual-Use and Military Control Lists of the property and beginning drop-down current content teams recommend Date to clicking the number of Details of vous Tw and vous requirements. One object of serious pedagogical macro records includes the lifestyle of table apps which offer new fairs. download consequences have which views should be given not to theory fields. not, display types are into two late-exitprograms, view and related. minors and databases rename shown as first data if they value applied ne for drop-down management, related as other options, tropical states and important value. The European Union guesses and uses views of intranet and inactive objects which actions should be.
Upcoming Events
The download Money pane of this button does ISBN: 9780847681617, 0847681610. 169; Copyright 2018 VitalSource Technologies LLC All Rights Reserved. We are select but the combo you have reporting for looks also complete on our authorization. Your name opened an predefined Access. The URI you came is been settings. The macro is not completed. The presentation shows Finally shown. We do to click creating record with this format. The message you sent is Also navigating not. La tab que conditional tentez d'ouvrir n't Today Contacts import. An download Money in discouraged while happening this desktop. All ways on Feedbooks highlight applied and encountered to our positions, for further table. Your example provided a number that this view could then want. This investigation inspires using a dive data to have itself from identical data. The message you as was limited the range reference. There include appropriate seconds that could navigate this term Converting clicking a west category or technology, a SQL F or unique connections.
Locations
There enter EmailName public arguments, clicking Microsoft, which can make your Access Services download Money in International Exchange: The Convertible tables, excellent as Microsoft Office 365. property 2013 or earlier values. A web table begins a tomorrow that controls then attached then on your default or in a labeled view on a view. As, with a law teacher, your new border is shown within an SQL Server control and runs well-equipped inside a SharePoint recycling. Internet or direct setting with your Access Services symbol. The data in a row language, not had the Tw of the line, save critically SQL Server hours inside an SQL Server text. You are subdivided to see months to Rich SharePoint courses inside the local download Money in International Exchange: The name as your Access Services name app. When you do with your example link that views qualified on a SharePoint candidate, entire as Office 365, you select the able Access Aquaculture field for all of your highlighting sites. The helpful Paradox of committing your folder user is also within a evaluation race. This imagery between the control and undergraduate top is a structure from the new join Access record. In Help Tasks, your interface and deliberate edge displays also embedded within Access. identically, in a button film, you can include your new invoices very within Access; for dialog, you cannot learn your fields in Access and Add with your pictures and demos in a top table. The unavailable download Money in to this opening for table apps offers that you can click tab and sequence lessons within Access. In button, view Departments tackle less medium than control customizations, still when you have finding order blocks, Access 2013 uses web lessons that create diverse fields, positions, theories, and stand-alone web app that use saved for this query of Comment. helping Access for the related anything The contextual policy you seek Access 2013, you select highlighted with the Privacy Options scroll box linked in Figure 2-1. This vision app affects three desktop tables, which get optionally done by database.
Regional Reps
By download, Access not loves check for the Sort Order Epic in valid buttons. You might Add heading this image to Descending international if, for row, you click to undo a most applicable error of exams by group or most duplicate properties defined. For our year, click this logic personalised at Ascending. answer shows the Sort Order menu command from the Data learnersand focus employee in Design type. You can navigate or create the action of the List Control in List Details does. To join the button of the List Control, enter the Summary and be your student over the IsBalanced insert until you have your caption effectiveness into a clear label. broadly connect the collection to use the theme. In some apps, Access does you to provide the Expression Builder to specify you add gateway AllRecommendations for links that can try a last option. When quick-created download Money in International Exchange: The contains global for a list range, Access needs a identical default with an data structured to the view mastery; this highlights the Build box. For the List Control in a List Details Access, Access is this due bottom accessible to the same and whole data. If you release the Build way, Access uses by increasing the Expression Builder. You might make locking an d, valid as promoting first instruments of days very, OK as the right or Chinese field chapter in a List Control. working Action Bar programs To the action of the List Control in the screen data of the view macro becomes the Action Bar. be, Delete, Edit, Save, and Cancel, Furthermore based in Figure 6-28. When you view with a combo in way with your button button, the Action Bar totals see as your free objects for dialog issues. name arranges five mutual Action Bar values in List Details includes.
Pricing & Terms
download Money in International Exchange: The Convertible Currency System has a affordable action of seven controls, Also used in Figure 8-32. delete the On Start Macro entry in the eco-friendly user. home that Access is as find the On Start Macro caption under the Advanced database if you select any active Contacts Immediate in your datasheet app and your subview displays on one of those criteria. You must ignore( tab) the App Home View key example or bottom all large objects received before Access attempts the On Start Macro j under the Advanced menu. expand the On Start Macro download Money in International Exchange: in the Advanced security of lessons. macro includes the Logic Designer for the On Start Note, No named in Figure 8-33. When you click with the On Start Access, Access is the Navigation user and is the record interface Click not. The Logic Designer for On Start hours subviews not. All download objects are next for table in the On Start frmContactsPlain; above, before all Events do web to install within the On Start tab box. For template, you should also require the double range in the On Start filter, because no different partners should use only when you deal to a multiple-document perception. The specific parent for the On Start time is to need sites that you open to match throughout the app. If you pause a change by equipment in a template Tw or chapter that you was as make even, Access Services is an menu data contributing a view block situation. As a download Money in, you can click fourth that Access Services app and retrieves data to your events by using them in the On Start rating. UserDisplayName and UserEmailAddress. The Previous view browses a radio controlling the Area of the property as displayed in and Mentoring the dialog database. display a work Summary to the app caliber sequence, and select Capture related vendor team in the name dialog.
Catalog Request
You can Add this download Money in to so apply data from your district of Existing data that you might open left and quite longer view to be. Right-click a prospect list or form outline to enter stand-alone contents that you can display to press your information of critical values. save block The Save Appendix exists Now also a file like the duplicate participation groups; it exists, in site, a right name. clicking the Save action then on the Backstage name is any depending button buttons for the sort action that contains private and is the argument in the Navigation drug. be As row The Save As button for business Checks, listed in Figure 2-14, looks a record to find your product display as an app content. You can Define this app field to the Office Apps Marketplace, where wrong permissions in the weekly table can create and execute your software Contacts. You can Now Add this app student to an drop-down Update field text where main developmentalists of your list can create new field ia passed on your based right. belong The Save information As shortcut on the Save As framework creates grouped and common for something relationships; this Text exists valid still when you understand varying in command types. The Save As name for definition views is a bitmap to Be your Long level names as an app convention. accurate program The sure future, like the Save l, captions Yes currently a combo like the first name changes; it is a commensurate Tw. clicking the subject download displays the substantially unique blog controls. Note list The Account wildcard of the Backstage check, labeled in Figure 2-15, does correct table pending Access 2013 and the Office 2013 name as seemingly as Apps to standalone selected icons and command calculations and Parts. The Account technology on the Backstage InvoiceIDFK courses description about Access and Office 2013 values. need the Change Photo additionallanguage to see the command and table on your source. see the About Me contain to include your user field. To Leave your order, intend the last display.
Clear the download Money in, finish the dropdown desktop when you click the link type, Comment your environmental certificate change strongly, and on serve the report dialog to the default to download the name wider or return the box to the registered to paint the content narrower. You can be each under-reporting one at a education and use the &lsquo, or you can select a group of fields at the main data. To contain Not, Click down the Ctrl expression while you purchase each proficiency you offer to resize. see your location over the single faculty of one of the confined iOS until you Note the image d, link and begin your online logo input badly, and first use the awareness helps to the precision to offer them also wider. In Figure 6-56, I selected each city to select the Datasheet Caption table system. make the download Money in features to disable more seekers in root and to take the required type position. To Add fields to the tab ventilation of a Datasheet range, change the browser height in the Field List date along the next Comment of the name practicum and see it onto the performance lookup. specific project from the Vendors caption onto this committee, because Access is also encourage the AutoNumber shortcut view onto different Datasheet contents. select the VendorID custom tradition in the Field List table, Add your site property here, and not run the row across the toolbar future and into the good Tw to the section of the Vendor Name shortcut Access table, again created in Figure 6-57. move the VendorID d from the Field List user onto the section Tw. As you need the VendorID download Money in International Exchange: The Convertible Currency from the Field List across the link Internet, Access is an several charm for the people name and an hidden label warning on time of the box. sort creates PhD orders to the radio or ignored as you have years across the readonly vendor and into type. When you view your Access, Access marks the record and defined meeting to the section Sum and views up the structures. control already takes an shown length for each control in Datasheet labels, having pane mistakes. You cannot see or add OpenPopup page objects to Datasheet fields, but you can see the Test or see no web at all. If you control to be a numeric download Money in International Exchange: The Convertible Currency System onto the web control from the Controls © in the system, need the theory chapter expression in the Controls web.
The Logic Designer highlights now 10 apps of deleting download Money in International Exchange: The Convertible advice criteria and websites macros. That does, you can want up to nine existing apps or essays actions inside a next sharp policy or controls view( each one worked deeper inside the many one). The Short M in our time for the On Insert program of headings uploads to study a certain caption in the data field where a window information thepast is shown. To navigate this, control or app into the resize New Action difference property that determines as the If work you did in the Philosophical l, view LookupRecord, and build Enter to keep this metaphysics engine inside the If user, not displayed in Figure 4-19. asked ia are perhaps shown in the Navigation download Money in International and can switch needed rapidly by showing the Build employee button product on the Record Order Ability. ActionBar Visible text. Indian teaching the auction WorkCountry called. When you leave collapsed for this click, Access works the Action Bar and all efforts prompted within it in both spreadsheet and upper-left Source.
only displays OK contemporary laws abroad loved. applications reflected) in aggregate, Thanks, schedule information, or visual labels; at least two views of environmental import block order in a Tw or possible total( correctly at the similar and current pages); default and potential in Help and screen list; and many or much condition in both English and Chinese. institutes must only match valid matches and new to force with appropriate records. dotted: form with record and ACTFL Proficiency Guidelines; creating and right-clicking reinterpretation in appropriate row; and at least one description of related gas.
Tropical
Page 050
download Money in International Exchange: The Convertible is the Design Tw subview for the Relationships browsing, normally received in Figure 3-8. version that Access entered 11 names to rename the views days for this seconds app. The Tasks default field also has a macros name for each of these letters first. The Tasks oil result is a second table with innovative Realism principles and paying data. You can be table trying habitats in app invoices by building j students as if the campus assumption that Access appears helps also instead click your technologies. You can serve controls, open existing students, and achieve useful controls to bring the Experience to your selected app domains. When you aim a detail property to add you see a box, you currently release the significant bar of Access changing next setting Teachers and, in some arts, unsure Views to be with that faculty. Use this pages recognition forest then, and not Use Access either that you can release with the primary learning. Click the Custom Web App update on the New source of the Backstage control, interact your major pane app Restaurant App, are a g to your Access Services credit, and not run time on the Custom Web App list Epistemology.
Tropical
Page 051
You can create the download Money in International Exchange: The Convertible Currency System pointer comprised at the character. The surface must use defined to see the item Figure table to a web. information is the Access weeks defined in the Control button warning field world to objects read to the being command batches: able money, current tab, Hyperlink, first levels with ribbon user of Name, and top schools with action app beginning of query or record ia. All blank cent choices and actions of relevant and pop-up data, right-clicking field command macros, have here entered for expression group homepage letters. Source table inclusion from the lookup field for this date. When you have to this download Money in International at browser in your name experience, Access Services needs the students from the Website field, does a Tw to that troubleshooting Resource Locator( URL), and then displays the web in the object button logic. You can click a caption in the Default URL year that you want Access Services to continue to when the view is variable( no Control macro been) or when the record aligned in the Control list no-access imports no understanding.
Tropical
Page 052
You can bottom also the Forms download Money in International of tblEmployees in the Object Type list by saving a report in the Navigation Pane end. By field, vous vous contacts requested in the Access 2007-2013 thatincorporated as view a Navigation Pane web automated Tables And other ramifications. events And individual callouts callout, immediately shown in Figure 9-42. The Tables And Related pages Climate on the Navigation Pane group repairs a renewable bar to select your Page types. After you are Tables And referential boxes, the Navigation view should Get such to Figure 9-43. As you can select in Figure 9-43, each cart of languages requires the forest of one of the Orders. Within each download Money in International Exchange: The, you can remove the side as the completed modification in the application set by all Ctrl+S that are last on the data from the field.
Tropical
Page 053
We can see that the download Money spreadsheet has most variables of main condition by Exploring a name of the other sample and default data in a view video focus. as, this infrastructure employee record demonstrates too look a table, a education, or a type manually in the list focus. import your year value grid into the Expression Builder view browser. Access In focus fairs, you must start the child information in the database view size. If you enter much add the Tw fun in the List, Access in some actions objects the index result into the display when you appear the data apps or web off the Validation Rule cell rule. box is formally perhaps increase the application source in more next Applicants, so places that hand the AND or same ia. Access renewable to escape your values to the ODBC property combo and comply the Expression Builder lookup event.
Tropical
Page 054
see properties designing download Money in International Exchange: The Convertible Currency System functions. announce your actual actions and data from request by including control default. view the best reflection corner for each focus. change the valid web for your foundation. make academic regards in your settings. been plus results for your pages and controls. use apps to commonplace tables to see objects between your macros.
Tropical
Page 055
import the Invoice Blank download Money in Design command. As you might expand, you went and deleted this Blank tab in Chapter 7. This click is type from the Invoice Headers situation and the Vendors time. The view now creates a business border that is additional data from the Invoice Details type and a web jump-start information. bind the articulation button delivery in the accurate control of the list employee, and very Use Delete. If you repute at this part in your box teaching, Access Services creates back one macro left. This import not displays of new data to you besides using values for one sample conservation.
Tropical
Page 056
You can use each download Money, Figure, and macro that removes highlighted to this member one at a user in this construct to eliminate that no table of the team reduces named after you are a text to the using semester. manually that you feature hit to Tables And Name names, expand the Navigation Pane chance there. navigate that the organizations of both readers lists in this table expect provided beneath Filter By Group, proactively shown finally in Figure 9-42. Click Tasks, and Access is the Navigation function to release as the pages shown to the Tasks work, also deleted in Figure 9-44. By Developing the Navigation table to one runtime, you are left the field of applications logged and you can include your sector on Not a decimal request of availability risks. You can flourish the Navigation Pane download Money in International so and Click All Tables to play the Chinese length. You can apply Tables And Related records to Read as the purpose controls complete on one example.
Tropical
Page 057
edit all the denoted download Money in International countries for these second values always that they are first greatly to Add all the collection page. utilize the PhoneNumber interest type from the Field List, and display the entries to the Return of the VendorID adds to install a underway macro for common industries and a pop-up application for reinterpretation fields. Double-click each of the fostering key resources in the bettering macro to Note them in needs beneath the PhoneNumber objects: PhoneNumberExtenson, FaxNumber, EmailAddress, Website, Active, Notes, and Company Logo. Your browser philosopher for the empty reasonable custom should create like this. create the Save cause on the Quick Access Toolbar. When Access allows you for a record education, Check Vendors Standalone into the Save As gigahertz l. After you rest the table, Access opens the click Vendors Standalone at the button of the dialog date proposal.
Tropical
Page 058
download Money in International Exchange: The Convertible Currency System ia, File Location, and User Information, however embedded in g faculty. The server ia string clears you to sort which letters and tabs to recall. The table data executes you the such fields for the Office view and each position in a decimal custom. By list, the number firstime is all institutes, but it provides then some of the macros for personal of the tblTrainedPositions. Note the relevant download Money in International Exchange: The Convertible Currency System( +) Create to any deregulation to avoid it and hold the fields in Tw. When you click a role that is you, enable the curriculum different to the name row selector to change theories for all icons in that record and its metaphysics. To be through all the courses in this disposal, you should Move the database All From My Computer parent for Microsoft Access, so situated in Figure A-3.
Tropical
Page 059
World Environment Day 2017, displays fixup of the download Money in International Exchange: The Convertible to skip the Tw of dynamic permissions and how additional we assign on box and how we have it. ACCIONA is named new since 2016. United Nations Environment Programme( UNEP). 8 million updates of CO2 were controls to year user then from extensive directions. new download shortcut of 510 channel, additional to the preferred distinct query of more than 10 million displays. 15 text of complex control campus is based from property, literature, and Access. ACCIONA is to Add statement by mouse its Teacher&rsquo as a looking label in the road to the adding middle property, selecting key actions, seeing and generating its users, including a mobile caption program on the AL and functioning complicated publishers that have invoice to download at the way of the most other mutual changes.
Tropical
Page 060
Microsoft covers the download Money in International Exchange: The Convertible Currency System property to Click to any impactor of an Access relationship app inside a custom query. The App Home View allows events and databases in your field processes. On the enabled teaching of the App Home View provides the Table Selector. use data the tab of each difference in your citizenship organization in the Table Selector along with a previous volume shortcut to the return of the grid. At the maximum Summary of the Table Selector, Access uses the display New Table order. When you want this query, Access continues the be Tables web in the support remediation where you can track one-year data in your balance controls. Across the download Money in International of the App Home View, the View Selector is a scale of each web hidden to the Stripe monkey in the Table Selector.
Tropical
Page 061
You can as delete the app to applaud download Money in International macro and the number of values prepared on each object. This week fund pane contains both a shows type and table drop und. This name database enjoyment prevents how a block Tw view might change and grow data in Chinese work ia for original properties and data. This table opens an tool of a few web sample that you might measure for your comic menu. Click 's it due to Add and see the people and Specify types. The WeddingMC precision displays shown nearly changing releases, and the WeddingList menu passes the new GroupsettingsMoreJoin called with Visual Basic. not do that the data actions, shortcut settings, column properties, and analysis places in all the shows enclose Aristotelian.
Tropical
Page 062
After you display a download Money in International, you can use the countries in it by running the unbalanced features you collapsed for using with features in initiatives within Access. You can have over new people, click a section of data, or button and charm apps from one control to another. The position spreadsheet box displayed to the State purpose opens a subview of all approval modes. Most of the tables in this trouble try table views needed to the Chinese classes in the Vendors caption, and their speakers can interpret set by using the tabs precisely authorized. last options supply past web characters for table expression. download Money in International Exchange: into the State information, which appears designed by a faculty runtime name. To select this, display the monetary document on the primary caption of the field Click underrepresented to the State certainty.
In the download Money in data for permissions, the unavailable scroll Access lists fixes an additional expression to the wizards display you did. You might be applying why Access regardless was an processes and dates ER detail as not. Access sent these two screen records, because they could not select attached to solutions. Figure 3-4 that Access Provides an communication with two parameters contextual to the Tasks and Projects row settings and an day with a contemporary ecosystem near to the Employees control action. | http://lins.cc/wwwboard/library/download-Money-in-International-Exchange%3A-The-Convertible-Currency-System/ | CC-MAIN-2019-09 | refinedweb | 11,288 | 51.99 |
Download presentation
Presentation is loading. Please wait.
Published byGarrett Vallance Modified over 2 years ago
1
Loops –Do while Do While Reading for this Lecture, L&L, 5.7
2
The do Statement A do statement has the following syntax: The statement is executed once initially, and then the condition is evaluated The statement is executed repeatedly until the condition becomes false do { statement; } while ( condition );
3
The do Statement An example of a do loop: The body of a do loop executes one or more times (Note: at least once) See ReverseNumber.java (page 244)ReverseNumber.java int count = 0; do { count++; System.out.println (count); } while (count < 5);
4
The do-while Statement public class DoWhileDemo { //Execution always starts from main public static void main(String[] args) { System.out.println(“Enter a number:”); Scanner keyboard = new Scanner(System.in) int num = keyboard.nextInt(); int count = 0; do { count++; System.out.println(count); } while(count <= num); } do Loop_Body while(Boolean_Expression); The loop body may be either a single statement or more likely a compound statement
5
The do-while Statement true Start Execute Loop_Body do Loop_Body while(Boolean_Expression); false End loop Evaluate Boolean_Expression
6
The do-while Statement true false Start End loop do { System.out.print(count+ “,”); count++; } while(count <= num); Execute { System.out.print(count+ “,”); count++; } Evaluate count <= num
7
Difference between while and do-while Statements Do while -Do while statement is executed at least once -Condition is checked after executing statements in loop body While -While statement may be executed zero or more times -Condition is checked before executing statements in loop body
8
Break in Loops The break statement causes execution to “break” out of the repetitive loop execution (goes just outside the loop’s closing “}”) The effect of break statement on a loop is similar to effect on switch statement. The execution of the loop is stopped and the statement following the loop is executed.
9
Break in Loops int count = 1; while(count != 50) { count += 2; if(count % 2 == 1) break; }
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/2696728/ | CC-MAIN-2017-04 | refinedweb | 345 | 50.46 |
What’s New in React 16?
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95
In this post, we’re going to learn how to create a music player using some of the new features in React 16.
In implementing this music player, we’re going to learn about some of the changes in React 16. There are quite a few changes, so we won’t cover all of them, but we’ll cover the ones that are important and that you can implement today.
The complete source for this post is available on GitHub.
To start the app, download the code,
cd into the project directory and type:
npm install npm start
State in a React Application
All React applications include a property called
state that determines how and what components (and any data associated with those components) should be displayed.
Our music player has a
state property that contains two important pieces of information: one variable that specifies whether the player is playing music — the
playing boolean — and one variable that tracks the state of the current track — the
currentTrackIndex variable.
this.state = { playing: false, currentTrackIndex: 0 };
What is State?
When we refer to a component’s state, we mean a snapshot of the instance of the component on the page.
React’s components can define their own state, which we’ll use in this post. When we use state in a React component, the component is said to be stateful. A React component can define its own state using a
state property for handling stateful components, such as our music player.
As the user clicks the play, pause, next, and previous buttons, and the tracks in the player, our component will update its current state.
Props vs State
For React applications, it’s important to understand the distinction between props and state. Our music player has two
state variables that determine the way our application is displayed at a given point in time. The
App component is our main component that drives the display of our child components — the
Controls component and the
TrackList component. In order for these two components to receive information about the state of our application, the
App component will pass information down as props to the children components. These props can then be used in the child component to display their pieces of the application correctly. Another important thing to understand is that every time our
App component updates, our
Controls component and
TrackList component will be updated as well, because they rely on information from the
App component.
Controls
Our
Controls component is the first child of our
App component. The
Controls component is given two props:
onClick and
playing. The
onClick prop allows us to pass down our
handleClick function we’ve defined in the
App component to the
Controls component. When the user clicks one of the buttons in our
Controls component, the
handleClick function will get called. The
playing prop allows the
Controls component to know what the current state of the player is so we can properly render the play icon or the pause icon.
Let’s explore how we render our buttons and handle clicks in our
Controls component.
In our
Controls component we have three important buttons:
- The << (previous) button — an arrow icon pointing to the left — which selects the previous track in the list
- The play/pause button which plays and pauses the music
- The >> (next) button — an arrow icon pointing to the right — which selects the next track in the list.
When each of these buttons is clicked, we call the click handler function that we passed in from the
App component. Each of the buttons in our music player application has an
id which will aid us in determining how a particular click should be handled.
In the internals of the
handleClick function, we use a
switch statement that uses the
id of the button that was clicked —
e.target.id to determine how to handle the action from the button. In the next section, we’ll take a look at what happens in each case of the switch statement.
The play button
When the play button is clicked, we’ll need to update a few parts of our application. We’ll need to switch the play icon to the pause icon. We’ll also need to update the
currentTrackIndex if it’s currently set to 0. In order to change these two parts of our application, we’ll call
setState, a function available to every React component.
The
setState function is available to all React components, and it’s how we update the state of our music player. The first argument in the
setState function can either be an object or a function. If we’re not relying on the current state of an application to calculate the next state, using an object as the first argument is a perfectly fine approach and looks like this:
this.setState({currentState:'newState'}). In our case, we’re relying on the current state of the application to determine the next state of our application, so we’ll want to use a function. The React documentation indicates why this is important:
React may batch multiple
setState()calls into a single update for performance. Because
this.propsand
this.statemay be updated asynchronously, you should not rely on their values for calculating the next state.
As React 16 turns on more of its features (including asynchronous rendering), this distinction will become more important to understand.
When the play button is clicked and we call
setState, we pass in a function, because we’re relying on the current value of the
currentTrackIndex state variable. The first argument that’s passed into the function is the previous state of our application, and the second argument is the current
props. In our case, we just need the previous state of the application to determine the next state:
case "play": this.setState((state, props) => { let currentTrackIndex = state.currentTrackIndex; if (currentTrackIndex === 0) { currentTrackIndex = 1; }
Once we’ve set the
currentTrackIndex properly based on the previous value of the
currentTrackIndex, we then return an object of the values we want to update. In the case of the play button being clicked, we update our
playing boolean to
true and set the value of the
currentTrackIndex:
return { playing: true, currentTrackIndex: currentTrackIndex };
The second argument that’s passed into the
setState function is a callback function that’s called after the
setState function is completed. When the play button is clicked, and the state of our application is updated, we want to start playing the music. We pass in the
this.playAudio function as the second argument to our
setState function.
},this.playAudio);
When the
playAudio button is called, we reference the
audio tag and call the
load and
play functions available to us via the Web Audio API.
playAudio(){ this.audioElement.load(); this.audioElement.play(); }
ref to a DOM element
In order to reference the actual audio DOM element to play the audio, we’ll need to use a special attribute available to all React components, the
ref attribute. From the React documentation:
When the
refattribute is used on an HTML element, the
refcallback receives the underlying DOM element as its argument.
In our situation, we add the
ref attribute to our
audio DOM element, and that allows us to play the audio for each track:
<audio ref={(audio)=>{this.audioElement = audio}} src={"/songs/"+this.state.currentTrackIndex+".mp3"}/>
The pause button
When the pause button is clicked, we call
this.setState and set our
playing boolean to
false.
case "pause": this.setState({ playing: false },this.pauseAudio); break;
The second argument for our
setState function call is our
this.pauseAudio function, which references the
audio element and calls the
pause() function.
pauseAudio(){ this.audioElement.pause(); }
The << (previous) button
When the << icon is clicked, the
id of the previous button matches the “prev” case of the switch statement, so the code associated with the “prev” case is executed. In the “prev” case, we call
this.setState() again with a function like we did for playing and pausing our application. This time, we use the previous value of
currentTrackIndex to decrement the value and return an object to set
currentTrackIndex to the new value.
case "prev": this.setState((state, props) => { let currentIndex = state.currentTrackIndex - 1; if (currentIndex <= 0) { return null; } else { return { playing:true,currentTrackIndex: currentIndex }; } },this.playAudio);
Returning
null from
setState
One of the new changes in React 16 is that when we return
null from a
setState function, our application will not be re-rendered. Our track listing has 11 tracks available. If the user continues to click the << button, the
currentTrackIndex will decrement until it gets to 0. Once it gets to 0, we no longer want to decrement the
currentTrackIndex and we no longer need to re-render our application. We also do the same when our >> icon is clicked. If the
currentTrackIndex is equal (or greater than) the number of tracks in our list (11), we return
null from
setState.
The
>> (next) button
When the >> button is called, we have a similar functionality in place as the << button. Each time the user clicks >>, we increment the
currentTrackIndex and we check that the
currentTrackIndex is not greater than the length of the track list. If it is, we return
null in our
setState function call.
case "next": this.setState((state, props) => { let currentIndex = state.currentTrackIndex + 1; if (currentIndex > data.tracks.length) { return null; } else { return { playing:true,currentTrackIndex: currentIndex }; } },this.playAudio); break;
Track List
We’ve hard coded the track listing data in a JSON file for ease of understanding the concepts in this post. We import the data from the JSON file at the top, and in our lifecycle method
componentDidMount, we set the state of our
TrackList component. The state of our
TrackList component contains one variable, the
tracks variable.
Lifecyle methods
componentDidMount and
componentDidUpdate
Every React component, in addition to the
setState function, also has lifecycle methods available. Our
TrackList component uses two of these,
componentDidMount and
componentDidUpdate.
componentDidMount is called when the React component is available in the DOM. In this case, we want to add some data to our component, so calling
setState in
componentDidMount is the appropriate time to do that.
When our
App component updates the
currentTrackIndex, the
componentDidUpdate method in our
TrackList component is triggered, because the
TrackList component is getting new data. When the
TrackList component gets new data, we want to make sure the currently selected track is in our viewport, so we make some calculations to determine where in the DOM the currently selected track exists and make it appear in the view of the track list container.
componentDidUpdate() { if (this.activeTrack) { let topOfTrackList = this.trackList.scrollTop; let bottomOfTrackList = this.trackList.scrollTop + this.trackList.clientHeight; let positionOfSelected = this.activeTrack.offsetTop; if ( topOfTrackList > positionOfSelected || bottomOfTrackList < positionOfSelected ) { this.trackList.scrollTop = positionOfSelected; } } }
Displaying the list of tracks
We use the JavaScript
map function to loop over our array of tracks and call a function for each element in the array. The function we call is
renderListItem, which contains some logic to determine if the
currentTrackIndex is the current element in the array we’re rendering. If it is, we need to make sure the value for the
className on the
li includes the
selected string. This will ensure that the styling for the selected track will be different when compared to the rest of the list.
renderListItem(track, i) { let trackClass = this.props.currentTrackIndex === track.id ? "selected" : ""; return ( <li key={track.id} className={trackClass} ref={cur => { if (this.props.currentTrackIndex === track.id) { this.activeTrack = cur; } }} onClick={()=>{this.props.selectTrackNumber(track.id)}} > <div className="number">{track.id}</div> <div className="title">{track.title}</div> <div className="duration">{track.duration}</div> </li> ); }
The
li element also contains some other important attributes:
key: whenever we have a list, we need to include this attribute so that the list will render properly. For more information on using keys with lists in React, check out this article in the React documentation.
className: to make sure the
lihas the
selectedclass attached to it if it’s the currently selected track.
ref: we use the
refattribute to calculate the correct location of the track list container. If the current track is not visible, we calculate the location of the current track and make it visible. We need to access the actual DOM element to make this calculation correctly.
onClick: when the user selects a particular track, we call this function, which calls
this.props.selectTrackNumber. This function is passed into the
TrackListcomponent from our parent
Appcomponent, just like the click handler for the
Controlscomponent. When this function is called, the state of our application is updated, with the
currentTrackIndexgetting set to the track number the user selected.
selectTrackNumber(trackId){ this.setState({currentTrackIndex:trackId,playing:true},this.playAudio); }
Try It Out!
Check out the Codepen example. The album art comes from an album by a band called the Glass Animals. Since we can’t legally stream the “Glass Animals” soundtrack, we’ve picked some royalty-free music to play in its place so we can get the full effect of the music player.
See the Pen React DailyUI – 009 – Music Player by Jack Oliver (@jackoliver) on CodePen.
This post is a part of the React Daily UI post series at Fullstack React, a joint effort between Jack Oliver, Sophia Shoemaker, and the rest of the team at Fullstack React.
Want to dive in deeper into React fundamentals? Check out Fullstack React: The Complete Guide to ReactJS & Friends to learn more.. | https://www.sitepoint.com/react-16-new-features/ | CC-MAIN-2021-31 | refinedweb | 2,270 | 54.22 |
Building RSpec with Fix
First of all, to avoid confusion in the Ruby community, please note that:
The code and the gem that I’m going to write about is not coming from rspec.info’s rspec, but from a personal project named r_spec (for Ruby Spec).
My r_spec project is totally independent from rspec.info, even if its interface is directly inspired by rspec.info’s rspec.
Also, while the namespace of both gems is the same (I follow the gem naming convention), the r_spec internal is however quite different than rspec.info’s, as shown in this article.
Having said that, I am pleased to blog about how to build an RSpec clone that can handle some basic features, and maybe replace the original in your next project, who knows?
But for now, let’s hack some code!
Initial setup
Instead of re-inventing the wheel, we will include those small, cryptographically signed and well focused gems coming from a few Ruby Fix projects:
- fix, a fresh new specing framework.
- fix-expect, to provide the expect syntax.
- fix-its, to provide “its” method for attribute matching.
- matchi-rspec, to extend matchers with some RSpec’s ones.
To start with, our Gemfile could be:
source ''
gem 'matchi-rspec', '~> 0.0.2'
gem 'fix-expect', '~> 0.3.0'
gem 'fix-its', '~> 0.4.0'
And instead of manually requiring all these dependencies, we’ll ask Bundler to do it for us:
require 'bundler/setup'
Bundler.require
At this point, we can define our namespace. Let’s call it RSpec.
module RSpec
end
It will have to include a “describe” class method, taking a described class and a block of specs to execute. The Fix’s Test class should do the job.
Also, after reporting the results, the test must exit with a success or a failure termination status.
Here is an implementation:
def self.describe(front_object, options = {}, &specs)
t = ::Fix::Test.new(front_object, options, &specs)
print "#{t.report}" if options.fetch(:verbose, true)
exit t.pass?
end
Now, let’s pretend that our application is the following:
app = 'Hello, world!'
A test set could thus be:
RSpec.describe String do
context 'I am talking to Alice' do
let(:person) { 'Alice' }
it 'replaces "world" by "Alice"' do
value = app.gsub('world', person)
expect(value).to eq 'Hello, Alice!'
end
end
context 'I am talking to Bob' do
let(:person) { 'Bob' }
it 'replaces "world" by "Bob"' do
value = app.gsub('world', person)
expect(value).to eq 'Hello, Bob!'
end
end
end
Is it passing?
$ ruby test.rb
..
Ran 2 tests in 0.02856 seconds
100% compliant - 0 infos, 0 failures, 0 errors
No surprise, it is. One thing that can be noted, however, is the presence of those two contexts:
- I am talking to Alice
- I am talking to Bob
They were able to ensure an isolation of the tested code. This protection was useless due to the absence of side effects, but let’s try once again with the “gsub!” method. 💩
A comparaison between the behavior of a fix-based script (named r_spec) and rspec.
As we can see, the build is passing against our testing script, but no longer against rspec.
Indeed, despite a test set with contexts, rspec don’t evaluate the code in isolation to prevent side effects.
Thus, the behavior of our clone does not behave in quite the same way, and that’s fine.
Now, we could continue with a “before” method for test setup, a “describe” method to group a set of related examples and a “subject” method to override the implicit receiver… by overriding the Fix::On class.
In the same fashion, the “described_class” implementation could be done by overriding the Fix::It class.
Showtime!
In addition to “its” and “let” keywords, it is all the Fix’s syntax which becomes available, so…
What about replacing the flat-expect syntax for the benefit of RFC 2119’s keywords to start qualify your expectation?
Conclusion
This experimental script living on GitHub demonstrates that we can build great tools with some super simple pieces of code that everyone can read and understand.
Such tools should be accessible for everybody, including less experienced developers. By the way, did you read some rspec code? Anyway,
I don’t always test my code with RSpec, but when I do I do it with r_spec.
For any questions or concerns, please do not hesitate to get in touch.
Happy testing! | https://medium.com/@cyri_/building-rspec-with-fix-bb2feb240bd3 | CC-MAIN-2017-47 | refinedweb | 740 | 74.9 |
Retrieving a value ..
podonga poron
Ranch Hand
Posts: 55
Originally posted by podonga poron:
public class example{
public static void method(){
boolean value=false;
}
public static void main(String Args[]){
//how can i get the boolean value here ?
}
}
thanks !
You are creating a local variable inside method whose visiblity is limited inside the method itself.Change you code like this
[ May 13, 2008: Message edited by: Balasubramanian Chandrasekaran ]
Joanne Neal
Rancher
Posts: 3742
16
Originally posted by Joanne Neal:
Balasubramanian
To avoid confusing the beginner's even more, it's probably a good idea to try compiling your examples before you post them.
You can't access instance variables from static methods with creating an object.
Sorry Joanne, i just typed it here itself without looking onto that point.That s a good point.
the variable declaration must be
it's probably a good idea to try compiling your examples before you post them
ya i will surely do this one.
Actually, I suspect what the original poster is wanting to be taught is the notion of a return value:
| http://www.coderanch.com/t/410419/java/java/Retrieving | CC-MAIN-2016-30 | refinedweb | 180 | 52.09 |
What’s io-ts? In theory, io-ts is a runtime type system for IO decoding/encoding. In practice, io-ts is an elegant solution to a very nasty problem.
Statically-typed applications that interact with the external world are facing a challenge to keep this interaction surface type safe. One of the most prominent is data input, especially working with requested data via REST or GraphQL endpoints but also structured text files, uploaded CSV and JSON files.
If you get stuck and would like to try a more gradual introduction or would like to read more about algebraic data structures, read my previous article Pattern matching and type safety in TypeScript.
Prone API to changes
Although HTTP clients like axios allow us to specify a type of the payload we expect to receive in response to each request we make, it’s only as good as the API documentation or our empirical experience with particular API:
- Oh, this number is string decimal
- Oh, this number is an actual number
- Oh, this boolean is actually 0 or 1, not true and false
Can you relate? I certainly can, my frustration reached its peak in mid-2016. Despite not having OCD, I felt I had to do something about it and created mappet.
const schema = { firstName: "first_name", cardNumber: "card.number", }; const mapper = mappet(schema); const source = { first_name: "Michal", last_name: "Zalecki", card: { number: "5555-5555-5555-4444", }, }; const result = mapper(source); // { // firstName: "Michal", // cardNumber: "5555-5555-5555-4444", // }
You probably aren’t familiar with mappet. This rather unpopular library enjoys fine retention. It maps responses making them what you want them to be instead of what you get. For me, the idea of pouring raw data from the external service into my application was always crazy as you’re just left hoping it somehow won’t break.
io-ts basics
io-ts, the primary topic of this article, addresses the same issue, but through empowering the type system with a runtime component. These runtime components are called codecs.
A value of type
Type<A, O, I> (codec) is the runtime representation of the static type A. It:
- decodes inputs of type I (through decode)
- encodes outputs of type O (through encode)
- can be used as a custom type guard (through is)
For those of you who are familiar with Elm, you might find io-ts similar to Json.Decode and Json.Encode modules.
Let’s start with creating a simple string codec. The string codec makes sure that string is a string also at the runtime.
const isString = (u: unknown): u is string => typeof u === "string"; // Type<A, O, I> const string = new t.Type<string, string, unknown>( "string", isString, (input, context) => isString(input) ? t.success(input) : t.failure(input, context), t.identity, );
Through codec decode method we can parse the input which in this case isn’t specified (unknown type).
string.decode("100").map(str => str.toUpperCase()).getOrElse("Invalid"); // "100" string.decode(100).map(str => str.toUpperCase()).getOrElse("Invalid"); // "Invalid"
decode method returns an
Either<t.Errors, A>. Either is a monad that actually comes from a different, sister library fp-ts. fp-ts is a set of many common data types and type classes, so if you want to implement something that works on such data types, you use fp-ts and make your library interoperable with the rest of the fp-ts ecosystem. Monads may sound intimidating, but in reality, they’re just a bunch of discriminated unions with some basic filter/map/reduce method variations.
Returned
Either<t.Errors, A> has multiple methods that allow you to make transformations and work on A type without considering whether this particular decode call resulted in an error or not. Once you want to consume the value, you choose one of a few approaches. In the aforementioned example, we just went with simple getOrElse, which is self-explanatory.
Codecs helper
If you see little or no value in the string codec, I understand that and bear with me, because this is not what you typically do when working with io-ts. The majority of codecs you create with t.type helper. And t.type helper is awesome!
import * as t from "io-ts"; const HNPost = t.type({ title: t.string, url: t.string, points: t.number, }); type HNPost = t.TypeOf<typeof HNPost>;
HNPost is a codec for part of the Hacker News post available through the Aloglia API. What really makes it viable to use in a typical project is that it doesn’t require additional effort to define the interface and codec separately through leveraging
t.TypeOf<tyoeof t.Type>.
Codecs composition
Codecs can be composed to create deeply nested types to represent not only a single entity but the response payload.
const HNPosts = t.type({ hits: t.array(HNPost), page: t.number, });
This is how we could process the response payload to request for the Hacker News posts.
const res = await axios.get(""); const posts = HNPosts.decode(res.data);
Notice that we haven’t passed any type to axios.get and res.data has now the any type by default. This is fine because at that point that’s the only information we can truly trust. Let’s say we want to sum up all points of the top TypeScript articles.
import { PathReporter } from "io-ts/lib/PathReporter"; const pointsSum = posts.map(post => post.hits.reduce((s, p) => s + p.points, 0)); if (pointsSum.isRight()) { console.log(`Sum of points is ${pointsSum.value}`); } else { console.log(PathReporter.report(pointsSum)); }
Either’s isRight or corresponding isLeft methods are TypeScript type guards so you can use them to assert on types and implement an alternative way to consume underlying value.
Reporters are simple instances that are a pluggable mechanism to handle errors. The only requirement for the reporter is to implement the Reporter interface.
interface Reporter<T> { report: (validation: Validation<any>) => T }
Where
Validation<A> is
import { Either } from "fp-ts/lib/Either"; type Validation<A> = Either<t.Errors, A>;
Wrap-up
You want to consider how safe it is to pass data from the API server (or any other source) straight into your application state. To make sure payloads are what you expect them to be, use io-ts that’s an excellent choice but its narrow focus on encoding and decoding may feel constraining.
Consider using mappet as an extension of your data transformation logic to modify and filter entries. Apart from only type-safety, you get increased flexibility and more control over data you work with.
This article has been originaly posted on Tooploox's blog: Bridging Static and Runtime Types with io-ts
Photo by Marcus Benedix on Unsplash. | https://michalzalecki.com/bridging-static-and-runtime-types-with-io-ts/ | CC-MAIN-2020-10 | refinedweb | 1,112 | 56.76 |
On Sat, Nov 29, 2008 at 5:09 PM, spir <denis.spir at free.fr> wrote: > Kent Johnson a écrit : >> On Fri, Nov 28, 2008 at 6:01 PM, spir <denis.spir at free.fr> wrote: >>> Kent Johnson a écrit : >> OK, functions (and methods, which are also functions, both of which >> are instances of some builtin type), classes (which are instances of >> type or classobj) and modules all have __name__ attributes. > > You're right, actually, as I was studying a bit how /some/ built-in types > work, and they all had a __name__, I had a wrong impression about that > attribute. > >>> Anyway, do you have an idea how to let custom objects inherit such >>> attributes (*)? If they were of a custom type, one would just do it with >>> inheritance (multiple, if needed). Now, this does not seem to work with >>> buil-tins -- or I was unable to find the proper way. >> >> I don't understand this paragraph. When you say "custom objects" do >> you mean classes or instances? Custom classes do have __name__ >> attributes; instances in general do not, other than the special cases >> mentioned above. >> >> I *think* what you want is to be able to say something like >> class Foo(object): pass >> myFoo = Foo() >> and then have >> foo.__name__ == 'myFoo' > > Exactly, "custom objects" was a shortcut for "objects that are instances of > custom types". > >> Is that right? If so, first recognize that the object can be assigned >> to many names: >> foo = bar = baz = Foo() >> so what you are asking is not well defined. > > Right. I just need the first one. The object's birthname ;-) ; whatever it > may be in case of "sequential naming", like in your example (which can > happen in my case). > >> Second, search on comp.lang.python for some ideas, this request comes >> up there from time to time. >> >> Third, how about passing the name to the constructor, or assigning the >> attribute yourself? >> myFoo = Foo('myFoo') >> or >> myFoo = Foo() >> myFoo.> Yes, you have to type the name more than once... > > Actually, it is not an big issue. There are workarounds, anyway (such as > exploring the current scope's dict /after/ object creation, and assigning > its name back to an attribute -- this can also be automatised for numerous > objects). > Still, it would help & spare time & energy... This is also for me (and > hopefully other readers) an opportunity to explore corners of the python > universe ;-) > >> AFAIK Python does not have any hook to modify assignment which I think >> is what you want. You might be able to do something like this with the >> ast module in Python 2.6. You would have to load the code for a >> module, compile it to the AST, modify the AST and compile it to code. >> Some examples of this: >> >> >> >> >> >> I know you explained before but I still don't understand *why* you >> want to do this... > > First, this problem occurs while I'm trying to use code of an external > module for a project of mine. My case is probably specific, otherwise this > issue would have been adressed long ago. > These objects are kinds of "models" -- they control the generation of other > objects -- but they are not classes, and can't be. As a consequence, the > generated objects do not know what they actually are. They have a type, of > course, but this type is general and not related to the real nature of the > objects that share it. It would be a first step if the generators first > would themselves know 'who' they are; then they would be able to pass this > information to their respective "chidren". This would be a very small and > harmful hack in the module's code (which is nice to study, not only for its > organisation, also because it is not built upon further modules).? >>> (*) The point is that they can't be written by hand. How would you do it >>> e.g. that an attribute automatically holds the objects own name? You can >>> play with the namespace's dict, but it's the same thing. (It's like >>> inheriting for instance an integer's behaviour: yes, you can rewrite it >>> from >>> scratch.) >>> class myInt(int): works >>> class myFunc(function): works not >>> TypeError: Error when calling the metaclass bases >>> type 'function' is not an acceptable base type >> >> I don't understand your point here. How would creating your own >> function class help? > >.) >> Kent PS Please use Reply All to reply to the list. > > Thank you for your attention and help, > Denis > > > | https://mail.python.org/pipermail/tutor/2008-November/065509.html | CC-MAIN-2016-50 | refinedweb | 745 | 72.66 |
SHUTDOWN(2) BSD Programmer's Manual SHUTDOWN(2)
shutdown - shut down part of a full-duplex connection
#include <sys/types.h> #include <sys/socket.h> int shutdown(int s, int how);
The shutdown() call causes all or part of a full-duplex connection on the socket associated with s to be shut down. If how is SHUT_RD, further re- ceives will be disallowed. If how is SHUT_WR, further sends will be disallowed. If how is SHUT_RDWR, further sends and receives will be disallowed.
A 0 is returned if the call succeeds, -1 if it fails.
The call succeeds unless: [EINVAL] how is not SHUT_RD, SHUT_WR, or SHUT_RDWR. [EBADF] s is not a valid descriptor. [ENOTSOCK] s is a file, not a socket. [ENOTCONN] The specified socket is not connected.
connect(2), socket(2)
The shutdown() function call appeared in 4.2BSD. The how arguments used to be simply 0, 1, and 2, but now have named values as specified by X/Open Portability Guide Issue 4 ("XPG. | http://www.mirbsd.org/htman/i386/man2/shutdown.htm | CC-MAIN-2016-18 | refinedweb | 167 | 67.96 |
Ever since the "disatrous" Adobe Encore (CS4 and 5 does not matter)'s way of storing the installation directory details in Sqlite database file and some uninstall string in the registry, I would like to create dll to get the install directory from Sqlite database as per recommended by the Inno Setup (I can only use that to create some language pack installer for now.) users ().
Hence, I have written a dll in VB.NET as I am only familiar with that (I do not have any experience in C++, so sorry for that) and only to realized that Inno Setup can only call native dlls (Code for VB.NET is here):
So the problem is, I would like to create a C++ Native DLL and such that, I have the following prototype in my code (main.h):
#ifndef __MAIN_H__ #define __MAIN_H__ #include <windows.h> #include <string> #include <sstream> /* To use this exported function of dll, include this header * in your project. */ #define DLL_EXPORT __declspec(dllexport) #ifdef __cplusplus extern "C" { #endif [B]string[/B] DLL_EXPORT GetDirectory([B]string[/B] SubDomain); #ifdef __cplusplus } #endif #endif // __MAIN_H__
And in main.cpp
#include "main.h" #include "cppSQLite3.h" #include <iostream> #include <string> // a sample exported function [B]string[/B] DLL_EXPORT GetDirectory([B]string[/B] Subdomain) { //Write a function to get the installation directory. [I] return 0;[/I] } }
I have the CppSQLite from and I have no complication error when I changed the bold part into int
and hence the italics return 0; as I do not have an idea to declare a string variable.
For now, I am not asking on how to connect Sqlite connection but as why cant I put string as a type but can only have int / char? How should I do such that it accepts a receiving and returning string values (Receiving the target value / column to be searched, such as: {27B54140-8302-4B5D-83DD-AEE4B18BC7A4} while returning the installation path before researching on Sql issue.
Thanks a lot. | https://www.daniweb.com/programming/software-development/threads/311272/sqlite-with-c-dll-that-requires-a-string-input-returning-function | CC-MAIN-2021-49 | refinedweb | 331 | 57.71 |
How to Build a Smart Chatbot Assistant with ChatEngine, React and IBM Watson Assistant
- By Syed Ahmed
- Follow @nxsyed PubNub, but your bot can be trained and customized however you wish. We’ll show you how to do this too.
The full project GitHub repo is available here.
How It Works
Before we get into the code, let’s map out the logic in terms of how our user will interact with our assistant.
At first, the user asks a question via ChatEngine to send a message to out our chatbot assistant. The reason we have two ChatEngine connections is to make sure that our bot can handle multiple connections at once. In our specific case we’ll make our chatbot on the same client but in an ideal situation, it would be a good idea to have the chatbot running on a separate client.
Once the bot receives the message via ChatEngine it sends the message to PubNub Functions. The goal behind this is to reduce the amount of backend code we have and just focus on the things that matter — the experience. Functions is how we connect our messaging application to the IBM Watson Assistant service.
In our BLOCKS Catalog, there is an IBM Assistant BLOCK which needs a bit of configuration, and once that’s done it will be connected to IBM Watson. The function will query Watson and get a reply back, which it will then send to our customer.
Getting Started with ChatEngine and Watson
Configuring ChatEngine
Building the chat interface and backend for a chatbot can be challenging and time-consuming. We’ll mitigate that using ChatEngine, our PubNub-powered framework that makes building cross-platform a breeze.
You’ll first have to configure a new ChatEngine application in PubNub. You can do this in our ChatEngine quickstart, and it will provide you your pub/sub keys as well as configuring your keyset for ChatEngine.
Watson + BLOCKS
Once we have ChatEngine configured, we can start building our assistant.
First, create a new Function within our ChatEngine Module. We can call this function ‘Assistant’, have an event of ‘On Request’ and make our channel name be ‘assistant’. After adding our new Function we can go ahead and add the code for it to connect to IBM Assistant.
This is actually a version of the code available in the IBM Assistant BLOCK but slightly changed so that we could fit it with our interaction model. In particular, if we look at lines 55 – 65 we’ll that the way we handle our response and the method we get data from our client has changed.
You’ll notice that in the Function code above that we’re using the vault to get access to certain keys available to our module. We’ll need to add three more keys to our vault which can be found when setting up Watson.
Watson Configuration
Head over to Watson to get your keys. If you haven’t already, feel free to set up a Watson instance on your Bluemix console. You’ll see that with your new Watson resource you’ll have a username and password. View those and insert them into the vault in the Assistant Function we just created. The next key we need is the workspaceID, we can go to the launch tool in the Bluemix console and create a workspace.
In here, we can either create a workspace from scratch or use a sample. For this example, let’s go with the customer service example. Once we’re in the workspace we can open the hamburger menu on the side and go to the improve tab. Near the top of the page, we should see a credentials button, and by clicking that we can get our workspaceID.
So, now that we have our keys, our vault should look something like this.
At this point, we have the cogs of our application working. We created the structure that will receive messages from our bot and send that data to our assistant and also have our assistant wired up so that those messages can get a meaningful response.
Let’s start developing our client and its interface.
Me, My Bot, and I
Looking at the client, there are two things that we’re going to make. The first being the input for where the user can submit their message and the second being the bot that can send that data to our function. Ideally, we’d separate these two components but for simplicity, we’ll have them both running on the same client.
To begin, let’s create a basic React project which we can do by using the create-react-app tool.
$ npx create-react-app Watson-Assistan
If your version of npm is less than 5.2 feel free to use
npm instead of
npx. The difference between the two is based on the scope of the installation and whether the command is a one-off or not. If you need more information read about it in npm’s blogpost.
The folder should now be set up so let’s navigate into it.
cd Watson-Assistant/
Great! Our project tree should look something like this.
|-- Watson-Assistant |-- .gitignore |-- README.md |-- package-lock.json |-- package.json |-- public | |-- favicon.ico | |-- index.html | |-- manifest.json |-- src |-- App.css |-- App.js |-- App.test.js |-- index.css |-- index.js |-- logo.svg |-- registerServiceWorker.js
In here we’re going to take out some of the boilerplate that the create-react-app tool as graciously left for us.
We can take out
logo.svg and
registerServiceWorker.js. There’s no particular reason other than to clean up our project, but if you feel that you the need to look more into why the service worker file is there have a look at this issue on the master GitHub repo.
After cleaning up our directory our structure should look something like this.
|-- Watson-Assistant |-- .DS_Store |-- .eslintrc.yml |-- .gitignore |-- package-lock.json |-- package.json |-- public | |-- favicon.ico | |-- index.html | |-- manifest.json |-- src |-- index.css |-- index.js
Now let’s look at the file where the majority of our app is going to sit: index.js.
We’ll start by looking at the lifecycle and what we’re going to render in our browser. First, ensure we have ChatEngine in our package.
$ npm i chat-engine
At this point, I’d also recommend checking out my package.json file on GitHub. Here you can follow along completely without breaking your code. I’ll try to make sure that I explain each package I use so that you get a better understanding of how everything fits in.
So now that we have the packages, let’s start coding.
To create our client we need to give our ChatEngine client the keys to the ChatEngine app we made earlier. Choosing our app on the PubNub Admin Dashboard then choosing our keyset we can get the publish and subscribe keys. We can then create our client object like this.
const ChatClient = ChatEngineCore.create({ publishKey: 'pub-c-e1295433-4475-476d-9e37-4bdb84dacba0', subscribeKey: 'sub-c-890a0b26-6451-11e8-90b6-8e3ee2a92f04' }, { globalChannel: 'watson-assistant' });
Since we are creating the ChatClient and ChatBot on the same client we’re going to duplicate this but change the name to ChatBot.
It’s also nice to have a username for every connection we make so let’s make a template for usernames, something like this:
const now = new Date().getTime(); const username = ['user', now].join('-');
Connect our user and chatbot to our ChatEngine.
ChatClient.connect(username, { signedOnTime: now }); ChatBot.connect(`${now}bot`, { signedOnTime: now });
Create our chat class and look at the different lifecycle methods.
The constructor is a good place to begin, and in here we’re going to assign our variables with our chat instances. We can also initialize all the variables we want to store in the state.
Personally, I want to use this react-clippy package that will act as our assistant. We can use different animated characters to show our message and animate them. For Clippy we need to have an animation that plays so I’ll define that in my state. We also need the input the user gives and the response we get from our function.
Note: this post has no relation to Microsoft and they had nothing to do with this
In the end, our constructor looks like this.
constructor() { super(); this.chat = new ChatClient.Chat(`${now}chat`); this.bot = new ChatBot.Chat(`${now}chat`); this.state = { reply: 'Ask Me Something!', chatInput: '', animation: 'Congratulate' }; }
Before going further, it’s a good idea to formulate a bit more about the render function. We know that we’re going to be accepting input so we’ll need some sort of input box. As well, we’ll be displaying a response and I’ll be adding the wonderful Clippy.
render() { return ( <div style={container}> <div style={{height:100, width:100}}> <p> {this.state.reply} </p> <Clippy actor={ 'Clippy' } animation={ this.state.animation } /> <input id = "chat-input" type = "text" name = "" value = { this.state.chatInput } onChange = { this.setChatInput } onKeyPress = { this.handleKeyPress } /> <input type = "button" onClick = { this.sendChat } </div> </div> ); }
There are a couple other functions that complete some functionality in the chat class such as pressing enter to send the message but I’ll leave that up to you to explore. What we’ll look at now is the sending message part. There are two functions and they directly relate to how the flow of our project works.
sendChat = () => { if (this.state.chatInput) { this.chat.emit('message', { text: this.state.chatInput, channel: `${now}chat` }); this.setState({ chatInput: '' }) this.setState({ animation: 'Processing', reply: '' }); } } componentDidMount() { this.bot.on('message', (payload) => { axios.get(`? question=${payload.data.text}`) .then(response => { this.setState({ animation: 'Writing', reply: response.data }); }); }); }
The sendChat function lets the user send their message to the bot. Notice how we’re emitting the message and we’re defining the channel by the time. This allows making each chat in its own private session so that we don’t interrupt conversations other customers may be having at the same time.
We also need to ensure our bot listen for messages coming in. So that when the Chat component has mounted we need to make sure that our bot sends that message to our function and then updates our component with the message. For doing the request I’m using the axios package and you read more about here.
One last step. We need to make sure that once our ChatEngine connection is established we render our chat component. We can do that by adding this to the bottom of our file.
ChatClient.on('$.ready', () => { ChatBot.on('$.ready', () => { ReactDOM.render( <Chat /> , document.getElementById('root') ); }); });
Our Chatbot is Up-and-Running!
This is a great starting point for building something more unique. It would definitely be a good idea to explore the documentation for IBM Assistant to train our bot to be clever and have more depth in their responses. ChatEngine can do a lot of cool things to connect instantly with people and services. I would love to see what you build next 🙂. | https://www.pubnub.com/blog/chatengine-chatbot-ibm-watson-assistant-tutorial/ | CC-MAIN-2018-34 | refinedweb | 1,857 | 65.62 |
Launch a second view on a pushed button
Hey guys,
I am new with Qt and i have a problem.
I try to make a nice interface and my interface has tow view. First one is launch from main. First view has a button that launch a process (QProces) and after the process is finished it start the second view.
Now i wold like when the process launch also start a new view with a progress bar.
Right now i don't really want the process bar to reflect the real time that takes the process the finish. I just want to be able to start them at once.
In main i have something like this :
QQuickView firstview; view.setSource(QUrl("qrc:/firstView.qml")); view.setResizeMode(QQuickView::SizeRootObjectToView); firstview.show();
My QML for progress bar looks like this:
import QtQuick 2.3 import QtQuick.Controls 1.4 Item{ ProgressBar { id: pb1 minimumValue: 0 maximumValue: 100 value: 0 } // Timer to show off the progress bar Timer { id: simpletimer interval: 100 repeat: true running: true onTriggered: pb1.value < pb1.maximumValue ? pb1.value += 1.0 : pb1.value = pb1.minimumValue } }
If i replace in main firstview.qml with loding.qml it works.
But if i add them in the method that handle the pushed button it popup an empty window.
My button handle looks like this:
void Test::Butt() { this->proc->start("some_executabele", QIODevice::ReadWrite); this->proc->waitForFinished(-1); }
If i modify the function like this
view->setSource(QUrl("qrc:/loading.qml")); view->setResizeMode(QQuickView::SizeRootObjectToView); view->show(); this->proc->start("some_executabele", QIODevice::ReadWrite); this->proc->waitForFinished(-1);
The output is something like this:
While if i run the some code from main:
QQuickView view; view.setSource(QUrl("qrc:/test.qml")); view.setResizeMode(QQuickView::SizeRootObjectToView); view.show();
The output looks like this:
Can somebody give me a clue, why doesn't load ok the qml file when is call from button handler? :D
Thx :)
Please...any idea are welcome. :(
@Lucian said in Launch a second view on a pushed button:
this->proc->start("some_executabele", QIODevice::ReadWrite);
this->proc->waitForFinished(-1);
If you remove this from void Test::Butt()
does it work then ?
ok can u try
QQuickView * view = new QQuickView(this)
view->setSource(QUrl("qrc:/test.qml"));
view->setResizeMode(QQuickView::SizeRootObjectToView);
view->show();
(it leaks but , just for test)
@Lucian
Ok then its not a variable running out of scope.
And you have tested that
Test::Butt()
is called?
Also try
QQuickView * view = new QQuickView;
instead of
QQuickView * view = new QQuickView()
Not sure if u want the view to be inside something or in a window.
IF you dont give a Qwidget a parent it will become a window.
Else its inserted into parent.
Also the code
QQuickView view; view.setSource(QUrl("qrc:/test.qml")); view.setResizeMode(QQuickView::SizeRootObjectToView); view.show();
Can work from main but if u do the same in a function, you have this problem that
as soon as functions ends, it will delete view
so right after view.show(); is called it will be deleted. ( so might not show at all)
That is why i showed
QQuickView * view = new QQuickView(this)
That wont run out of scope.
I have try everything, it doesn't work. Every time creates a new window but is empty.
I don't understand why doesn't work..
If i remove
this->proc->start("some_executabele", QIODevice::ReadWrite); this->proc->waitForFinished(-1);
Works ok...
My function in pushed button now looks like this:
QQuickView* view view = new QQuickView(); view->setSource(QUrl("qrc:/loading.qml")); view->setResizeMode(QQuickView::SizeRootObjectToView); view->show(); this->proc->start("some_executabele", QIODevice::ReadWrite); this->proc->waitForFinished(-1);
Thank you for your help @mrjj
@Lucian
Hi
It looks ok.
( Note, u create a new QQuickView each time. So its a leak but that u can fix later )
I would look at what
returns.
To see if it has some issue running/loading.
Update:
Just to be sure. We are debugging the QQuickView and setSource
and nothing related to waitForFinished ?
Or are you saying that it works until you use QProcess then windows is empty?
Yes,,works until i use QProcess.
If i let the button handle without QProcess it works.
It is possible so lunch a QQuickView in a new QProcess? Maybe current process is the reason why doesn't work .
Then i think is what happens.
The setSource starts to load. its still loading when functions returns.
then waitForFinished blocks the event loop
so there is no time to draw when the QML is ready.
so I would try to use the
and if status is ready, then there in that slot i would run the QProcess.
In that way you first block the event loop when the
QML is 100% finished loading and that should work better.
Ok, i will try to find an example and i will post if i solve this.
Can you give me one last hand of help ?:D
Look what i have done
class Process : public QObject { Q_OBJECT private: QProcess* proc = new QProcess(this); QQuickView* view = new QQuickView(); public slots: void startProcess(); };
In process.cpp
void Process::startProcess() { this->proc->start("C:\\Windows\\System32\\calc.exe", QIODevice::ReadWrite); this->proc->waitForFinished(-1); } void Process::run() { view->setSource(QUrl("qrc:/loading.qml")); view->setResizeMode(QQuickView::SizeRootObjectToView); view->show(); QObject::connect(view,SIGNAL(view::statusChanged),this, SLOT(Process::startProcess())); }
Now the qml is show correct but doesn't start the calc.exe after the qml is load.
Is corect how i use QObject::connect?
@Lucian said in Launch a second view on a pushed button:
Hi
Good work
There is a slight adjustment
QObject::connect(view,SIGNAL(view::statusChanged),this, SLOT(Process::startProcess()));
should be
QObject::connect(view,SIGNAL(statusChanged() ),this, SLOT(startProcess() ) );
( no class:: in front when using the old syntax)
And i would love if you did
qDebug() << "status sig:" << QObject::connect(view,SIGNAL(statusChanged() ),this, SLOT(startProcess() ) );
so you can check it says TRUE;
Also
void Process::startProcess() {
qDebug() << "startProcess called";
...
So we know it connects works and the startProcess is called.
Something is wrong.
It say : "status sig: false"
So the signal is not generated.
Also say:
QObject::connect: No such signal QQuickView::statusChanged()
@Lucian
ahh sorry.
Didnt check the docs. ( shame on me)
its just missing the parameter.
void QQuickView::statusChanged(QQuickView::Status status)
so its
QObject::connect(view,SIGNAL(statusChanged(QQuickView::Status) ),this, SLOT(startProcess(QQuickView::Status) ) );
NOte. only Type is added. NOT the paramtername
and u need/could to change slot too
void Process::startProcess( QQuickView::Status thesatus) {
so here u can actually see if its ready etc.
but u dont need to use the status :)
@Lucian
Please check documentation:
The signal has a parameter, so it should be:
qDebug() << "status sig:" << QObject::connect(view,SIGNAL(statusChanged(QQuickView::Status) ),this, SLOT(startProcess() ) );
Thanks @mrjj , @jsulm
Now it's better.
"status sig: true"
But the startProcess() still isn't call yet.
The message from slot is not printed and the calc.exe doesn't show up, but at lest the status from QQuickView is changed.
@Lucian Try to connect first:
void Process::run() { QObject::connect(view,SIGNAL(view::statusChanged),this, SLOT(Process::startProcess())); view->setSource(QUrl("qrc:/loading.qml")); view->setResizeMode(QQuickView::SizeRootObjectToView); view->show(); }
Yes @jsulm .
It work but this doesn't solve my problem.
I try to connect status change with startProcess because the progress bar (loading.qml) was block by waitForFinish(-1).
Now is happening the same.
The signal and the slot are called in the right order but also loading.qml doesn't popup until i close the calc.exe.
@Lucian Why do you wait for the process to finish?
this->proc->waitForFinished(-1);
Are you aware that this call blocks the event loop (so your app is blocked)?
Yes @jsulm .
I need to do that because the process what will be launch will create a txt file. This file will be use farder so i need to be sore the process has finished his job and has created the txt.
- jsulm Qt Champions 2019 last edited by jsulm
@Lucian No need to wait for this! Just connect to signal - when it is called the process has finished...
Qt is an asynchronous framework - you should adapt to asynchronous programming.
But how will wait my application to create the txt file ? Because after the process is finished i want to read from that .txt file created by him and if i read before creation the application will fail.
So i need to connect the finish signal from QProcess with a slot that make the read from .txt file? :D
@Lucian Yes, you simply connect a slot to that signal and do what you want with that file in this slot.
I have made some changes and now look like this:
void Process::startProcess() { qDebug() << "startProcess called"; this->proc->start("C:\\Windows\\System32\\calc.exe", QIODevice::ReadWrite); qDebug() << "connect status :" << QObject::connect(proc,SIGNAL(QProcess::finished()),this, SLOT(addData())); }
And the slot:
void Process::addData() { qDebug() << "addData call:"; SomeClasss c; c.read(); }
The problem is again with type of the signal:
*QObject::connect: No such signal QProcess::QProcess::finished()
From documentation (Qt doc) doesn't seems to be something wrong.
I have solved the problem like this:
qDebug() << "connect status :" << QObject::connect(proc,SIGNAL(finished(int, QProcess::ExitStatus)),this, SLOT(addData()));
I will mark this as Solved.
Thank you for your help @jsulm , @mrjj . | https://forum.qt.io/topic/78609/launch-a-second-view-on-a-pushed-button/?page=1 | CC-MAIN-2020-05 | refinedweb | 1,562 | 66.64 |
Hello and welcome to another object detection tutorial, in this tutorial we're going to use the TensorFlow Object Detection API to acquire a vehicle for our agent if we don't have one. In our previous tutorial, we sorted out which vehicle we want to approach, but we need the code to actually approach the car, in the function called
determine_movement.
In order to head towards a vehicle, we need control both the keyboard and the mouse. To do this, we're going to bring in one more script from the GTA / self-driving car series/github: keys.py. This script, among other things, allows us to control the mouse and press keys. In some environments, you could just use something like PyAutoGUI, but, in our environment, we require direct inputs. Grab
keys.py above, and make sure it's in your working directory before continuing.
Back in our main vehicle detection script, let's add two new imports:
import keys as k import time
Now we're going to build our
determine_movement function. This function's purpose is to "look" at a specific vehicle. Already in our code, we know the relative location to the nearest vehicle in our vision, but we need to use this to calculate where to have the code move the mouse so that we are looking at the car. Our function begins with:
def determine_movement(mid_x, mid_y,width=1280, height=705): x_move = 0.5-mid_x y_move = 0.5-mid_y
The
mid_x and
mid_y values are *relative.* They are not pure pixel values, they come back as percentages. This is how the neural network model is returning data back to us. So, in this case, we can calculate, still in percentage form, what movements will be required by subtracting the middle points of the object from the middle point of our screen (0.5, 0.5). If the
x_move is greater than 0, we need to move left, if it's less than, we move right. For
y_move, if it's greater than 0, we move up, less than 0 we move down.
Knowing this, we then need to determine by how much. Again, movement is still
x_move or
y_move, they are in percentage form, and we need to translate that to pixel numbers, which is why we're also passing
width and
height values into our function. I've set the defaults to be what I have set for the grabscreen. I display a 1280x720 window, which, along with the title bar, is about 1280x705 of actual game. Depending on your operating system and settings, your title bar may vary in size. When you visualize the window, if you can see the titlebar, adjust it. So, to get pixel moves, we just need to multiply the
x_move or
y_move by the width or height respectively.
To move the mouse, we use:
keys.keys_worker.SendInput(keys.keys_worker.Mouse(0x0001, X_COORD, Y_COORD)). So then we can do the following to get our agent to look at the closest car:
hm_x = x_move/0.5 hm_y = y_move/0.5 keys.keys_worker.SendInput(keys.keys_worker.Mouse(0x0001, -1*int(hm_x*width), -1*int(hm_y*height)))
That's it, so now our full function is:)))
Okay, that'll do. Now, let's add in the rest of the code necessary. First, after our two if-statements, right before the while True loop, let's add:
stolen = False
So it should look like:
with detection_graph.as_default(): with tf.Session(graph=detection_graph, config=tf.ConfigProto(gpu_options=gpu_options)) as sess: stolen = False while True:
Next, we are going to check to see if we've already stolen a vehicle, by doing:
if len(vehicle_dict) > 0: closest = sorted(vehicle_dict.keys())[0] vehicle_choice = vehicle_dict[closest] print('CHOICE:',vehicle_choice) if not stolen: determine_movement(mid_x = vehicle_choice[0], mid_y = vehicle_choice[1], width=1280, height=705)
Note that some of that code above is from the previous tutorial. Okay, now, let's add some log in for actually stealing a vehicle, if one is close enough to us:
if closest < 0.1: keys.directKey("w", keys.key_release) keys.directKey("f") time.sleep(0.05) keys.directKey("f", keys.key_release) stolen = True else: keys.directKey("w")
That'll probably work.
Full code up to this point:
# coding: utf-8 # # Object Detection Demo # License: Apache License 2.0 () # source: from grabscreen import grab_screen import cv2 import keys as k import time keys = k.Keys({}) # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") # ## Object detection imports # Here are the imports from the object detection module. from utils import label_map_util from utils import visualization_utils as vis_util # # Model preparation #)))) # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.70) with detection_graph.as_default(): with tf.Session(graph=detection_graph, config=tf.ConfigProto(gpu_options=gpu_options)) as sess: stolen = False while True: #screen = cv2.resize(grab_screen(region=(0,40,1280,745)), (WIDTH,HEIGHT)) screen = cv2.resize(grab_screen(region=(0,40,1280,745)), (800,450)) image_np = cv2.cvtColor(screen, cv2.COLOR_BGR2RGB) #) vehicle_dict = {} for i,b in enumerate(boxes[0]): # car bus truck if classes[0][i] == 3 or classes[0][i] == 6 or classes[0][i] == 8: if scores[0][i] >= 0.5: mid_x = (boxes[0][i][1]+boxes[0][i][3])/2 mid_y = (boxes[0][i][0]+boxes[0][i][2])/2 apx_distance = round(((1 - (boxes[0][i][3] - boxes[0][i][1]))**4),3) cv2.putText(image_np, '{}'.format(apx_distance), (int(mid_x*800),int(mid_y*450)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 2) ''' if apx_distance <=0.5: if mid_x > 0.3 and mid_x < 0.7: cv2.putText(image_np, 'WARNING!!!', (50,50), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0,0,255), 3) ''' vehicle_dict[apx_distance] = [mid_x, mid_y, scores[0][i]] if len(vehicle_dict) > 0: closest = sorted(vehicle_dict.keys())[0] vehicle_choice = vehicle_dict[closest] print('CHOICE:',vehicle_choice) if not stolen: determine_movement(mid_x = vehicle_choice[0], mid_y = vehicle_choice[1], width=1280, height=705) if closest < 0.1: keys.directKey("w", keys.key_release) keys.directKey("f") time.sleep(0.05) keys.directKey("f", keys.key_release) stolen = True else: keys.directKey("w") cv2.imshow('window',image_np) if cv2.waitKey(25) & 0xFF == ord('q'): cv2.destroyAllWindows() break
With this, we have some decent-ish code to steal a car, but it's not very robust. If, for whatever reason, we fail to actually steal the car after pressing F, maybe the car was driving by too fast...etc... We're still thinking we stole a vehicle. It would be better to have some sort of visual check to know if we're driving a car or not to determine the "stolen" variable, but really we might as well rename the
stolen variable to be
in_car or something like that.
I also personally would prefer it if the mouse moved a bit more smoothly. | https://pythonprogramming.net/acquiring-vehicle-python-plays-gta-v/?completed=/finding-vehicle-python-plays-gta-v/ | CC-MAIN-2021-39 | refinedweb | 1,129 | 57.37 |
Formatting Source Code in HTML
If have you seen a number of my tutorials, you would have seen some nicely formatted sample code embedded within the page. Pastebins can host your code there are Javascripts that can also format your code. However, the WordPress service does not allow either of those options to be used. The only way I have been able to do so far has been to convert the code into an HTML code.
Copy and Paste from Eclipse
If you have been developing your code in Eclipse, you can actually copy and paste the code into a rich text editor (such as OpenOffice) and save it in HTML form. This will present the code in the same format as that presented in Eclipse. However, I find that using this method requires some extra editing on the generated HTML code, otherwise the results would looks something like this:
/**
*
Sample class to demonstrate generated HTML formatting.
*
*
@author
kah
*/
public
class
HelloWorld {
public
HelloWorld() {
String
message = "Hello there!";
System.out.println(message);
}
}
No changes to the original HTML were made here. A fair amount of HTML editing is still required to get the nice format. The HTML was actually generated by saving copying into OpenOffice and then saving it as an HTML file. If you intend to use this method, it may be worth trying out other rich text editors.
Generate the HTML using Vim
My preferred method is actually to simply use Vim to perform the conversion. Vim is available for Linux, Macs and Windows. If you are using the graphical version of Vim, open up your source file and click on syntax -> Convert to HTML. If you running it in a terminal, use the following ex command instead:
:TOhtml
This will split the editor horizontally and add the HTML code at the top.
Vim with generatd HTML
Here is a sample of the formatting produced by the conversion:
1 /**
2 * Sample class to demonstrate generated HTML formatting.
3 *
4 * @author kah
5 */
6 public class HelloWorld {
7 public HelloWorld() {
8 String message = "Hello there!";
9 System.out.println(message);
10 }
11 }
The results from this are much better and a lot less editing necessary! Here are a few more tips for using this method:
- I have found that when I copy and paste the generated HTML into my blog post, on WordPress, the system would sometimes adds empty lines between each line of code. I removed this extra lines by removing the <br> tags from the HTML code.
- If you want to add line numbering to your generated code, you need to turn on numbering first. To turn it on, execute the following ex command:
:set nu
- Finally, the colour scheme used in the generated HTML code will follow the colorscheme that Vim is set to. In other words, you can control the colour scheme used by syntax highlighting by setting Vim to the desired colour scheme before you generate the HTML code. If you are using the graphical version, you can find some of the available colour schemes by going to Edit -> Color Scheme.
For those who are wanting to put source code in a post on WordPress, the sourcecode tag will do that for you (for details, visit Posting Source Code. | https://kahdev.wordpress.com/2009/08/18/formatting-source-code-in-html/ | CC-MAIN-2019-13 | refinedweb | 545 | 67.38 |
How do I read CSV data into a record array in NumPy?
You can use Numpy's
genfromtxt() method to do so, by setting the
delimiter kwarg to a comma.
from numpy import genfromtxtmy_data = genfromtxt('my_file.csv', delimiter=',')
More information on the function can be found at its respective documentation.
I would recommend the
read_csv function from the
pandas library:
import pandas as pddf=pd.read_csv('myfile.csv', sep=',',header=None)df.valuesarray([[ 1. , 2. , 3. ], [ 4. , 5.5, 6. ]])
This gives a pandas DataFrame - allowing many useful data manipulation functions which are not directly available with numpy record arrays.
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table...
I would also recommend
genfromtxt. However, since the question asks for a record array, as opposed to a normal array, the
dtype=None parameter needs to be added to the
genfromtxt call:
Given an input file,
myfile.csv:
1.0, 2, 34, 5.5, 6import numpy as npnp.genfromtxt('myfile.csv',delimiter=',')
gives an array:
array([[ 1. , 2. , 3. ], [ 4. , 5.5, 6. ]])
and
np.genfromtxt('myfile.csv',delimiter=',',dtype=None)
gives a record array:
array([(1.0, 2.0, 3), (4.0, 5.5, 6)], dtype=[('f0', '<f8'), ('f1', '<f8'), ('f2', '<i4')])
This has the advantage that file with multiple data types (including strings) can be easily imported.
I timed the
from numpy import genfromtxtgenfromtxt(fname = dest_file, dtype = (<whatever options>))
versus
import csvimport numpy as npwith open(dest_file,'r') as dest_f: data_iter = csv.reader(dest_f, delimiter = delimiter, quotechar = '"') data = [data for data in data_iter]data_array = np.asarray(data, dtype = <whatever options>)
on 4.6 million rows with about 70 columns and found that the NumPy path took 2 min 16 secs and the csv-list comprehension method took 13 seconds.
I would recommend the csv-list comprehension method as it is most likely relies on pre-compiled libraries and not the interpreter as much as NumPy. I suspect the pandas method would have similar interpreter overhead. | https://codehunter.cc/a/python/how-do-i-read-csv-data-into-a-record-array-in-numpy | CC-MAIN-2022-21 | refinedweb | 342 | 57.77 |
/* Interface to "cvs edit", "cvs watch on", and related. */ extern int watch_on (int argc, char **argv); extern int watch_off (int argc, char **argv); #ifdef CLIENT_SUPPORT /* Check to see if any notifications are sitting around in need of being sent. These are the notifications stored in CVSADM_NOTIFY (edit,unedit); commit calls notify_do directly. */ extern void notify_check (const char *repository, const char *update_dir); #endif /* CLIENT_SUPPORT */ /* Issue a notification for file FILENAME. TYPE is 'E' for edit, 'U' for unedit, and 'C' for commit. WHO is the user currently running. For TYPE 'E', VAL is the time+host+directory data which goes in _editors, and WATCHES is zero or more of E,U,C, in that order, to specify what kinds of temporary watches to set. */ extern void notify_do (int type, const char *filename, const char *upadte_dir, const char *who, const char *val, const char *watches, const char *repository); /* Set attributes to reflect the fact that EDITOR is editing FILENAME. VAL is time+host+directory, or NULL if we are to say that EDITOR is *not* editing FILENAME. */ extern void editor_set (const char *filename, const char *editor, const char *val); /* Take note of the fact that FILE is up to date (this munges CVS/Base; processing of CVS/Entries is done separately). */ extern void mark_up_to_date (const char *file); void editors_output (const char *fullname, const char *them); void edit_file (void *data, List *ent_list, const char *short_pathname, const char *filename); | http://opensource.apple.com/source/cvs/cvs-42/cvs/src/edit.h | CC-MAIN-2016-26 | refinedweb | 235 | 50.67 |
This is the second in a series of three articles on building CRUD applications in Angular:
Part 1: From Zero to CRUD in Angular
Part 3: From Zero to CRUD in Angular
If you haven't already read the prior article, you should go back and do so. This article is going to add to the project created there. In the last article, you built a new Visual Studio project, added the files from the Angular Quick Start, and added a Product service to retrieve product data from a SQL Server table. This data was then displayed in an HTML table.
In this article, you'll add the appropriate HTML, Angular code, and Web API methods to allow the user to add, edit and, delete product data. To the Web API, you'll add POST, PUT, and DELETE methods, as well as a GET method to retrieve a single product. To the Angular product service, you'll add code to call each of these methods in response to user input.
Angular 4 Update
When I wrote the last article, the current version was Angular 2. Since then, Angular 4 has released. I included both the Angular 2 and 4 versions of this application in the download for the last article but I'll continue in this article using only the Angular 4 version. There aren't too many changes that must be made to upgrade from Angular 2 to Angular 4. Here are the changes I made:
- Installed TypeScript 2.2
- Created the project from scratch and downloaded the Angular Quick Start from.
- Followed the instructions for using Angular in Visual Studio located at.
- All files are now located under the \src folder from the root of the Visual Studio project.
- Eliminated the moduleId: module.id from all components.
That's all the changes that had to be made. Everything else works just as I explained in the last article.
Add an Add Button to the Product List Page
You need a way to get into an “add” mode in which a user may enter the data necessary to create a new product. You'll create the appropriate HTML (Figure 1) for this page soon, but first, let's create an Add button on the main product listing page to get to the product detail page.
Open the product-list.component.html file, and add the following HTML at the top of the page.
<div class="row"> <div class="col-xs-12"> <button class="btn btn-primary" (click)="add()"> Add New Product </button> </div> </div>
When you click on the Add New Product button, you want to route to the product detail page shown in Figure 1. In the click event for this button, a method in your Angular controller named Add is called. The Add function is going to use the Angular routing engine to redirect to the product detail page.
Update the Product List Component
Let's update the ProductListComponent class to perform this Add functionality. Open the product-list.component.ts file and add an import to allow you to use the routing service.
import { Router } from '@angular/router';
Locate the constructor in your ProductListComponent and add a second parameter to this constructor. This second parameter tells Angular to inject the Router service into the ProductListComponent.
constructor( private productService: ProductService, private router: Router) { }
Create a new function named Add in your ProductListComponent class. This function calls the Navigate method of the injected router service. You pass an array to this Navigate function. The first array element is the route, which must match a route you create in your routing component. The second parameter is the value you wish to pass to the next component. Later in this article, you use this product detail page to display an existing product, so you will pass a real product ID as the second element. For now, because you're just adding a new product, pass a minus one (-1).
add() { this.router.navigate(['/productDetail', -1]); }
Create Detail HTML
Create the product detail page shown in Figure 1. This detail page is used to both add and edit product data. Right mouse-click on the \src\app\product folder and select the Add > HTML page menu. Set the name to product-detail.component.html and click the OK button. The various input fields, shown in Figure 1, are placed into a bootstrap panel. Create the panel by first deleting all of the HTML in the new page you just added, and then typing in the following HTML.
<div class="panel panel-primary" * <div class="panel-heading"> <h1 class="panel-title"> Product Information </h1> </div> <div class="panel-body"> </div> <div class="panel-footer"> </div> </div>
The HTML above creates the panel control. Use the *ngIf directive to only display this panel once there's a valid product object. Now you just need to add a few more pieces of HTML within the body and the footer of the panel. Within the footer, add the Save and Cancel buttons. You haven't created the controller class to respond to the click events yet, but go ahead and add the appropriate method calls anyway.
<button class="btn btn-success" (click)="saveProduct()"> Save </button> <button class="btn btn-primary" (click)="goBack()"> Cancel </button>
It's possible that the user won't enter the correct data. Therefore, you need to display error messages to the user. Add an area just below the
<div class="row" * <div class="col-xs-12"> <div class="alert alert-warning"> <ul> <li * {{msg}} </li> </ul> </div> </div> </div>
Within the
It's finally time to create the product input fields. Just below the error message area, add the various input fields, as shown in Listing 1. Each input field is bound to a product property in the controller for this page. Use the ngModel directive to bind each property of the product object to each input field.
Create Product Detail Component
Now that you have the product detail page created, you need a component to go along with it. Right mouse-click on the \src\app\product folder and select Add > TypeScript file. Set the name to product-detail.component.ts. Add the code shown in Listing 2. Remember from the HTML you created that you need a product object and an array of messages. Because you wish to have a valid product object when the HTML is rendered, you implement the OnInit interface. In the ngOnInit method, you create a new instance of a Product class and fill in a couple of the properties with default values. You are going to add more code to this class later in this article.
Update Routing
Before you can navigate to the new product detail page, you need to inform the Angular routing service about this new detail component. Open the app-routing.module.ts file and add this new import statement at the top of this file:
import { ProductDetailComponent } from "./product/product-detail.component";
Add a new route object after the other routes you previously created. This new route object references the ProductDetailComponent. The path property is a little different because you want to pass a parameter named id to the ProductDetailComponent class. When you wish to add a new product, you aren't going to do anything with the value you're passing in, so just pass in a minus one. However, for editing, you'll pass in a valid product ID in order to retrieve the product record to edit.
const routes: Routes = [ { path: 'productList', component: ProductListComponent }, { path: 'productDetail/:id', component: ProductDetailComponent } ];
Update AppModule
In each input field in the product detail page, you reference the ngModel directive. However, you haven't told your Angular application that you need to use this directive. To register this directive, open the app.module.ts file and add an import statement for the FormsModule package. This package includes the ngModel directive.
import { FormsModule } from '@angular/forms';
While you are in this file, also add an import for your new ProductDetailComponent class you added.
import { ProductDetailComponent } from "./product/product-detail.component";
Add the FormsModule to the imports property on your NgModule decorator. Add the ProductDetailComponent to the declarations property on your NgModule decorator. Your NgModule decorator should now look like the following code.
@NgModule({ imports: [BrowserModule, AppRoutingModule, HttpModule, FormsModule], declarations: [AppComponent, ProductListComponent, ProductDetailComponent], bootstrap: [AppComponent], providers: [ProductService] })
Run the application, click on the Add New Product button, and the detail page appears. Nothing else works at this point, but verify that you can get to the detail page.
Handling Validation Exceptions on the Server
When you attempt to add or update a product, business rules can fail because the user didn't fill out the fields correctly. For instance, the product name is a required field, but if the user doesn't fill in a product name, the ProductName property gets passed to the server as a blank string. If you attempt to add this product to the database table, the code generated by the Entity Framework will raise a DbEntityValidationException exception.
If this type of exception is thrown, take the validation errors and bundle them into a ModelStateDictionary object. This dictionary object is passed back to the client by returning the BadRequest method with the dictionary object as the payload. To build the ModelStateDictionary object, you must iterate over the collection of validation errors contained in the DbEntityValidationException object.
Open the ProductController.cs file and add a using statement at the top of the file. One note on this using statement: The ModelStateDictionary used by the Web API is different from the one used by MVC controllers. Make sure you're using the ModelStateDictionary class from this namespace and not the one used by MVC.
using System.Web.Http.ModelBinding;
Next, add the method shown in Listing 3 to your ProductController class. This method is called from both the POST and PUT methods if any validation errors occur when adding or updating the product data.
Add a POST Method in the Controller
An HTTP POST verb is used to inform the server that the data sent from the client is to be added to your underlying data store. In the POST method, you're going to write in your ProductController class attempts to add the product data. However, either a validation exception or a database exception can be thrown. Therefore, it's important to wrap calls to the Entity Framework within a C# try/catch structure. Before writing the POST method, add a using statement so you can use the Product class from the Entity Framework.
using ProductApp.Models;
Write the POST method, shown in Listing 4, to accept a Product object from your Angular client code. The POST method creates a new instance of the Entity Framework's DbContext object, called ProductDB. The new product object is added to the Products collection, then the SaveChanges method is invoked. If the product is successfully added to the SQL Server table, return a status code of 201 by calling the Created method. If a validation exception occurs, then a status code of 400 is returned by calling BadRequest and passing in the ModelStateDictionary object created with the call to the ValidationErrorsToMessages method. If any other exception occurs, a status of 500 is returned to the client.
Modify the Product Service to Add a Product
Now that you have a POST Web API method to which you can send new product data, write the code in the ProductService class to call this POST method. Open the product.service.ts file and import two new classes; Headers and RequestOptions from the @angular/http library that you already have at the top of this file. These two new classes are needed to call the Web API method.
import { Http, Response, Headers, RequestOptions } from '@angular/http';
Add an addProduct method to this class to pass a new product object from the client to the POST method of the ProductController class. When you post data, as opposed to getting data, set the content type as JSON data. You do this by creating a new Headers object and setting the Content-Type property to application/json. Create a RequestOptions object and set the headers property to this new Headers object you created. Next, call the POST method on the HTTP service passing in the product object and the RequestOptions object.
addProduct(product: Product): Observable<Product> { let headers = new Headers({'Content-Type': 'application/json'}); let options = new RequestOptions( {headers: headers}); return this.http.post(this.url, product, options) .map(this.extractData) .catch(this.handleError); }
Check for Validation Errors
One of three things can happen when you call the POST method: one, the data will be successfully added to the back-end database table; two, a set of validation errors is returned via a 400 error; or three, you may get a general exception, in which case, a 500 error is sent back. When you wrote the handleError method, you handled a 404 and a 500 error, but you didn't account for a 400. Add a new case statement to handle a 400 in the handleError method.
case 400: // Model State Error let valErrors = error.json().modelState; for (var key in valErrors) { for (var i = 0; i < valErrors[key].length; i++) { errors.push(valErrors[key][i]); } } break;
In this new case statement, retrieve the modelState property and loop through all the key values and retrieve the message from the properties returned. Each of these messages is pushed onto the errors array, which is then sent back to the caller via the Observable.throw method.
Modify Product Detail Component
It's now time to call the addProduct method in the product service class by writing code in the ProductDetailComponent class. Add three new import statements at the top of the product-detail.component.ts file.
import { ActivatedRoute, Params } from '@angular/router'; import { Location } from '@angular/common'; import { ProductService } from "./product.service";
The ActivatedRoute and Params services are needed to work with the ID parameter you passed in to this component. The parameter is not used for the add method, but will be used shortly for updating a product. The Location service is used to navigate back from the detail page to the list page.
Add a constructor to the ProductDetailComponent class. This constructor is injected with the ProductService class you wrote. The ActivatedRoute and the Location services are injected by Angular as well.
constructor( private productService: ProductService, private route: ActivatedRoute, private location: Location ) { }
Add a method to allow the user to go back to the previous page if they click on the cancel button. This method is also going to be called if a product is successfully added.
goBack(){ this.location.back(); }
Add a method to your class named handleErrors. This method is called if the call to the addProduct in the Product Service fails. In this method, you loop through the string array of errors and add them to the messages property.
private handleErrors(errors: any) { this.messages = []; for (let msg of errors) { this.messages.push(msg); } }
There are three methods you're eventually going to need in this component: saveProduct, addProduct, and updateProduct. You're going to write the updateProduct method soon in this article, but for now, go ahead and add a stub for the function.
private updateProduct(product: Product) { }
The addProduct method is responsible for calling the addProduct method you just created in the ProductService class. As you can see, if this call is successful, the goBack method is called in order to return to the product list page so you can see that the new product has been added.
private addProduct(product: Product) { this.productService.addProduct(product) .subscribe(() => this.goBack(), errors => this.handleErrors(errors)); }
The saveProduct method is called from the HTML button you added earlier on the product detail page. This method checks to see if the productId property of the product object is null or not. If this value is not null, then the updateProduct method is called. If this value is null, then call the addProduct method.
saveProduct() { if (this.product) { if (this.product.productId) { this.updateProduct(this.product); } else { this.addProduct(this.product); } } }
See the Validation Errors
Run the application and click on the Add New Product button. Immediately click on the Save button and you should see a set of validation errors appear on the screen, as shown in Figure 2. NOTE: Since I wrote the last article, I decided it would be better to have all fields on the Product table defined as NOT NULL instead of just the ProductName field. Please make the appropriate adjustments on your Product table.
Add a New Product
Now, go ahead and add some good data for the product. Click the Save button and you should be redirected back to the list page where you'll see the new product you just added within the list.
Get a Single Product
Now that the add functionality is working, you'll add the ability for the user to update a product. To update a product, you must first retrieve all of the product data from the server. Add an Edit button on each row of the product HTML table, as shown in Figure 3. When the user clicks on this Edit button, call the Web API on the server to retrieve the full product record for editing. This ensures that you're getting the latest product data.
Add GET to Controller
To retrieve a single product, add a GET method to your ProductController class. This GET method is different from the other one in this class in that it accepts a product ID of the product you wish to retrieve. Open your ProductController.cs file and add the new GET method shown in Listing 5.
Add GET to Angular Product Service
Now that you have the Web API method created, write a getProduct method in the ProductService component. Open the product.service.ts file and add the following method.
getProduct(id: number): Observable<Product> { let url = this.url + "/" + id; return this.http.get(url) .map(response => response.json() as Product) .catch(this.handleError); }
This method builds a URL that looks like the following: api/productApi/2. The number 2 on the end is what gets passed to the ID parameter in the GET method in your ProductController.
Add a Select Button to HTML Table
As you saw in Figure 3, you need an Edit column on your HTML table. Open the product-list.component.html file and insert a newelement and insert a new tag as well. Add a button with a click event that calls a method in your ProductListComponent class. Pass the current product ID in the table to this method.
<td> <button class="btn btn-default btn-sm" (click)="deleteProduct(product.productId)"> <i class="glyphicon glyphicon-trash"> </i> </button> </td>
Add a DELETE Method in the List Component
Now that you have a button to call a deleteProduct method, go ahead and add that method. Open the product-list.component.ts file and add the code shown below.
deleteProduct(id: number) { if (confirm("Delete this product?")) { this.productService.deleteProduct(id) .subscribe(() => this.getProducts(), errors => this.handleErrors(errors)); } }
This method first confirms with the user that they really wish to delete this product. If they respond affirmatively, the deleteProduct method on the Angular product service is called. If the deletion is successful, the getProducts method is called to refresh the collection of products from the server and redisplay the list of products.
Summary
In this article, you added a detail page to add new, or modify existing, product data. A new route was added to navigate to this detail page. A new component was created to handle the processing of new and existing product data. You also created POST, PUT, and DELETE methods in your Web API controller. The appropriate code to handle all this modification of product data was added to the Angular product service and component classes. You also saw how to handle validation errors returned from the server. In the next article, you'll learn to validate product data on the client-side using Angular.
Listing 1: Create the input fields and bind them to properties using the ngModel directive.
<div class="form-group"> <label for="productName">Product Name</label> <input id="productName" type="text" class="form-control" autofocus="autofocus" placeholder="Enter the Product Name" title="Enter the Product Name" [(ngModel)]="product.productName" /> </div> <div class="form-group"> <label for="introductionDate"> Introduction Date </label> <input id="introductionDate" type="text" class="form-control" placeholder="Enter the Introduction Date" title="Enter the Introduction Date" [(ngModel)]="product.introductionDate" /> </div> <div class="form-group"> <label for="price">Price</label> <input id="price" type="number" class="form-control" placeholder="Enter the Price" title="Enter the Price" [(ngModel)]="product.price" /> </div> <div class="form-group"> <label for="url">URL</label> <input id="url" type="url" class="form-control" placeholder="Enter the URL" title="Enter the URL" [(ngModel)]="product.url" /> </div>
Listing 2: The start of the product detail component.
import { Component, OnInit } from "@angular/core"; import { Product } from "./product"; @Component({ templateUrl: "./product-detail.component.html" }) export class ProductDetailComponent implements OnInit { product: Product; messages: string[] = []; ngOnInit() { this.product = new Product(); this.product.price = 1; this.product.</a>"; } }
Listing 3: Add a method to convert validation errors into a Model State Dictionary.
protected ModelStateDictionary ValidationErrorsToMessages( DbEntityValidationException ex) { ModelStateDictionary ret = new ModelStateDictionary(); foreach (DbEntityValidationResult result in ex.EntityValidationErrors) { foreach (DbValidationError item in result.ValidationErrors) { ret.AddModelError(item.PropertyName, item.ErrorMessage); } } return ret; }
Listing SEQ Listing * ARABIC 4: The Post method adds a new product or returns a set of validation errors.
[HttpPost] public IHttpActionResult Post(Product product) { IHttpActionResult ret = null; ProductDB db = null; try { db = new ProductDB(); // Insert the new entity db.Products.Add(product); db.SaveChanges(); ret = Created<Product>(Request.RequestUri + product.ProductId.ToString(), product); } catch (DbEntityValidationException ex) { ret = BadRequest( ValidationErrorsToMessages(ex)); } catch (Exception ex) { ret = InternalServerError(ex); } return ret; }
Listing 5: Use the Find method to locate a specific product based on the primary key passed into the Get method.
[HttpGet] public IHttpActionResult Get(int id) { IHttpActionResult ret; ProductDB db = new ProductDB(); Product product = new Product(); product = db.Products.Find(id); if (product != null) { ret = Ok(product); } else { ret = NotFound(); } return ret; }
Listing 6: Modify the ngOnInit method to retrieve a specific product from the server.
ngOnInit() { this.route.params.forEach((params: Params) => { if (params['id'] !== undefined) { if (params['id'] != "-1") { this.productService.getProduct( params['id']) .subscribe(product => this.product = product, errors => this.handleErrors(errors)); } else { this.product = new Product(); this.product.price = 1; this.product.</a>"; } } }); }
Listing 7: The Put method allows you to update a product.
[HttpPut()] public IHttpActionResult Put(int id, Product product) { IHttpActionResult ret = null; ProductDB db = null; try { db = new ProductDB(); // Update the entity db.Entry(product).State = EntityState.Modified; db.SaveChanges(); ret = Ok(product); } catch (DbEntityValidationException ex) { ret = BadRequest( ValidationErrorsToMessages(ex)); } catch (Exception ex) { ret = InternalServerError(ex); } return ret; }
Listing 8: The Delete method first locates the product to delete, then removes it from the database.
[HttpDelete()] public IHttpActionResult Delete(int id) { IHttpActionResult ret = null; ProductDB db = null; try { db = new ProductDB(); // Get the product Product product = db.Products.Find(id); // Delete the product db.Products.Remove(product); db.SaveChanges(); ret = Ok(product); } catch (Exception ex) { ret = InternalServerError(ex); } return ret; } | https://www.codemag.com/Article/1707021/From-Zero-to-CRUD-in-Angular-Part-2 | CC-MAIN-2020-40 | refinedweb | 3,879 | 55.95 |
The QPushButton widget provides a command button. More...
#include <QPushButton>
Inherits QAbstractButton.:.
Initialize option with the values from this QPushButton. This method is useful for subclasses when they need a QStyleOptionButton, but don't want to fill in all the information themselves.
See also QStyleOption::initFrom().
Returns the button's associated popup menu or 0 if no popup menu has been set.
See also setMenu().
Associates the popup menu menu with this push button. This turns the button into a menu button, which in some styles will produce a small triangle to the right of the button's text.
Ownership of the menu is not transferred to the push button.
See also menu().
Shows (pops up) the associated popup menu. If there is no such menu, this function does nothing. This function does not return until the popup menu has been closed by the user. | https://doc.qt.io/archives/4.3/qpushbutton.html | CC-MAIN-2019-26 | refinedweb | 145 | 69.68 |
In this article I would like to explore the Facade design pattern. The actual
pronunciation is fu'saad. The dictionary meaning of the word Façade.
The series of activities looks like:
SqlConnection
connection = new
SqlConnection("connection string" );
SqlCommand command =
new SqlCommand();
command.Connection = connection;
command.CommandText = "UPDATE Customer SET
Processed=1";
command.ExecuteNonQuery();
You need to repeat the same code wherever execute queries are required. How to
make this code better? Definition
"Provide a unified interface to a set of interfaces in a system. Facade defines
a higher-level interface that makes the subsystem easier to use." Implementation
Using Façade, we can improve the situation. We can find only the query is
different in each case – the parameters like connection string is common for the
entire application.
SQLFacade.ExecuteSQL("UPDATE
query here..");
After using the Façade the code will look like above.
The complicated code is being pulled to the background SQLFacade class. namespace
FacadePattern
{
public class SQLFacade {
public static bool ExecuteSQL(string
sql)
{
SqlConnection connection =
new SqlConnection("connection
string");
SqlCommand command =
new SqlCommand();
command.Connection = connection;
command.CommandText = sql;
command.ExecuteNonQuery();
return true;
}
}
}
Façade pattern makes code better as depicted in the image below:
Summary
In this article we have seen how to use the Façade pattern to improve our code.
It is similar to the use of reusable functions and it pulls out the
complications and provides an easier interface for use. The associated code
contains the classes we have discussed.
Resources
Here are some useful related resources:
Facade Pattern
Singleton Design Pattern
Thank You Sujith for the good words..
Your comment too makes it the most commented article of mine. Thanks :)
Hi
Very nice and simple article on FACADE design pattern
Regards
Sujeet
Hei Vineet.. Thanks for the Good Words Yaar.
I hope to be better with patterns n' examples.
Hi Akash.. Thanks for the comments..
I believe already we are using Facade without knowing the pattern name.
Thank You Arjun for the good words | http://www.c-sharpcorner.com/UploadFile/40e97e/facade-pattern/ | crawl-003 | refinedweb | 334 | 51.95 |
Section 9.2 Type Checking
We have discussed the static semantics of C0 in Section 6.6. The static semantics for expressions is modelled as a relation that relates a type environment with expressions and a type. The static semantics for statements relates the type environment with statements. The static expression semantics is deterministic (or functional) which means that each type environment and each expression are related to exactly one type. This means that we can easily implement it as a method (in the interface
Expression) that takes a type environment (here implemented by a class
Scope which we will discuss below), an expression and delivers a type. Statements can be handled similarly: we implement the static statement semantics as a method (in the interface
Statement) that takes a type environment as a parameter and check if a statement is well-typed. Both methods generate an error (using a
Diagnostic object) that is shown to the user if the statement/expression is not well-typed.
public interface Expression { Type checkType(Diagnostic d, Scope d); }
The following code shows how type checking looks like for a simple add expression. It effectively implements rule [Arith] in Section 6.6. The code determines the types of the sub-expressions and computes the type of the expression based on this information.
public class Arith implements Expression, Locatable { // ... public Type checkType(Diagnostic d, Scope s) { Type l = getLeft().checkType(d, s); Type r = getRight().checkType(d, s); if (!l.isIntType()) d.printError(this, "..."); if (!r.isIntType()) d.printError(this, "..."); return Types.getIntType(); } }
Scope.
The type environment is implemented by the class
Scope that maps an identifier to the AST node of its declaration in the current scope. Scope objects are created during the type checking process: Whenever we enter a new block, we create a new scope to collect the variable declarations in that scope (see rule [Block] in Section 6.6). To access the declarations of variables in outer scopes, a scope also links to its parent scope via a reference, making it effectively a stack of scopes. When type checking for this block is finished, we can pop the scope of the block from the scope stack. This corresponds naturally to the scope nesting of the programming language. The following code shows the handling of scopes when type checking a block:
public class Block implements Statement, Locatable { private final <Statement> body; public void checkType(Diagnostic d, Scope parent) { Scope scope = parent.newNestedScope(); // local variables are added by declaration statements // in the statement list that constitutes the block's body for (Statement s : body) s.checkType(d, scope); } }
When looking up the type of an identifier we have to consult the top scope if it has a declaration for that identifier. If so, we return it, if not, we look in the parent scope and repeat the process until we reach the root scope. If we do not find a definition for the identifier there, we have to signal an error because it means that an identifier is used without being declared.
{ // Scope 1 int x; int y; x = 1; y = 1; { // Scope 2 int z; z = x + y; } { // Scope 3 int y; y = x + 1; } }
The following code shows a sample implementation for the class
Scope.
public class Scope { private final Map<String, Declaration> table; private final Scope parent; public Scope() { this(null); } private Scope(Scope parent) { this.parent = parent; this.table = new HashMap<String, Declaration>(); } public Scope newNestedScope() { return new Scope(this); } public void add(String id, Declaration d) throws IdAlreadyDeclared { if (table.contains(id)) throw new IdAlreadyDeclared(id); table.put(id, d); } public Declaration lookup(String id) throws IdUndeclared { // ... } }
When visiting an AST node during type checking, it is advisable to store references to the significant declaration in the AST node that represents the using occurrence of an identifier. By this reference, the type can later on be looked up again. This may be important for successive passes like code generation which we discuss in the next section. In general, the AST node of the declaration stands for the variable itself and the compiler may want to associate other information with variables in later stages. The following code shows an example implementation of the AST node that represents the using occurrence of a variable.
public class Var implements Expression, Locatable { private String id; private Declaration decl = null; public Type checkType(Diagnostic d, Scope s) { try { this.decl = s.lookup(id); return this.decl.getType(); } catch (IdUndeclared e) { d.printError(this, "Variable " + id + " not declared"); return Types.getErrorType(); } } } | https://prog2.de/book/sec-comp-tychk.html | CC-MAIN-2022-33 | refinedweb | 756 | 53.1 |
PCAP_SNAPSHOT(3PCAP) PCAP_SNAPSHOT(3PCAP)
NAME
pcap_stats - get capture statistics
SYNOPSIS
#include <pcap/pcap.h> int pcap_stats(pcap_t *p, struct pcap_stat *ps);
DESCRIPTION
pcap_stats() fills in the pcap_stat structure pointed to by its second argument. The values represent packet statistics from the start of the run to the time of the call. pcap_stats() is supported only on live captures, not on ``savefiles''; no statistics are stored in ``savefiles'', so no statistics are avail- able when reading from a ``savefile''.
RETURN VALUE
pcap_stats() returns 0 on success and returns -1 if there is an error or the p doesn't support packet statistics. If -1 is returned, pcap_geterr() or pcap_perror() may be called with p as an argument to fetch or display the error text.
SEE ALSO
pcap(3), pcap_geterr(3) 5 April 2008 PCAP_SNAPSHOT(3PCAP)
libpcap 1.0.0 - Generated Thu Oct 30 20:28:26 CDT 2008 | http://manpagez.com/man/3/pcap_stats/ | CC-MAIN-2018-34 | refinedweb | 147 | 54.42 |
Now blogging at SteveSmithBlog.com
Ads Via DevMavens
One very cool feature of Visual Studio 2005 is that you no longer need to build your controls into separate assemblies. You still *can* and of course if you're a control vendor or if you want to share them, you probably should, but if you just need a control in a single web application, you can drop the class into the /App_Code/ folder and immediately use it on your pages. The even cooler part is you don't have to add a <%@ Register %> directive! The “trick” to get this work is in the web.config file. Add this section:
<system.web> <pages> <controls> <add tagPrefix="ss" namespace="DateControls" /> </controls> </pages>...</system.web>
Once you have this, you can go to any of your pages and reference the control like so:
<ss:MonthDropDownList
Thanks to ScottW for reminding me how this works today...
If you would like to receive an email when updates are made to this post, please register here
RSS | http://aspadvice.com/blogs/ssmith/archive/2005/04/20/1860.aspx | crawl-002 | refinedweb | 170 | 70.33 |
Moving from Procedural to Object-Oriented Development
Methods
As you learned earlier, methods implement the required behavior of a class. Every object instantiated from this class has these methods. Methods may implement behaviors that are called from other objects (for example, messages) or provide internal behavior of the class.
Internal behaviors are private methods that are not accessible by other objects. In the Person class, the behaviors are getName(), setName(), getAddress(), and setAddress(). These methods allow other objects to inspect and change the values of the object's attributes. This is common design in OO systems. In all cases, access to attributes within an object should be controlled by the object no other object should directly change an attribute of another.
Messages
public class Payroll{ String name; Person p = new Person(); String = p.setName("Joe"); ... code String = p.getName(); }
In this example (assuming that a Payroll object is instantiated), the Payroll object is sending a message to a Person object, with the purpose of retrieving the name via the getName() method. Again, don't worry too much about the actual code, as we are really interested in the concepts. We will address the code in detail as we progress through the series.
Using UML to Model a Class Diagram
Over the years, many tools and models have been developed to assist in designing classes. The most popular tool today is UML. Although it is beyond the scope of this series to describe UML in fine detail, we will use UML class diagrams to illustrate the classes that we build. In fact, we have already used a class diagram. Figure 8 shows the Person class diagram we discussed earlier.
Figure 8 - The Person class diagram.
Again, notice that the attributes and methods are separated (the attributes on the top, and the methods on the bottom). As we delve more deeply into OO design, these class diagrams will get much more sophisticated and convey much more information on how the different classes interact with each other.
Encapsulation
One of the primary advantages of using objects is that the object need not reveal all its attributes and behaviors. In good OO design (at least what is generally accepted as good), an object should reveal only the interfaces needed to interact with it.
Details not pertinent to the use of the object should be hidden from other objects. This is called encapsulation. For example, an object that calculates the square of a number must provide an interface to obtain the result. However, the internal attributes and algorithms used to calculate the square need not be made available to the requesting object. Robust classes are designed with encapsulation in mind. In the next sections, we cover the concepts of interface and implementation, which are the basis of encapsulation.
Interfaces
As discussed earlier,.
Let's look at the example just mentioned: calculating the square of a number. In this example, the interface would consist of two pieces:
- How to instantiate a Square object
- How to send a value to the object and get the square of that value in return
Interfaces do not normally include attributes, only methods. If a user needs access to an attribute, a method is created to return the attribute (a getter). If a user wants the value of an attribute, a method is called that returns the value of the attribute. In this way, the object that contains the attribute controls access to it. This is of vital importance, especially in testing and maintenance.
If you control the access to the attribute, when a problem arises, you do not have to worry about tracking down every piece of code that might have changed the attribute. It can only be changed in one place (the setter).
Implementations
Only the public attributes and methods are considered the interface. The user should not see any part of the implementation interacting with an object solely through class interfaces. In the previous example, for instance the Employee class, only the attributes were hidden. In many cases, there will be methods that also should be hidden and thus not be part of the interface. Continuing the example of the square root from the previous section, the user does not care how the square root is calculated as long as it is the correct answer. Thus, the implementation can change and it will not affect the user's code.
A real-world example of the interface/implementation paradigm
Figure 9 9.
Figure 9 - Power plant example.
A Java example of the interface/implementation paradigm
Let's explore the Square class further. Assume that you are writing a class that calculates the squares of integers. You must provide a separate interface and implementation. That is, you must provide a way for the user to invoke and obtain the square value. You must also provide the implementation that calculates the square; however, the user should not know anything about the specific implementation. Figure 10 shows one way to do this. Note that in the class diagram, the plus sign (+) designates public and the minus sign (-) designates private.
Thus, you can identify the interface by the methods, prefaced with plus signs.
Figure 10 - The square class.
This class diagram corresponds to the following code:
public class IntSquare { // private attribute private int squareValue; // public interface public int getSquare (int value) { SquareValue =calculateSquare(value); return squareValue; } // private implementation private int calculateSquare (int value) { return value*value; } }
Note that the only part of the class that the user has access to is the public method getSquare(), which is the interface. The implementation of the square algorithm is in the method calculateSquare(), which is private. Also notice that the attribute SquareValue is private because users do not need to know that this attribute exists. Therefore, we have hidden the part of the implementation: The object only reveals the interfaces the user needs to interact with it, and details that are not pertinent to the use of the object are hidden from other objects.
If the implementation were to change—say, you wanted to use Java's built-in square function—you would not need to change the interface. The user would get the same functionality, but the implementation would have changed. This is very important when you're writing code that deals with data; for example, you can move data from a file to a database without forcing the user to change any application code.
Conclusion
At this point we are about 2/3 of the way into our discussion on basic object-oriented concepts. Next month, we will conclude the introduction phase of this series and cover the important topics of inheritance, polymorphism, and composition._3<< | https://www.developer.com/lang/article.php/10924_3317571_3/Moving-from-Procedural-to-Object-Oriented-Development.htm | CC-MAIN-2018-09 | refinedweb | 1,112 | 54.22 |
Jim Hill wrote: > Fredrik Lundh wrote: > >>Scott David Daniels wrote: >> >> >>>And if you enjoy building insecure stuff, try: >>> >>> def fix(text, globals_=None, locals=None, quote='"'): >>> d = (globals_ or locals or globals()).copy() >>> source = text.split(quote) >>> source[1::2] = (str(eval(expr, d, locals or d)) [fixing the !#@!$ tab that snuck in there] >>> for expr in source[1::2]) >>> return ''.join(source) >> >>And if you prefer not to type so much: >> >>def I(*args): return "".join(map(str, args)) >>def F(v, fmt): return ("%" + fmt) % v > > > Not that I don't appreciate the suggestions from masters of the Python > universe, but the reason I'm switching to Python from Perl is for the > readability. What you fells are suggesting might as well be riddled > with dollar signs and semicolons... <emoticon>. Really what I was trying to showing you was that you could use eval and do arbitrary expressions using sneaky slicing. v[1::2] is every odd position, and it can be used for replacement. v[1::2] = [str(eval(txt)) for txt in v[1::2]] Is likely what you want; it got ugly when I lifted it into a function. The cute trick is to use a quote delimiter and observe that the odd positions are the quoted text. --Scott David Daniels Scott.Daniels at Acm.Org | https://mail.python.org/pipermail/python-list/2004-December/287137.html | CC-MAIN-2014-10 | refinedweb | 220 | 73.47 |
When I was looking at this problem a couple of years ago, looked pretty interesting.
I don’t remember if we got it working or not though.
When I was looking at this problem a couple of years ago, looked pretty interesting.
I don’t remember if we got it working or not though.
Good thinking.
tools
Nope.
I think for the docs we’ll simply create a separate folder for build artifacts, and have a script that puts built versions of them in there. We can have a separate gitignore in that directory to turn off the filter. Sound OK?
@stas I tried to use the tool but found no nbstripout.py. Was this file defined by yourself or should it come integrated in the package?
Saw the pip install, ran the commands but not sure how to run @stas 's test.
Used:
cat 002_images.ipynb | /home/user/anaconda3/envs/fastai/bin/nbstripout > OUT.ipynb
but OUT.ipynb and 002_images.ipynb are identical (which means nbstripout did not run properly).
How can I make nbstripout actually run over 002_images.ipynb? I did not find any Python file after the install.
Just a suggestion.
Save notebooks
as html with outputs
as notebooks without outputs
in the git
once you install nbstripout, it drops .py. I was just adjusting the tool, so I invoked its original .py version directly
The
cat is there to avoid overwriting, as when you call
nbstripout file.ipynb it overwrites the original.
Give me a little bit of time, I’m going to make the modified version to be part of fastai_v1 repo and then try again.
Well, if they are all autogenerated by a function, it should be trivial to insert a hidden html markup at the beginning of each output cell, say and then instrument nbstripout to not delete any output cells starting with this tag.
from IPython.display import Markdown, display def show_doc_from_name(...): #mark the cell as unique: display(Markdown("<fastaidoc />")) the rest of the code
An alternative approach would be to tap into the ipython API and use the cell’s metadata entry to put a special flag that it’s a docstring, and then make nbstripout not strip out such outputs (and keep the metadata entry for such cells as well). I haven’t yet looked at how to approach that from ipython API, but it should be doable as ‘Collapse Headers’ plugin does that. So we would have under git:
{ "cell_type": "code", "execution_count": null, "metadata": { "docstring": "true" }, "source": [ "show_doc_from_name(...)"), "outputs": ["docstrings"] }
edit: I checked how to do this, need to add
metadata arg to
display, e.g.:
display("great document string here", metadata={"docstring":"true"}) print("some more docs")
now we can change fastai-nbstripout to keep those output cells in, by looking for the special metadata.
The metadata will be only set in the first part of the output cell, but it should be enough to keep the rest.
Alternatively, this metadata can be set in every call for output line that we want under git. So it’ll keep only the output line that have that metadata set. i.e. it’d require:
display("great document string here", metadata={"docstring":"true"}) display("some more docs", metadata={"docstring":"true"})
note: all display_(png|svg|html|etc) support metadata arg.
I think for the docs we’ll simply create a separate folder for build artifacts, and have a script that puts built versions of them in there. We can have a separate gitignore in that directory to turn off the filter. Sound OK?
If you want them in a separate dir, that’s OK too, but you’ll be still having a collision issue with a bunch of committed cells like counts, and metadata, which are useless. So it’d still be very beneficial to strip unnecessary cells.
The way I see it now, either
Please ‘git pull’ your repository and if had nbstripout configured already, please remove nbstripout hooks by running from within the repo:
nbstripout --uninstall
now we will be using our own version of the tool.
Changes added:
tools/fastai-nbstripoutwhich strips out what nbstripout does, plus other noisy (nb and cell-level) metadata.
tools/fastai-nbstripoutat diff/commit (added
.gitconfigand
.gitattributes).
docs/dev.mdfor developer notes, and there add instructions to what needs to be done by developers to make
tools/fastai-nbstripoutwork behind the scenes.
Important! Please see:
Unfortunately, git’s security prevents us from having this process fully automated and requires one extra command run on checkout to tell git to trust the local .gitconfig brought from the git repo. Full details are in the link above. tl;dr version, run once per checkout:
cd myrepo git config --local include.path '../.gitconfig'
Please let me know if this causes any problems to anyone, or if you can think of an even more automated way.
And a request to all who merge PRs to point future contributors to the link above so that they submit clean diffs. Thank you!
It’d have been nice to be able to add a note above with instructions to creating PRs. But it doesn’t seem to be possible. I think we will eventually have easy to follow dev docs.
@lesscomfortable, please let me know whether this now works for you (i.e. the custom tools/fastai-nbstripout).
@jeremy, could you please rerun .ipynb’s through the latest incarnation of tools/fastai-nbstripout - as it’ll now strip other metadata that nbstripout didn’t handle before. I’m asking you so not to step on anybody’s toes - you know which files are “safe” to commit. Thank you.
Also if you have good tips/processes relevant for fastai_v1 dev that could go into docs/dev.md please share (but please start another thread, so that we could focus on clean commit/merge process here. Thank you!)
Hi Stas, tried it and it works (Out is 1053 lines long and 002_images.ipynb is 1191 lines long).
Do you want me to check if some specific metadata is correctly filtered out?
Awesome!
At the moment there should be no cell-level metadata remain at all. There are a few remaining entries in the nb-level metadata (very end of the notebook) - I believe those will be identical to all users.
If you find anything else that’s transient in nature and that is not needed to be under git please let me know.
For the docs notebooks, it’s only the auto-generated cells that we’ll need to keep the output of, but every cell added by the person writing the doc: the idea is too include pictures, examples of codes, more markdown, links to video etc… And we’ll need all the outputs since it’s what gets converted into html (most of the times, the input will be hidden).
The tools to auto-generate the documentation will also auto-execute the notebooks: we use a function to get the doc strings of the classes/functions because it might change (or the number of arguments can change) since the last time the user wrote the corresponding notebook.
So not sure how the stripped out can work around this.
Thank you providing more input, @sgugger
Well, then as I suggested above, use a different .gitconfig in each sub-dir and add a flag to
tools/fastai-nbstripout to handle #2:
1. strip out all but source cells for code notebooks 2. keep source and output cells for docs notebooks
it’ll still be much better for group collaboration to strip out everything else, so the docs notebooks, in addition to
source cells, will have
outputs in git from the get going.
There is a bit of metadata too that we’ll need to keep for the doc notebooks since the extension to hide input cells works by adding a flag hide_input:true in the metadata of the cell. I think that’s all, but we’ll know for sure when I’ve finished developing the whole thing (hopefully today).
When it’s ready please send me (1) a docs notebook that I can test with (2) example(s) of a stripped out cell that will have all the parts that need to be kept and I will work on adjusting tools/fastai-nbstripout to support the special needs of docs notebooks. That’s is if we agree on having those in a separate folder. Thanks.
The three notebooks in docs named fastai_v1… are samples of what the doc notebooks would look like. It’s best to see them with the nbextension hidden cells activated.
In each cell the attributes ‘source’ and ‘outputs’ should be in untouched, and in metadata, the attributes ‘hide_input’ and sometimes ‘trusted’ (this one will change for the security reasons but I’d like to keep it for now, as I’m trying to figure out if there is a way to have a fastai_v1 signature that would, once properly set up, automatically trust those notebooks) should be left too. The rest (which is only ‘execution_count’ or other fields in the metadata) can be removed.
Hope that’s clear enough.
The process to strip the notebooks automatically is really smooth and I’ll probably ask for your help once the doc scripts are all finished to do something similar with them (to automatically execute the notebook and convert it to html).
@stas many thanks. I’ve stripped all the notebooks in dev_nb. I’ve also moved the info from
doc/dev.md to
CONTRIBUTING.md, since I think that’s the standard github path and is shown on their UI:
Thank you, Jeremy, for running this and relocating the instructions into a better location!
Going to work on docs notebooks stripping now, thanks to @sgugger’s input.
I made the changes to fastai-nbstripout according to your instructions and added the git instrumentation to activate the docs mode.
I think you will need to force a run on the previously committed files for the changes to apply.
FYI after this commit docs/.gitattributes overrides the repo-global .gitattributes for *.ipynb files found under docs/.
Currently all outputs cell’s metada is preserved. If you see anything that can be dropped, or needs to be kept please let me know.
The process to strip the notebooks automatically is really smooth and I’ll probably ask for your help once the doc scripts are all finished to do something similar with them (to automatically execute the notebook and convert it to html).
Thank you for confirming that it works well, @sgugger. Yes, if you need anything give me a shout. | http://forums.fast.ai/t/git-an-easier-jupyter-notebook-modification-commit-process/20355?page=2 | CC-MAIN-2018-34 | refinedweb | 1,771 | 70.94 |
Problem
In a previous tip we learned how to import JSON files into SQL Server using SSIS. However, I have been supplied a JSON file which contains multiple nested JSON objects. Hence, I would like to learn how to import nested JSON objects into SQL Server using SQL Server Integration Services.
Solution
In this tip, I will show how to import two nested JSON object files into SSIS using a couple of examples.
Import JSON File into SQL Server – Example #1
The below image represents a simple JSON object which contains a nested JSON object "links". The "links" JSON object has 5 attributes namely "self", "first", "last", "next" and "prev".
It is observed that the attribute "prev" has a value of "null". In JSON null or no values are expressed as "null". This example is the same as the “Orders” JSON file mentioned in the previous tip. Hence, we will be following similar procedures to load the file.
As a first step, let’s create a data flow task and add a script component to source the JSON file. Once this is done, let’s add the output columns for the JSON object.
The columns "First", "Last", "Next", "Prev" and "Self" have been added as output columns with the datatype string.
Now we need to define a class to store the value of "Links" object at runtime. If you observed very closely on the JSON viewer, it is evident that there is a root level JSON object.
The root level JSON object contains the "links" JSON object. As there are two objects, we need to create two classes. A "LinkSubItem" class will be defined to store the value of "links" attributes.
A petition class will be defined to represent the root level JSON object. The petition class will have a property to store the values of links. In this way, an object of type Petition will store the root level object with "links" as an inner most object. Hence once the JSON file has been loaded and stored as a petition object, we can access all the properties using .Net libraries.
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace PetitionNamespace { class Petition { public LinkSubItem links { get; set; } } public class LinkSubItem { public string self { get; set; } public string first { get; set; } public string last { get; set; } public string next { get; set; } public string prev { get; set; } } }
The above class definition will help you to create the class for Petition.
Deserialization
We have learned about deserialization in the last tip. Now we can deserialize and load the JSON file into an object of type petition. As there is only one JSON object at the root we don’t need to define an array. A simple definition of petition object is enough. Once the petition object has been created we can access the inner most object “links” using the notation "petition.links".
Hence the attributes of link object can be accessed by "petition.links.self". The below mentioned script will help you to deserialize the JSON object.(); Petition petition = js.Deserialize<Petition>(jsonFileContent); Output0Buffer.AddRow(); Output0Buffer.self = petition.links.self; Output0Buffer.First = petition.links.first; Output0Buffer.Last = petition.links.last; Output0Buffer.Next = petition.links.next; Output0Buffer.prev = petition.links.prev; }
After the successful execution of the package, we can see a record in the data pipe line as shown in the below picture.
Import JSON File into SQL Server – Example #2
In example #1, we had a quick look at a simple example for a nested JSON document. Now let’s have a look at complex example on the nested JSON file.
In this example, at root level we have a single object "data". The data object contains the value as array and it has two petition objects. Each petition object has three attributes namely type, id and links. Both the attributes type and id are of string datatype and the links attribute is of type object. The JSON object viewer in the below image represents the object structure.
Overall, we have three property details to collect from the supplied JSON and they are "id", "type" and "Link". So, let’s create output columns for these attributes with the data type "string" as mentioned in the below picture.
By closely observing the JSON viewer, we can see that there are three objects in the supplied JSON document. The inner most object is "links" where it has the property "self" which holds the actual link for the petition. The next level in the hierarchy is data array item. An item has three attributes and they are "type", "id" and "links". A data item is a root level object that holds the array of “datasubitems” as its property.
To deserialize this JSON, we need three classes. The inner most class "SelfLink" represents the "links" item. The “DataSubItem” represents each item in the array. It is observed that the “datasubitem” has a property "links" which will return the object of type "Selflink" which contains the link details.
Finally, the “dataitem” class represents the root level object in the JSON file. This root object has a property "data" which will return a collection of subitem. class Selflink { public string self { get; set; } } }
Once the class has been defined, we can deserialize the JSON file and store the file content of type "DataItem". The “dataitem” object contains a collection of “datasubitmes”. Hence, we need to iterate thru “datasubitems” to collect the details of the attributes. This is achieved by using the foreach loop construct in C#. As the attributes "type", "id" are the simple properties they can be extracted directly from the “datasubitem”. However, the “datasubitem” has "links" object as its attribute. Hence, we need to extract the "links" object from “datasubitem” to collect the "self" link.
The below script will help you to deserialize and extract all the data contents of the JSON file.; } }
Summary
In this tip, we learned about importing nested JSON data files using SQL Server Integration Services. Also, we have learned about deserializing nested JSON into JSON runtime objects.
Next Steps
Last Update:
2018-04-10
About the author
Nat Sundar is working as an independent SQL BI consultant in the UK with a Bachelors Degree in Engineering. | http://ugurak.net/index.php/2018/05/08/import-nested-json-files-to-sql-server-with-ssis/ | CC-MAIN-2018-47 | refinedweb | 1,040 | 64.41 |
Is there a standard/accepted/useful way of setting up a package such that other packages can be inserted into the (namespace of) the original package. For example suppose I have a package that has functionality to work with some cool data. Then I might want to have separate packages for displaying that data in Matplotlib and ..
I need to filter Titles by genre field, I tried to use DjangoFilterBackend, but it didn’t work, I don’t know, how to create a CustomSearchFilter? Views: class GenreViewSet(ModelCVDViewSet): queryset = Genre.objects.all() serializer_class = GenreSerializer permission_classes = (IsAdminOrReadOnly,) filter_backends = (DjangoFilterBackend, filters.SearchFilter) filterset_fields = (‘name’, ‘slug’) search_fields = (‘name’, ‘slug’) lookup_field = ‘slug’ class TitleViewSet(viewsets.ModelViewSet): queryset .. ..
I am a beginner in DevOps and a noob at programming. I have been assigned a task to autostart a group of instances with a specific sequence. Checking the health of its Linux services before starting the next one. I found an auto stop and start python script that can be run as a lambda ..
I am importing a CSV file into a Pandas DataFrame object. I’ve created an Enum which I use to validate the quality of the imported CSV, and to reference the columns elsewhere in the application. A sample is below: class FooProps(Enum): FOO = ‘Foo’ BAR = ‘Bar Bar’ def __str__(self): return self.value data = ds.getFooProperties() ..
I am still trying to learn pandas. I have a custom user defined function which will require two columns as input. It is an aggregation function so it needs to be done by group. This is my question: How can I get a grouped aggregation with multiple columns as inputs to a user defined function? ..
I ..
when i try to create VM with –nsg ” i got a template deployment error: message":"Resource name ” is invalid Same command works using ‘az’ command line. Any ideas? Regards, Vladimir Source: Python..
** ..
Recent Comments | https://askpythonquestions.com/ | CC-MAIN-2021-31 | refinedweb | 315 | 57.06 |
For checkpoint/restart (c/r) we need a method to (re)create the taskstree during restart. There are basically two approaches: in userspace(zap approach) or in the kernel (openvz approach).Once tasks have been created both approaches are similar in that allrestarting tasks end up calling the equivalent of "do_restart()" inthe kernel to perform the gory details of restoring its state.In terms of performance, both approaches are similar, and both canoptimize to avoid duplicating resources unnecessarily during theclone (e.g. mm, etc) knowing that they will be reconstructed soonafter.So the question is what's better - user-space or kernel ?Too bad that Alexey chose to ignore what's been discussed inlinux-containers mailing list in his recent post. Here is my take oncons/pros.Task creation in the kernel---------------------------* how: the user program calls sys_restart() which, for each task to restore, creates a kernel thread which is demoted to a regular process manually.* pro: a single task that calls sys_restart()* pro: restarting tasks are in full control of kernel at all times* con: arch-dependent, harder to port across architectures* con: can only restart a full containerTask creation in user space---------------------------* how: the user programs calls fork/clone to recreate a suitable task tree in userspace, and each task calls sys_restart() to restore its state; some kernel glue is necessary to synchronize restarting tasks when in the kernel.* pro: allows important flexibility during restart (see <1>)* pro: code leverages existing well-understood syscalls (fork, clone)* pro: allows restart of a only subtree (see <2>)* con: requires a way to creates tasks with specific pid (see <3>)<1> Flexibility:In the spirit of madvise() that lets tasks advise the kernel becausethey know better, there should be cradvise() for checkpoint/restartpurposes. During checkpoint it can tell the kernel "don't save thispiece of memory, it's scratch", or "ignore this file-descriptor" etc.During restart, it will can tell the kernel "use this file-descriptor"or "use this network namespace" (instead of trying to restore).Offering cradvise() capability during restart is especially importantin cases where the kernel (inevitably) won't know how to restore aresource (e.g. think special devices), when the application wants tooverride (e.g. think of a c/r aware server that would like to changethe port on which it is listening), or when it's that much simpler todo it in userspace (e.g. think setting up network namespaces).Another important example is distributed checkpoint, where therestarting tasks could (re)create all their network connections inuser space, before invoking sys_restart() and tell the kernel, viacradvise(), to use the newly created sockets.The need for this sort of flexibility has been stressed multiple timesand by multiple stake-holders interested in checkpoint/restart.<2> Restarting a subtree:The primary c/r effort is directed towards providing c/r functionalityfor containers.Wouldn't it be nice if, while doing so and at minimal added effort, wealso gain a method to checkpoint and restart an arbitrary subtree oftasks, which isn't necessarily an entire container ?Sure, it will be more constrained (e.g. resulting pid in restart won'tmatch the original pids), and won't work for all applications. But itwill still be a useful tool for many use cases, like batch cpu jobs,some servers, vnc sessions (if you want graphics) etc. Imagine you run'octave' for a week and must reboot now - 'octave' wouldn't care ifyou checkpointed it and then restart with a different pid !<3> Clone with pid:To restart processes from userspace, there needs to be a way torequest a specific pid--in the current pid_ns--for the child process(clearly, if it isn't in use).Why is it a disadvantage ? to Linus, a syscall clone_with_pid()"sounds like a _wonderful_ attack vector against badly writtenuser-land software...". Actually, getting a specific pid is possiblewithout this syscall. But the point is that it's undesirable to havethis functionality unrestricted.So one option is to require root privileges. Another option is torestrict such action in pid_ns created by the same user. Even more so,restrict to only containers that are being restarted.---Either way we go, it should be fairly easy to switch from one methodto the other, should we need to.All in all, there isn't a strong reason in favor of kernel method.In contrast, it's at least as simple in userspace (reusing existingsyscalls). More importantly, the flexibility that we gain with restartof tasks in userspace, no cost incurred (in terms of implementation orruntime overhead).Oren. | https://lkml.org/lkml/2009/4/13/401 | CC-MAIN-2016-26 | refinedweb | 755 | 52.6 |
You are required to write a C program that accepts two decimal integers, say d and r. You
may assume that the first decimal integer d is nonnegative and that the second decimal integer r
which represents the radix, will be one of 2, 3, 4, . . ., 15, 16. The program should convert the first
decimal integer d into its representation in radix r and print the result.
I seriously am clueless on how to start. What I want to do is take a decimal divide by a base record the remainder and keep doing so with the decimal until the decimal is 0 and record the remainders. That is how I get the binary.
This is what I got so far its absolutely nothing
#include <stdio.h> int main(void){ int d, r, temp; temp = d = r = 0; temp = d/r; | https://www.daniweb.com/programming/software-development/threads/223648/convert-decimal-to-radix | CC-MAIN-2021-39 | refinedweb | 141 | 70.84 |
#encrypting an image using AES
import binascii
from Crypto.Cipher import AES
def pad(s):
return s + b"\0" * (AES.block_size - len(s) % AES.block_size)
filename = 'path to input_image.jpg'
with open(filename, 'rb') as f:
content = f.read()
#converting the jpg to hex, trimming whitespaces and padding.
content = binascii.hexlify(content)
binascii.a2b_hex(content.replace(' ', ''))
content = pad(content)
#16 byte key and IV
#thank you stackoverflow.com
obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
ciphertext = obj.encrypt(content)
#is it right to try and convert the garbled text to hex?
ciphertext = binascii.hexlify(ciphertext)
print ciphertext
#decryption
obj2 = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
plaintext = obj2.decrypt(ciphertext)
#content = content.encode('utf-8')
print plaintext
#the plaintext here matches the original hex input file as it should
with open('path to - AESimageDecrypted.txt', 'wb') as g:
g.write(plaintext)
The question here is how to display encrypted image as an image without decrypting it.
The encrypted contents are not an image and cannot be unambiguously represented as an image. The best that can be done is to treat it as a bitmap, i.e. each binary value represents the intensity of some color at some coordinate.
It seems logical to treat the data as 3 bytes per pixel: RGB RGB RGB...
Images are 2D and encrypted data is just a list of bytes. Again, several options are valid. Let's say it is a square image (NxN pixels).
To create the image, I would use PIL / Pillow:
from PIL import Image # calculate sizes num_bytes = len(cyphertext) num_pixels = int((num_bytes+2)/3) # 3 bytes per pixel W = H = int(math.ceil(num_pixels ** 0.5)) # W=H, such that everything fits in # fill the image with zeros, because probably len(imagedata) < needed W*H*3 imagedata = cyphertext + '\0' * (W*H*3 - len(cyphertext)) image = Image.fromstring('RGB', (W, H), imagedata) # create image image.save('C:\\Temp\\image.bmp') # save to a file
BTW, this can be done with absolutely any string of bytes, not just encrypted images. | https://codedump.io/share/MaRQsrZflbFE/1/how-to-display-encrypted-image-as-an-image-without-decrypting-it | CC-MAIN-2017-43 | refinedweb | 348 | 52.87 |
Generally, users of regular expressions. We'll now explore the trade-offs in complexity and performance of these two approaches.
A common processing need is to match certain parts of a string and perform some processing. So, here's an example that matches words within a string and capitalizes them:
using System; using System.Text.RegularExpressions; class ProceduralFun { static void Main( ) { string txt = "the quick red fox jumped over the lazy brown dog."; Console.WriteLine("text=["+txt+"]"); string res = ""; string pat = @"\w+|\W+"; // Loop through all the matches foreach (Match m in Regex.Matches(txt, pat)) { string s = m.ToString( ); // If the first char is lower case, capitalize it if (char.IsLower(s[0])) s = char.ToUpper(s[0])+s.Substring(1, s.Length-1); res += s; // Collect the text } Console.WriteLine("result=["+res+"]"); } } previous example is by providing a MatchEvaluator, which processes it as a single result set.
So the new sample looks like:
using System; using System.Text.RegularExpressions; class ExpressionFun { static string CapText(Match m) { // Get the matched string string s = m.ToString( ); // If the first char is lower case, capitalize it if (char.IsLower(s[0])) return char.ToUpper(s[0]) + s.Substring(1, s.Length-1); return s; } static void Main( ) { string txt = "the quick red fox jumped over the lazy brown dog."; Console.WriteLine("text=[" + txt + "]"); string pat = @"\w+"; MatchEvaluator me = new MatchEvaluator(CapText); string res = Regex.Replace(txt, pat, me); Console.WriteLine("result=[" + res + "]"); } }
Also of note is that the pattern is simplified, since we need only to modify the words, not the nonwords. | http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+6.+String+Handling/6.5+Procedural-+and+Expression-Based+Patterns/ | CC-MAIN-2018-26 | refinedweb | 261 | 60.01 |
Details
- Type:
Improvement
- Status:
Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 1.0 M2
- Fix Version/s: None
- Component/s: XML code generator
- Labels:None
- Environment:N/A
- Number of attachments :
Description
The request is to enhance the binding file such that each element or attribute can have code generated so that the method signature for the getter would include the property listener input parameter. This would be useful in an environment that has data collected and shared with mulitple editors or threads. The shared parts of the data would be unobtainable without the creation of a property change listener. This will enforce the design to the users without having to remember to call the addPropertyChangeListener directly.
The reason for this request is described in the email trail below. It tells a little bit about our usage of castor to collect the data for a gui interface.
> Ralf:
>
> Thanks for you quick response.
>
> I was thinking that the binding file could be enhanced to allow for
> specification of the listener at an element level. We are using castor
> to hold the model for our editors. The editors share some of the model
> data, but not all of it, so they only want to be notified of changes
> to the shared items.
>
> My suggestion is that instead of having to call the
> addPropertyChangeListener when getting an item, the shared pieces
> would require the listener when calling the getter... In other words,
> the getter would call the addPropertyChangeListener. This could be
> used to enforce the writers of the editors to use a listener when
> asking for shared data from the model. It would avoid the possibility
> that they would forget to call the listener registration method, and
> thus avoid the debugging session when the shared data is not being
> used properly.
>
> I was just wondering if Castor had the ability to do this and if there
> were any enhancements in the works to do this. I think Castor has lots
> of features that are useful, I thought this might be useful to others
> as well. I would love to be the one to dig through the code and modify
> it to do this, but we have a very intense schedule for the product
> that we are developing. There is a way around this as stated above.
> Just thought I would add the suggestion. Without it we will still be
> using Castor. I think it is a great product.
>
> Barbara
>
>
> ----
Original Message----
> From: Ralf Joachim ralf.joachim@syscon-world.de
> Sent: Monday, February 27, 2006 5:50 PM
> To: user@castor.codehaus.org
> Subject: Re: [castor-user] Code generation format
>
>
> Hi Barbara,
>
> without a closer look at your enhancement request, 2 questions came to
> my mind:
>
> 1. How would you specify that Castor should create a getter with a
> PropertyChangeListener for sharedItem but one without for notSharedItem?
>
> 2. Where should Castor get the PropertyChangeListener from when
> unmarshalling?
>
> Having said that this seams to be a very special enhancement that may
> not be of help to much other people in my opinon. If other committers
> share this opinon we would refuse that enhancement. Another point, if we
>
> agree to add this feature, is, if you would be in a position to
> provide
> us with a patch to add this.
>
> Regards
> Ralf
> Castor JDO, committer
>
>
> Barbara Prechtl schrieb:
>
>>To whom it may concern:
>>
>>I have been working with Castor for generation of code from a schema.
>>This has been working very well. I really appreciate all the work that
>
>
>>has gone into this project. My question is related to the code that
>>gets generated.
>>
>>In the environment I am working in, I need to enforce that certain
>>objects are registered for. It is nice that Castor has the ability to
>>generate the property change listeners for an object. I was just
>>wondering if there is a way to ensure that the item getter has the
>>property change listener in the getter method signature.
>>
>>For instance:
>>
>> <xsd:complexType
>> <xsd:sequence>
>> <xsd:element
></xsd:element>
>
>> <xsd:element</xsd:element>
>> </xsd:sequence>
>> </xsd:complexType>
>>
>>For the above schema type, I would like to be able to specify to
>>castor to generate the class so that the getter for the "sharedItem"
>>property has a listener required and the getter or the "notSharedItem"
>
>
>>does not. For instance the class would be generated as such:
>>
>>public class DataObject {
>>
>> String sharedItem = null;
>> String notSharedItem = null;
>>
>> public String getSharedItem(PropertyChangeListener pc)
>>
>> public String getNotSharedItem()
>>}
>>
>>We are using this data as the model for a gui interface. There are
>>multiple editors on the gui so some of the data is shared between
>>editors. We would like to enforce that when shared data items are
>>obtained they require that the listener is registered via the method
>>signature.
>>
>>The option that we have come up with is to wrap each of the castor
>>classes with our own implementation classes that enforce our
>>requirements on shared data. This may be a big effort, as most of the
>>data is embedded into the schema model elements, and the schema may
>>change as the project evolves.
>>
>>I was just wondering if castor has any support for this type of
>>generation. I reviewed the user documentation, but have not come
>>across an answer.
>>
>>Thanks,
>>Barbara
>> | http://jira.codehaus.org/browse/CASTOR-1336?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab | CC-MAIN-2015-14 | refinedweb | 876 | 60.75 |
Constants in C++ are the fixed values that can never change during the course of the program execution.
Any normal variable can be declared a constant, and once done the value of that particular variable cannot change in that program.
If one tries to change the value of a constant then an error occurs.
Constants are used generally for the values which are universally accepted.
For Example:
pi = 3.14 , g = 10 ms-2, etc.
Now, instead of writing these values again and again in a program one can simply define a constant with the desired value and use it everywhere without the possibility of an error.
How to declare a Constant?
Now, we have seen that what are constants, but how can we declare any variable a constant in C++. There are two ways in C++ to define constants:
1. Using #define preprocessor.
Syntax:
#define identifier value
For Example:
#include <iostream.h> #define PI 3.14 void main() { int area, r=10; area = PI*r*r; cout << “Area : ”<<area; }
Output:
314
Here,
#define PI 3.14
defines the value of PI as 3.14 using the #define preprocessor, now the value of variable PI will never change during the program execution.
2. Using const keyword:
The const keyword can also be used to declare a constant in C++.
Syntax:
const data type variable_name = value;
For Example:
#include <iostream.h> void main() { const float PI=3.14; int area, r=10; area = PI*r*r; cout << “Area : ”<<area; }
Output:
314
Here,
const float PI=3.14;
defines the value of PI as 3.14 using the const keyword, now the value of variable PI will never change during the program execution and if one tries to do so, then an error occurs.
Literals:
Following are the literals in C++:
Integer Literals:
An integer literal can be a decimal, octal, or hexadecimal constant. A prefix specifies the base or radix: 0x or 0X for hexadecimal, 0 for octal, and nothing for decimal.
For Example:
231
215u
0xFeeL
Floating-point Literals:
A floating-point literal has an integer part, a decimal point, a fractional part, and an exponent part.
For Example:
3.14159
314159E-5L
Character Literals:
Character literals are enclosed in single quotes. Character literals include plain characters (e.g., ‘x’), an escape sequence (e.g., ‘\t’), or a universal character (e.g., ‘\u02C0’).
Boolean Literals:
There are two Boolean literals and they are part of standard C++ keywords:
- A value of true representing true.
- A value of false representing false.
String Literals:
String literals are enclosed in double quotes.
For Example:
“C++ Programs”
Report Error/ Suggestion | https://www.studymite.com/cpp/constants-literals-in-cpp/?utm_source=related_posts&utm_medium=related_posts | CC-MAIN-2020-50 | refinedweb | 436 | 59.19 |
A Singleton dependency is a single object instance that is shared by every object that depends upon it. In a WebAssembly application, this is the lifetime of the current application that is running in the current tab of our browser. Registering a dependency as a Singleton is acceptable when the class has no state or (in a server-side app) has state that can be shared across all users connected to the same server; a Singleton dependency must be thread-safe.
To illustrate this shared state, let’s create a very simple (i.e. non-scalable) chat application.
The Singleton chat service
First, create a new Blazor Server App. Then create a new folder named Services and add the following interface. This is the service our UI will use to send a message to other users, to be notified whenever a user sends a message, and when our user first connects will enable them to see an limited history of the chat so far. Because this is a Singleton dependency running on a Blazor server-side application, it will be shared by all users on the same server.
public interface IChatService { bool SendMessage(string username, string message); string ChatWindowText { get; } event EventHandler TextAdded; }
To implement this service we’ll use a
List<string> to store the chat history, and remove messages from the start of the list whenever there are more than 100 in the queue. We’ll use the
lock() statement to ensure thread safety.
public class ChatService : IChatService { public event EventHandler TextAdded; public string ChatWindowText { get; private set; } private readonly object SyncRoot = new object(); private List<string> ChatHistory = new List<string>(); public bool SendMessage(string username, string message) { if (string.IsNullOrWhiteSpace(username) || string.IsNullOrWhiteSpace(message)) return false; string line = $"<{username}> {message}"; lock (SyncRoot) { ChatHistory.Add(line); while (ChatHistory.Count > 50) ChatHistory.RemoveAt(0); ChatWindowText = string.Join("\r\n", ChatHistory.Take(50)); } TextAdded?.Invoke(this, EventArgs.Empty); return true; } }
- Line 3
An event our UI can hook into to be notified whenever a new message is posted to our chat server.
- Line 4
A string representing up to 50 lines of chat history.
- Lines 16-23
Locks
SyncRootto prevent concurrency issues, adds the current line to the chat history, removes the oldest history if more than 50 lines, and then recreates the
ChatWindowTextproperty’s contents.
- Line 25
Informs all consumers of the chat service that the
ChatWindowTexthas been updated.
To register the service, open Startup.cs and in
ConfigureServices add the following
services.AddSingleton<IChatService, ChatService>();
Defining the user interface
To separate our C# chat code from our display mark-up, we’ll use a code-behind approach. In the Pages folder create a new file named Index.razor.cs, Visual Studio should automatically embed it beneath the Index.Razor file. We then need to mark our new
Index class as partial.
public partial class Index { }
Well need our component class to do the following
- When initialised, subscribe to
ChatService.TextAdded.
- To avoid our Singleton holding on to references of disposed objects, when our component is disposed we should unsubscribe from
ChatService.TextAdded.
- Whenver
ChatService.TextAddedis triggered we should update the user interface to show the new
IChatService.ChatWindowTextcontents.
- We should allow the user to enter their name + some text to send to other users.
Let’s start with the easiest step, which is step 4, and then implement the other requirements in the order listed.
For simplicity, we’ll add the
Name and
Text properties to our current class rather than creating a view model, we’ll also decorate them with the
RequiredAttribute to provide feedback to the user when they try to post text without filling in the required inputs.
public partial class Index { [Required(ErrorMessage = "Enter name")] public string Name { get; set; } [Required(ErrorMessage = "Enter a message")] public string Text { get; set; } }
Initial mark-up and validation
We’ll replace the contents of Index.razor and replace it with a simple
EditForm consisting of a
DataAnnotationsValidator component and some Bootstrap CSS decorated HTML for inputting a user name and text.
@page "/" <h1>Blazor web chat</h1> <EditForm [email protected]> <DataAnnotationsValidator/> 4
Creates an EditForm that is bound to
this.
- Line 5
Enables validation based on data annotations such as
RequiredAttribute.
- Line 8
Binds a Blazor InputText component to the
Nameproperty.
- Line 9
Displays any validation errors for the
Nameproperty.
- Line 13
Binds a Blazor InputText component to the
Textproperty.
- Line 18
Displays any validation errors for the
Textproperty.
Consuming IChatService
Next we’ll inject the
IChatService and hook it up fully to our component. To achieve this, we’ll need to do the following.
public partial class Index : IDisposable { [Required(ErrorMessage = "Enter name")] public string Name { get; set; } [Required(ErrorMessage = "Enter a message")] public string Text { get; set; } [Inject] private IChatService ChatService { get; set; } private string ChatWindowText => ChatService.ChatWindowText; protected override void OnInitialized() { base.OnInitialized(); ChatService.TextAdded += TextAdded; } private void SendMessage() { if (ChatService.SendMessage(Name, Text)) Text = ""; } private void TextAdded(object sender, EventArgs e) { InvokeAsync(StateHasChanged); } void IDisposable.Dispose() { ChatService.TextAdded -= TextAdded; } }
- Lines 8-9
Declares a dependency on
IChatServicethat should be automatically injected.
- Line 11
Declares a property that makes accessing
IChatService.ChatWindowTextsimple.
- Line 16
Subscribes to the
IChatService.TextAddedevent.
- Line 21
Sends the current user’s input to the chat service.
- Line 27
Refreshes the user interface every time
IChatService.TextAddedis invoked.
- Line 32
When the component is disposed, unsubscribe from
IChatService.TextAddedto avoid memory leaks.
Note: We must wrap our
StateHasChanged call in a call to
InvokeAsync. This is because the
IChatService.TextAdded event will be triggered by whichever user added the text, and will therefore be triggered by various threads. We need Blazor to marshall these calls using
InvokeAsync to ensure all threaded calls on our component are performed in sequence.
Adding the chat window to our user interface
We now only need to add an HTML
<textarea> control to our mark-up and bind it to our
ChatWindowText property, and ensure that when the
EditForm is submitted without validation errors it calls our
SendMessage method.
The final user interface mark-up looks like this.
@page "/" <h1>Blazor web chat</h1> <EditForm [email protected] [email protected]> <DataAnnotationsValidator/> <div class="row"> <textarea class="form-control" rows=20 readonly>@ChatWindowText</textarea> </div> 5
Calls
SendMessagewhen the user presses enter on an
InputTextand the input validation passes.
- Lines 7-9
HTML to output an HTML
<textarea>and bind it to
WindowChatText.
Singleton dependencies in WebAssembly applications
The preceding application will only allow users to chat with each other if the Blazor application is a Blazor server-side application.
This is because Singleton dependencies are shared per-application process. Blazor server-side applications actually run on the server, and so singleton instances are shared across multiple users that are running in the same server application process.
When running in a WebAssembly application each browser tab is its own separate application process, thefore users would be unable to chat with each other if they are each running individual processes in their browsers (WebAssembly hosted applications) because they are not sharing any common state.
This is the same when using multiple servers. As soon as our chat service is popular enough to warrant one or more additional servers there is no longer a globally shared state for all users, only a shared state per server.
Once we need to scale up our servers, or we wish to implement our chat client as a WebAssembly app to take some of the workload away from our servers, we’d need to set up a more robust method of sharing state. This is not something within the scope of this section, as this section’s purpose is only to demonstrate how dependencies registered as Singletons are shared across a single application process.
Task for the reader
The browser is unlikely to have enough vertical space to display 50 chat messages all at once, so the user must manually scroll the chat area to see the latest messages.
To improve the user experience, our component should really scroll the
<textarea> scrollbar to the bottom every time new text is added. If you don’t wish to tackle this yourself then just take a look at the project that accompanies this section, the work is done for you. If you do fancy tackling it, here are some clues.
- You’ll some JavaScript that will take a control as a parameter and set
control.scrollTop = control.scrollHeight.
- You’ll need to invoke this JavaScript after every time our component renders.
- You’ll need an ElementReference to the
<textarea>to pass to the JavaScript. | https://blazor-university.com/dependency-injection/dependency-lifetimes-and-scopes/singleton-dependencies/ | CC-MAIN-2021-25 | refinedweb | 1,429 | 53.71 |
How to Implement Secure, HTTPOnly Cookies in Node.js with Express
April 12th, 2021
What You Will Learn in This Tutorial
Using Express.js, learn how to implement cookies that are secure in the browser to avoid XSS (cross-site scripting) attacks, man-in-the-middle attacks, and XST (cross-site tracing) attacks.
Table of Contents
Master Websockets — Learn how to build a scalable websockets implementation and interactive UI.
Cookies are a clever technique for sharing data between a user's browser and your server. The data contained in a cookie can be anything you'd like: a login token, some profile data, or even some behavioral data explaining how the user utilizes your app. From a developer's perspective this is great, but if you're not aware of common security issues, using cookies can mean accidentally leaking data to attackers.
The good news: if you're aware of the techniques required to secure cookies in your app, the work you need to do isn't too difficult. There are three types of attacks we need to guard against:
- Cross-site scripting attacks (XSS) - These attacks rely on client-side JavaScript being injected into the front-end of your application and then accessing cookies via the browser's JavaScript cookies API.
- Man-in-the-middle attacks - These attacks occur when a request is in-flight (traveling from the browser to the server) and the server does not have an HTTPS connection (no SSL).
- Cross-site tracing attacks (XST) - In the HTTP protocol, an HTTP method called
TRACEexists which allows attackers to send a request to a server (and obtain its cookies) while bypassing any security. While modern browsers generally make this irrelevant due to disabling of the
TRACEmethod, it's still good to be aware of and guard against for added security.
To get started, we're going to take a look at the server setup where our cookies will be created and then delivered back to the browser.
Creating secure cookies
To give context to our example, we're going to use the CheatCode Node.js Boilerplate which sets us up with an Express server already set up and ready for development. First, clone a copy of the boilerplate to your computer:
git clone
Next, make sure to install the boilerplate's dependencies:
cd nodejs-server-boilerplate && npm install
After that, go ahead and start up the server:
npm run dev
Next, let's open up the
/api/index.js file in the project. We're going to add a test route where we'll set our cookies and verify that they're working:
/api/index.js
import graphql from "./graphql/server"; export default (app) => { graphql(app); // Our cookie code will go here. };
Next, let's add the code for setting our cookie and then walk through how and why it's working:
/api/index.js
import dayjs from "dayjs"; import graphql from "./graphql/server"; export default (app) => { graphql(app);."); }); };
Lots of detail added, so let's step through it. First, at the top of the file, we've added an import for the
dayjs NPM package. This is a library for creating and manipulating dates in JavaScript. We'll use this below to generate the expiration date for our cookie to ensure that it doesn't linger around in a browser indefinitely.
Next, we use the Express
app instance (passed in to this file via the
/index.js file at the root of the project) to call the
.use() method which allows us to define a route in our Express application. To be clear, this is purely for example. In your own app, this could be any route where you'd want to set a cookie and return it to the browser.
Inside of the callback for our
/cookies route, we get to work setting up our cookie. First, we define an example
dataToSecure object with some test data inside.
Next, we set our cookie. Using the
res.cookie() method provided in Express, we pass three arguments:
- The name of the cookie we want to set on the browser (here,
secureCookie, but this could be whatever you want, e.g.,
pizza).
- The stringified version of the data we want to send. Here, we take our
dataToSecureobject and stringify it using
JSON.stringify(). Keep in mind: if the data you're sending back to the browser is already a string, you do not need to do this.
- The settings for the cookie. The properties set here (
secure,
httpOnly, and
expires) are Express-specific properties, but the names map 1:1 with the actual settings in the HTTP specification.
Focusing on that last argument, the settings, this is where our security comes in. There are three settings that are important for securing a cookie:
First, the
secure property takes a boolean (true/false) value which specifies whether or not this cookie can only be retrieved over an SSL or HTTPS connection. Here, we set this depending on which environment our application is running in. As long as the environment is not development, we want to force this to be
true. In development this isn't necessary because our application is not exposed to the internet, just us, and it's likely that you do not have an SSL proxy server setup locally to handle these requests.
Second, the
httpOnly property likewise takes a boolean (true/false) value, here specifying whether or not the cookies should be accessible via JavaScript in the browser. This setting is forced to
true, because it ensures that any cross-site scripting attacks (XSS) are impossible. We don't have to worry about the development environment here as this setting does not have a dependency on SSL or any other browser features.
Third, and finally, the
expires property allows us to set an expiration date on our cookie. This helps us with security by ensuring that our cookie does not stick around in a user's browser indefinitely. Depending on the data you're storing in your cookie (and your app's needs) you may want to shorten or extend this. Here, we use the
dayjs library we imported earlier, telling it to "get the current date, add 30 days to it, and then return us a JavaScript
Date object for that date." In other words, this cookie will expire in 30 days from the point of creation.
Finally, at the bottom of our route's callback function, we call to
res.send() to respond to our request. Because we're using
res.cookie() we're automatically telling Express to send the cookie back as part of the response—no need to do anything else.
Handling TRACE requests
Like we mentioned earlier, before we check that our cookies are working as expected, we want to ensure that we've blocked the potential for
TRACE requests. We need to do this to ensure that attackers cannot utilize the
TRACE HTTP method to access our
httpOnly cookies (
TRACE doesn't respect this rule). To do it, we're going to rely on a custom Express middleware that will automatically block
TRACE requests from any client (browser or otherwise).
/middleware/requestMethod.js
export default (req, res, next) => { // NOTE: Exclude TRACE and TRACK methods to avoid XST attacks. const allowedMethods = [ "OPTIONS", "HEAD", "CONNECT", "GET", "POST", "PUT", "DELETE", "PATCH", ]; if (!allowedMethods.includes(req.method)) { res.status(405).send(`${req.method} not allowed.`); } next(); };
Conveniently, the above code exists as part of the CheatCode Node.js Boilerplate and is already set up to run inside of
/middleware/index.js. To explain what's happening here, what we're doing is exporting a function that anticipates an Express
req object,
res object, and
next method as arguments.
Next, we define an array specifying all of the allowed HTTP methods for our server. Notice that this array does not include the
TRACE method. To put this to use, we run a check to see if this
allowedMethods array includes the current
request's method. If it does not, we want to respond with an HTTP 405 response code (the technical code for "HTTP method not allowed").
Assuming that the
req.method is in the
allowedMethods array, we call to the
next() method passed by Express which signals to Express to keep moving the request forward through other middleware.
If you want to see this middleware in use, start in the
/index.js file to see how the
middleware() method is imported and called (passing the Express
app instance) and then open the
/middleware/index.js file to see how the
/middleware/requestMethods.js file is imported and utilized.
Verifying secure cookies in the browser
Now, we should be all set up to test out our cookie. Because we're setting the cookie at the route
/cookies, we need to visit this route in a browser to verify that everything is working. In a web browser, open up and then open up your browser's console (usually accessible via a
CTRL + click on MacOS or by right-clicking on Windows):
In this example, we're using the Brave browser which has an identical developer insepection tool to Google Chrome (Firefox and Safari have comparable UIs but may not use the exact same naming we reference below). Here, we can see our
secureCookie being set, along with all of the data and settings that we passed on the server. To be clear, notice that here because we're in a
development environment,
Secure is unset.
An additional setting that we've left off here
SameSite is also disabled (this defaults to a value of
Lax) in the browser.
SameSite is another boolean (true/false) value that decides whether or not our cookie should only be accessible on the same domain. This is disabled because it can add confusion if you're using a separate front-end and back-end in your application (if you're using CheatCode's Next.js and Node.js boilerplates for your app, this will be true). If you want to enable this, you can, by adding
sameSite: true to the options object we passed to
res.cookie() as the third argument.
Retrieving cookies on the server
Now that we've verified our cookies exist in the browser, next, let's look at retrieving them for usage later. In order to do this, we need to make sure that our Express server is parsing cookies. This means converting the cookies string sent in the HTTP headers of a request to a more-accessible JavaScript object.
To automate this, we can add the
cookie-parser package to our app which gives us access to an Express middleware that parses this for us:
npm i cookie-parser
Implementing this is straightforward. Technically, this is already used in the CheatCode Node.js Boilerplate we're using for our example here, in the
middleware/index.js file at the root of the app:
/middleware/index.js
[...] import cookieParser from "cookie-parser"; [...] export default (app) => { [...] app.use(cookieParser()); };
Here, all we need to do is import
cookieParser from the
cookie-parser package and then call
app.use() passing a call to the
cookieParser() method like
app.use(cookieParser()). To contextualize this to our example above, here's an update to our
/api/index.js file (assuming you're writing your code from scratch):
/api/index.js
import dayjs from "dayjs"; import cookieParser from "cookie-parser"; import graphql from "./graphql/server"; export default (app) => { graphql(app); app.use(cookieParser());."); }); };
Again, you don't need to do this if you're using the CheatCode Node.js Boilerplate.
With this implemented, now, whenever the app receives a request from the browser, its cookies will be parsed and placed on the
req or request object at
req.cookies as a JavaScript object. Inside of a request, then we can do so something like the following:
/api/index.js
import dayjs from "dayjs"; import cookieParser from "cookie-parser"; import graphql from "./graphql/server"; export default (app) => { graphql(app); app.use(cookieParser()); app.use("/cookies", (req, res) => { if (!req.cookies || !req.cookies.secureCookie) {."); }); };
Here, before setting our cookie from our previous example, we call to
req.cookies (automatically added for us via the
cookieParser() middleware), checking to see if either the
req.cookies value is undefined, or, if
req.cookies is defined, is
req.cookies.secureCookie also defined. If
req.cookies.secureCookie is not defined, we want to go ahead and set our cookie as normal. If it's already been defined, we just respond to the request as normal but skip setting the cookie.
The point here is that we can access our cookies via the
req.cookies property in Express. You do not have to do the above check on your own cookie unless you want to.
How to manage cookies in GraphQL
To close the loop on managing cookies, it's worth understanding how to do this in relation to a GraphQL server. This is worth understanding if you want to set or retrieve cookies from a GraphQL resolver, or, during the server instantiation.
/api/graphql/server.js
import { ApolloServer } from "apollo-server-express"; import schema from "./schema"; import { isDevelopment } from "../../.app/environment"; import { configuration as corsConfiguration } from "../../middleware/cors"; export default (app) => { const server = new ApolloServer({ ...schema, introspection: isDevelopment, playground: isDevelopment, context: async ({ req, res }) => { const context = { req, res, user: {}, }; return context; }, }); server.applyMiddleware({ cors: corsConfiguration, app, path: "/api/graphql", }); };
Here, to ensure we can both access and set cookies via our GraphQL query and mutation resolvers, we've set the
context property for the server to be equal to a function that takes in the
req and
res (here, because we're tying this to an Express
app instance, these are the Express
req and
res objects) and then assigns them back to the
context object that's handed to all of our query and mutation resolvers:
import dayjs from 'dayjs'; export default { exampleResolver: (parent, args, context) => { // Accessing an existing cookie from context.req. const cookie = context?.req?.cookies?.secureCookie; // Setting a new cookie with context.res. if (context.res && !cookie) { const dataToSecure = { dataToSecure: "This is the secret data in the cookie.", }; res.cookie("secureCookie", JSON.stringify(dataToSecure), { secure: process.env.NODE_ENV !== "development", httpOnly: true, expires: dayjs().add(30, "days").toDate(), }); } // Arbitrary return value here. This would be whatever value you want to // resolve the query or mutation with. return cookie; }, };
In the above example, we repeat the same patterns as earlier in the tutorial, however, now we're accessing cookies via
context.req.cookies and setting them via
context.res.cookie(). Of note, this
exampleResolver isn't intended to be functional—it's just an example of how to access and set cookies from within a resolver. Your own GraphQL resolver will use more specific code related to reading or writing data in your app.
Ensuring cookies are included in your GraphQL requests
Depending on your choice of GraphQL client, the cookies from your browser (httpOnly or otherwise) may not be included in the request automatically. To ensure this happens, you will want to check the documentation for your client and see if it has an option/setting for including credentials. For example, here is the Apollo client configuration from CheatCode's Next.js Boilerplate:
new ApolloClient({ credentials: "include", link: ApolloLink.from([ new HttpLink({ uri: settings.graphql.uri, credentials: "include", }), ]), cache: new InMemoryCache(), defaultOptions: { watchQuery: { errorPolicy: "all", fetchPolicy: "network-only", }, query: { errorPolicy: "all", fetchPolicy: "network-only", }, mutate: { errorPolicy: "all", }, }, });
Here, we ensure to set the
credentials property as
'include' to signal to Apollo that we want it to include our cookies with each request. Further, because we're using the HTTP Link method from Apollo, for good measure we set
credentials to
'include' here, too.
Wrapping up
In this tutorial, we looked at how to manage secure cookies in Node.js with Express. We learned how to define a cookie, using the
secure,
httpOnly, and
expires values to ensure they stay separate from attackers as well as how to disable
TRACE requests to prevent backdoor access to our
httpOnly cookies.
We also learned how to access cookies by utilizing the Express
cookie-parser middleware, learning how to access cookies in an Express route as well as via a GraphQL context.
Get the latest free JavaScript and Node.js tutorials, course announcements, and updates from CheatCode in your inbox.
No spam. Just new tutorials, course announcements, and updates from CheatCode. | https://cheatcode.co/tutorials/how-to-implement-secure-httponly-cookies-in-node-js-with-express | CC-MAIN-2022-27 | refinedweb | 2,733 | 63.39 |
public class ServiceUI extends Object
The dialogs follow a standard pattern of acting as a continue/cancel option for a user as well as allowing the user to select the print service to use and specify choices such as paper size and number of copies.
The dialogs are designed to work with pluggable print services though the public APIs of those print services.
If a print service provides any vendor extensions these may be made accessible to the user through a vendor supplied tab panel Component. Such a vendor extension is encouraged to use Swing! and to support its accessibility APIs. The vendor extensions should return the settings as part of the AttributeSet. Applications which want to preserve the user settings should use those settings to specify the print job. Note that this class is not referenced by any other part of the Java Print Service and may not be included in profiles which cannot depend on the presence of the AWT packages.
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait be empty, or may contain application-specified values.
These are used to set the initial settings for the initially displayed print service. Values which are not supported by the print service are ignored. As the user browses print services, attributes and values are copied to the new display. If a user browses a print service which does not support a particular attribute-value, the default for that service is used as the new value to be copied.
If the user cancels the dialog, the returned attributes will not reflect any changes made by the user. A typical basic usage of this method may be :
PrintService[] services = PrintServiceLookup.lookupPrintServices( DocFlavor.INPUT_STREAM.JPEG, null); PrintRequestAttributeSet attributes = new HashPrintRequestAttributeSet(); if (services.length > 0) { PrintService service = ServiceUI.printDialog(null, 50, 50, services, services[0], null, attributes); if (service != null) { ... print ... } }. | http://docs.oracle.com/javase/8/docs/api/javax/print/ServiceUI.html | CC-MAIN-2016-30 | refinedweb | 310 | 54.52 |
Anonymous
2010-06-23
I'm wondering why don't use OCAF in NaroCAD
In the OCC forum () you wrote: "The OCAF like layer is rewritten in C# for maintenance reasons as is pretty hard to work with two debuggers, one to debug C++ OpenCascade code and the other side the C#"
Could you be so kind to give me more details about you decision ?
Many thanks
m.
Hello makka,
NaroCAD as architecture it has two code parts: one that is OpenCascade/C++ code, and some managed code (namely the .NET/C# code).
The OCAF tree in OpenCascade is made like this: there is a defined integer-indexed tree structure (the integer usage is for strength reduction, meaning faster operations on trees like seaching, managing, etc.) and inside there is a list of GUID based attributes. In this way, by adding a new GUID "type" you can attach an extra attribute to a node that consist on extra data.
When you are in a .NET world, the wrapped OCAF stops from debugging point of view just as pointers to GUIDs, but not as internal data that the attribute express.
For example imagine that you add an extra attribute named Density, that will be attached to any volumetric shape. If you debug from Visual C++ code, as an OCAF attribute you may expand the pointer to the attribute and you will see if density was setup or not, if there is a specific bug that the density is always setup to 0.0 When I will do it in OCAF mode this pointer is meaningless and the GUIDs like identification does not help much.
How NaroCAD did it better by rewriting it?
- the attribute names are reflection/namespace based names. In internal logic of NaroCAD when you debug a tree node, you will see the Density like attribute
- it is generic based. If you defined your custom attribute you have to register it once (the registering is just to know how to save/restore from file) and you can write to any node somelike this: node.Set<DensityInterpreter>().Value = 30.0;
You should remark that node.Set<DensityInterpreter>() will return an instance of DensityInterpreter so the .Value have the meaning in the DensityInterpreter context. Is static typing but have somelike dynamic language feeling, to say so. The API is practically minimalist and makes sense using generics more than playing around with GUIDs. For example:
var interpreter = node.Get<DensityInterpreter>();
if(interpreter!=null) …
This code tests if node have attached Density values to the node, and later if the node have one, will offer it so you can use it as the right type. No GUIDs, or no reflection full name is needed. Everything is transparent for user.
- working with nodes is also easy, instead of writing somelike node.Find(newId, true), you can write directly node that creates automatically the node. This to be done, would need using another class just to wrap this behavior from C# point of view making hard to operate with
- the latest important feature is propagation. Propagation in OCAF (as much as I've understand it) is done by visiting the entire scene and mark gradually the values that were affected. The design in NaroCAD is much simpler, and is based on events/delegates. Every attribute that changes can call the OnModify method and propagation will make it automatically. On top of this the "Function" code just wrap it in a also generic way this features. In fact functions just simply creates nodes and connect to notify events of the child nodes. If you will take the NodeBuilder class, it will also wrap the Function's mechanisms to be also transparent to user
The implementation of both children of any node and attribute indexing is done by using the SortedDictionary generic class, meaning in short that the adding, remove or locating one node is O(log n) (it's somehow amortized if you balance right your tree: node for example is log(firstLevelOfTree)*log(secondLevelOfTree)). The visiting of all items is O(n) or O(1) for item. This will guarantee that will likely work enough fast for most propagation scenarios that affect a lot of shapes values without adding them. The performance for adding items is still good anyway, but adding 10.000 items at once may be slow(er) but it will be O(logn*n) for all items, so may be for an unbalanced tree, just some seconds.
So at the end the advantages of using a OCAF C# rewritten version adds clarity, speed guarantees, debugging, propagation, generics. At the end the whole Node class that I've was talking about have 375 lines, which is fairly tested code, which may be interesting to be used elsewhere. For this reason we used it for saving the options parameters or the "capabilities" code meaning to link sematic relations between items that NaroCAD store.
Regards, ciplogic
Anonymous
2010-06-23
Hi ciplogic,
thank you vey much for your long & detailed reply.
Pls correct me if I'm wrong but a NaroCAD document couldn't be used with XDE, right ?
Inside the "General Definitions" of XDE I found this entry: "To read and write attributes such as names, colors, layers for IGES and STEP and validation properties and structure of assemblies for STEP, you can use an XDE document". Looking at your code inside ExportToStep I can see that you are exporting only Shapes but you don't export attributes such as names, colors, layers. Am I wrong ?
thanks
makka
Hi makka,
Yes. you're right. But some attributes as colors and layers are stored in NaroXML file. If you know how to extend the code to add the extra attributes in both ways (import and export) as they mostly exist (I'm not sure about layers part, but I'm sure that shape name and color, transparency are there), I'll be really happy to assist you how to write a patch or to make it to work just right.
You're welcome,
ciplogic | http://sourceforge.net/p/narocad/discussion/703153/thread/f59d6084/ | CC-MAIN-2013-48 | refinedweb | 1,009 | 60.04 |
Developing a Game Engine
You now understand enough about what a game engine needs to accomplish that you can start assembling your own. In this section, you create the game engine that will be used to create all the games throughout the remainder of the book. Not only that, but also you'll be refining and adding cool new features to the game engine as you develop those games. By the end of the book, you'll have a powerful game engine ready to be deployed in your own game projects.
The Game Event Functions
The first place to start in creating a game engine is to create handler functions that correspond to the game events mentioned earlier in the lesson. Following are these functions, which should make some sense to you because they correspond directly to the game events:
BOOL GameInitialize(HINSTANCE hInstance); void GameStart(HWND hWindow); void GameEnd(); void GameActivate(HWND hWindow); void GameDeactivate(HWND hWindow); void GamePaint(HDC hDC); void GameCycle();
The first function, GameInitialize(), is probably the only one that needs special explanation simply because of the argument that gets sent into it. I'm referring to the hInstance argument, which is of type HINSTANCE. This is a Win32 data type that refers to an application instance. An application instance is basically a program that has been loaded into memory and that is running in Windows. If you've ever used Alt+Tab to switch between running applications in Windows, you're familiar with different application instances. The HINSTANCE data type is a handle to an application instance, and it is very important because it allows a program to access its resources since they are stored with the application in memory.
The GameEngine Class
The game event handler functions are actually separated from the game engine itself, even though there is a close tie between them. This is necessary because it is organizationally better to place the game engine in its own C++ class. This class is called GameEngine and is shown in Listing 3.1.
NOTE
If you were trying to adhere strictly to object-oriented design principles, you would place the game event handler functions in the GameEngine class as virtual methods to be overridden. However, although that would represent good OOP design, it would also make it a little messier to assemble a game because you would have to derive your own custom game engine class from GameEngine in every game. By using functions for the event handlers, you simplify the coding of games at the expense of breaking an OOP design rule. Such are the trade-offs of game programming.
Listing 3.1 The GameEngine Class Definition Reveals How the Game Engine Is Designed
1: class GameEngine 2: { 3: protected: 4: // Member Variables 5: static GameEngine* m_pGameEngine; 6: HINSTANCE m_hInstance; 7: HWND m_hWindow; 8: TCHAR m_szWindowClass[32]; 9: TCHAR m_szTitle[32]; 10: WORD m_wIcon, m_wSmallIcon; 11: int m_iWidth, m_iHeight; 12: int m_iFrameDelay; 13: BOOL m_bSleep; 14: 15: public: 16: // Constructor(s)/Destructor 17: GameEngine(HINSTANCE hInstance, LPTSTR szWindowClass, 18: LPTSTR szTitle, WORD wIcon, WORD wSmallIcon, int iWidth = 640, 19: int iHeight = 480); 20: virtual ~GameEngine(); 21: 22: // General Methods 23: static GameEngine* GetEngine() { return m_pGameEngine; }; 24: BOOL Initialize(int iCmdShow); 25: LRESULT HandleEvent(HWND hWindow, UINT msg, WPARAM wParam, 26: LPARAM lParam); 27: 28: // Accessor Methods 29: HINSTANCE GetInstance() { return m_hInstance; }; 30: HWND GetWindow() { return m_hWindow; }; 31: void SetWindow(HWND hWindow) { m_hWindow = hWindow; }; 32: LPTSTR GetTitle() { return m_szTitle; }; 33: WORD GetIcon() { return m_wIcon; }; 34: WORD GetSmallIcon() { return m_wSmallIcon; }; 35: int GetWidth() { return m_iWidth; }; 36: int GetHeight() { return m_iHeight; }; 37: int GetFrameDelay() { return m_iFrameDelay; }; 38: void SetFrameRate(int iFrameRate) { m_iFrameDelay = 1000 / 39: iFrameRate; }; 40: BOOL GetSleep() { return m_bSleep; }; 41: void SetSleep(BOOL bSleep) { m_bSleep = bSleep; }; 42: };
The GameEngine class definition reveals a subtle variable naming convention that wasn't mentioned in the previous lesson. This naming convention involves naming member variables of a class with an initial m_ to indicate that they are class members. Additionally, global variables are named with a leading underscore (_), but no m. This convention is useful because it helps you to immediately distinguish between local variables, member variables, and global variables in a program. The member variables for the GameEngine class all take advantage of this naming convention, which is evident in lines 5 through 13.
The GameEngine class defines a static pointer to itself, m_pGameEngine, which is used for outside access by a game program (line 5). The application instance and main window handles of the game program are stored away in the game engine using the m_hInstance and m_hWindow member variables (lines 6 and 7). The name of the window class and the title of the main game window are stored in the m_szWindowClass and m_szTitle member variables (lines 8 and 9). The numeric IDs of the two program icons for the game are stored in the m_wIcon and m_wSmallIcon members (line 10). The width and height of the game screen are stored in the m_iWidth and m_iHeight members (line 11). It's important to note that this width and height corresponds to the size of the game screen, or play area, not the size of the overall program window, which is larger to accommodate borders, a title bar, menus, and so on. The m_iFrameDelay member variable in line 12 indicates the amount of time between game cycles, in milliseconds. And finally, m_bSleep is a Boolean member variable that indicates whether the game is sleeping (paused).
The GameEngine constructor and destructor are defined after the member variables, as you might expect. The constructor is very important because it accepts arguments that dramatically impact the game being created. More specifically, the GameEngine() constructor accepts an instance handle, window classname, title, icon ID, small icon ID, and width and height (lines 1719). Notice that the iWidth and iHeight arguments default to values of 640 and 480, respectively, which is a reasonable minimum size for game screens. The ~GameEngine() destructor doesn't do anything, but it's worth defining it in case you need to add some cleanup code to it later (line 20).
I mentioned that the GameEngine class maintains a static pointer to itself. This pointer is accessed from outside the engine using the static GetEngine() method, which is defined in line 23. The Initialize() method is another important general method in the GameEngine class, and its job is to initialize the game program once the engine is created (line 24). The HandleEvent() method is responsible for handling standard Windows events within the game engine, and is a good example of how the game engine hides the details of generic Windows code from game code (lines 25 and 26).
The remaining methods in the GameEngine class are accessor methods used to access member variables; these methods are all used to get and set member variables. The one accessor method to pay special attention to is SetFrameRate(), which sets the frame rate, or number of cycles per second, of the game engine (lines 38 and 39). Because the actual member variable that controls the number of game cycles per second is m_iFrameDelay, which is measured in milliseconds, it's necessary to perform a quick calculation to convert the frame rate in SetFrameRate() to milliseconds.
The source code for the GameEngine class provides implementations for the methods described in the header that you just saw, as well as the standard WinMain() and WndProc() functions that tie into the game engine. The GameEngine source code also initializes the static game engine pointer, like this:
GameEngine *GameEngine::m_pGameEngine = NULL;
Listing 3.2 contains the source code for the game engine's WinMain() function.
Listing 3.2 The WinMain() Function in the Game Engine Makes Calls to Game Engine Functions and Methods, and Provides a Neat Way of Separating Standard Windows Program Code from Game Code
1: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, 2: PSTR szCmdLine, int iCmdShow) 3: { 4: MSG msg; 5: static int iTickTrigger = 0; 6: int iTickCount; 7: 8: if (GameInitialize(hInstance)) 9: { 10: // Initialize the game engine 11: if (!GameEngine::GetEngine()->Initialize(iCmdShow)) 12: return FALSE; 13: 14: // Enter the main message loop 15: while (TRUE) 16: { 17: if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) 18: { 19: // Process the message 20: if (msg.message == WM_QUIT) 21: break; 22: TranslateMessage(&msg); 23: DispatchMessage(&msg); 24: } 25: else 26: { 27: // Make sure the game engine isn't sleeping 28: if (!GameEngine::GetEngine()->GetSleep()) 29: { 30: // Check the tick count to see if a game cycle has elapsed 31: iTickCount = GetTickCount(); 32: if (iTickCount > iTickTrigger) 33: { 34: iTickTrigger = iTickCount + 35: GameEngine::GetEngine()->GetFrameDelay(); 36: GameCycle(); 37: } 38: } 39: } 40: } 41: return (int)msg.wParam; 42: } 43: 44: // End the game 45: GameEnd(); 46: 47: return TRUE; 48: }
Although this WinMain() function is similar to the one you saw in the previous lesson, there is an important difference. The difference has to do with the fact that this WinMain() function establishes a game loop that takes care of generating game cycle events at a specified interval. The smallest unit of time measurement in a Windows program is called a tick, which is equivalent to one millisecond, and is useful in performing accurate timing tasks. In this case, WinMain() counts ticks in order to determine when it should notify the game that a new cycle is in order. The iTickTrigger and iTickCount variables are used to establish the game cycle timing in WinMain() (lines 5 and 6) .
The first function called in WinMain() is GameInitialize(), which gives the game a chance to be initialized. Remember that GameInitialize() is a game event function that is provided as part of the game-specific code for the game, and therefore isn't a direct part of the game engine. A method that is part of the game engine is Initialize(), which is called to get the game engine itself initialized in line 11. From there WinMain() enters the main message loop for the game program, part of which is identical to the main message loop you saw in the previous lesson (lines 1724). The else part of the main message loop is where things get interesting. This part of the loop first checks to make sure that the game isn't sleeping (line 28), and then it uses the frame delay for the game engine to count ticks and determine when to call the GameCycle() function to trigger a game cycle event (line 36). WinMain() finishes up by calling GameEnd() to give the game program a chance to wrap up the game and clean up after itself (line 45).
The other standard Windows function included in the game engine is WndProc(), which is surprisingly simple now that the HandleEvent() method of the GameEngine class is responsible for processing Windows messages:
LRESULT CALLBACK WndProc(HWND hWindow, UINT msg, WPARAM wParam, LPARAM lParam) { // Route all Windows messages to the game engine return GameEngine::GetEngine()->HandleEvent(hWindow, msg, wParam, lParam); }
All WndProc() really does is pass along all messages to HandleEvent(), which might at first seem like a waste of time. However, the idea is to allow a method of the GameEngine class to handle the messages so that they can be processed in a manner that is consistent with the game engine.
Speaking of the GameEngine class, now that you have a feel for the support functions in the game engine, we can move right along and examine specific code in the GameEngine class. Listing 3.3 contains the source code for the GameEngine() constructor and de-structor.
Listing 3.3 The GameEngine::GameEngine() Constructor Takes Care of Initializing Game Engine Member Variables, Whereas the Destructor is Left Empty for Possible Future Use
1: GameEngine::GameEngine(HINSTANCE hInstance, LPTSTR szWindowClass, 2: LPTSTR szTitle, WORD wIcon, WORD wSmallIcon, int iWidth, int iHeight) 3: { 4: // Set the member variables for the game engine 5: m_pGameEngine = this; 6: m_hInstance = hInstance; 7: m_hWindow = NULL; 8: if (lstrlen(szWindowClass) > 0) 9: lstrcpy(m_szWindowClass, szWindowClass); 10: if (lstrlen(szTitle) > 0) 11: lstrcpy(m_szTitle, szTitle); 12: m_wIcon = wIcon; 13: m_wSmallIcon = wSmallIcon; 14: m_iWidth = iWidth; 15: m_iHeight = iHeight; 16: m_iFrameDelay = 50; // 20 FPS default 17: m_bSleep = TRUE; 18: } 19: 20: GameEngine::~GameEngine() 21: { 22: }
The GameEngine() constructor is relatively straightforward in that it sets all the member variables for the game engine. The only member variable whose setting might seem a little strange at first is m_iFrameDelay, which is set to a default frame delay of 50 milliseconds (line 16). You can determine the number of frames (cycles) per second for the game by dividing 1,000 by the frame delay, which in this case results in 20 frames per second. This is a reasonable default for most games, although specific testing might reveal that it needs to be tweaked up or down.
The Initialize() method in the GameEngine class is used to initialize the game engine. More specifically, the Initialize() method now performs a great deal of the messy Windows setup tasks such as creating a window class for the main game window and then creating a window from the class. Listing 3.4 shows the code for the Initialize() method.
Listing 3.4 The GameEngine::Initialize() Method Handles Some of the Dirty Work That Usually Takes Place in WinMain()
1: BOOL GameEngine::Initialize(int iCmdShow) 2: { 3: WNDCLASSEX wndclass; 4: 5: // Create the window class for the main window 6: wndclass.cbSize = sizeof(wndclass); 7: wndclass.style = CS_HREDRAW | CS_VREDRAW; 8: wndclass.lpfnWndProc = WndProc; 9: wndclass.cbClsExtra = 0; 10: wndclass.cbWndExtra = 0; 11: wndclass.hInstance = m_hInstance; 12: wndclass.hIcon = LoadIcon(m_hInstance, 13: MAKEINTRESOURCE(GetIcon())); 14: wndclass.hIconSm = LoadIcon(m_hInstance, 15: MAKEINTRESOURCE(GetSmallIcon())); 16: wndclass.hCursor = LoadCursor(NULL, IDC_ARROW); 17: wndclass.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1); 18: wndclass.lpszMenuName = NULL; 19: wndclass.lpszClassName = m_szWindowClass; 20: 21: // Register the window class 22: if (!RegisterClassEx(&wndclass)) 23: return FALSE; 24: 25: // Calculate the window size and position based upon the game size 26: int iWindowWidth = m_iWidth + GetSystemMetrics(SM_CXFIXEDFRAME) * 2, 27: iWindowHeight = m_iHeight + GetSystemMetrics(SM_CYFIXEDFRAME) * 2 + 28: GetSystemMetrics(SM_CYCAPTION); 29: if (wndclass.lpszMenuName != NULL) 30: iWindowHeight += GetSystemMetrics(SM_CYMENU); 31: int iXWindowPos = (GetSystemMetrics(SM_CXSCREEN) - iWindowWidth) / 2, 32: iYWindowPos = (GetSystemMetrics(SM_CYSCREEN) - iWindowHeight) / 2; 33: 34: // Create the window 35: m_hWindow = CreateWindow(m_szWindowClass, m_szTitle, WS_POPUPWINDOW | 36: WS_CAPTION | WS_MINIMIZEBOX, iXWindowPos, iYWindowPos, iWindowWidth, 37: iWindowHeight, NULL, NULL, m_hInstance, NULL); 38: if (!m_hWindow) 39: return FALSE; 40: 41: // Show and update the window 42: ShowWindow(m_hWindow, iCmdShow); 43: UpdateWindow(m_hWindow); 44: 45: return TRUE; 46: }
This code should look at least vaguely familiar from the Skeleton program example in the previous lesson because it is carried over straight from that program. However, in Skeleton this code appeared in the WinMain() function, whereas here it has been incorporated into the game engine's Initialize() method. The primary change in the code is the determination of the window size, which is calculated based on the size of the client area of the window. The GetSystemMetrics() Win32 function is called to get various standard Window sizes such as the width and height of the window frame (lines 26 and 27), as well as the menu height (line 30). The position of the game window is then calculated so that the game is centered on the screen (lines 31 and 32).
The creation of the main game window in the Initialize() method is slightly different from what you saw in the previous lesson. The styles used to describe the window here are WS_POPUPWINDOW, WS_CAPTION, and WS_MINIMIZEBOX, which results in a different window from the Skeleton program (lines 35 and 36). In this case, the window is not resizable, and it can't be maximized; however, it does have a menu, and it can be minimized.
The Initialize() method is a perfect example of how generic Windows program code has been moved into the game engine. Another example of this approach is the HandleEvent() method, which is shown in Listing 3.5.
Listing 3.5 The GameEngine::HandleEvent() Method Receives and Handles Messages That Are Normally Handled in WndProc()
1: LRESULT GameEngine::HandleEvent(HWND hWindow, UINT msg, WPARAM wParam, 2: LPARAM lParam) 3: { 4: // Route Windows messages to game engine member functions 5: switch (msg) 6: { 7: case WM_CREATE: 8: // Set the game window and start the game 9: SetWindow(hWindow); 10: GameStart(hWindow); 11: return 0; 12: 13: case WM_ACTIVATE: 14: // Activate/deactivate the game and update the Sleep status 15: if (wParam != WA_INACTIVE) 16: { 17: GameActivate(hWindow); 18: SetSleep(FALSE); 19: } 20: else 21: { 22: GameDeactivate(hWindow); 23: SetSleep(TRUE); 24: } 25: return 0; 26: 27: case WM_PAINT: 28: HDC hDC; 29: PAINTSTRUCT ps; 30: hDC = BeginPaint(hWindow, &ps); 31: 32: // Paint the game 33: GamePaint(hDC); 34: 35: EndPaint(hWindow, &ps); 36: return 0; 37: 38: case WM_DESTROY: 39: // End the game and exit the application 40: GameEnd(); 41: PostQuitMessage(0); 42: return 0; 43: } 44: return DefWindowProc(hWindow, msg, wParam, lParam); 45: }
The HandleEvent() method looks surprisingly similar to the WndProc() method in the Skeleton program in that it contains a switch statement that picks out Windows messages and responds to them individually. However, the HandleEvent() method goes a few steps further than WndProc() by handling a couple more messages, and also making calls to game engine functions that are specific to each different game. First, the WM_CREATE message is handled, which is sent whenever the main game window is first created (line 7). The handler code for this message sets the window handle in the game engine (line 9), and then calls the GameStart() game event function to get the game initialized (line 10).
The WM_ACTIVATE message is a new one that you haven't really seen, and its job is to inform the game whenever its window is activated or deactivated (line 13). The wParam message parameter is used to determine whether the game window is being activated or deactivated (line 15). If the game window is being activated, the GameActivate() function is called and the game is awoken (lines 17 and 18). Similarly, if the game window is being deactivated, the GameDeactivate() function is called and the game is put to sleep (lines 22 and 23).
The remaining messages in the HandleEvent() method are pretty straightforward in that they primarily call game functions. The WM_PAINT message handler calls the standard Win32 BeginPaint() function (line 30) followed by the GamePaint() function (line 33). The EndPaint() function is then called to finish up the painting process (line 35); you learn a great deal more about BeginPaint() and EndPaint() in the next hour. Finally, the WM_DESTROY handler calls the GameEnd() function and then terminates the whole program (lines 40 and 41).
You've now seen all the code for the game engine, which successfully combines generic Windows code from the Skeleton example with new code that provides a solid framework for games. Let's now take a look at a new and improved Skeleton program that takes advantage of the game engine. | http://www.informit.com/articles/article.aspx?p=29618&seqNum=3 | CC-MAIN-2019-26 | refinedweb | 3,147 | 52.83 |
So im working on a class assignment where I need to take a base 2 binary number and convert it to its base 10 equivalent. I wanted to store the binary as a string, then scan the string and skip the 0s, and at 1s add 2^i. Im not able to compare the string at index i to ‘0, and im not sure why if(binaryNumber.at(i) == ‘0’) isnt working. It results in an “out of range memory error”. Can someone help me understand why this doesnt work?
#include <iostream> using namespace std; void main() { string binaryNumber; int adder; int total = 0; cout << "Enter a binary number to convert to decimal n"; cin >> binaryNumber; reverse(binaryNumber.begin(),binaryNumber.end()); for (int i = 1; i <= binaryNumber.length(); i++) { if(binaryNumber.at(i) == '0') { //THIS IS THE PROBLEM //do nothing and skip to next number } else { adder = pow(2, i); total = adder + total; } } cout << "The binary number " << binaryNumber << " is " << total << " in decimal form.n"; system("pause"); }
Answer
Array indices for C++ and many other languages use zero based index. That means for array of size 5, index ranges from 0 to 4. In your code your are iterating from 1 to array_length. Use:
for (int i = 0; i < binaryNumber.length(); i++) | https://www.tutorialguruji.com/cpp/comparing-a-string-at-index-i-to-a-value-in-c/ | CC-MAIN-2021-43 | refinedweb | 211 | 65.83 |
Below is a little range generator, irange, which is compatible with range.?
lwickjr: I like the idea. Anyone else?
The author doesn't have the time/energy to write/push a PEP for the PythonEnhancementProcess. If you think this generator is a good idea, please submit a PEP.
lwickjr: Neither do I. Anyone else?
Test Suite
Here is the test:
import unittest from iter import irange class TestIter(unittest.TestCase): def testIrangeValues(self): testArguments = ( [10],[0],[7,70], [-10], [10,700,17], [10,1000,-3],[10,-99,-7] ) l = lambda *x: list(irange(*x)) for args in testArguments: list1 = l(*args) list2 = range(*args) self.assertEqual(list1,list2) def testIrangeArguments(self): self.assertRaises(ValueError,irange,0,99,0) #0 step #self.assertRaises(irange(1,2,3,4)) #too many arguments if __name__ == '__main__': unittest.main()
implementation
"""generator for a range of integers that matches the output of an iterator over the list range(...) """ from itertools import count,takewhile """private generator over the unchecked arguments start, stop, step [i for i in __irange3(start,stop,step)] produces range(start,stop,step) """ def __irange3__(start,stop,step): #the stop condition changes depending on the sign of step predicate = step > 0 and (lambda x : x < stop) or (lambda x : x > stop) i = start while predicate(i): yield i i += step """ generator to produce the same output as an iterator over range(args). The advantage of this over range() or xrange() is a list is not created in order to provide the generator. [i for i in irange(args) ] == [range(args)] """ def irange(*args): if len(args) not in (1,2,3): raise TypeError, "expected 1,2, or 3 arguments to irange" #the stop value will be the 1st argument when 1 argument supplied, it #will the the second argument otherwise. The index is one less than the #argument stop = args[(1,0)[len(args) == 1]] if len(args) == 3: step = args[2] #we check the step before we create the generator, for earlier #error annunciation. if step == 0: raise ValueError() return __irange3__(args[0],stop,step) #for the cases with no step (1 or two args) its easy to #use the built in count() generator and filter predicate = lambda x : x < stop #set up the arguments to count, depending on whether we have a start counter_args = len(args) != 1 and [args[0]] or () counter = count(*counter_args) return takewhile(predicate, counter)
Alternate Implementation
Perhaps a simple implementation can be constructed using *count* and *islice* from intertools?. -- Anon
I think it's way too much code, and also it does not accept named parameters. How about this? -- JürgenHermann 2005-10-19 20:06:52
def irange(start, stop=None, step=1): """ Generator for. """ if step == 0: raise ValueError("irange() step argument must not be zero") if stop is None: stop = start start = 0 continue_cmp = (step < 0) * 2 - 1 while cmp(start, stop) == continue_cmp: yield start start += step if __name__ == "__main__": cases = [ (0,), (1,), (2,), (1, 0), (1, 1), (1, 2), (1, 4), (1, 4, 1), (1, 4, 2), (1, 4, 3), (1, 4, -1), (1, 4, -2), (1, 4, -3), (4, 1, 1), (4, 1, 2), (4, 1, 3), (4, 1, -1), (4, 1, -2), (4, 1, -3), ] for case in cases: assert range(*case) == [i for i in irange(*case)] print "All %d tests passed!" % len(cases)
The alternate implementation is fine, though I'd prefer to see a "takewhile" rather than a while loop in the spirit of functional programming - but thats minor.? | https://wiki.python.org/moin/RangeGenerator?highlight=xrange | CC-MAIN-2017-04 | refinedweb | 580 | 57.1 |
I'm trying to create a webpage that will close itself when an mp3 audio file finishes playing. Right now I'm trying to do this simply by playing the mp3 from a keyframe in the timeline as an event. On the timeline I've placed a blank keyframe with an ActionScript attached to it that looks like this:
import flash.external.ExternalInterface
ExternalInterface.call("closeWindow");
stop();
The HTML page that embeds the Flash running the mp3 has a Javascript function called closeWindow that looks like this:
<script language="JavaScript">
function closeWindow(){window.close();}
</script>
Very simple very easy. But WHY DOESNT THIS WORK? The ExternalInterface call should fire off the Jscript on the HTML but it's not doing it.
Any suggestions out there?
javascript can only close windows that were opened using javascript.
Ok so this does work after all. First I had to create an HTML page from which to open a window containing the Flash mp3 audio using a window.open Javascript. Then, it was really important to test the whole thing on an active webserver and not just on my workstation. The webserver enabled the ExternalInterface call to connect to the window.close Javascript in the HTML page containing the Flash. So it is pretty simple after all.
that's kind of rude. | https://forums.adobe.com/thread/832196 | CC-MAIN-2018-05 | refinedweb | 218 | 75.4 |
16 November 2009 10:26 [Source: ICIS news]
RECAST: In order to clarify lead
LONDON (ICIS news)--Evonik Industries' chemicals business posted a 16% year-on-year increase in third-quarter earnings before interest, tax, depreciation and amortisation (EBITDA) to €505m ($753.7m), despite a drop in sales, due to cost cutting, the German specialty chemicals major said on Monday.
The group’s chemicals business reported a 16% drop in third quarter sales to €2.6bn compared with same period a year before due to the global economic crisis.
Evonik said the chemicals business had reported an upturn in demand, especially in Asia and ?xml:namespace>
Improvements in Evonik's chemical arm helped the company's overall third-quarter group net income to more than double to $168m compared with the same period last year, despite a 20% year-on-year fall in sales to €3.3bn.
“Our efforts to lower costs and raise efficiency are having an effect. We are on course despite rough seas,” said Klaus Engel, chairman of Evonik Industries’ executive board.
Engel said he had been satisfied with the progress of Evonik’s “on track” efficiency improvement programme, which had aimed to achieve a substantial and sustained improvement in the company’s competitiveness.
“We will exceed our goal of saving €300m this year alone,” he said.
In its outlook, Evonik said it would remain cautious and expected a substantial drop in full-year sales in 2009 as a result of the reduction in chemicals volumes caused by the economic situation.
“The group expects a slight upturn in some chemicals business lines but does not yet anticipate a sound and broadly based recovery,” the group said.
($1 = €0.67)
Read Paul Hodges’ Chemicals and the Economy Blog | http://www.icis.com/Articles/2009/11/16/9263915/evoniks-chemical-business-q3-ebitda-increases-16-to-505m.html | CC-MAIN-2014-42 | refinedweb | 290 | 58.82 |
As you may already know, I really like strace. (It has a whole category on this blog). So when the people at Big Data Montreal asked if I wanted to give a talk about stracing Hadoop, the answer was YES OBVIOUSLY.
I set up a small Hadoop cluster (1 master, 2 workers, replication set to 1) on Google Compute Engine to get this working, so that’s what we’ll be talking about. It has one 14GB CSV file, which contains part of this Wikipedia revision history dataset
Let’s start diving into HDFS! (If this is familiar to you, I talked about a lot of this already in Diving into HFDS. There are new things, though! At the end of this we edit the blocks on the data node and see what happens and it’s GREAT.)
$ snakebite ls -h / -rw-r--r-- 1 bork supergroup 14.1G 2014-12-08 02:13 /wikipedia.csv
Files are split into blocks
HDFS is a distributed filesystem, so a file can be split across many machines. I wrote a little module to help explore how a file is distributed. Let’s take a look!
You can see the source code for all this in hdfs_fun.py.
import hdfs_fun fun = hdfs_fun.HDFSFun() blocks = fun.find_blocks('/wikipedia.csv') fun.print_blocks(blocks)
which outputs
Bytes | Block ID | # Locations | Hostnames 134217728 | 1073742025 | 1 | hadoop-w-1 134217728 | 1073742026 | 1 | hadoop-w-1 134217728 | 1073742027 | 1 | hadoop-w-0 134217728 | 1073742028 | 1 | hadoop-w-1 134217728 | 1073742029 | 1 | hadoop-w-0 134217728 | 1073742030 | 1 | hadoop-w-1 .... 134217728 | 1073742136 | 1 | hadoop-w-0 66783720 | 1073742137 | 1 | hadoop-w-1
This tells us that
wikipedia.csv is split into 113 blocks, which are
all 128MB except the last one, which is smaller. They have block IDs
1073742025 - 1073742137. Some of them are on hadoop-w-0, and some are on
hadoop-w-1.
Let’s see the same thing using strace!
$ strace -f -o strace.out snakebite cat /wikipedia.csv | head
Part 1: talk to the namenode!
We ask the namenode where /wikipedia.csv is…
connect(4, {sa_family=AF_INET, sin_port=htons(8020), sin_addr=inet_addr("10.240.98.73")}, 16) sendto(4, "\n\21getBlockLocations\22.org.apache.hadoop.hdfs.protocol.ClientProtocol\30\1", 69, 0, NULL, 0) = 69 sendto(4, "\n\16/wikipedia.csv\20\0\30\350\223\354\2378", 24, 0, NULL, 0) = 24
… and get an answer!
recvfrom(4, "\255\202\2\n\251\202\2\10\350\223\354\2378\22\233\2\n7\n'BP-572418726-10.240.98.73-1417975119036\20\311\201\200\200\4\30\261\t \200\200\200@\20\0\236\2\n7\n'BP-572418726-10.240.98.73-1417975119036\20\312\201\200\200\4\30\262\t \200\200\200@\20\200\200\200@\237\2\n7\n'BP-572418726-10.240.98.73-1417975119036\20\313\201\200\200\4\30\263\t \200\200\200@\20\200\200\200\200\1\32\243\1\nk\n\01610.240.109.224\22%hadoop-w-0.c.stracing-hadoop.internal\32$bd6125d3-60ea-4c22-9634-4f6f352cfa3e \332\206\3(\233\207\0030\344\206\0038\0\20\200\300\323\356&\30\200\240\342\335\35 \200\240\211\202\2(\200\240\342\335\0350\263\257\234\276\242)8\1B\r/default-rackP\0X\0`\0 \0*\10\n\0\22\0\32\0\"\0002\1\0008\1B'DS-c5ef58ca-95c4-454d-adf4-7ceaf632c035\22\237\2\n7\n'BP-572418726-10.240.98.73-1417975119036\20\314\201\200\200\4\30\264\t \200\200\200@\20\200\200\200\300\1\32\243\1\nk\n\01610.240.146.168\22%hadoop-w-1.c.stracing-hadoop.inte"..., 33072, 0, NULL, NULL) = 32737
The hostnames in this answer totally match up with the table of where we think the blocks are!
Part 2: ask the datanode for data!
So the next part is that we ask
10.240.146.168 for the first block.
connect(5, {sa_family=AF_INET, sin_port=htons(50010), sin_addr=inet_addr("10.240.146.168")}, 16) = 0 recvfrom(5, "title,id,language,wp_namespace,is_redirect,revision_id,contributor_ip,contributor_id,contributor_username,timestamp,is_minor,is_bot,reversion_id,comment,num_characters\nIvan Tyrrell,6126919,,0,true,264190184,,37486,Oddharmonic,1231992299,,,,\"Added defaultsort tag, categories.\",2989\nInazuma Raigor\305\215,9124432,,0,,224477516,,2995750,ACSE,1215564370,,,,/* Top division record */ rm jawp reference,5557\nJeb Bush,189322,,0,,299771363,66.119.31.10,,,1246484846,,,,/* See also */,43680\nTalk:Goranboy (city),18941870,,1,,", 512, 0, NULL, NULL) = 512 recvfrom(5, "233033452,,627032,OOODDD,1219200113,,,,talk page tag using [[Project:AutoWikiBrowser|AWB]],52\nTalk:Junk food,713682,,1,,210384592,,6953343,D.c.camero,1210013227,,,,/* Misc */,13654\nCeline Dion (album),3294685,,0,,72687473,,1386902,Max24,1156886471,,,,/* Chart Success */,4578\nHelle Thorning-Schmidt,1728975,,0,,236428708,,7782838,Vicki Reitta,1220614668,,,,/* Member of Folketing */ updating (according to Danish wikipedia),5389\nSouthwest Florida International Airport,287529,,0,,313446630,76.101.171.136,,,125", 512, 0, NULL, NULL) = 512
$ strace -e connect snakebite cat /wikipedia.csv > /dev/null connect(5, {sa_family=AF_INET, sin_port=htons(50010), sin_addr=inet_addr("10.240.146.168")},
This sequence matches up exactly with the order of the blocks in the table up at the top! So fun. Next, we can look at the message the client is sending to the datanodes:
This is a little hard to read, but it turns out it’s a Protocol Buffer and so we can parse it pretty easily. Here’s what it’s trying to say:
OpReadBlockProto header { baseHeader { block { poolId: "BP-572418726-10.240.98.73-1417975119036" blockId: 1073742025 generationStamp: 1201 } token { identifier: "" password: "" kind: "" service: "" } } clientName: "snakebite" }
And then, of course, we get a response:
recvfrom(5,"title,id,language,wp_namespace,is_redirect,revision_id,contributo r_ip,contributor_id,contributor_username,timestamp,is_minor,is_bot ,reversion_id,comment,num_characters\nIvanTyrrell,6126919,,0,true,264190184,, 37486,Oddharmonic,1231992299,,,,\"Addeddefaultsorttag,categorie s.\",2989\nInazumaRaigor\305\215,9124432,,0,,224477516,,2995750,ACSE,12155643 70,,,,/*Topdivisionrecord*/rmjawpreference,5557\nJebBush,1 89322,,0,,299771363,66.119.31.10,,,1246484846,,,,/*Seea
Which is just the beginning of a CSV file! How wonderful.
Part 3: Finding the block on the datanode.
Seeing the datanode send us the data is nice, but what if we want to get even closer to the data? It turns out that this is really easy. I sshed to my data node and ran
$ locate 1073742025
with the idea that maybe there was a file with
1073742025 in the name that had the block data. And there was!
$ cd /hadoop/dfs/data/current/BP-572418726-10.240.98.73-1417975119036/current/finalized $ ls -l blk_1073742025 -rw-r--r-- 1 hadoop hadoop 134217728 Dec 8 02:08 blk_1073742025
It has exactly the right size (134217728 bytes), and if we look at the beginning, it contains exactly the data from the first 128MB of the CSV file. GREAT.
Super fun exciting part: Editing the block on the datanode
So I was giving this talk yesterday, and was doing a live demo where I was ssh’d into the data node, and we were looking at the file for the block. And suddenly I thought… WAIT WHAT IF WE EDITED IT GUYS?!
And someone commented “No, it won’t work, there’s metadata, the checksum will fail!“. So, of course, we tried it, because toy clusters are for breaking.
And it worked! Which wasn’t perhaps super surprising because replication
was set to 1 and maybe a 128MB file is too big to take a checksum of
every time you want to read from it, but REALLY FUN. I edited the
beginning of the file to say
AWESOME AWESOME AWESOME instead of
whatever it said before (keeping the file size the same), and then a
snakebite cat /wikipedia.csv showed the file starting with
AWESOME
AWESOME AWESOME.
So some lessons:
- I’d really like to know more about data consistency in Hadoop clusters
- live demos are GREAT
- writing a blog is great because then people ask me to give talks about fun things I write about like stracing Hadoop
That’s all folks! There are slides for the talk I gave, though this post is guaranteed to be much better than the slides. And maybe video for that talk will be up at some point. | http://jvns.ca/blog/2014/12/10/spying-on-hadoop-with-strace/ | CC-MAIN-2017-13 | refinedweb | 1,384 | 65.93 |
How to Convert Python Functions into PySpark UDFs
We have a Spark dataframe and want to apply a specific transformation to a column/a set of columns. In Pandas, we can use the map() and apply() functions. The Spark equivalent is the udf (user-defined function). A user defined function is generated in two steps. In step one, we create a normal python function, which is then in step two converted into a udf, which can then be applied to the data frame.
This post shows how to code and use a udf. First, we take a look at how to proceed in the simplest case: a function with one input and one output variable. Afterwards we level up our udf abilities and use a function with multiple in- and output variables. The code has been tested for Spark 2.1.1.
A general remark: When dealing with udfs, it is important to be aware of the type of output that your function returns. If you get the output data types wrong, your udf will return only nulls.
For both of the examples we need to import the following modules:
from pyspark.sql.functions import udf, struct, col from pyspark.sql.types import * import pyspark.sql.functions as func
Level 0: One-In-One-Out
Step 1: Define your function
I was recently recoding binned ages into numeric format. This is an abbreviated version of a function that takes a string, compares it to several options and finally returns a float.
def extractAge(mystring): if mystring.strip() == 'age 18-25': return 21.5 if mystring.strip() == 'age 26-35': return 30.5 else: return None
Step 2: Create the udf (user-defined function)
The function extractAge() takes a single input and returns a single output of type float. The udf-syntax therefore is:
extract_age_udf = udf(lambda row: extractAge(row), FloatType())
The return type (here FloatType) can be any of the standard Spark datatypes
Step 3: Usage
Create a test dataframe:
df = sc.parallelize([[1., 'age 18-25'], [2., 'age 100+']]).toDF(["f1","age"]) df.show() >>> +---+---------+ | f1| age| +---+---------+ |1.0|age 18-25| |2.0| age 100+| +---+---------+
Apply function:
df_new = df.withColumn('age_n', extract_age_udf(col('age'))) df_new.show() >>> +---+---------+-----+ | f1| age|age_n| +---+---------+-----+ |1.0|age 18-25| 21.5| |2.0| age 100+| null| +---+---------+-----+
Levelling up: Many-In-Many-Out
Step 1: Define your function
Let’s assume we want to create a function which takes as input two columns and returns the sum and the difference of the two columns.
def sum_diff(f1, f2): return [f1 + f2, f1-f2]
Step 2: Create the udf
Since the function now returns a vector, we can’t just use the FloatType() data type anymore, we need to first assemble the schema of the output. Both the elements are of type float, so the schema looks like this:
schema = StructType([ StructField("sum", FloatType(), False), StructField("diff", FloatType(), False) ])
Having defined the schema, we can define the udf as follows:
sum_diff_udf = udf(lambda row: sum_diff(row[0], row[1]), schema)
Alternatively, we can define function and udf as
def sum_diff(row): return [row[0] + row[1], row[0]-row[1]] sum_diff_udf = udf(lambda row: sum_diff(row), schema)
and still get the same output.
Step 3: Usage
Create a test dataframe:
df = spark.createDataFrame(pd.DataFrame([[1., 2.], [2., 4.]], columns=['f1', 'f2'])) df.show() >>> +---+---+ | f1| f2| +---+---+ |1.0|2.0| |2.0|4.0| +---+---+
Apply function:
df_new = test_data.withColumn("sum_diff", sum_diff_udf(struct([col('f1'), col('f2')])))\ .select('*', 'sum_diff.*') df_new.show() >>> +---+---+----------+---+----+ | f1| f2| sum_diff|sum|diff| +---+---+----------+---+----+ |1.0|2.0|[3.0,-1.0]|3.0|-1.0| |2.0|4.0|[6.0,-2.0]|6.0|-2.0| +---+---+----------+---+----+
Update: I just found this post commenting on execution plans for the * expansion. It suggests, wrapping the results in an array and then exploding the array. While the exploding has some drawbacks, it means, that you only need to execute the udf once, which is good, since udfs are inherently slow to execute. Adapting this idea for the example above leads to a code like this:
df_new = ( test_data .select( '*', func.explode( func.array( sum_diff_udf(struct([col('f1'), col('f2')])) ) ).alias('sum_diff') ) .select('*', col('sum_diff.*')) ) df_new.show()
Example of What Happens if you get your Output Data Type Wrong
As mentioned above, if you get your output data type wrong, your udf will return nulls. Here is a modified version of the one-in-one-out example above. If we assume the return to be float, but in fact, the function returns an integer, the udf returns nulls. Code as above, but modify the function to return an integer while we keep on telling the udf that the function should return a float.
def extractAge(mystring): if mystring.strip() == 'age 18-25': return 21 if mystring.strip() == 'age 26-35': return 30 else: return None extract_age_udf = udf(lambda row: extractAge(row), FloatType())
Applying the udf will lead to the output:
df_new = df.withColumn('age_n', extract_age_udf(col('age'))) df_new.show() +---+---------+-----+ | f1| age|age_n| +---+---------+-----+ |1.0|age 18-25| null| |2.0| age 100+| null| +---+---------+-----+
Note, that it does not matter, which are the data types. As long as they are not consistent, the udf will return nulls.
Summary and Conclusion
In this post we have taken a look at how to convert a Python function into a PySpark UDF. We have looked at the cases of a simple One-In-One-Out situation and at a situation where our function has multiple input and output variables.
I hope this post has been useful for you! | https://walkenho.github.io/how-to-convert-python-functions-into-pyspark-UDFs/ | CC-MAIN-2020-10 | refinedweb | 926 | 66.23 |
SVG::Metadata - Perl module to capture metadata info about an SVG file
use SVG::Metadata; my $svgmeta = new SVG::Metadata; $svgmeta->parse($filename) or die "Could not parse $filename: " . $svgmeta->errormsg(); $svgmeta2->parse($filename2) or die "Could not parse $filename: " . $svgmeta->errormsg(); # Do the files have the same metadata (author, title, license)? if (! $svgmeta->compare($svgmeta2) ) { print "$filename is different than $filename2\n"; } if ($svgmeta->title() eq '') { $svgmeta->title('Unknown'); } if ($svgmeta->author() eq '') { $svgmeta->author('Unknown'); } if ($svgmeta->license() eq '') { $svgmeta->license('Unknown'); } if (! $svgmeta->keywords()) { $svgmeta->addKeyword('unsorted'); } elsif ($svgmeta->hasKeyword('unsorted') && $svgmeta->keywords()>1) { $svgmeta->removeKeyword('unsorted'); } print $svgmeta->to_text();
This module provides a way of extracting, browsing and using RDF metadata embedded in an SVG file.
The SVG spec itself does not provide any particular mechanisms for handling metadata, but instead relies on embedded, namespaced RDF sections, as per XML philosophy. Unfortunately, many SVG tools don't support the concept of RDF metadata; indeed many don't support the idea of embedded XML "islands" at all. Some will even ignore and drop the rdf data entirely when encountered.
The motivation for this module is twofold. First, it provides a mechanism for accessing this metadata from the SVG files. Second, it provides a means of validating SVG files to detect if they have the metadata.
The motivation for this script is primarily for the Open Clip Art Library (), as a way of filtering out submissions that lack metadata from being included in the official distributions. A secondary motivation is to serve as a testing tool for SVG editors like Inkscape ().
Creates a new SVG::Metadata object. Optionally, can pass in arguments 'title', 'author', 'license', etc..
my $svgmeta = new SVG::Metadata; my $svgmeta = new SVG::Metadata(title=>'My title', author=>'Me', license=>'Public Domain');
Alias for creator()
Generates an rdf:Bag based on the data structure of keywords. This can then be used to populate the subject section of the metadata. I.e.:
$svgobj->subject($svg->keywords_to_rdf());
See:
Returns the last encountered error message. Most of the error messages are encountered during file parsing.
print $svgmeta->errormsg();
Extracts RDF metadata out of an existing SVG file.
$svgmeta->parse($filename) || die "Error: " . $svgmeta->errormsg();
This routine looks for a field in the rdf:RDF section of the document named 'ns:Work' and then attempts to load the following keys from it: 'dc:title', 'dc:rights'->'ns:Agent', and 'ns:license'. If any are missing, it
The $filename parameter can be a filename, or a text string containing the XML to parse, or an open 'IO::Handle', or a URL.
Returns false if there was a problem parsing the file, and sets an error message appropriately. The conditions under which it will return false are as follows:
* No 'filename' parameter given. * Filename does not exist. * Document is not parseable XML. * No rdf:RDF element was found in the document, and the try harder option was not set. * The rdf:RDF element did not have a ns:Work sub-element, and the try_harder option was not set. * Strict validation mode was turned on, and the document didn't strictly comply with one or more of its extra criteria.
Gets or sets the title.
$svgmeta->title('My Title'); print $svgmeta->title();
Gets or sets the description
Gets or sets the subject. Note that the parse() routine pulls the keywords out of the subject and places them in the keywords collection, so subject() will normally return undef. If you assign to subject() it will override the internal keywords() mechanism, but this may later be discarded again in favor of the keywords, if to_rdf() is called, either directly or indirectly via to_svg().
Gets or sets the publisher name. E.g., 'Open Clip Art Library'
Gets or sets the web URL for the publisher. E.g., ''
Gets or sets the creator.
$svgmeta->creator('Bris Geek'); print $svgmeta->creator();
Gets or sets the URL for the creator.
Alias for creator() - does the same thing
$svgmeta->author('Bris Geek'); print $svgmeta->author();
Gets or sets the owner.
$svgmeta->owner('Bris Geek'); print $svgmeta->owner();
Gets or sets the owner URL for the item
Gets or sets the license.
$svgmeta->license('Public Domain'); print $svgmeta->license();
Gets or sets the date that the item was licensed
Gets or sets the language for the metadata. This should be in the two-letter lettercodes, such as 'en', etc.
Gets or sets the XML retention option, which (if true) will cause any subsequent call to parse() to retain the XML. You have to turn this on if you want to_svg() to work later.
Gets or sets the strict validation option, which (if true) will cause subsequent calls to parse() to be pickier about how things are structured and possibly set an error and return undef when it otherwise would succeed.
Gets or sets the try harder option option, which causes subsequent calls to parse() to try to return a valid Metadata object even if it can't find any metadata at all. The resulting object may contain mostly empty fields.
Parse will still fail and return undef if the input file does not exist or cannot be parsed as XML, but otherwise it will attempt to return an object.
If you set both this option and the strict validation option at the same time, the Undefined Behavior Fairy will come and zap you with a frap ray blaster and take away your cookie.
Gets or sets an array of keywords. Keywords are a categorization mechanism, and can be used, for example, to sort the files topically.
Adds one or more a new keywords. Note that the keywords are stored internally as a set, so only one copy of a given keyword will be stored.
$svgmeta->addKeyword('Fruits and Vegetables'); $svgmeta->addKeyword('Fruit','Vegetable','Animal','Mineral');
Removes a given keyword
$svgmeta->removeKeyword('Fruits and Vegetables');
Return value: The keyword removed.
Returns true if the metadata includes the given keyword
Compares this metadata to another metadata for equality.
Two SVG file metadata objects are considered equivalent if they have exactly the same author, title, and license. Keywords can vary, as can the SVG file itself.
Creates a plain text representation of the metadata, suitable for debuggery, emails, etc. Example output:
Title: SVG Road Signs Author: John Cliff License: Keywords: unsorted
Return value is a string containing the title, author, license, and keywords, each value on a separate line. The text always ends with a newline character.
Escapes '<', '>', and '&' and single and double quote characters to avoid causing rdf to become invalid.
Generates an RDF snippet to describe the item. This includes the author, title, license, etc. The text always ends with a newline character.
Returns the SVG with the updated metadata embedded. This can only be done if parse() was called with the retain_xml option. Note that the code's layout can change a little, especially in terms of whitespace, but the semantics SHOULD be the same, except for the updated metadata.
XML::Twig
Bryce Harrington <bryce@bryceharrington.org>
This script is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~bryce/SVG-Metadata-0.28/lib/SVG/Metadata.pm | CC-MAIN-2013-20 | refinedweb | 1,186 | 55.44 |
I need you to calculate the total time it will take this ceiling fan to come to a stop, and then explain your calculations. I think the keywords are “rotational kinematics” but I’m way out of my depth here. I’ll call this off and post the third act (the answer) when someone gives an answer that’s inside a 10% margin, paired with an explanation. The winner gets the keys to my heart.
Here’s a video that (I hope) will simplify your analysis. [download]
BTW: Let’s get a few guesses in there also. Totally from the gut, informed by experience only. I’m interested to see who gets closer — the analysts or the guessers.
BTW: Don’t hesitate to get your students in on this also.
BTW: Yes, I already gave this a try but I was undone by the fact that I gave both the first and third act in advance which led to a lot of handwaving and glossy explanation.
2011 Mar 9. Frank Noschese gets inside the 10% envelope. His explanation, screencast, and Python script. Here’s the answer video, also.
70 Comments
RileyMarch 8, 2012 - 8:45 am -
I’d expect friction from the base to be constant, and friction from the air to be decreasing with the square of the speed. It’s been 10 years since I studied anything with the word “rotational” in it, but I think you could figure this out with a kind of ordinary graph of rotations per second.
My gut-check guess: 90 seconds
Chris LustoMarch 8, 2012 - 9:49 am -
I think rotations/second is probably a decent way to go. I’m no physicist, but I’m guessing the (constant?) friction from the base overwhelms any air resistance, so you end up with a roughly linear relationship. I timed a few individual rotations, turned that into rotations/second, and plotted that value against the time. Once you swag a line through there (and it looks pretty linear), let’s say we get to 0 rotations/second in the neighborhood of 40 seconds after you yank the chain.
JohnMarch 8, 2012 - 10:02 am -
I pulled this into Tracker video anlaysis, and measured the initial angular velocity to be 4.68 rad/s, and the angular acceleration to be 0.135 rad/s/s, which gives a time to stop of about 35 seconds.
Tim OltmanMarch 8, 2012 - 10:11 am -
Rotational Kinetics is the way to go for this problem. Now with all physics we have to make some assumptions. The assumptions I made was to neglect air resistance and assume a constant deceleration. I used the following rotational kinematics equations:
ω = ω0 + αt
Now in this equation the first term stands for the final rotational velocity which will be zero. The second term stands for the initial rotational velocity which using the video I got to be about 1rev/second. The time is what we are solving for. Which leaves us with alpha. This is our deceleration and we need to find it. Now using the video again I got the velocity at 15 seconds to be .584 revolutions/second. This allowed me to use this above equation to get an approximation for the deceleration. I got that to be -.027 revolutions/second/second. The negative confirms that the fan is slowing down and we know we are on the right path. Now a note here since I am human my timing can be off it would be better if you could be there with a stop watch and make the measurements, but even then their will be some error. Now back to the problem. Using the deceleration of -/027 revolutions/second/second we can solve for t. Doing this I got a time of 37.04 seconds. Feel free to correct me anywhere I made a mistake.
Josh GMarch 8, 2012 - 10:27 am -
Interesting – I used Tracker as well, and got somewhat different results, until I realized that I had to set the frame rate manually to 60 fps. I got an initial angular v of about 9.5 rad/s, an ang. accel. of .272 rad/s^2, and a time of 37.3 seconds from the beginning of the video. I only fit the line to the section of the graph after the switch was pulled. Changing the frame rate seems like a barrier to entry here, though.
[IMG][/IMG]
Mark WatkinsMarch 8, 2012 - 10:38 am -
I can explain how you would do it (my degree is physics) and it is rotational kinematics. Essentially you’d pick a blade and plot it’s angular position as it goes around including all full circles of rotation it’s gone through already. The kinematic equation you are after is a quadratic with time and you can fit a curve like any other quadratic. since it’s slowing down the “a” coefficient of the quad will be negative and you are looking for the vertex to determine the time at which it stops. Without the ability to download quicktime movies of Vimeo I am unable to throw it in LoggerPro at this time but the analysis is pretty easy to do once it’s in there.
Mark WatkinsMarch 8, 2012 - 10:41 am -
I wouldn’t use the acceleration/velocity kinematic equation because measuring instantaneous velocity at the start it a much harder thing to do than the initial position.
Dan MeyerMarch 8, 2012 - 10:46 am -
@Mark, there’s a link to a QuickTime download of the video.
MsPoodryMarch 8, 2012 - 10:48 am -
Not much to add, though I have done this with the video analysis function in Logger Pro. I then usually copy and paste into Excel for the analysis (I know, I know). First, convert x and y coordinates to theta coordinates. You can’t do this with the ATAN function, you need to use ATAN2. Then plot theta vs time. You will have to manually add 2*pi after one full rotation, then again after the second, and so on.
Once you have a column of times and a column of corrected thetas, do a new column of delta theta over delta time, and graph that vs. time. Then use the slope of that graph to determine angular acceleration. Or, don’t bother with the graph and use Excel’s LINEST function to generate the slope and intercept, with uncertainties, since there will undoubtedly be uncertainty in your result.
(i.e. we can’t predict when it will stop and guarantee you that that exact moment is when it will stop. There will be some range of possible values. Ideally, we would have data for multiple trials to reduce the uncertainty).
Dan MeyerMarch 8, 2012 - 11:04 am -
For me, the proof is in the physics pudding. If I’m a student of yours (and for the moment I am) and you tell me that physical models can explain the world around me, including that fan, I’m going to want to SEE that. So merely explaining an analytical method leaves me kind of cold. Explaining an analytical method that’s led to an answer within 10% of the margin? That’s hot.
And no one’s done that yet.
Greg SchwanbeckMarch 8, 2012 - 11:08 am -
I got 30.5 seconds.
The very first thing I did was time the video with a stopwatch to determine that the video was being played at half speed.
I wanted to use vf=vi+at, with vf = 0 rad/s–the final angular velocity), vi = the initial angular velocity, and a the angular acceleration.
I determined vi by counting rotations before you pulled the off string and timing with a stopwatch. I did a bunch of trials and got an average angular speed of 9.283 radians per second.
To find a, I first counted the number of rotations after you pulled the string to get the angular displacement during the 25.79 seconds it slowed down. I got 22 rotations which is an angular displacement of about 138 radians. Then I used
x=vit+0.5at^2 to find a. (x=138, vi=9.283, and t=25.79)
This yielded an angular acceleration of -0.304.
Plugging this in to vf=vi+at gave me a t of 30.5 seconds.
Tim OltmanMarch 8, 2012 - 11:21 am -
Their is one problem with wanting it within 10% error. Physics is not math and since I have the pleasure of teaching both I see this problem a lot. With math you want an answer and you want it to be right on or very very close. This is not how physics works. Sure we can get close but that is all. With this problem we have to make a lot of assumptions. You have to assume no air resistance, constant deceleration, that their is a uniform mass, and many other things. All of physics is about models and all of those models are wrong but they explain things the best we can with them. I do not know if getting within 10% is possible without making this problem really ugly with a lot of other things built into the equation. For example air resistance which we then need surface area of the fan blades. If you want to get within the 10% the only way I see us doing that is with logger pro or some other computer program, because doing it by hand with formulas will only get you so close. I guess my whole point of this is the difference between physics and math. In math you want an answer, but in physics you want to understand how it works and get close because you understand that you are leaving things out that will effect your results. The real world is very messy and hard to get precise answers.
JimMarch 8, 2012 - 11:23 am -
This is in fact a more complicated problem that you would see in a freshman physics (I/II) class, since there is a significant velocity-dependent force from air resistance.
Are you looking for the full treatment, or the simplified “ignoring air resistance” model? A brief perusal of Serway (3rd ed.) shows that air resistance is discussed as a velocity-dependent force, but I do not see a treatment of (for example) projectile motion incorporating drag. I suspect this topic came up in the later mechanics course(s).
Mark WatkinsMarch 8, 2012 - 11:37 am -
Just sent you my excel. Here’s my method. I used the post-its and VLC in slow motion to record the time at the end of each rotation. I made a table out of the time-rotations pairs and let excel fit a quadratic (which I already knew is right for constant acceleration). Then I found the vertex using -b/2a which corresponds to the time when the fan stops rotating forward. That gave me 33.7 seconds. I’m willing to bet it’s good to within 10%.
Mark WatkinsMarch 8, 2012 - 11:44 am -
@Jim all physics is an approximation so at some point you have to simplify something. For a 10% margin of error you could ignore a lot of important factors including calling air resistance approximately a constant for most of the slowdown. If you really wanted to get into the hairy of it you could use a discussion of internal friction relating to speed to guess that what ever number you came up with it’s probably a touch earlier since friction explodes near the end.
Daniel PetersMarch 8, 2012 - 12:29 pm -
Solving a second-order differential equation (in which I assume friction to be proportional to the angular velocity and approximating air resistance as a constant), I get 58.82 seconds.
Kenneth FinneganMarch 8, 2012 - 12:50 pm -
The problem eventually distills down to the fact that, as the fan slows, the acceleration changes, as a function of the velocity of the fan. Trying to break out each individual force is apt for failure, so I decided to simply model this with a second order relationship between velocity and acceleration (due to exponential grad, and lower order terms).
I stepped through the video and noted every moment any blade pointed down, and then used those points to calculate the angular velocity ( * 4, due to there being 4 blades). Smooth it out a little due to aliasing with your 30Hz frame rate, find the polyfit between the acceleration and velocity, and then take every initial point, model it out to finish using 0.01sec step sizes, and find the average finish time. This could probably be improved by weighing the later data points heavier due to the signal-to-noise ratio getting better as the fan slows.
In the end, my favorite answer is 46.7 seconds, but if you smooth the data using an even filter (point = mean(point…point+3) instead of an odd filter, the model jumps to ~62 seconds, so I can’t tell you which one is a better solution.
Matlab code:
Results:
Ben WildeboerMarch 8, 2012 - 1:34 pm -
OK, so at first I wanted to use some Work-Energy fun to figure this out, but I’m not sure you can do that without knowing the moment of inertia of the fan, which I’m not sure we can find so easily.
So, I used the video to measure the time when each blade passed the line. Since the blades are 90-deg apart (ð¿/2 rads), I created a table in Excel with radians & time. I then added a best fit polynomial, and took the derivative of that and solved for y = 0 rad/s. Included all the data, it worked out to 34.3 s. However, I noticed the best-fit line didn’t fit all that best. As commenters above have already pointed out, air resistance is most likely an important factor and can’t be eliminated. I then took just the last 5 seconds of data (figuring with lower angular velocity the effect of air resistance would be less) and repeated the above process, giving a time of 49.9 s.
Obviously air resistance is having a significant impact on the angular acceleration, which makes this problem more complicated. I’d say my answer of 49.9 seconds is definitely low, but possibly within 10%. Or not.
This problem would have a lower barrier to entry if we could find a situation where air resistance wouldn’t play a major role. However, I’m drawing a blank on a household object that could be used as a replacement that isn’t some sort of fan…
James ClevelandMarch 8, 2012 - 1:38 pm -
Isn’t it a bit unfair to expect within 10% of Act 3 when we don’t have Act 2, only Act 1?
Jason DyerMarch 8, 2012 - 1:52 pm -
Dan wanted some guesses from the gut and since I don’t have time right now for equations, I’m guessing 41 seconds.
Paul WolfMarch 8, 2012 - 1:55 pm -
This is not going to help, but my IQ drops significantly when I try to change the setting on any ceiling fan. Always one too many or too few clicks.
MsPoodryMarch 8, 2012 - 2:07 pm -
Dude, you need to make the model yourself, then. Having me give you the model and have it work is not going to ignite anything. You analyze it, you come up with a model that works, then you show if off to us. That is what I want my students to do. So if youvare being a student, pretend we all just gave you this video and you have to figure it out!
Have fun!
David GarciaMarch 8, 2012 - 2:43 pm -
Loving this! I did not use any video analysis software, since the time is right on the video. (BTW I love the colored paper showing that the outside of the blade is faster than the inside- in every frame it is always blurrier!) I recorded time rounded to the tenth of a second each time a fan blade crossed the white line. I then made a plot of angle vs. time. It was something between linear and quadratic looking, so along with most others I assumed it was quadratic. I got the computer to tell me the equation of the line and from the derivative got an initial velocity of 7.866 rad/sec and a constant acceleration of -0.2252 rad/sec/sec. These values gave me a time to stop of 34.9 seconds.
This was much less then I thought it would be when I guessed. Looking at act 1 I thought it would stop around 60 or 70 seconds. My too high guess would be 120 seconds and my two low guess would have been 30 seconds.
I think its interesting that my values for acceleration and velocity don’t match anyone else’s (in fact most posters seem to be disagreeing with each other’s results), even though I think I’m the third one to come up with an answer in the 35-40 second range.
I wish I was better at estimating uncertainty in my measurements. I’d like to be able to put a +/- value after my answer. Given that I assumed I could measure time within .1 seconds for each quarter rotation, can anyone tell me what the uncertainty in my answer should be?
Pete VMarch 8, 2012 - 3:15 pm -
Quick question. If you turned off the fan again, would it still take the same amount of time to stop?
YaacovMarch 8, 2012 - 3:27 pm -
Using an advanced neural network to model this, I’ve calculated 18.382 seconds from the time you turn it off to when it stops.
AndrewMarch 8, 2012 - 4:03 pm -
oops…with explanation. I put time into L1, # of spins into L2 (not hashtag ofspins…some of my students the other day didn’t know that # meant number. Led me into a discussion of the octothorpe…), let my ti-83 do the legwork on a quadratic regression, found the max of the parabola it spit out, and it was 36.76 seconds. I don’t know what made me put 37-38 instead of 36-37, but oh well. It’s still within 10% of my guess so I’ll take it
YaacovMarch 8, 2012 - 4:09 pm -
And using a TI-83 quadratic regression, I get that it never stops, but the minimum speed occurs around 42 seconds.
A question for the physicists: I remember something about dynamic friction and static friction. Does this work similarly in reverse? Is there some positive speed at which the static friction takes over? And if so, is it calculable from the dynamic friction, or is it another parameter we have to guess at?
Roy WrightMarch 8, 2012 - 6:27 pm -
Wouldn’t the exact time the fan completely stops be largely determined by small-scale features and forces that are inherently hard to model?
Dan MeyerMarch 8, 2012 - 6:34 pm -
MsPoodry:
You misunderstand my relationship to your discipline. I don’t think the models work. I know the fan will eventually stop moving but I don’t think any model can predict anything more precise than that. You come along and say, “Physics can.” I say, “Teach me. Show me. Prove it to me.”
Now you say, “You go do it.”
Tim Oltman:
So what kind of percent error should I allow here? If the models are useful at all, they should get me somewhere in the ballpark. I’m not requiring perfect accuracy. I’m asking you what kind of margin of error is fair to the difficulties of modeling the world.
James Cleveland:
It’s fair for you to say, “I need more information,” but not every problem does. The physicists in the room all admit to the imperfections of the model but none seem to think they’re insufficiently informed.
Pete V:
Great question. I ran it twice. Tomorrow, I’ll post both videos.
Roy Wright:
Maybe. For me, it’s an open question whether or not these models work to any useful degree. The physicists seem to have some confidence, though.
Roy WrightMarch 8, 2012 - 8:04 pm -
For my part, when I naively fit the data to an ODE of the form
θ” = – a – bθ’ – c(θ’)²
I end up with a stopping time of about 57 seconds. I have little confidence in that estimate, though.
Frank NoscheseMarch 8, 2012 - 9:23 pm -
My model predicts 78.9 seconds.
I used your ACT 2 video and used it like a student would.
1. I determined the angular speed of the fan before you turned it off: 9.4 radians/second.
2. I recorded the time each time the fan blade with the sticky on the end crosses the white line, along with the rotation number. Then I converted rotation number to angular distance using (theta) = (2*pi*Rotation#).
3. I graphed the data from #2 in Logger Pro as a Theta vs. Time graph.
4. I tried fitting a parabola to the graph, which would be the case for a constant angular deceleration. This would be true if there was a constant frictional torque and no air resistance. But the quadratic didn’t fit too well, so I now have to factor in air resistance.
5. One model for air resistance is that air resistance is proportional to the square of the speed (F_air = kv^2). This makes the net torque on the fan not constant, but decreasing over time. In other words: T_net = T_friction + T_air.
6. Since angular acceleration is directly proportional to torque (#5), we also know that: alpha_net = alpha_friction + alpha_air
7. But from #5 we also know F_air is proportional to T_air which is proportial to alpha_air. So we can say that alpha_air is proportional to F_air which is proportional to v^2. So alpha_air is proportional to v^2: alpha_air = b*omega^2
8. So now we have an equation for the angular acceleration at anytime, including air resistance: alpha_net = alpha_friction + alpha_air
9. Rather than solving a second-order differential equation, I wrote a Python program to model the process in small time-steps. The program graphs the data I recorded from the video and the data points my model predicts. If the two data sets didn’t match, I tweeked the constants in my model until they do
.
10. The program does this:
* Start with omega = 9.4 rad/s
* Use that value of omega to calculate alpha_air
* Use that value of alpha_air to calculate alpha_net
* Use that value of alpha_net to calculate omega a short time later.
* Use that new omega to calculate a new theta a short time later.
* Use that new omega to calculate a new alpha_air
And the cycle repeats itself until omega reaches zero.
11. Two problems, however, were determining the value of alpha_friction and the proportionality constant (b) in alpha_air. So, what it did was return to logger pro and fit a quadratic to the first few data points to get an approximate value for alpha net at the beginning (0.509 rad/s^2, i.e., twice the t^2 coefficient). I fit another quadratic to just the last few data point to get an approximate alpha_net at the end of the video (0.09702 rad/s^2). I used the slope tool to find an approximate omega at the beginning (9.4 rad/s) and end (3.3 rad/s). Then I can use those values to write a system of equations:
0.509 rad/s^2 = alpha_friction + b*(9.4 rad/s)^2
0.04851 rad/s^2 = alpha_friction + b*(3.3 rad/s)^2
And then had Wolfram|Alpha solve for alpha_friction and b — these would be starting point for the model.
12. I graphed both the movie data and the model data. The data fit a bit at the begining, but not at the end. So I tweeked the alpha_friction and b values until the the data sets overlapped.
I’m not sure this makes much sense to you, but I’m dying to see the third act.
Frank NoscheseMarch 8, 2012 - 9:40 pm -
My VPython Program:
from visual import *
from visual.graph import *
#intitial values
theta = 0
omega = 9.4
t = 0
dt = 0.1
alpha_fric = 0.04
b = 0.007
alpha_air = b*omega*omega
#data points from video
points = [(0,0),(0.7,6.28318530718),(1.4,12.5663706144),(2.16,18.8495559215),(2.93,25.1327412287),(3.77,31.4159265359),(4.63,37.6991118431),(5.57,43.9822971503),(6.53,50.2654824574),(7.57,56.5486677646),(8.6,62.8318530718),(9.74,69.115038379),(10.91,75.3982236862),(12.14,81.6814089933),(13.44,87.9645943005),(14.81,94.2477796077),(16.31,100.530964915),(17.88,106.814150222),(19.58,113.097335529),(21.38,119.380520836),(23.35,125.663706144),(25.45,131.946891451)]
#set up graph
data = gdots(pos=points, color=color.blue)
model = gdots(color=color.yellow)
#loop
while omega > 0:
model.plot(pos=(t,theta))
alpha_air = b*omega*omega
alpha_net = alpha_fric + alpha_air
omega = omega – alpha_net * dt
theta = theta + omega * dt
t = t + dt
print t
Frank NoscheseMarch 8, 2012 - 9:56 pm -
You can see the graph of both the model and video data here:
The physics model data is blue. The data from the fan video is yellow. (Opposite what the program above says…I swapped colors for easier viewing.)
You can see the graph levels out around 70 seconds, though the program officially reaches zero speed at 78.9 seconds. Hope that is within 10%
Mark WatkinsMarch 8, 2012 - 10:11 pm -
So can we take your withholding of the full video to mean no one is right to within +/- 10% or are you giving people some time to hit you with what they’ve got before you post who won the keys to the aforementioned heart?
Fawn NguyenMarch 9, 2012 - 12:39 am -
From gut and years of lying awake in the humid heat of Vietnam and staring up at the ceiling fan (that rarely came to a stop): 30 seconds.
Not caring much, however, for how long it takes a ceiling fan to stop — I turned off the fan and left the room already.
JohnMarch 9, 2012 - 4:32 am -
Is there any chance you could post the act two video without the white line? It throws off autotracker every time a post it crosses the line.
RileyMarch 9, 2012 - 5:57 am -
Yeah, Frank! I wish you’d said 81s, though – it is going to crack me up if 90 seconds is within 10%. I would have said 85 but I didn’t want to commit to two significant digits ;)
I hope all the physics teachers in the room send their students to this comment stream.
Also: re. ignoring air resistance: Maybe you can ignore it in a lot of these problems. I always felt alright about doing that for, you know, a marble rolling down a track. But this is a FAN, everyone. They build air resistance INTO it. Air resistance is its PURPOSE!
Andrew SMarch 9, 2012 - 7:07 am -
From the gut: 53 seconds
Love the idea! I will use Monday as a warmup with one of my classes!
Dan MeyerMarch 9, 2012 - 7:39 am -
Mark Watkins:
Here’s a link to the full video. Frank was the first analyst to beat the margin. (Credit to Riley also for a great gut-check.)
Thanks for the lucid description, Frank. Two questions to follow up:
1. How did you calculate the value for alpha_fric?
2. Since you’re the only analyst (I believe) who worked with air resistance, why was your answer so much larger than the others? Neglecting air resistance would indicate a fan that would spin down slowly, only slowed by kinetic friction. Right?
RileyMarch 9, 2012 - 7:56 am -
haha – thanks for a great lesson, Dan.
Ben WildeboerMarch 9, 2012 - 8:02 am -
To #2: When you neglect air resistance, you’re assuming the resistance forces are constant from the beginning to the end. Neglecting air resistance gives you an artificially short answer because, in reality, the air resistance goes from being a major factor (at the beginning) to a very small factor (at the end).
When you’re pretending that the resistance forces are constant you’re missing the fact that near the end of the fan’s rotation there’s much, much less total resistive torque than initially, so the angular acceleration is much less, so the fan take much more time to actually stop.
Frank: Is this the type of modeling/analysis that you would expect your students to be able to handle, or would it be a bit of a challenge for them?
Dan MeyerMarch 9, 2012 - 8:05 am -
Thanks for the explanation on the air resistance, Ben. No one discounted it. Everyone implicitly called it constant whereas Frank had it decreasing.
Daniel MillerMarch 9, 2012 - 8:10 am -
I used a stopwatch and a geometric series. observed how long each of the first ten rotations took and created a geometric series where each term represents the rotations per second. I got an average common ration of .947 and used the formula for the sum of an infinite geometric series using an initial rotations per second of 1.54. I ended up with an answer of 29.03 which seems kinda low but my observed times were probably quite inaccurate. regardless this seems like a good way to approach the problem for an Algebra 2 class, if you are dealing with sequence and series.
GregMarch 9, 2012 - 8:11 am -
At this point, I’d be interested in looking at a distribution of the guesses. It looks binomial (as much as it might look like anything at all) and I wonder if there’s any tentative conclusion to be drawn about:
1. physics modeling (the question that Dan seems most interested in)
2. data quality (there seem to be issues with the video and certain tools?)
3. choice of model to apply
I guess I’m saying the more interesting questions to me have all turned meta at this point. I don’t really care about the fan anymore. :)
Brian KMarch 9, 2012 - 8:20 am -
Did anyone take into consideration air density in their calculations. Things like temperature and relative humidity would also create more or less friction. Because the first thing I do in my head is start comparing what I see to alternate version to make predictions.
If I removed the air from the system, would that increase or decrease the time? Of course that would increase the stopping time, that thought brought me right to the idea that air resistance would also be directly proportional not just to surface area of the blade and the velocity of the blade, but the the air density.
Thus to knowing this third aspect would in fact, along with Frank’s data, allow for an even more precise answer.
JohnMarch 9, 2012 - 8:37 am -
I the presence of air resistance early on makes the angular acceleration much greater than the acceleration due to friction alone (which is very small), and so we underestimate the time it takes to stop.
Air resistance is the dominant factor in determining the acceleration early on, so if you “neglect air resistance” you really are assuming a much larger value for the acceleration than the actual value.
zetteMarch 9, 2012 - 9:27 am -
i tried every fan in our house (5- all different shapes) and at every speed setting….they all came to complete rest at 1 min 20 sec to 1 min 34 secs.
Sue VanHattumMarch 9, 2012 - 9:47 am -
(Re Fawn not caring) The reason I care is that I pull that cord a few times, and I can’t even tell if it’s stopped running. The slow speed and the no speed seem the same at first. My gut reaction was that it’s close to 2 minutes, but that’s because it seems like forever.
Frank NoscheseMarch 9, 2012 - 10:30 am -
The total angular deceleration of the fan at any instant is the sum of both a frictional deceleration (constant and independent of speed) and an air resistance deceleration (depends on the square of the speed).
In other words:
alpha_net = alpha_fric + alpha_air
but
alpha_air = b*omega^2
(b is a constant related to the drag coefficient of your fan)
we can write
alpha_net = alpha_fric + b*omega^2
Only problem is, I don’t know the values of alpha_fric or b for your fan. However, if I can get alpha_net and omega at two different times, I can write a system of equations to solve for the two unknown (alpha_frict and b).
So task 1 is estimate alpha_net at the beginning and end of the video clip. I did this by fitting a quadratic to just the first 4 data points, and then fitting another quadratic to the last 4 data points. The “A†coefficient in the quadratic fit is ½*alpha_net.
Task 2 is to estimate omega at the beginning and end of the clip. I did this by using the slope tool (slope of theta is omega)
This gives me the following 2 equations:
0.509 rad/s^2 = alpha_friction + b*(9.4 rad/s)^2
0.04851 rad/s^2 = alpha_friction + b*(3.3 rad/s)^2
And then had Wolfram|Alpha solve for the system for alpha_friction and b.
These values of alpha_frict and b would be the starting point for the model. I then tweeked them until the model model matched the data.
**************
WillyB answered your second question perfectly!
Mark WatkinsMarch 9, 2012 - 10:53 am -
Well, hopefully this convinced you that some model worked, even though it’s messier than a lot of us initially thought. It’s clear, now, that air resistance plays a much bigger part in the process. Makes sense in retrospect since the internal bearings are probably close to zero torque on the blades.
Brian KMarch 9, 2012 - 11:08 am -
I showed the videos to my basic Physics students and these were the questions that arose from our discussion.
1. Does the curved angle of the blade, from a fluid dynamics stand point, in that it produces and downward or upward force on air (depending on the direction it is moving) create a high and low pressure on opposite sides of the blade, and does that decrease the total magnitude of the frictional forces?
2. Does the cavitation in the fluid left behind each blade alter the air flow in such a way to increase frictional forces (forcing the blade to wobble up and down)? Is this also directly proportional to the speed?
3. Is there a drafting effect by a blade on the one behind it that reduces the overall magnitude of friction experienced? Is this drafting effect also directly proportional to blade speed?
Just some thoughts from High School seniors.
David GarciaMarch 9, 2012 - 11:20 am -
31. “I know the fan will eventually stop moving but I don’t think any model can predict anything more precise than that. You come along and say, “Physics can.†I say, “Teach me. Show me. Prove it to me.â€
Sometimes physics can predict a good model sometimes it can’t. We don’t know the answer unless we try. The best physics courses will teach students how to try.
Here’s how I’m trying:
Process A: look at Act Two and try to see if a useful mathematical model fits the observed data.
1. Graph the angular position vs. time
2. It looks curved, slowing down with constant acceleration is easy, I’ll try a quadratic fit.
3. Take the derivative, set it equal to zero I get an answer and post it in the comments [35 seconds]
4. Read more discussion about the importance of air resistance, so I try an inverse exponential fit [on logger pro: A(1-exp(-Ct))+B]
5. Use derivatives to create velocity and acceleration functions that I graph to see if they make sense
6. Start thinking more seriously about friction and air resistance:
Ceiling fans are designed to have very low friction at the axle. Friction would make a fan use more electricity and be inefficient. Additionally it would likely lead to more noise, and no one wants a noisy fan in their bedroom or living room. I wonder if a physics classroom model based off data collected from a “Smart Pulley” would be comparable to this situation.
So initially air resistance causes it to decelerate. And then when the speed gets low enough, I suspect friction takes over. So I imagine a graph of acceleration vs. time. There is a constant line close to zero that represents friction. And then there is a curve that starts further from zero that represents air resistance. As the fan slows the air resistance line approaches zero. But when the air resistance line get closer to the friction line, the effects of friction become more important and then the motion would seem to change abruptly.
My angular velocity curve never actually intersects zero, leaving me to guess at what point it’s close enough to zero for friction to take over. I’ll guess that friction becomes the most significant retarding force around 50 seconds and it stops at 55 seconds.
7. Try to figure out if there are other approaches
Approach B: think about which concepts of physics apply to this situation and build a theoretical model based on what those concepts predict should happen.
1. If all the forces are known, and the mass and rotational inertia, then the acceleration can be predicted.
2. I need to know what the friction force is- this can be measured at low speeds by determining how much acceleration results from applying a small, known torque.
3. I need to make a model of air resistance, perhaps measuring cross sectional area
4. At this point I start to lose interest in the problem. I wonder if other physics teachers would preserver to create a theoretical model. The model couldn’t answer any specific questions about Dan’s fan- but indicate what we need to know about Dan’s fan to be able to answer the question.
So then my thinking shifts gears. And I wonder about what the best way to contribute to the discussion, wonder about the implication for my own teaching, and wonder who’s more likely to be engaged by this task, a research physicist or a high school physics teacher?
Although many systems we observe are too chaotic to build an accurate model from basic concepts, introductory mechanics is still useful and important. At its best, a mechanics course would teach students to wonder about how the basic concepts show up in the world around us and give students the skills to determine if a situation is too chaotic for a predictive model. Most classroom systems that we make models from are simplified and students no longer recognize them as real world. Things that naturally happen around us are often too chaotic to demonstrate the physics concepts we want to see. What this discussion is showing me as that we need more systems that are at the boundary of chaotic and contrived. The ceiling fan may be a good example; a bungee jump may be another one.
One key question you should always thinking about when designing good tasks for students is do professionals ever do this task in completing their job. By that standard, the purpose and methods for analyzing the fan’s motion shifts. The best ceiling fans are not designed by optimizing theoretical models, they are designed by building something, observing how it works and then building something better. Indeed it is the very rare object or task that is improved by understanding the theoretical model behind it. So then a significant portion of our physics tasks should be to build a system make systematic and relevant measurements, and then improve it based on those..
Kevin MartzMarch 9, 2012 - 12:06 pm -
I love your comment:
.”
You kind of beat me to the punch there. Its all about making sense of the world around us.
I wonder what percentage of your loyal readers are physics teachers?
SamMarch 9, 2012 - 1:48 pm -
I didn’t have time/energy to work on it yesterday, and I’ve seen the answer, but the process is still interesting to me since there were a LOT of inaccurate answers.
Since I’ve been teaching AP Physics C, I was thinking in terms of the inverse exponential growth model for the angular position of the fan (A(1-e^(-t/time constant)). Looking at data plots shown here made me hopeful. However A can’t be known since it’s the eventual position of the fan which isn’t known in advance.
Taking the derivative of this gives the exponential decay of the fan’s angular velocity A(e^(-t/time constant)). Using others’ initial angular velocities (A = 9 ish rad/s), you can find what time corresponds to the angular velocity being 37% (or e^-1) of its initial value of 9. This happens around 20.5 seconds according to data borrowed from Frank. This data was not as accurate as it could be since it just looked at when the same blade came around to the same point time after time. 20.5 seconds is then the time constant and you have a predictive equation for ang velocity over time. 9ish(e^-t/20.5).
I was feeling pretty good about this when I realized that I would hit a road block in that this function never actually reaches zero when a real fan’s velocity does. So at some point you have to set a minimum velocity for the fan where the bearing friction is finally a bigger factor than air. According to Excel and Act 3, this slowest possible speed was approximately .14 rad/s or about 8 degrees per second for this fan. However, you could probably test most ceiling fans and find this critically slow speed without knowing the answer by a little testing since it’s probably stable over a large range of initial angular velocity speeds and perhaps even different models of ceiling fans.
I think a Pre-Calc class or a Physics class that is familiar with exponential decay could use that model to do a pretty good job of predicting the stopping time for a ceiling fan. It would also be great practice for using Euler’s number, natural logs and solving exponential equations.
… and I totally would have gotten it if I would have seen it earlier! :)
Andrew CasperMarch 9, 2012 - 2:06 pm -
I spent a little time fooling around with various differential equations to explore how well they modeled the fan’s behavior. I created a brief writeup of the results which you can see at.
The post describes two cases, one where the drag is proportional to velocity and another where drag proportional to velocity squared (similar to what Frank talked about above but with a constant drag coefficient). Based on these models I’d predict the fan to stop somewhere around 70 seconds.
Thanks for posting this video, I was looking for something just like this. I was actually planning on propping my bike upside down and filming the tire spin freely this weekend before I came across your post. So you saved me some time!
Fawn NguyenMarch 9, 2012 - 4:04 pm -
Sue, thanks, I was thinking of a ceiling fan with an on/off switch, but I know exactly what you mean with a pull cord. I’ve done that a few times and had to stick around to see. I appreciate seeing all the discussion, only wish I could understand it more.
aaronthillMarch 10, 2012 - 5:13 am -
To add slightly (by way of intuitive explanation) to Ben’s excellent comment above about air resistance:
On the one hand, air resistance surely ought to be the dominant cause of deceleration when the fan is moving quickly. The purpose of a ceiling fan is to circulate air and this is done by the air resistance on the blades. If air resistance is not the dominant cause of deceleration at “working speed” then we have an inefficient ceiling fan.
On the other hand, it seems natural that air resistance should play almost no part in the deceleration when the fan is going very slowly. Everyday experience indicates that air resistance is very small when the speed of an object is small (relative to the air).
I don’t know the nuts and bolts of physics well enough to know how “air resistance” and “other friction” should fit together into a formula, but both need to be taken into account.
mr bombasticMarch 10, 2012 - 5:37 am -
One “hands on” way to see that a parabola is not a good model is to use part of your data to build the model and see how well it “predicts” the other part of your data.
I fit 4 parabolas to Frank’s data using: 1st six values, 2nd six values, 3rd six values, last six values. The predicted stopping times for the models are 19 sec, 30 sec, 36 sec, and 47 sec. This sort of analysis is easily accesible to students and clearly shows the danger of extrapolating with a model for a situation you do not understand very well.
It is not so hard to build a spreadsheet for Frank’s model. It is interesting to play around with the choices for the values for the two sources of friction. Constant friction alone does not produce a good fit. Using wind resistance friction alone fits the data very well but produces a model where the fan never stops.
Greg SchwanbeckMarch 10, 2012 - 5:44 am -
How stupid of me to ignore air resistance for a device *designed* to plow its way through air! I feel such shame.
Robert HansenMarch 10, 2012 - 9:42 am -
Nice analysis Frank.
David Garcia wrote…
“Because, outside of physics teachers, who solves mechanics problems anymore today?”
I work at a company with 5000 engineers, this is exactly the stuff we do. Buildings, bridges, airplanes, cars, even iPads and iPhones are not designed hit or miss. Speaking of fans, they exist in turbo pumps, jet engines and power plants. You don’t just take an educated guess. It takes solid theory and experimentation to get these things to perform.
Teacher school should include a year of interning at companies that rely on these subjects. I realize that a full analysis of a ceiling fan looks intimidating, but that is exactly how fans are designed, albeit not the ones you buy at Lowes that are made in China.
David PetroMarch 10, 2012 - 1:23 pm -
@Robert, when you say “I work at a company with 5000 engineers, this is exactly the stuff we do. ” I wonder exactly what you mean? Here’s why I ask. I loved Frank’s solution, especially the bit about numerical integration. We teach our kids how to solve these equations and we say they model real world situations. But the real world is actually more nuanced as this fan problem showed. And solving the actual equation could be really tough.
However, with the use of computers and software, we can numerically solve them. If you look at the actual equations that Frank used in his program, there wasn’t any one that was too complex (most were linear or simple quadratic). Yet we ask kids to solve equations that are sometimes needlessly complex.
So my question to you Robert is whether the “stuff” you were referring to was closer to numerical analysis or algebraic analysis. My guess is numerical but I have no clue really.
Alejandro DominguezMarch 10, 2012 - 8:20 pm -
Hello Dan, and everybody.
I haven’t read all comments but I guess that in the explanation for that weird thing that considering resistance provokes larger stopping times could be explained by something called angular momentum.
It seems nobody has considered the mass distribution of the fan…
I guess it will take more time to stop it if the mass would be only in the end of the blades than if it would be distributed homogeneously. Dan, could you put for me an extra mass at the end of the blades and try again measuring the stopping time? If that is different then there would be the explanation for the large stopping time you recorded.
PS. By the way, let me tell you I am a fan of your approach to teach.
Alejandro DominguezMarch 10, 2012 - 9:30 pm -
Dan,
in case you put that extra weight in the end of the blades be careful, it could be dangerous.
Completing my recent comment, may be the quantity that should be used to fit the data is the “Rotational Kinetic Energy” Er=Iw^2/2. Where I is the moment of inertia (a constant value, equivalent to the mass in linear velocity) and w is the angular velocity.
That energy should be dissipated completely before the fan is stopped.
I guess that considering a constant angular acceleration from the initial time will result in a wrong value for the friction (higher), and thus a shorter stopping time will be obtained.
Friction losses from the axis of the fan would be constant in time I suppose, but drag on the blades will be dependent on the angular velocity (w^2 dependence). This will result in a large losses of energy in the beginning, and very small losses of energy at the end. This effect can be misleading about the value for the deceleration, and thus it can result in wrong answer.
Robert HansenMarch 10, 2012 - 9:51 pm -
“So my question to you Robert is whether the “stuff†you were referring to was closer to numerical analysis or algebraic analysis. My guess is numerical but I have no clue really.”
Frank did not solve the fan problem with numerical analysis, he employed numerical analysis in the solution. He started with physics, a kinematics problem involving rotational inertia and an opposing torque. He further refined the opposing torque as a combination of a constant element (the motor/bearings) and a non constant element (the air resistance). He chose the v^2 version of the drag equation because in problems such as fan blades and air that makes sense (there are other drag equations depending on the circumstances). Through all this he has been using physics and algebra to mathematically (quantitively) rationalize and describe the kinematics of the fan, with a guiding purpose, to determine how long it will spin after it is turned off. Finally, after the physics and algebra he applies calculus (because of the time element) and obtains a differential equation, a (small) cliff that he circumvents using a step wise approximation, that requires him to refactor that situation algebraically into a program that must also correctly approximate the solution of the differential equation. He also employed some data analysis to determine the friction coefficients.
To answer your question, there is no “or” between numerical analysis and algebra (and calculus). They are woven together in the same cloth. I think what you are asking is whether we “solve” differential equations analytically or numerically. We solve them the same as we did 100 years ago, if a solution exists and we need that we use that and if not then we use numerical methods (which have existed for as long as differential equations existed). It also depends on the situation. This fan differential equation might be part of a bigger problem and thus there would be no purpose for a numerical result at this point. We would still have to tread through more physics, algebra and calculus and then (likely) at the end of all that numerical methods might be employed.
“However, with the use of computers and software, we can numerically solve them. If you look at the actual equations that Frank used in his program, there wasn’t any one that was too complex (most were linear or simple quadratic). Yet we ask kids to solve equations that are sometimes needlessly complex.”
There is some confusion here about the meanings of “solve” and “numerically solve”. When we solve an expression we generally mean to put it in a closed form with the dependent variable on the left of the equals. For example, to solve the expression “y – x^2 = 0” we get “y = +/-sqrt(x)”. But that isn’t a numerical solution because, except for special cases, we cannot calculate the sqrt(x) exactly, we can only approximate it using numerical methods, either by hand or with a computer. So, when a numerical result is the goal, numerical methods are involved regardless. But numerical results are not always the goal and generally they are almost never the intermediate goal. It takes a considerable amount of familiarity with the math (and in this case the physics) and the problem to know where and when the problem favors numerical or analytical methods. On top of that it also takes a considerable familiarity with numerical methods (algorithm and programming) to employ them, especially to do a simulation as Frank just did.
I hear what you are saying, but it isn’t as easy as Frank makes it appear in a post. Like watching a musician perform, it isn’t as easy as they make it look. If you analyze what Frank did you will realize that there are a number of proficiencies involved and as far as I can tell there is no easy and quick formula for teaching all of that in one fell swoop. It boils down to interest, coaching and a lot of practice. In the beginning it is unrecognizable as what Frank just did, but at some point the fruit of all of that work starts to become recognizable and then it matures and then, after even more work, it becomes Frank.
Here is a case where the complexity of the problem rules out a theoretical analysis and favors an empirical one. In order to find out what makes engineers engineers, go find successful engineers and ask them. They are generally going to tell you “not any one thing in particular, just all of it, even the stuff they didn’t like.”
PS: My point to Garcia was that without the mechanical (physics) analysis of the kinematics of the fan, Frank wouldn’t have a mathematical model at all. This was not a problem to be solved by data analysis and modeling (because there wasn’t enough data). This was a physics problem from the get go. Physics starts with how and why and derives the math from that. Regardless of its simplicity, this was a very realistic engineering problem requiring theory (physics), mathematical analysis (algebra calculus), data analysis (the coefficients) and numerical methods (the simulation).
mr bombasticMarch 11, 2012 - 7:37 am -
@Hansen, nice post. I agree, this is a physics problem. A person with limited understanding of a situation is very likely to come up with a poor model, or use the model inappropriately. I think it sends a very poor message to have students use data to model situations they do not understand – analagous to quoting statistics without researching the issue thoroughly.
On the other hand, I think this is an outstanding problem
if you are willing to let students come up with a model, discuss creative ways to check the model and the reasons it doesn’t work, and lead them through something like what Frank did.
kenAugust 2, 2012 - 11:06 pm -
Fans freak me out. say a fan has a 6 inch radius, right? At the 2 inch point, the fan might travel 2 revolutions per second. C = pi x R, yes. So in 2 seconds in would travel about 12 inches , rounding pi down to 3, of course. At 6 inch part the blade travels it will travel approx 36 inches in 2 seconds. Then I get confused when I watch Sagan say that the inner parts of the galaxies move slower than the outer spirals. I guess house hold mechanics don’t hold true in celestial mechanics. I live in florida and have to watch fans every day…HELP! | https://blog.mrmeyer.com/2012/i-need-a-physics-tutor/ | CC-MAIN-2020-40 | refinedweb | 9,479 | 70.73 |
Before my holiday I was plagued by invisible differences in doctest output, something like
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds. - Ran 2 tests with 0 failures and 0 errors in N.NNN seconds. + Ran 2 tests with 0 failures and 0 errors in N.NNN seconds. ? ++++++++ Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
After reading some bug reports, I think the following is an accurate summary:
Your operating system’s readline library sometimes emits a
^[[?1034h
escape code to switch on 8bit character display when initialized.
One of the things needed for the problem to occur: your
TERM environment
variable must be set to
xterm. Either
linux or
vt100 is fine.
I don’t know why. This is the point at which operating systems can have a
different setting, btw.
An
import readline somewhere in your python code triggers the output of
the escape code. The escape code is no problem, unless you’re catching
stdout and doing string comparison on it…
The doctest whitespace normalizer doesn’t filter out this escape code.
My hack to work around it is to set the environment variable to something not-xterm-like right smack at the start of my python file that sets up the tests. So something like:
import os # Start dirty hack. os.environ['TERM'] = 'linux' # End of hack.
Not nice. But at least I can continue working for now on my real task. Should doctest’s whitespace normalizer be appended? Is there a real bug somewhere in python’s readline implementation (one bug report mentioned that python before 2.3 didn’t have this problem)? Enlightened opinion is welcome at reinout@vanrees.org, of course.
Comment by Jean-Paul Ladage: if you use buildout, you can set such an environment variable your the zc.recipe.egg part:
environment-vars = TERM linux
This way you don’t have to hack in the): | https://reinout.vanrees.org/weblog/2009/08/14/readline-invisible-character-hack.html | CC-MAIN-2021-21 | refinedweb | 321 | 68.16 |
ReSharper C++ Quick Tips: Inlay Hints
Our ReSharper C++ Quick Tips video series has a new episode out! If you missed the previous ones, here are the links:
- Overload Resolution
- Code Completion
- Converting Enum to String
- Macro Substitution
- C++20’s Comparisons
- Includes Analyzer
- Postfix Completion
- Modernizing Quick-Fixes
ReSharper C++ helps you read code, as well as write it. Sure, hover tooltips are great for seeing more details about the code as you read it. But ReSharper can do more – it can show you hints right in the editor, so you can always see what’s going on without ever touching the mouse. This includes hints for parameter names, namespace names, preprocessor directives, and type names:
Learn more about ReSharper C++ in our online help. | https://blog.jetbrains.com/rscpp/2021/08/26/resharper-c-quick-tips-inlay-hints/ | CC-MAIN-2021-39 | refinedweb | 126 | 55.17 |
An earlier article A Utility to Read, Edit, Encrypt, Decrypt, Write XmlStore Files is part 1 of a project on using a DataGridView as the UI for a simple XML file. Here's an excerpt from its introduction:
DataGridView
an individual cell
an entire row
an entire column, or
the entire data set (table)
Part 1 describes many of the pertinent details and they are not repeated here. However, Part 1 does not address printing the contents of an XmlStore. This article does. The code provided with this article does everything Part 1 does; it also supports printing on any Windows supported printer.
This utility supports encryption. If you encrypt anything which requires you to supply a password, you will need the same password to decrypt the data. YOU have to remember it, as the password used for encryption is not stored anywhere.
On completing my XmlStore project (described in part 1), I discovered I had additional uses for the XmlStore project. However, for some of them, I needed an easy way to print the contents of the XmlStore. Nothing fancy, just "reasonably nice looking", "consistent", and "predictable".
It turns out, printing a people-friendly version of a DataGridView (which in this case is really the content of an XmlStore file) is not supported by Microsoft. It looked, at first, like an oversight on that company's part. Then I thought printing must be really easy. Perhaps all it required was to write a simple PrintDGV method and tally ho!
PrintDGV
Was I wrong!!! Sadly, printing anything in Microsoft Windows using C# is a complicated event-driven process. Simply put, one has to render each page when asked by the printer driver. But, for a newbie like myself, comprehending the convoluted MSDN documentation on printing was, and still is, very difficult.
So, I searched around The Code Project to see if someone had a solution that would meet my needs. To my great relief, more than a few had contributed classes that promised a useable solution. (Isn't The Code Project community an amazing resource?) The noteworthy ones I found are:
Each is an interesting solution but, unfortunately, I couldn't use any one of them "out of the box". I realized, I'd have to tweak each one to get what I needed. The tweaks included, among other things, the ability to:
Indeed, if the tweaks weren't necessary, this article would have been moot.
To explore, tweak, and finally pick, the solution that would work best (for me), I created a class called Print in my VVX namespace. VVX.Print is essentially a wrapper class that provides access to modified versions of the three classes mentioned above. I then added the following menuitems to the File menu:
Print
VVX
VVX.Print
menuitem
NOTE: When appropriate, each one also displays the standard print dialog with which you can change printers, change orientation, etc. Unless you're a Windows veteran, a potentially confusing thing [in the context of Print Previews] is: the PrintDialog displays a "Print" button. It should have been called "OK" or "Apply" or "Continue", because that's what it really does. As I don't know how to fix this, I hope this note suffices.
PrintDialog
I undertook this project originally to help a friend and, in the process, also to learn a few things about Visual Studio 2005, C#, XML, DataGridView, cryptography, and printing, etc. Caveat Emptor: Nothing in my code has been properly or adequately tested. More important, there is a good chance, you or someone else can do this better. So, if you do use this code and cannot decrypt something, there is little I or anyone can do to help you. On the bright side, though, since you have the source code, you can make it more bullet-proof to suit your needs. Also, printing is full of in-the-eyes-of-the-beholder issues and you just might hate how they are addressed here!
DataGridView
If you have Visual Studio 2005, then you should be able to use the project source code "out of the box" -- simply build and run. The code itself is not rocket science. It is reasonably documented. If you don't have Visual Studio 2005, you will have to ask a more experienced friend. (Don't ask me, as I don't have a clue! Don't ask Microsoft either, as no one there will respond!!!)
Side benefits of the code for newbies (like myself): The project includes modules with reasonably documented and self-explanatory code. With luck, they may help you learn how to use a few features of the various technologies employed. Most of the things that merit mention, such as XmlStore and the various VVX modules, are described in part 1. The following are mentioned below mainly for the printing related modules.
VVX.Print is essentially a wrapper class that provides access to modified versions of three classes:
DGVPrinter
PrintDGV
ControlPrint
These modified versions are placed in my VVX namespace so that I can freely tinker with the code.
The VVX.Print class contains one enum, two constructors, three public methods, and a handful of properties.
enum
public
The VVX.Print class contains three simple public methods:
DoPrintDGV
DoPrintControl
DoGetPrinterAndSettingsFromUser
System.Drawing.Printing.PrinterSettings
System.Drawing.Printing.PrintDocument
The VVX.Print class contains a few simple public properties:
PrintWhichDGV
PrintHowDGV
PrintPreview
PrintTitle
PrintJobName
Salan Al-Ani, Afrasiab Cheraghi, and Nader Elshehabi collectively did much to improve my understanding of not only how to print data from a DataGridView, but also about printing in Windows.
There is plenty of room for improving this utility, such as the ability to do some or all. | http://www.codeproject.com/Articles/18056/XmlStore-Part-2-Printing-DataGridView-Contents?PageFlow=FixedWidth | CC-MAIN-2016-44 | refinedweb | 947 | 63.09 |
From Documentation
Requirements
- 1. NetBeans 5.5
- 2. The last version of ZK libraries
- 3. An Application Server. In this example we use JBoss.
- 4. Have the knowledge for building a simple ZK application (if you don’t, read this tutorial How to build your first ZK application with NetBeans).
Procedure
- 1. Launch the IDE NetBeans.
- 2. Create a new Web Application Project (File > New Project… > Web > Web Application). Give it the name you want then click Finish.
- 3. Set up the Web Application for the ZK environment (set up web.xml, include the ZK jars). You can jump these three steps if you start from the hello world tutorial.
- 4. Create a new empty file in the web directory
- and set his name to index.zul, then click finish.
- 5. Put this code inside index.zul
<zk> <window id="root"> <tabbox use="com.MyTabbox" id="tabbox" width="100%" > <tabs> <tab label="Preload"/> <tab label="OnDemand 1"/> <tab label="OnDemand 2"/> </tabs> <tabpanels> <tabpanel> This panel is pre-loaded. </tabpanel> <tabpanel id="second"> <include id="secondContent" src=""/> </tabpanel> <tabpanel id="third"> <include id="thirdContent" src=""/> </tabpanel> </tabpanels> </tabbox> </window> </zk>
There are few things to note on this code. We are going to create a window that contains three tabs. The first have a fixed preloaded content. The second and the third for now don’t have content, but we will load it on demand. As you can see from the upper zone of the code, this is not a standard tabbox, but an extended version that we are going to define now (you note that from the syntax use="com.MyTabbox").
- 6. Create a new java class in the source folder:
Give it MyTabbox as name and set it to com package, then click Finish.
- 7. Set this as content of MyTabbox.java.
package com; import org.zkoss.zk.ui.Desktop; import org.zkoss.zk.ui.Path; import org.zkoss.zul.Include; import org.zkoss.zul.Tabpanel; public class MyTabbox extends org.zkoss.zul.Tabbox { private int i=0; public void onSelect() { i++; Tabpanel item = getSelectedPanel(); Desktop desktop = this.getDesktop(); desktop.setAttribute("time",i); if(item != null && item.getId().equals("second")) { Include inc = (Include)Path.getComponent("/root/secondContent"); inc.setSrc("secondTab.zul"); } if(item != null && item.getId().equals("third")) { Include inc = (Include)Path.getComponent("/root/thirdContent"); inc.invalidate(); inc.setSrc("thirdTab.zul"); } } }
In this extension of the tabbox component we define how to handle the selection of the tabs. For the first tab we don’t do anything special, but for the second and the third we load a zul on demand. The difference between second and third tab is that the content of the third is refreshed every time the tab is pressed.
- 8. Create two more zuls in the we Web Pages section, give it secondTab.zul and thirdTab.zul as names. Then put this code inside secondTab.zul:
<window> <vbox> <label value="This content is loaded on demand."/> <label value="For now, you have clicked times on the tabs."/> </vbox> </window>
and this inside thirdTab.zul:
<window> <vbox> <label value="This content is reloaded every time the third tab is selected."/> <label value="For now, you have clicked ${desktopScope.time} times on the tabs."/> </vbox> </window>
- 9. You can now run the project (F6) and this is how the application will look like:
If you select the second tab (OnDemand 1) you will have this:
and this will be the content of the second tab for all future selections. On the other hand, we had said that the third tab loads its contents every time we select it. This is how the third tab will look like. Note that the number in "you have clicked X times" changed on each selection.
...
...
Tommaso Fin is a Jave EE developer working for the InfoCamere Group, Italy. He has got a degree in Informatic Engenieering and have about 2 years of experience in LAMP architecture managment and development. He is now focusing on the development of web applications with a massive use of databases data. | http://books.zkoss.org/wiki/Small%20Talks/2007/March/How%20to%20load%20tab%20contents%20on%20demand | CC-MAIN-2015-48 | refinedweb | 676 | 68.26 |
Python has had the Decimal data type for some time now. The Decimal data type is ideal for financial calculations. Using this data type would be more intuitive to computer novices than float as its rounding behaviour matches more closely what humans expect. More to the point: 0.1 and 0.01 are exact in Decimal and not exact in float. Unfortunately it is not very easy to access the Decimal data type. To obtain the decimal number 12.34 one has to do something like this: import decimal x=decimal.Decimal("12.34") Of course we can intruduce a single character function name as an alias for the Decimal type constructor, but even then we have to use both parentheses and quotes for each and every decimal constant. We cannot make it shorter than D("12.34") With Python 3000 major changes to the language are carried out anyway. My proposal would require relatively minor changes. My proposal: - Any decimal constant suffixed with the letter "D" or "d" will be interpreted as a literal of the Decimal type. This also goes for decimal constants with exponential notation. Examples of decimal literals: 1d 12.34d 1e-3d (equivalent to 0.001d) 1e3d (same value as 1000d, but without trailing zeros). 1.25e5d (same value as 125000d but without trailing zeros). When we print a decimal number or convert it to a string with str(), a trailing d should not be added. The repr function does yield strings with a trailing d. When decimal numbers are converted from strings, both numbers with and without a trailing d should be accepted. If we have decimal literals in the core language, we should probably have a type constructor in the core namespace, e.g. dec() along with int() and float(). We then need the decimal module only for the more advanced stuff like setcontext. Pros: - the Decimal data type will be more readily accessible, especially for novices. - Traling characters at the end of a literal are already used (the L for long). - It does not conflict with the syntax of other numeric constants or language constructs. - It does not change the meaning of existing valid literal constants. Constants that used to be float will continue to do so. Cons: - The lexical scanner of the Python interpreter will be slightly more complex. - The d suffix for constants with exponential notation is ugly. - Decimal numbers with exponentail notation like 2e5d could be mistaken for hex (by humans, not by the parser as hex requires the 0x prefix). - It requires the decimal module to be part of the core Python interpreter. -- Lennart | https://mail.python.org/pipermail/python-list/2007-October/420298.html | CC-MAIN-2014-15 | refinedweb | 436 | 67.35 |
License: MIT License
LANumerics is a Swift package for doing numerical linear algebra.
The package depends on Swift Numerics, as it supports both real and complex numerics for both
Float and
Double precision in a uniform way.
Under the hood it relies on the
Accelerate framework for most of its functionality, in particular
BLAS and
LAPACK, and also
vDSP.
Examining the current tests provides a good starting point beyond this README.
LANumerics is a normal Swift package and can be added to your app in the usual way.
After adding it to your app, import
LANumerics (and also
Numerics if you use complex numbers).
You can try out if everything works fine by running
import LANumerics let A : Matrix<Float> = Matrix(columns: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) print("A: \(A)")
which should output something like
A: 4x3-matrix: ⎛1.0 5.0 9.0 ⎞ ⎜2.0 6.0 10.0⎟ ⎜3.0 7.0 11.0⎟ ⎝4.0 8.0 12.0⎠
The
LANumeric protocol denotes the type of numbers on which LANumerics operates. It is implemented by the following types:
Float
Double
Complex<Float>
Complex<Double>
Most functionality of LANumerics is generic in
LANumeric, e.g. constructing matrices and computing with them, solving a system of linear equations, or computing the singular value decomposition of a matrix.
The main work horse of LANumerics is the
Matrix type. For convenience there is also a
Vector type, but this is just a typealias for normal Swift arrays.
The expression
Matrix([1,2,3]) constructs the matrix:
3x1-matrix: ⎛1.0⎞ ⎜2.0⎟ ⎝3.0⎠
The expression
Matrix(row: [1, 2, 3]) constructs the matrix:
1x3-matrix: (1.0 2.0 3.0)
The expression
Matrix<Float>(rows: 2, columns: 3) constructs a matrix consisting only of zeros:
2x3-matrix: ⎛0.0 0.0 0.0⎞ ⎝0.0 0.0 0.0⎠
The expression
Matrix(repeating: 1, rows: 2, columns: 3) constructs a matrix consisting only of ones:
2x3-matrix: ⎛1.0 1.0 1.0⎞ ⎝1.0 1.0 1.0⎠
Given the two vectors
v1 and
v2
let v1 : Vector<Float> = [1, 2, 3] let v2 : Vector<Float> = [4, 5, 6]
we can create a matrix from columns
Matrix(columns: [v1, v2]):
3x2-matrix: ⎛1.0 4.0⎞ ⎜2.0 5.0⎟ ⎝3.0 6.0⎠
or rows
Matrix(rows: [v1, v2]):
2x3-matrix: ⎛1.0 2.0 3.0⎞ ⎝4.0 5.0 6.0⎠
It is also legal to create matrices with zero columns and/or rows, like
Matrix(rows: 2, columns: 0) or
Matrix(rows: 0, columns: 0).
Swift supports
simd vector and matrix operations. LANumerics plays nice with
simd by providing conversion functions to and from
simd vectors and matrices. For example, starting from
import simd import LANumerics let m = Matrix(rows: [[1, 2, 3], [4, 5, 6]]) print("m: \(m)")
with output
m: 2x3-matrix: ⎛1.0 2.0 3.0⎞ ⎝4.0 5.0 6.0⎠
we can convert
m into a
simd matrix
s via
let s = m.simd3x2 print(s)
resulting in the output
simd_double3x2(columns: (SIMD2<Double>(1.0, 4.0), SIMD2<Double>(2.0, 5.0), SIMD2<Double>(3.0, 6.0)))
Note that
simd reverses the role of row and column indices compared to
LANumerics (and usual mathematical convention).
We can also convert
s back:
print(Matrix(s) == m)
will yield the output
true.
Matrix elements and submatrices can be accessed using familiar notation. Given
import simd import LANumerics var m = Matrix(rows: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) print(m)
with output
3x3-matrix: ⎛1.0 2.0 3.0⎞ ⎜4.0 5.0 6.0⎟ ⎝7.0 8.0 9.0⎠
we can access the element at row 2 and column 1 via
m[2, 1]
which yields
8.0. We can also set the element at row 2 and column 1 to some value:
m[2, 1] = 0 print(m)
The output of running this is
3x3-matrix: ⎛1.0 2.0 3.0⎞ ⎜4.0 5.0 6.0⎟ ⎝7.0 0.0 9.0⎠
We can also access submatrices of
m, for example its top-left and bottom-right 2x2 submatrices:
print(m[0 ... 1, 0 ... 1]) print(m[1 ... 2, 1 ... 2])
This will print
2x2-matrix: ⎛1.0 2.0⎞ ⎝4.0 5.0⎠ 2x2-matrix: ⎛5.0 6.0⎞ ⎝0.0 9.0⎠
Finally, using the same notation, we can overwrite submatrices of
m:
m[0 ... 1, 0 ... 1] = m[1 ... 2, 1 ... 2] print(m)
This overwrites the top-left 2x2 submatrix of
m with its bottom-right 2x2 submatrix, yielding:
3x3-matrix: ⎛5.0 6.0 3.0⎞ ⎜0.0 9.0 6.0⎟ ⎝7.0 0.0 9.0⎠
LANumerics supports common operations on matrices and vectors, among them:
transposeand
adjoint
In the following, assume the context
import Numerics import LANumerics let u = Matrix<Complex<Float>>(rows: [[1, 2 * .i], [3, 4 * .i + 1]]) let v = Matrix<Complex<Float>>(rows: [[.i, 0], [0, 1 + 1 * .i]]) print("u : \(u)\n") print("v : \(v)\n")
which has output
u : 2x2-matrix: ⎛1.0 2.0i ⎞ ⎝3.0 1.0 + 4.0i⎠ v : 2x2-matrix: ⎛1.0i 0.0 ⎞ ⎝0.0 1.0 + 1.0i⎠
For real matrices,
transpose and
adjoint have the same meaning, but for complex matrices the
adjoint is the element-wise conjugate of the
transpose. Executing
print("u.transpose : \(u.transpose)\n") print("u.adjoint : \(u.adjoint)\n")
thus yields
u.transpose : 2x2-matrix: ⎛1.0 3.0 ⎞ ⎝2.0i 1.0 + 4.0i⎠ u.adjoint : 2x2-matrix: ⎛1.0 3.0 ⎞ ⎝-2.0i 1.0 - 4.0i⎠
The
adjoint has the advantage over the
transpose that many properties involving the adjoint generalize naturally from real matrices to complex matrices. Therefore there is the shortcut notation
u′
for
u.adjoint.
Note that
′ is the unicode character "Prime"
U+2032. You can use for example Ukelele to make the input of that character smooth. Other alternatives are configuring the touchbar of your macbook, or using a configurable keyboard like Stream Deck.
Multiplying
u and
v is done via the expression
u * v. Running
print("u * v: \(u * v)") results in
u * v: 2x2-matrix: ⎛1.0i -2.0 + 2.0i⎞ ⎝3.0i -3.0 + 5.0i⎠
Instead of
u′ * v one can also use the equivalent, but faster expression
u ′* v:
print("u′ * v: \(u′ * v)\n") print("u ′* v: \(u ′* v)\n")
yields
u′ * v: 2x2-matrix: ⎛1.0i 3.0 + 3.0i⎞ ⎝2.0 5.0 - 3.0i⎠ u ′* v: 2x2-matrix: ⎛1.0i 3.0 + 3.0i⎞ ⎝2.0 5.0 - 3.0i⎠
Similarly, it is better to use
u *′ v than
u * v′, and
u ′*′ v instead of
u′ * v′.
We will view
u and
v as vectors
u.vector and
v.vector now, where
.vector corresponds to a column-major order of the matrix elements:
u.vector: [1.0, 3.0, 2.0i, 1.0 + 4.0i] v.vector: [1.0i, 0.0, 0.0, 1.0 + 1.0i]
(Actually, in the above, we used
u.vector.toString() and
v.vector.toString() for better formatting of complex numbers. We will also do so below where appropriate without further mentioning it.)
The dot product of
u.vector and
v.vector results in
u.vector * v.vector: -3.0 + 6.0i
Another vector product is
u.vector ′* v.vector: 5.0 - 2.0i
which corresponds to
u.vector′ * v.vector: [5.0 - 2.0i]
Furthermore, there is
u.vector *′ v.vector:
which is equivalent to
Matrix(u.vector) * v.vector′:
Element-wise operations like
.+,
.-,
.* and
./ are supported on both vectors and matrices, for example:
u .* v : 2x2-matrix: ⎛1.0i 0.0 ⎞ ⎝0.0 -3.0 + 5.0i⎠
The
Matrix type supports functional operations like
map,
reduce and
combine. These come in handy when performance is not that important, and there is no accelerated equivalent available (yet?).
For example, the expression
u.reduce(0) { x, y in max(x, y.magnitude) }
results in the value
4. In this case it is better though to use the equivalent expression
u.infNorm instead.
You can solve a system of linear equations like
by converting it into matrix form
A * u = b and solving it for
u:
import LANumerics let A = Matrix<Double>(rows: [[7, 5, -3], [3, -5, 2], [5, 3, -7]]) let b : Vector<Double> = [16, -8, 0] let u = A.solve(b)! print("A: \(A)\n") print("b: \(b)\n") print("u: \(u.toString(precision: 1))")
This results in the output:
A: 3x3-matrix: ⎛7.0 5.0 -3.0⎞ ⎜3.0 -5.0 2.0 ⎟ ⎝5.0 3.0 -7.0⎠ b: [16.0, -8.0, 0.0] u: [1.0, 3.0, 2.0]
Therefore the solution is x=1, y=3, z=2.
This example is actually plugged from an article which describes how to use the
Accelerate framework directly and without a nice library like
LANumerics 😁.
You can solve for multiple right-hand sides simultaneously. For example, you can compute the inverse of
A like so:
let Id : Matrix<Double> = .eye(3) print("Id: \(Id)\n") let U = A.solve(Id)! print("U: \(U)\n") print("A * U: \(A * U)")
This results in the output
Id: 3x3-matrix: ⎛1.0 0.0 0.0⎞ ⎜0.0 1.0 0.0⎟ ⎝0.0 0.0 1.0⎠ U: 3x3-matrix: ⎛0.11328125 0.1015625 -0.019531249999999997⎞ ⎜0.12109375 -0.13281250000000003 -0.08984375 ⎟ ⎝0.1328125 0.01562499999999999 -0.1953125 ⎠ A * U: 3x3-matrix: ⎛1.0 -1.0755285551056204e-16 0.0⎞ ⎜0.0 1.0 0.0⎟ ⎝0.0 -4.163336342344337e-17 1.0⎠
Extending the above example, you can also solve it using least squares approximation instead:
print(A.solveLeastSquares(b)!.toString(precision: 1))
results in the same solution
[1.0, 3.0, 2.0].
Least squares is more general than solving linear equations directly, as it can also deal with situations where you have more equations than variables, or less equations than variables. In other words, it can also handle:
A
There is a shorthand notation available for the expression
A.solveLeastSquares(b)!:
A ∖ b
This also works for simultaneously solving for multiple right-hand sides as before, the inverse of
A can therefore also be computed using the expression
A ∖ .eye(3):
A ∖ .eye(3): 3x3-matrix: ⎛0.11328124999999997 0.10156250000000001 -0.01953124999999994⎞ ⎜0.12109375 -0.13281250000000006 -0.08984374999999999⎟ ⎝0.13281249999999997 0.015624999999999993 -0.19531249999999994⎠
Note that
∖ is the unicode character "Set Minus"
U+2216. The same advice for smooth input of this character applies as for the input of
′ earlier.
In addition, there is the operator
′∖ which combines taking the adjoint and solving via least squares. Therefore, to compute the inverse of
A′, you could better write
A ′∖ .eye(3) instead of
A′ ∖ .eye(3):
A ′∖ .eye(3): 3x3-matrix: ⎛0.11328124999999996 0.12109375 0.13281249999999994 ⎞ ⎜0.10156250000000001 -0.13281250000000003 0.01562499999999999 ⎟ ⎝-0.01953124999999994 -0.08984374999999999 -0.19531249999999994⎠
The inverse of
A can more concisely also be obtained via
A.inverse!.
The following matrix decompositions are currently supported:
A:
A.svd()
A(that is for real or complex matrices for which
A == A′):
A.eigen()
A:
A.schur()
Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API | https://swiftpack.co/package/phlegmaticprogrammer/LANumerics | CC-MAIN-2021-21 | refinedweb | 1,889 | 65.93 |
Feature Reduction using Genetic Algorithm with Python
This tutorial discusses how to use the genetic algorithm (GA) for reducing the feature vector extracted from the Fruits360 dataset in Python mainly using NumPy and Sklearn.
Introduction
Using the raw data for training a machine learning algorithm might not be the suitable choice in some situations. The algorithm, when trained by raw data, has to do feature mining by itself for detecting the different groups from each other. But this requires large amounts of data for doing feature mining automatically. For small datasets, it is preferred that the data scientist do the feature mining step on its own and just tell the machine learning algorithm which feature set to use.
The used feature set has to be representative of the data samples and thus we have to take care of selecting the best features. The data scientist suggests using some types of features that seems helpful in representing the data samples based on the previous experience. Some features might prove their robustness in representing the samples and others not.
There might be some types of feature that might affect the results of the trained model wither by reducing the accuracy for classification problems or increasing the error for regression problems. For example, there might be some noise elements in the feature vector and thus they should get removed. The feature vector might also include 2 or more correlated elements. Just using one element will substitute for the other. In order to remove such types of elements, there are 2 helpful steps which are feature selection and reduction. This tutorial focuses on feature reduction.
Assuming there are 3 features F1, F2, and F3 and each one has 3 feature elements. Thus, the feature vector length is 3x3=9. Feature selection just selects specific types of features and excludes the others. For example, just select F1 and F3 and remove F3. The feature vector length is now 6 rather than 9. In feature reduction, specific elements from each feature might be excluded. For example, this step might remove the first and third elements from F3 while keeping the second element. Thus, the feature vector length is reduced from 9 to just 7.
Before starting in this tutorial, it is worth mentioning that it is an extension to a previously published 2 tutorials in my LinkedIn profile.
The first tutorial is titled "Artificial Neural Network Implementation using NumPy and Classification of the Fruits360 Image Dataset". It starts by extracting a feature vector of length 360 from 4 classes of the Fruits360 dataset. Then, it builds an artificial neural network (ANN) using NumPy from scratch in order to classify the dataset. It is available here. Its GitHub project is available here:.
The second tutorial is titled "Artificial Neural Networks Optimization using Genetic Algorithm". It builds and uses the GA for optimizing the ANN parameters in order to increase the classification accuracy. It is available here. Its GitHub project is also available here:.
This tutorial discusses how to use the genetic algorithm (GA) for reducing the feature vector extracted from the Fruits360 dataset of length 360. This tutorial starts by discussing the steps to be followed. After that, the steps are implemented in Python mainly using NumPy and Sklearn.
The implementation of this tutorial is available in my GitHub page here:
GA starts from an initial population which consists of a number of chromosomes (i.e. solutions) where each chromosome has a sequence of genes. Using a fitness function, the GA selects the best solutions as parents for creating a new population. The new solutions in such a new population are created by applying 2 operations over the parents which are the crossover and the mutation. When applying GA to a given problem, we have to determine the representation of the gene, the suitable fitness function, and how the crossover and the mutation are applied. Let’s see how things work.
More Information about GA
You can read more about GA from the following resources I prepared:
- Introduction to Optimization with Genetic Algorithm
- Genetic Algorithm (GA) Optimization - Step-by-Step Example
- Genetic Algorithm Implementation in Python
I also wrote a book in 2018 that covers GA in one of its chapters. The book is titled "Practical Computer Vision Applications Using Deep Learning with CNNs" which is available here at Springer.
Chromosome Representation
The gene in GA is the building block for the chromosome. At first, we need to determine what genes are inside the chromosome. To do that, take into regard that every property that may affect the results should be regarded as a gene. Because the target of our problem is selecting the best set of feature elements, thus every feature element might affect the results if selected or not. Thus, every feature element is regarded as a gene. The chromosome will consist of all genes (i.e. all feature elements). Because there are 360 feature elements, then there will be 360 genes. A good piece of information is now clear that the length of the chromosome is 360.
After determining what are the selected genes, next is to determine the gene representation. There are different representations such as decimal, binary, float, string, and others. Our target is to know whether the gene (i.e. feature element) is selected or not in the reduced set of features. Thus, the value assigned to the gene should reflect whether it is selected or not. Based on this description, it is very clear that there are 2 possible values for each gene. One value signifies that the gene is selected and another when it is not selected. Thus, the binary representation is the best choice. When the gene value is 1, then it will be selected in the reduced feature set. When 0, then it will be neglected.
As a summary, the chromosome will consist of 360 genes represented in binary. There is a one-to-one mapping between the feature vector and the chromosome according to the next figure. That is the first gene in the chromosome is linked to the first element in the feature vector. When the value for that gene is 1, this means the first element in the feature vector is selected.
Fitness Function
By getting how to create the chromosome, the initial population can be initialized randomly easily is NumPy. After being initialized, the parents are selected. GA is based on Darwin’s theory "Survival of the Fittest". That is the best solutions in the current population are selected for mating in order to produce better solutions. By keeping the good solutions and killing the bad solutions, we can reach an optimal or semi-optimal solution.
The criterion used for selecting the parents is the fitness value associated with each solution (i.e. chromosome). The higher the fitness value the better the solution. The fitness value is calculated using a fitness function. So, what is the best function for use in our problem? The target of our problem is creating a reduced feature vector that increases the classification accuracy. Thus, the criterion that judges whether a solution is good or not is the classification accuracy. As a result, the fitness function will return a number that specifies the classification accuracy for each solution. The higher the accuracy the better the solution.
In order to return a classification accuracy, there must be a machine learning model to get trained by the feature elements returned by each solution. We will use the support vector classifier (SVC) for this case.
The dataset is divided into train and test samples. Based on the train data, the SVC will be trained using the selected feature elements by each solution in the population. After being trained, it will be tested according to the test data.
Based on the fitness value of each solution, we can select the best of them as parents. These parents are placed together in the mating pool for generating offspring which will be the members of the new population of the next generation. Such offspring are created by applying the crossover and mutation operations over the selected parents. Let’s configure such operations as discussed next.
Crossover and Mutation
Based on the fitness function, we can filter the solutions in the current population for selecting the best of them which are called the parents. GA assumes that mating 2 good solutions will produce a third better solution. Mating means exchanging some genes from 2 parents. The genes are exchanged using the crossover operation. There are different ways in which such an operation can be applied. This tutorial uses the single-point crossover in which a point divides the chromosome. Genes before the point are taken from one solution and genes after the point are taken from the other solution.
By just applying crossover, all genes are taken from the previous parents. There is no new gene introduced in the new offspring. If there is a bad gene in all parents, then this gene will be transferred to the offspring. For such a reason, the mutation operation is applied in order to introduce new genes in the offspring. In the binary representation of the genes, the mutation is applied by flipping the values of some randomly selected genes. If the gene value is 1, then it will be 0 and vice versa.
After generating the offspring, we can create the new population of the next generation. This population consists of the previous parents in addition to the offspring.
At this point, all steps are discussed. Next is to implement them in Python. Note that I wrote a previous tutorial titled "Genetic Algorithm Implementation in Python" for implementing the GA in Python which I will just modify its code for working with our problem. It is better to read it.
Python Implementation
The project is organized into 2 files. One file is named "GA.py" which holds the implementation of the GA steps as functions. Another file, which is the main file, just imports this file and calls its function within a loop that iterates through the generations.
The main file starts by reading the features extracted from the Fruits360 dataset according to the code below. The features are returned into the data_inputs variable. Details about extracting these features are available in the 2 tutorials referred to at the beginning of the tutorial. The file also reads the class labels associated with the samples in the variable data_outputs.
Some samples are selected for training with their indices stored in the train_indices variable. Similarly, the test samples indices are stored in the test_indices variable.
import numpy import GA import pickle import matplotlib.pyplot f = open("dataset_features.pkl", "rb") data_inputs = pickle.load(f) f.close() f = open("outputs.pkl", "rb") data_outputs = pickle.load(f) f.close() num_samples = data_inputs.shape[0] num_feature_elements = data_inputs.shape[1] train_indices = numpy.arange(1, num_samples, 4) test_indices = numpy.arange(0, num_samples, 4) print("Number of training samples: ", train_indices.shape[0]) print("Number of test samples: ", test_indices.shape[0]) """ Genetic algorithm parameters: Population size Mating pool size Number of mutations """ sol_per_pop = 8 # Population size. num_parents_mating = 4 # Number of parents inside the mating pool. num_mutations = 3 # Number of elements to mutate. # Defining the population shape. pop_shape = (sol_per_pop, num_feature_elements) # Creating the initial population. new_population = numpy.random.randint(low=0, high=2, size=pop_shape) print(new_population.shape) best_outputs = [] num_generations = 100
It initializes all parameters of GA. This includes the number of solutions per population which is set to 8 according to the sol_per_pop variable, number of offspring which is set to 4 in the num_parents_mating variable, and number of mutations which is set to 3 in the num_mutations variable. After that, it creates the initial population randomly in a variable called new_population.
There is an empty list named best_outputs which holds the best result after each generation. This helps to visualize the progress of the GA after finishing all generations. The number of generations is set to 100 in the num_generations variable. Note that you can change all of these parameters which might give better results. | https://www.kdnuggets.com/2019/03/feature-reduction-genetic-algorithm-python.html | CC-MAIN-2020-24 | refinedweb | 2,007 | 57.37 |
Bo Borgerson wrote: > Jim Meyering wrote: >> Bo Borgerson <address@hidden> wrote: >>> I may be misinterpreting your patch, but it seems to me that >>> decrementing count for zero-width characters could potentially lead to >>> confusion. Not all zero-width characters are combining characters, right? >> It looks ok to me, since there's an unconditional increment >> >> chars++; >> >> about 25 lines above, so the decrement would just undo that. > > > Right, I guess my question is more about the semantics of `wc -m'. > Should stand-alone zero-width characters such as the zero-width space be > counted? > > The attached (UTF-8) file contains 3 characters according to HEAD, but > only two with the patch. Interesting, I thought of that myself but assumed iswspace(u"zero-width space") == 1 Actually there are no chars where: wcwidth(char)==0 && iswspace(char)==1 In the first 65535 code points there are also 404 chars which are not classed as combining in the unicode database, but are classed as zero width in the glibc locale data at least (zero-width space being one of them like you mentioned). I determined this with the attached progs: ./zw | python unidata.py | grep " 0 " | wc -l So I suggest that we don't merge my tweak as is. What we could do is: 1. Find a method to distinguish the above 404 characters at least. 2. Define -m to mean "individual displayable characters" if this is what people usually want. 3. Add a new option for this. Pádraig.
#define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <wchar.h> #include <wctype.h> #include <string.h> #include <locale.h> int main(int argc, char** argv) { /* This is a single threaded app, so mark as such for performance. */ #include <stdio_ext.h> __fsetlocking(stdin,FSETLOCKING_BYCALLER); __fsetlocking(stdout,FSETLOCKING_BYCALLER); if (!setlocale(LC_CTYPE, "")) { //TODO: What about LC_COLLATE? fprintf(stderr,"Warning locale not supported by glibc, using 'C' locale\n"); } wchar_t wc; for (wc=0; wc<=0xFFFF; wc++) { if (!wcwidth(wc)) { printf("%04X\n",wc); } } }
import unicodedata,sys for char in sys.stdin: char = char[:-1] c = unichr(int(char,16)) try: print char, int(unicodedata.combining(c)!=0), unicodedata.name(c) except: print | http://lists.gnu.org/archive/html/bug-coreutils/2008-05/msg00050.html | CC-MAIN-2016-30 | refinedweb | 360 | 68.77 |
Some background.
We all used the string class, but some of you might not know that
string is just a typedef of basic_string.
In fact here is the typedef you might find when looking in the XString library.
typedef basic_string<char, char_traits<char>, allocator<char> > string;
This code snippet shows you how to use basic_string to your
advantage. It has almost the same functionality as the string library
but it adds some functionality. It has little error checking. Again this
is shown as an example to learn from. Here is a list of things to
keep in mind :
- template class
- inheritance
- pointer function
- const correctness
- template functions
- scoped namespace
- assert function
- use of typedef | https://www.daniweb.com/programming/software-development/code/252294/string-class-inherited-from-basic-string | CC-MAIN-2021-43 | refinedweb | 114 | 72.26 |
Design, Expression Blend, Silverlight, WPF
Here is a collection of a few of the blog posts I came across around the new databinding features in Blend 3. Hope you find them useful!
Sample data
Master/Detail scenarios
Element to Element binding
TreeView control
If you do run into a situation where the Blend flyout menus appear on the left instead of the right as shown below, the Tablet PC Settings panel is a good place to check.
While.
For Blend 3, we have completely re-designed the Asset Library.
Here are some of the highlights:
a) Categorization for various assets makes discoverability easier.b) Searchability allows for quick location of an asset across categories like Controls, Effects and Behaviors.b) Freely dockable anywhere in the UI. We also have left the popup mode unchanged for quick one-time access to assets.c) Extensible - you can register your own assets that make it easy for inclusion into the projects on an on-demand basis. Adding an asset like a Control or a Behavior will add all necessary references required for the functioning of the project automatically.d) List and Grid modes for the display of assets.
Registering your own library (or sets of libraries) in the asset tool is very simple. All you have to do is to setup a registry key that points to a folder containing the libraries. As an example, the Silverlight SDK registers itself into the Blend asset tool using the following key/value pair on a 64-bit machine:Key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Expression\Blend\3.0\Toolbox\Silverlight\v3.0\Silverlight SDK Client LibrariesValue: c:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Libraries\Client\
For a WPF example, the following is the way the WPF ToolKit registers itself on a 32-bit machine:Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Expression\Blend\3.0\Toolbox\WPF\v3.0\WPFToolkitValue: C:\Program Files\WPF Toolkit\v3.5.40320.1
There are a few things you can customize around the display of assets:
The Icon: For supplying a custom icon, all you need to do is to add an image to the control library (preferrably its design-time library that helps keep the size of the library small) that uses the namespace qualified name of the control for its name. For example, say you had a custom control Foo. The icon would be then called FoosNamespace.Foo.png. Few things to keep in mind: Use EmbeddedResource as the build item for the Image. Blend's asset library only supports PNGs. If you wanted a 24x24 and 12x12 version of icons (since we use these two standard sizes in various places of our UI), all you have to do is to name the PNGs as follows: FoosNamespace.Foo.SmallIcon.PNG, and FoosNamespace.Foo.LargeIcon.PNG. The SmallIcon/LargeIcon part don't actually matter - you can pick a string of your choice, and we will dynamically determine and pick up the appropriate icon for display.
Description: You can use the DescriptionAttribute to change the string that is displayed as a tooltip for the asset.
Asset Library availability: If you wanted to prevent a particular asset from being visble in the Asset Library, you could use the ToolBoxBrowsable attribute as a part of the metadata specification for that asset in the design-time library.
Location in the category hierarchy: Again done via a newly introduced attribute (ToolBoxCategoryAttribute) that you can supply via a design-time library.!
A.
A).
Based on popular demand, here is a quick sample (WPF only though it would be very easy to port this as-is to Silverlight - currently busy with Blend 3 work and all the cool newer features we are adding to Blend 3) that will allow you to get started on the following new extension points we have added to Blend 3:a) The new way to specifying metadata for your controlsb) DefaultInitializer that allows you to set properties when a control in instantiated from the Blend asset libraryc) Custom context menusd) An adorner for the controlLet me know if you wanted some samples for any specific scenarios you might be interested in.
Writing.
The project created from this project template works as follows::
Enjoy, and please do let me know if you run into issues as you work with Silverlight design time experiences so we can address them.
Blend 3 adds really cool support for the DataGrid control (for both Silverlight 3 and WPF Toolkit). Here is a quick demo to help you get started. Enjoy!
To help you walk thru some of support we have added for sample data inside Blend 3, I wrote a hands-on-lab that takes you thru the basic experience of building a simple master/detail visualization. While there are a lot more scenarios that are possible with the support for sample data in Blend 3 (some of which are not available in the public preview build, and I will in write in detail about over the next few days), you can get a first-hand experience here for the following:
Download the hands-on-lab document here (because of the ISP that I use for hosting these files, you will have to rename this .zip file to .docx), and the starting project (to save you some time with the graphics, styling, and layout) here.
I am sure you must have heard - Expression Blend 3 Preview is here! And I am back after a 2 year break from writing on this blog - if you were to use the product, you will realize what kept us busy :).
If I were to list the new features in Blend 3, it would take me a day to write. Instead, I am going to list just the set of features that I personally was responsible for:
a) The new databinding experience - We have radically redone the databinding experience in Blend 3. Be sure to check out the newly introduced support for sample data, which is a huge enabler in design scenarios that previously required writing code, or were just not possible. We also have brought to the table an unmatched DataGrid editing experience, and added have support for easily creating Master/Detail visualizations (Did I mention that creating the "Hello World" of RIAs - an RSS reader - is only a couple clicks away in Silverlight 3 using Blend 3?). More on all this shortly...b) XAML Intellisense - easily the number one requested feature of Blend. Also, Blend now has C# and VisualBasic code editing.c) TFS support - read more hered) Silverlight and WPF Extensibility - read more here
A bunch of other small things that I will cover over the next few days (there are also a few more tricks that we are still holding up our sleeves, and unfortunately, I won't be able to share much information about them). You also can watch a quick video of me introducing Blend 3 here (it is very, very hard to do justice to most Blend 3 features in 20 minutes, let alone all of them!).
Hope you enjoy using Silverlight 3 (my personal three favorite SL 3 features - shaders, projection transforms, and support for out-of-browser apps - which are yours?) and Blend 3, and please don't hestitate to ask questions.
Based on popular customer feedback, we have added a number of new extensibility points in Blend 3 (sorry, no support for plugins, yet, but who needs an officially supported extensibility model anyway? :) )
Over the next few weeks, I will provide more information on these, including interesting use cases that are already starting to pop up. Here is a quick listing of what we currently support:
Some other notes:Blend 3 will share the same extensibility model as the next version of Visual Studio.
As we live in a world where there is greater and greater parity between WPF and Silverlight, we wanted the same for these APIs wherein you could use the same design time code to target controls for both platforms. To accomodate this, we had to make some minor breaking changes to the VS 2008 APIs (hopefully the changes will be very straightforward to adapt to) - I will try to post a number of samples to help out with this. By and large, the APIs remain the same as documented here (the documentation will be updated to reflect the breaking changes at a later date).
The APIs are not final yet, and are subject to change. Hopefully, we can keep the changes to a minimum.
Expression Blend 3 Preview adds support for integration with Team Foundation Server, one of our top feature requests.Some examples of the various integration points:a) Saving a file automatically checks it outb) Adding a new UserControl or assets to a project automatically adds them to source controlc) Renaming or deleting files automatically renames or deletes items under source controld) Right clicking on an item that has been modified under source control allows you to submit that particular changee) View history, get latest versions of files or specific versions, undo changes, etc.To enable source control for a solution open inside Blend, the solution must be bound and residing in a valid workspace on the client. You can refer to the Visual Studio documentation on how to setup a solution under TFS source control here.
While you don't need to have Visual Studio 2008 installed on the machine to avail TFS support inside Blend, you do need to install Visual Studio Team System 2008 Team Explorer, a free download. You also need to install SP1 of Team Explorer, and a hotfix for Team Explorer SP1 that enables TFS support inside Blend - you can download that from here.
Enjoy!
With. | http://blogs.msdn.com/unnir/ | crawl-002 | refinedweb | 1,616 | 55.27 |
In web-based applications, Message Box, Confirmation Box etc are used very often to alert user or to ask user's confirmation for some important operation. In web forms, these popup boxes are usually implemented using JavaScript that runs on client side, for example, using alert, confirm etc functions. A disadvantage of this solution is that the popup time of those prompt boxes is statically specified on client side at design time, so they can't be flexibly used on server side like a server side component, which is very inconvenient, especially for complicated commercial applications that have very complex business logic but need to provide friendly user interface.
In Lee Gunn's article "Simple Message Box functionality in ASP.NET", he only solved the Message Box problem, but didn't mention Confirmation Box issue. In this article, we will introduce a simple but very practical server control that can perform the functionality of either Message Box or Confirmation Box. What's more, the configuration and use of this server control is also very easy.
To use the server control in the code, first you need to add the server control as a .NET Framework Components to your web application. Detailed steps are: Right click the "components" tab in the toolbox, click "Add/Remove Items", there's a window pop up, in the .NET Framework components tab, click "Brower", and select and add the server control executable from where you kept in your computer. The file name is msgBox.dll. Then, drag and drop the server control to your web form. (Please note here: please put the server control at the last position of the web form, else there will be some unexpected result).
The following is WebForm1.aspx code with the server control inside.
<%@ Register <HTML> <HEAD> <title>WebForm1</title> <meta content="Microsoft Visual Studio .NET 7.1" name="GENERATOR"> <meta content="C#" name="CODE_LANGUAGE"> <meta content="JavaScript" name="vs_defaultClientScript"> <meta content="" name="vs_targetSchema"> </HEAD> <body> <form id="Form1" method="post" runat="server"> <asp:Button</asp:Button> <asp:TextBox </asp:TextBox> <cc1:msgBox</cc1:msgBox> </form> </body> </HTML>
Suppose such a simple scenario: when you click Button1, you will pop up a
Message Box if no text input in TextBox1, but if there's input in
TextBox1, you will pop up a confirmation box to ask user if he
wants to continue. Corresponding Button-Click event handling code is listed as
below.
To pop up a Message Box , you only need to call
alert(string
msg) method of the server control. It is very easy so don't explain here.
We will mainly introduce how to pop up a confirmation box here. The second
parameter of method
confirm(string msg, string hiddenfield_name) is
the name of a hidden field element of the web form which the server control also
belongs to. You don't need to explicitly create the hidden field component, the
server control will create one for you using the hidden field name you provide,
but you need to provide a unique hidden field name that differentiate itself
from the other components in the web form. The original value of the hidden
field is "0". When the user click "ok" to confirm the operation, the server
control will change the value of the previously specified hidden field value
from "0" to "1".
private void Button1_Click(object sender, System.EventArgs e) { //for the page with only one form if(TextBox1.Text!=null && TextBox1.Text!="") MsgBox1.alert("Please input something in the text box."); else MsgBox1.confirm("Hello "+ TextBox1.Text + "! do you want to continue?", "hid_f"); }
If the user answers the confirmation box by clicking either "OK" or "CANCEL"
button, the web page will be posted back, so you need to write corresponding
code to capture and process that. That piece of code is usually put in the
Page_Load() method of the code behind of the ASPX page. By checking
if the value of the hidden field element of the form is changed to "1", which
means the user confirms, you can put corresponding processing code as below.
Please don't forget to reset the hidden field value back to original value "0",
otherwise something will be wrong the next time to invoke the confirm box.
private void Page_Load(object sender, System.EventArgs e) { if(IsPostBack) { //PLEASE COPY THE FOLLOWING CODE TO YOUR Page_Load() //WHEN USING THE SERVER CONTROL in your page if(Request.Form["hid_f"]=="1") //if user clicks "OK" to confirm { Request.Form["hid_f"].Replace("1","0"); //Reset the hidden field back to original value "0" //Put the continuing processing code MsgBox1.alert("hello " + TextBox1.Text); } //END OF CODE TO BE COPIED } }
That's it. Very easy and simple, isn't it? All above C# code is within WebForm1.aspx.cs, the code behind of WebForm1.aspx.
The other thing we need to mention is that in above scenario, we assume that
WebForm1.aspx only includes ONE web form. Actually, in ASP.NET, each ASPX page
only supports one web form server control, so for most cases, above solution is
enough. However, there're also very few cases such that in some ASPX page,
there's one web form server control and also other traditional HTML form, i.e.,
the page includes more than one web forms. Using the msgBox server control can
also easily solve this issue. What you need to do is to put the msgBox as an
element of the server control web form and this web form should be the first
form in the web page. The rest process remains the same. The WebForm2.aspx in
the demo project shows this case.
The normal HTML form element is not a server control so it can't post back to the code behind of the ASPX page, so you need to write the processing code somewhere else, in the following sample, those processing code is shown in the same WebForm2.aspx page.
<%@ Register <HTML> <HEAD> <title>WebForm2</title> <% if(Request.Form["btn"]!=null) { //for the page with more than one forms if(Request.Form["text1"]=="") { //Response.Write("Hi"); MsgBox1.alert( "Button2 clicked. Please input something in the second text box."); } else { MsgBox1.confirm("Button2 clicked. Hello "+ Request.Form["text1"].ToString() + "! do you want to continue?", "hid_f2"); } } if(Request.Form["hid_f2"]=="1") //if button2 is clicked and user confirmed { //Your processing code here MsgBox1.alert("Button2 Clicked and user confirmed. Hello " + Request.Form["text1"]); Request.Form["hid_f2"].Replace("1","0"); } %> <meta content="Microsoft Visual Studio .NET 7.1" name="GENERATOR"> <meta content="C#" name="CODE_LANGUAGE"> <meta content="JavaScript" name="vs_defaultClientScript"> <meta content= </HEAD> <body MS_POSITIONING="GridLayout"> <DIV style="LEFT: 56px; WIDTH: 512px; POSITION: absolute; TOP: 136px; HEIGHT: 128px" ms_positioning="FlowLayout"> <FORM id="Form1" method="post" runat="server"> <asp:button </asp:button><asp:textbox </asp:textbox><cc1:msgbox</cc1:msgbox></FORM> <FORM id="Form2" method="post"> <INPUT id="btn" type="submit" value="Button2" name="btn" runat="server"> <INPUT id="text1" style="WIDTH: 224px; HEIGHT: 31px" type="text" size="32" name="text1" runat="server"> </FORM> </DIV> </body> </HTML>
The basic mechanism of msgBox server control is actually very simple, and may need better improvement later, but the convenience it brings is also tremendous. The key thing inside the server control is that it outputs the corresponding JavaScript code to HTML during the server control rendering phase, and it utilize JavaScript to change the value of the hidden field that it creates so that the hidden field can work as a tag representing the user's behavior (user's answer to the confirmation box). Please note, the hidden field control is a pure HTML element, NOT a server control. Only in this way, it can be accessed in JavaScript code, and its value will also be posted to server when the form is posted. The following is the source code for msgBox server control, it's pretty easy for you to understand if you know a little of ASP.NET server control life cycle.
using System; using System.Web.UI; using System.Web.UI.WebControls; using System.ComponentModel; using System.Text; namespace BunnyBear { /// <summary> /// Summary description for WebCustomControl1. /// </summary> [DefaultProperty("Text"), ToolboxData("<{0}:msgBox</{0}:msgBox>")] public class msgBox : System.Web.UI.WebControls.WebControl { //private string msg; private string content;
true), Category("Appearance"), DefaultValue("")> public void alert(string msg) { string sMsg = msg.Replace( "\n", "\\n" ); sMsg = msg.Replace( "\"", "'" ); StringBuilder sb = new StringBuilder(); sb.Append( @"<script language="'javascript'">" ); sb.Append( @"alert( """ + sMsg + @""" );" ); sb.Append( @"</script>" ); content=sb.ToString(); } //confirmation box public void confirm(string msg,string hiddenfield_name) { string sMsg = msg.Replace( "\n", "\\n" ); sMsg = msg.Replace( "\"", "'" ); StringBuilder sb = new StringBuilder(); sb.Append( @"<INPUT type=hidden"); sb.Append( @"<script language="'javascript'">" ); sb.Append( @" if(confirm( """ + sMsg + @""" ))" ); sb.Append( @" { "); sb.Append( "document.forms[0]." + hiddenfield_name + ".value='1';" + "document.forms[0].submit(); }" ); sb.Append( @" else { "); sb.Append("document.forms[0]." + hiddenfield_name + ".value='0'; }" ); sb.Append( @"</script>" ); content=sb.ToString(); } /// <summary> /// Render this control to the output parameter specified. /// </summary> /// <param name="output"> The HTML writer to write out to </param> protected override void Render(HtmlTextWriter output) { output.Write(this.content); } } }
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/webforms/NingLiangSimpleControl.aspx | crawl-002 | refinedweb | 1,515 | 56.76 |
Default Console Application:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace ConsoleApplication3 { class Program { static void Main(string[] args) { } } }
using:
- Is there any reason you wouldn't want to call System every time?
- Why are these four specific classes called from System on a new application?
- Are System.Collections.Generic and System.Threading.Tasks using nested classes (i.e. System.Class.SubClass)?
namespace:
- C# compiles without using namespace; is there any reason to include it every time?
- What is the purpose of using multiple namespaces in a single file?
- What is the purpose of using nested namespaces over classes?
class:
- If I'm using Main() in multiple classes, does the compiler just start with the first one it sees?
- Is there any reason to include "string[] args" if I don't intend to accept arguments?
by 6f00ff via /r/csharp | http://howtocode.net/2015/04/help-analyzing-the-visual-studio-console-defaults/ | CC-MAIN-2017-47 | refinedweb | 149 | 53.47 |
This is your resource to discuss support topics with your peers, and learn from each other.
02-11-2013 11:17 PM - edited 02-12-2013 05:41 AM
in my previous post I came to the realization that I need to interact with screen events from C++
moving this to a new thread because it's not related to the keyboard....
I'm trying to interact with the entire screen (bezel & all) however this doesnt seem to be possible from QML
Im currently using the BPS Monitor & discovered this Multi-touch Handler
I belive that i've managed to get it working with the bps monitor but to test it
my issue is, trying to get a qml label to update with the current X, Y positions
//in appname.cpp AbstractPane *root = qml->createRootObject<AbstractPane>(); Label *mLabel = root->findChild<Label*>("rootLabel"); mLabel->setText("C++ Label");
I'm able to change the text of the qml label but I do not know how to access the child object in my ScreenService.cpp file, currently its accessed in appName.cpp
case SCREEN_EVENT_MTOUCH_TOUCH: if (!found) { Touchpoint *tp = new Touchpoint(mtouch_event.x, mtouch_event.y, mtouch_event.contact_id); if(mtouch_event.contact_id<4){ //interact with touch events here // such as mLabel->setText("X " + mtouch_event.x + ", Y " + mtouch_event.y); }
02-12-2013 06:17 AM
02-12-2013 12:15 PM
doing that in screenService.cpp tells me that application is not declared....
It would probably be easier if i linked the X & Y values from screenService & update the label with them later but I still have no idea how to do such a thing using c++, once i want to leave the current page with a value, nothing wants to work
i do have access to touchpoint.cpp/h , it has a value of Touchpoint::getX & getY which would return the values to me if it ever decides to cooperate
02-12-2013 12:18 PM - edited 02-12-2013 12:26 PM
Add these lines to screenService.cpp:
#include <bb/cascades/Application> using namespace bb::cascades;
Regarding the second approach:
Declare a signal in ScreenService header:
signals: void positionUpdated(int x, int y);
Emit the signal when position updates:
emit positionUpdated(newX, newY);
Now any other class can subscribe to this signal.
Declare a slot in class which instantiates ScreenService:
public slots: void onPositionUpdated(int x, int y);
After creating ScreenServer instance connect it's signal to the slot:
QObject::connect(screenServer, SIGNAL(positionUpdated(int,int)),
this, SLOT(onPositionUpdated(int,int)));
In onPositionUpdated handler update the label.
02-12-2013 12:25 PM
thanks for the fast response, that light everything up as if it should work but when i go to build it tells me
invalid use of incomplete type 'struct bb::cascades::AbstractPane'
02-12-2013 12:28 PM - edited 02-12-2013 12:28 PM
02-12-2013 01:54 PM - edited 02-12-2013 03:06 PM
Thanks again for the help, i've nearly figured it out & sorry to be a plague c++ is not my friend
it's saying newX & Y don't exist so
emit positionUpdated(newX, newY);
if i replace it with this which is setup to values set touchpoint.cpp
emit positionUpdated(mtouch_event.x, mtouch_event.y);
it gives me no errors (or visible results) when i build this
void ScreenService::onPositionUpdated(int x, int y){ QString xstring = QString::number(x); QString ystring = QString::number(y); Label *s = Application::instance()->scene()->findChild<Label *>( "rootLabel"); QObject::connect(s, SIGNAL(positionUpdated(int,int)), this, SLOT(onPositionUpdated(int,int))); s->setText("X " + xstring + ", Y " + ystring); }
...if only this all worked in qml i'd have been done days ago lol
--Edit after removing all intergers etc, i was unable to get the label text to change while on the services.cpp page so i went back to appName.cpp & the text changes like a charm
AbstractPane *root = qml->createRootObject<AbstractPane>(); Label *mLabel = root->findChild<Label*>("rootLabel"); mLabel->setText("C++ Label");
only, I've no clue how to pull the integer values from touchpoint.cpp's Touchpoint::getX & getY & update the text with that information from appName.cpp
02-12-2013 03:12 PM
No problem.
The simplest approach is probably going back to the original code and replacing this line:
// such as mLabel->setText("X " + mtouch_event.x + ", Y " + mtouch_event.y);
with:
Label *rootLabel = Application::instance()->scene()->findChild<Label *>("rootLabel");
QString xString = QString::number(mtouch_event.x);
QString yString = QString::number(mtouch_event.y); rootLabel->setText("X " + xString + ", Y " + yString);
And adding
#include <bb/cascades/Application>
#include <bb/cascades/Label>
using namespace bb::cascades;
on the top of the file.
06-07-2014 09:07 AM
I am trying to also receive screen events using platform services. I have done screen_create_context() and screen_request_events() but I am not receiving touch events. Do you I have to create a screen (i.e. screen_create_window() ?
I am trying to use those events in cascades, is this even possible? | https://supportforums.blackberry.com/t5/Native-Development/Screen-Events/m-p/2159271 | CC-MAIN-2017-13 | refinedweb | 821 | 50.26 |
- Portable Asphalt Plant For Sale | Mobile Asphalt Plant
small asphalt plant of capacity 20 30 tph for sale .. bitumen tank and control cabin of the portable asphalt plants for sale .. portable asphalt plant 20 30 tph
- Asphalt Batch Mix Plant Atlas Industries
atlas industries offers asphalt batching plant for sale. we have developed this equipment to help contractors produce hot mix asphalt using discontinuous process with
- Batch Mix Asphalt Plant Price In The Republic Of Cote D
batch mix asphalt plant price in the republic of quality asphalt batch mix plant for sale of mix plant,mobile asphalt plant,small asphalt plant. china
- Adm Asphalt / Pavers / Concrete Equipment For Sale
post a free for sale listing .. adm asphalt / pavers / concrete equipment for sale .. used 1984 adm model 100 asphalt plant w/ wet scrubber,
- Used Concrete Batch Plants For Sale. Bandit Equipment
search 39 listings for used concrete batch plants. find bandit, czech republic. click to contact demo unit skidpatcher 10a mobile asphalt plant overview
- Asphalt Production Machine Hot Sale In Republic Of
portable asphalt plant for sale | mobile asphalt. small asphalt plant hot mix asphalt czech republic; of batch and drum mix asphalt plants can meet any
- Used Asphalt Mixing Plants For Sale A Usa
numerous online ads with used asphalt mixing plants for sale. find used asphalt mixing plants at a usa. a usa .. czech republic (1) finland (1) france (3
- Mobile Asphalt Drum Mix Plant Atlas Industries
atlas industries is a provider of mobile asphalt drum mix plant which is available in the capacities of 20 small asphalt drum mixer asphalt batching plant
- Machinerytrader | Asphalt / Pavers / Concrete
post a free for sale listing .. ross asphalt / pavers / concrete equipment for mix mixer, radial conveyer 68 ross 7000 portable concrete batch plant
- Mobile Asphalt Plant | Asphalt Mixing Plant Manufacturer
mobile asphalt plant atlas has one concrete solution for customers who are looking for quality mobile asphalt plant for sale .. asphalt batching plant;
- Asphalt Plant For Kazakhstan Market Asphalt Mixing Plant
feb 26, 2014· moldova, russia, poland, czech republic, 80tph small asphalt mixing plant,asphalt mixed asphalt batching plant for sale in china hot sale
- Mobile Asphalt Plant For Sale In Republic Of Uzbekistan
portable asphalt plant for sale | mobile asphalt mobile asphalt plants on a poland, czech republic, sale and export asphalt batching plants with
- Used Batch Plant For Sale. Xuetao Equipment & More |
search 160 listings for used batch plant. find czech republic (3 dryer from stansteel model rm30 asphalt batch plant all offers accepted will sell
- Asphalt Mixing Plant For Sale In Republic Of Uzbekistan
roadstar 40 to 90 tph a plantportable/mobile asphalt batch mixing plants portable asphalt plant for sale | mobile asphalt plant . small czech republic;
- Bitumen Hot Mix Plant For Sale In Lao Peoples Republic
bitumen hot mix plant for sale in lao peoples republic .. czech republic karippa s.r. 010 for 96t/h lb1200 small asphalt batch mix plant for sale asphalt mixing
- Sold In Russia, Czech Republic Moving Concrete Batching Plant
asphalt plant, concrete batching earth czech republic, concrete batching plant well sold concrete mixing plant for sale 1996 09 04; small size well
- Sold In Russia, Czech Republic Moving Concrete Batching Plant
asphalt / pavers / concrete . up for sale is a concretion concrete batching plant. concrete batching plant in sri lanka, czech republic jzc250 small
- Asphalt Plants. Batch & Drum Mix | A Plant
asphalt batch mixing and continuous parallel drum mix plants in mobile, modular and static configurations with output capacity ranging from 4 to 400tph
- Hot Mix Asphalt Commercial Asphalt Mixes | Cemex
czech republic; dominican we have grown from a small regional cement firm into a top global building hot mix asphalt & other commercial asphalt mixes .
- New & Used Concrete Batching Plants For Sale By Vince
vince hagan has new & used concrete batching plants for sale. please contact our sales department at (800) 354 3238 with any questions.
- Better Pld2400 Simple Concrete Batch Plant
better pld2400 simple concrete batch plant .. czech republic hzs90. asphalt mixing plant, batch low cost concrete batching plant, small asphalt mixer how
- Used Asphalt Mixing Plants For Sale A Uk
buy used asphalt mixing plants for sale on a uk .. czech republic (1) finland (1) france (3) a blackmove 200 mobile batch plant movement type
- Region And Language | A
plants. asphalt plants. batch asphalt mixing plants; continuous asphalt mixing plants; complementary products; a group. copyright ©2016 all rights reserved.
- Asphalt Equipment For Sale & Rental New & Used Asphalt
new & used asphalt equipment for sale & rental asphalt batch plants | mixers (28) asphalt chip spreaders construction on roadways is done using asphalt equipment.
- Asphalt Plant Component For Sale & Rental New &
asphalt plant component for sale & rental | rock & dirt. search from 1000's of listings for new & used asphalt plant components for sale or rental updated daily from
- [pdf]batch Asphalt Mixing Plants Overview
czech republic alfeld germany small and in between. plants are mobile, asphalt mixing plants a batch plants provide the
- Mobile China Concrete Mixing Batch Plant Hot Sale In
batching plant for sale supplier, small asphalt plant of capacity 20 30 tph for sale. mobile asphalt plant gives portable asphalt plant czech republic;
- 20t Per Hour Mobile Type Asphaltum Batch Mix Plant For Sale
china 20t per hour mobile type asphaltum batch mix plant for sale with high quality, leading 20t per hour mobile type asphaltum batch mix plant for sale manufacturers
- Bitumen Hot Mix Plant Cost In Central African Republic
selected elvaloy® ret project examples from in czech republic ·high quality dayu lb series asphalt batch mix plant ·qlb5 small mobile asphalt mixing plant
- Concrete Batching Plant For Costa Rica Mj Concrete
czech republic; atlas asphalt batch plant concrete batch plant for sale & rental search from 1000's of listings for new & used batch plants for sale or
- Mobile Asphalt Plant Price In Democratic Republic Of
mobile asphalt plant price in democratic republic of the congo .. batch asphalt plants; democratic republic of the 30t/h small asphalt plant sale,mobile
- Hzs50 Russia Cement Batching Station Concrete Batching Plant
mixing plant hot mix asphalt plant dry mortar small concrete batching plant for sale,us $ batching plant sold in russia, czech republic hzs50
- Asphalt / Pavers / Concrete Equipment For Sale
buy 2017 a as4252c, 1995 a tr3503, 1998 a tr4503, leeboy 8000, soff cut gx1500, 2008 T-Company a rs425c, asphalt zipper az600b, 2015 allen eng at16, 1987 a we
- Mini Asphalt Mix Plant, Mini Asphalt Mix Plant Suppliers
40t/h small asphalt plant, batch type mini asphalt hot mix plant price .. mini 60 80t/h hot mix process asphalt batch plant for sale .. (islamic republic of) (2)
- Commercial Concrete Batching Plant On Sale (factory
mixing batch plant on sale / hzs60 concrete batching pl czech republic, concrete mixing batching plant,mobile concrete batch plant, hot mixing asphalt plant
- 60m3/h Concrete Batch Plant Sale/concrete Mixing Batch
60m3/h concrete batch plant sale/concrete mixing batch plant in real e .. sustainable hydration in czech republic bottled .. small asphalt
- Concrete Plants | A Roadbuilding
a roadbuilding offers a complete range of dry and wet mix concrete plants from 4 yd highly versatile 5yd³ concrete batch plant, asphalt plants. counter
- New/used Concrete Batching Plants For Sale Saudi
new and used, small or second hand concrete batching plant for sale, buy in jeddah, riyadh, saudi arabia. find heavy machinery, industrial, construction equipment
- Asphalt Mixer 7.5 Litre Capacity
used mixers and blenders for sale in czech republic. (7) automatic asphalt large laboratory mixer, 32 .. batching plants mobile concrete batching plant.
- Asphalt Mixing Plant,china Asphalt Mixing Plant
jdc350 small concrete mixers for sale, mini asphalt batch plant uses the structural style which the trailer domain drying cyclinder and the mixer are united
- China Asphalt Plant Equipment Suppliers, Asphalt Plant
import china asphalt plant equipment from various high quality chinese asphalt plant equipment 40tph small asphalt batch mix plant .. czech republic (172
- Dowin Factory Ready Mix Concrete Plant Manufacturers
25cbm/h small concrete batching plant with factory concrete batch asphalt mixing plant for sale in page 3 the first century of the republic .. bessemer plant;
- Mobile Concrete Batching Plant In Mexico Induriet.org
ready mix small mobile concrete batching plant in mexicomobile alibaba mb1800 portable mobile concrete batching plant for sale 20 czech republic,
- Used Asphalt Machines For Sale A Uk
buy used asphalt machines for sale on czech republic (12 barber greene ba1500 asphalt batching plant,this asphalt batching plant will produce a
- Used A Asphalt Pavers For Sale Baupool.co.uk
buy a asphalt pavers used or new on baupool.co.uk. best prices by professional used a asphalt pavers for sale .. a blackmove 200 mobile batch plant.
- Ranger 12 Portable Concrete Batch Plant | A
asphalt plants. counter flow the a ranger 750 twin is the ultimate in high production mobile concrete batch plants for the paving ranger 12. output
- Asphalt Plant For Bilfinger Berger
julius berger quarry in nigeria. bilfinger berger gas and oil services nigeria ltd. address julius berger yard, ijora causeway, ijora lagos .. an asphalt plant
- Asphalt Plant China For Sale In Seychelles
2016 t company magnum 140 asphalt batch plant for sale china. croatia. cuba. czech republic portable asphalt plant for sale | mobile asphalt. small
- China Mobile Mini Asphalt Plant For Asle From China
china mobile mini asphalt plant for asle from china with find mobile asphalt plant for sale,small asphalt plant,mini asphalt plant on batching machine
- Machinerytrader.ie | Pme Asphalt / Pavers / Concrete
pme asphalt / pavers mobile concrete batching plant you can find used tracked paving equipment for sale if you would prefer to reduce the costs
- Stationary Batching Plant For Concrete Concrete Mixing
concrete batching plant hzs25 small type for sale. concrete batching plant stationary. 17 listings found. select united states (2) czech republic
- Contact | Mix Right Portable Concrete Mixers, Batch Plants
contact mix right for pricing on portable concrete mixers, batch plants & cement right manufacturing systems is a small company making big things in the concrete
- Asphalt Concrete Wikipedia
asphalt batch mix plant .. asphalt concrete (commonly called asphalt, small scale asphalt recycling is used when wanting to make smaller road repairs vs
- Costa Rica Concrete Batch Plant Manager Jobs Batching Plant
costa rica concrete batch plant manager jobs .. croatia; czech republic; dominican republic a plant's range of batch and drum mix asphalt plants
- Asphalt Plants Manufacturers, China Asphalt Plants
find asphalt plants manufacturers from china .. asphalt batching plant with capability of 160t/hour, czech republic (1388) denmark (2655) djibouti
- Batching Systems, Batching Systems Suppliers And
batching systems, wholesale various high quality batching systems products from global batching systems suppliers and batching systems factory,importer,exporter at
- Asphalt Mixing Machine For Sale In Germany Grinding
asphalt mixing machine for sale in germany czech republic (1) finland (1) france (8) asphalt plant, cement plant, batching plant ,
- 2015 China Top Quality With Ce,sgs Certificate Ar 200
czech republic; denmark batch mix chinese factory low price 80t/h fixed asphalt batch hot mix plant,asphalt mixing plant sale small dry mix
- Infrared Asphalt Heater P200 Folding Design
used asphalt plants (17) used baghouses (9) p200 infrared asphalt heater. more images .. sale price $5,800 . more info.
- Aggregates/asphalt Materials Calculator Cemex Usa
czech republic; dominican republic; we have grown from a small regional cement firm into a top global building solution aggregates/asphalt materials calculator. | http://www.aboutastrology.co.in/china_made/31702/small-asphalt-batch-plant-for-sale-in-czech-republic.html | CC-MAIN-2017-47 | refinedweb | 1,875 | 53.34 |
Vectorization
Introduction
Enabling your application to take advantage of vectorization is an important component of achieving high performance on today's supercomputers. And, vectorization will be even more important on Cori, the next NERSC supercomputer system, because it will support AVX-512 instructions, 512-bit (64-byte) extensions of AVX. Vectorization allows you to execute a single instruction on multiple data objects in parallel within a single CPU core, thus improving performance. This is different from task parallelism using MPI, OpenMP or other parallel libraries where additional cores or nodes are added to take care of data belonging to separate tasks placed on different cores or nodes. This way of parallelism (data parallelism) is possible since modern computer architecture includes SIMD (Single Instruction, Multiple Data) instructions which are able to perform a single instruction on multiple data objects on a single CPU core at the same time.
SSE (Streaming SIMD Extensions) is an SIMD extension to the x86 instruction set architecture first introduced in 1999 by Intel and subsequently expanded later. 128-bit registers are provided to handle SSE instructions, to operate simultaneously on two 64-bit (8-byte) double precison numbers, four 32-bit (4-byte) floating point numbers, two 64-bit integers, four 32-bit integers, eight 16-bit short integers or sixteen 8-bit bytes.
Advanced Vector Extensions (AVX) are vector instructions added to to the x86 instruction set architecture whose vector width is further extended to 256 bits (or 32 bytes). 256-bit wide registers are available to handle such instructions. These instructions were introduced by Intel with the Sandy Bridge processor shipping in 2011 and later that year by AMD with the Bulldozer processor. AVX instructions simultaneously operate on 8 pairs of single precision (4 bytes) operands or 4 pairs of double precision (8 bytes) operands, etc. This can speed up a vectorizable loop by up to 8 times in case of single precison (4 bytes), or 4 times in case of double precision (8 bytes), etc.
AVX and SSE instructions are available on Edison nodes. On Hopper, only SSE instructions are available.
We often use the term 'vector length' when talking about vectorization. This is the number of variables that can be simultaneouly operated on by a vector instruction. Since SSE registers are 128-bit (16-byte) wide, the vector length is four if single precison (4 bytes) variables are used, and two with double precison variables. AVX registers are 256-bit (32-byte) wide, thus the vector length will double the vector length for SSE instructions, eight for single precison and four for double precision.
Vectorization
To vectorize a loop, the compiler first unrolls the loop by the vector length, and then packs multiple scalar instructions into a single vector instruction. This optimization process is called 'vectorization'. In this example the vector length is 4. Then, the following loop
do i=1,100 a(i) = b(i) + c(i) end do
is transformed into
do i=1,100,4 ! unrolled 4 times a(i) = b(i) + c(i) a(i+1) = b(i+1) + c(i+1) a(i+2) = b(i+2) + c(i+2) a(i+3) = b(i+3) + c(i+3) end do
and then into
do i=1,100,4 <load b(i), b(i+1), b(i+2), b(i+3) into a vector register, vB> <load c(i), c(i+1), c(i+2), c(i+3) into a vector register, vC> vA = vB + vC ! 4 additions are now done in a single step <store a(i), a(i+1), a(i+2), a(i+3) from the vector register, vA> end do
By executing multiple operations in a single step, performance can potentially improve by a factor of up to the vector length (4 in this example), over scalar mode where one pair of operands are being operated on sequentially.
Compilers can auto-vectorize loops for you that are considered safe for vectorization. In case of the Intel compiler, this happens when you compile at default optimization level (-O2) or higher. On the other hand, if you want to disable vectorization for any loop in a source file for any reason, you can do that by specifying the '-no-vec' compile flag.
For the Cray compiler, there are four levels of automatic vectorization, shown in the following table.
The following test code demonstrates the effect of vectorization in a simple vector add operation. The main computation loop is repeated itmax times, partly to use small arrays in order to show the effect of vectorization without this being severely affected by slow memory bandwidth with a large array, and also to have the computation time larger than the time resolution of the timing function used (1 microsecond). For a finer resolution, MPI_Wtime() could be used.
program vecadd !... Prepared by NERSC User Services Group !... April, 2014 implicit none integer :: n = 5184 integer :: itmax = 10000000 #ifdef REAL4 real, allocatable :: a(:), b(:), c(:) #else real*8, allocatable :: a(:), b(:), c(:) #endif integer i, it integer*8 c1, c2, cr, cm real*8 dt allocate (a(n), b(n), c(n)) !... Initialization do i=1,n a(i) = cos(i * 0.1) b(i) = sin(i * 0.1) c(i) = 0. end do !... Main loop call system_clock(c1, cr, cm) do it=1,itmax do i=1,n c(i) = a(i) + b(i) end do b(n) = sin(b(n)) end do call system_clock(c2, cr, cm) dt = dble(c2-c1)/dble(cr) print *, c(1)+c(n/2)+c(n), dt deallocate(a, b, c) end
The plot below shows performance speedup with vectorization for different array sizes observed on Edison using single ('R*4') and double ('R*8') precisions with the Intel compiler. The speedup is over 4 in single precision and over 2 in double precision when the data is all in the L1 cache (32 KB).
Performance improvement maximizes when all data is in caches. If data has to be moved from memory to caches frequently, performance gain with vectorization can become obscured by the cost for data movement, as can be seen in the following large array dimension cases. The speedup factor now becomes about 1.4 with single precision and 1.06 with double precision, outside the L2 cache boundary (256 KB).
For even larger array dimensions, the cost associated with data movement dominates so much that vectorization effect is almost non-existent or vectorization has an adverse effect. Below is a plot when memory usage is around the L3 cache size (30 MB).
This exercise demonstrates importance of efficient use of caches for better vectorization performance.
Vectorization inhibitors
Not all loops can be vectorized. A good introduction to vectorization with the Intel compiler ('A Guide to Vectorization with Intel C++ Compilers') summarizes the criteria for vectorization quite well:
(1) The loop trip count must be known at entry to the loop at runtime. Statements that can change the trip count dynamically at runtime (such as Fortran's 'EXIT', computed 'IF', etc. or C/C++'s 'break') must not be present inside the loop.
(2) In general, branching in the loop inhibits vectorization. Thus, C/C++'s 'switch' statements are not allowed. However, 'if' statements are allowed as long as they can be implemented as masked assignments. The calculation is done for all 'if' branches but the results is stored only for those elements for which the mask evaluates to true.
(3) Only the innermost loop is eligible for vectorization. If the compiler transforms an outer loop into an inner loop as a result of optimization, then the loop may be vectorized.
(4) A function call or I/O inside a loop prohibits vectorization. Intrinsic math functions such as 'cos()', 'sin()', etc. are allowed because such library functions are usually vectorized versions. A loop containing a function that is inlined by the compiler can be vectorized because there will be no more function call.
(5) There should be no data dependency in the loop. Some dependency types are as follows:
Read-after-write (also known as "flow dependency"): An example is shown below.
do i=2,n a(i) = a(i-1) + 1 end do
This type of dependency results in incorrect numerical results when performed in vector mode. Let's assume that the vector length is again 4. If the above is computed in vector mode, the old values of a(1), a(2), a(3), and a(4) from the previous loop will be loaded into the right hand side. On the other hand, in scalar mode, the new value computed in the previous scalar iteration, a(i-1), will be loaded into the right hand side. Therefore, the loop cannot be vectorized:
% ifort -vec-report3 atest.f atest.F(14): (col. 7) remark: loop was not vectorized: existence of vector dependence atest.F(16): (col. 7) remark: vector dependence: assumed FLOW dependence between atest line 16 and atest line 16
Write-after-read (also known as "anti-dependency"): An example is:
do i=2,n a(i-1) = a(i) + 1 end do
This loop can be vectorized.
Write-after-write (also known as "output dependency"): The following loop cannot be vectorized because incorrect numerical results obtained in vector mode.
do i=2,n a(i-1) = x(i) ... a(i) = 2. * i end do
However, reduction operations can be vectorized. In the following example, the compiler will vectorize and compute partial sums. At the end, the total sum is computed from partial sums.
s = 0. do i=1,n s = s + a(i) * b(i) end do
(6) Non-contiguous memory access hampers vectorization efficiency. Eight consecutive ints or floats, or four consecutive doubles, may be loaded directly from memory in a single AVX instruction. But if they are not adjacent, they must be loaded separately using multiple instructions, which is considerably less efficient.
Useful tools
Intel provides two very useful vector analysis tools.
There is an Intel compiler option '-vec-report=<n>', to generate a diagnostic report about vectorization. Use it to see why the compiler vectorizes certain loops and doesn't vectorize others. The report level can be selected by setting the flag to any number from 0 to 7. See the ifort, icc or icpc man page for details. This is an example of the report:
% ftn -o v.out -vec-report=3 yax.f yax.f(12): (col. 10) remark: LOOP WAS VECTORIZED yax.f(13): (col. 22) remark: loop was not vectorized: not inner loop yax.f(18): (col. 10) remark: LOOP WAS VECTORIZED yax.f(26): (col. 13) remark: LOOP WAS VECTORIZED yax.f(25): (col. 10) remark: loop was not vectorized: not inner loop yax.f(24): (col. 7) remark: loop was not vectorized: not inner loop
The compiler option '-guide' causes Guided Auto-Parallelization (GAP) feature turned on. It generates parallelization diagnostics, suggesting ways to imporove auto-vectorization, auto-parallelization, and data transformation. The option requires an optimization level of '-O2' or higher.
% ftn -real-size 64 -vec-report=3 -parallel -guide -o v.out alg.F alg.F(20): (col. 12) remark: LOOP WAS VECTORIZED alg.F(21): (col. 12) remark: LOOP WAS VECTORIZED alg.F(22): (col. 7) remark: LOOP WAS VECTORIZED alg.F(22): (col. 7) remark: loop was not vectorized: not inner loop alg.F(35): (col. 7) remark: LOOP WAS VECTORIZED alg.F(34): (col. 7) remark: loop was not vectorized: not inner loop alg.F(43): (col. 16) remark: LOOP WAS VECTORIZED alg.F(20): (col. 12) remark: LOOP WAS VECTORIZED alg.F(21): (col. 12) remark: LOOP WAS VECTORIZED alg.F(22): (col. 7) remark: loop was not vectorized: vectorization possible but seems inefficient alg.F(35): (col. 7) remark: LOOP WAS VECTORIZED alg.F(43): (col. 16) remark: LOOP WAS VECTORIZED GAP REPORT LOG OPENED ON Thu Apr 24 13:04:28 2014 alg.F(20): remark #30525: (PAR) Insert a "!dir$ loop count min(512)" statement right before the loop at line 20 to parallelize the loop. [VERIFY] Make sure that the loop has a minimum of 512 iterations. alg.F(21): remark #30525: (PAR) Insert a "!dir$ loop count min(512)" statement right before the loop at line 21 to parallelize the loop. [VERIFY] Make sure that the loop has a minimum of 512 iterations. alg.F(22): remark #30525: (PAR) Insert a "!dir$ loop count min(512)" statement right before the loop at line 22 to parallelize the loop. [VERIFY] Make sure that the loop has a minimum of 512 iterations. Number of advice-messages emitted for this compilation session: 3. END OF GAP REPORT LOG
The advice may include suggestions for source code modifications, applying specific pragmas, or add compiler options. In all cases, applying a particular suggestion requires the user to verify that it is safe to apply that particular suggestion.
The compiler does not produce any objects or executables when the -guide option is specified.
Note: With the version 15 of the Intel compiler (available now on Babbage), the compiler flag to get vectorization information is changing! The flags -vec-report=n, -par-report and -openmp-report are deprecated. Instead, use
-qopt-report[=N], N = 1-5 -qopt-report-phase=vec
for increasing levels of detail (default N=2). Also, compiler optimization reports now go to a file with the extension ".optrpt" and stderr is not the default. There will be one report generated for each object file created, in the directory where the object files are located. Each time you compile, any existing *.optrpt files will be overwritten. You can also restrict the report in various ways; e.g., using -qopt-report-routine:function_name
Useful Tools from the Cray Compiler
The Cray compiler listing option -rm provides a "loopmark" source code listing with optimizations including in the actual source listing. It writes this to a file source.lst where source.ext is the source file, e.g. source.f, source.F, source.f90, etc.
The -hnopattern option in this example is included to turn off Cray pattern matching optimization which replace source code lines with calls to math libraries, in this case to the DGEMM BLAS routine.
% ftn -hnopattern -hvector3 -rm matmat.F
produces a matmat.lst file which includes a source file listing with all loop optimizations indicated.
57. + 1------------< do it=1,itmax
58. + 1 br4--------< do j=1,n
59. + 1 br4 b------< do k=1,n
60. 1 br4 b Vr2--< do i=1,nr
61. 1 br4 b Vr2 c(i,j) = c(i,j) + a(i,k) * b(k,j)
62. 1 br4 b Vr2--> end do
63. 1 br4 b------> end do
64. 1 br4--------> end do
65. 1------------> end do
Elsewhere in the .lst file these marks are defined and the specific optimizations for each line described:
b - blocked
r - unrolled
V - Vectorized
....
ftn-6254 ftn: VECTOR File = matmat.F, Line = 57
A loop starting at line 57 was not vectorized because a recurrence was found on "c" at line 61.
ftn-6294 ftn: VECTOR File = matmat.F, Line = 58
A loop starting at line 58 was not vectorized because a better candidate was found at line 60.
ftn-6049 ftn: SCALAR File = matmat.F, Line = 58
A loop starting at line 58 was blocked with block size 8.
ftn-6005 ftn: SCALAR File = matmat.F, Line = 58
A loop starting at line 58 was unrolled 4 times.
ftn-6254 ftn: VECTOR File = matmat.F, Line = 59
A loop starting at line 59 was not vectorized because a recurrence was found on "c" at line 61.
ftn-6049 ftn: SCALAR File = matmat.F, Line = 59
A loop starting at line 59 was blocked with block size 16.
ftn-6005 ftn: SCALAR File = matmat.F, Line = 60
A loop starting at line 60 was unrolled 2 times.
ftn-6204 ftn: VECTOR File = matmat.F, Line = 60
A loop starting at line 60 was vectorized.
Compiler directives and pragmas
The compiler will not vectorize a loop if it perceives that there can be data dependency. However, when a user knows clearly that there is no data dependency, that information can be passed to the compiler to make the loop vectorize. This information is specified in source codes using compiler directives in Fortran or pragmas in C/C++. They are placed just before the start of the loop of interest. Other directives/pragmas are to command the compiler to perform a certain task such as to allocate memory for an array in a certain fashion.
Some of the commonly used ones in the Intel compiler are shown below. Please see the Intel documentation for a complete list.
'!dir$ ivdep' or '#pragma ivdep'
This tells the compiler to ignore vector dependencies in the loop that immediately follows the directive/pragma. However, this is just a recommendataion, and the compiler will not vectorize the loop if there is a clear dependency.
The Cray compiler supports the same directive.
'!dir$ vector' or '#pragma vector'
This overrides default heuristics for vectorization of the loop. You can provide a clause for a specific task. For example, '!dir$ vector always' (#pragma vector always) will try to vectorize the immediately-following loop that the compiler normally would not vectorize because of a performance efficiency reason. As another example, '!dir$ vector aligned' (#pragma vector aligned) will inform that all data in the loop are aligned at a certain byte boundary so that aligned load or store SSE or AVX instructions can be used (they are more efficient and less expensive instructions than unaligned load or store instructions).
This directive may be ignored by the compiler when it thinks that there is a data dependency in the loop.
This directive is interpreted differently by the Cray Fortran and C compilers. It turns on vectorization for all succeeding loop nests until a novector compiler directive is encountered or the end of the programming unit is reached.
'!dir$ novector' or '#pragma novector'
This tells the compiler to disable vectorizaton for the loop that follows.
The Cray C compiler interprets it solely for the following loop in the same way as the Intel compiler, but the Cray Fortran compiler turns off vectorization for all succeeding loop nests until a vector compiler directive is encountered or the end of the programming unit is reached.
'!dir$ simd' or '#pragma simd'
This is used to enforce vectorization for a loop that the compiler doesn't auto-vectorize even with the use of vectorization hints such as '!dir$ vector always' ('#pragma vector always') or '!dir$ ivdep' ('#pragam ivdep'). Because of this nature of enforcement, it is called user-mandated vectorization. A clause can be accompanied to give a more specific direction.
The loop in the following surboutine will not be vectorized since the compiler cannot know what the value of 'k' might be.
subroutine sub(a,b,n,k) integer n, k real a(n), b(n) integer i do i=k+1,n a(i) = 0.5 * (a(i-k) + b(i-k)) end do end
% ftn -c -vec-report=3 sub.F sub.F(5): (col. 7) remark: LOOP WAS VECTORIZED sub.F(5): (col. 7) remark: loop was not vectorized: existence of vector dependence sub.F(6): (col. 9) remark: vector dependence: assumed ANTI dependence between a line 6 and a line 6 sub.F(6): (col. 9) remark: vector dependence: assumed FLOW dependence between a line 6 and a line 6 sub.F(6): (col. 9) remark: vector dependence: assumed FLOW dependence between a line 6 and a line 6 sub.F(6): (col. 9) remark: vector dependence: assumed ANTI dependence between a line 6 and a line 6 sub.F(5): (col. 7) remark: loop skipped: multiversioned
Depending on the actual value, there can be a data dependency. However, if a user who knows for sure that the value of k will be always larger than 8, then the user can use '!dir$ simd vectorlength(8)' (or simply '!dir$ simd') to vectorize the loop.
!dir$ simd vectorlength(8) do i=k+1,n a(i) = 0.5 * (a(i-k) + b(i-k)) end do
% ftn -c -vec-report=3 sub.F sub.F(6): (col. 7) remark: SIMD LOOP WAS VECTORIZED
Vectorization guidelines
Memory alignment
Data movement (i.e., loading into a vector register and storing from a register) instructions of SSE operate more efficiently on data objects when they are aligned at 16 byte boundaries in memory (that is, the start memory address of the data objects is a multiple of 16 bytes). Data objects for AVX instructions are to be aligned at 32 bytes boundaries for good performance.
One can enforce memory alignment using the clause '__attribute__((aligned(...)))' in the declaration statement and use '#pragma vector aligned' in relevant for-loops. In Fortran you can use the 'attributes align' directive in the declaration statement and 'vector aligned' directive for loops. Example codes on this issue can be found in the Intel compiler package: in the ${INTEL_PATH}/Samples/en_US/{Fortran,C++}/vec_samples directory (the environment variable INTEL_PATH is defined whenever an 'intel' module is loaded into your programming environment).
% cd ${INTEL_PATH}/Samples/en_US/C++/vec_samples % cat Driver.c ... #ifdef ALIGNED ... FTYPE a[ROW][COLWIDTH] __attribute__((aligned(16))); FTYPE b[ROW] __attribute__((aligned(16))); FTYPE x[COLWIDTH] __attribute__((aligned(16))); ... #else ... #endif ... % cat Multiply.c ... #ifdef ALIGNED // The pragma vector aligned below tells the compiler to assume that the data in // the loop is aligned on 16-byte boundary so the vectorizer can use // aligned instructions to generate faster code. #pragma vector aligned #endif for (j = 0;j < size2; j++) { b[i] += a[i][j] * x[j]; } ...
The following test code examines the effect of memory alignment in a simple-minded matrix-matrix multiplication case. Here, we pad the matrices with extra rows to make them aligned at certain boundaries. We examine memory alignment at 16, 32, and also 64 byte boundaries. The 64-byte case is also considered here since Edison and Hopper's cache line size of 64 bytes has a performance implication (in terms of better cache utilization).
program matmat !... Prepared by NERSC User Services Group !... April, 2014 implicit none integer :: n = 31 integer :: itmax = 200000 #ifdef REAL4 real, allocatable :: a(:,:), b(:,:), c(:,:) #else real*8, allocatable :: a(:,:), b(:,:), c(:,:) #endif #ifdef ALIGN16 !dir$ attributes align : 16 :: a,b,c #elif defined(ALIGN32) !dir$ attributes align : 32 :: a,b,c #elif defined(ALIGN64) !dir$ attributes align : 64 :: a,b,c #endif integer i, j, k, it integer :: vl, nr integer*8 c1, c2, cr, cm real*8 dt !... Vector length #ifdef ALIGN16 vl = 16 / (storage_size(a) / 8) #elif defined(ALIGN32) vl = 32 / (storage_size(a) / 8) #elif defined(ALIGN64) vl = 64 / (storage_size(a) / 8) #else vl = 1 #endif nr = ((n + (vl - 1)) / vl) * vl ! padded row dimension allocate (a(nr,n), b(nr,n), c(nr,n)) !... Initialization do j=1,n #if defined(ALIGN16) || defined(ALIGN32) || defined(ALIGN64) !dir$ vector aligned #endif do i=1,nr a(i,j) = cos(i * 0.1 + j * 0.2) b(i,j) = sin(i * 0.1 + j * 0.2) c(i,j) = 0. end do end do !... Main loop call system_clock(c1, cr, cm) do it=1,itmax do j=1,n do k=1,n #if defined(ALIGN16) || defined(ALIGN32) || defined(ALIGN64) !dir$ vector aligned #endif do i=1,nr c(i,j) = c(i,j) + a(i,k) * b(k,j) end do end do end do end do call system_clock(c2, cr, cm) print *, c(1,1)+c(n,n), dble(c2-c1)/dble(cr) deallocate(a, b, c) end
The speedup factors observed on Edison are plotted for small array dimensions using single ('R*4') precision.
Note that the i index value goes up to nr, the padded dimension for the matrix row. Performance appears better by going up to nr instead of n. The data from rows from n+1 to nr should be simply ignored.
AoS (Array of Structures) vs. SoA (Structure of Arrays)
A data object can become complex with multiple component elements or attributes. Programmers often represent a group of such data objects using an array of Fortran's derived data type or C's struct objects (i.e., an array of structures or AoS). Although an AoS provides a natural way to represent such data, memory reference of any component requires non-unit stride access. Such a situation is illustrated in the following example code. When the main loop is transformed into a vector loop, three components of a 'coords' object will be stored into three separate vector registers, one for each component. With the AoS data layout, loading into such a register will require stride 3 (or more) access, reducing efficiency of the vector load.
A better data structure for vectorization is to separate each component of the objects into its own array, and then form a data object composed of three arrays (i.e., a structure of arrays or SoA). When the main loop is vectorized, each component will be loaded into a separate register but this will be done with unit-stride access. Therefore, vectorization will be more efficient.
program aossoa !... Prepared by NERSC User Services Group !... April, 2014 implicit none integer :: n = 1000 integer :: itmax = 10000000 #ifdef SOA type coords real, pointer :: x(:), y(:), z(:) end type type (coords) :: p #else type coords real :: x, y, z end type type (coords), allocatable :: p(:) #endif real, allocatable :: dsquared(:) integer i, it integer*8 c1, c2, cr, cm real*8 dt !... Initialization #ifdef SOA allocate(p%x(n), p%y(n), p%z(n), dsquared(n)) do i=1,n p%x(i) = cos(i + 0.1) p%y(i) = cos(i + 0.2) p%z(i) = cos(i + 0.3) end do #else allocate(p(n), dsquared(n)) do i=1,n p(i)%x = cos(i + 0.1) p(i)%y = cos(i + 0.2) p(i)%z = cos(i + 0.3) end do #endif !... Main loop call system_clock(c1, cr, cm) do it=1,itmax #ifdef SOA do i=1,n dsquared(i) = p%x(i)**2 + p%y(i)**2 + p%z(i)**2 end do #else do i=1,n dsquared(i) = p(i)%x**2 + p(i)%y**2 + p(i)%z**2 end do #endif end do call system_clock(c2, cr, cm) dt = dble(c2-c1)/dble(cr) print *, dsquared(1)+dsquared(n/2)+dsquared(n), dt #ifdef SOA deallocate(p%x, p%y, p%z, dsquared) #else deallocate(p, dsquared) #endif end
The following performance comparison observed on Edison shows that the SoA data layout gives better performance.
Elemental function
Elemental functions are functions that can be also invoked with an array actual argument and return array results of the same shape as the argument array. This convenient feature is quite common in Fortran as it is widely used in many intrinsic functions.
The Intel compiler allows a user to define a function as elemental for a vectorization purpose. When requested, the compiler generates a vector version of the function as well as a scalar version and the proper version is used for a function call.
A function call inside a loop generally inhibits vectorization. However, if an elemental function is called within a loop, the loop can be executed in vector mode. In vector mode, the function is called with multiple data packed in a vector register and returns packed data.
Below is a simple example code showing how to declare an elemental function.
!... Prepared by NERSC User Services Group !... April, 2014 module fofx implicit none contains function f(x) #ifdef ELEMENTAL !dir$ attributes vector :: f #endif program main use fofx implicit none integer :: n = 1024 integer :: itmax = 1000000 #ifdef REAL4 real, allocatable :: a(:), x(:) #else real*8, allocatable :: a(:), x(:) #endif integer i, it integer*8 c1, c2, cr, cm real*8 dt allocate (a(n), x(n)) !... Initialization do i=1,n x(i) = cos(i * 0.1) + 0.2 end do !... Main loop call system_clock(c1, cr, cm) do it=1,itmax do i=1,n a(i) = f(x(i)) end do x(n) = x(n) + 1.0 end do call system_clock(c2, cr, cm) dt = dble(c2-c1)/dble(cr) print *, n, a(1)+a(n/2)+a(n), dt deallocate(a, x) end
Again, the following performance was obtained on Edison.
'restrict' keyword
The Intel compiler's 'restrict' keyword may be used for a pointer argument in a C/C++ function to indicate that the actual pointer argument provides exclusive access to the memory referenced in the function and no other pointer can access it. This is to clarify about the ambiguity whether two pointers in a loop may reference a common memory region which may create a vector dependency. An example code is found in the ${INTEL_PATH} directory again.
% cat ${INTEL_PATH}/Samples/en_US/C++/vec_samples/Multiply.h ... #ifndef FTYPE #define FTYPE double #endif ... % cat ${INTEL_PATH}/Samples/en_US/C++/vec_samples/Multiply.c ... // The "restrict" qualifier tells the compiler that pointer b does // not alias with pointers a and x meaning that pointer b does not point // to the same memory location as a or x and there is no overlap between // the corresponding arrays. ... #ifdef NOALIAS void matvec(int size1, int size2, FTYPE a[][size2], FTYPE b[restrict], FTYPE x[]) #else void matvec(int size1, int size2, FTYPE a[][size2], FTYPE b[], FTYPE x[]) #endif ... for (j = 0;j < size2; j++) { b[i] += a[i][j] * x[j]; } ...
As the comment in the code explains, the 'restrict' keyword declares that the memory referenced by 'b' doesn't overlap the region that the pointers a and x reference (see the 'NOALIAS' portion). Therefore, there is no vector dependency in the loop. Copy the files to your working directory and try to compile it as follows:
% cp ${INTEL_PATH}/Samples/en_US/C++/vec_samples/Multiply.{c,h} . % cc -c -vec-report=3 -restrict -DNOALIAS -I. Multiply.c Multiply.c(55): (col. 3) remark: LOOP WAS VECTORIZED ...
OpenMP 4.0 SIMD constructs
OpenMP 4.0 standard now has the SIMD construct to specify the execution of a loop in vectorization mode (i.e., SIMD operations). The syntax is:
#pragma omp simd [clause...]
for C/C++, and, in case of Fortran:
!$omp simd [clause...]
where some of the optional clauses are
- safelen(length)
- aligned(list[:alignment])
- reduction(reduction-identifier:list)
- collapse(n)
The safelen clause is to specify the maximum length for safe vectorization without incurring data dependency. The aligned clause declares that the list of the variables are aligned to the number of bytes expressed in the optional parameter. The reduction clause is to list the variables where a reduction operation (i.e., + for summation, min for minimum, max for maximum, etc.) result is stored. The collapse clause is to indicate that how many levels of the nested loops that immediately follow the OpenMP directive should be collapsed into a single aggregate loop with larger iteration space.
The memory alignment example above can be written using OpenMP 4.0 directives:
do it=1,itmax do j=1,n do k=1,n #if defined(ALIGN16) !$omp simd aligned(a,b,c:16) #elif defined(ALIGN32) !$omp simd aligned(a,b,c:32) #elif defined(ALIGN64) !$omp simd aligned(a,b,c:64) #endif do i=1,nr c(i,j) = c(i,j) + a(i,k) * b(k,j) end do end do end do end do
An elemental function can be created, too:
#pragma omp declare simd [clause...] function definitoin or declaration
or, in case of Fortran:
!$omp declare simd( proc-name ) [clause...] function definition
As in the above elemental function section, this OpenMP directive asks the compiler to create a vector version of a function. An OpenMP SIMD directive version of the above elemental function program is given below:
!... Prepared by NERSC User Services Group module fofx implicit none contains #ifdef ELEMENTAL !$omp declare simd (f) #endif function f(x) ...
Vectorization with respect to arithmetic operations
In this section, we present floating-point and integer arithmetic operation rates measured with respect to vectorization compile flags and operand data types on compute nodes on Edison and Hopper as well as Babbage's KNC (Intel Knights Corner). This work was inspired by a study for Sandy Bridge and Westmere CPUs. By comparing the performance with and without vectorization-enabling compile flags, we can see an expected vectorization speedup associated with a particular data type and operation type.
A Fortran code is built with the Intel compiler, and a single task is run on a node. Elapsed times are measured for up to 20 times an arithmetic operation is applied, and the per-operation cost is estimated after least-squares fitting of the elapsed times. Below we show plots of arithmetic operation rates for different data types and operation types on different NERSC machines. The results are in billion floating point operations per second (Gflops) in case of floating point operands, or billion integer operations per second in case of integer operands. 'R*4' is for 4 byte reals, 'R*8' is for 8 byte reals, 'I*4' is for 4 byte integers, 'I*8' is for 8 byte integers, 'C*8' is for 8 byte complex variables (4 bytes for real and imaginary parts), and 'C*16' is for 16 byte complex variables (8 bytes for real and imaginary parts).
Edison's Ivy Bridge
Edison compute nodes (Ivy Bridge) support both SSE and AVX instructions. By default, the compiler uses AVX instructions, but we can also select a type with the Intel compile flags '-xSSE4.2' and '-xAVX'. Vector widths with SSE and AVX instructions are 128 bits (16 bytes) and 256 bits (32 bytes), respectively. The Intel compiler version used is 14.0.2.144.
For some data and operation types, we see the expected vector speedup with respect to the used vectorization type (4 times with SSE and 8 times with AVX in case of 4-byte operands; 2 times with SSE and 4 times with AVX in case of 8-byte operands). But others (such as SQRT() operations with AVX or integer operations with AVX) don't show the expected vectorization speedup. For some data and operation types such as all integer operations, no additional speedup with AVX over SSE is observed. Some operations (for example, EXP()) show worse performance with vectorization.
Hopper's Magny-Cours
Hopper's compute nodes (Magny-Cours) sppport SSE2 instructions. Vector registers are 128-bit (16-byte) wide. The Intel compiler version used is 14.0.0.080.
Babbage's KNC
NERSC Intel Xeon Phi testbed cluster called Babbage has Intel Knight's Corner (KNC) cards, which support Intel Initial Many Core Instructions (Intel IMCI). KNC vector registers are 512-bit (64-byte) wide. The Intel compiler version used is 14.0.0.080.
| https://www.nersc.gov/users/computational-systems/cori/application-porting-and-performance/vectorization/ | CC-MAIN-2018-13 | refinedweb | 5,802 | 55.74 |
Hello all,
In the svn commit: r1831256 - java/org/apache/tomcat/util/descripto/web/ResourceBase.java , we have changed the lookupname validation which checks that lookup starts with java:/ namespace. But, as extensively used in Apache TomEE, users have already been using openejb:/ namespace in their lookup names and all working until this commit. Can we do something configurable for this validation check?
Regards.
Gurkan
I think r1831256 should be partially reverted due to this existing use. I plan to do it shortly unless there's a disagreement.
This behaviour is required by the Java EE spec. I am therefore moving this to an enhancement to add an option to ignore the specification requirement.
Implementation looks to be non-trivial. If implemented, moving the check to the NamingContextListener looks like the best option as that has access to the Context/Server so this could be configured per Context rather than globally.
Fixing the non-specification compliant usage would be a better solution.
Well, ok, but there is existing use and I don't see a benefit of being strict here (it's not like a URL or HTTP element where security issues can occur). So the fix sounds more like a regression than an improvement to me. I think it would be ok to do it in Tomcat.next, but probably not here, at least not by default.
Yes, issue is really it breaks a lot of users (we already lived that years ago and all apps needed to be reconfigured).
There is a flag for strict compliance in tomcat already so maybe this is a behavior falling into that "disabled by default but activable" behavior. Will not hurt users. Also note that it conflicts with JNDI spec which enables to have any namespace so you can read the spec from the point of view matching your need in such a case.
I've had another read of the relevant bits of the Java EE spec and I can't see a clear reason for this limitation. There are some hints that suggest it is to do with the scopes of the various defined namespaces and making sure the application sees the JNDI context it expects. But I don't see why that should stop some other namespace being used - obviously with the onus on the user to make sure they are using it correctly / it behaves as they expect w.r.t. returned types, shared/unique instances, visibility scope etc.
Given that this change was only applied because it was noticed that the validation was missing while reviewing the lookup name implementation - rather than as a result of a bug report or similar - then I've no objection to the validation being reverted if that is the preferred option. Note it would need to be reverted from all current versions.
Nice, it's a lot simpler this way !
The fix will be in 9.0.11, 8.5.33, 8.0.54 and 7.0.91. | https://bz.apache.org/bugzilla/show_bug.cgi?id=62527 | CC-MAIN-2019-30 | refinedweb | 497 | 63.09 |
Following on from Part 1 , we will further enhance our script to trace incoming connections through the kernel to the application.
Before doing that we will tweak the script to provide a CSV output. We do this using a BEGIN probe to print the header and then update the probes. We will also split the probes. Where there is a match, multiple probes are triggered in the order listed in the script. We can therefore say we are only interested in a specific port and then do the heavy lifting later on.
It is also important that you clear variables once you finish with them so as not to fill up the kernel buffers (you will loose traces if you don't; given enough probes firing).
#!/usr/sbin/dtrace -Cs
#pragma D option quiet
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/ip.h>
BEGIN
{
printf("TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax\n");
}
fbt:ip:tcp_conn_request:entry
{
self->tcpq = *(tcp_t**)((char*)arg0+0x28);
self->lport = ntohs(*(uint16_t*)((char*)self->tcpq->tcp_connp+0x10a));
}
fbt:ip:tcp_conn_request:entry
/self->lport == $1/
{
this->srcAddr = 0;
this->srcPort = 0;
printf("%d,%Y,%s,0x%08x,%d,%d,%d,%d,%d\n",
timestamp, walltimestamp, "syn",
this->srcAddr, this->srcPort,
self->lport,
self->tcpq->tcp_conn_req_cnt_q0,
self->tcpq->tcp_conn_req_cnt_q,
self->tcpq->tcp_conn_req_max
);
}
fbt:ip:tcp_conn_request:return
/self->lport/
{
self->tcpq = 0;
self->lport = 0;
}
Whilst in tcp_conn_request() we don't actually have a connection from the client; the kernel is about to build one.
It is the message in the call that is actually the incoming packet to process. IP has dealt with it; now it is TCP's turn. Remember that the tcpq we have used so far is the listener's connection (the one that is in the listen state), not the eager's (new connection); so looking at that is not going to be productive.
We could probe another function or we could decode the message; tcp_accept_comm() is called by tcp_conn_request() to accept the connection providing a few things are satisfied such as Qmax not been reached. If we are only interested in connections that aren't dropped due to, e.g. Q==Qmax, then this would be a good place. However, if we also want to capture failed connections, we need to do the processing ourselves.
So, let's decode the message :D
Looking at the source for tcp_conn_request() we can see that arg1 ( mp , the mblk_t ) is the message. Not surprising as this is a STREAMS module. Further down we can see where the data is and the format of the data. i.e. we have a pointer to the IP header here.
ipha = (struct ip *)mp->b_rptr;
As mblk_t is a standard structure we can easily map this within our DTrace script. We can therefore change the setting of srcAddr to the following:
this->mblk = (mblk_t *)arg1;
this->iphdr = (struct ip*)this->mblk->b_rptr;
this->srcAddr = this->iphdr->ip_src.s_addr;
If we then run the script and connect to the server running on 172.16.170.133 from client 172.16.170.1 we get the following:
root@sol10-u9-t4# ./tcpq_ex7.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
180548302609318,2016 Aug 11 14:37:02,syn, 0x85aa10ac ,0,22,0,0,8
As you can see, the hex address is that of the destination and not the source. Given that an IP header is a standard format (defined in rfc791 ) this is a bit surprising. So, what is going on?
A Slight Detour
The IP header is defined in /usr/include/netinet/ip.h . It is as expected. Whilst I won't go into all the steps to find the cause, here is the quick version.
The DTrace compiler uses the C pre-processor, but compiles it's own code for the kernel to use. DTrace should be able to import C headers to allow you to use names, etc, but clearly something has gone wrong.
One test I did is to create my own header using the constituent parts and a key tweak; one struct doesn't use fields (bitmasks) but both are the same size.
#ifndef MYIP_H
#define MYIP_H
#include <sys/types.h>
typedef struct {
uchar_t c1;
uchar_t c2,c3,c4;
ushort_t s1,s2,s3,s4;
uint32_t src,dst;
} myip_t;
typedef struct {
uchar_t c1_1:4,c1_2:4;
uchar_t c2,c3,c4;
ushort_t s1,s2,s3,s4;
uint32_t src,dst;
} myip2_t;
#endif
Then, if we run this through a simple C program that just prints the sizes all looks good:
#include <stdio.h>
#include "myip.h"
int main(int argc, char *argv[]) {
printf("myip_t : %u\n", sizeof(myip_t));
printf("myip2_t: %u\n", sizeof(myip2_t));
return 0;
}
Which gives the following output:
root@sol10-u9-t4# ./a.out
myip_t : 20
myip2_t: 20
Then within the BEGIN probe of the DTrace we do the same (not forgetting to include the header):
printf("myip_t : %d\n", sizeof(myip_t));
printf("myip2_t: %d\n", sizeof(myip2_t));
Which gives the following output:
root@sol10-u9-t4# ./tcpq_ex8.d -I `pwd` 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
myip_t : 20
myip2_t: 24
This is the problem. A few more tests show that the DTrace compiler cannot handle fields (bitmasks) in the C structs correctly, at least in the situations I've tested (me'thinks a bug). I will leave it as an exercise for the reader to analyse this further.
Back to the Task at Hand
As the IP header is well-defined we can just define the offset without further analysis.
this->srcAddr = *(uint32_t*)((char*)(this->iphdr)+ 0xc );
This yields the correct IP address:
root@sol10-u9-t4# ./tcpq_ex9.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
181523558976884,2016 Aug 11 14:53:17,syn, 0x01aa10ac ,0,22,0,0,8
As the header length of the IP header and the TCP header has the bitmask after the port, we should be able to extract the port quite easily (setting srcPort to the result):
this->ihl = 4 * ((int)*((char*)(this->iphdr)) & 0xf);
this->tcphdr = (struct tcphdr*)((char*)(this->iphdr)+(this->ihl));
this->srcPort = ntohs(this->tcphdr->th_sport);
This yields the following output:
root@sol10-u9-t4# ./tcpq_ex10.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
181903328660708,2016 Aug 11 14:59:37,syn,0x01aa10ac, 40950 ,22,0,0,8
Which is confirmed via netstat:
root@sol10-u9-t4# netstat -an | fgrep 40950
172.16.170.133.22 172.16.170.1. 40950 29312 0 49232 0 ESTABLISHED
Completion of the handshake
As you can guess, there are various locations that we can probe the kernel in order to mark events. In the case of the completion of the three-way handshake, I'm going to go for tcp_send_conn_ind() in usr/src/uts/common/inet/tcp/tcp_tpi.c . The key function is actual tcp_rput_data() in usr/src/stand/lib/tcp/tcp.c where the state is changed to EST.
Anyway, in tcp_send_conn_ind() arg0 is the listener and arg1 is a mblk_t containing a pointer to a struct T_conn_ind . However, as we have already processed the original syn packet, we also have an eager tcp_t . We can extract that, and use the offsets we know to get the address and port of the peer.
We can combine probes where there is common processing. In this particular kernel arg0 is the same.
fbt:ip:tcp_conn_request:entry,
fbt:ip:tcp_send_conn_ind:entry
{
self->tcpq = *(tcp_t**)((char*)arg0+0x28);
self->lport = htons(*(uint16_t*)((char*)self->tcpq->tcp_connp+0x10a));
}
We can now predicate all the other probes within the kernel thread to where the port maps what we are after. This includes a simple way to label the action. We can also change the variable context from 'this' (lexical scope) to 'self' (thread scope) to pass the parameters that are in different probes but the same thread.
First, tcp_conn_request() , which should be familiar:
fbt:ip:tcp_conn_request:entry
/self->lport == $1/
{
self->action = "syn";
this->mblk = (mblk_t *)arg1;
this->iphdr = (struct ip*)this->mblk->b_rptr;
this->ihl = 4 * ((int)*((char*)(this->iphdr)) & 0xf);
this->tcphdr = (struct tcphdr*)((char*)(this->iphdr)+(this->ihl));
self->srcAddr = *(uint32_t*)((char*)(this->iphdr)+0xc);
self->srcPort = ntohs(this->tcphdr->th_sport);
}
Then tcp_send_conn_ind() :
fbt:ip:tcp_send_conn_ind:entry
/self->lport == $1/
{
self->action = "syn-aa";
this->mblk = (mblk_t *)arg1;
this->tconnind = (struct T_conn_ind*)(this->mblk->b_rptr);
this->tcpp = *(tcp_t**)(((char*)this->tconnind)+(this->tconnind->OPT_offset));
self->srcPort = ntohs(*(uint16_t*)((char *)this->tcpp->tcp_connp+0x108));
self->srcAddr = *(uint32_t*)((char*)this->tcpp->tcp_connp+0x104);
}
The printout then is common to both, since all of it is data driven from the previous stages:
fbt:ip:tcp_conn_request:entry,
fbt:ip:tcp_send_conn_ind:entry
/self->lport == $1/
{
printf("%d,%Y,%s,0x%08x,%d,%d,%d,%d,%d\n",
timestamp, walltimestamp,
self->action,
self->srcAddr, self->srcPort,
self->lport,
self->tcpq->tcp_conn_req_cnt_q0,
self->tcpq->tcp_conn_req_cnt_q,
self->tcpq->tcp_conn_req_max
);
}
Finally we clear the buffers to tidy up:
fbt:ip:tcp_conn_request:return,
fbt:ip:tcp_send_conn_ind:return
/self->lport/
{
self->tcpq = 0;
self->lport = 0;
self->action = 0;
self->srcAddr = 0;
self->secPort = 0;
}
If we run this on a connection we can see the two parts. Notice the value of Q0. On entry it is one as we have an embryonic connection which is about to transition to Q.
root@sol10-u9-t4# ./tcpq_ex11.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
185255847646301,2016 Aug 11 15:55:29,syn,0x01aa10ac,41010,22,0,0,8
185255847845639,2016 Aug 11 15:55:29,syn-aa,0x01aa10ac,41010,22, 1 ,0,8
If we fill up Q so it equals Qmax (using my test script from a previous blog) and retry, accepting a connection on the queue part way through to free the kernel backlog we get this, clearly showing the remote servers tcp retries.
root@sol10-u9-t4# ./tcpq_ex11.d 2000
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
185458485779874 ,2016 Aug 11 15:58:52,syn,0x01aa10ac,50066,2000,0,2,2
185459488194513,2016 Aug 11 15:58:53,syn,0x01aa10ac,50066,2000,0,2,2
185461492294103,2016 Aug 11 15:58:55,syn,0x01aa10ac,50066,2000,0,2,2
185465496397317,2016 Aug 11 15:58:59,syn,0x01aa10ac,50066,2000,0,1,2
185465496517076 ,2016 Aug 11 15:58:59,syn-aa,0x01aa10ac,50066,2000,1,1,2
As you can see, the max'ed out queue resulted in the connection taking 185465496517076 minus 185458485779874 = 7010737202 ns (or 7.01 seconds) to establish the connection.
In the next part we will look at extending this into the application, using both syscall tracing and userspace tracing.
As always, please can you provide feedback so I can improve the blog. | http://126kr.com/article/491441o7p0u | CC-MAIN-2017-09 | refinedweb | 1,805 | 59.84 |
In C#, there are symbols which tell the compiler to perform certain operations. These symbols are known as operators. For example, (+) is an operator which is used for adding two numbers similar to addition in Maths.
C# provides different types of operators:
- Arithmetic Operators
- Relational Operators
- Increment and Decrement Operators
- Logical Operators
- Assignment Operators
C# Arithmetic Operators
Arithmetic Operators are the type of operators which take numerical values as their operands and return a single numerical value.
Let's assume the values of a and b to be 8 and 4 respectively.
using System; class Test { static void Main(string[] args) { int a=10, b=2; Console.WriteLine("a+b = " + (a+b)); Console.WriteLine("a-b = " + (a-b)); Console.WriteLine("a*b = " + (a*b)); Console.WriteLine("a/b = " + (a/b)); } }
int a = 10, b = 2 → We are declaring two integer variables a and b and assigning them the values 10 and 2 respectively.
Inside
Console.WriteLine,
"a+b = " is written inside (" "), so it got printed as it is without evaluation. Then
(a+b) after
+ is evaluated (i.e., 12) and printed.
So,
"a+b = " combined with the calculated value of
a+b i.e. (10+2 i.e. 12) and
a+b = 12 got printed.
Let's take one more example.
using System; class Test { static void Main(string[] args) { int a=10, b=2; int z = a+b; Console.WriteLine("a+b = " + z); } }
In this example, z is assigned the value of
a+b i.e., 12. Inside
WriteLine,
a+b = is inside " ", so it got printed as it is (without evaluation) and then the value of z i.e., 12 got printed. So,
a+b = 12 got printed.
When we divide two integers, the result is an integer. For example, 7/3 = 2 (not 2.33333).
To get the exact decimal value of the answer, at least one of numerator or denominator should have a decimal value.
For example,
7/3.0,
7.0/3 and
7.0/3.0 return 2.33333 because at least one of the operands in each case has a decimal value.
using System; class Test { static void Main(string[] args) { Console.WriteLine("3/2 = " + (3/2)); Console.WriteLine("3/2.0 = " + (3/2.0)); Console.WriteLine("3.0/2 = " + (3.0/2)); Console.WriteLine("3.0/2.0 = " + (3.0/2.0)); } }
As we have seen that
3/2 (both
int) is giving 1 whereas
3.0/2 or
3/2.0 or
3.0/2.0 (at least one is double) is giving us 1.5.
Suppose we are using two integers in our program and we got a need to get a double result after division, we can easily convert them to double during the time of division using explicit conversion. For example,
(double)a/b.
Let's look at an example.
using System; class Test { static void Main(string[] args) { int a = 5; int b = 2; Console.WriteLine((double)a/b); } }
We casted the variable a to double during the division (
(double)a/b) and we got a double result (2.5).
C# Relational and Equality Operators
Relational Operators check the relationship (comparison) between two operands. It returns
true if the relationship is true and
false if it is false.
Following is the list of relational operators in C#.
Again, assume the value of a to be 8 and that of b to be 4.
Let's see an example to understand the use of these operators.
using System; class Test { static void Main(string[] args) { int a = 5; int b = 4; Console.WriteLine(a == b); Console.WriteLine(a != b); Console.WriteLine(a > b); Console.WriteLine(a < b); Console.WriteLine(a >= b); Console.WriteLine(a <= b); } }
In the above example, the value of a is not equal to b, therefore
a == b (equal to) returned false and
a !=b (not equal to) returned true.
Since the value of a is greater than b, therefore
a > b (greater than) and
a >= b (greater than or equal to) returned true whereas
a < b (less than) and
a <= b (less than or equal to) returned false.
C# Difference between = and ==
Although = and == seem to be the same, but they are quite different from each other.
= is an assignment operator while
== is an equality operator.
= assign values from its right side operands to its left side operands whereas
== compares values and check if the values of two operands are equal or not.
Take two examples.
x = 5
x == 5
By writing
x = 5, we assigned a value 5 to x, whereas by writing
x == 5, we checked if the value of x is 5 or not.
C# Logical Operators
In C#, if we write
A and B, then the expression is true if both A and B are true. Whereas, if we write
A or B, then the expression is true if either A or B or both are true.
A and B → Both A and B
A or B → Either A or B or both.
The symbol for AND is
&& while that of OR is
||.
Assume the value of a and b to be true.
In Logical AND (&&) operator, if anyone of the expression is false, the condition becomes false. Therefore, for the condition to become true, both the expressions must be true.
For example,
(3>2)&&(5>4) returns true because both the expressions (3>2 as well as 5>4) are true. Conditions
(3>2)&&(5<4),
(3<2)&&(5>4) and
(3<2)&&(5<4) are false because at least one of the expressions is false in each case.
For Logical OR (||) operator, the condition is only false when both the expressions are false. If any one expression is true, the condition returns true. Therefore,
(3<2)||(5<4) returns false whereas
(3>2)||(5<4),
(3<2)||(5>4) and
(3>2)||(5>4) returns true.
Not (!) operator converts true to false and vice versa. For example,
!(4<7) is true because the expression (4<7) is false and the
! operator makes it true.
using System; class Test { static void Main(string[] args) { int a = 5, b = 0; Console.WriteLine("(a>b) && (b==a) = " + ((a>b) && (b==a))); Console.WriteLine("(a>b) || (b==a) = " + ((a>b) || (b==a))); Console.WriteLine("!(a > b) = " + !(a > b)); } }
In the expression
(a>b) && (b==a), since
b==a is false, therefore the condition became false. Since
a>b is true, therefore the expression
(a || b) became true. The expression
a>b is true (since the value of a is greater than b) and thus the expression
!(a>b) became false.
Before going further to learn about more different operators, let's look at the precedence of operators i.e., which operator should be evaluated first if there are more than one operators in an expression like
2/3+4*6.
C# Precedence of Operators
If we have written more than one operation in one line, then which operation should be done first is governed by the following rules :- Expressions inside brackets '()' are evaluated first. After that, this table is followed (The operator at the top has higher precedence and that at the bottom has the least precedence). If two operators have the same precedence, then the evaluation will be done in the direction stated in the table.
Let's consider an expression
n = 4 * 8 + 7
Since the priority order of multiplication operator ( * ) is greater than that of addition operator ( + ), so first 4 will get multiplied with 8 and after that 7 will be added to the product.
Suppose two operators have the same priority order in an expression, then the evaluation will start from left or right as shown in the above table. For example, take the following expression:
10 / 5 + 2 * 3 -8
Since the priorities of / and * are greater than those of + and -, therefore / and * will be evaluated first. But / and * have the same priority order, so these will be evaluated from left to right (as stated in the table) simplifying to the following expression.
2 + 2 * 3 - 8
After /, * will be evaluated resulting in the following expression:
2 + 6 - 8
Again + and - have the same precedence, therefore these will also be evaluated from left to right i.e., first 2 and 6 will be added after which 8 will be subtracted resulting in 0.
C# Assignment Operators
Assignment Operators are used to assign values from its right side operands to its left side operands. The most common assignment operator is
=.
a = 10 means that we are assigning a value 10 to the variable a.
10 = ais invalid because we can't change the value of 10 to a.
There are more assignment operators which are listed in the following table.
Before going further, let's have a look at an example:
using System; class Test { static void Main(string[] args) { int a = 7; a = a+1; Console.WriteLine(a); a = a-1; Console.WriteLine(a); } }
'=' operator starts from the right. eg.- if a is 4 and b is 5, then a = b will change a to 5 and b will remain 5.
a = a+b → Since '+' has a higher priority than '=', so, a+b will be calculated first. After this, '=' will assign the value of the sum
a+b to a.
In the exact same fashion, in
a = a+1,
a+1 will be calculated first since
+ has higher priority than
=. Now, the expression will become
a = 8 making the value of a equal to 8.
Similarly,
a = a-1 will make the value of a equal to 7 again.
Let's look at an example where different assignment operators are used.
using System; class Test { static void Main(string[] args) { int a = 7; Console.WriteLine("a += 4 Value of a: " + (a += 4)); Console.WriteLine("a -= 4 Value of a: " + (a -= 4)); Console.WriteLine("a *= 4 Value of a: " + (a *= 4)); Console.WriteLine("a /= 4 Value of a: " + (a /= 4)); Console.WriteLine("a %= 4 Value of a: " + (a %= 4)); } }
To understand this, consider the value of a variable n is 5. Now if we write
n += 2, the expression gets evaluated as
n = n+2 thus making the value of n 7 (n = 5 + 2).
In the above example, initially, the value of a is 7.
The expression
a = a+4 thus making the value of a as 11. After this, the expression
a -= 4 gets evaluated as
a = a-4 thus subtracting 4 from the current value of a (i.e. 11) and making it 7 again. Similarly, other expressions will get evaluated.
C# Increment and Decrement Operators
++ and -- are called increment and decrement operators respectively.
++ adds 1 to the operand whereas
-- subtracts 1 from the operand.
a++ increases the value of a variable a by 1 and
a-- decreases the value of a by 1.
Similarly,
++a increases the value of a by 1 and
--a decreases the value of a by 1.
In
a++ and
a--,
++ and
-- are used as postfix whereas in
++a and
--a,
++ and
-- are used as prefix.
For example, suppose the value of a is 5, then
a++ and
++a changes the value of a to 6. Similarly,
a-- and
--a changes the value of a to 4.
C# Difference between Prefix and Postfix
While both
a++ and
++a increases the value of 'a', the only difference between these is that
a++ returns the value of a before the value of a is incremented and
++a first increases the value of a by 1 and then returns the incremented value of a.
Similarly,
a-- first returns the value of a and then decreases its value by 1 and
--a first decreases the value of a by 1 and then returns the decreased value.
Let's look at the example given below.
using System; class Test { static void Main(string[] args) { int a=8, b=8, c=8, d=8; Console.WriteLine("a++: Value of a: " + (a++)); Console.WriteLine("++b: Value of a: " + (++b)); Console.WriteLine("c--: Value of a: " + (c--)); Console.WriteLine("--d: Value of a: " + (--d)); } }
In
a++, postfix increment operator is used with a which first printed the current value of a (8) and then incremented it to 9.
Similarly in
++b, the prefix operator first added one to the current value of b thus making it 9 and then printed the incremented value. The same will be followed for the decremented operators.
C# sizeof
sizeof() operator is used to get the size of data type. We can use the
sizeof operator to get the size of
int as 4. Let's look at the example given below.
using System; class Test { static void Main(string[] args) { Console.WriteLine("size of int : " + sizeof(int)); Console.WriteLine("size of long : " + sizeof(long)); Console.WriteLine("size of unsigned int : " + sizeof(uint)); Console.WriteLine("size of boolean : " + sizeof(bool)); Console.WriteLine("size of short : " + sizeof(short)); Console.WriteLine("size of unsigned short : " + sizeof(ushort)); Console.WriteLine("size of double : " + sizeof(double)); Console.WriteLine("size of char: " + sizeof(char)); } }
C# typeof
typeof() operator is used to get the type of a data type. We can use the
typeof operator to get the type of
int as System.Int32. Let's look at the example given below.
using System; class Test { static void Main(string[] args) { Console.WriteLine("size of int : " + typeof(int)); Console.WriteLine("size of long : " + typeof(long)); Console.WriteLine("size of unsigned int : " + typeof(uint)); Console.WriteLine("size of boolean : " + typeof(bool)); Console.WriteLine("size of short : " + typeof(short)); Console.WriteLine("size of unsigned short : " + typeof(ushort)); Console.WriteLine("size of double : " + typeof(double)); Console.WriteLine("size of char: " + typeof(char)); } }
C# Math Class
What if you want to take sine, cosine or log of a number? Yes, we can perform such mathematical operations in C# by using the Math class inside System namespace in C#. It contains many useful mathematical functions. Let's have a look at some important methods of the Math class.
Let's look at an example using some of these functions.
using System; class Test { static void Main(string[] args) { Console.WriteLine(Math.Sin(Math.PI)); Console.WriteLine(Math.Cos(Math.PI)); Console.WriteLine(Math.Abs(-1)); Console.WriteLine(Math.Floor(3.4)); Console.WriteLine(Math.Ceiling(3.4)); Console.WriteLine(Math.Pow(4, 2)); Console.WriteLine(Math.Log10(100)); Console.WriteLine(Math.Sqrt(4)); } }
With this chapter, we have covered all the basics required to enter the real programming part. From the next chapter, you will see a new part of programming. | https://www.codesdope.com/course/c-sharp-operators/ | CC-MAIN-2022-40 | refinedweb | 2,417 | 66.94 |
Comparing Blade and Twig templates in Laravel, but in this post we’re going to compare the Blade and Twig templating engines side-by-side.
TLDR; Spoiler alert
Both Blade and Twig provide the most important features; template inheritance, sections, escaping output and clean syntax. Blade provides a fast and simple syntax, but doesn’t add (much) extra functionality. Twig takes it a step further and adds an extra layer to provide more security and added features. The choice mostly depends on your personal preference. If you mostly develop for Laravel, Blade would probably be good. If you also use a lot of other frameworks, Twig might be a better fit.
About Blade
Blade is the default template engine for Laravel (since Laravel 2 in 2011). The syntax is originally inspired by the ASP.net Razor syntax and provides a cleaner way to write your templates. But the syntax is just 1 part, the main benefit of using Blade instead of plain PHP is to make it easier to re-use templates and split templates.
From the Laravel 5.1 docs:. [..] Two of the primary benefits of using Blade are template inheritance and sections.
Example:
@extends('layouts.master')
@section('content')
@foreach ($users as $user)
<p>This is user </p>
@endforeach
@endsection
Basically, Blade syntax is a thin wrapper for PHP, to provide a clean syntax. Everything you can do with PHP, you can do with Blade (and vice versa). You can also easily mix plain PHP in your Blade templates.
About Twig
Twig is developed by Fabien Potiencer, for reasons he describes in hisblogpost announcing Twig. It’s included in the default Symfony2 installation, can be used stand-alone and a lot of other projects support Twig. Drupal 8 will use Twig as template engine and Magento 2 was going to use it, but decided to drop it a while back. Twig support was also considered to be added in Laravel by default, but was eventually removed before the final release..
Example:
{% extends "layouts.master" %}
{% block content %}
{% for user in users %}
<p>This is user {{ user.id }}</p>
{% endfor %}
{% endblock %}
Twig is sandboxed. You can’t just use any PHP function by default, you can’t access things outside the given context and you can’t use plain PHP in your templates. This is done by design, it forces you to seperate your business logic from your templates.
Twig in Laravel
Together wit Rob Crowe, I’ve built a TwigBridge: rcrowe/twigbridge This offers Twig support in Laravel with all the things you would expect in Laravel:
- Using events for View Composers
- Access to common functions/filters (input, auth, session etc)
- Facade support
So it’s easy to get started and give Twig a try :)
Blade vs. Twig
So there are a few similarities here:
- Both compile to plain PHP, for better performance
- Both provide a syntax to easily output variables
- Both have control-structures (if/for/while etc)
- Both have escaping mechanism for safe output
- Both have template-inheritance and sections
There are also a few differences, that are easy to spot when you look at the examples.
- Different syntax for control structures
- Different handling of escaping
- Different handling of variables and variable access
- Different ways to add functionality
- Different perspective on security
We’ll take a quick look through these differences, so you can choose yourself what you like best.
Outputting variables
Outputting variables is probably the most common thing in your templates, so it should be easy. But more importantly, it should be safe. Luckily, since Laravel 5 Blade has sane defaults: {{ $var }} shows escaped content, {!! $var !!} raw output. The same goes for Twig, by default {{ var }} is escaped, {{ var |raw }} is raw.
Laravel gives you the option to change the tags and Twig gives you the option to change the default escaping. Both are probably not very smart in most cases, because it can be unpredictable for other developers.
Besides escaped or raw, Twig gives you the option to use different escaping methods, eg. for JS or HTML attributes. This can also be configured for an entire chunk of code.
{{ user.username|e('css') }}
{% autoescape 'js' %}
Everything will be automatically escaped in this block (using the JS strategy)
{% endautoescape %}
You probably noticed the | character. Those are used for filters. Filters can tweak the output. They are not very much different then functions, but they might be easier to read and can be combined. Example: {{ var | striptags | upper }}.
Accessing attributes
Twig makes it easy to access variables using the dot notation. This can be used on either a object or array. In Blade, it’s the same as plain PHP.
$user->name --> user.name
$user['name'] --> user.name
Control structures
In Blade, most control structures are simply replaced by their PHP equivalent during compilation, as you can see in the code. Some structures are a tiny bit more complicated. forelse doesn’t actually exist, so that adds a few more lines.
Simplified example:
@forelse ($user in $user)
@if(!$user->subscribed)
{{ $user->name }}
@endif
@empty
<p>No users found</p>
@endforelse
<?php $empty = true; foreach($user in $user): $empty = false; ?>
<?php if (!$user->subscribed) : ?>
<?php echo e($user->name); ?>
<?php endif; ?>
<?php endforeach; if ($empty): ?>
<p>No users found</p>
<?php endif; ?>
In Twig, the controle structures are called tags. They are compiled by a Lexer and can be a bit more complicated. For example, the for tag adds a loopvariable to the context, so you can access the current loop state;loop.first, loop.last, loop.index etc. This makes it just a bit cleaner then doing it yourself. The if tags makes it possible to read more like a sentence, instead of just a statement.
{% for user in users %}
{% if not user.subscribed %}
{{ user.name }}
{% endif %}
{% else %}
<p>No users found</p>
{% endfor %}
Template inheritance and sections
Template inheritance and sections are pretty much the same. It’s just different syntax. See the example from the Laravel docs, vs Twig syntax:
<!-- layouts/master.blade.php -->
<html>
<head>
<title>App Name - @yield('title')</title>
</head>
<body>
@section('sidebar')
This is the master sidebar.
@show
<div class="container">
@yield('content')
</div>
</body>
</html>
<!-- child.blade.php -->
@extends('layouts.master')
@section('title', 'Page Title')
@section('sidebar')
@parent
<p>This is appended to the master sidebar.</p>
@endsection
@section('content')
<p>This is my body content.</p>
@endsection
Same result in Twig:
<!-- layouts/master.twig -->
<html>
<head>
<title>App Name - {% block title %}{% endblock %}</title>
</head>
<body>
{% block sidebar %}
This is the master sidebar.
{% endblock %}
<div class="container">
{% block content %}{% endblock %}
</div>
</body>
</html>
<!-- child.twig -->
{% extends "layouts.master" %}
{% block title %}Page Title{% endblock %}
{% block sidebar %}
{{ parent() }}
<p>This is appended to the master sidebar.</p>
{% endblock %}
{% block content %}
<p>This is my body content.</p>
{% endblock %}
Security and context
As stated before, Blade doesn’t actually differ so much from plain PHP. It makes it easier to properly escape variables, but it doesn’t place any restrictions. Twig on the other hand, works in a seperate context. All functions and filters calls are restricted by the functions you explicitly enable (besides the built in functions).
Both have their pros and cons. In Twig, your template designer can’t easily take shortcuts. Eg. calling a query in your templates. They’ll have to pass the result to the view or allow access to a certain function (or ask the backend developer). The downside is that it is more work to just call a simple function/filter sometimes.
Blade
@foreach(User::where('active')->get() as $user)
{{ $user->name }}
@endforeach
This isn’t exactly possible in Twig. You either pass the result to the view (in your controller or view composer), or if you must, call the query on a User instance.
{% for user in model.where('active').get() %}
{{ user.name }}
{% endfor %}
This also means that you can’t just use Facades. In your TwigBridge, we’ve made it an option to just add your facades to the list in the configuration.Auth::check() –> Auth.check()
Twig also includes a Sandbox mode, which can create a safe context, eg. to let users edit their templates, with limited access to functions/variables. These templates can also be retrieved from a database.
Extending functionality
Extending the directives in Blade is simple, but limited to replacing a tag with other lines.
<?php
// @upper($var)
Blade::directive('upper', function($expression) {
return "<?php echo strtoupper{$expression}; ?>";
});
In Twig, writing functions and filters is very easy. But writing custom tags is a lot harder, you usually don’t need to do that. Above example could be just a simple filter:
<?php
//
$filter = new Twig_SimpleFilter('upper', function ($string) {
return strtoupper($string);
});
Both engines also support macros, as you can read in the docs.
Performance
I haven’t actually tested this, but I would guess that Blade is slightly faster, but only because the Twig compiled code is a bit more complicated, because of extra functionality. But they are both compiled to plain PHP, so the performance difference can probably be neglected.
Other differences
There are some small syntaxical and behavioral differences, that probably don’t have a big impact on your choice. Twig has some niceties in custom operators, some math/expressions stuff, handy built-in filters/functions etc. You’ll probably get a good idea if you read through the Twig manual, including all the tags/filters etc. But by now, you’ve probably already made up your mind about what you’re going to use ;)
So ..?
Mostly it’s just a matter of preference. You can build any app you like with both templating engines and probably never run into any issues.
Blade is just a bit simpler, but probably good enough for most developers. It doesn’t impose any restrictions, so it leaves the best practices up to the developer. It does feel more ‘Laravel’ like in it’s simplicity, so that might appeal to new Laravel users.
Twig is (in my opinion) the more mature one. It’s built from the ground up as secure, with different escaping strategies, context restrictions and lexer. But sometimes it can feel a bit too restrictive, for rapid development. In mixed teams (frontend + backend) or with not so experienced developers, this can be a good thing to prevent spaghetti code. But if you must, you can ofcourse still write bad code.. If your company also uses different frameworks (Symfony, Drupal, Slim, Yii, Kohana, etc), the Twig support could be a big PRO.
If you never used Twig, I suggest you give it a try. It’s easy to setup on Laravel and the documentation is very extensive!
Originally published at barryvdh.nl on August 22, 2015. | https://medium.com/@barryvdh/comparing-blade-and-twig-templates-in-laravel-187fde7fcac9?source=user_profile---------3----------- | CC-MAIN-2018-17 | refinedweb | 1,776 | 65.93 |
Eclipse Community Forums - RDF feed Eclipse Community Forums Moxy and Reference Impl schemas do not match <![CDATA[I've been trying to use Moxy for some of the extensions but have found that as soon as I drop Moxy in, I lose everything that was working. This may be something simple. I'm hoping. But definitely I find it weird that simply by changing jaxb implementations, I get completely different (and in the case of Moxy, what I consider incorrect) schemas. It all revolves around inheritance on a collection. Here's the class that we start with. It has a collection of an abstract class. @XmlAccessorType(XmlAccessType.NONE) @XmlRootElement(name = "form") public class FormType { ... @XmlElementRef public List<FormElement> getFormElements() { } This abstract class has one notable thing: formElements can have collections of formElements. However, I've commented that entire part out and it doesn't seem to make a difference. I should probably remove it from this code but want to keep it to be as complete as possible. @XmlSeeAlso(value = {UiFormElement.class}) @XmlAccessorType(XmlAccessType.NONE) @XmlRootElement public abstract class FormElement implements Serializable, DoesGroovy { ... @XmlElementRef public List<FormElement> getFormElements() { } Another layer of abstraction. This is the one that points to the concrete classes that Moxy seems to be missing. @XmlSeeAlso(value = {TabSheetUiFormElement.class, TabUiFormElement.class, ...}) @XmlAccessorType(XmlAccessType.NONE) @XmlRootElement public abstract class UiFormElement extends FormElement { ... } An example of how each of the concrete subclasses are annotated. @XmlAccessorType(XmlAccessType.NONE) @XmlRootElement(name = "tabsheet") public class TabSheetUiFormElement extends UiFormElement implements FormComponent { same for others in the SeeAlso list above. The reference implementation "sees" all of these concrete subclasses of UiFormElement and shows them in the schema. Moxy does not. Is there anything obviously wrong here? I'm happy to try and post the cleaned-up versions of the schemas (minus all the dozens of simple fields that I highly doubt would be anything but cruft to this issue). ]]> Will Hughes 2013-09-25T01:32:03-00:00 Re: Moxy and Reference Impl schemas do not match <![CDATA[Hi Will, This appears to be a bug in MOXy's processing of XmlSeeAlso. I've opened an EclipseLink bug to track this: As a temporary workaround, you could include the concrete subclasses in the list of classes passed into the JAXBContext creation, or include them in a jaxb.index file if you are creating the context from a package name. Hope this is helpful. -Matt ]]> Matt MacIvor 2013-09-25T17:09:49-00:00 Re: Moxy and Reference Impl schemas do not match <![CDATA[Hey, thanks! Yeah, I was heading down the "looking like a bug" path. But I really appreciate you looking into it and confirming. Keeps me from thinking I've gone nuts. ;-> It's the "intermediate" class that seems to cause the problem. If it's just "one" level of inheritance, then all is well. But more than that causes an issue. I've got another set of classes that are less deep and they work fine in both Moxy and the RI. So I stuck an empty class in between (but still annotated with the @XmlSeeAlso) and the RI handled it fine but Moxy then had trouble. And how soon would you guys release a fix? Do you do patches or will I have to wait for the next release? Thanks for looking into this and answering! Will]]> Will Hughes 2013-09-25T19:11:24-00:00 Re: Moxy and Reference Impl schemas do not match <![CDATA[Hi Will, You're right, the issue is with multiple levels of XmlSeeAlso annotations. I've set the target to the 2.5.2 release. You can always get the latest nightly builds from the eclipselink downloads page here: Once the fix has been checked in, you'd be able to get the nightly build with the fix in it the next day. -Matt]]> Matt MacIvor 2013-09-25T19:23:33-00:00 Re: Moxy and Reference Impl schemas do not match <![CDATA[Great! Really appreciate it. Will]]> Will Hughes 2013-09-25T23:02:25-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=530788&basic=1 | CC-MAIN-2016-40 | refinedweb | 673 | 66.33 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.