using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Windows;
using System.Windows.Input;
using WpfUtility.Services;

namespace Sample.UserControls
{
    public class ClipboardViewModel : ObservableObject
    {
        private ObservableCollection<string> _pasteElements = new ObservableCollection<string>();
        private ObservableCollection<int> _selectedProducts = new ObservableCollection<int>();

        public ClipboardViewModel()
        {
            PasteElements =
                new ObservableCollection<string>(
                    new List<string> {"218382", "344846", "5614645", "abcdef", "ghijkl", "218382", "mnopqr"});
        }

        public ObservableCollection<int> SelectedProducts
        {
            get => _selectedProducts;
            private set => SetField(ref _selectedProducts, value);
        }

        public ObservableCollection<string> PasteElements
        {
            get => _pasteElements;
            private set => SetField(ref _pasteElements, value);
        }

        public ICommand PasteCommand => new DelegateCommand(Paste);

        public ICommand EmptyListCommand => new DelegateCommand(EmptyList);

        private void Paste()
        {
            // Keep only the clipboard rows whose first column parses as an integer
            var rowData = ClipboardHelper.ParseClipboardData().Select(x => x[0]).ToList();
            var cleansedList = new List<int>();
            foreach (var entry in rowData)
                if (int.TryParse(entry, out var value))
                    cleansedList.Add(value);
            SelectedProducts = new ObservableCollection<int>(cleansedList);
            if (!cleansedList.Any())
                MessageBox.Show(
                    $@"Your clipboard content is empty or is in a wrong format.{
                        Environment.NewLine
                    }Please paste only valid numbers!",
                    "Clipboard content not valid!", MessageBoxButton.OK, MessageBoxImage.Exclamation);
        }

        private void EmptyList()
        {
            SelectedProducts = new ObservableCollection<int>();
        }
    }
}
|
STACK_EDU
|
init_debugger_audio: DirectMusic not natively supported
DirectX 6 DirectMusic not natively supported with Unix. What now?
EDIT: I'm talking about DirectMusic .DTS and .SGT files.
Currently I'm wrapping the real DX6 component. Linux will probably require pre-converted OGG sounds.
NAudio, which we use, is capable of forging MIDI messages and supports all the important message types. Therefore it may be possible to actually convert SGT segments to real buffered MIDI event collections. That may require digging up the deprecated documentation of segments. The question is whether it's really worth the probably huge amount of time to introduce a DirectMusic converter.
cool :)
Got the music working without using DirectMusic by parsing SGT file into sequence and translating to MIDI messages. See example state at:
https://github.com/MaKiPL/OpenVIII/commit/b3a7785f449398b23fdf246fbf4b5e4ea5163e29
all the tempo and things like that are constant; I just wanted to play the notes, not caring about the tempo, speed and instruments. Playback would probably be provided by FluidSynth or NAudio for both Linux and Windows, just to eliminate use of the deprecated DirectMusic entirely
some notes so I don't forget:
seqt[n].mTime is sorted - that's good, no need for Linq every note as in prototype.
seqt[n].mDuration should be put on a list for a note and channel and then noteoff'd
seqt[n].dwPChannel has already a channel pointer- that's very good
seqt[n].bByte1 is note
seqt[n].bByte2 is velocity
seqt[n].bStatus is the MIDI event type, but it's always 144=noteOn
Tempo is a double, but it reports weird values. Maybe it's not a double or something?
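The note-off bookkeeping described in these notes could be sketched like this (the field names follow the notes above; the event representation and demo data are illustrative, not the actual OpenVIII code):

```python
# Sketch: turn DirectMusic seqt entries (note-on only) into paired
# MIDI note-on/note-off events, using mDuration to schedule the offs.
# Field names (mTime, mDuration, dwPChannel, bByte1, bByte2) follow the
# notes above; everything else here is illustrative.

def seqt_to_midi(seqt):
    events = []
    for e in seqt:  # seqt is already sorted by mTime, so no Linq-style re-sorting
        # bByte1 = note, bByte2 = velocity, bStatus is always note-on (144)
        events.append((e["mTime"], "on", e["dwPChannel"], e["bByte1"], e["bByte2"]))
        # schedule the matching note-off when the duration elapses
        events.append((e["mTime"] + e["mDuration"], "off", e["dwPChannel"], e["bByte1"], 0))
    events.sort(key=lambda ev: ev[0])  # interleave the offs with later ons
    return events

demo = [
    {"mTime": 0,   "mDuration": 480, "dwPChannel": 0, "bByte1": 60, "bByte2": 100},
    {"mTime": 240, "mDuration": 240, "dwPChannel": 0, "bByte1": 64, "bByte2": 90},
]
print(seqt_to_midi(demo))
```

The list-based duration handling here is exactly the `List<>` approach the TODO below wants to replace with something faster.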
Thread:
init_debugger_audio on init should create Thread. The thread should get messages from the engine. Every PlayMusic should clear the sequence and stop the internal MIDIEvents. The thread should also GC pin the unmanaged variables.
Performance:
The segment reading is faster than vanilla DxMusic
segh::mTime:
0: 62208
1: 44544
4: 154368
5: 185856
79:329024 (demo)
93:467712 (lasboss)
[ ] Handle tempo correctly
[ ] Handle mDuration (/noteoff) faster way than operations on List<>
[ ] Handle curves if needed
[ ] Recompile library for Windows AMD64
[ ] Handle loops
I wonder if creating a MIDI file itself wouldn't be faster and more stable than handling the operations on my own. Currently I have to implement a thread with tempo, curves and all that stuff, but on the other hand I can create a .mid in memory and the whole synth will do it on its own, 'their way' (which I'm more than sure is way faster). AFAIR NAudio has a whole MIDI creation class.
DMUS_IO_TIMESIGNATURE may be wrong. The structure's sizeof is 8, but the algorithm actually reads (sectionLength-4) / 2 where it should divide by 8. I just got a lot of rubbish in the time signatures and noticed this error. Finally, we lack the formulas for converting the time signature events to MIDI; it's worth following the libdmusic code to find out how the tempo and timing are parsed.
TEMPO notes:
DMUS_PPQ is const 768
Calcs: 60 000 000 (BPMPPQ)?
Current example: 1202000 = 240 000
QuarterNote is set to 360 for now
Let N= 2000 in (1202000)
N - slower
<N - faster
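For reference, the standard MIDI tempo convention is sketched below; it's an assumption that DirectMusic stores tempo the same way (it may not, which could explain the "weird values" noted above):

```python
# Standard MIDI tempo convention: a tempo meta-event stores microseconds
# per quarter note, and BPM = 60,000,000 / that value. Whether DirectMusic's
# tempo double maps onto this directly is an assumption, not confirmed here.
MICROSECONDS_PER_MINUTE = 60_000_000

def bpm_to_us_per_quarter(bpm):
    return MICROSECONDS_PER_MINUTE // bpm

def us_per_quarter_to_bpm(us_per_quarter):
    return MICROSECONDS_PER_MINUTE / us_per_quarter

print(bpm_to_us_per_quarter(120))  # 500000 µs per quarter note at 120 BPM
```

This matches the "larger value = slower" behaviour noted above: more microseconds per quarter note means a slower tempo.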
OpenVIII now uses custom NAudio library with my extension: https://github.com/naudio/NAudio/pull/499
to write MIDI to memory instead of to the HDD for fluid_player_add_mem
DirectMusic segments now play on x64 and Linux. It's still WIP; I'm opening a new issue
|
GITHUB_ARCHIVE
|
So far we've been using a password to authenticate ourselves to the remote server, but there's another thing we can use, which is a public key.
Keys are really useful, because they allow us to log in without using a password. It's a lot more secure. We can turn off password authentication completely on the remote server, which prevents other people from trying to log in by brute force attacking with different passwords. So using keys is really the way to go.
So what I'm going to do is exit out of here, and let's start off by creating our first key. We're going to use a protocol called RSA for creating a public key pair, and then use that to log in.
The first thing you need to do, if you're on a Mac, is to check out what's inside of the home directory .ssh folder. If you're just starting from scratch, it should be pretty much empty, except for maybe this known_hosts file, but if you have any keys in there, particularly, if you have an id_rsa, an id_rsa.pub file-- if you have these two files-- you can use them directly. But since they don't exist yet in my folder, I'm going to go ahead and create these from scratch.
So to do that, I can use a special program, it's called ssh-keygen, and I need to provide it one option, which is the type of key pair I want to create, and I'm going to use the RSA algorithm-- the RSA protocol to do that.
Then it'll ask me where I want to save the file, and notice, by default, it's going to put it into the .ssh folder, and it's going to call the file id_rsa. So this will be kind of like my standard key that I can use for any number of services. It could be for GitHub, for example, or some other service, but I'm going to use it, also, to log into this remote machine.
So I'll just accept these defaults by pressing Enter, and then it'll ask me if I want to protect the file with a passphrase. You probably do want to add a passphrase here, in case somebody takes your laptop, but I'm just going to press Enter, and when that is all said and done, if we look inside of the .ssh folder, notice we have two new files now-- id_rsa and id_rsa.pub.
Now, pub stands for public, and in a public key encryption system, this public key can be distributed anywhere. It can be public to the world-- anybody can see it-- but this one here is private, so we want to keep that on our local machine, and we want to keep it hidden from any other people. And you want to make sure that only the user that's supposed to use this file has read and write privileges, and notice that everybody else does not.
Next up, what we need to do is to take this public key, and we need to upload it to the server, and we're going to store it into a special file called authorized_keys, which is going to be in the .ssh folder of the user, so in this case, it's going to be cmather. And in that home folder, in .ssh authorized_keys, we're going to put the contents of this file.
Now, the program that's running on the server-- the SSH server-- is going to know to look in this file, and when we try to authenticate with our key from our Macintosh, it'll work so long as this public key is in this file.
So let's go ahead and get it up to the server using the techniques that we learned in the last video. What I'll do is, we can cat-- actually, what I'll do is use ssh directly. So this time around, we're going to use password authentication, so I'll log in as cmather at the normal address, and then the command that I'm going to run is to cat.
Let's first make the directory. We want to make sure that the .ssh folder exists, and then we're going to cat standard input out to-- we can even append it, just in case the file already exists-- .ssh authorized_keys, and then we're going to send into that the file id_rsa.pub. Remember to send up the public file, not the private one.
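Put together, the steps so far look roughly like this (the user/host in the commented upload line are placeholders, and the key generation below targets a scratch directory so it won't overwrite any existing ~/.ssh/id_rsa):

```shell
# Sketch of the key-generation and upload steps, using a scratch
# directory instead of ~/.ssh so nothing existing gets clobbered.
tmpdir=$(mktemp -d)

# 1. Create an RSA key pair (no passphrase here, for the demo)
ssh-keygen -t rsa -N "" -f "$tmpdir/id_rsa" -q

# 2. Append the PUBLIC key to authorized_keys on the server, e.g.:
#    ssh cmather@example.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" < "$tmpdir/id_rsa.pub"
# (simulated locally below so the sketch runs without a server)
mkdir -p "$tmpdir/server_ssh"
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/server_ssh/authorized_keys"

ls "$tmpdir"
```

The `>>` append matters: it adds the new key without wiping any keys already authorized on the server.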
OK. So now we should have our authorized key file, and let's just log in and make sure that that's true.
Now, notice this time, right away, I don't actually get asked for a password, and that's because, by default, SSH is going to try to use this id_rsa key to get in, and it found one, and so it's not asking me for a password at all. So it looks like this worked properly, and let's just go into the .ssh folder. Let's make sure that it exists, first of all. And if we change into it, we should see that we have this authorized key file, and if I cat it out to standard output, notice that it now has the contents of my public key that we created earlier. So we were successfully able to upload our public key to the server and store it in authorized_keys.
Now, you'll need to do this-- you'll need to create an authorized key file-- for each user, and notice that the authorized key file is in the cmather home folder. So that is where we put this, and it's for each individual user. So this is a lot more convenient than what we were doing before with passwords.
Now, if you're unable to do this directly, if the id_rsa file, for example, is named something different, or for whatever reason ssh can't find it, you can provide another option, which is the i option, and then provide a path to the identity file. So in this case, it would be in the home directory, with tilde .ssh id_rsa dot-- well, no, it's the private one we'll use here. So we're going to use the private one, and when I press Enter, that'll explicitly tell SSH to use this key in order to log in.
One last small detail, this private key here that we're using in the identity option is not sent up to the server, so this is not going to cross over the wire, it's just used locally, and only the public key gets sent up to the server, and the public key can be open to the world. You could even email it to someone if you wanted to-- no big deal-- but the private key stays local.
So hopefully this shows you that keys are a lot easier to work with than entering in passwords, but they're also a lot more secure, so just about everyone is going to be using keys with SSH. Hopefully this will give you a better sense of how to create them and use them yourself.
|
OPCFW_CODE
|
The 21st century is highly influenced by mobile devices: Mobile devices were introduced a few decades ago with the aim of letting people connect with each other through calls and SMS. With time, the functionality, look and shape of mobile devices have changed in a remarkable way. These devices no longer just make calls or send SMS; they offer a wide range of utility functions. Mobile phones have been transformed into smartphones because they are much smarter than before. They can now be used for browsing the internet, playing multimedia files, capturing the memorable moments of our lives, playing graphics-intensive games and more. As a result they have gained tremendous popularity among users around the globe. The number of smartphones in the market is growing at a much faster pace than expected. Considering their popularity and global presence, we can say the 21st century is dominated by smartphones.
ASP.NET for mobile website design: Given the huge population of mobile devices, including smartphones, web developers are now trying to reach every user by making their websites mobile friendly. To reach more users, the industry has to adopt mobile web app development methodologies. Microsoft's flagship web framework, ASP.NET, is well equipped with features and facilities to transform ordinary websites into mobile web applications. It provides extensive support to .NET developers for building responsive web applications for different screen sizes and mobile devices.
Responsive web design and fluid design on the ASP.NET platform: Although fluid web design and responsive web design are often confused, there is a thin line separating them. Fluid design refers to building mobile websites with percentage-based layouts, i.e., sizing the components of a website relative to the screen size of the device; it lets the developer express a component's size and shape as a percentage of the device's screen size. Responsive design, on the other hand, lets ASP.NET developers build modern mobile websites that make use of the device's sensors in addition to its screen size, so the site looks more polished and professional. For example, a responsive site can detect the rotation of the device and rearrange its content accordingly to provide a better user experience.
Use modern web libraries for better output: ASP.NET developers can also draw on modern web libraries such as jQuery, Twitter Bootstrap and other plugins to build enterprise-grade responsive web applications, and can adopt HTML5 and CSS3 to create attractive, interactive web applications for mobile devices. What's more, web developers can integrate features like cloud storage to provide an enhanced user experience and improved performance. If you are planning to hire an ASP.NET developer, you should evaluate whether they are sufficiently agile!
Mindfire Solutions, a Microsoft Gold Certified Partner company, provides ASP.NET development services. If you have custom .NET development needs, please write an email to sales at mindfiresolutions dot com for a free quote.
|
OPCFW_CODE
|
How to use variable value declared in Microsoft PowerPoint Object module in Main Module
I need to refer to the variables in slide object modules to refresh the charts, which I have been struggling with for a week.
Could anyone help me, please?
I just need to make Slide2146449163 a variable, since SRngMkt, SRngClt etc. are defined in Slide2146449163. I have similar slides for other markets and countries.
Public Sub test()
Set ActSld = ActiveWindow.View.Slide
Debug.Print ActSld.Name
SldNum = ActSld.SlideID
'SldNum = ActiveWindow.View.Slide.SlideIndex
Debug.Print SldNum
Debug.Print ActSld.Name, ActSld.SlideID, ActSld.SlideNumber, ActSld.SlideIndex
Debug.Print ActSld.SRngMkt <--- ActSld should be dynamic. Giving error
Debug.Print Slide2146449163.SRngClt <--- works perfectly fine
Debug.Print Slide2146449163.SRngHub
Debug.Print Slide2146449163.SRngCTSS
End Sub
Below is the code written in slide object module
Public Sub cmd_Ink_CEE_Germany_Click()
SRngMkt = "CENTRAL & EASTERN EUROPE"
SRngClt = "*"
SRngHub = "Germany"
SRngCTSS = "Included"
Call test
End Sub
Pass 4 arguments to Refresh_Charts
btw, Dim SRngCTSS, SRngClt As String declares SRngCTSS as a Variant variable. There are several instances in your snippet.
Public Sub Refresh_Charts(SRngMkt As String, SRngClt As String, SRngHub As String, SRngCTSS As String)
'...'
xlWS.Range("E4").Value = SRngMkt <------- This line is working but Slide2146449163 should be dynamic
xlWS.Range("J4").Value = SRngClt
xlWS.Range("O4").Value = SRngHub
xlWS.Range("T4").Value = SRngCTSS
'...'
End Sub
Public Sub cmd_Ink_CEE_Germany_Click()
' Use local variables
Dim SRngMkt As String, SRngClt As String, SRngHub As String, SRngCTSS As String
SRngMkt = "CENTRAL & EASTERN EUROPE"
SRngClt = "*"
SRngHub = "Germany"
SRngCTSS = "Included"
' pass arguments to Refresh_Charts
Call Refresh_Charts(SRngMkt, SRngClt, SRngHub, SRngCTSS)
End Sub
|
STACK_EXCHANGE
|
About RE2.jl: it looks well written and not too long, so I think it should be relatively easy for someone experienced to update it (I could even give it a try), although we'd also need a binary builder for the library. But if there's interest in it, I think that shouldn't be an issue.
Are you using Julia’s built-in regex capability, occursin?
Yes, I pulled down RE2.jl. Figured I might have problems with it since the last commit was in early 2018…and I did. Found a thread on these forums related to the error Julia was throwing, and made a change to, I think, regex.jl. It then imported without error after that, and I was able to do some simple multi-threaded pattern matching. But enough time has passed that I don’t remember the change I made, and I’m confident that I never understood why it was breaking before or why it no longer broke. I also remember thinking “why is so much code needed for what should be a simple wrapper around a C++ library?!?”
So, yeah, that “relatively experienced” part is the thing I lack, but want to get. Am wondering how best to do it.
PS - opened up regex.jl in vim, and it maybe looks like I was doing something in function _write_capture().
To be honest, I don’t think doing things like this is Julia’s comparative advantage.
Essentially, my understanding is that you are looking for a book to teach you programming, using Julia. But the emphasis is on learning programming. While there are some books out there, I would recommend that you
- just read the manual,
- get started on a project that interests you,
- prepare for mistakes and frustration, but persevere,
- ask questions here if you get stuck.
I guess that’s pretty much how most of us learned programming. Also, reading code from well-written packages is instructive, but few people have the discipline to do that just for its own sake. So I recommend making PRs: you get a code review, which is like mentoring from an experienced developer. It’s a great way to learn.
No? I’ve read Julia’s main use case / advantage is in scientific computing, and that it solves the 2-language problem. There’s lots of evidence that’s the case, including Graydon Hoare’s pair of really good blog posts about it. But I’ve also read there’s no reason it can’t serve as a general purpose programming language.
If that’s not the case, fine. What language would you turn to for fast multi-threaded data analytics (I view regex as a data analytic tool)?
Maybe? I describe myself as a non-programmer, which is pretty accurate. If you’re suggesting that, to build new Julia functionality (as a module, etc), or to fix someone else’s Julia code, one needs to be a programmer, then yes, I suppose that’s what I’m asking.
Though if that indeed is what I’m asking, then the almost immediate follow-up question becomes “If I’m gonna have to learn programming to do this thing, then why not suck it up and learn C++?” 2-language problem or not, I know Python, and it has a lots of strengths, including its general purpose nature. And C++ has been around for a long time and lots of good learning resources exist.
It is a great general programming language, but if you just need to digest large files using regular expressions, then specialized tools may be faster. Of course, if you need to do other things with that data, Julia could be very useful.
That is of course up to you. Most Julia programmers who know C++ find that they can prototype quicker in Julia and achieve pretty much the same speed with much less code; but in some contexts C++ may be your best choice (eg if you already have a lot of legacy code in it you have to maintain, the rest of your team prefers to use it and they need to be able to fix your code too, etc). Only you have the information to make these choices.
Even though you have told us about your previous experience in detail, I am still unsure what you are looking for — “data analysis” is such a broad term. If you need a good general programming language for some kinds of data analysis, Julia will be among your top choices.
However, if you mostly do text processing with regexes, Julia can do that too; since Base currently just wraps PCRE, you can expect the same kind of performance.
Maybe you could try
What kinds of data analysis is Julia a top choice for?
As for what I’m looking for, I think it’s a couple of things, with different time scales:
- In the very near term, can it solve the immediate problem I’m trying to solve?
- In the more moderate to long term, how does Julia compare to Python for the types of things I typically do? How viable might it be as a Python replacement?
And yes, “data analysis” is a broad term. I typically do what people refer to as “exploratory analytics” - Python + Numpy + Pandas + Matplotlib (or Plotly) cover probably 80-85% of what I usually do**. But every once-in-a-while, I need to do something off-the-wall, like a regex-based pre-filter of large files before doing additional analysis. Or this one, which happened a few years ago: “iterate through all 100M+ possible permutations of a thing and perform a CPU-intensive calculation on each one.” (Python’s itertools + multiprocessing did the job there, albeit quite slowly.)
**Edit: in addition to data analytics, I’ve also worn the hat of “performance engineering,” “test engineering,” and “performance characterization” in the past, which has involved gathering data and then analyzing said data.
Thanks, I’ll take a look.
Typically, the kind of analysis which involves
- writing code (as opposed to using some canned method),
- a nontrivial amount of computation.
I’d say julia is the perfect choice in this case - it’s similar to what I do. I can also recommend Think Julia as an introduction - it’s the first thing I suggest for my students - but also given your interests, the DataFrames tutorial will likely be of use.
As to your current off-the-wall issue, depending on the complexity of your regex, you might want to take a look at Automa.jl - it’s meant for building parsers, but it’s got its own regex engine that’s quite fast (though it has some limitations). You might be able to use it for your purpose.
More generally, I’d say that, once you begin to use julia, you may find yourself shedding that sense that you’re not a programmer. Speaking only for myself (I’m a biologist, not a programmer, at least I would have said so 5 years ago when I started down this path), julia and this community have a way of getting you to look at your code differently, and encouraging good coding habits. I will never be a computer scientist the way some in this community are, but I find that an inspiration rather than a barrier.
Ok, you asked for a book, but maybe these courses from the Julia Academy are useful for you?
(You can also watch the videos on YouTube here)
These are two really solid resources for learning some of the fundamentals. Based on the kind of work you describe, I would also recommend that you have a look at the following packages:
- Queryverse (this is a whole suite of data analytics packages)
I would also read the documentation for distributed/parallel computing in Julia. Aside from the official Julia manual itself, there are several blog post type resources online that show the basics of distributed/parallel computing and one of the courses on JuliaAcademy is on parallel computing.
These resources alone should get you off and running and beyond the point of needing a book/additional courses, at least for the kind of work you describe above.
I assume this is because there are very few useful canned methods …??? If that’s the case, then the natural follow-up question is: “is there any plan in Julia’s roadmap to become something more than ‘write your own code’? If so, over what time frame (assuming that can even be predicted)?”
Several of the comments in a different thread seem related to this question. For example:
I think that a core question I’m asking myself is the following: when I need to do something that’s non-canned, what’s the best long-term solution, assuming I only have time/energy to do one? Am I better off learning a completely new language that shares many of Python’s strengths but still lacks maturity and reach? Or am I better off learning a language that can act as a good companion/2nd language to Python (eg, C++ or Rust)?
Thanks; I’ll take a look. What do you teach?
Will also look at this.
Thanks and thanks. How useful will the 2nd course/book be to someone who knows next to nothing about economics?
As with other open source (and, to some degree, closed source) projects, that depends on the community. Of course one will always have to write code, but more and more methods may be implemented in Julia by the community. That being said, I think Julia makes it easier than other languages to produce a decent package (I have developed in C++ and R before, which is a pain compared to Julia). More importantly, great packages already exist that are, in my opinion, at least as good as their counterparts in R and Python (though this strongly depends on preference), among them many of the ones named in this thread. In addition, if something does not exist, the community and infrastructure for creating new packages are amazing (no getting yelled at on some mailing list ;)). Interop with other languages is also great if you need to share code with someone who only uses, say, R (even though CRAN checks fail because they don't have Julia installed on the testing systems, except for Debian). Personally, I get the most bang for my buck in Julia compared to C++ (yes, I forget the ; on line 349 and put the argument order wrong in the header) and R. Python was supposed to be the language I learned after R, but I never really got into it, and then I found Julia, and well, here I am.
I think a lot of it will still be very useful. Feel free to ping me on economics related questions in the off topic section (I did study that stuff😃).
Not really, a lot of the modern canned methods exist in some package now, even if the collection is not as extensive as, say, CRAN.
The point is that Julia makes it easy to write custom code in a performant way.
I think you misunderstand how this works: there is no centralized “roadmap” for package development (similarly to other FOSS languages like R, Python, …).
Also, I think that you have arrived at the point where talking about these things in the abstract has diminishing returns and just starting learning & coding should be more informative.
I think this is the most underestimated point, especially for Julia. I really think the manual teaches solid programming. Read the parts that you need for your work and see if the syntax and workflow are something you enjoy. I've had the experience of really disliking popular software twice (dplyr and Python) for no apparent reason; it just didn't click in my brain. So maybe it's for you, maybe not. In general, Julia is a solid choice for what you are trying to do (from what I can gather here).
I don’t disagree with this; the manual is well written; it’s actually how I learned enough to find and (sort of) fix issues with RE2.jl. The thing it’s “missing” that Think Julia has is exercises.
That’s probably fair.
Thanks, everyone, for the replies!
That’s awesome! I think the combination of the two is really great.
There is also this alternative
|
OPCFW_CODE
|
Pegen doesn't work with Python 3.9 due to visibility change
When you try pegen with Python 3.9, you may get an error like this (on Mac; other platforms may vary):
$ make test
python3 -m pegen -q -c data/cprog.gram -o pegen/parse.c --compile-extension
python3 -c "from pegen import parse; t = parse.parse_file('data/cprog.txt'); exec(compile(t, '', 'exec'))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(/Users/guido/pegen/pegen/parse.cpython-39-darwin.so, 2): Symbol not found: _PyAST_mod2obj
Referenced from: /Users/guido/pegen/pegen/parse.cpython-39-darwin.so
Expected in: flat namespace
in /Users/guido/pegen/pegen/parse.cpython-39-darwin.so
make: *** [test] Error 1
With the help of Victor Stinner I've found that this is due to the addition of -fvisibility=hidden to CONFIGURE_CFLAGS_NODIST in CPython's Makefile. This flag[1] hides many of the functions we're using, from PyAST_mod2obj via PyTokenizer_Free to _Py_BinOp. All those functions are not part of the public C API, and the new flag hides them from extension modules like pegen.
The best thing I've come up with is to just remove that flag from CPython's Makefile, make clean, and make altinstall.
A serious consequence of this is that the C code generator will really only be useful once it's been integrated into CPython. I don't want to spend effort on convincing the CPython core devs that we should make all those APIs public.
Here's a patch to CPython's configure file that will preserve the change when the Makefile is rebuilt (but not when configure.in is rebuilt):
diff --git a/configure b/configure
index 44f14c3c2c..1901b6edc7 100755
--- a/configure
+++ b/configure
@@ -7377,10 +7377,10 @@ fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_enable_visibility" >&5
$as_echo "$ac_cv_enable_visibility" >&6; }
- if test $ac_cv_enable_visibility = yes
- then
- CFLAGS_NODIST="$CFLAGS_NODIST -fvisibility=hidden"
- fi
+ ## if test $ac_cv_enable_visibility = yes
+ ## then
+ ## CFLAGS_NODIST="$CFLAGS_NODIST -fvisibility=hidden"
+ ## fi
# if using gcc on alpha, use -mieee to get (near) full IEEE 754
# support. Without this, treatment of subnormals doesn't follow
[1] See http://gcc.gnu.org/wiki/Visibility for more on visibility. It's a GCC 4.0+ feature.
Solved by moving the parser into CPython. :-)
|
GITHUB_ARCHIVE
|
You spent three weeks preparing a presentation for management. Two days in Budget Committee meetings. Twelve visits to vendors and vendor reference sites before handing over £1/2 million. And now here you are, barely eighteen months later, faced with going through the whole process again because the high availability, high throughput Enterprise Storage system at the heart of your SAN, that was going to provide all your disk space for the next five years, will hit the ceiling the next time anyone updates their address book. And there's no way to expand it short of buying a complete new system. Ouch!
The InServ range of storage servers comprising the S400 and S800 models uses 3Par's 'InSpire' clustering architecture to combine the flexibility, and scalability, of modular storage with the reliability and throughput of a monolithic system. Indeed, their latest 'X' series server has set a mark for the standard SPC-1 storage performance benchmark that other vendors will struggle to match.
The architecture has essentially three components: an array of disk units, managed by two or more intelligent controller nodes, clustered via a full-mesh backplane. All disks are standard FC drives, currently available in 36, 73 and 147GB capacities, with a 300GByte unit planned.
A controller node comprises a proprietary ASIC with up to 8GB memory for I/O management, a pair of Pentium processors for process control and 6 PCI slots for external connections.
Tying everything together is a full-mesh backplane linking the controller nodes. Essentially a passive circuit board, the backplane provides a 1GByte/sec pathway between each pair of controllers. Although controllers function independently, the proprietary InForm operating system uses the fast point-to-point backplane connections to provide load balancing between controller nodes and to synchronize the caches, ensuring cache consistency.
The raw numbers are impressive. An S400 system can have 2 or 4 controller nodes, while an S800 can accommodate up to 8. A single controller can manage as many as 32 drive chassis units, with 2 cages per chassis, 5 disk magazines per cage and 4 disks per magazine. The S400 thus has a maximum capacity of 189TBytes, while the S800 can provide up to 376TBytes of storage.
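As a back-of-the-envelope check on those figures (assuming the 32-chassis limit is taken as the system-wide maximum and the largest 147GB drives are fitted):

```python
# Sanity check of the quoted S400 capacity figure. Assumption: the
# 32-chassis limit is read as the whole-system maximum, populated
# entirely with the largest currently available 147 GB drives.
chassis = 32
cages_per_chassis = 2
magazines_per_cage = 5
disks_per_magazine = 4

disks = chassis * cages_per_chassis * magazines_per_cage * disks_per_magazine
capacity_tb = disks * 147 / 1000  # decimal GB -> TB

print(disks, capacity_tb)  # 1280 drives, ~188 TB, close to the quoted 189 TBytes
```

The result lands within a terabyte of the quoted 189TByte maximum, so the per-chassis arithmetic hangs together; the S800's 376TByte figure is almost exactly double, consistent with its doubled controller count.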
However, enterprise storage is about far more than raw capacity. With a 4-port Fibre Channel Adapter in every PCI slot each controller node can provide 24 full bandwidth FC connections to external servers or other devices, eliminating the need for separate switches to manage the SAN. The maximum internal data throughput rate is 28GBytes/sec for an S800, 6GBytes/sec for an S400. In our testing, with just a single server linked to a near-minimal S400 unit, we easily achieved a steady throughput of 60MBytes/sec at 7500 IOPS and 220MBytes/sec at 1000 IOPS.
As might be expected for a serious contender in the enterprise storage stakes, high availability is ensured by massive redundancy, both within and between components. For example, each controller node has two power supplies (1 + 1 redundant), which can be connected to independent AC supplies. In addition, onboard disk units and battery backup ensure that in the event of both power supplies failing the write cache can be saved to disk. The upshot is that an InServ system can lose a cable, a drive magazine or even a controller node and continue to function. All components are hot-swappable, though if a single drive fails the entire magazine must be removed, a minor restriction in an otherwise highly flexible system.
When faults or potential faults are detected an onboard service unit issues warnings and sends diagnostic information to 3PAR, where Customer Service engineers can analyse the problem. But cutting-edge hardware needs software to match and this is where the InServ really shines. The InForm operating environment effectively isolates the applications level from the physical storage through several layers of virtualisation.
From the point of view of an external user the available space is divided into Virtual Volumes, VVs. VVs are identified with Virtual Logical Units or VLUNs. Physical disk units are divided into a pool of 256MByte 'chunklets' and InForm assembles groups of chunklets from across all available disks into Logical Disks (LDs). A VV is built from part or all of an LD, or spread across several LDs. InForm supports RAID 0, RAID 10 (mirroring + striping) and RAID 50 (RAID 5 + striping).
Management, including security, is through a command-line interface (InForm CLI) or a graphical user interface (InForm GUI). Both are easy to use but do require an understanding of the underlying hardware: for example, when creating a VV you must specify whether it should have Cage- or Magazine-level redundancy, that is, whether the VV can survive an entire cage failure or just a single magazine failing. This does, of course, imply you understand what cages and magazines are, and why the distinction is important.
Nonetheless 3PAR claims it typically needs to provide no more than a few hours instruction on the management tools before users can manage new installations efficiently. Part of the reason it has such confidence in the ease-of-use is that InForm automates most management functions. Indeed, far from needing months of expert tuning, the record-setting SPC-1 result was achieved with a near-default set-up.
The multiple layers of virtualisation allow InForm to provide a number of tools to optimise space utilisation. Virtual Copy makes copies efficiently by copying the allocation table rather than the contents, storing changed blocks as the copies diverge; actual copying of data need not happen until the original space is overwritten. Thin Provisioning allows an application to allocate far more space than it actually uses: disk space is only assigned when it's written. And Remote Copy (which requires two InServ units) combines both techniques to offer an efficient disaster-recovery mechanism.
The clustered processors and massive I/O throughput, the multiple redundancy and intelligent software - it's all more suggestive of a supercomputer than a simple storage array. One can't help feeling the InServ could have real potential as a high-end computer server for multiply-threaded, memory intensive applications such as protein structure analysis, seismic data processing and orbital mechanics. It is, though, simply a very gifted dedicated storage server.
|
OPCFW_CODE
|
SSL Certificate Issue: certificate verify failed: self-signed certificate in certificate chain
Hi, I have been having an issue trying to connect my SharePoint 2016 connector to Elastic. My Elasticsearch and Kibana are both running locally on Windows, while I am trying to run my connector in WSL-Ubuntu. I have no issue running curl -v https://localhost:9200 and it gives me the results here:
but when I try to run the connection using the command make run, it gives me this error:
Not sure what the real problem here. I have tried to copy cert from my local machine to ubuntu following this thread: SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed : self signed certificate in certificate chain (_ssl.c:992)
I also have tried use_ssl: False and verify_certs: False
Here are the versions I use:
Connector source : https://github.com/elastic/connectors
ElasticSearch: 8.12.2
Kibana: 8.12.2
WSL: <IP_ADDRESS>
WSL Distro: Ubuntu
elasticsearch.yml
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: certs/transport.p12
truststore.path: certs/transport.p12
connector config.yml
connectors:
-
connector_id: "adrl8o0BJBmtFMDNEQQR"
service_type: "sharepoint_server"
api_key: "OVNnUFVJNEJHS3dvQy1XdVFEWFQ6NFZsWU5tc21ScHl4OXVLYzVzM0Z5dw=="
elasticsearch:
host: "https://localhost:9200"
api_key: "OVNnUFVJNEJHS3dvQy1XdVFEWFQ6NFZsWU5tc21ScHl4OXVLYzVzM0Z5dw=="
Are you sure that for the first command exactly curl -v https://localhost:9200 was used, not e.g. curl -v --insecure -u "elastic:changeme" https://localhost:9200?
I don't know the client technology used here, but whenever there's secure connection with a self-signed certificate used, the certificate has to be either trusted or ignored by the client application.
In case of curl this is exactly what --insecure or -k is doing: using HTTPS regardless of the issuer of the certificate.
For the record: the client can't say "don't use secure connection" if the server requires one, hence telling the client use_ssl: false won't make things work. Unless SSL is disabled at server too (not recommended).
For production usage it may make sense to switch to trusted certificate. Or if self-signed is to be used, to make this CA trusted, e.g. by exporting it and then using with elasticsearch.ca_certs
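Concretely, that last suggestion would mean copying the auto-generated http_ca.crt from the Elasticsearch node into the WSL filesystem and pointing the connector at it. A sketch of the relevant config.yml fragment (the path below is an assumption: use wherever you actually copied the file):

```yaml
elasticsearch:
  host: "https://localhost:9200"
  api_key: "<your API key>"
  # CA certificate exported from the Elasticsearch node; the path is
  # hypothetical -- point it at wherever you copied http_ca.crt
  ca_certs: "/home/<user>/certs/http_ca.crt"
```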
Hi, yes, I'm sure it's the first command. I get an empty reply from the server using http instead of https. How do I know it's self-signed? My Elastic only has http_ca.crt, which is auto-issued, so I'm not sure what to do from here since I have very little security knowledge.
Are you really sure the first command has only -v switch?
How come cURL is showing "your user name is elastic"?
It says so in the first screenshot, as long as this is self-signed.
This http_ca.crt is something you'd need to export and use in the connector I'd say ;-)
|
STACK_EXCHANGE
|
I am new to automation and looking to make my home a bit smarter. To this end I am considering using openHAB running as a VM on an ESXi platform with an Aeon USB gen 2 as a Z-Wave interface. I have ordered the following hardware:
2 x Fibaro universal binary sensors
2 x Aeon 6 in 1 sensors
3 x DS18B20 temp sensors
1 x Vision ZM1701 door lock
My research has shown that apart from the door lock all this should be doable in openhab via the zwave dongle.
So my general question is does anyone see any issues/have any comments on my proposed HW list?
Also, what can I do in the interim to get the door lock integrated? Do I need an intermediate controller to handle the Z-Wave stuff?
I also have a Raspberry Pi 2 with a PiFace module installed which I intend to somehow integrate into the openHAB system so I can get access to its IO. Could this just be run as a second instance of openHAB/secondary controller maybe, or do I need other software for it?
I see no issues. Some people have had a challenge passing the USB dongle to the VM but in general this looks reasonable. You can check the z-wave database here to make sure your specific devices are supported. If they are not it is usually pretty easy to add it unless it uses the Security Command Class.
You will need a hub that can both communicate with your lock and communicate with OH. Vera seems to be pretty popular and there is support for a rooted Wink hub.
The recommended approach is to have just one OH instance in a given deployment and use other software to report back or receive commands on the remote RasPis. I do this using a Python script I wrote and use MQTT to receive sensor updates from my RasPis. It works quite well.
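Not the poster's actual script, but a minimal sketch of that pattern: each RasPi builds an MQTT topic and JSON payload per sensor reading and hands it to an MQTT client such as paho-mqtt. The topic layout and field names here are invented for illustration:

```python
import json

def sensor_message(host, sensor, value):
    """Build the MQTT topic and JSON payload for one sensor reading.

    The actual publish would be done by an MQTT client (e.g. paho-mqtt);
    only the message construction is sketched here."""
    topic = "sensors/{}/{}".format(host, sensor)   # e.g. sensors/raspi1/door
    payload = json.dumps({"sensor": sensor, "value": value})
    return topic, payload

topic, payload = sensor_message("raspi1", "door", "OPEN")
print(topic)    # sensors/raspi1/door
print(payload)  # {"sensor": "door", "value": "OPEN"}
```

On the openHAB side the MQTT binding would subscribe to the same topics and map the payload fields onto items.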
Yes the security class is still the stumbling block with openhab and zwave, although I see some people are claiming to sort of have their locks working after much manual tinkering.
I have changed my order for the lock to a Lockwood (Yale) zwave deadbolt one as it seems this may offer more chance in the future of being integrated into openhab, also I can pull the zwave module out in the meantime and use the keypad (save battery) until it can work in openhab.
Could I run openHAB directly on the Pi with the Z-Wave USB stick and also the PiFace integration, or would this be too much for the Pi (it's a Pi 2 model)?
I just got a Z-Wave network set up and added a couple of the Aeotec 6-in-1 sensors; it was not that difficult. I did find both of my sensors' temperature readings were a bit high out of the box. I did have trouble setting the calibration in HABmin; the fellow from Aeon Labs offered to give one a test from his end and let me know, so a big tick for customer support there. Alternatively, thanks to watou, it was relatively straightforward to add a rule to calibrate them.
|
OPCFW_CODE
|
Metering strategies for time lapse
In short, my question is: what are the best metering strategies when shooting a time lapse sequence to avoid visible fluctuations in brightness? I'd like to make the day-to-night transition look smooth.
This is my first try at making a time lapse video. The frames are unprocessed, they're just stitched together. Ignoring other problems and mistakes I made, one annoying problem is the fluctuations in the brightness of the image. What is a good strategy to avoid these? During this sequence, I set the metering to "matrix mode", but there are still several jumps in brightness. An obvious strategy would be using manual settings and fixing both the shutter speed and aperture at constant values. Obviously this won't work during dusk: the shutter speed went from 1/2000 to 1/4 during this sequence.
Since in this particular shot the middle of the picture is clear sky, I thought of trying spot metering next time, so the passing cars and light in the background won't have such a high impact. But I'm not at all sure it will be better. Also, it's impossible to tell if someone will shoot some fireworks right in front of that metering spot (fireworks are very common here). What do you think?
Finally, I thought of hacking the program I used to control the camera, and adjusting the camera settings from the program continuously according to a predefined curve. (I can get the curve from the sequence of shots I already have.) This is a lot of work though, so I'd only use it as a last resort.
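For what it's worth, the curve-following idea is less work than it sounds: record (elapsed-seconds, shutter-speed) pairs from the earlier sequence and linearly interpolate between them for each new frame. A hedged sketch, where the curve values are invented and only the 1/2000-to-1/4 range comes from the sequence described above:

```python
def shutter_for_frame(t, curve):
    """Linearly interpolate a shutter speed (in seconds) at time t from
    a predefined (time, shutter) curve taken from an earlier sequence."""
    # Clamp requests outside the recorded range
    if t <= curve[0][0]:
        return curve[0][1]
    if t >= curve[-1][0]:
        return curve[-1][1]
    # Walk consecutive curve points and interpolate within the segment
    for (t0, s0), (t1, s1) in zip(curve, curve[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return s0 + frac * (s1 - s0)

# Invented day-to-dusk curve: 1/2000 s at the start, 1/4 s after 40 minutes
curve = [(0, 1 / 2000), (1200, 1 / 250), (2400, 1 / 4)]
print(shutter_for_frame(1800, curve))  # halfway between 1/250 s and 1/4 s
```

The controlling program would then push the interpolated value to the camera before each exposure.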
Your answer lies in this free software plugin for Lightroom: http://lrtimelapse.com/
When you take your timelapse it is important that everything is set to manual, including white balance, shutter speed, aperture and ISO. After you have done that you can use a combination of Lightroom and the LRTimelapse software to post-process all the pictures and create an exposure curve to compensate for light loss or gain during the timelapse.
Other than that, it is important to predict what will happen so that you can choose the appropriate metering for your timelapse. But as said before, the biggest help is the LRTimelapse software, as it allows you to make gradual changes across all images.
Time lapse photography
Time lapse movies are really fascinating. If you own Adobe Lightroom or Camera RAW you can easily make your own time lapse movies. You can download all templates for free in the download section.
LRTimelapse will take your movies to the next level. It allows you to continuously change Adobe Lightroom or Camera RAW development parameters over time, enabling key-frame-style animations like in video processing. The great advantage over post-processing in your favorite video production software is the much higher quality of preprocessing on a RAW-file basis. Of course you can work with JPG as well.
Furthermore LRTimelapse is one of the best instruments to deflicker your time lapse movies.
Examples and use cases
Alter white balance and other parameters over the time (for example for sun sets)
Make the "Holy Grail" - (day to night transition) easy peasy
Deflicker
Make Ken-Burns effects (pan/zoom)
Fade in / fade out
Continuously saturate / desaturate
and many more...
This program and its deflicker feature look exactly like what I was looking for (if only it didn't require Lightroom...). But one thing I don't understand is why people keep suggesting fixing all settings, even during extreme changes in illumination like what I showed. My camera sensor is simply not capable of capturing decent images under such an extreme variation, so making some adjustment is absolutely necessary. Otherwise I either end up with a blown-out sky or extreme noise in the dark areas. But the program you linked to does indeed seem to be able to fix the fluctuations caused by the A setting.
It does work nicely :-). Thanks!
@Szabolcs it always depends on what kind of timelapse you are interested in doing. If I were doing a day-to-night transition I would set the day exposure so that shots are not overexposed, and gradually slow down the shutter speed as the environment gets darker, so that I can capture everything. A good way to avoid flicker in a timelapse is to use shutter speeds below 1/60. I have found that with higher shutter speeds there is more flicker afterwards, though I am not sure why. Thanks for accepting the answer ;)
How do you reduce the shutter speed? Do you do it manually?
Yes, I do it manually. I have a small notebook with a few exposure times written down so that it is faster to change settings, especially when doing landscapes. Something that is difficult is the day-to-night transition, as at night you might need a 4-6 second exposure. You have to take that into consideration when you start shooting in the day so that the day timelapse doesn't seem faster than the night timelapse. I use the Magic Lantern addon in combination with a Canon 5D.
Really nice effort!
I note that the fluctuations only really start after the sun is setting. Obviously at this time the light entering the camera starts to diminish, hence the metering getting confused.
My suggestion would be to use (M)anual mode on the camera. Use matrix (or evaluative in Canon-speak) to meter the scene in the daytime, and dial in those fixed settings to your camera (don't forget to fix the ISO too).
This will ensure the exposure settings do not change at all -- thereby no fluctuations, and ensuring that the effect of the diminishing light is captured.
I read somewhere that the reason behind flicker at higher shutter speeds is inaccuracy of the shutter mechanism's timing at higher shutter speeds. It makes sense that shorter shutter durations allow for less leeway, because there is less room for error on each end of the shutter cycle. The higher the shutter speed, the more accurate the timing of the shutter actuation needs to be. These timing errors are negligible under "normal" operation, but when essentially comparing sequential images back to back at 24 fps and above, the differences are more perceivable as what we recognize as flicker.
|
STACK_EXCHANGE
|
And then call it Gutenberg.
There are no general "basics". No universal default styling/behaviour suits everyone's needs. You suppose that everybody uses the gallery the same way, but that's not quite right. Just try to offer a concrete implementation (design, layout, markup) and you'll find that half of users are not pleased with it at all. Try another one and the first half will be against it once again.
I use ~6 gallery solutions on my sites right now. They are totally different and incompatible with each other. They use different lightboxing libs (or no libs at all) and have completely different layouts. Which one should be implemented in the core? Yours, I guess? The seventh. And I'll have to remove basic styling and the poor default lightbox 20+ times on my sites just to keep everything as is.
It’s not the core. It’s theming.
Fall back to the default minimalistic gallery, which is universal for any other plugin or theme. That is the key concept. Defaults are the most widely supported. That makes the whole system flexible.
P.S. And I agree with James that starting some petitions is the best way to estimate the real community needs and interests. Please transform your vision into a list of separate features and let us vote for each one separately. I've read through the whole discussion and it seems that all arguments are cycling around the same things. Why not just vote instead?
It must be lovely having the luxury of managing one website (or a few similar websites) and imagining the perfect CMS that would do exactly what you need it to do out of the box (if only the developers would listen!).
Most of us (and CP’s primary market) manage multiple websites with a wide variety of requirements. Keeping the core clean and simple (there are plans to further clean up and simplify) is what we need from a professional CMS, modifying or adding features with well-coded, well-supported plugins (possibly creating our own for a particular use).
CP is a community-led platform that encourages the efforts of third party developers to bring all kinds of great functionality to the table (in the form of plugins). And this also relates to the need to keep core clean and simple. I for one sincerely hope it stays that way.
Thank you for the nice discussion.
My suggestion, again, is to start a petition for these things. My advice for the petitions is to keep them very clear and specific, one petition per topic. The media library petition is going to be interesting since it is not at all clear what should be implemented there at the moment.
We have already started differentiating ourselves from WP in small ways such as removing Hello Dolly, and we are happy to continue this.
However we do need to follow our already-established process as outlined in our democracy guidelines and our roadmap. There are literally thousands of potential changes we could make, and the way we will prioritize them is by looking at what the community wants to see. This is a much better way of doing things than relying on your or my or the committee’s individual opinions.
In short: yes, let’s make ClassicPress what we want it to be, but in order for this to be possible you have to work with us. In this case, the first step is making small, focused petitions for what you want to see.
|
OPCFW_CODE
|
SQL Server Complex DateDiff Where Scenario
This is the current table I am dealing with. I would like to record the length of time that the stations (st) are in state 5.
+-----+-----+-----+-----+---------------------+
| st1 | st2 | st3 | st4 | TimeStamp |
+-----+-----+-----+-----+---------------------+
| 3 | 3 | 3 | 3 | 2018-07-23 07:51:06 |
+-----+-----+-----+-----+---------------------+
| 5 | 5 | 5 | 5 | 2018-07-23 07:50:00 |
+-----+-----+-----+-----+---------------------+
| 0 | 0 | 10 | 10 | 2018-07-23 07:47:19 |
+-----+-----+-----+-----+---------------------+
| 5 | 5 | 5 | 5 | 2018-07-23 07:39:07 |
+-----+-----+-----+-----+---------------------+
| 3 | 3 | 10 | 10 | 2018-07-23 07:37:48 |
+-----+-----+-----+-----+---------------------+
| 3 | 3 | 10 | 10 | 2018-07-23 07:37:16 |
+-----+-----+-----+-----+---------------------+
This is about what I would like to have:
+-----+-----+-----+-----+---------------------+----------+
| st1 | st2 | st3 | st4 | TimeStamp | TimeDiff |
+-----+-----+-----+-----+---------------------+----------+
| 5 | 5 | 5 | 5 | 2018-07-23 07:50:00 | 66 |
+-----+-----+-----+-----+---------------------+----------+
| 5 | 5 | 5 | 5 | 2018-07-23 07:39:07 | 492 |
+-----+-----+-----+-----+---------------------+----------+
This might be a difficult way to go about doing this, so I am definitely open to other ideas. My end goal is to be able to pull a query and sum the time for state 5 on a daily basis. My problem is getting from the timestamps on stations and their respective states to a time length that I can manipulate and work with. I might also add that, while there is some variation in these numbers, when a station reads state 5 the whole row will be filled with the number 5, unlike the other states, where the row can contain different numbers.
If I had the DATEDIFF for each interval I could just narrow it down to state 5 using a WHERE clause, so that's why I have the DATEDIFF for each timestamp interval in my final table.
Any help would be greatly appreciated. Thank you.
EDIT
More specific details in the comments below.
Just to clarify: For an end result, do you want 653 (the number of seconds between the first instance of all State 5 and the 2nd instance)?
No, 653 is not what I am looking for. I made a more detailed table for my end result which shows what I am trying to get: the time elapsed during state 5 instances.
You can use lead():
select t.*,
datediff(second, timestamp, lead(timestamp) over (order by timestamp)) as timediff
from t;
I run into an issue after adding a where st1=5 clause: the DATEDIFF then gives me the difference between only state 5 instances. Do you know of a way for me to get around this? I want to pull the length of individual instances of state 5. I'll update my end result table to represent this now.
It is giving me the datediff between instances where state = 5 instead of the length of time of the individual state 5 instances.
@tphasley . . . This is what this question asks for, explicitly with the result set. I do believe you have asked another question already, which is the right approach.
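For readers who want to check the interval logic outside the database, the same lead()-then-filter idea can be sketched in a few lines of Python against the sample data (the SQL in the answer above remains the way to do it in SQL Server):

```python
from datetime import datetime

def state5_durations(rows):
    """rows: (st1, st2, st3, st4, timestamp) tuples in any order.
    Returns (timestamp, seconds) for every all-5 row, where seconds is the
    gap to the next reading -- the lead()-then-filter idea from the answer."""
    rows = sorted(rows, key=lambda r: r[4])            # oldest first
    out = []
    for cur, nxt in zip(rows, rows[1:]):
        if cur[:4] == (5, 5, 5, 5):
            out.append((cur[4], (nxt[4] - cur[4]).total_seconds()))
    return out

rows = [
    (3, 3, 3, 3,   datetime(2018, 7, 23, 7, 51, 6)),
    (5, 5, 5, 5,   datetime(2018, 7, 23, 7, 50, 0)),
    (0, 0, 10, 10, datetime(2018, 7, 23, 7, 47, 19)),
    (5, 5, 5, 5,   datetime(2018, 7, 23, 7, 39, 7)),
    (3, 3, 10, 10, datetime(2018, 7, 23, 7, 37, 48)),
    (3, 3, 10, 10, datetime(2018, 7, 23, 7, 37, 16)),
]
for ts, secs in state5_durations(rows):
    print(ts, secs)  # 492.0 and 66.0 seconds, matching the desired TimeDiff
```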
|
STACK_EXCHANGE
|
#!/usr/bin/env python3
class Course:
    '''Course superclass for any course'''
    def __init__(self, dept, num, name='',
                 units=4, prereq=None, restr=None, coclass=None):
        self.dept, self.num, self.name = dept, num, name
        self.units = units
        # Use None defaults to avoid sharing one mutable set between instances
        self.prereq = set() if prereq is None else prereq
        self.restr = set() if restr is None else restr
        self.coclass = coclass if coclass else 'No co-classes required'

    def __repr__(self):
        return self.dept + ' ' + self.num

    def get_info(self):
        return (repr(self) + ': ' + self.name
                + '\nPrerequisites: ' + ', '.join(self.prereq)
                + '\nCoclasses: ' + self.coclass)


class GE(Course):
    '''GE class'''
    def __init__(self, dept, num, name='', units=4,
                 prereq=None, restr=None, coclass=None, section=None):
        self.section = set() if section is None else section
        # super() must be called; the bare name `super` has no __init__
        super().__init__(dept, num, name, units, prereq, restr, coclass)
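A brief usage sketch of the superclass; the department and course values are invented, and the class is restated in condensed form so the snippet runs on its own:

```python
# Condensed restatement of Course so this example is self-contained.
class Course:
    def __init__(self, dept, num, name='', units=4,
                 prereq=None, restr=None, coclass=None):
        self.dept, self.num, self.name = dept, num, name
        self.units = units
        self.prereq = set() if prereq is None else prereq
        self.restr = set() if restr is None else restr
        self.coclass = coclass if coclass else 'No co-classes required'

    def __repr__(self):
        return self.dept + ' ' + self.num

    def get_info(self):
        return (repr(self) + ': ' + self.name
                + '\nPrerequisites: ' + ', '.join(self.prereq)
                + '\nCoclasses: ' + self.coclass)

# Hypothetical course values, for illustration only
c = Course('ICS', '33', 'Intermediate Python', prereq={'ICS 32'})
print(c.get_info())
# ICS 33: Intermediate Python
# Prerequisites: ICS 32
# Coclasses: No co-classes required
```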
|
STACK_EDU
|
On a two-wire transmission circuit, the difference between the instantaneous voltages a and b on the two wires is defined as the differential voltage d: d = a - b.
a and b are each measured with respect to a common arbitrary reference.
On a two-wire transmission circuit, the average of the instantaneous voltages a and b on the two wires is called the common-mode voltage c: c = (a + b)/2.
a and b are each measured with respect to a common reference, usually a local earth ground, but sometimes a local reference plane or other local reference point.
The differential and common-mode voltages comprise an alternate representation of the original signal, often called a decomposition of the original signal. Given the common-mode and differential voltages, you can reconstruct a and b . (The same decomposition applies to currents.)
In a good differential system one usually strives to limit the AC component of the common-mode signal. This is done because the common-mode portion of the transmitted signal does not enjoy any of the noise-canceling or radiation-preventing benefits of differential transmission. The common-mode and differential signals also propagate differently in most cabling systems, which can lead to peculiar skew or ringing problems if the common-mode component is an appreciable fraction of the overall signal amplitude, especially if those common-mode currents are accidentally converted into differential signals (see Section 6.8, "Differential to Common-Mode Conversion"). Intercabinet cabling, particularly, is extremely sensitive to the presence of high-frequency common-mode currents, which radiate quite efficiently from unshielded cabling.
Another decomposition of the two-wire transmission problem defines odd-mode and even-mode voltages and currents. These are similar to, but slightly different from, differential and common-mode voltages and currents.
An odd-mode signal is one that has amplitude x ( t ) on one wire and the opposite signal “ x ( t ) on the other wire. A signal with an odd-mode amplitude of x ( t ) has a differential amplitude of 2 x ( t ). If the signal x ( t ) takes on a peak-to-peak range of y , then the peak-to-peak odd-mode range is simply y , but the peak-to-peak differential amplitude is 2 y .
An even-mode signal is the same on both wires. An even-mode signal with a peak-to-peak range of y also has a peak-to-peak common-mode range of y . The even-mode amplitude and common-mode amplitude are one and the same thing.
Two-wire transmission systems sometimes send a signal voltage on one wire, but nothing on the other. In this case the differential-mode amplitude equals the signal amplitude on the first wire. The common-mode amplitude is half that value. In this case the odd-mode and even-mode amplitudes are the same and both equal to half the signal amplitude on the first wire.
Here are the translations between odd-mode and even-mode quantities: odd = (a - b)/2 and even = (a + b)/2, so that a = even + odd and b = even - odd. The same decomposition applies to currents.
a and b represent the voltages on the two wires with respect to a common reference.
The differential-and-common-mode decomposition and the even-and-odd mode decomposition share very similar definitions. The discrepancy between the two models has to do with the definition of the differential mode. The differential voltage is what you read with an electrical instrument when you put it across two wires. The odd-mode voltage is a mathematical construct that simplifies the bookkeeping in certain situations.
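The four quantities are easy to confuse, so here is a small numeric sketch of the decompositions as defined above (d = a - b, c = (a + b)/2, odd = (a - b)/2, even = (a + b)/2):

```python
def decompose(a, b):
    """Return the differential, common-mode, odd-mode and even-mode
    voltages for instantaneous wire voltages a and b."""
    d = a - b          # differential: what a voltmeter across the pair reads
    c = (a + b) / 2    # common mode: average of the two wires
    odd = (a - b) / 2  # odd mode: half the differential amplitude
    even = (a + b) / 2 # even mode: identical to the common mode
    return d, c, odd, even

# Single-ended case from the text: a signal on one wire, nothing on the other
d, c, odd, even = decompose(1.0, 0.0)
print(d, c, odd, even)  # 1.0 0.5 0.5 0.5 -- common mode is half the signal
```

The second line confirms the text's observation that with a signal on one wire only, the odd-mode and even-mode amplitudes are equal and both half the signal amplitude.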
Any noise like external RF interference that equally affects both wires of a differential pair will induce a common-mode (even-mode) signal, but not a differential-mode (odd-mode) signal (Figure 6.8). A good differential receiver senses only the differential signal and is therefore immune to this type of noise.
Figure 6.8. A good differential receiver cancels any noise that affects both wires equally, such as external RFI.
|
OPCFW_CODE
|
A few days ago, from July 17 to 25, I attended the SEAMS (Southeast Asian Mathematical Society) School held at the Institute of Mathematics, University of the Philippines Diliman, discussing topics on elliptic curves. The school was also partially supported by CIMPA (Centre International de Mathematiques Pures et Appliquees, or International Center for Pure and Applied Mathematics), and I believe also by the Roman Number Theory Association and the Number Theory Foundation. Here’s the official website for the event:
Southeast Asian Mathematical Society (SEAMS) School Manila 2017: Topics on Elliptic Curves
There were many participants from countries all over Southeast Asia, including Indonesia, Malaysia, Philippines, and Vietnam, as well as one participant from Austria and another from India. The lecturers came from Canada, France, Italy, and Philippines.
Jerome Dimabayao and Michel Waldschmidt started off the school, introducing the algebraic and analytic aspects of elliptic curves, respectively. We have tackled these subjects in this blog, in Elliptic Curves and The Moduli Space of Elliptic Curves, but the school discussed them in much more detail; for instance, we got a glimpse of how Karl Weierstrass might have come up with the function named after him, which relates the equation defining an elliptic curve to a lattice in the complex plane. This requires some complex analysis, which unfortunately we have not discussed that much in this blog yet.
Francesco Pappalardi then discussed some important theorems regarding rational points on elliptic curves, such as the Nagell-Lutz theorem and the famous Mordell-Weil theorem. Then, Julius Basilla discussed the counting of points of elliptic curves over finite fields, often making use of the Hasse-Weil inequality which we have discussed in The Riemann Hypothesis for Curves over Finite Fields, and the applications of this theory to cryptography. Claude Levesque then introduced to us the fascinating theory of quadratic forms, which can be used to calculate the class number of a quadratic number field (see Algebraic Numbers), and the relation of this theory to elliptic curves.
Richell Celeste discussed the reduction of elliptic curves modulo primes, a subject which we have also discussed here in the post Reduction of Elliptic Curves Modulo Primes, and two famous problems related to elliptic curves: Fermat's Last Theorem, which was solved by Andrew Wiles in 1995, and the still unsolved Birch and Swinnerton-Dyer conjecture regarding the rank of the group of rational points of elliptic curves. Fidel Nemenzo then discussed the classical problem of finding "congruent numbers", rational numbers forming the sides of a right triangle whose area is given by an integer, and the rather surprising connection of this problem to elliptic curves.
On the last day of the school, Jerome Dimabayao discussed the fascinating connection between elliptic curves and Galois representations, which we have given a passing mention to at the end of the post Elliptic Curves. Finally, Jared Guissmo Asuncion gave a tutorial on the software PARI which we can use to make calculations related to elliptic curves.
Participants were also given the opportunity to present their research work or topics they were interested in. I gave a short presentation discussing certain aspects of algebraic geometry related to number theory, focusing on the spectrum of the integers, and a mention of related modern mathematical research, such as Arakelov theory, and the view of the integers as a curve (under the Zariski topology) and as a three-dimensional manifold (under the etale topology).
Aside from the lectures, we also had an excursion to the mountainous province of Rizal, which is a short distance away from Manila, but provides a nice getaway from the environment of the big city. We visited a couple of art museums (one of which was also a restaurant serving traditional Filipino cuisine), an underground cave system, and a waterfall. We used this time to relax and talk with each other, for instance about our cultures, and many other things. Of course we still talked about mathematics, and during this trip I learned about many interesting things from my fellow participants, such as the class field theory problem and the subject of real algebraic geometry.
I believe lecture notes will be put up on the school website at some point by some of the participants of the school. For now, some of the lecturers have put up useful references for their lectures.
SEAMS School Manila 2017 was actually the first summer school or conference of its kind that I attended in mathematics, and I enjoyed very much the time I spent there, not only in learning about elliptic curves but also making new friends among the mathematicians in attendance. At some point I also hope to make some posts on this blog regarding the interesting things I have learned at that school.
|
OPCFW_CODE
|
Vue JS Role Based Authentication and Routing
I have an old ASP.NET Web Forms application, which I want to convert to a Vue.js front-end with an ASP.NET Core API back-end.
The current application has a login page where the user enters his credentials; once the credentials are verified, the user is taken to the application home page, which has a side menu bar.
The side menu bar is loaded based on the current user's role/privilege. Say, for example, a user with the role of admin may have 10 menu items, while a normal user may have only 5.
I'm very new to Vue, so please guide me on how to set up the Vue application and routing for the above scenario.
Thanks in advance.
You are asking for too much. Use Google, YouTube and other places to find guides and courses. Then come back and ask specifics. We cannot do your work for you.
Welcome to SO! I recommend taking a look at these guidelines. To get a good answer, you need to share your code (as a minimal, reproducible snippet), explain what you've tried so far, and what specifically isn't working.
Thanks all for the contributions; with some searching and tutorials I figured out most of it. Now one grey area remains: how to dynamically set the routes in the side menu bar using the role stored in Vuex.
There are many ways to go about this, your goal is to load data about your user into your application.
One way to solve this is to create an API function that returns information about the currently logged on user.
The authentication of the request can be done through cookies, jwt header or something else.
The api call to get the authenticated user data will also help you figure out if the user is already logged on when the app starts up.
Putting aside how you make the network request, let's say you now have the data in your application.
There are a few choices on how to store it; this is an architecture decision, as the result will likely affect many other parts of your application.
The common solution to storing application-wide (global) state is to use Vuex.
This will also play well together with vue-router.
Let's say that in Vuex you will make a field roles that will hold an array of strings, indicating the roles the user has.
In a vue component you can reach the vuex store from the $store property (this.$store in the code, $store in templates).
The state of the store is then reachable via $store.state, and your roles array would exist over at $store.state.roles.
To set the roles you will have to set up mutations that will let you save the roles, and the API call would be part of an action. You can read more about that in the Vuex documentation on how to update the state.
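As a hedged sketch of the above (the names `setRoles`, `fetchUser`, `getCurrentUser` and the `/api/me` endpoint are illustrative assumptions, not part of Vuex or the question):

```javascript
// Sketch of a Vuex-style module holding the user's roles.
// All names here (setRoles, fetchUser, getCurrentUser) are illustrative.
const userModule = {
  state: () => ({ roles: [] }),
  mutations: {
    // Mutations are the only place the state should be changed.
    setRoles(state, roles) { state.roles = roles; },
  },
  actions: {
    // The network call belongs in an action; `api` stands in for your HTTP client.
    async fetchUser({ commit }, api) {
      const user = await api.getCurrentUser(); // e.g. GET /api/me
      commit('setRoles', user.roles);
    },
  },
};

// Side menu filtering: keep items whose allowed roles intersect the user's roles.
function visibleMenu(menuItems, userRoles) {
  return menuItems.filter(item => item.roles.some(r => userRoles.includes(r)));
}
```

With vue-router, the same check can run in a `router.beforeEach` guard against a `meta.roles` field on each route, redirecting to a login or "forbidden" page when the intersection is empty.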
|
STACK_EXCHANGE
|
Is there a cross browser event for activate in the browser?
Is there a cross browser event that can be used to show a message to the user returning to their web page?
For example, a user has ten applications or tabs open. They get a new notification from our app and I show a notification box. When they switch to our tab I want to begin our notification animation.
The activate event is common in desktop applications, but so far, on the window, document and body, neither "activate" nor "DOMActivate" does anything when swapping between applications or tabs, while "focus" and "blur" do. Those events work, but the naming is different, and the events that should be doing this are not firing.
So is that the right event to use cross-browser, or is there another event?
You can test by adding this in the console or page and then swapping between applications or tabs:
window.addEventListener("focus", function(e) {console.log("focused at " + performance.now()) } )
window.addEventListener("blur", function(e) {console.log("blurred at " + performance.now()) } )
Update:
In the link to the possible duplicate is a link to the W3 Page Visibility doc here.
It says to use the visibilitychange event to check when the page is visible or hidden like so:
document.addEventListener('visibilitychange', handleVisibilityChange, false);
But there are issues:
The Document of the top level browsing context can be in one of the
following visibility states:
hidden
The Document is not visible at all on any screen. visible
The Document is at least partially visible on at least one screen. This is the same condition under which the hidden attribute is set to
false.
So it explains why it's not firing when switching apps. But even when switching apps and the window is completely hidden the event does not trigger (in Firefox).
So at the end of the page is this note:
The Page Visibility API enables developers to know when a Document is
visible or in focus. Existing mechanisms, such as the focus and blur
events, when attached to the Window object already provide a mechanism
to detect when the Document is the active document.
So it would seem to suggest that it's accepted practice to use focus and blur to detect window activation or app switching.
I found this answer that is close to what would be needed to make a cross browser solution but needs focus and blur (at least for Firefox).
Observation:
StackOverflow has a policy against mentioning frameworks or libraries. The answers linked here have upvotes for the "best" answer.
But these can grow outdated. Since yesterday I found mention of two frameworks (polyfills) that attempt to solve this same problem, visibly and isVis (not creating a link). If this is a question-and-answer site and a valid answer is "here is some code that works for me", but "here is the library I created using the same code, which can be kept up to date and maintained on GitHub" is not valid, then in my opinion it's missing its goal.
I know the above should probably go to Meta, and I have raised it there, but they resist changing the status quo for some reason. Mentioning it here since it's a relevant example.
This is one way of doing it by using focus and blur, yes. Please see link below.
Possible duplicate of Is there a way to detect if a browser window is not currently active?
My question is different in that it is to check if the window is currently active, not inactive. See the difference? I care if they activate the window, not if it is inactive. However, a lot of that answer is helpful.
The Page lifecycle API can be used to listen for visibilitychange events.
[This event triggers] when a user navigates to a new page, switches tabs, closes a tab, minimizes or closes the browser, or switches apps on mobile operating systems. Quote
Current browser support
Reference on MDN
Thank you. I read through it and then tested it and it works for tab switching but doesn't seem to pick up events from application switching (via alt + tab). Focus and blur seem to work at least in Firefox so a combination might work. Another note: when switching and the page is completely occluded the document.hidden flag still reports visible.
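For what it's worth, a combination along those lines could be sketched like this (`win` and `doc` are parameters only so the helper can be exercised outside a browser; in a page you would pass `window` and `document`):

```javascript
// Approximate a cross-browser "activate"/"deactivate" signal by combining
// focus/blur on the window with visibilitychange on the document.
function watchActivation(win, doc, onActive, onInactive) {
  const fromVisibility = () => {
    if (doc.visibilityState === 'hidden') onInactive();
    else onActive();
  };
  win.addEventListener('focus', onActive);    // covers app/tab switches in Firefox
  win.addEventListener('blur', onInactive);
  doc.addEventListener('visibilitychange', fromVisibility); // covers tab switches
}
```

Since focus/blur and visibilitychange can both fire for the same switch, in practice you may want to de-duplicate consecutive identical notifications before starting or stopping the animation.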
|
STACK_EXCHANGE
|
pub mod memory;
pub mod pipeline;
pub use memory::MemoryModel;
pub use pipeline::PipelineModel;
use ro_cell::RoCell;
type PipelineFactory = fn(usize) -> Box<dyn pipeline::PipelineModel>;
// register_memory_model reconstructs the previous value with Box::from_raw, so the
// initial model must be a ZST: dropping a Box of a ZST does not free any memory.
static MEMORY_MODEL: RoCell<&'static dyn MemoryModel> = RoCell::new(&memory::AtomicModel);
static PIPELINE_MODEL: RoCell<PipelineFactory> = RoCell::new(|_| Box::new(pipeline::AtomicModel));
pub fn get_memory_model() -> &'static dyn MemoryModel {
*MEMORY_MODEL
}
pub fn new_pipeline_model(hartid: usize) -> Box<dyn PipelineModel> {
(*PIPELINE_MODEL)(hartid)
}
unsafe fn register_memory_model(model: Box<dyn MemoryModel>) -> Box<dyn MemoryModel> {
Box::from_raw(RoCell::replace(&MEMORY_MODEL, Box::leak(model)) as *const dyn MemoryModel as _)
}
unsafe fn register_pipeline_model(model: PipelineFactory) -> PipelineFactory {
RoCell::replace(&PIPELINE_MODEL, model)
}
/// Set whether lock-step execution is required for this model's simulation.
/// For cycle-level simulation you would want this to be true, but if no cache coherency is
/// simulated **and** only rough metrics are needed it's okay to set it to false.
unsafe fn set_lockstep_mode(mode: bool) {
RoCell::as_mut(&crate::FLAGS).thread = !mode;
}
pub unsafe fn switch_model(id: usize) {
match id {
0 => {
register_memory_model(Box::new(memory::AtomicModel));
register_pipeline_model(|_| Box::new(pipeline::AtomicModel));
set_lockstep_mode(false);
}
1 => {
register_memory_model(Box::new(memory::SimpleModel));
register_pipeline_model(|_| Box::new(pipeline::InOrderModel::default()));
set_lockstep_mode(false);
}
_ => panic!("unknown model id"),
}
}
|
STACK_EDU
|
Not sure what to do with your son or daughter during “Take Your Child to Work Day”? TechGirlz, a renowned Philadelphia-based nonprofit, has the answer for you! Why not provide your child or a visiting group of kids with a fun, free, technology activity?
TechGirlz is developing a series of lesson plans to further the technology skills of young people. This is important—by 2020, there will be over a million jobs in tech! TechGirlz is focused on reducing the gender gap in tech careers by offering workshops and summer camps to middle school aged girls. All young people can benefit from the lesson plan ideas, which will expose them to technology concepts that are typically not taught in the classroom.
What can my kids learn?
The technology workshops include a lot of fun and interesting topics. Kids can learn how to:
- Create a podcast: in one of our most popular workshops, students will learn how to create an (audio) podcast. (Novice workshop)
- Create a website using HTML and CSS: in this popular workshop, students will learn the elements of HTML and CSS and use them to edit a webpage template to create their own website. (Intermediate workshop)
- Design with Scratch: using Scratch, students will experiment with the basics of programming. (Intermediate workshop)
- Program with Python: using Python, students will learn the basics of programming including: simple data types, comparisons, if-statements, and loops. (Intermediate workshop)
- Design Mobile Apps: using MIT App Inventor, students will work in teams to design a mobile app. They will select an app idea, develop a prototype, and present their final product. (Intermediate workshop)
- Gaming with Unity 3D: using Unity 3D, students will learn about the history of video games and build their own game. (Expert workshop)
- Learn the basics of Ruby on Rails: Students will learn the basics of Ruby on Rails web development and create their own app. (Expert workshop)
TechGirlz has also offered other workshops using Mindstorms, LittleBits, MakeyMakey kits, and Raspberry Pi. These workshops require that you purchase or have access to these materials.
What if I’m not a tech expert?
That’s OK. Perhaps you can prepare to lead a novice-level workshop such as “Create a Podcast”. Or partner with your IT department.
How do I get started?
- Visit techgirlz.org and view our selection of TechShopz – pick your topic
- Find a presenter(s) who is available on April 24th
- Secure a location and time. Workshops are generally 2.5-3 hours.
- Have your employees sign up their children for the sessions. Even though these TechShopz are designed to engage girls, the material is not gender specific. We do encourage an age minimum of 11 and up.
|
OPCFW_CODE
|
etcd3 on Compose
etcd is a distributed key/value store with an emphasis on consistency. Using the RAFT consensus algorithm to coordinate all the nodes, queries to etcd are assured to get the correct answer. etcd version 3 uses gRPC for communications, leases for key expiry, transactions for ACID-like operations, and scalable watch monitoring. This emphasis on correctness has made etcd a leader in configuration management databases.
etcd v3 and v2
There are two major versions of etcd, v2 and v3, which have different APIs and different capabilities. The current Compose default is v3 and you are reading the documentation for that now. The now deprecated version, v2, is documented separately for clarity.
etcd for All is a general introduction to etcd on Compose.
Are you wondering what you'll get, or can get, with a Compose etcd deployment? Want to know what you'll need to do to manage it? Check out some of the implementation features and details down in etcd for Ops and Admins.
Just deployed etcd and want to get coding with it? Developing an application or want to try a new stack? Then see the etcd for Developers section for resources on how to connect from different languages, command line tools and more information to get you started.
etcd for All
When deployed on Compose, etcd comes with these standard Compose features.
- Automatically scaling server stack that scales RAM, CPU, and I/O as your etcd data grows.
- Daily, weekly, monthly, and on-demand backups.
- Metrics displayed in the Compose UI
- Deploy, manage, backup, and otherwise automate database tasks through the Compose API.
- Guaranteed resources per deployment.
- Daily logs available for download.
Compose deployments of etcd also come with a number of etcd specific features:
- Data Browser For etcd3.
- An optional add-on for real-time log access.
- Start with 256MB for $28.50 - as you grow each additional 256MB unit costs $19.50.
An etcd deployment starts with three etcd data members, each with 256MB memory and 256MB storage. Also included are two HAProxy portals to manage connections and high availability, at 64MB memory each.
etcd deployments may be vertically scaled with additional storage and memory added to each node. The HAProxy portals may also be scaled with more memory.
See Compose Datacenter Availability for current location availability.
Billing and Costs
Compose deployments are billed based on disk usage and your etcd deployment usage is measured hourly. It is then grouped into a single monthly billing cycle. This means that any scaling or add-on usage will be charged from when the new resource was provisioned; not just for the month. You can see your deployment's usage in the Overview panel, under Current Usage.
For Ops and Admins
High-availability and Failover Details
In an etcd cluster each member knows about every other member and relies on a distributed consensus protocol to determine the leader. The leader sends out information about its leader status at a set interval. The followers, if nothing has been heard from the leader, each have their own interval after which they will attempt to become leader. The two HAProxy portals will automatically handle incoming connections regardless of which member of the cluster is leader, and should one HAProxy go down, your application can fail over to the other portal.
etcd3 backups use the snapshot command on your running database cluster to back up your entire deployment. You can either download your backups for local use or restore them directly into a new Compose deployment. More information, details, and instructions for using your backups can be found on the Backups for etcd3 page.
General information on backup schedules and downloading your backups can be found on the Backups page.
High Resolution Scaling
With etcd, the autoscaling system samples the resource consumption of deployments sixty times faster (once a minute) than standard Compose autoscaling. This makes it extra-responsive to the demands a predominantly memory-based database can place on resources.
etcdctl is the command-line utility and is available through a local installation of etcd. It offers a full set of easy-to-use commands which map to the underlying etcd API.
Connecting to etcd
The Overview panel of the Compose UI provides the basic information you need to get connected to your databases. In the section Connection info you will find Credentials and Connection Strings for your application.
Our etcd deployments are SSL-enabled using Let's Encrypt certificates. More information on getting your applications connected is on the Connecting to etcd3 page.
Features for your etcd Application
Keys are handled in a flat namespace and treated as strings. Directories can be emulated by naming keys with a directory structure and then using
--prefix. The prefix option matches any key that starts with a particular string. You can query and watch keys that start with the value you enter.
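As a toy illustration (a plain Python dict standing in for the store, not a real etcd client), prefix matching over a flat keyspace behaves like a directory listing:

```python
# Flat keyspace: "directories" are just a naming convention in the keys.
store = {
    "/config/db/host": "10.0.0.1",
    "/config/db/port": "5432",
    "/config/cache/ttl": "60",
}

def get_prefix(kv, prefix):
    """Mimic `etcdctl get <prefix> --prefix`: return every pair whose key
    starts with the given string."""
    return {k: v for k, v in kv.items() if k.startswith(prefix)}

print(get_prefix(store, "/config/db/"))
# {'/config/db/host': '10.0.0.1', '/config/db/port': '5432'}
```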
Transactions:
etcd3 has the ability to group multiple operations into a single transaction using an If/Then/Else structure. The requested operations are made dependent on the existing contents of the key/value store before execution is allowed to occur. A key can be checked with a comparison or a series of comparisons as a single operation. If the comparison succeeds, the operations in the 'success requests' block are executed; if it fails, the operations in the 'failure requests' block are executed.
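The If/Then/Else shape can be modelled with a toy compare-then-apply function (plain Python against a dict, not the real gRPC transaction API):

```python
def txn(kv, compare_key, expected, success_ops, failure_ops):
    """Run one etcd3-style transaction against a dict: if
    kv[compare_key] == expected, apply success_ops, else failure_ops.
    Each op is a (key, value) put. Returns whether the comparison held."""
    succeeded = kv.get(compare_key) == expected
    for key, value in (success_ops if succeeded else failure_ops):
        kv[key] = value
    return succeeded

kv = {"leader": "node-1"}
# Only take over leadership if node-1 is still recorded as leader.
ok = txn(kv, "leader", "node-1", [("leader", "node-2")], [("error", "lost race")])
print(ok, kv["leader"])  # True node-2
```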
Watches:
Watch a key for any changes. Behind the scenes, etcd3 combines all watchers on a single, bidirectional gRPC stream. The stream delivers events tagged with a watcher’s registered ID. Multiple watch streams can share a connection instead of opening a new one for every watcher, keeping memory overhead low.
Leases and Time-To-Live:
Issue leases for temporary keys with TTL expiration. Behind the scenes, this model reduces keep-alive traffic when multiple keys are attached to the same lease. The keep-alive connections are combined in a lease’s single gRPC stream.
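A toy model of leases (driven by an explicit clock rather than real time; not the etcd implementation) shows why attaching many keys to one lease is cheap — one expiry governs them all:

```python
class LeaseKV:
    """Toy lease semantics: every key attached to a lease disappears
    once that lease's TTL has passed."""
    def __init__(self):
        self.kv = {}      # key -> (value, lease_id)
        self.leases = {}  # lease_id -> expiry time

    def grant(self, lease_id, ttl, now):
        self.leases[lease_id] = now + ttl

    def put(self, key, value, lease_id):
        self.kv[key] = (value, lease_id)

    def get(self, key, now):
        entry = self.kv.get(key)
        if entry is None:
            return None
        value, lease_id = entry
        if now >= self.leases[lease_id]:
            del self.kv[key]  # lease expired: the key is gone
            return None
        return value

s = LeaseKV()
s.grant("l1", ttl=10, now=0)
s.put("/lock/a", "holder", "l1")
print(s.get("/lock/a", now=5))   # holder
print(s.get("/lock/a", now=12))  # None (lease expired)
```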
Additional Resources and Related Articles
The full list of documentation for etcd is in the sidebar, in addition to all things Compose.
For more than just help docs, check out Compose Articles and our curated collection of etcd-related topics for more how-to's and information on etcd on Compose.
Still Need Help?
If this article didn't solve things, summon a human and get some help!
Updated over 3 years ago
|
OPCFW_CODE
|
The name of the query motif, which is unique in the motif database file.
An alternate name for the query motif, which may be provided in the motif database file.
The width of the motif. No gaps are allowed in motifs supplied to MAST
as it only works for motifs of a fixed width.
The sequence that would achieve the best possible match score (and its
reverse complement for nucleotide motifs).
MAST computes the pairwise correlations between each pair of motifs.
The correlation between two motifs is the maximum sum of Pearson's
correlation coefficients for aligned columns divided by the width of
the shorter motif. The maximum is found by trying all alignments of the two motifs.
Motifs with correlations below 0.60 have little effect on
the accuracy of the E-values computed by MAST. Motifs with higher
correlations with other motifs should be removed from the query. You can
also request MAST to remove redundant motifs from its analysis
under Advanced options from the MAST web page,
or by specifying --remcorr
when running MAST on your own computer.
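As a toy transcription of that measure (motif columns given as frequency vectors; this is an illustration, not MAST's actual implementation):

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def motif_correlation(m1, m2):
    """Max over all alignments of the summed column correlations,
    divided by the width of the shorter motif."""
    if len(m1) > len(m2):
        m1, m2 = m2, m1
    w = len(m1)
    best = -math.inf
    for off in range(-(w - 1), len(m2)):  # slide the shorter motif
        s = 0.0
        for i, col in enumerate(m1):
            j = off + i
            if 0 <= j < len(m2):
                s += pearson(col, m2[j])
        best = max(best, s / w)
    return best

a = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]]
print(motif_correlation(a, a))  # ~1.0 for a motif against itself
```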
The name of the (FASTA) sequence database file.
The number of sequences in the database.
The number of letters in the sequence database.
The date of the last modification to the sequence database.
The name of a file of motifs ("motif database file") that contains the (MEME-formatted) motifs used in the search.
The date of the last modification to the motif database.
The name of the alphabet symbol.
The frequency of the alphabet symbol as defined by the background model.
The score for the match of a position in a sequence to a motif is
computed by summing the appropriate entry from each column of the
position-dependent scoring matrix that represents the motif. Sequences shorter than one or more of the motifs are skipped.
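A toy sketch of that scoring (columns represented as dicts from letter to log-odds score; the numbers are invented for illustration):

```python
def match_score(pssm, seq, pos):
    """Sum, over motif columns, the entry for the letter observed at the
    corresponding sequence position (position-dependent scoring)."""
    window = seq[pos:pos + len(pssm)]
    if len(window) < len(pssm):
        raise ValueError("sequence shorter than motif at this position")
    return sum(col[letter] for col, letter in zip(pssm, window))

# A width-2 motif over the DNA alphabet (illustrative scores only).
pssm = [{"A": 2.0, "C": -1.0, "G": -1.0, "T": -1.0},
        {"A": -1.0, "C": 1.5, "G": -0.5, "T": -1.0}]
print(match_score(pssm, "ACGT", 0))  # 3.5 (A in column 1, C in column 2)
```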
The p-value of a motif match is the probability of a single random
subsequence of the length of the motif scoring
at least as well as the observed match.
The identifier of the sequence (from the FASTA sequence header line). This may be linked to search a sequence database for the sequence name.
The description appearing after the identifier of the sequence in the FASTA header line.
This diagram shows the normal spacing of the motifs specified to MAST.
MAST will calculate larger p-values for sites that diverge from the order and spacing in the diagram.
If strands were scored separately then there will be two
E-values for the sequence separated by a slash (/). The score for the
provided sequence will be first and the score for the reverse-complement
will be second.
The block diagram shows the best non-overlapping tiling of motif matches on the sequence.
These motif matches are the ones used by MAST to compute the E-value for the sequence.
Hovering the mouse cursor over a motif match causes the display of the motif name,
position p-value of the match and other details in the hovering text.
The length of the line shows the length of a sequence relative to all the other sequences.
A block is shown where the position p-value
of a motif is less (more significant) than the significance threshold,
which is 0.0001 by default.
If a significant motif match (as specified above) overlaps other
significant motif matches, then it is only displayed as a block if its
position p-value is less (more significant) than the
product of the position p-values of the significant matches that it overlaps.
The position of a block shows where a motif has matched the sequence.
Complementable alphabets (like DNA) only: Blocks displayed above the line are a match on the given sequence, whereas blocks
displayed below the line are matches to the reverse-complement of the given sequence.
Complementable alphabets (like DNA) only: When strands are scored separately, then blocks may overlap on opposing strands.
The width of a block shows the width of the motif relative to the length of the sequence.
The colour and border of a block identifies the matching motif as in the legend.
Note: You can change the color of a motif by clicking on the motif in the legend.
The height of a block gives an indication of the significance of the match as
taller blocks are more significant. The height is calculated to be proportional
to the negative logarithm of the position p-value,
truncated at the height for a p-value of 1e-10.
If strands were scored separately with a complementable alphabet then
there will be two p-values for the sequence separated by a slash (/).
The score for the given sequence will be first and the score for the
reverse-complement will be second.
This indicates the offset used for translation of the DNA.
The annotated sequence shows a portion of the sequence with the
matching motif sequences displayed above.
The displayed portion of the sequence can be modified by sliding the
two buttons below the sequence block diagram so that the portion you want
to see is between the two needles attached to the buttons. By default the
two buttons move together but you can drag one individually by holding
shift before you start the drag.
If the strands were scored separately then overlaps in motif sites may
occur so you can choose to display only one strand at a time. This is done
by selecting "Matches on given strand" or "Matches on opposite strand"
from the drop-down list.
The sequence p-value of a score is defined as the probability of a
random sequence of the same length containing some match with as good or
better a score.
The combined p-value of a sequence measures the strength of the match
of the sequence to all the motifs and is calculated by
finding the score of the single best match of each motif to the sequence
(best matches may overlap), and then combining the p-values of those scores.
The E-value of a sequence is the expected number of sequences in a
random database of the same size that would match the motifs as well as
the sequence does and is equal to the combined p-value of
the sequence times the number of sequences in the database.
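The combining step uses the formula from the Bailey & Gribskov paper cited below: for k independent p-values whose product is p, the combined p-value is p · Σᵢ₌₀^{k−1} (−ln p)ⁱ / i!. A small sketch (my own transcription of that formula, not MAST's code):

```python
import math

def qfast(pvalues):
    """Combined p-value of k independent p-values, per Bailey & Gribskov
    (1998): p * sum_{i=0}^{k-1} (-ln p)^i / i!, where p is the product."""
    k = len(pvalues)
    prod = 1.0
    for p in pvalues:
        prod *= p
    if prod == 0.0:
        return 0.0
    t = -math.log(prod)
    term, total = 1.0, 1.0
    for i in range(1, k):       # accumulate the truncated series
        term *= t / i
        total += term
    return prod * total

print(qfast([0.1, 0.1]))  # larger than the bare product 0.01
```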
Change the portion of annotated sequence by dragging the buttons; hold shift to drag them individually.
If you use MAST in your research, please cite the following paper:
Timothy L. Bailey and Michael Gribskov,
"Combining evidence using p-values: application to sequence homology searches",
Bioinformatics, 14(1):48-54, 1998.
Each of the following 56 sequences has an E-value less than
The motif matches shown have a position p-value less than 0.0001. Hover the cursor over the sequence name to view more information about a sequence. Hover the cursor over a motif for more information about the match. Click on the arrow (↧) next to the E-value to see the sequence surrounding each match.
|
OPCFW_CODE
|
Facing problems using MacBook Pro M1 chip
I've been trying to install software like Homebrew, Flutter etc. through the terminal and I've been getting the same error codes:
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
Error: Fetching /opt/homebrew/Library/Taps/homebrew/homebrew-core failed!
fatal: invalid upstream 'origin/master'
Failed during: /opt/homebrew/bin/brew update --force --quiet
I'm using the new MacBook Pro and have been trying to find a fix for this issue since I got the device. I've contacted the developers of Homebrew, and they said it was a security issue; I asked Apple and they couldn't help. I've tried removing the failed install with:
rm -fr $(brew --repo homebrew/core)
However, this didn't fix my problem. I also tried 3 different networks, one of them being the 4G on my phone. I can't seem to get around this problem. When I try to install dependencies with Flutter I get the same errors, so it's not one install that's raising the problem.
Full command-line output from the Homebrew install:
HEAD is now at 8853fb6c1 Merge pull request #11145 from Bo98/json-tab-changed_files-fix
error: Not a valid ref: refs/remotes/origin/master
fatal: ambiguous argument 'refs/remotes/origin/master': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
Error: Fetching /opt/homebrew/Library/Taps/homebrew/homebrew-core failed!
fatal: invalid upstream 'origin/master'
Failed during: /opt/homebrew/bin/brew update --force --quiet
Has anyone faced this problem before, or know of a fix? Any help would be greatly appreciated.
can you try this - https://medium.com/swlh/issues-installing-homebrew-on-new-macbook-m1-silicon-heres-how-to-fix-it-8b63921c7290
When I first came across this error I found this article, but sadly it didn't work.
I found a couple of solutions to this problem. It boiled down to my ISP (Virgin UK), which was bottlenecking the install, probably because of a security issue or bug.
A solution that worked for me was removing the initial failed installation, using a VPN to change my connection, and installing it that way.
A second solution would be turning off Child Safe with your ISP.
These are the solutions I came across, the first one worked for me, and I have been told that the second one works too!
|
STACK_EXCHANGE
|
There are several aspects to this question that make it a poor question to ask on Software Engineering.SE, and some that also make it counter productive to have planned out for an interview.
The first aspect is that the question is too broad. The "How do you build Amazon" being asked here is akin to going to Home Improvement.SE and asking "How do you build a house?" There is no way the totality of that question can be answered in a meaningful way in this text box - the topic is far too broad.
To an extent, it is exactly that broadness that the interviewer is asking about. This question isn't one like "how many ways can you use static in Java" - which has an answer. The interviewer is coming to you like a business person would, with some question that you need to drill down into - to ask about and clarify.
However, as a question on a Q&A site, with you asking the question, you don't know all the questions and the scope of the question the interviewer has in mind. One interviewer may want you to drill down into the shopping cart side of Amazon, another may want to drill into the virtualization of AWS, and another may want to explore product recommendation or customer purchase prediction.
Amusingly stated in a comment:
Your "friend" should have closed the interviewer's question as "too broad." In other news, "design a big distributed system" is not a requirements specification. – Robert Harvey
No clearly defined problem
The Q&A format that Stack Exchange uses works best when there is a clearly defined problem. With the Amazon question, asking "how do I build and collect data for A/B testing - I've done XYZ, but I have trouble with this design for dealing with..." could be a reasonable question - it is one problem that can be asked and answered, and not to be overlooked found when someone else is searching for the same thing.
The asked, answered and found steps are key parts to how Stack Exchange works - remember that the question isn't just for you, its also for the next person asking the question.
Interviewing by rote
Next, it's counterproductive. Let's say you get asked how to implement Monopoly, have no idea, don't get the job, and go home to study Monopoly implementations. A week later, in another interview, you are asked how to implement Monopoly again and you rattle off the implementation that you've memorized (just like the "how many ways can you use static in Java" question asked just before). It is clear that rattling off Monopoly from rote doesn't mean you understand it, so the interviewer asks how you would implement Risk...
The key to this example, and why it's counterproductive to have a pre-constructed answer for this question, is that the interviewer is trying to find your boundaries. By going with a pre-thought-out answer, that boundary hasn't been determined. It's the process of thinking through these questions as part of the interview that the interviewer wants to see - and they will ask question after question until that process makes clear how you think about design.
And so, while there may be an answer that can fit in the space (you talk about the circular array for the board, the stacks for the cards, the rules engine for how the different cards behave, the rules for the game, etc.), by having a pre-thought-out answer you're going to be asked question after question until you hit one you don't know - and you've only wasted your time and that of the interviewer up until that point.
I will point out that if you study BSD Games/monop, which is written in classic C, you will likely have difficulty when the interviewer says "let's try that again in C#" or "let's change that design to anti-monopoly".
Where to ask about broad questions
One of the best places to ask such broad questions is Chat. The wide range of the question and understanding that it's an interview question can be better addressed in an interactive way rather than a question and answer format. The dialog between the person asking the question and the person answering it is an important part of the process of this type of question and understanding the design aspects of it.
"The interviewer didn't like my answer" question
Some interviewers ask questions with specific answers in mind. All answers other than the one they want are unsatisfactory to them - even if their answer is wrong.
The difficulty with such questions is that the only person who knows the answer the interviewer was looking for is the interviewer. They may not have liked your design because of some opinionated blog post they read a year ago, or because it's not cutting-edge enough, or because you didn't implement their favorite pattern.
Asking such a question here results in answers that are just speculation. We weren't the interviewer and we can only guess at what issues the person had with your design or solution.
|
OPCFW_CODE
|
This Prague travel guide by drone in 4K takes you to a city whose glory reaches the stars and to places where every step you take tells you a story of the past.
In this aerial video I would like to present Prague, the city where I was born and where I have spent most of my life. Prague's beauty is very hard to put into words: everywhere you look there's always something new to see, artists performing in the streets, and an atmosphere that carries you away. Everything is so old and beautiful; it is a marvel of architecture and culture.
In this video about Prague you can see:
The Charles Bridge, which spans the Vltava river and is adorned with many statues of saints, making a visit a unique experience.
Prague Castle, which has been a seat of power for kings of Bohemia, Holy Roman emperors, and presidents of Czechoslovakia. It is also the largest ancient castle in the world.
The Emmaus Monastery, which, like Charles Bridge, was founded by King Charles IV. This monastery is unique because it was the first place where ceremonies were held not in Latin but in the local language. The pope allowed this on the condition that it would remain the only monastery of its kind in the empire.
The Czech Prague National Theatre which is known as the alma mater of Czech opera, and as the national monument of Czech history and art. The National Theatre belongs to the most important Czech cultural institutions, with a rich artistic tradition, which helped to preserve and develop the most important features of the nation–the Czech language and a sense for a Czech musical and dramatic way of thinking.
Vyšehrad, a fortified castle surrounded by many legends. Vyšehrad was also the site of the first settlement that later became Prague.
The Statue of Kafka, a rotating head by the famous Czech artist David Černý. The 42 mobile tiers of the eleven-metre-tall sculpture align to form the face of the famous Czech writer Franz Kafka.
The Old Town Square which features buildings belonging to various architectural styles, including the Gothic Church of Our Lady before Týn, which has been the main church of this part of the city since the 14th century. Its characteristic towers are 80 m high. The Baroque St. Nicholas Church is another church located in the square. Prague Orloj is a medieval astronomical clock mounted on the Old Town Hall. The clock was first installed in 1410, making it the third-oldest astronomical clock in the world and the oldest one still in operation. The tower of the Old Town Hall is open to the public and offers panoramic views of the Old Town. The square’s center is home to a statue of religious reformer Jan Hus, who was burned at the stake for his beliefs in Constance. This led to the Hussite Wars. The statue known as the Jan Hus Memorial was erected on 6 July 1915 to mark the 500th anniversary of his death.
At every step you discover something new, and every corner tells a story of the past.
André Aciman (born 2 January 1951) is an American writer. Born and raised in Alexandria, Egypt, he is currently distinguished professor at the Graduate Center of City University of New York, where he teaches the history of literary theory and the works of Marcel Proust. Aciman previously taught creative writing at New York University and French literature at Princeton and Bard College.
In 2009, he was Visiting Distinguished Writer at Wesleyan University.
He is the author of several novels, including Call Me by Your Name (winner, in the Gay Fiction category, of the 2007 Lambda Literary Award and made into a film) and a 1995 memoir, Out of Egypt, which won a Whiting Award. Although best known for Call Me by Your Name, Aciman said in a 2019 interview that he considers his best book to be the novel Eight White Nights.
Terry Jones, Monty Python founder and Life of Brian director, dies aged 77. Jones, who was diagnosed with dementia in 2015, was the main directing force in Python's films, as well as a prolific creator of TV documentaries and children's books.
In 1969, Palin and Jones joined Cambridge graduates Cleese and Graham Chapman - along with Idle and animator Terry Gilliam - on a BBC comedy sketch show. Eventually broadcast under the title Monty Python's Flying Circus, it ran until 1974, with Jones largely writing with Palin (complementing Cleese's partnership with Chapman). Seemingly chaotic, frequently surreal and formally daring, Monty Python's Flying Circus would become one of the most influential shows in BBC history, revolutionising comedy formats, spawning scores of catchphrases, and inspiring an entire generation of comedians. Jones's fondness for female impersonation was a key feature of the show, as was his erudite writing.
Terra Mundo is a truly unique and luxurious dining experience set over three courses inspired by three distinctly earthly environments of forest, fire and ocean. Exquisite pairings of delicious food and stunning drinks are enhanced through musical soundscapes and 360-degree immersive projections bringing you the tastes, sights and sounds of Terra Mundo.
A delicious welcome drink
Three courses of fine dining
Perfectly-paired drinks for each course
Tantalising appetisers and amuse bouche
A one-hour show of full visuals and musical soundscapes
Directed by: Edward Lovelace and James Hall (D.A.R.Y.L.) Production Company – Pulse Films
Director of Photography: Ben Fordesman
Produced by: VOLVO
The story of an ornithologist whose remarkable work is safeguarding the future of not just birds but reptiles, mammals and one day perhaps even humans.
When scientists declared the Mauritius Kestrel beyond salvation, one young biology graduate refused to let it become yet another entry in the archive of obsolete species. THE BIRDMAN was aired on Sky Atlantic on January 20th 2020.
Introducing the LivingHome AD1: The Versatile Accessory Dwelling Unit (ADU). This one bedroom, one bath ADU is designed to provide affordable, sustainable rental units or family housing on existing single family lots. Finish options include three packages for interior and three for exterior, giving owners a total of nine standard configurations to choose from.
Here are the latest photographs of the almost complete first Romotow. We have kept true to concept with this revolutionary luxury travel trailer. The exterior images showcase the timeless styling, carbon composite construction, huge glazed windows and of course the patented covered deck space. Below you can see the beautiful interior styling with teak joinery, acrylic counters and luxury detailing.
Cruise, the self-driving subsidiary of General Motors, revealed its first vehicle to operate without a human driver, the Cruise Origin. The vehicle, which lacks a steering wheel and pedals, is designed to be more spacious and passenger-friendly than typical self-driving cars. Cruise says the electric vehicle will be deployed as part of a ride-hailing service, but declined to say when that might be.
|
OPCFW_CODE
|
Network Situational Awareness with d00gle
Dug Song <dugsong@monkey.org>

Background
Time to update dsniff!
• Suite of traffic interception tools for penetration testing
• Last public release almost exactly 4 years ago
• dsniff's ARP/DNS and SSH/SSL man-in-the-middle techniques to intercept switched, encrypted traffic are quite common now
• Interesting traffic analysis tools are still rare
• Total Information Awareness, CALEA: why should the government have all the fun?
• dsniff becomes d00gle...

Environment
• Vulnerability-aware Internet perimeter: client-side exploits, VPN clients, worms / viruses, wardriving
• Little / no access control / encryption internally: internal firewalls / IPSs cannot disrupt business processes
• Unpatched production systems: legacy software, heterogeneous hardware, rare change management windows for non-critical upgrades
• Limited visibility: little / no instrumentation for measurement / monitoring

Client Attack
• Something to do at cafes, airports, hotels
• Identify interesting users to target: corporate VPN users on vulnerable hosts; unsophisticated, unencrypted users
• Standard MITM, TCP injection, protocol downgrade, and client-side attacks apply
• Leverage into an attack on the home / corporate network

Network Attack!
• What is the organizational reporting structure?
• What are the passwords for this user? For this router / switch?
• What does this user have access to?
• Where are the shared public resources (fileservers, intranet webservers, login servers), and what are they running?
• Where are the remote loghosts?
• Has anyone detected the intrusion?

Our Goals
• Intelligence, Surveillance, Reconnaissance
• Extract as much information as we can passively
• Assemble it into a coherent relational database
• Perform data correlation and analysis in real time
• Support interesting queries and visualization of the data
• Enable rapid prototyping of new traffic analysis tools
• Maintain dsniff's tool-oriented modularity
• Share the code (GPL) to encourage experimentation

Data Collected
• Login / authentication information
• Phone numbers / calls
• E-mail messages
• Instant messages
• WWW usage
• Connection information
• Host inventory: IP, MAC address, hostname / DHCP name, OS version, open ports / services / applications
• Interactive / encrypted sessions

Why Python?
• C extension modules for performance-critical code
• Portability, maintainability, modularity
• Easy to learn, but still powerful
• Python versus C lines of code:
  - dsniff: 1700 vs 6800 LOC
  - p0f2: 519 vs 1798 LOC
  - vomit: 54 vs 1864 LOC
• Great for lazy programmers like me!

Architecture
• Simple Python modules + glue
• FlowDecode subclasses handle flow start, data, and end events
• Decodes can be registered dynamically with the flow engine for arbitrary Ethernet / IP / RPC program triggers
• Each module can be run as a separate command-line tool
• Can use any Python DB-API compliant database backend (default sqlite)
• UI is served by a simple standalone Python webserver
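The FlowDecode plug-in architecture described above can be sketched in a few lines of plain Python. All class, method, and variable names below are illustrative only, not d00gle's actual API:

```python
# Minimal sketch of a plug-in decode architecture: decodes register
# for a trigger (e.g. a TCP port) and receive flow events.

class FlowDecode:
    """Base class: subclasses handle flow start, data, and end events."""
    def flow_start(self, flow):
        pass
    def flow_data(self, flow, data):
        pass
    def flow_end(self, flow):
        pass

class FlowEngine:
    """Dispatches flow events to decodes registered for a trigger."""
    def __init__(self):
        self.decodes = {}  # trigger -> list of registered decodes

    def register(self, trigger, decode):
        self.decodes.setdefault(trigger, []).append(decode)

    def dispatch(self, trigger, flow, data):
        for decode in self.decodes.get(trigger, []):
            decode.flow_data(flow, data)

class UrlLogger(FlowDecode):
    """Toy decode: record request paths seen in HTTP GETs."""
    def __init__(self):
        self.urls = []
    def flow_data(self, flow, data):
        if data.startswith(b"GET "):
            self.urls.append(data.split()[1].decode())

engine = FlowEngine()
logger = UrlLogger()
engine.register(80, logger)            # dynamic registration by trigger
engine.dispatch(80, ("10.0.0.1", 80), b"GET /index.html HTTP/1.0")
```

Each decode could equally be wrapped in a small `main()` to run as a standalone command-line tool, matching the modularity point above.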
*snarf
• authsnarf: password sniffer for AIM, Citrix ICA, CVS, FTP, Cisco HSRP, HTTP, IMAP, IRC, LDAP, Meeting Maker, NFS, Napster, NNTP, Oracle SQL*Net, OSPF, PC Anywhere, POP, Postgres, Half-Life, QuakeWorld (many games), RIP, rlogin, Cisco VoIP, Sybase and Microsoft SQL, Microsoft SMB, SMTP, SNMP, NAI Sniffer, SOCKS, Telnet, VRRP, X11, YP/NIS, various web login forms
• urlsnarf: record all visited URLs and browser versions
• mailsnarf: record all e-mail messages in SMTP and POP traffic
• msgsnarf: record all AIM, ICB, IRC, Jabber, MSN, Yahoo instant messages

vomit
• Voice Over Misconfigured Internet Telephones
• Original version by Niels Provos (firstname.lastname@example.org)
• Records all SIP / Cisco SCCP phone calls:
  - watches the control channel for call setup
  - intercepts the negotiated media channel, saving the voice data as a WAV file
• Rip offline to MP3 with appropriate ID3 tags

netics
• Original version by Marius Eriksen (email@example.com)
• Attempts to identify interactive, encrypted sessions on any protocol or port
• Interactivity heuristic: small client packet sizes; ratio of client/server segments; interpacket arrival time
• Encryption heuristic: Ueli Maurer's universal randomness test
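The interactivity heuristic above can be sketched as a simple classifier over observed segment sizes. The thresholds here are invented for illustration; they are not the ones netics uses:

```python
# Rough sketch of an interactivity heuristic: interactive sessions
# (e.g. typed shells) show small client segments and roughly balanced
# client/server turn-taking; bulk transfers do not.

def looks_interactive(client_sizes, server_sizes):
    if not client_sizes or not server_sizes:
        return False
    avg_client = sum(client_sizes) / len(client_sizes)   # small for keystrokes
    ratio = len(client_sizes) / len(server_sizes)        # turn-taking balance
    return avg_client < 64 and 0.5 <= ratio <= 2.0

# Keystroke-like traffic vs. a one-sided bulk transfer:
looks_interactive([2, 3, 2, 4], [40, 38, 41, 35])   # -> True
looks_interactive([1460], [1460] * 200)             # -> False
```

A real implementation would also weigh interpacket arrival times, as the slide notes.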
p0f
• Straight Python port of p0f v2 by Michal Zalewski
• Passive OS fingerprinting of IP endpoints based on TCP SYN and SYN/ACK parameters:
  - operating system and version
  - host uptime (TCP timestamp option)
  - distance (TTL inference)
  - link type (maximum segment size)

nmapv
• Passive application fingerprinting: service protocol; specific application name and version
• Simple hack of nmap's regex-based service response matching:
  - nmap version scan minus the scan: just match replies
  - some entries (e.g. SSL) need modification

Query Interface
• Google is smarter than me: ape their interface
• Query language is simple (text, wildcards, +/-), but more advanced queries are possible with search operators (e.g. "app:Apache*")
• Query engine maps Google-style queries to SQL
• Would like to support stored queries and a simple query history
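The Google-style-query-to-SQL mapping might look roughly like this. Operator names, column names, and the table schema are made up for illustration:

```python
# Toy mapping of Google-style queries (text terms, * wildcards, +/-
# prefixes, and "op:value" search operators) to a SQL WHERE clause.

OPERATORS = {"app": "application", "host": "hostname"}

def query_to_sql(query):
    clauses = []
    for term in query.split():
        negated = term.startswith("-")
        term = term.lstrip("+-")
        if ":" in term:
            op, value = term.split(":", 1)
            column = OPERATORS.get(op, op)
        else:
            column, value = "text", term
        # Translate the Google-style * wildcard to SQL's %
        clause = f"{column} LIKE '{value.replace('*', '%')}'"
        clauses.append(f"NOT {clause}" if negated else clause)
    return "SELECT * FROM records WHERE " + " AND ".join(clauses)

query_to_sql("app:Apache* -host:intranet* login")
# -> SELECT * FROM records WHERE application LIKE 'Apache%'
#    AND NOT hostname LIKE 'intranet%' AND text LIKE 'login'
```

Note that string interpolation like this is SQL-injection-prone; real code would bind values through DB-API parameters instead.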
Related Work
• Python fragroute: evade dsniff detection! :-)
• Arbor Networks Peakflow: scalable traffic monitoring, engineering, and behavioral analysis for service providers and enterprises

Future Work
• User / social network profiling
• Semantic analysis of conversation data
• Auto-focus
• Speech transcription for full-text VOIP search? :-)
• Other Big Brother stuff
• Contributions and derived work from users like you!

Conclusion
• Everything you do on a network is observable in some way
• What is your network saying about you? :-)

http://monkey.org/~dugsong/dpkt/
http://monkey.org/~dugsong/pypcap/
http://monkey.org/~dugsong/pyevent/
http://monkey.org/~dugsong/dsniff/
|
OPCFW_CODE
|
How can I extract data from a multipage PDF into a single CSV file using camelot python library?
I've got a PDF file consisting of 4 pages of data. The first 3 pages contain 3 tables with the same columns. After some research I've found a pretty good Python lib called camelot-py. Would it be possible to generate a single CSV file containing all the data from the PDF file (given that all tables have the same columns and data types), using camelot-py?
I implemented this first version below, inspired by this post. The problem is that it generates multiple CSV files, one per table found inside the PDF file, and I need a single CSV file containing all the data from the PDF pages. I would like to have just one file at the end of the process. Would it be possible using this camelot-py Python lib, maybe in conjunction with another one (like pandas)?
# First version
import camelot

def pdf2csv_single(pdf_filename, csv_filename):
    tables = camelot.read_pdf(pdf_filename)
    if tables:
        # If tables were found, export them to CSV (one file per table)
        tables.export(f'{csv_filename}', f='csv')
        print("Data successfully extracted to CSV")
    else:
        print("No table found")
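A possible direction (a sketch only, not tested against a real PDF): camelot's `read_pdf` accepts `pages='all'`, and each table in the returned list exposes its data as a pandas DataFrame through its `.df` attribute, so the single-file step is plain pandas:

```python
import pandas as pd

def merge_tables_to_csv(dataframes, csv_filename):
    # All per-page tables share the same columns, so a row-wise
    # concatenation yields one combined table, written as one CSV
    combined = pd.concat(dataframes, ignore_index=True)
    combined.to_csv(csv_filename, index=False, header=False)
    return combined

# With camelot (assumed installed):
#   tables = camelot.read_pdf(pdf_filename, pages='all')
#   merge_tables_to_csv([t.df for t in tables], csv_filename)
```

The merging helper is independent of camelot, so it can be unit-tested with plain DataFrames.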
One of the simpler ways to extract the text as tabular data is via Java Tabula.
Since we are working with a "graphics" format (PDF), it is most sensible to use heads-up graphical editors, just as you would use an image app for pixel manipulation or a graphical word processor for styled text.
PDF charted characters on the left; a visualisation of the extraction at the upper right; the character-separated-values result at the lower right.
If you want to run headless, then use pdftotext and inject, say, commas to replace the voids.
TABLE 1 - RANDOM DIGITS
11164 36318 75061 37674 26320 75100 10431 20418 19228 91792
21215 91791 76831 58678 87054 31687 93205 43685 19732 08468
10438 44482 66558 37649 08882 90870 12462 41810 01806 02977
36792 26236 33266 66583 60881 97395 20461 36742 02852 50564
73944 04773 12032 51414 82384 38370 00249 80709 72605 67497
49563 12872 14063 93104 78483 72717 68714 18048 25005 04151
64208 48237 41701 73117 33242 42314 83049 21933 92813 04763
51486 72875 38605 29341 80749 80151 33835 52602 79147 08868
99756 26360 64516 17971 48478 09610 04638 17141 09227 10606
71325 55217 13015 72907 00431 45117 33827 92873 02953 85474
65285 97198 12138 53010 94601 15838 16805 61004 43516 17020
17264 57327 38224 29301 31381 38109 34976 65692 98566 29550
95639 99754 31199 92558 68368 04985 51092 37780 40261 14479
61555 76404 86210 11808 12841 45147 97438 60022 12645 62000
78137 98768 04689 87130 79225 08153 84967 64539 79493 74917
TABLE 1 - RANDOM DIGITS
11164,36318,75061,37674,26320,75100,10431,20418,19228,91792
21215,91791,76831,58678,87054,31687,93205,43685,19732,08468
10438,44482,66558,37649,08882,90870,12462,41810,01806,02977
36792,26236,33266,66583,60881,97395,20461,36742,02852,50564
73944,04773,12032,51414,82384,38370,00249,80709,72605,67497
49563,12872,14063,93104,78483,72717,68714,18048,25005,04151
64208,48237,41701,73117,33242,42314,83049,21933,92813,04763
51486,72875,38605,29341,80749,80151,33835,52602,79147,08868
99756,26360,64516,17971,48478,09610,04638,17141,09227,10606
71325,55217,13015,72907,00431,45117,33827,92873,02953,85474
65285,97198,12138,53010,94601,15838,16805,61004,43516,17020
17264,57327,38224,29301,31381,38109,34976,65692,98566,29550
95639,99754,31199,92558,68368,04985,51092,37780,40261,14479
61555,76404,86210,11808,12841,45147,97438,60022,12645,62000
78137,98768,04689,87130,79225,08153,84967,64539,79493,74917
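The "inject commas to replace the voids" step shown by the before/after tables above can be done headlessly in a few lines (assuming purely whitespace-separated columns with no embedded spaces, as in pdftotext-style layout output):

```python
# Collapse runs of whitespace into single commas, one row per line.

def rows_to_csv(text):
    rows = [line.strip() for line in text.splitlines() if line.strip()]
    return "\n".join(",".join(row.split()) for row in rows)

rows_to_csv("11164 36318 75061\n21215 91791 76831")
# -> '11164,36318,75061\n21215,91791,76831'
```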
In fact Tabula is a very good library; it was the first one I found and tried to use. But it requires you to install Java, and I needed a fast solution to an emergency. Additionally, I'm not as fluent in Java as I am in Python, so I tried to find a lib in Python. I was searching for a tool to use in conjunction with Jupyter notebooks, because this is just one part of a larger process of data cleaning and standardization. But your suggestions were very useful and informative. Does it have a Docker image?
I've found this Docker specification for Tabula https://github.com/asnelling/tabula-docker.
|
STACK_EXCHANGE
|
Ontology is a word that comes from philosophy, but many scientific fields have borrowed ontology and established from it their first assumptions about the reality they study. The same is true for software meant to do anything in management and, more broadly, in organizational reality. Today I will describe, based on the literature, why ontology is so important.
An ontology is a formal, predetermined description of phenomena in a given slice of reality, whose characteristics are describable by certain variables or parameters. However, many different definitions of ontology can be found in the literature. For example, an ontology defines linguistic elements belonging to established concepts in order to construct knowledge. It is also assumed that an ontology is an information system that contains the names of concepts in order to describe selected fragments of reality, along with the adopted meaning assumptions. W. V. O. Quine used to say that millennia of ontological inquiry can be encapsulated in three words: "What is there?". It must be admitted that this definition, although expressed as a question, is quite suggestive.
If you’re interested in why ontology is so important in our lives, see this great overview of the importance of ontology in philosophy, which is really the science of what exists in the universe and of the interpretation of that universe:
Ontology in the field of software design can be defined as "the set of activities that deal with the ontology development process, the ontology lifecycle, and the methodologies, tools and languages for building ontologies". Ontologies in software engineering offer a formal representation of knowledge. They are created to use a common vocabulary in a specific domain with the goal of sharing information through concepts and the relationships between these concepts. The most important motivation for building ontologies in software engineering is to share a common understanding of information structure among application users and to enable them to reuse this knowledge.
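As a toy illustration of "concepts and the relationships between these concepts", an ontology of is-a relations can be modeled and queried in a few lines. All concept names below are invented for illustration:

```python
# Minimal ontology sketch: concepts linked by "is-a" relations,
# with a query for a concept's ancestors (its transitive parents).

class Ontology:
    def __init__(self):
        self.is_a = {}  # concept -> set of direct parent concepts

    def add(self, concept, parent=None):
        self.is_a.setdefault(concept, set())
        if parent is not None:
            self.is_a.setdefault(parent, set())
            self.is_a[concept].add(parent)

    def ancestors(self, concept):
        # Walk the is-a links transitively
        seen, stack = set(), list(self.is_a.get(concept, ()))
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(self.is_a.get(c, ()))
        return seen

onto = Ontology()
onto.add("Resource")
onto.add("HumanResource", parent="Resource")
onto.add("Manager", parent="HumanResource")
onto.ancestors("Manager")   # -> {'HumanResource', 'Resource'}
```

Real ontology languages (e.g. OWL) add many more relation types and constraints, but the shared-vocabulary idea is the same.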
I will now present research on the goals of using ontologies in software design, that is, in the case of intelligent systems such as the robot manager. The research showed that 72% of respondents expected the ontology to provide conceptual modeling and data integration. Slightly fewer, 65% of respondents, said that the purpose of ontologies in software design is to define knowledge base schemas and link data from different public knowledge bases. Sharing knowledge and providing common access to heterogeneous data were indicated by 56% of respondents, and 50% indicated ontology-based search as a goal of software ontology.
Ontology in software design is a conceptual and terminological description of shared domain-specific knowledge, which means making improvements in communication using the same system in terms of terminology and concepts. Ontologies are important parts of applications that support shared life, enabling analysis of high-performance datasets, data standardization and integration, search and discovery.
There is a view in the literature that regardless of the ontological assumptions made in a given scientific discipline (e.g., management science – author’s note), different objects of reality (organizational – author’s note) are understood differently by different researchers and within different research projects. They can be objective, independent of the cognizing subject, or they can be subjective (in the original "values" – author’s note), forming an inseparable bond with the subject. They can also be "quasi-objective" creations of the intellect, called conceptual objects and serving as instruments of cognition. Finally, they can be objects that are a mixture of all three approaches above.
In conclusion, I would like to emphasize that the ontology of reality, in our case organizational reality, provides a conceptual framework for representing, sharing and managing knowledge through a system of concepts, their hierarchy, the relationships assigned to them and the way they are semantically distinguished (El-Diraby, Lima, & Feis, 2005).
This is why ontological assumptions are so important in the concept of organizational size system, and nothing can be done without them if we want to build an artificial manager. Although I have already written several times about the ontological assumptions I built for this purpose, such as here: https://artificialmanagers.com/2023/02/20/what-is-the-world-of-an-artificial-manager-made-of-part-6-lets-combine-resources-and-processes/ I will devote several more posts to this topic in the future.
Chang, X., Terpenny, J., & Koelling, P. (2010). Reducing Errors in the Development, Maintenance and Utilisation of Ontologies. International Journal of Computer Integrated Manufacturing, 23(4), 341-352.
Hendler, J. (2001). Agents and the Semantic Web. IEEE Intelligent Systems, 16, 30-37.
Corcho, O., Fernandez-Lopez, M., & Gomez-Perez, A. (2003). Methodologies Tools and Languages for Building Ontologies. Data and Knowledge Engineering, 46, 41-64.
Brink, C., & Rewitzky, I. (2002). Three Dual Ontologies. Journal of Philosophical Logic, 31(6), 543-468.
Cakula, S., & Salem, A.-B. M. (2013). E-learning developing using ontological engineering, WSEAS Transactions on. Information Science and Application, 10(1), 14-25.
Gayathri, R., & Uma, V. (2018). Ontology based knowledge representation technique, domain modeling languages and planners for robotic path planning: A survey. ICT Express, 3(2), 69-74.
Warren, P, Mulholland, P., Collins, T., & Motta, E. (2014). Using ontologies: understanding the user experience. In: Knowledge Engineering and Knowledge Management (pp. 579-590), Lecture Notes in Computer Science, Springer.
Fonseca, V. S., Barcellos, M. P., & Falbo, R. (2017). An ontology-based approach for integrating tools supporting the software measurement process. Science of Computer Programming, 135, 20-44.
Fraga, A. L, Vegetti, M., & Leone, H. P. (2020). Ontology-based solutions for interoperability among product lifecycle management systems: A systematic literature review. Journal of Industrial Information Integration, 20, 100176.
Laudan, L. (1984). Science and Values: The Aims of Science and Their Role in Scientific Debate. Berkeley: University of California Press.
Ghenea, S. V. (2015). On Facts and Values. Scientific Journal of Humanistic Studies, 7(12), 11-14.
El-Diraby, T. A., Lima, C., & Feis, B. (2005). Domain Taxonomy for Construction Concepts – Toward a Formal Ontology for Construction Knowledge. Journal of Computing in Civil Engineering, 19(4), 394-411.
|
OPCFW_CODE
|
Callback for receiving packets and sending packets directly (TZ-93)
I'd like to be able to use the ESP32-C6 as a coordinator for an already existing Zigbee device which implements only the bare minimum of the Zigbee protocol.
These devices send out custom packets, how can I register a callback that gets called when a packet is received?
How can I send a custom Zigbee packet?
If such a feature is not available yet I request you to add support for directly receiving and sending Zigbee packets.
@renzenicolai What kind of custom packets would you like to send and receive? Could you explain a little bit more? If you are talking about a custom ZCL command with a customized cluster (0xfc00-0xffff) and command ID, you could refer to esp_zb_zcl_custom_cluster_cmd_req and esp_zb_add_custom_cluster_command_cb.
Thank you for the quick response! I've captured a packet as it appears over the air using a packet sniffer and Wireshark.
Frame 544: 64 bytes on wire (512 bits), 64 bytes captured (512 bits) on interface -, id 0
IEEE 802.15.4 Data, Dst: 02:38:a5:76:3b:1e:ff:ff, Src: TexasIns_00:14:d9:49:35
    Frame Control Field: 0xcc41, Frame Type: Data, PAN ID Compression, Destination Addressing Mode: Long/64-bit, Frame Version: IEEE Std 802.15.4-2003, Source Addressing Mode: Long/64-bit
        .... .... .... .001 = Frame Type: Data (0x1)
        .... .... .... 0... = Security Enabled: False
        .... .... ...0 .... = Frame Pending: False
        .... .... ..0. .... = Acknowledge Request: False
        .... .... .1.. .... = PAN ID Compression: True
        .... .... 0... .... = Reserved: False
        .... ...0 .... .... = Sequence Number Suppression: False
        .... ..0. .... .... = Information Elements Present: False
        .... 11.. .... .... = Destination Addressing Mode: Long/64-bit (0x3)
        ..00 .... .... .... = Frame Version: IEEE Std 802.15.4-2003 (0)
        11.. .... .... .... = Source Addressing Mode: Long/64-bit (0x3)
    Sequence Number: 98
    Destination PAN: 0x4447
    Destination: 02:38:a5:76:3b:1e:ff:ff (02:38:a5:76:3b:1e:ff:ff)
    Extended Source: TexasIns_00:14:d9:49:35 (00:12:4b:00:14:d9:49:35)
    FCS: 0xde49 (Correct)
Data (41 bytes)
    Data:<PHONE_NUMBER>7d18c61df6373f445c4e80881848f5a36b587de00a4f69f6bee21661f67992…
    [Length: 41]
The data contained in this packet is an encrypted custom protocol. I'd like to send and receive the raw data contained in these packets from my application running on an ESP32-C6.
How can I send and receive these custom 802.15.4 packets?
For completeness here is a hex dump of a full packet:
0000  41 cc 62 47 44 ff ff 1e 3b 76 a5 38 02 35 49 d9
0010  14 00 4b 12 00 55 23 31 83 08 7d 18 c6 1d f6 37
0020  3f 44 5c 4e 80 88 18 48 f5 a3 6b 58 7d e0 0a 4f
0030  69 f6 be e2 16 61 f6 79 92 be 30 df 21 64 49 de
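For reference, the Frame Control Field shown in the dissection above can be decoded with a few bit masks (field layout per IEEE 802.15.4; this is a host-side sketch, independent of the ESP-IDF APIs):

```python
# Decoding the 802.15.4 Frame Control Field from the capture above.
# The FCF is transmitted little-endian, so the first two bytes
# "41 cc" form the value 0xcc41.

def parse_fcf(fcf):
    return {
        "frame_type":         fcf & 0x7,          # bits 0-2: 0x1 = data
        "security_enabled":   bool(fcf >> 3 & 1),
        "frame_pending":      bool(fcf >> 4 & 1),
        "ack_request":        bool(fcf >> 5 & 1),
        "pan_id_compression": bool(fcf >> 6 & 1),
        "dst_addr_mode":      fcf >> 10 & 0x3,    # 0x3 = long/64-bit
        "frame_version":      fcf >> 12 & 0x3,    # 0 = IEEE 802.15.4-2003
        "src_addr_mode":      fcf >> 14 & 0x3,
    }

fcf = int.from_bytes(bytes([0x41, 0xcc]), "little")   # 0xcc41
parse_fcf(fcf)
# Matches the Wireshark dissection: data frame, PAN ID compression,
# long/64-bit source and destination addressing, frame version 2003.
```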
Our esp-zigbee-sdk doesn't provide sending / receiving raw 802.15.4 packets. However, you could refer to the APIs from the 802.15.4 component and search for esp_ieee802154_receive_done and esp_ieee802154_transmit. For an example, you could refer to the ieee802154 command example.
Thank you for explaining, I've gotten it to work using the esp_ieee802154 component.
(And for other people finding this thread via google, have a look at https://github.com/badgeteam/esp32c6-firmware-esl-station for my solution)
|
GITHUB_ARCHIVE
|
This is to let you know that the deadline for paper submission <https://swisstext-and-konvens-2020.org/call-for-papers/> for the 5th Swiss Text Analytics Conference (SwissText) & 16th Conference on Natural Language Processing (KONVENS), which takes place on June 23-25, 2020 in Zurich, Switzerland, has been extended to March 15, 2020 (23:59 CEST).
Registration for the conference is open at https://swisstext-and-konvens-2020.org/registration/.
Best regards, the SwissText + KONVENS 2020 Organising Committee
Call for Papers
SwissText + KONVENS 2020 will feature the following tracks: Applied Track, with a strong focus on industry and applied research. This track constitutes the continuation of the "Swiss Track" from previous SwissText editions. Scientific Track, with technical research papers from the international scientific community. Demo Track, which provides a platform for companies and universities to showcase their interactive NLP solutions.
Conference website: https://swisstext-and-konvens-2020.org/
Applied Track The goal of the Applied Track is to bring together experts from industry and academia. For this track, we are looking for presentations with a strong focus on practical applicability.
Presentations can be:
Showcases
Applications (if you have a live demonstration, please submit to the Demo Track instead)
Resources
Applied research results
Any other contribution that is interesting to the audience
For this track, a short abstract (up to 1500 characters) has to be submitted, along with a CV of the presenter (text format, no tables — max. 500 characters) and the intended audience (e.g. developers, decision makers, project managers, max. 300 characters). All data is entered online via EasyChair: https://easychair.org/conferences/?conf=swisstextkonvens2020
Note: If you intend to publish a full scientific paper, please submit it to the Scientific Track.
Scientific Track In this track, we welcome technical papers dealing with natural language processing, computational linguistics, and machine learning/data science with a focus on text analytics.
The 2020 special theme is: Multilingual NLP
While previous KONVENS editions have traditionally focused on NLP for German, given the multilingual setting of Switzerland (German, French, Italian, Romansh, Swiss German), we encourage NLP researchers working on these languages (plus English as supplement) in and outside of Switzerland to submit their work.
We welcome the following two types of contributions:
Long papers (8 pages plus references): substantial results
Short papers (4 pages plus references): progress reports and similar
Submission format:
Please use the ACL LaTeX template: http://www.acl2019.org/medias/340-acl2019-latex.zip
Language: English
Anonymization: The review process will be double blind; we ask authors to anonymize their submission reasonably. Instead of e.g. "We previously showed (Smith, 1991) ..." write "Smith previously showed (Smith, 1991) …"
Manuscripts must describe original work that has neither been published before nor is currently under review elsewhere.
Submission will be made through EasyChair: https://easychair.org/conferences/?conf=swisstextkonvens2020
Demo Track The conference features a dedicated track for demonstrations of available NLP solutions (open source or proprietary). The goal is to give developers, companies, and researchers a platform to showcase their products and tools, and for the audience to get an overview of production-ready NLP solutions. All accepted demos will be presented live during the demo exhibition at the conference. Please provide a short abstract of your demo (up to 1500 characters), a screenshot of your solution, and a description of the intended audience (e.g. developers, researchers, etc., max. 300 chars). All data is entered online via EasyChair: https://easychair.org/conferences/?conf=swisstextkonvens2020
Proceedings All accepted scientific papers will be published in the conference proceedings. For presentations in the Applied Track, the abstract will be included in the proceedings. There is no differentiation between oral or poster presentations in the proceedings. The proceedings will be published on CEUR-WS.org <http://ceur-ws.org/> (open access). To appear in the proceedings, each contribution has to be presented by at least one author at the conference.
Presentation Types Accepted submissions in the Applied Track and the Scientific Track will be presented orally or as posters, as determined by the program committee. All accepted Demos will be presented live during the demo exhibition at the conference. Presentations are in English and will be listed in the conference program.
Important Dates
Submission deadline: March 15, 2020 (23:59 CEST), extended from March 8, 2020
Author notification: April 19, 2020
Camera-ready version due: May 5, 2020
Conference: June 23-25, 2020
Organization Committee
Zurich University of Applied Sciences (Mark Cieliebak, Manuela Hürlimann, Don Tuggener, Fernando Benites, Jan Deriu)
University of Zurich (Martin Volk, Sarah Ebling, Debora Beuret)
Venue The conference will take place on the main campus of the University of Zurich. Contact
Please direct all questions regarding the submission process to submissions at swisstext-and-konvens-2020.org.
Manfred Vogel Noah Bubenhofer Tim Vor der Brück Fabio Rinaldi Roberto Mastropietro Jürgen Vogel Martin Jaggi Jürgen Spielberger Hatem Ghorbel Andrei Popescu-Belis Thoralf Mildenberger Don Tuggener Fernando Benites Egon Werlen Mark Cieliebak Maria Sokhn Thilo Stadelmann
Noëmi Aepli Adrien Barbaresi Fernando Benites Chris Biemann Marcel Bollmann Ernst Buchberger Pascual Cantos-Gómez Mark Cieliebak Simon Clematide Berthold Crysmann Ernesto William De Luca Stefanie Dipper Sarah Ebling Xavier Gómez Guinovart Ulrich Heid Serge Heiden Manuela Hürlimann Martin Jaggi Manfred Klenner Roman Klinger Valia Kordoni Brigitte Krenn Udo Kruschwitz Ekaterina Lapshinova-Koltunski Roberto Mastropietro Alexander Mehler Margot Mieskes Simonetta Montemagni Preslav Nakov Sebastian Pado Patrick Paroubek Johann Petrak Hannes Pirker Andrei Popescu-Belis Uwe Quasthoff Ines Rehbein Georg Rehm Fabio Rinaldi Sophie Rosset Paolo Rosso Tanja Samardzic Felix Sasaki Yves Scherrer Helmut Schmid Gerold Schneider Sabine Schulte Im Walde Roland Schäfer Rico Sennrich Manfred Stede Kurt Stockinger Ludovic Tanguy Don Tuggener Manfred Vogel Martin Volk Tim Vor der Brück Egon Werlen Magdalena Wolska Torsten Zesch Heike Zinsmeister
|
OPCFW_CODE
|
Support Sonoff L1 LED Strip
It would be awesome to have support for the Sonoff LED strip. I've sorted the list below by the most requested functions.
On/Off
Change color
Brightness
Mode
Colorful
Sync to music
Colorful Gradient
...
@ebattulga I don't have this one. Could you send some debug logs?
From another Sonoff-integration that I tried.
Mode 1: Colorful
Mode 2: Colorful gradient
Mode 3: Colorful Breath
Mode 4: DIY Gradient
Mode 5: DIY Pulse
Mode 6: DIY Breath
Mode 7: DIY Strobe
Mode 8: RGB Gradient
Mode 9: RGB Pulse
Mode 10: RGB Breath
Mode 11: RGB Strobe
Mode 12: Sync to music
sonoff.L1.txt
sonoff-debug.txt
Hope it can help!
@AcidSleeper I see firmware version 2.7.0.
Devices of this version should work in local mode only with the Internet turned off.
Have you blocked your device Internet?
Is there a newer firmware version?
No newer version in eWeLink app.
After installing the integration in Home Assistant via HACS and putting the minimum requirements in configuration.yaml, the device doesn't appear.
The .sonoff.json file that gets created indicates it downloads the Sonoff L1 from the server, but the device doesn't appear in Home Assistant.
@AcidSleeper can you:
enable debug log
block Internet
restart device and HA
switch strip through remote
check your logs
Code for debug you want?
Block internet for Sonoff L1?
Switch strip? On/Off?
Enable debug logs:

```yaml
logger:
  default: info
  logs:
    custom_components.sonoff: debug
```
2. Yes. Block Internet for Sonoff L1?
3. Yes. On/Off
Nothing happens!
HA doesn't produce a log!
@AcidSleeper do you have telegram? Can you write to me there?
Discord?
This is whats get created in .sonoff.json
{"1000990af7":{"settings":{"opsNotify":0,"opsHistory":1,"alarmNotify":1,"wxAlarmNotify":0,"wxOpsNotify":0,"wxDoorbellNotify":0},"group":"","online":false,"shareUsersInfo":[],"groups":[],"devGroups":[],"_id":"5d85d8ec8ff123e94fe13da9","name":"Sonoff Led1","type":"10","deviceid":"1000990af7","apikey":"REDACTED","extra":{"extra":{"uiid":59,"description":"20190703002","brandId":"5c4c1aee3a7d24c7100be054","apmac":"REDACTED","mac":"REDACTED","ui":"律动灯带","modelInfo":"5c700fabcc248c47441fd241","model":"PSF-BTA-GL","manufacturer":"深圳松诺技术有限公司","staMac":"REDACTED"},"_id":"5d1c529c2fb3e272ea638b13"},"createdAt":"2019-09-21T08:01:48.995Z","__v":0,"onlineTime":"2020-02-10T11:02:54.221Z","ip":"<IP_ADDRESS>","location":"","params":{"sledOnline":"on","rssi":-67,"fwVersion":"2.7.0","staMac":"REDACTED","switch":"on","light_type":1,"colorR":255,"colorG":255,"colorB":255,"bright":100,"mode":2,"speed":100,"sensitive":10},"offlineTime":"2020-02-15T15:20:44.294Z","deviceStatus":"","sharedTo":[],"devicekey":"REDACTED","deviceUrl":"https://eu-api.coolkit.cc/api/detail/5c700fabcc248c47441fd241_en.html","brandName":"SONOFF","showBrand":true,"brandLogoUrl":"https://eu-ota.coolkit.cc/logo/Q4RJzznuKEeDgFXbgyS9OClbyEDR7gXd.png","productModel":"L1","devConfig":{},"uiid":59}}
@AcidSleeper Discord: AlexxIT#0816
Here #31
Is there any new progress in the project? I have an L1 light strip. What do I need to do?
{"1000a2ac7d":{"settings":{"opsNotify":0,"opsHistory":1,"alarmNotify":1,"wxAlarmNotify":0,"wxOpsNotify":0,"wxDoorbellNotify":0,"appDoorbellNotify":1},"group":"","online":true,"shareUsersInfo":[],"groups":[],"devGroups":[],"_id":"5db6cd68eafb2786263c3515","name":"主卧灯带","type":"10","deviceid":"1000a2ac7d","apikey":"8c236797-5fa2-4854-ae0b-c8101facadd4","extra":{"extra":{"model":"PSF-BTA-GL","ui":"律动灯带","uiid":59,"description":"20190814007","manufacturer":"深圳市阳溢电子商务有限公司","mac":"d0:27:01:45:56:4a","apmac":"d0:27:01:45:56:4b","modelInfo":"5ba1dcf777db4c0c7324041a","brandId":"5a03b77c6ed24cfc5975198e","staMac":"D8:F1:5B:8C:24:70"},"_id":"5d54ff6613451dda060dd489"},"createdAt":"2020-02-01T10:40:11.679Z","__v":0,"onlineTime":"2020-04-12T07:33:45.952Z","ip":"<IP_ADDRESS>","location":"山东","params":{"sledOnline":"on","rssi":-69,"fwVersion":"2.7.0","staMac":"D8:F1:5B:8C:24:70","switch":"off","light_type":1,"colorR":255,"colorG":183,"colorB":0,"bright":57,"mode":1,"speed":30,"sensitive":2,"bindInfos":{"aligenie":["8c236797-5fa2-4854-ae0b-c8101facadd4_33d17855f659bbea8279ac74a01f24f196a929bb"],"miot":["8c236797-5fa2-4854-ae0b-c8101facadd4_ewelink-miot-v1"]},"timers":[],"partnerApikey":"55b441f6-f692-438d-82f0-3338da3baef8"},"offlineTime":"2020-04-12T07:33:21.599Z","deviceStatus":"","sharedTo":[],"devicekey":"4c9bf94b-a7a8-4368-be46-c1a04dbbd857","deviceUrl":"","brandName":"ABC","showBrand":false,"brandLogoUrl":"","productModel":"2.0-led","devConfig":{},"uiid":59}}
2020-04-15 22:40:00 DEBUG (MainThread) [custom_components.sonoff.utils] Init zeroconf singleton
2020-04-15 22:40:00 DEBUG (MainThread) [custom_components.sonoff] Update device config 1000a2ac7d
2020-04-15 22:40:00 DEBUG (MainThread) [custom_components.sonoff.utils] Generate zeroconf singleton
2020-04-15 22:40:02 DEBUG (SyncWorker_17) [custom_components.sonoff.utils] Use zeroconf singleton
Try new version #99
Supported
|
GITHUB_ARCHIVE
|
Unnecessary -var-create and -var-delete commands during a stack trace can cause noticeable pause each time the debuggee stops
Summary
Each time the debuggee stops, Visual Studio Code requests a stack trace of the threads that are expanded in the CALL STACK window. Turning on engineLogging shows that MIEngine implements a stack trace for a thread by issuing a -stack-list-arguments for the thread, and then issuing a -var-create and -var-delete command for every argument in every frame in the stack.
When there are many stack frames with many arguments, or if the debugger is behind a sluggish network connection, or if the user installs a hook that runs on every GDB prompt, the time taken to execute these commands can amount to a significant delay.
These -var-create and -var-delete commands ought to be unnecessary as explained below.
Reproduction
Put this program in a suitable file, say recurse.c:
static
void r(int a, int b, int c, int d, int e, int f, int g, int h, int i, int j)
{
if (a)
{
r(b, c, d, e, f, g, h, i, j, 0);
}
}
int
main(void)
{
r(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
}
Compile it with debugging information, for example gcc -g -o recurse recurse.c.
Open Visual Studio Code and create a new launch configuration using the "(gdb) launch" template, supplying recurse as the value for the "program" key, and turning on engineLogging:
"logging": {
"engineLogging": true
}
Select View ⟶ Run to switch to the run panel, select the new launch configuration in the RUN AND DEBUG dropdown, and press F5 to start debugging.
Click "Step Into" a few times until there are multiple frames on the stack.
Look at the DEBUG CONSOLE to see the GDB/MI commands sent by MIEngine. I took a copy of the commands issued by a single Step Over operation with ten frames on the stack and attached them here: stack-list-arguments.gz. Here's a summary of the commands issued by MIEngine to GDB:
| Command | Count |
| --- | --- |
| -stack-list-arguments | 2 |
| -stack-list-frames | 1 |
| -stack-list-variables | 1 |
| -stack-select-frame | 20 |
| -var-create | 210 |
| -var-delete | 210 |
If each of the -var-create and -var-delete commands takes only a millisecond due to network lag or a GDB before-prompt hook, then there's a 0.4-second pause each time the debuggee stops.
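The arithmetic behind that figure can be spelled out; the one-millisecond per-command latency is the assumption stated in the paragraph above:

```javascript
// 210 -var-create plus 210 -var-delete commands per stop (from the table),
// each assumed to cost ~1 ms of network lag or before-prompt hook time.
const varCreates = 210;
const varDeletes = 210;
const perCommandMs = 1; // assumption, not a measurement

const pauseMs = (varCreates + varDeletes) * perCommandMs;
console.log(pauseMs); // 420 — roughly a 0.4-second pause per stop
```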
Analysis
You can see from the log output that each -var-create is followed immediately by a -var-delete. This means that MIEngine is only issuing these commands to collect information about the variables (not to get a variable reference that it can later use). Why is this? The culprit is DebuggedProcess.GetParameterInfoOnly() which has a comment explaining what's happening:
https://github.com/microsoft/MIEngine/blob/c594534c5af5b9cc3ad81b10588f00649db1ce6f/src/MIDebugEngine/Engine.Impl/DebuggedProcess.cs#L1975-L1976
The problem is that GetParameterInfoOnly() was called with values = false and so it passed PrintValue.NoValues to the -stack-list-arguments call and then followed up with -var-create and -var-delete commands to get the types of the arguments.
The reason why values = false here is that AD7DebugSession.HandleStackTraceRequestAsync() did not set the FIF_FUNCNAME_ARGS_VALUES flag when calling AD7Thread.EnumFrameInfo(). And the reason why this flag is unset is that VSCode did not pass a format argument to the StackTrace request, and so we are in the "default format" case:
https://github.com/microsoft/MIEngine/blob/c594534c5af5b9cc3ad81b10588f00649db1ce6f/src/OpenDebugAD7/AD7DebugSession.cs#L1706-L1710
Solution ideas
The default format could include FIF_FUNCNAME_ARGS_VALUES: then GetParameterInfoOnly() would pass PrintValue.SimpleValues to the -stack-list-arguments call, which would return the types immediately and there would be no need for the subsequent -var-create and -var-delete calls.
GetParameterInfoOnly() could pass PrintValue.SimpleValues to the -stack-list-arguments call if either types or values were required, discarding the values information if it is not needed.
Software versions
Cpptools extension: 1.12.4
VSCode: 1.71.0
Commit: 784b0177c56c607789f9638da7b6bf3230d47a8c
Date: 2022-09-01T07:25:10.472Z
Electron: 19.0.12
Chromium: 102.0.5005.167
Node.js: 16.14.2
V8: <IP_ADDRESS>-electron.0
OS: Linux x64 5.4.0-124-generic
Sandboxed: No
It feels like the best solution here is to have a launch.json knob to control the default stack trace format (and ideally context menu controls in the call stack window to change it during a debug session).
Maybe—but first I think it is worth trying to see if GDB can be fixed. The underlying problem is that the GDB/MI command -stack-list-arguments --simple-values is supposed to print simple values like integers and not print compound values like structures, so that the whole stack can be printed in reasonable time. The problem noted in #673 is that --simple-values does not work as expected in C++: in particular, references to arbitrarily large arrays, structures, and unions are printed.
I think this is a bug, or at least an important oversight, in GDB, so I have filed bug 29554 together with a patch to GDB that updates --simple-values so that it does not print references to arrays, structures, and unions. If I can get this merged, then MIEngine can be updated to take advantage of the new behaviour.
(It might be that the GDB maintainers won't agree that this is a bug, but if so I can try again, this time leaving --simple-values unchanged and adding a new print-values option that has the desired behaviour.)
Even with that, I think we would still need a knob to turn off values for folks using older versions of GDB (unless we want to do version sniffing and turn it off by default).
GDB/MI has built-in feature-detection via the -list-features command—you'll see that in my patch on bug 29554 I added a feature for the new behaviour of --simple-values which MIEngine could consult.
Sounds reasonable. BTW: Due to GDB's licensing, I am not allowed to look at GDB source code.
|
GITHUB_ARCHIVE
|
Chips off the block – vacation reflections on innovation and Health IT
Regular followers of this blog may have noted a paucity of new posts. That’s because I’ve been on vacation the past two weeks. My wife and I took a much needed respite in the Canadian Rockies. We flew to Calgary where we rented a car and spent the next six days exploring Banff, Lake Louise, Jasper, and all points in between. Some of you may have come across a few of my favorite vacation photos on Facebook.
One of my favorite locations (actually there were many) was just south of Jasper along the Ice Fields Parkway in Jasper National Park. We took a rather nondescript cut-off from the main highway and ventured several miles into the mountains on a very narrow, yet well maintained two-lane road. After 20 minutes or so of nail-biting twists and turns, we made it above tree-line and finally to a parking lot already crowded with cars and a few flat-bed campers that had ignored the width restrictions for vehicles on the narrow mountain road.
After walking about a mile through a rocky valley of pure Alpine bliss, we came across the prize we had come all this way to see - the Mt. Edith Cavell and Angel Glaciers. Here, lucky visitors can see chunks of ice as large as automobiles falling thunderously into a pond below creating a mini tsunami across the frigid water. It is truly spectacular. For scale, find the human figure sitting by the pond in the middle of the panoramic photo (made with Windows Live photo editing) below.
So you may be wondering what any of this has to do with innovation and health IT? First of all, let me assure you that I wasn’t thinking about my job or health IT while visiting this monument to nature. It was only after I had returned home and read a blog post citing a recent speech by PayPal cofounder and investor Peter Thiel that what I had observed on vacation came to mind. During a presentation at an investor’s event held in Aspen, Mr. Thiel expressed an opinion that the pace of technological change in America is stagnating. He gave several examples to back up his claim. Whether you believe that or not, I’d say that the pace of change in Health IT is anything but stagnant. Yes, we still need devices and solutions that integrate better with clinical workflow. There is still work that needs to be done to make it easier for clinicians and other health workers to document their work using a wider variety of input options. We need even more intuitive and flexible user interfaces to clinical solutions, and even better tools to sort through, prioritize and reveal the most critical patient care data for analysis and action. Having said that, I’m seeing amazing progress in mobile health solutions, telehealth, and telemedicine. We have powerful cloud or premises-based, enterprise ready solutions to improve care-team communication, coordination, and collaboration. We have business intelligence and CRM tools that provide unprecedented ways to interpret and share clinical and financial data. On the consumer/patient side of things, we have solutions that help people track and manage their own health information and that of their family. And, the era of access to health information, “health gaming”, and health experts via interactive entertainment and information systems in our living rooms is just around the corner.
You see, Health IT is an additive, cumulative process of new technologies and solutions large and small, for the enterprise, small businesses, and in the home, that over time become stitched together to find their way into everyday use. Some of this change drops into the pond in very large blocks. Some change comes in small, unexpected splinters that hardly get noticed. But over time, they become melded together into something that is really quite beautiful and amazing- just like the deep clear waters of the lake that is forming below Mt. Edith Cavell in Jasper National Park.
That was my vacation revelation. Now, it’s back to work with all of my colleagues at Microsoft to help add more ice to the pond.
Bill Crounse, MD Senior Director, Worldwide Health Microsoft
Technorati Tags: Health IT,healthcare IT,information technology,telehealth,telemedicine,eHealth,mHealth,mobility,NUI,electronic medical records,Microsoft,Office 365,SharePoint,Xbox Live,Kinect,Lync. SQL Server,Amalga,HealthVault
|
OPCFW_CODE
|
I think I would read through the following guide: https://forums.opensuse.org/content/128-re-install-grub2-dvd-rescue.html and after the Windows install, reinstall grub2. Normally to “boot” openSUSE from a Logical Partition, Grub2 must be loaded into the MBR. If grub is actually loaded into the Extended partition, Windows could make a mess of things and may not even install. You can only install grub2 into the MBR or one of the four Primary Partitions and apprantly you can get openSUSE to “boot” from a logical partition if Grub2 is loaded into the Extended Partition which is not standard to Windows. The worst that can happen is you need to reinstall openSUSE, but only if Windows will install at all with your present setup. Just make sure you have a working openSUSE boot disk before you start. Have a look at my partition guide here: Formating and Partitioning Hard Disk During Install , backup any important data from your /home area if you can.
I have this configuration with Windows 7 and suse 12.3:
/dev/sda1 100 mb (for Windows 7) ntfs
/dev/sda2 110 gb (Windows 7) ntfs
/dev/sda3 310 gb (for data files) ntfs
/dev/sda4 45 gb extended partition
/dev/sda5 14 gb “/”
/dev/sda6 9 gb “swap”
/dev/sda7 22gb “/home”
grub is on hd0,4 (sda4) and there was a problem in Windows with hibernation. I solved it by setting the sda1 partition as active. After this, grub was gone and Windows started without dual boot. Now, if I re-install grub, will I have problems with hibernation in Windows again? I would like to have hibernation both on Windows and on SUSE. How should I do this? Is the grub location (sda4) correct in my case? If I must change it, where should I put it and how do I do that? If necessary, I can re-install SUSE without problem.
In this case “recover” grub is easy - just set partition 4 back as active.
and there was a problem in Windows about hibernation. … Now, if I will re-install the grub, will I have problems with hibernation in Windows again? I would like to have hibernation on Windows and on suse.
What problems did you have? Did it refuse to hibernate or did it fail to resume?
|
OPCFW_CODE
|
Nick Post mentioned in a comment that he's had problems trying to get images in the 2D panels that have an alpha channel to rotate. An astute observation! I had to try this out myself with an XML gauge to see why it doesn't work, and sure enough, this code path simply isn't implemented. In a debug build of flight sim (which I run most of the time) I get an assert which alerts me to this fact. Note that you can successfully shift an image element that has alpha; the code paths diverge significantly for shifts vs. rotations. Unfortunately, I don't see any way to work around this issue in FS9, but a bug has been logged for future reference.
I haven't tried using an image with alpha in a C-style gauge yet, but based on the code I see, it looks like it shouldn't work either.
Thanks for pointing this out, Nick.
you say you're running a debug build of flight sim most of the time... Can you give us a comparison of frame rates, between release and debug builds on your PC? (I'm always interested, as when I run debug builds of our app, it's not that much different)...
He, he, this blog could become a full time job.
1) An explanation of Formatted text would be really useful, again not covered in the SDK's. This is a very useful feature.
2) In the Spirit of St Louis panel, the periscope gauge has the line,
<CustomDraw Name="fs9view:view" X="84" Y="77" Zoom="1.0" Pitch="0" Bank="0" Heading="0" OffsetUp="1" OffsetForward="1"> The pitch, bank and heading do not work. Also the periscope view does not work in VC view.
3) I have written 3 tutorials on XML programming. I thought you might like to take a look. On the Tutorials page at FS2x.com. I intended to write more but it's finding the time. I'd like to write one on Formatted text, but I have a lot to learn here.
4) I'm not being funny, and please do not take this the wrong way, but I am interested in knowing why the SDK's are a bit 'thin' on information. We can achieve so much with FS, and it's a terrific piece of software, yet the SDK's only cover a proportion of the possibilities. It's also interesting that the default aircraft are somewhat 'simple', again considering the features available.
5) ...and finally (you'll be glad to hear) the effects SDK could be more forthcoming. Again, a feature set that can bring FS to life, yet fx files are still a mystery to most.
Hope I haven't taken up too much of your time,
Nice work on the tutorials, Nick. To be honest, I haven't read through all 5 million pages of .pdf's that you've written... yet. It looks like I'll have something to read on my bus ride home!
Yes, you are right about the pitch/bank/heading not being implemented. And yes, the VC version is a bit of a hoax, really, because we can't use the periscope from the 2D panel in the VC. But it does let you experience the limited view capabilities that Lindberg had to some extent at least!
Yes, the SDK is thin (downright anorexic, really) on the documentation of XML gauge development stuff. I'll see if I can't help rectify that in the future.
I have nothing to do with the effects system whatsoever, so, um, I'll pass the message along to those guys. :)
|
OPCFW_CODE
|
Known Riddle Pieces
Each item given to Wolly gives you a piece of the riddle for that particular item. Each player gets a certain piece of it, but different people get different pieces. These can be aligned by their overlap.
Sextant: - Hinged, plated, or spiraled, this armor was not made by man.
Sea worther: - Smaller than a stone, smaller than a pebble, yet I cover great expanses.
Toy Boat: - A vessel of potential, potential no more. What was once loved, we now abhor.
Old boot: - You give me to every new person you meet, but I still remain yours and sound so sweet.
Soaked candle: - Two old ones made the new one I hold. I move as if in ocean's fold.
Riddle Answers
Entering the corresponding answer to the riddle for the card shown on the Wolly website gives a new card. However, the item that gave the riddle does not match the card for which the answer must be given for that riddle.
| Riddle Card | Answer | Answer Card |
| --- | --- | --- |
First Card Puzzle
There are dots that are present in the background of each of the cards. They align along a grid of 9 columns and 26 rows. In each column, only one of the dots is present on all cards. The row positions of these dots, starting from the top, are 4, 9, 19, 3, 5, 19, 19, 21, and 13. Using 1 = A, 2 = B, and so on, this spells DISCESSUM (Latin for "departure"). Giving Wolly the answer "DISCESSUM" reveals another page with cards for the second set of cards depicting the answers to the riddles for the first set.
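The letter decoding described above can be checked mechanically. A short sketch in plain JavaScript, using the row positions listed in the paragraph:

```javascript
// Row positions of the shared dots, one per column, top to bottom.
const rows = [4, 9, 19, 3, 5, 19, 19, 21, 13];

// Map 1 → A, 2 → B, ... ("A" has char code 65, so offset by 64).
const word = rows.map((n) => String.fromCharCode(64 + n)).join("");

console.log(word); // DISCESSUM
```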
Second Card Puzzle
The second set of cards has words in the background, as well as two symbols: a seahorse and an anchor. Aligning all the cards on top of each other by the seahorse and anchor symbols, a set of words line up to spell out another riddle:
With my shell on the bottom, I stood so tall. 'Til my cradle compelled me to obstinate goal. Now my lives are all gone, my corpse among sand. My first name's my being; And second my end.
Final Answer
The answer to the last riddle is "shipwrecked" -- the first name, "ship" is its being, and second, "wrecked", is its end. Entering http://www.dontstarvegame.com/shipwrecked takes you to the announcement for Don't Starve: Shipwrecked.
This is the list of all possible quotes that the Wolly puzzle page can display:
Part 2 (accepted)
|
OPCFW_CODE
|
Learn to write components that render differently on iOS and Android, but present the same API. First, we'll use the Platform module to change behavior based on the platform. Then, we'll use the platform-specific file extensions, .ios.js and .android.js, to render platform-specific components.
[00:00] One of the awesome things about React Native is that we can write mostly cross platform, or reusable code, but when we need to, we can write code that behaves differently on different platforms.
[00:10] In order to do that, we have a couple of options. The first one we're going to look at today is the platform module. To use that, we're going to start by just importing it from React Native just like we would any other React Native module.
[00:22] Now, let's use that. I'm going to add a variable message. Then, we're going to say if platform.os is iOS, I want that message to say "hello, you are on an iOS device."
[00:49] Now, if you are on Android — platform.os, that's the API provided by the Platform module, equals "android" — I'm going to set message to be "hello, you are on an Android device."
[01:06] Now, we have this message string, and let me set it to be an empty string to begin with. Let's try rendering this and see what it looks like in the different simulators. I'm going to put it right here inside of this text component.
[01:20] Now, if I open up my iOS simulator, and refresh, yay, we can see that platform.os worked. We can identify the fact that we are in fact on an iOS device. Now, just to sanity check this, let's also see what it looks like on Android. If I open up our Android emulator, we can see that, yeah, we also get platform.os working on Android.
[01:43] Now, the whole point of writing platform specific code is that we can keep the rest of our code base very clean, and totally cross platform compatible, and we can isolate the platform specific parts to individual components, but keep the API consistent.
[01:58] Because of that, we're usually not going to want to use this platform module, unless there are very, very small changes that we want between iOS, and Android.
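The branch the lesson builds up can be factored into a plain helper. A minimal sketch — the `greetingFor` helper is made up for illustration; in a real app you would pass `Platform.OS` from `react-native` into it:

```javascript
// In React Native: import { Platform } from "react-native";
// Platform.OS is "ios" or "android". Here the value is a plain argument,
// so the branching logic is easy to follow (and test) outside a simulator.
function greetingFor(os) {
  if (os === "ios") {
    return "hello, you are on an iOS device";
  }
  if (os === "android") {
    return "hello, you are on an Android device";
  }
  return "";
}

console.log(greetingFor("ios")); // hello, you are on an iOS device
```

React Native also offers `Platform.select({ ios: ..., android: ... })` as a terser way to express the same kind of small per-platform difference.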
[02:05] Much more commonly, we're going to want components that render completely differently on iOS or Android, or at least substantially, for things like, say, different navigation, different user interactions, that sort of thing.
[02:17] To facilitate that, let's look at how we might write components that render completely separately. Going back to my file right here, we're going to put this special custom message into its own component so that these two separate versions can live in totally separate files. We don't need to intermingle our iOS, and Android code.
[02:37] I'm going to call that component hello. It's going to be located in some different files. We're going to say import hello from ./hello. Now, what we're going to do is we're going to create two new files.
[02:51] The first is going to be hello.ios.js, and the second is hello.android.js. These platform-specific file extensions tell React Native which version to load. Then when we import them, we don't reference the file extension at all; when the app gets loaded, React Native loads the correct version for us.
[03:07] Now, what I'm going to do is I'm going to copy and paste all of this code into these two files. Then, we're going to clean it up and customize it to work just for what we need. In hello.ios.js here, I don't need to import Platform anymore. I don't need to import this component.
[03:32] I'm going to call this class hello, and I don't need to say if platform.os equals iOS, because I know that this version of the component is only ever going to be rendered on iOS. Instead, I'm just going to copy this text string, and replace the message entirely.
[03:49] We can now delete all of this. My style sheet stays the same. I'm exporting this hello component. Great. Now, for Android, we do basically the same thing. Starting again with the same code, but now, we know that this version of the component only gets loaded for Android so we can write it just like this.
[04:16] The last thing we'll need to do is we're going to delete all this old rendering stuff from our main component. Instead of rendering anything special, we're just going to return a hello component. I'm also going to delete these styles, because we're not using them. We don't actually need any of that.
[04:42] Now, let's try loading this. I'll go into the iOS simulator, and refresh. You can see the same exact thing renders. I'll go into the Android simulator, and refresh. Again, same exact thing renders. These are effectively equivalent approaches.
[04:57] The difference here, and the reason why you might want you use the file extensions that are platform specific, the .ios.js and the .android.js approach is because now, from our main component, we have a super simple interface, and we know we can rely on the fact that's going to be the same for each component.
[05:14] If we wanted to pass down props, or things like that, each file could handle its logic separately, and we don't have to look at them all in the same giant loaded file. That's all you have to do in order to get cross platform behavior, but keep a nice, clean API.
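The extension-based loading the lesson relies on can be simulated in plain JavaScript. A rough sketch — the `modules` table stands in for the two files, and `resolve` loosely mimics the bundler's platform-specific lookup (the real Metro packager does this at bundle time):

```javascript
// Stand-ins for Hello.ios.js and Hello.android.js: each "file" exports a
// render function returning its platform-specific message.
const modules = {
  "Hello.ios.js": () => "Hello from the iOS version of the component",
  "Hello.android.js": () => "Hello from the Android version of the component",
};

// Loose sketch of what `import Hello from "./Hello"` does under the hood:
// pick the file whose extension matches the current platform.
function resolve(name, platform) {
  return modules[`${name}.${platform}.js`];
}

const Hello = resolve("Hello", "android");
console.log(Hello()); // Hello from the Android version of the component
```

The payoff is exactly what the transcript describes: the importing component sees one stable name, while each platform's implementation lives in its own file.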
|
OPCFW_CODE
|
'The server address uses the pseudo-TLD “.onion” that is not resolvable outside of a Tor network'
Until ICANN sells it to the highest bidder.
Point-of-sale malware dubbed ChewBacca has hit dozens of small retailers in 11 countries as far apart as the US, Russia, Canada and Australia. Researchers at RSA Security have put the ChewBacca Trojan under microscope revealing much more information about a strain of malware targeted at retailers that, whilst not new for …
A good question to ask the owners of the PoS machines is
"Why the fuck are your PoS machines allowed to communicate with the outside world - and TOR! FFS get a clue."
or words to that effect.
I can appreciate that it can be hard to stop this shit getting onto your machines running XPsp1 but you can at least stop the buggers phoning home with all the goodies.
Sir, how the fuck do you expect a PoS machine to work if it can't talk to the outside world?
Go OLD SCHOOL, and use only the ones that dial a telephone number to complete a transaction.
The majority of those types DO NOT respond to an incoming call on the connected line; thus, the ONLY TIME they are exposed to the outside world is during a transaction, and at the end of the day when the day's totals are sent.
The biggest problem with most "modern" POS systems is that they often use a POS operating system (aka WindblowZE), which we all know is easily FUCKED.
Why can it connect to any IP it wants to, where is the firewall that says it can only connect to one single IP address?
Surely this is the first thing that should be done with these machines. It has no reason to need any IP other than the one of the bank it talks to.
Kind of how any coupling should be set up!
Why can it connect to any IP? Because in small retailers (as this particular attack appears to target) the POS is usually connected to a commodity PC running Windows with a software stack for the actual card reader. The PC itself may have other software installed such as links with inventory management software.
In short, it's a basic, commodity PC without any specific security.
I'm not saying it's a good situation, but in contracting work for a friend's company, I saw retail staff browsing the Internet on POS PCs. This malware appears to be targeted at these types of machines.
I'm not blaming the shops here - this is something that is not their area of expertise.
The vendors for the POS (how appropriate that the acronym works both ways...) software should ensure that their systems have at least basic security. Actually, they should ensure they are bloody secure!
Using the excuse that it is a bog standard PC is no excuse.
Dial-up won't necessarily solve anything. At least here in the US, many home and business users have moved from PSTN to IP Telephony. And even those who haven't still aren't fully protected from "the outside world." Many of the larger carriers have at least some of their traffic routed via cloudy bits so, even if you shun IPT, you may still be at least minimally exposed to it.
Yes, most large (i.e. well staffed/trained) organizations have a measure of control over this, but your average small shop owner doesn't have that kind of expertise or access.
The PIN isn't protected and it may be irrelevant.
1) The reader can be swapped.
2) It only asks the card if the PIN is OK, so a fake card says "OK" to ANY PIN, or a PIN of your choice.
3) I've only ever been asked for PIN in Retail (see point (2)) never online.
Ok, we've been here before, but:
When you say "The Reader" do you mean the PED? If you do, it may be directly swapped, maybe with other hardware in a replica box, but it's not going to be able to talk to the POS driver layer.
You correctly state that the PIN is stored on the card, and that the card usually just says "yes" or "no", however this is done by means of an encrypted communication which has to be signed with the correct keys. How do you propose a fake card would be able to do this?
You wouldn't have been asked for a PIN online; that's not the job of chip and PIN. You will, however, not be able to make a chip and PIN card from the information obtained online because - even if you could create the encryption layer in the chip - the account number used by the chip and PIN part of the card is different to that which is stamped across it.
|
OPCFW_CODE
|
Opportunities That Come Your Way While Choosing Blockchain As your Career
Blockchain is one of the fastest-growing technologies in the market, and a great number of banking, insurance, and tech giants have been using various blockchain solutions. In 2019, blockchain professionals were at the top of the list of most in-demand jobs, according to LinkedIn.
Various financial institutions, startups, and traditional enterprises are already either adopting or experimenting with the technology, with considerable success. The crypto ecosystem has been thriving in countries like Singapore, for instance, helping the country build new-age banking services and pushing decentralized innovation in the financial sector.
Because the technology is still so new, people who don't have a technical background or coding skills can build a career by learning the field in detail and finding an area where they can contribute. If you are a developer, you need to know the basics of blockchain.
Which Blockchain Software To Learn?
While some businesses operate solely on the Ethereum technology stack or other public protocols, other companies run on private blockchains, so there are many different protocols in play.
If you are interested in working in the crypto space, you can concentrate on programming for open blockchain protocols like Ethereum and learn niche languages such as Solidity, a contract-oriented, high-level language for implementing smart contracts.
Even Python has a place in blockchain, as Ethereum supports smart contracts written in Vyper (a Pythonic language). Learning smart contracts is essential, and understanding the basic principles of data structures and algorithms, along with familiarity with the development process, is key to success.
If you want to work for financial firms, most of them depend on private blockchains, i.e., access to the blockchain is controlled by the company and the information isn't open. Here the most valuable skill is learning Hyperledger Fabric, an open-source development platform hosted by the Linux Foundation.
Various other blockchain solutions are available from companies such as Oracle, IBM, and Salesforce, offering blockchain-as-a-service tools and leveraging modern programming languages. Adopting a platform will require significant research and a clear understanding of the use case, particular to a given industry. All enterprise blockchain solutions have their own training modules produced by the vendors, which programmers can learn from.
Blockchain Technical Positions
There are highly rewarding positions in the blockchain industry on the software side: designing networks and developing decentralized applications and smart contracts. This covers roles such as blockchain network architect, blockchain engineer, blockchain UI/UX designer, blockchain developer, blockchain project manager, blockchain network security analyst, etc.
Across all of these roles, professionals can make anywhere between $80,000 and $150,000 on the global talent market. Salaries in the domestic market would be considerably less, though, as the Indian blockchain industry is virtually non-existent, and only a handful of startups and enterprise POCs exist.
Business Professionals Can Enter The Field Too
While learning the software side is one of the most popular routes to a career in blockchain, there are other ways. You can, as a startup, establish a new protocol for file storage or data exchange using blockchain. Many such startups came up in the last 3-4 years, raising funds through crowdfunding, aka ICOs.
This opens up business development, marketing, and content-related positions. With the crypto ecosystem coming to life, we have seen various projects propose ICOs and hire business executives and marketing professionals.
Blockchain advisors can assist companies in identifying problems that can be solved with blockchain technology. Thanks to their combined business and technology understanding, they can connect the two to discover niche implementations of distributed ledgers. If you are a business person without technical skills, you can begin with a basic understanding of fundamental blockchain ideas such as distributed computing, consensus protocols, cryptography, tokenomics, crowdfunding, etc.
|
OPCFW_CODE
|
<?php
namespace Chadicus\Exception;
/**
* @coversDefaultClass \Chadicus\Exception\Util
*/
final class UtilTest extends \PHPUnit_Framework_TestCase
{
    /**
     * Verify basic functionality of getBaseException().
     *
     * @test
     * @covers ::getBaseException
     *
     * @return void
     */
    public function getBaseException()
    {
        $a = new \ErrorException('exception a');
        $b = new \InvalidArgumentException('exception b', 0, $a);
        $c = new \Exception('exception c', 0, $b);

        $this->assertSame($a, Util::getBaseException($c));
        $this->assertSame($a, Util::getBaseException($b));
        $this->assertSame($a, Util::getBaseException($a));
    }

    /**
     * Verify behavior of getBaseException() when there is no previous exception.
     *
     * @test
     * @covers ::getBaseException
     *
     * @return void
     */
    public function getBaseExceptionNoPrevious()
    {
        $e = new \Exception();

        $this->assertSame($e, Util::getBaseException($e));
    }
}
|
STACK_EDU
|
In terms of functionality we may have lost a bit in the way of quality of life, as one of the advantages of using just Google services is the homogeneity and interconnection between the services. Do you mean I should set up a Google account for each user using a company email address? If you ever need an app that is available only in Google Play, you can reactivate your account just for the time you need to install it. If you have to get all emotional about it instead of being helpful, go bother someone else. If you don't want to be tied to Google then don't use the apps they produce, but also don't be surprised that they want you to sign in when using services that they provide. And you can only download free apps using the Yalp Store. As for the Gmail paranoia, there's a site called Spokeo where you type in a name and it'll give out more info than you would think.
From here, there are three options that you'll probably want to enable: Check for updates, Hide paid apps, and Install apps immediately. Lookout backs up your Android apps, contacts, and calls. Our company uses AirWatch and finds that it works quite well for our needs. It's basically an open-source, more resource-friendly port of Google Services. Never, and I mean ever, use sideloading as a way to pirate applications; doing so will likely result in your Android device getting a virus.
I also wanted to exclusively use open source software. If you wish to have a quick download of the App then go for it. You mentioned that your policy will be that this is for company use only. Instant Messaging The situation of Free Software apps for instant messaging is complicated. On a Pixel device, the situation is a bit more challenging, since these devices come loaded with Google software.
It runs Android, that is linux based and open source; I'm aware of the fact that Google drives the development, and I like their product a lot. We are an enthusiast site dedicated to everything Android Tablet. Just because someone has a gmail account or Google Play account associated with the phone does not prevent someone from sending all of their email through the exchange server. This can be enabled by selecting the Verify apps option in the Security settings. It is recommended to not use this market at all.
To do so untick the Scan device for security threats and Improve harmful app detection options. Apps available on F-Droid do not contain hidden costs, are safe for children, and are transparent about possible. Let's think about it for a while. Simply search for any free app on the Google Play Store, then select it from the list. How Do You Get More Apps Without Google? I just picked up an android phone. Personally, to try to make this phone work without being connected to google would be just like the lawn mower comment.
I managed to download the DejaOffice app to the Galaxy, twice. Examples like the one you made with the lawn mower never really work a hundred percent, but to get your example closer to reality, it would be like having a lawn mower that could be operated only with the brand of gas of one specific company without real necessity, so a pragmatic person would think about ways to refill it with other available brands. Should we sign up for the free Google Apps account and have users use that, or let them use their own Gmail accounts? In its place, just install the Yalp Store, which will let you download apps from the Play Store, and even install updates. This began rumors that Google was planning to enter the mobile phone market. I apologise if this appears to be a rant to anyone. So I guess the options are: 1.
For example, Grooveshark, a free online music streaming service, previously had an app in the Play store. Root typically allows android apps to flash a new recovery i. Install Apps Without Google PlayStore — Google Play Alternatives There are several ways of installing apps on Android externally but in this article, I will be laying emphasis on few that has been used for years and trusted not just by me but by others too. Also, there are some apps anonymously available for download from the google codebase. Even if you block the contact and calendar sync, you'd at least need it for the marketplace.
Open the F-Droid App and let the App update the repository. Besides that, it is a good idea to check if there are free versions of Android. The Android Market makes an effort to be sure that the software there is malware free. My point is, with or without a G-mail account, there's already a plethora of data about you on the internet already. To start your download, all you have to do is install the multipurpose app Vidmate and after that, proceed to click and click on the search option, enter the app name and press search. Google became a way to browse the web and sync every page I visited. So to start, head to the F-Droid repository, where the Yalp Store app makes its home.
|
OPCFW_CODE
|
The first assignment of the Game Behaviour module in the final year of my course at the University of Derby was to develop a 2D physics sandbox using Box2D, allowing for the construction of various modular, wheeled vehicles.
Other than Box2D, naturally, I used a number of other open source libraries in the development of my sandbox, DropCakes. These include the 2D graphics library SFML and the additional vector maths header RixMath.h, used for all rendering; the GUI library Gwen (GUI Without Extravagant Nonsense), used for all GUI elements; and finally, the XML parser library pugixml, used for scene serialization and de-serialization.
In addition to these libraries I developed a custom component engine, Crispis, based heavily on Coment, but with numerous modifications based on personal preference and goals. This was intended to allow for dynamic creation and removal of components in the editor/sandbox, which would allow the user to easily apply various behaviours to objects and configure them. Due to time constraints the only example of this at hand-in is the Motor Controller Component which allows for manual (keyboard) control of motored joints. Notable differences from Coment include:
- Processes (a.k.a systems) are sorted and executed based on priority values.
- Entities must be manually registered with processes
Here I will provide a brief overview of DropCakes’s features, omitting any instructional information, as this is included in the readme.txt provided with both source and executable distributions.
- Linear drag based upon an estimation of an object’s reference area.
- Angular drag based upon an estimate of an object’s width/radius.
- Fixed time step physics simulation decoupled from framerate (time delta is capped so that at <10 fps slowdown will occur in order to avoid the physics simulation going into a death spiral).
- Create static and dynamic Box2D physics bodies:
– Rectangles, Circles, Triangle-tessellated Polygons
- Edit limited properties of Box2D physics bodies after creation, such as restitution and friction.
- Create Box2D physics joints:
– Weld, Distance, Revolute, Prismatic, Pulley, Rope, Wheel
- Edit limited properties of Box2D physics joints after creation, including limits and motors.
- Delete Box2D physics bodies.
- Edit properties of the physics simulation including time step, linear and angular drag, air density, and gravity.
- Move objects within the simulation using the Physics Movement Cursor.
- Attach Crispis Entities and Components to Box2D bodies and joints for dynamic behaviour:
– Currently only the Motor Controller Component is supplied, which allows for binding keyboard inputs to any joint motor.
- Save and Load worlds via XML serialization:
– Implementation of Entity and Component serialization required a lot of workarounds, so enabling this for new components requires more work than it really should.
- Play, Pause, Resume, and Stop simulation:
– World is automatically serialized on play and reset to this state on stop, so you can easily test out devices without risking breaking them.
- Zoomable, translatable camera.
- Recursive, infinite grid rendering.
- Triangle tessellation for the Polygon tool:
– Implementation is imperfect: highly irregular shapes may fail to tessellate, in which case they will fail to spawn without warning.
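The capped fixed-timestep scheme from the feature list above can be sketched as an accumulator that is fed a frame delta clamped to a maximum, so a slow frame produces slowdown rather than a physics death spiral. A minimal Python illustration (the sandbox itself is C++; the exact constants here are assumptions):

```python
FIXED_DT = 1.0 / 60.0   # physics always advances in fixed steps (assumed 60 Hz)
MAX_DELTA = 0.1         # frame deltas are clamped here, so <10 fps causes slowdown

def advance(accumulator, frame_delta):
    """Feed one frame's elapsed time into the accumulator and return
    (number_of_physics_steps_to_run, remaining_accumulator)."""
    accumulator += min(frame_delta, MAX_DELTA)  # the cap mentioned in the post
    steps = int(accumulator // FIXED_DT)
    return steps, accumulator - steps * FIXED_DT
```

A real loop would call `advance` once per rendered frame and then run the physics step that many times with `FIXED_DT`, keeping the simulation deterministic regardless of framerate.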
|
OPCFW_CODE
|
Cost of Air Travel With Kids Now Sky High
It's the time of year when many families are planning holiday travel, and if yours includes a flight with kids, listen up. Consumer Reports says there are new rules and added fees that could quickly take the jolly out of your holiday air travel. The standard fee for an unaccompanied minor has in some cases more than doubled over the past decade, from $200 to $300 depending on the airline. And the fees don’t stop there: If you actually want to sit next to your kids, you might have to pay more for that, too.
Holocaust Survivors Share Their Stories With Broward County Students
The Holocaust Documentation and Education Center organized a gathering at the Broward County Convention Center for Holocaust survivors to share their stories with about a thousand high school students from 18 Broward high schools. "Survivors are here today because telling their story, they don't want what happened to them to happen to the students and so they're here recounting their memories," explained Rositta Kenigsberg, president of Holocaust Documentation and Education Center. Some of the survivors were over 100 years old.
Woman Warns Others After Buying Weight Loss Product
It was the promise of fast results and a risk-free guarantee that got Rose Marie Noguera to click on a social media ad earlier this year. "I was looking for something to help me lose weight," she said. "It popped up and I said, 'Let me give it a try.'" She paid about $99 for three bottles of the product and took several pills a day, for weeks. "I was just saying, if it helps give me a jump start, I'll be happy," she said. "But it did not." The company did not respond to her request for a refund until NBC 6 Responds reached out to them; Noguera says that next time, she will be more careful with unfamiliar products from social media ads.
Brrr! Florida’s First Snow Park Opening in 2020
Do you want to build a snowman in Florida? Well, soon you’ll be able to. A one-of-a-kind alpine snow park -- with real snow -- is coming to the Sunshine State. Featuring a massive snow tubing hill, Alpine Village and a 10,000 square foot snow play dome, Snowcat Ridge is set to open in Dade City in November 2020. “Snowcat Ridge will be unlike anything anyone else has seen before in the Sunshine State and we are incredibly excited to unveil our vision for the new alpine snow park today,” the park’s CEO Benjamin Nagengast said in a statement.
Burmese Python Found Outside Kendall Neighborhood
A scary situation took place outside a Kendall home when the family found a giant Burmese python outside. Officials say the scene took place in the Cherry Grove neighborhood off Southwest 91st Street, as the homeowners spotted the invasive reptile outside the home and called in licensed snake hunters. The hunters were able to capture the snake and kill it. State officials have encouraged hunters to capture any invasive creatures like Burmese pythons and kill them humanely.
|
OPCFW_CODE
|
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Time : 2018/11/8 9:52
# @Author : lcyanxi
# @Email : lcyanxi.com
# @File : insertSort.py
# @Software: PyCharm
"""
直接插入排序算法:
思想:将一个新的数据插入到一个有序数组中,并继续保持有序,第后一个不断的与前一个比较
3,1,5,4,2
第一次排序,1比3小,则1和3换位置,变为了1,3,5,4,2
第二次排序,5比3大,不需要调整,仍未1,3,5,4,2
"""
def insertSort(data_list):
finish_list = []
finish_list.append(data_list[0])
for num in range(1, len(data_list)):
for pre in range(0, num):
            # If the new element is smaller than the smallest sorted element, put it at the front
if data_list[num] <= finish_list[pre]:
finish_list.insert(pre, data_list[num])
break
            # If the new element is larger than the largest sorted element, append it at the end
elif data_list[num] >= finish_list[-1]:
finish_list.append(data_list[num])
break
elif data_list[num] >= finish_list[pre] and data_list[num] < finish_list[pre + 1]:
finish_list.insert(pre + 1, data_list[num])
break
else:
pass
return finish_list
"""
选择排序:
是每一次从待排序的数据元素中选出最小的一个元素,存放在序列的起始位置,直到全部待排序的数据元素排完
"""
def chooseSort(data_list):
N = len(data_list)
for num in range(N):
for pre in range(num + 1, N):
if data_list[num] <= data_list[pre]:
pass
else:
data_list[num], data_list[pre] = data_list[pre], data_list[num]
return data_list
def chooseSortNew(data_list):
N = len(data_list)
num = 0
while num < N:
for index in range(num, N):
if data_list[num] <= data_list[index]:
pass
else:
data_list[num], data_list[index] = data_list[index], data_list[num]
num += 1
return data_list
"""
冒泡算法的运作规律如下:
①、比较相邻的元素。如果第一个比第二个大,就交换他们两个。
②、对每一对相邻元素作同样的工作,从开始第一对到结尾的最后一对。这步做完后,最后的元素会是最大的数(也就是第一波冒泡完成)。
③、针对所有的元素重复以上的步骤,除了最后一个。
④、持续每次对越来越少的元素重复上面的步骤,直到没有任何一对数字需要比较。
"""
def bubblSort(data_list):
    N = len(data_list)
    for num in range(N):
        # After each pass the largest remaining element has bubbled to the end,
        # so the inner comparison range shrinks by one each time.
        for index in range(0, N - 1 - num):
            if data_list[index] > data_list[index + 1]:
                data_list[index + 1], data_list[index] = data_list[index], data_list[index + 1]
    return data_list
def bubblSortNew(data_list):
    N = len(data_list)
    index = N - 1
    while index > 0:
        num = 0
        while num < index:
            if data_list[num] > data_list[num + 1]:
                data_list[num], data_list[num + 1] = data_list[num + 1], data_list[num]
            num += 1
        index -= 1
    return data_list
if __name__ == "__main__":
data_list = [5, 1, 7, 9, 3, 3, 4, 6]
print("源数据:" + str(data_list))
print(insertSort(data_list))
print(chooseSort(data_list))
print(chooseSortNew(data_list))
print(bubblSort(data_list))
print(bubblSortNew(data_list))
|
STACK_EDU
|
Detect collisions between circles with rectangles inside
I'm working on a project, and I need to be able to detect collisions between circles. I already found a mathematical formula for that : http://cgp.wikidot.com/circle-to-circle-collision-detection
But I've got a question: how can I detect if there is a rectangle in this area? Or just a part of a rectangle inside?
I've got the coordinates of the center of each circle and its radius, and for the rectangle I've got an x and y coordinate, and a width and height. I guess that x and y are just a point, and with the width and the height I can derive the shape.
Any idea?
Thanks a lot!
Better try on http://math.stackexchange.com/.
Write a method to check whether a point lies within a circle or not.
Call that method for all corner points of the rectangle (calculated from x, y, width and height) on both circles.
Use your existing circle intersection detector method to prune calls.
Hope this helps.
Good luck.
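A minimal sketch of the corner-check idea above, in Python for brevity (the thread is Java, but the logic carries over directly). As the follow-up comments note, this misses the cases where the rectangle completely contains the intersection area:

```python
def point_in_circle(px, py, cx, cy, r):
    # Squared-distance comparison avoids a sqrt call.
    return (px - cx) ** 2 + (py - cy) ** 2 <= r * r

def rect_corner_in_both_circles(x, y, w, h, c1, c2):
    """Return True if any corner of the rectangle (x, y, w, h) lies inside
    BOTH circles, i.e. inside their intersection area.
    c1 and c2 are (cx, cy, r) tuples."""
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return any(point_in_circle(px, py, *c1) and point_in_circle(px, py, *c2)
               for px, py in corners)
```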
This won't work for all cases... what about when the rectangle completely contains the intersecting area of the circles? (Or completely contains both circles, etc?)
Or worse, when the rectangle separately intersects with each circle, but not in a point where the circles themselves intersect).
You can use java.awt.geom.Area class.
It has a constructor to create an Area from Shape
public Area(Shape s)
So create simple areas for your circles and rectangle. Then you can combine areas using these methods to obtain new areas:
public void add(Area rhs)
public void subtract(Area rhs)
And check whether area intersects or contains another area via
public void intersect(Area rhs)
public boolean contains(Rectangle2D r)
This sounds like a variation of the technique described in this answer.
The same two cases apply (either the circle's centre is in the region, or one or more of the rectangle's edges intersects the region)... the difference is that instead of considering a single circle in general, you need to consider the intersection of the two circles.
The first case is easy because you can swap centre points. If the rectangle's centre point is in the intersection of the circles, then the rectangle is partly inside. This is easy to determine: find the centre point of the rectangle, see if it's in the first circle, see if it's in the second circle.
The second case is complicated, because it requires you to calculate the curves where the circles intersect. If the edges of the rectangle intersect either of those curves then the rectangle overlaps the intersection. As a special case, if one circle lies completely inside the other one, then the line to check is the border of the smaller circle.
If you don't need an exact answer, then the second case can be approximated. First, find the points where the two circles intersect (or use the method you've already come up with, if you can). These two points can be used to construct a bounding rectangle (they are either the top left/bottom right or top right/bottom left points of a rectangle). If this bounding rectangle intersects with your rectangle, then your rectangle probably overlaps the circle intersection.
All in all, this is fairly complicated if you want to an exact answer that works properly with all of the special cases (one circle completely inside the other, the rectangle intersects both circles but not their intersection, etc). I hope this helps a little.
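For the approximation step, the intersection points of the two circles mentioned above can be computed with standard geometry. A Python sketch (the thread is Java, but the math is identical):

```python
import math

def circle_intersection_points(c1, c2):
    """Return the 0, 1 or 2 points where two circles meet.
    c1 and c2 are (cx, cy, r) tuples."""
    (x0, y0, r0), (x1, y1, r1) = c1, c2
    d = math.hypot(x1 - x0, y1 - y0)
    # No solutions: circles separate, one inside the other, or coincident centres.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)   # distance to the chord midpoint
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))    # half the chord length
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    if h == 0:
        return [(mx, my)]                        # tangent circles: single point
    ox, oy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]
```

The two returned points are the endpoints of the chord where the circles cross, which is exactly the pair used to build the bounding rectangle described above.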
A library I've used before called the JTS topology suite might be appropriate for your needs. It's orientated more towards GIS operations than pure euclidean geometry, but it can easily do all of these calculations for you once you've got the shapes defined:
import com.vividsolutions.jts.util.*;
GeometricShapeFactory gsf = new GeometricShapeFactory();
gsf.setSize(100);
gsf.setNumPoints(100);
gsf.setBase(new Coordinate(100, 100));
//configure the circle as appropriate
Polygon circleA = gsf.createCircle();
//configure again and create a separate circle
Polygon circleB = gsf.createCircle();
//configure a rectangle this time
Polygon rectangle = gsf.createRectangle();
Geometry circleIntersection = circleA.intersection(circleB);
return rectangle.intersects(circleIntersection);
Very nice answer! Thanks a lot, I'm going to take a deep look :D
|
STACK_EXCHANGE
|
I realize that this may be an outdated post, but for anyone who ends up here anyway, this may be helpful...
The user who starts the agent must have the following rights if the INI-file parameter logon= has been set to "1":
Your application now has a token to access Azure Resource Manager on behalf of the user. The next step is to connect your app to the subscription. After connecting, your app can manage those subscriptions even when the user isn't present (long-term offline access).
Only users with access management permission on the subscription are able to disconnect the subscription.
"Program Error 1314 has transpired a required privilege isn't held through the client" Any individual knows how to fix this?
If you have multiple tenants or you want to allow users to reset their own passwords, it's essential that you apply appropriate security policies to prevent abuse.
Automatic Database Diagnostic Monitor (ADDM) can analyze performance problems during a specific period of time and provide recommendations. An ADDM analysis is done on a set of AWR snapshots. The addmrpt.sql script is used to generate the ADDM report.
These are just placeholders that indicate we're making some service updates and we haven't finalized the update yet. There isn't anything you need to do on your side. Of course, you can always check the What's New page to see what's changed recently.
Even if you choose to use federation with Active Directory Federation Services (AD FS) or other identity providers, you can optionally set up password hash synchronization as a backup in case your on-premises servers fail or become temporarily unavailable. This enables users to sign in to the service using the same password they use to sign in to their on-premises Active Directory instance.
I believe you just want to add a single card to a single VM, then install the right drivers in the VM.
When you make any changes to ESXi, those changes are committed only to the in-memory configuration and so will not persist after a reboot. To combat this, VMware has a shell script called /sbin/auto-backup.sh that runs routinely. What this script does is take all of the collective configuration files (including esx.conf) and store them in a compressed file.
The following table shows the default security roles (defined sets of privileges; the security role assigned to a user determines which tasks the user can perform and which parts of the user interface the user can view, and all users must be assigned at least one security role in order to access the system) required to perform each task, and whether the task can be performed while using Microsoft Dynamics CRM for Outlook offline.
Don't use this method to silence your system when it is trying to warn you about something you are doing that could affect its normal operation.
|
OPCFW_CODE
|
Is this drywall mud peeling under the paint?
I just bought my first house, and before moving in I've been doing a little bit of small-scale DIY stuff. I just removed an awkwardly-placed walk-in closet in the primary bedroom, and the paint on the side wall it was connected to started peeling (no big deal), but then I noticed the peeling was pretty thick. Maybe they put mud and paint over wallpaper? I thought the side wall might be plaster before I took apart the closet wall (which was added by the last owner). For context, the house was built in 1910, but I think the closet was added in 2000/2001.
I also can’t totally tell what’s beneath the peeling. It looks like maybe it’s just wood? There’s this screw that’s now exposed:
How can I patch this? I was thinking maybe I can cut a straight line in it and peel to there, then sand the edge, mud over it, sand, prime, and paint. Would that work? What kind of mud should I use? I’ve never patched a wall before, and I’m going to practice a little first just by patching some holes from where the closet shelves were drywall anchored in (to a different wall that is drywall). Any help is appreciated – thank you!
Update: I looked closer at what looked like wood and noticed what looked like a leaf design. I scraped away and found plaster:
I think I’m going to at least start by scraping away at the rest of the wall and then either applying a skim coat myself or call a plasterer to do it for me. Thanks for the help!
It looks to me more like multiple layers of paint over multiple layers of wallpaper.
It was probably someone just slapping paint on the wall without doing any prep work. Prep work is about 80 or 90% of a good paint job. Remove all loose paint, sand, clean, dust, wipe down, clean again and then you might do a good paint job.
That sure does look like wood to me. What happens if you poke it with a screwdriver?
Standard warning about lead paint: don’t sand it without professional-style abatement measures. Flakes are reasonably okay if you clean up and don’t have infants chewing on them. See the EPA website for more info.
@Huesmann I poked pretty hard with a utility knife and it seems solid (no rot). Assuming it is wood (I might peel a little more to be sure), is it fine to just mud over it? The wall is pretty even, and I'd like to avoid redoing this entire wall if I can help it.
@AloysiusDefenestrate I'm doing an at-home test today to get an idea if there's lead!
There isn't any mud in your picture. It looks like someone has painted over wallpaper, wallpapered, then painted over it again. This is a nightmare amount of work - to the point where I would contemplate rehanging drywall.
There isn't a right way to get rid of this mess or an easy way out. You have to take off all layers to start over - even a flat layer will fail over time. What you end up with is drywall terribly messed up after. Hence might as well rip out the sheet and put up new drywall as new drywall will require less mudding and work.
Circa 1910, horsehair plaster on wood lath (or even steel mesh) was likely involved. I agree it's worth considering re-drywalling. A good place to learn the skills is doing a closet.
It turns out the thing that looks like wood is actually more wallpaper… I scraped it away and found plaster
Wallpaper can get very wood-like and thick if it has a few coats of paint on it, especially lead and oil-based paint. I still think it is a waste of time. Also, the more you work on it, the more likely you are to need remediation, and remediating a closet for lead (when you have no children around eating walls) is a complete hassle and waste of money.
|
STACK_EXCHANGE
|
Hi, I'd like to select saved snapshots with an external MIDI controller command, but I don't know how.
The ZS3 selection via MIDI PC commands works fine.
I tried sending on channel 16 with the Master Channel set to 16, and also with the Master Channel disabled, but it does not work yet.
Any advice is appreciated.
When you save a ZS3 snapshot you do it under a Program Change number.
You need to send MIDI Program Change commands from the MIDI controller and you’ll be done.
The ZS3 snapshots work fine with MIDI Program Change commands, but what is the method
for selecting a snapshot with a MIDI Program Change?
Mmmmmm snapshots are slow, they take a significant amount of time to swap.
I don’t think you can assign a Program Change to select a snapshot, I have looked and I haven’t found it.
Yes, it’s probably slower than a ZS3 change but I need the channel changes.
In the userguide it says “It is possible to recall snapshots via MIDI Program Change. New snapshots are assigned the next available MIDI Program Change”.
and in the Snapshot page you can set a program (I assume a MIDI Program Change) number.
There’s a new version coming soon that unbinds MIDI channels from what in zynthian are called chains, now a MIDI channel is linked to an engine.
Until now I’ve been able to use ZS3 snapshots for live performances, Program Change is super fast and you can change the engine preset every time. The only limitation is that you need to load all your engines in the snapshot. IMHO there’s no need to use snapshots unless you have a completely different set of engines
Anyway, maybe someone that knows better than I do can answer
I can confirm that the documentation does state that snapshots can be recalled by program change and that this does not seem to work in any scenarios / configurations I have tried.
Program Change operates differently depending on the mode of the device, e.g. recalling the ZS3 or the engine’s native preset. I can imagine that the behaviour described in this part of the docs may be intended for the master channel, but it does not work.
@jofemodo could you explain what the intended behaviour is? (I have been working on program change behaviour recently so this question is timely.)
Thanks Pau, your method using ZS3’s is understood and working.
Since changing snapshots via MIDI Program Change commands seems interesting (at least to me), and in the current implementation MIDI Program Changes are used to select ZS3’s, maybe the master channel could be used: MIDI Program Change commands on the master channel select snapshots, while MIDI Program Changes on the other 15 channels select ZS3’s.
This is exactly the intended behaviour. It could be broken in some recent update, given that not many people use this feature. Let me check it.
I just committed a fix that solves the problem. It’s on stable & testing branches.
Wiki needs updating to describe this behaviour more accurately.
This only works after the snapshot page has been accessed once. If you start Zynthian up fresh and send program change on master channel then the snapshot is not loaded.
Thanks for your effort both! Your quick responses are really fantastic. I will play with this functionality the coming days.
The fix should solve this.
Sorry - my update must have not worked properly. I see this expected behaviour now, cheers!
[Edit] Wiki updated to indicate that MIDI Program Change on Master MIDI Channel can select Snapshot.
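For anyone wiring up a controller by hand, a Program Change is just a two-byte MIDI message. A tiny Python sketch of the wire format (my own illustration, not Zynthian code; note that channel 16 is 15 when zero-based):

```python
def program_change(program, channel):
    # Status byte is 0xC0 ORed with the zero-based channel (0-15),
    # followed by the program number (0-127).
    assert 0 <= program <= 127 and 0 <= channel <= 15
    return bytes([0xC0 | channel, program])

# Select program 5 on MIDI channel 16 (the default master channel):
msg = program_change(5, 15)  # b'\xcf\x05'
```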
|
OPCFW_CODE
|
Java assignment help secrets: a single point of access for keeping a tab on all the staff. One can use this system for managing the employees on specific projects. Life will look better and easier.
Java is a demanding programming language and platform. One should be well versed in the basics to carry out a project that will impress and also deliver on its marketing potential.
Write a program to estimate the probability that the weaker team wins the World Series, and to estimate how many games on average it will take.
This tutorial is organized for beginners to help them understand the basic to advanced concepts of the Java programming language.
The AppSensor project defines a conceptual framework and methodology that offers prescriptive guidance for implementing intrusion detection and automated response in applications.
org I asked here for help; Sarfaraj promised me that he would complete my C programming assignment ahead of time, and he did it successfully. I got 95% marks on my assignments. I highly recommend him to you; he is very cooperative.
Run/debug configurations can do much more than just run programs. They can also build applications and perform other useful tasks. If you look at the settings of the HelloWorld run configuration (Run
Due to casting, C++ const is really a soft guideline and it can easily be overridden by the programmer; the programmer can simply cast a const reference to a non-const reference. Java's final is a strict rule, such that it is impossible to compile code that directly breaks or bypasses the final constraints.
You need tools that make it easy to discover performance problems before your users do, and fix them before they impact the business. That's why tens of thousands of customers around the globe prefer WhatsUp Gold. Start a free trial.
Schools, colleges, and universities are going to love this system. This unique Java project idea can serve as a single point of access for universities and schools. They can obtain complete details related to a student with great ease.
The aim is to let developers use the same set of logging APIs they are already accustomed to from about a decade of experience with Log4J and its successors, while also adding powerful security features.
If you ever want to change your project's settings after it has been created, right-click on the project's name and navigate to your desired option.
Cheap, low-life people who go on to beg you for material you don't have, yet they can't shut up and go fund their own education. What a pathetic breed of human this website attracts.
This report outlines the problem statement, background review, current and future solutions, the software engineering practices that the team adopted, key architectural decisions and many of the implementation Continue reading →
|
OPCFW_CODE
|
# coding=utf-8
# Author: 'Marissa Saunders' <marissa.saunders@thinkbiganalytics.com>
# License: MIT
# Author: 'Andrey Sapegin' <andrey.sapegin@hpi.de> <andrey@sapegin.org>
# Adapted to Spark Interface by: 'Lucas Miguel Ponce' <lucasmsp@dcc.ufmg.br>
from copy import deepcopy
from collections import defaultdict
from .util import *
import numpy as np
class Metamode:
def __init__(self, mode):
# Initialisation of metamode object
self.attrs = deepcopy(mode.attrs)
# the metamode is initialised with frequencies,
# it means that the metamode will have 1 element right after
# initialisation. So, frequencies are copied from the mode
self.attr_frequencies = deepcopy(mode.attr_frequencies)
# The count and freq are different from frequencies of mode attributes.
# They contain frequencies/counts for all values in the cluster,
# and not just frequencies of the most frequent attributes
# (stored in the mode)
self.count = deepcopy(mode.count)
# used only to calculate distance to modes
self.freq = deepcopy(mode.freq)
# Number of members (modes) of this metamode, initially set to 1
# (contains mode from which initialisation was done)
self.nmembers = 1
# number of all records in all modes of this metamode
self.nrecords = deepcopy(mode.nmembers)
def calculate_freq(self):
# create frequencies from counts by dividing each count by the total
# number of records aggregated in this metamode
self.freq = [defaultdict(float) for _ in range(len(self.attrs))]
for i in range(len(self.count)):
self.freq[i] = {k: v / self.nrecords for k, v in
self.count[i].items()}
def add_member(self, mode):
self.nmembers += 1
self.nrecords += mode.nmembers
for i in range(len(self.count)):
# sum and merge mode count to metamode count
self.count[i] = {
k: self.count[i].get(k, 0) + mode.count[i].get(k, 0) for k in
set(self.count[i]) | set(mode.count[i])}
def subtract_member(self, mode):
self.nmembers -= 1
self.nrecords -= mode.nmembers
if self.nmembers == 0:
print(
"Last member removed from metamode! "
"This situation should never happen in incremental "
"k-modes! "
"Reason could be non-unique modes/metamodes or same "
"distance "
"from mode to two or more metamodes.")
for i in range(len(self.count)):
# subtract and merge mode count from metamode count
self.count[i] = {
k: self.count[i].get(k, 0) - mode.count[i].get(k, 0) for k in
set(self.count[i]) | set(mode.count[i])}
def update_metamode(self):
new_mode_attrs = []
new_mode_attr_freqs = []
for ind_attr, val_attr in enumerate(self.attrs):
key, value = get_max_value_key(self.count[ind_attr])
new_mode_attrs.append(key)
new_mode_attr_freqs.append(value / self.nrecords)
self.attrs = new_mode_attrs
self.attr_frequencies = new_mode_attr_freqs
self.calculate_freq()
class Mode:
"""
This is the k-modes mode object
- Initialization:
- just the mode attributes will be initialised
- Structure:
- the mode object
-- consists of mode and frequencies of mode attributes
- the frequency at which each of the values is observed for each
category in each variable calculated over the cluster members
(.freq)
- Methods:
- add_member(record): add a data point to the cluster
- subtract_member(record): remove a data point from the cluster
- update_mode: recalculate the centroid of the cluster based on the
frequencies.
"""
def __init__(self, record, mode_id):
# Initialisation of mode object
self.attrs = deepcopy(record)
# the mode is initialised with frequencies, it means that the cluster
# contains record already. So, frequencies should be set to 1
self.attr_frequencies = [1] * len(self.attrs)
# The count and freq are different from frequencies of mode attributes.
# They contain frequencies/counts for all values in the cluster,
# and not just frequencies of the most frequent attributes (stored
# in the mode)
self.count = [defaultdict(int) for _ in range(len(self.attrs))]
for ind_attr, val_attr in enumerate(record):
self.count[ind_attr][val_attr] += 1
self.freq = None # used only to calculate distance to metamodes, will
# be initialised within a distance function
# Number of members of the cluster with this mode, initially set to 1
self.nmembers = 1
# index contains the number of the metamode, initially mode does not
# belong to any metamode, so it is set to -1
self.index = -1
self.mode_id = mode_id
def calculate_freq(self):
# create frequencies from counts by dividing each count by the total
# number of members in the cluster of this mode
self.freq = [defaultdict(float) for _ in range(len(self.attrs))]
for i in range(len(self.count)):
self.freq[i] = {k: v / self.nmembers for k, v in
self.count[i].items()}
def add_member(self, record):
self.nmembers += 1
for ind_attr, val_attr in enumerate(record):
self.count[ind_attr][val_attr] += 1
def subtract_member(self, record):
self.nmembers -= 1
for ind_attr, val_attr in enumerate(record):
self.count[ind_attr][val_attr] -= 1
def update_mode(self):
new_mode_attrs = []
new_mode_attr_freqs = []
for ind_attr, val_attr in enumerate(self.attrs):
key, value = get_max_value_key(self.count[ind_attr])
new_mode_attrs.append(key)
new_mode_attr_freqs.append(value / self.nmembers)
self.attrs = new_mode_attrs
self.attr_frequencies = new_mode_attr_freqs
def update_metamode(self, metamodes, similarity):
# metamodes contains a list of metamode objects. This function
# calculates which metamode is closest to the mode contained in this
# object and changes the metamode to contain the index of this mode.
# It also updates the metamode frequencies.
if similarity == "hamming":
diss = hamming_dissim(self.attrs, metamodes)
elif similarity == "frequency":
diss = frequency_based_dissim(self.attrs, metamodes)
else: # if (similarity == "meta"):
diss = all_frequency_based_dissim_for_modes(self, metamodes)
new_metamode_index = np.argmin(diss)
moved = 0
if self.index == -1:
# First cycle through
moved += 1
self.index = new_metamode_index
metamodes[self.index].add_member(self)
metamodes[self.index].update_metamode()
elif self.index == new_metamode_index:
pass
else:
if diss[self.index] == 0.0:
print(
"Warning! Mode dissimilarity to old metamode was 0, "
"but dissimilarity to another metamode is also 0! "
"KMetaModes is going to fail...")
print("New metamode data: ")
print("Attributes: ", metamodes[new_metamode_index].attrs)
print("Attribute frequencies: ",
metamodes[new_metamode_index].attr_frequencies)
print("Number of members: ",
metamodes[new_metamode_index].nmembers)
print("Number of records: ",
metamodes[new_metamode_index].nrecords)
print("Counts: ", metamodes[new_metamode_index].count)
print()
print("Old metamode data: ")
print("Attributes: ", metamodes[self.index].attrs)
print("Attribute frequencies: ",
metamodes[self.index].attr_frequencies)
print("Number of members: ", metamodes[self.index].nmembers)
print("Number of records: ", metamodes[self.index].nrecords)
print("Counts: ", metamodes[self.index].count)
print()
moved += 1
metamodes[self.index].subtract_member(self)
metamodes[self.index].update_metamode()
metamodes[new_metamode_index].add_member(self)
metamodes[new_metamode_index].update_metamode()
self.index = new_metamode_index
return metamodes, moved
|
STACK_EDU
|
Mac OS X Leopard is here. If you placed a pre-order through the Apple Store, your copy should show up sometime this afternoon, though a handful of lucky people seem to have received their copy late Thursday evening. We've covered what you need to know before you install, and how to get your Mac ready. Now here we go with the actual install.
The packaging is considerably more minimal than the last release. The box is roughly the size of a double disc music CD and contains a single installation DVD and a "Welcome to Leopard" booklet. The outer box contains a strange 3-D reflective hologram which looks kind of cool, but doesn't show up in photographs. You'll have to take my word for it.
To get started, just pop in the DVD and launch the Leopard installer. Once your computer reboots from the DVD, select a destination and click "Install." A couple of things to note. Initially, my MacBook hung up on the screen that asks you to select an installation destination. I'm not sure if it was just slow or if perhaps my Boot Camp partition was confusing it. Whatever the case, I headed into Disk Utility to repartition the drive. That failed, too. It turns out you need to be on the first screen of the installer in order to use Disk Utility. If you're any farther along you'll get an error about the disk being in use.
With a drive wiped clean, the installer had no trouble finding the right volume and offered to proceed with an 11.4 GB install, which is about twice the size I recall Tiger being. Curious, I fired up the custom options and noticed print drivers take up a whopping 3.4 gigs of space. Epson is the worst culprit at 1.5 GB worth of drivers.
Once I whittled down the unnecessary drivers and fonts, I ended up with a 6.3 GB install, which isn't too bad.
So far, it's been about 20 minutes and the process says there are about 30 minutes left. I'll update with some more screenshots as soon as it's done.
In the end, Leopard took about 35 minutes to install. I haven't added the developer tools yet, but the install was definitely faster than Tiger. The set up screens are about the same – set up a user account, give Apple a bunch of fake personal data, reject pointless .Mac tie-ins and you're done.
Quick first impressions: Leopard looks very nice. Just browsing my backup of Tiger trying to decide what I need to copy and I'm already addicted to Quick Look – very useful. The icons are more subdued, especially the folders, which I'm not too fond of. But I'm a Quicksilver junkie, so I don't spend too much time in the Finder. Speaking of which, I hope someone figures out a way to leverage Quick Look through Quicksilver.
Hidden Gems: Haven't had time to find them all of course, but I like that there's a new option to turn off the Finder warning about changing file extensions, and coders will be happy to see that Leopard ships with Python 2.5, Ruby 1.8 and Perl 5.8, which is much more up-to-date than previous installs. When you download and install a new app, the first time it launches you'll be asked to OK it. The dialog box now includes the original URL, as well as date and time downloaded, which is nice for reference. I'll post more gems as I come across them.
|
OPCFW_CODE
|
Liter as a lower case letter in siunitx
In German the unit "liter" is usually abbreviated with the lower case "l".
But no matter whether I use \qty{}{\liter} or \qty{}{\litre}, the unit is always abbreviated with the capital letter "L".
Any suggestions how to fix this?
\documentclass[ngerman]{scrartcl}
\usepackage{babel}
\usepackage{siunitx}
\begin{document}
\qty{2}{\liter} oder \qty{2}{\litre}
\end{document}
You should declare \litre with the symbol you want
\documentclass[ngerman]{scrartcl}
\usepackage{babel}
\usepackage{siunitx}
\DeclareSIUnit{\litre}{l}
\begin{document}
\qty{2}{\liter} oder \qty{2}{\litre}
\end{document}
Could it be that in earlier versions of siunitx \litre was abbreviated with the lower case letter "l" and not "L"?
Perhaps there should be a package option for this: the symbol for the liter is either l or L: both are accepted and siunitx should not force either (maybe with l as default, because it's listed first in the brochure).
@egreg The documentation of siunitx actually clarifies that: "In contrast to metres, however, there is more likelihood of users wishing to adjust the appearance of litres: both 'l' and 'L' are commonly used. The recommended approach to adjustment is to re-declare the \litre macro, as \liter will follow automatically. \DeclareSIUnit\litre{l}" (p. 52).
@ManuelWeinkauf I find it really awkward, because I spell “liter”. Also the symbol “L” is almost never used in Italy, for instance and it's almost always ”l”.
@egreg Yeah well, I also prefer "l" instead of "L", but some journals insist in it, and it is a valid abbreviation. Personally, I appreciate that \litre is the default macro (in contrast to liter), but that is just my preference for British English. At least it is declared in the manual, but having it as an option for sisetup may be the preferable option for some future update... I think this is (at least nearly) the only SI-abbreviation where two options are equally correct.
When I used siunitx about a month ago, I seem to remember that either \liter or \litre was abbreviated with the lowercase "l". It is a pity that I now have to enter an extra command to correctly display the unit in German.
If you are disappointed with the recent non-backwards-compatible changes to the package (as I was) you can use \usepackage{siunitx}[=v2]
@egreg The one piece of customisation that is not keyval based is the symbols for units: that's what \DeclareSIUnit is for, and is why I've deprecated \SIUnitSymbol.... So I feel an option here is 'the wrong message'.
@Schubladenzieher You are correct: when I reflected on it, I felt relying on a spelling variant for l vs L was not the best interface decision.
@RobinGeorg I stepped the major version as there were breaking changes - the biggest one is the font selection system (which was really an issue), but ironically it's the one change no-one seems to mind. I have tried as far as possible to avoid breaking changes, but at the same time there are places I can see I got things not just a bit awkward but out-and-out wrong.
|
STACK_EXCHANGE
|
ZK Watchers remain after table delete
Describe the bug
When a table is created, Accumulo creates around 20 Zookeeper watchers associated with that table. This is with the default configuration for that table. The more properties and iterators that are configured on the table, the more watchers that will exist. When the table is dropped, anywhere from 8 to 15 watchers will persist indefinitely. The only way for ZK to drop these watchers is for a restart of the server persisting the connections (ZK, tserver or master). This becomes a problem on a large cluster with a lot of tables being created and deleted as ZK will eventually become inoperable. Restarting ZK or the master is not always advisable since this can lead to more problems on an active cluster.
Versions (OS, Maven, Java, and others, as appropriate):
Affected version(s) of this project: 1.10, 2.0, 2.1
To Reproduce
Start up a cluster using Uno and have netcat installed for running ZK four letter commands. You will probably have to modify the ZK whitelist in zoo.cfg. For example, vi <uno_home>/install/apache-zookeeper-3.6.1-bin/conf/zoo.cfg and modify the property: 4lw.commands.whitelist=*.
Create a table. For example, accumulo shell -e "createtable test"
Get the table ID for that table: accumulo shell -e "tables -l"
Get a count of the number of watchers associated with that table ID. For table ID=4:
echo wchp | nc localhost 2181 | grep "tables/4". This returned a count = 23
Drop the table: accumulo shell -e "droptable test -f"
Get the number of watchers again for that table ID. Command returned a count = 15
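The watcher count across these steps can be scripted by capturing the wchp output once and filtering per table ID. A small shell sketch (the dump below is a hypothetical sample; real paths look like the ones listed later in this issue):

```shell
# On a live cluster, capture the dump once:
#   echo wchp | nc localhost 2181 > /tmp/wchp.out
# Hypothetical sample dump, one watched znode path per line:
cat > /tmp/wchp.out <<'EOF'
/accumulo/uuid/tables/4/conf/table.split.threshold
/accumulo/uuid/tables/4/name
/accumulo/uuid/tables/5/name
EOF
# Count watcher paths for table ID 4:
grep -c "tables/4" /tmp/wchp.out
```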
Expected behavior
ZK Watchers associated with a table should be dropped when the table is deleted.
Additional context
There is a very good chance this will be fixed with the 2.1 change #1454. But until that change is made, this is a critical bug in 1.10 and 2.0.
The watchers that persist seem to be associated with table configuration:
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.split.threshold
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.replication
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.balancer
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.groups.enabled
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.compaction.minor.logs.threshold
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/namespace
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.classpath.context
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/tserver.dir.memdump
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.compaction.selector
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.majc.compaction.strategy
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/tserver.walog.max.referenced
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.compaction.dispatcher
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/table.split.endrow.size.max
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/name
/accumulo/d12e80e5-3008-43af-b050-195094437b44/tables/p/conf/tserver.memory.maps.native.enabled
While the issue with a large number of watchers was documented in https://issues.apache.org/jira/browse/ACCUMULO-2757 - the resolution of that is still open. This is a separate issue that aggravates the problem with lots of watches, because the tservers (and the master) seem to keep unnecessary watches after a table is deleted.
Maybe some of the ideas in this old and closed pull request can motivate the ultimate solution to this issue:
https://github.com/apache/accumulo/pull/1443
Here is another ticket for reference: https://github.com/apache/accumulo/issues/1423
Do we have a utility that uses ZKUtil.visitSubTreeDFS() to traverse the tables in ZK and for those that don't exist delete the watches in the callback?
@dlmarion I don't anticipate us shipping a fix for 1.10, since doing so would likely break forward-compatibility with other 1.10 releases in the way things are persisted to ZK. This has been a constraint for a long time in 1.x, and I don't think it would be super urgent to fix for 1.10. There are some workarounds, such as creating the watched-but-missing entries and then deleting them, so the delete causes the watches to trigger, or bouncing tservers with excessive watchers. This issue is assigned to @EdColeman and I believe he is making progress in his fork on a more robust solution for 2.1 that, among other changes, involves using significantly fewer ZK nodes to store config.
One of the issues that blocks this as a specific fix is that, currently, watchers are created with the expectation that the node might appear, so code will receive notification on node creation. If the property is never set, the node never comes into existence, and the watcher that was set remains even when the table is deleted. To clear the watcher, it would be necessary to create the "missing" nodes and then delete them to trigger the watcher, which hopefully would not then be reset. While this might be a solution, the rework of the properties in 2.1 will make this unnecessary.
I explored a stand-alone utility that can gather the watchers using a ZK four-letter word command (either wchp or wchc works) and then create/delete the nodes with watchers when a corresponding table ID does not exist. The four-letter commands can be disruptive, and there was no interest in the utility.
Should we remove this from the 1.10.2 project then?
If it seemed likely that we would release a 1.10.2, then it might be possible to add some mitigations as @ctubbsii mentioned - I need to finish the 2.1 approach first.
This should be resolved with #2569
|
GITHUB_ARCHIVE
|
For the last couple of months my blog has been abandoned. This happens when someone becomes so excited about his idea that he forgets about everything and devotes all the spare time he has to developing this idea. Which is exactly what happened to me.
I love puzzles. This is the only type of games I play on my mobile phone. Last year I came across a very good puzzle game, which took away a couple of weeks of my life: REBUS — Absurd Logic Game. And recently I thought, what if I inverse the idea, and instead of finding out the word encoded in a picture, try encode that word using the set of given pictures.
Developing a quick prototype took about a week, and when I showed it to my wife, she found it playable. So I decided to move on and develop a complete game.
There is a picture representing some word. You have to guess this word, and then construct it using other pictures given below. Of course, most of the time you need only certain parts of these words, so you have to remove some letters from the front or the back of these words. Like this:
For each correctly composed word you get some coins. Initially, the number of coins equals the number of letters the target word has. Each time you remove a letter while composing the target word, the number of coins is decreased by 1. The aim is to build the word while removing as few letters as possible.
The puzzles are composed in such a way that there is always one best solution (which gives you at least one coin after solving it), and there are other not-so-bad solutions (which give less or even no coins).
Coins can be used to get various types of hints, like revealing the target word, or the word options, or they can be used to unlock new puzzles. Initially, only 10 out of 250+ puzzles are available, and each time you solve one, a new one is unlocked.
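The scoring rule above can be sketched in a few lines (my own minimal sketch; the floor of zero is an assumption based on "give less or even no coins"):

```python
def coins_awarded(target_word, letters_removed):
    # One coin per letter of the target word, minus one per removed letter;
    # assumed to floor at zero rather than go negative.
    return max(0, len(target_word) - letters_removed)

coins_awarded("massage", 3)  # 7 letters - 3 removals = 4 coins
```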
The player can purchase additional coins using in-app purchases, and Google provides a very easy way of integrating such a feature into your app.
Android. Just because I’ve already had some experience developing Android apps and recently completed a few Android Development courses, I decided to make it the primary platform. The iOS version will follow, if the Android gets even slightly successful, meaning that it gets at least 1000 downloads by the end of 2017.
Probably, this was the most time-consuming part. Constructing puzzles manually would take weeks and would have been error-prone. So I decided to automate this task.
I downloaded a file containing 1000 most popular English nouns. This dictionary was fed into a Python script, which I developed.
On the first iteration the script creates all possible splits of each word, each split containing word parts with at least 2 letters, like this: "massage" = [["ma", "ss", "age"], ["mas", "sa", "ge"], ["mass", "age"], ["massa", "ge"], ["mas", "sage"], ["ma", "ssage"]].
On the second iteration the script looks at each word, and for each split it finds the full words from which each word part of the given word can be obtained by removing letters either from the front or the back. For each word part there can be many full words from which it can be obtained, so the script keeps track of how many letters have to be removed from the full word to get the required part — the cost of that word.
Then it sums up the costs for each combination of full words that construct the target word, and drops those, whose costs are equal or higher than the number of letters in the target word. The combination of words, which has the lowest cost becomes the best possible solution. Limiting the cost of the best solution to the target [word length — 1] ensures that the player receives at least one coin for successfully solving the puzzle using the best solution.
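The first-iteration enumeration could look roughly like this (my own sketch, not the author's actual script; `min_part` enforces the two-letter minimum):

```python
def all_splits(word, min_part=2):
    # Enumerate every way to cut `word` into consecutive parts of at least
    # `min_part` letters; the whole word itself counts as a trivial split.
    if len(word) < min_part:
        return []
    splits = [[word]]
    for i in range(min_part, len(word) - min_part + 1):
        head = word[:i]
        for rest in all_splits(word[i:], min_part):
            splits.append([head] + rest)
    return splits

# Proper (multi-part) splits of "massage", as in the example above:
proper = [s for s in all_splits("massage") if len(s) > 1]
```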
After running the script on the entire dictionary, something like 400 questions had been generated. These questions were then manually checked. Many puzzles had similar word options because, for example, if there are many target words containing the part "id", and the shortest word option in your dictionary containing this part is "idea", the script will of course suggest this word for almost all such puzzles. So these had to be manually substituted with different word options. Often this led to increasing the cost of the best solution, but it gave a greater variety of word options.
The resulting set of puzzles has been saved into an XML file, and imported into the Android project. The app would then parse this file, and use it to display the puzzles.
Tricky part. Android runs on a vast range of devices, each with different screen size and resolution. There are literally thousands of possible combinations of these, and it is impractical to try to create images that suit the majority of them. So the solution was to use vector icons in SVG format. Android Studio can import this format, so that there is only one resource for all screen sizes and densities. Also, this greatly reduces the size of the app, because often, vector images take less space than raster ones.
I still had to create different variations of the main puzzle screen, where the lower part of the screen that contains the buttons has been rearranged for better utilization of the available space on larger screens. However, this was just one such place in the app, and all the rest of layouts are completely identical for small, medium and large screens.
Surprisingly, this was the easiest part. There is a singleton which keeps track of the score, provides the puzzle data, decides whether the player has solved it or not, and saves the progress to a small internal database. The rest is quite straightforward, so I won't describe it in too much detail.
The game is now available on Google Play, and it is free. Now I’m trying to figure out how to cheaply promote it so that I get the desired 1000 downloads by the end of this year. Social media has been utilized, but this did not result in too many installs. I’ve sent review requests to a few dozens of Android gaming websites, and still waiting for responses. I even tried to run an AdWords campaign, which brought me about 50 installs for the $15 spent. Still, 1000 installs is quite far away. Probably, a larger promotion budget would have given me the required numbers, but unfortunately I don’t have a spare penny at the moment to spend it on promotion.
All in all, this was a good exercise, which gave me the opportunity to get some experience in all stages of app development, from the concept, to the prototype, to the complete app, and on to promotion. It would be nice to do the same for an iOS version, especially if the app also brings in some money.
|
OPCFW_CODE
|
Run gprofiler without root/sudo
Description
This PR adds support for running gprofiler without root/sudo as discussed in issue 905. There are several assumptions and components that I will mention here.
This PR requires a change in the granulate-utils repo that defines the run_in_ns_wrapper function found in PR 265 on that repo https://github.com/Granulate/granulate-utils/pull/265.
Assumptions when running without root:
When running without root, the user must use --pids to select user owned processes only.
The user must direct the log and pids files to a user owned directory (e.g. with --log-file and --pid-file parameters).
The user must set certain system parameters, such as kernel.perf_event_paranoid, as needed to allow gprofiler to run.
Some of the corner cases that require fallback rw exec directories for POSSIBLE_AP_DIRS may not be resolved.
Components:
Replaced the exit/error when the is_root check fails in verify_preconditions with this message: "Not running as root, and therefore functionality is limited. Profiling is limited to only processes owned by this user that are passed with --pids. Some additional configuration (e.g. perf_event_paranoid) may be required to operate without root."
Created a run_in_ns_wrapper function which bypasses the code to enter namespaces when not root (as we assume we're always in the correct namespace for the processes being profiled)
Added a parameter to the pgrep_maps function to ignore permission errors. Each time a profiler calls this function, it checks whether we are root and, if not, passes True.
Redirected the default value of TEMPORARY_STORAGE_PATH to the resources directory.
Added a mkdir_owned_user function, used in main where the gprofiler_tmp directory under TEMPORARY_STORAGE_PATH is created, so that it doesn't throw an error when we aren't root but still ensures the directory is owned by the current user.
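To make the components above concrete, here is a hedged Python sketch of what the three helpers might look like. All signatures here are assumptions for illustration (the real implementations live in gprofiler and granulate-utils), and the sketch assumes a Unix-like system:

```python
import os
import re
from pathlib import Path

def _enter_ns_and_run(nstypes, callback, target_pid):
    # Placeholder for the real setns()-based logic in granulate-utils.
    return callback()

def run_in_ns_wrapper(nstypes, callback, target_pid=1):
    """When not running as root we cannot setns() anyway, and we assume we
    are already in the right namespaces for the user-owned target processes,
    so just invoke the callback directly."""
    if os.geteuid() != 0:
        return callback()
    return _enter_ns_and_run(nstypes, callback, target_pid)

def pgrep_maps(pattern, ignore_permission_errors=False):
    """Scan /proc/*/maps for processes whose maps match the pattern,
    optionally swallowing PermissionError for maps we can't read non-root."""
    regex = re.compile(pattern)
    pids = []
    if not os.path.isdir("/proc"):  # non-Linux guard, for this sketch only
        return pids
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/maps") as f:
                if any(regex.search(line) for line in f):
                    pids.append(int(entry))
        except PermissionError:
            if not ignore_permission_errors:
                raise
        except FileNotFoundError:
            continue  # the process exited while we were scanning
    return pids

def mkdir_owned_user(path):
    """Create the directory if missing without erroring when it already
    exists, but refuse to reuse a directory owned by another user."""
    p = Path(path)
    p.mkdir(parents=True, exist_ok=True)
    if p.stat().st_uid != os.getuid():
        raise PermissionError(f"{path} is not owned by uid {os.getuid()}")
    return p
```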
Potential issues:
Is there anything I should add/change in the message that is displayed when the is_root check in verify_preconditions fails? Also, is print() to stdout correct here?
Is it fine to redirect TEMPORARY_STORAGE_PATH to the resources directory even in the default case, or should I add a check to only do this when not root?
Do we need to resolve the fallback rw exec directories for POSSIBLE_AP_DIRS?
I've tested this on two systems and it works on both, but on one of them I receive this warning (though gprofiler completes and produces valid results). I discuss this more in the corresponding granulate-utils PR since that is the source of the error.
[2024-12-02 19:59:55,557] WARNING: gprofiler.profilers.java: Failed to enable proc_events listener for exited Java processes (this does not prevent Java profiling)
Traceback (most recent call last):
File "granulate_utils/linux/proc_events.py", line 222, in start
PermissionError: [Errno 1] Operation not permitted
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gprofiler/profilers/java.py", line 1395, in start
proc_events.register_exit_callback(self._proc_exit_callback)
File "granulate_utils/linux/proc_events.py", line 272, in wrapper
File "granulate_utils/linux/ns.py", line 305, in run_in_ns_wrapper
File "granulate_utils/linux/ns.py", line 299, in run_in_ns_wrapper
File "granulate_utils/linux/proc_events.py", line 260, in _start_listener
File "granulate_utils/linux/proc_events.py", line 225, in start
PermissionError: This process doesn't have permissions to bind/connect to the process events connector
Related Issue
https://github.com/Granulate/gprofiler/issues/905
Motivation and Context
Users of gprofiler have requested this feature, as some cloud instances do not have root access but users still want to profile user-owned processes.
How Has This Been Tested?
I ran stress-ng and targeted gprofiler to the stress-ng pids without sudo. It successfully produced flamegraphs
Sample command line: ./build/x86_64/gprofiler --pids 1421864 -o results/ -d 15 --log-file ./gprofiler.log --pid-file ./gprofiler.pid
I have tested this on x86 using scripts/build_x86_64_executable.sh script. Centos 9 Stream w/ kernel 6.6
Also tested using sudo targeting specific pid(s) and system-wide, and it still works.
Was not able to run tests/test.sh as it required apt-get/debian environment.
Screenshots
Checklist:
The code is linted.
I have not updated the README.md doc here. Might need some guidance.
[x] I have read the CONTRIBUTING document.
[ ] I have updated the relevant documentation.
[ ] I have added tests for new logic.
Is there anything I should add/change in the message that is displayed when the is_root check in verify_preconditions fails? Also, is print() to stdout correct here?
I prefer that the rootless mode will be opt-in, and not a default in case of non-root user (which allows misconfigurations to continue misusing the profiler).
So, a flag like --rootless that enables this behavior & checks that we're not root. And the verify_preconditions() function will suggest using --rootless.
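A minimal sketch of the suggested opt-in check (the --rootless flag is the reviewer's proposal, not merged code; the is_root parameter is hypothetical and exists only to make the sketch testable):

```python
import os
import sys
from types import SimpleNamespace

def verify_preconditions(args, is_root=None):
    """Rootless mode must be requested explicitly; reject running as root
    with --rootless, and reject non-root runs without it."""
    if is_root is None:
        is_root = os.geteuid() == 0
    if getattr(args, "rootless", False):
        if is_root:
            sys.exit("--rootless was given, but gprofiler is running as root")
    elif not is_root:
        sys.exit("Not running as root; re-run with sudo, or pass --rootless "
                 "to profile only user-owned processes selected via --pids")
```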
Is it fine to redirect TEMPORARY_STORAGE_PATH to the resources directory even in the default case, or should I add a check to only do this when not root?
I left some comments on the PR about this topic.
Do we need to resolve the fallback rw exec directories for POSSIBLE_AP_DIRS?
Not sure I got you here.
I've tested this on two systems and it works on both, but on one of them I receive this warning (though gprofiler completes and produces valid results). I discuss this more in the corresponding granulate-utils PR since that is the source of the error.
Yeah, it's fine - as I commented on granulate-utils, proc_events are expected to fail in rootless.
|
GITHUB_ARCHIVE
|
I recently completed the LPIC-1 certification offered by the Linux Professional Institute, which tests candidates on Linux internals and system administration.
The LPIC-1 certification is broken down into two exams: 101-400 and 102-400. 101-400 covers topics such as Linux system architecture, installation, package management, devices, and filesystems. Meanwhile, 102-400 explores shell scripting, X11/GDM, service management, basic network configuration, and security concepts such as user and group permissions. The objectives emphasize the location of crucial system and configuration files, while also delving heavily into command line utilities and their respective options and switches. There’s also a focus on the use of vi as a text editor, as well as some rudimentary exploration of SQL using MariaDB, which I thought was a good general-purpose addition for any aspiring sysadmin.
As the certification is vendor-agnostic, the course objectives cover both Red Hat and Debian derivatives, including their respective package managers and distribution-specific utilities. At times, this became a bit overwhelming, but I understood the need for a prospective Linux sysadmin to work with both alternatives due to their ubiquity and market share. A little more puzzling was an equal focus on both System V and systemd init systems, which feels less essential in the present day. Despite its detractors, systemd has taken hold as a standard in the Linux community, and I wouldn’t be shocked to see System V init abandoned entirely in future versions of the certification.
I was disappointed when I discovered that the exam questions are all multiple-choice. While rote knowledge of the commands and concepts is impressive, the lack of simulation-based content may turn off prospective employers that seek a more practical test of a candidate’s technical knowledge. Many certs have been devalued by cheating and freely available online “brain-dumps”, and I doubt these exams are any exception to the rule.
Speaking as a casual Linux user since the mid-1990s, I was shocked by how unfamiliar the content felt. There’s a particular emphasis on system administration and management that the average user typically won’t touch in the vast majority of cases. If anything, I think this speaks to the ease of use of most modern Linux distros; for instance, most home users don’t have to consider their hard drive’s partition layout during installation, nor do they have to toil on the command line to configure their system when graphical desktop environments such as GNOME lay the options out in a user-friendly manner and provide all the necessary buttons and sliders.
My study materials were a combination of the LPIC-1 video course available at www.linuxacademy.com, the course objectives from the LPI website, and a selection of “how-tos” gleaned from various websites. It’s important to stress the use of multiple information sources in combination – given the breadth and depth of the exam objectives, I don’t think any one source would have helped me pass the exams on its own.
Drawbacks aside, I feel the certification is still worth the time and money (~$400 US, with various vouchers and discounts available to offset the cost). The knowledge I gained as a relative novice was a good return-on-investment and would serve as a good stepping stone to a more intensive and practical certification such as those offered by Red Hat. As such, I’d recommend LPIC-1 to anyone seeking a certificate reflecting a vendor-agnostic approach to Linux system administration.
|
OPCFW_CODE
|
MICROSOFT PROJECT 2010 ANSWERS
Microsoft Project 2010 download
Oct 22, 2013: Project Online General Questions and Answers. Thanks, but although this says it's the Microsoft Project 2010, this also defaults to the trial version of Microsoft Project 2013. I'm trying to download the trial version of Microsoft Project 2010, which will run with Windows XP.
Microsoft Project Professional 2010 - Microsoft Community
Nov 10, 2017: Microsoft Office Professional 2010 has been running continuously without a problem. But then over the weekend I wanted to use Microsoft Project Professional 2010, and as soon as I opened it up I got the alert message “this product is not activated” and a big red band across the top.
Page breaks in Project 2010 Standard - Microsoft Community
Nov 08, 2010: Welcome to this Microsoft Project forum :) For some reason this is not in the Insert group of the Task ribbon. Try a right-click in the Insert group, and select Customize Ribbon. At the top of the dialog, from the pick list, select Commands not in the ribbon. You can now scroll down the list to find it.
Microsoft Project 2010 - Error message when loading file
May 15, 2014: Replies (7). To do this, open a new blank project and then click Project > Subproject. In the Insert Project dialog, select the troublesome MPP file, DESELECT the Link to Project checkbox, and then click the Insert button. If Microsoft Project 2010 is able to resurrect the file, you will see the project inserted as an unlinked subproject. Hope this helps.
Microsoft Project 2010 For Dummies Cheat Sheet - dummies
How to Use Microsoft Project 2010 to Resolve Resource Conflicts. With Microsoft Project 2010, you can resolve resource conflicts by modifying assignments, changing scheduling, and more. Consider the following tactics to resolve resource conflicts: Revise the resource’s availability to the project. For example, change the person’s availability from 50 percent to 100 percent.
Latest Questions & Answers on Project 2010 , including how
Sep 23, 2009Last week the Project Team publicly disclosed Microsoft Project 2010 at the Project Conference 2009 in Phoenix, AZ, USA. It was a great event with a fired up group of 1,300 customers and partner attendees. Press and analyst coverage was very positive with the overriding themes being Project 2010 is the most significant release for..
Microsoft Project 2010 Archives - IT Answers
albertlekaf: Hi all, I am trying to recover a Microsoft Project 2010 file that I spent around 4 hours of work on, which was overwritten by an older version of the file. It happened on a school laptop, therefore it was saved on my USB. Any suggestions would be great.
What are the features of Microsoft Project 2010? - Quora
Feb 12, 2017: Features of Microsoft Project 2010: Team Planner: this feature lets you drag and drop resources to try out different project scenarios. Task Inspector: if a task has a scheduling conflict or a resource is over-allocated, this feature lets you fix the issue.
What are the benefits of Microsoft Project 2010? - Quora
Feb 01, 2017: Benefits of MS Project: Microsoft Project 2010 delivers visually enhanced ways to efficiently manage a wide array of projects and activities. From empowering your team to selecting the proper resources and meeting critical deadlines, Project 2010 offers more intuitive experiences to ensure you stay productive and achieve impressive results.
Common Project Server 2010 Questions and Answers - Blogger
Jan 31, 2012Common Project Server 2010 Questions and Answers 20120201 Question - Can I save a copy of my project schedule as an file, work on it disconnected from project server and when I am ready save over the top of my schedule version in project server?
|
OPCFW_CODE
|
Claiming in the Software Industry
Software is a notoriously complex area for the submission of robust and defendable R&D claims
It is the main technology area in which HMRC enquires into and rejects (or substantially reduces the value of) claims.
The main difficulty is that companies developing software can spend significant resources in planning, writing, testing and debugging software, yet whether part of these costs is eligible for R&D Tax Credit purposes comes down to whether the software development meets the HMRC criteria for R&D, and whether that can be communicated effectively to HMRC.
With generalist accountants remaining understandably under-skilled regarding R&D claims, and very few R&D specialists also being software engineers, many software companies have either under-claimed or not claimed at all. Others have incorrectly claimed without the necessary supporting evidence and would be unable to defend their claim if HMRC launched an enquiry, which it can do up to six years following the submission and payment of a claim.
Many advisors and accountants incorrectly advise their clients that work spent developing bespoke software code to provide new ‘features and benefits’ de facto qualifies as R&D.
This is certainly not the case, and many claims are rejected by HMRC because the claimant (often supported by inexperienced advisors) prepared a detailed report focusing on the features and benefits of the software development, rather than demonstrating the underlying advance in science and technology required to realise them.
What you need to know:
Requirements to prepare a robust and defendable claim
To prepare a robust claim, it is necessary to have supporting documentation identifying the advance in technology sought by the project, the technological uncertainties that were required to be overcome to achieve the technological advance and the reasons as to why the resolution of these technological uncertainties would not be obvious to a competent professional in the field.
HMRC Qualifying Criteria for Software Projects
To count as ‘qualifying R&D’, according to HMRC Guidance (CIRD 81960) a software project must:
- Seek to achieve an advance in software / IT (para3).
- There must be an advance in overall knowledge or capability in software / IT, not just the company’s own state of knowledge or capability alone (para6).
- The development of a software product does not represent an advance in software / IT simply because it is software (para8).
- Routine adaptation of an existing product or process is not R&D (para12).
- Assembling components of a program to an established pattern or using routine methods for doing so is not R&D (para29).
- Combining standard technologies can be R&D if a competent professional in the field cannot readily deduce how the separate components should be combined to have the intended function (para30).
3 Key Areas identify if a project is eligible to qualify
- The project must seek to achieve an advance in science or technology. (There must be an advance in overall knowledge or capability in a field of science or technology, not just the company’s own state of knowledge or capability alone.)
- The advance in science and technology must be resolved (or attempt to be resolved in the case of a failed project), through the resolution of a technological uncertainty.
- The resolution of the technological uncertainty should not be obvious to a professional competent in the area of software for which the claim is being made.
Advance in science or technology
To demonstrate an advance in science/technology it is necessary to provide a state of the art review of the field at the start of the project.
In order to identify the state of the art in a field, and thus articulate clearly to HMRC how our clients’ work meets the requirement of advancing the field, we have compiled one of the most comprehensive industry databases, identifying early-stage technology companies developing state-of-the-art solutions in over 190 areas of Software and IT (see the full list on the Software / IT tab in our Sector Experience links above). We then describe the projects undertaken by our clients against the state of the art in the field.
To be successful, a project must be able to show that the development required the resolution of a technological uncertainty. Hence a major factor in determining the eligibility of a claim is identifying the technological uncertainty of a project.
Many would-be claimants have tried to argue that integrating a multitude of leading-edge tools and languages in itself represents a ‘system technological uncertainty’, but such arguments require specific details to justify them, and HMRC guidance warns against such arguments:
“It may be claimed that there are always system uncertainties involved with software. It is true that there is always some uncertainty about anything. But uncertainties that can be resolved through discussions with peers or through established methods of analysis are routine design uncertainties rather than technological uncertainties. Technical problems that have been overcome in previous projects on similar operating systems, or computer architecture, are not technological uncertainties.”
Obviousness to a Competent Professional
This criterion could be considered somewhat subjective, and in our experience it is not uncommon for the most skilled software engineers to downplay their advances relative to their peers. Occasionally, the weaker ones believe that everything they have developed represents a major advance in the field!
As with many areas of R&D claims there is no substitute for sector knowledge and experience of having made previous claims in similar areas of technology.
At May Figures, all our software claims are written by technical analysts who between them have prepared over 600 Software / IT claims and possess:
- a PhD or degree in Computer Science
- 20+ years’ experience in software code development / IT project management
- 4+ years of R&D claim writing experience mainly on Software /IT R&D claims
Complications in real-world software development
Whilst such broad definitions may be appropriate for differentiating blue-sky or pure research (fully qualifying) from ‘routine commercial software development’ (non-qualifying), they are very subjective in the world of commercial software development, where developers typically invest considerable effort in developing commercial solutions without necessarily assessing how their developments could be considered to make an overall advance in knowledge or capability in the field of software / IT, rather than just an advance for their company.
It is rare for developers to build a software solution entirely from scratch; typical best-practice development processes rely on building bespoke application functionality with software modules or micro-services, residing on top of code bases developed in earlier projects or even by open-source communities.
Examples of Qualifying Projects:
HMRC give examples of qualifying projects as being activities such as:
- Developing new operating systems or languages.
- Creating new search engines using materially new search methods.
- Resolving conflicts within hardware or software, where the existence of a problem area and the absence of a known solution have been documented.
- Creating new or more efficient algorithms whose improvements depend on previously untried techniques.
- Creating new encryption or security techniques that do not follow established methodologies.
HMRC also give examples of routine activities that are not normally R&D in themselves:
- The handling of interactions with users. This covers areas such as development of data entry procedures and user interfaces.
- The visual presentation of information to users.
- Creating software that replicates an established paper procedure, possibly building in best practices. The fact that a previously manual task has been automated does not by itself make it R&D.
- The assembling, carrying out routine operations on and the presenting of, data.
- Using standard methods of encryption, security verification and data integrity testing.
- Creation of websites or software using tools designed for that purpose.
What if only part of a project comprises a qualifying activity?
Typically, a large software / IT development project contains elements of qualifying R&D, but according to HMRC guidance: “The project as a whole will not qualify as R&D, but there may be elements in the project that do qualify as R&D. Most projects for the development of a commercial product will go further than resolving technological uncertainties and so will not qualify as R&D in their entirety.”
Hence the job of an advisor is to analyse a large, complex IT project and break it down into its constituent elements, identifying those aspects which can legitimately be claimed as well as those which cannot.
Software development in rapidly developing fields – qualifying or non-qualifying?
In fast developing fields such as Cloud Computing or Big Data, adapting state of the art tools / open-source products for use in an application may not be routine at the time that they were undertaken, even though by the time the claim is made (up to 3 years later), the use of such tools and products may well be widely established.
Software development using technology from a different domain – qualifying or non-qualifying?
Another difficulty faced in determining whether a project is qualifying R&D especially in fast moving fields is where the technology stack has been used previously but in a slightly different application. In the field of Big Data, we saw many early claims where the R&D level was low; simply moving a SQL database to a NoSQL database for a type of data analytics which had previously not been widely reported as using NoSQL databases.
Now that NoSQL has become an established technology, to be considered a qualifying R&D project there will likely have to be some aspect of technical innovation beyond merely porting a database from a relational format to a schema-less one. We have recently claimed NoSQL projects successfully on the basis of advances in aspects such as deployment optimisation or security enhancement, rather than merely on the basis of using NoSQL.
Major Software / IT Technology Sectors in which we have claimed:
- Algorithms (Evolutionary and Genetic, Fuzzy Logic, Neural Networks, Graphical)
- BigData / NoSQL (Cassandra, Hadoop, MongoDB, Neo4j)
- Cloud Computing (Service Virtualisation, Microservices, Dynamic Deployment, Security)
- CMS Systems (Datastructures, Algorithms and Optimisations)
- Compliance Record Keeping (Training, Health & Safety, Pharmaceutical CFR21 part 11)
- Data Analytics / Data Visualisation (Hypercubes)
- E-Commerce Solutions (Faceted Search Algorithms)
- E-Health (NHS Spine, Telemedicine, N3 Security, Patient Records)
- ERP Systems (optimisation, visualisation, business logic and integration)
- Financial Transaction Processing (Payroll processing, Real-Time TOMS, Erlang)
- Gaming (GPU Optimisation, Algorithms, Gambling Regulation compliance)
- Geolocation / GIS Systems / Gazetteer Management Systems
- Image Processing (Algorithms, Optimisation, Visualisation)
- Mobile Workforce (Scheduling and Route Planning)
- SEO optimisation (Algorithms / Penguin update issues)
- Software Reliability and Testing (Fault Tree Analysis / FEMA)
- Text Messaging (Architectures, Charging Algorithms, Communications)
- Web Architectures (Real-Time Event Processing, Reactive Blocking Architectures)
- Wireless Sensor Networks (Data comms, Data Storage)
|
OPCFW_CODE
|
Automate running backups and restores w/ fusion
What this PR does / why we need it:
Creates an automation script to trigger and wait for Fusion backups and restores
Which issue(s) this PR fixes:
Work for https://github.ibm.com/IBMPrivateCloud/roadmap/issues/64245 & https://github.ibm.com/IBMPrivateCloud/roadmap/issues/64247
Special notes for your reviewer:
How the test is done?
Setup cluster using instructions from #2142
Update variables in script to match expected (ie BACKUP_STORAGE_LOCATION_NAME)
run auto-br.sh with appropriate parameters
verify backup and restore complete
Outstanding items:
[x] Usage function needs to be updated
[x] Variables at the top of the script need to either be parameterized or updated via env.properties
[x] prereq checks for proper variables set before running the script
[x] update logging statements
[x] clean up commented out code and remove defaults for values that shouldn't have one
How to backport this PR to other branch:
Add label to this PR with the target branch name backport <branch-name>
The PR will be automatically created in the target branch after merging this PR
If this PR is already merged, you can still add the label with the target branch name backport <branch-name> and leave a comment /backport to trigger the backport action
This PR is a bit ugly because I started work on this script before #2142 was merged in. The changes specific to this PR start from https://github.com/IBM/ibm-common-service-operator/pull/2157/commits/f9ce24b765f42f7579568a5554fe3181eb4309f6. I also opened the same PR in my fork branch for clarity on the difference: https://github.com/bluzarraga/ibm-common-service-operator/pull/4/files
I have passed the test for Backup automation part.
Result:
./auto-br.sh --backup --backup-name cs-application-backuo-test --cluster-type hub --target-cluster cutie1
All arguments passed into the auto-br.sh: --backup --backup-name cs-application-backuo-test --cluster-type hub --target-cluster cutie1
[✔] oc command available
[✔] yq command available
[✔] oc command logged in as kube:admin
# Creating Spectrum Fusion backup resource for hub cluster.
[INFO] Copying template files...
[INFO] Editing backup yaml...
backup.data-protection.isf.ibm.com/cs-application-backuo-test unchanged
[✔] Backup cs-application-backuo-test successfully applied on hub server https://api.cutie1.cp.fyre.ibm.com:6443 to backup target cluster cutie1
# Waiting for backup cs-application-backuo-test to complete...
Completed && Completed && oc get backup.data-protection.isf.ibm.com cs-application-backuo-test -n ibm-spectrum-fusion-ns -o jsonpath='{.status.phase}'
[INFO] backup cs-application-backuo-test can be further tracked in the UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backuo-test
[✔] backup cs-application-backuo-test completed successfully for cutie1.
[INFO] For more info, see job in the UI (https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backuo-test) or use "oc get backup cs-application-backuo-test -n ibm-spectrum-fusion-ns -o yaml | yq '.status'".
[✔] Backup cs-application-backuo-test of cluster cutie1 completed. See results in Fusion UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backuo-test
Updated some of the parameters and prereq checking after testing the script. I've got it running backups and restores consistently so I think this is ready to merge and ready to hand over to SERT team for further testing/use
Backup test pass
./auto-br.sh --backup --backup-name cs-application-backup-test
All arguments passed into the auto-br.sh: --backup --backup-name cs-application-backup-test
[✔] oc command available
[✔] yq command available
[✔] oc command logged in as kube:admin
# Creating Spectrum Fusion backup.data-protection.isf.ibm.com resource for cluster.
[INFO] Copying template files...
[INFO] Editing backup yaml...
backup.data-protection.isf.ibm.com/cs-application-backup-test created
[✔] Backup cs-application-backup-test successfully applied on hub server https://api.cutie1.cp.fyre.ibm.com:6443 to backup target cluster
# Waiting for backup.data-protection.isf.ibm.com cs-application-backup-test to complete...
[INFO] backup.data-protection.isf.ibm.com cs-application-backup-test can be further tracked in the UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backup-test
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: InventoryInProgress
[INFO] Current sequence status:
...
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: RecipeInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: SnapshotInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: SnapshotInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: DataTransferInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: DataTransferInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: DataTransferInProgress
[INFO] Waiting on backup.data-protection.isf.ibm.com cs-application-backup-test to complete. Current status: Completed
[✔] backup.data-protection.isf.ibm.com cs-application-backup-test completed successfully for .
[INFO] For more info, see job in the UI (https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backup-test) or use "oc get backup.data-protection.isf.ibm.com cs-application-backup-test -n ibm-spectrum-fusion-ns -o yaml | yq '.status'".
[✔] Backup cs-application-backup-test of cluster completed. See results in Fusion UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/backups/cs-application-backup-test
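The polling behaviour visible in the log above can be sketched roughly as follows (the real auto-br.sh does this in shell; this Python sketch is only illustrative, the Failed* phase check is an assumption, and runner is parameterized solely so the loop can be exercised without a cluster):

```python
import subprocess
import time

def wait_for_completion(kind, name, namespace="ibm-spectrum-fusion-ns",
                        interval=30, timeout=3600, runner=subprocess.run):
    """Poll the Fusion custom resource's .status.phase via oc until it
    reaches Completed (success) or a Failed* phase (failure)."""
    cmd = ["oc", "get", kind, name, "-n", namespace,
           "-o", "jsonpath={.status.phase}"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = runner(cmd, capture_output=True, text=True).stdout.strip()
        if phase == "Completed":
            return True
        if phase.startswith("Failed"):
            return False
        time.sleep(interval)
    raise TimeoutError(f"{kind}/{name} did not complete within {timeout}s")
```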
Restore process also passed:
./auto-br.sh --restore --backup-name cs-application-backup-test --restore-name cs-application-backup-test-1 --cluster-type spoke --target-cluster apps.cutie1.cp.fyre.ibm.com
All arguments passed into the auto-br.sh: --restore --backup-name cs-application-backup-test --restore-name cs-application-backup-test-1 --cluster-type spoke --target-cluster apps.cutie1.cp.fyre.ibm.com
[✔] oc command available
[✔] yq command available
[✔] oc command logged in as kube:admin
# Creating Spectrum Fusion restore.data-protection.isf.ibm.com resource for spoke cluster.
[INFO] Editing restore yaml...
restore.data-protection.isf.ibm.com/cs-application-backup-test-1 created
[✔] Restore cs-application-backup-test-1 successfully applied on hub server https://api.cutie1.cp.fyre.ibm.com:6443 to restore target cluster apps.cutie1.cp.fyre.ibm.com
# Waiting for restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete...
[INFO] restore.data-protection.isf.ibm.com cs-application-backup-test-1 can be further tracked in the UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/restores/cs-application-backup-test-1
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: InventoryInProgress
[INFO] Current sequence status:
null
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Current sequence status:
...
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestorePvcsInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: RestoreEtcdInProgress
[INFO] Waiting on restore.data-protection.isf.ibm.com cs-application-backup-test-1 to complete. Current status: Completed
[✔] restore.data-protection.isf.ibm.com cs-application-backup-test-1 completed successfully for apps.cutie1.cp.fyre.ibm.com.
[INFO] For more info, see job in the UI (https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/restores/cs-application-backup-test-1) or use "oc get restore.data-protection.isf.ibm.com cs-application-backup-test-1 -n ibm-spectrum-fusion-ns -o yaml | yq '.status'".
[✔] Restore cs-application-backup-test-1 to cluster apps.cutie1.cp.fyre.ibm.com completed. See results in Fusion UI here: https://console-ibm-spectrum-fusion-ns.apps.cutie1.cp.fyre.ibm.com/backupAndRestore/jobs/restores/cs-application-backup-test-1
|
GITHUB_ARCHIVE
|
Management Consultant for a Data & Analytics Consultancy
Are you passionate about creating business value with data and want to utilize your entrepreneurial mindset at a place where you have a lot of autonomy?
Then one of my clients is looking for a Management Consultant, where you can do just that.
They are on their way to creating a leading consulting company at the intersection of Business Development, Analysis and Data. At the center of their success is a strong passion for creating lasting business value through data and analysis. This is complemented by a constant eagerness to learn, combined with integrity and teamwork.
Some of their clients include Spotify, King, Trustly, Fishbrain and Epidemic Sound.
Success for us is not short-term financial results but building a company focusing on the long-term, prioritising the development of an amazing team.
What you will do
As a Management Consultant within Analytics you would be exposed to a variety of data projects spanning from strategy to implementation. Depending on your profile, you could be advising our clients on how to harness the value of their data, for example by using advanced analytics to minimize customer churn.
Who you are
* Have a few years of consulting experience
* Are a wiz at client communication and relationship building
* Understand how data and analysis can create value and are able to find relevant use cases at our clients
* Are skilled in analysis and have some experience with SQL and/or Python
* Have experience with one or a few cloud platforms such as GCP, AWS or Azure
* Have an understanding of agile methodologies
Experience in any of these areas is advantageous:
* Strategic projects within data
* Project management
* Cloud architecture
* Data modelling
To succeed in the role, we also believe that you:
* Like to familiarize yourself with business challenges and are passionate about helping customers create value
* Are analytical and solution-oriented, and love to learn new things
* Believe that the best solutions are created in cross-functional teams
* Want to continue building Data Edge together with us!
What we offer
* Possibility for hybrid work depending on the project and the client
* Learning from and working with some of the best data professionals in Stockholm
* Learning about the latest trends and working with the latest tech on the market
* Great culture combining passion for data, integrity and teamwork
* A chance to build Data Edge together with us, to have autonomy and apply your entrepreneurial spirit
* An office in central Stockholm should you want to socialize with colleagues
* Conferences, trips, social events and knowledge sharing sessions for bringing the team together
* Competitive market salary
* Other benefits such as wellness allowance, 30 days of annual vacation, coverage for competence development, private health insurance and more
To find out more about this role (and others), apply below or contact me via LinkedIn, phone or email.
+46 8 502 425 47
|
OPCFW_CODE
|
Following the flow of the Kruskal algorithm, the weight relationships among the edges of the minimum/maximum spanning tree are mapped onto a binary tree (the Kruskal reconstruction tree)
The specific implementation is also very simple
In the original Kruskal algorithm, each time an edge joins two points that are not yet in the same set, a new node is opened
Then the root (ancestor) nodes of the two components are connected as its children ...
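The construction described above can be sketched in a few lines. This is a minimal Python sketch of the standard technique, with names of my own choosing, not code from the original post:

```python
# Kruskal reconstruction tree: process edges in weight order; every union of
# two components creates a new internal node carrying the edge weight, whose
# children are the two components' current roots.

def kruskal_reconstruction_tree(n, edges):
    """edges: list of (w, u, v) with 0-based vertices.
    Returns (tree_parent, weight) over up to 2n-1 nodes; ids >= n are internal."""
    dsu = list(range(2 * n - 1))      # union-find over original + internal nodes
    tree_parent = [-1] * (2 * n - 1)  # parent of each node in the new binary tree
    weight = [0] * (2 * n - 1)        # edge weight stored on each internal node

    def find(x):
        while dsu[x] != x:
            dsu[x] = dsu[dsu[x]]      # path halving
            x = dsu[x]
        return x

    nxt = n                           # id of the next internal node to "open"
    for w, u, v in sorted(edges):     # ascending weights -> min spanning tree
        ru, rv = find(u), find(v)
        if ru != rv:                  # two points not in the same set
            weight[nxt] = w
            tree_parent[ru] = tree_parent[rv] = nxt  # connect the ancestor nodes
            dsu[ru] = dsu[rv] = nxt
            nxt += 1
    return tree_parent, weight
```

A useful property of the result: for two leaves, the weight stored on their lowest common ancestor is the bottleneck (maximum edge weight) on the minimax path between them.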
Posted by dink87522 on Thu, 28 Apr 2022 01:00:33 +0300
The solution of this problem is rich and colorful!
The segment tree approach is excellent
Very little code
The idea of maintaining +1 range updates on a segment tree is obvious
How can we maintain the space without exploding?
We found that although n × m is very large
q is much smaller than n × m
In other w ...
Posted by Bobulous on Wed, 27 Apr 2022 15:31:28 +0300
java data structure and algorithm question-brushing directory (Sword finger Offer, LeetCode, ACM) --- main directory ------ continuously updated (if a link doesn't open, it means I haven't finished writing it): https://blog.csdn.net/grd_java/article/details/123063846
Train of thought analysis
Double pointer traversal method, first get the subs ...
Posted by melissal on Wed, 27 Apr 2022 11:41:31 +0300
Alice and Bob share an undirected graph with n nodes and 3 types of edges:
Type 1: can only be traversed by Alice.
Type 2: can only be traversed by Bob.
Type 3: can be traversed by both Alice and Bob.
You are given an array edges , where edges[i] = [typei, ui, vi] means that there is a bidirectional edge of type typei between nodes ui and vi . Pleas ...
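The snippet is cut off, but the setup above is a standard two-union-find exercise. A hedged Python sketch (the function name and return value are my own choices, not from the post) of checking whether a given edge set leaves the graph fully traversable by both players:

```python
class DSU:
    """Minimal union-find with path halving."""
    def __init__(self, n):
        self.p = list(range(n))

    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def fully_traversable(n, edges):
    """edges[i] = [type, u, v] with 1-based u, v; type 3 edges are shared."""
    alice, bob = DSU(n), DSU(n)
    for t, u, v in edges:
        u, v = u - 1, v - 1
        if t == 3:                    # shared edge counts for both players
            alice.union(u, v)
            bob.union(u, v)
        elif t == 1:
            alice.union(u, v)
        else:
            bob.union(u, v)
    def components(d):
        return len({d.find(i) for i in range(n)})
    return components(alice) == 1 and components(bob) == 1
```

Type 3 edges are applied to both structures, which is why they are the most valuable edges to keep.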
Posted by Tokunbo on Wed, 27 Apr 2022 02:30:06 +0300
Data structure experiment
Chapter 1: personal library information management system; Chapter 2: parking lot management; Chapter 3: Huffman coding
The sequential representation of a linear table is also called a sequential storage structure or sequential image. Sequential storage definition: a storage structure that stores logicall ...
Posted by Heero on Tue, 26 Apr 2022 16:42:53 +0300
Today's talk is about merge sort. To make merge sort easier to understand, let's first look at a LeetCode algorithm problem; working through its solution will make it easier for us to understand the idea of merge sort.
The idea for solving this problem is actually th ...
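For reference, the merge sort idea the post builds toward can be sketched as follows (a minimal Python sketch of the standard algorithm, not the post's own code):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps the sort stable
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])              # one of these is already empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Split in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```

Each recursion level does O(n) merging work across O(log n) levels, giving the familiar O(n log n) bound.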
An undirected graph with n nodes is given by adjacency matrix. Each node represents a node in a network. Give another array
Posted by msarefin on Tue, 26 Apr 2022 09:58:44 +0300
Python container you don't know
Python container you don't know
Yesterday, I read the fifth chapter "Common Data Structures in Python" in Python Tricks: The Book, which introduces the usage and precautions of data structures such as dictionary, array, se ...
Posted by omidh on Mon, 25 Apr 2022 14:54:26 +0300
Data structure binary tree entry Go language implementation
We have been talking about one-to-one linear structures, but in reality there are many one-to-many situations to deal with, so we need to study this one-to-many data structure, the "tree", and consider its various characteristics to solve the relevant problems ...
I HashSet overview
HashSet is an implementation class of Java's Set collection. Set is an interface that extends Collection; in addition to HashSet, its implementation classes include TreeSet. The HashSet collection is very common and is also a knowledge point programmers are often asked about during interviews. The following is the structure diagram
Posted by darlingm on Sun, 24 Apr 2022 16:29:03 +0300
|
OPCFW_CODE
|
How to deal with the SystemRequirements: C++11 NOTE when running R CMD check
For the development version of R-devel, which will become 4.3.0, there is a new warning in R CMD check that comes up for some packages:
* checking C++ specification ... NOTE Specified C++11: please drop specification unless essential
(In earlier versions of R-devel, it said please update to current default of C++17.)
Link to the code that generates the NOTE.
CRAN is now asking to fix and resubmit packages which raise this NOTE.
This happens when the package's DESCRIPTION file has the following:

SystemRequirements: C++11
Packages that use C++11 generally would also have the following in the src/Makevars and src/Makevars.win files (and src/Makevars.ucrt, if present):

CXX_STD = CXX11

This tells R to use the C++11 standard when compiling the code.
To understand the NOTE, a bit of history will be helpful:
- In R 3.5 and below, on systems with an old compiler, it would default to using the C++98 standard. If a package needed a C++11 compiler, the DESCRIPTION file was supposed to have SystemRequirements: C++11, and the src/Makevars files needed to have CXX_STD=CXX11. However, systems with newer compilers appear to default to C++11, even without setting CXX_STD.
- In R 3.6.2, it defaulted to compiling packages with the C++11 standard (if the compiler supported C++11 -- and in practice, essentially all systems by that time had a C++11 compiler).
- In R 4.0, it required a C++11 compiler, so SystemRequirements: C++11 was no longer necessary.
- In (the forthcoming) R 4.3, it raises a NOTE if SystemRequirements: C++11 is present, which will block a package submission to CRAN.
How to fix it
- Edit the DESCRIPTION file and remove the SystemRequirements: C++11 line.
- Remove CXX_STD=CXX11 from src/Makevars and src/Makevars.win (and src/Makevars.ucrt, if present).
After making these changes, the package should install without trouble on R 3.6 and above. However, on R 3.5 and below, there may be systems where it won't build (these are systems with very old compilers on them). Note that in my testing on GitHub Actions, with R 3.5 on Ubuntu 20.04 and 22.04, packages using C++11 build just fine. But on systems with older compilers, it could potentially be a problem.
If you want to be confident that your package will still be installable on R 3.5 and below with older compilers, then you need to use a
configure script at the top level of the package, and have it add
CXX_STD=CXX11 for R 3.5 and below.
I think something like this would work if you don't already have a src/Makevars; the configure script could be modified to write the entire contents if you do. Adapted from here.
If you require building on R <= 3.5.x, you must specify CXX_STD=CXX11 in src/Makevars on those systems. One way to do this is to create the following files, which check the R version and, if it is < 3.6.2, set that flag in Makevars. This assumes you don't already have a configure script.
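The files themselves did not survive extraction; the following is only a sketch of the idea under the assumptions above (the file layout, version cutoff, and Rscript probe are my reconstruction, not the original author's code):

```shell
#!/bin/sh
# Hypothetical configure sketch: force the C++11 standard only on R < 3.6.2,
# where C++11 was not yet the compile default.
: ${R_HOME=`R RHOME`}
NEED_CXX11=`"${R_HOME}/bin/Rscript" -e 'cat(getRversion() < "3.6.2")'`
if [ "$NEED_CXX11" = "TRUE" ]; then
  echo "CXX_STD = CXX11" > src/Makevars
else
  # On newer R, an empty src/Makevars leaves the default standard in place.
  : > src/Makevars
fi
```

A matching configure.win (or Makevars.win handling) would be needed for Windows builds; the same version check applies.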
|
OPCFW_CODE
|
How to Write a Basic Text File in PHP for HTML5 and CSS3 Programming
Often, you’ll want to do something in PHP as simple as record information from a form into a text file for HTML5 and CSS3 programming. Here is a simple program that responds to a form and writes the input to a text file.
The code for this form is basic HTML.
When the user enters contact data into this form, it will be passed to a program that reads the data, prints out a response, and stores the information in a text file.
The more interesting behavior of the program is not visible to the user. The program opens a file for output and prints the contents of the form to the end of that file. Here are the contents of the data file after a few entries:
first: Andy last: Harris email: firstname.lastname@example.org phone: 111-1111
first: Bill last: Gates email: bill@Microsoft.com phone: 222-2222
first: Steve last: Jobs email: email@example.com phone: 333-3333
first: Linus last: Torvalds email: firstname.lastname@example.org phone: 444-4444
first: Rasmus last: Lerdorf email: email@example.com phone: 123 456 7890
The program to handle this input is not complicated. It essentially grabs data from the form, opens up a data file for output, and appends that data to anything already in the file. Here’s the code for addContact.php:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>addContact.html</title>
<link rel = "stylesheet" type = "text/css" href = "contact.css" />
</head>
<body>
<?php
//read data from form
$lName = filter_input(INPUT_POST, "lName");
$fName = filter_input(INPUT_POST, "fName");
$email = filter_input(INPUT_POST, "email");
$phone = filter_input(INPUT_POST, "phone");

//print form results to user
print <<< HERE
<h1>Thanks!</h1>
<p>
Your spam will be arriving shortly.
</p>
<p>
first name: $fName <br />
last name: $lName <br />
email: $email <br />
phone: $phone
</p>
HERE;

//generate output for text file
$output = <<< HERE
first: $fName
last: $lName
email: $email
phone: $phone
HERE;

//open file for output
$fp = fopen("contacts.txt", "a");
//write to the file
fwrite($fp, $output);
fclose($fp);
?>
</body>
</html>
The process is straightforward:
Read data from the incoming form.
Just use the filter_input mechanism to read variables from the form.
Report what you’re doing.
Let users know that something happened. As a minimum, report the contents of the data and tell them that their data has been saved. This is important because the file manipulation will be invisible to the user.
Create a variable for output.
In this simple example, you print nearly the same values to the text file that you reported to the user. The text file does not have HTML formatting because it’s intended to be read with a plain text editor. (Of course, you could save HTML text, creating a basic HTML editor.)
Open the file in append mode.
You might have hundreds of entries. Using append mode ensures that each entry goes at the end of the file, rather than overwriting the previous contents.
Write the data to the file.
Using the fwrite() function (or its alias fputs()) writes the data to the file.
Close the file.
Don’t forget to close the file with the fclose() function.
The file extension you use implies a lot about how the data is stored. If you store data in a file with a .txt extension, the user will assume it can be read by a plain text editor.
The .dat extension implies some kind of formatted data, and .csv implies comma-separated values. You can use any extension you want, but be aware that you will confuse the user if you give a text file an extension like .pdf or .doc.
|
OPCFW_CODE
|
Hacking dummies or how to go on other people's computers
I will teach you how to go to someone else's computers - to sit on the Internet at someone else's account (Freebies) .
Explains the principle: When you access the Internet, you enter the network (that is, you can go to other computers!) .
- Go to "My Computer" -> "Control Panel" -> "Network".
- The following components must be installed.
- Microsoft Network Client
- Remote access controller
- TCP / IP
- Most likely you do not have "Client for Microsoft networks" and "NetBEUI".
- Button "Add" -> "". In the window that opens, select "Client"; on the right, "Client for Microsoft networks".
- Button "Add" -> "".
- In the window that opens, select "Protocol".
- In the window that opens, select "Microsoft" - "NetBEUI".
- That's it. Click "OK" and reboot.
Next we configure the connection properties:
- Go to "My Computer" -> "Remote Access to the Network."
- Choose your connection, call "Properties".
On the "Server Type" tab should be ticked on:
- Sign in to the network
- Software data compression
- TCP / IP
- Add anything that is missing! So, everything is OK with the settings.
- You need 2 programs:
Shared Resourse Scanner 6.2 - the scanner itself, which connects to other people's computers.
- PwlToolsNet 6.5 - it decrypts the passwords that you “borrowed”.
Download programs, install (it is easy) them.
You enter the Internet, press "START" "Run".
Enter winipcfg in the field
In the window that opens, note down your IP address!
It looks something like this: 192.168.255.45, i.e., four numbers separated by dots.
Now run srs (Shared Resourse Scanner 6.2 that you downloaded)
On the right is the control panel. Where there are 2 fields, which are divided into points, enter the IP address
Suppose you have 192.168.255.45; then in the upper field you enter the entire IP address, but change the last number to "1":
It was 192.168.255.45, you write 192.168.255.1
Do the same in the bottom field, but change the last number to "255":
It was 192.168.255.45, you write 192.168.255.255. Now in "Time Out:" enter "100".
Computers will appear in the program window. A thin font indicates protected computers, bold indicates poorly protected ones, and bold red indicates fully open ones.
Go for the computers marked Win98/Me.
You select a bold or red computer -> "Open". In the window that opens, after a while, the lamer's discs will appear.
Select the drive "C", then the folder "Windows", "Win98" or "Win" - in general, the Windows 98 system folder.
Find files with a * .pwl file extension (But if PwlNetTools works fine for you, then the file will be in the form of an icon).
So, we copy it to ourselves.
- Open the PwlNetTools program.
- Button "Browse" -> Select the file that we downloaded.
- Button "CheckPass" We learn passwords.
That's all !!!
PS By the way about catching self-taught hackers ...
You go to the Internet under someone else's passwords, Sit, in short, enjoy.
The person from whom you "borrowed" the Internet also wants to go.
Here he comes - "Checking the name and password."
"Error: Such user exists."
He, swearing, calls the phone support.
-You, blah, che create! I do not go, blah!
-Oh, now we'll see!
They look - this is already climbing in the vast WWW.
Determine the phone - Pease @ es!
PPS To prevent this from happening, determine at what time this teapot sits on the Internet.
|
OPCFW_CODE
|
M: Ask HN: What was the chances the poland president die in a plane crash ? - mickeyben
I already heard it's about 1 on 10.5 millions for anyone, but what about the chances it was the president ?
R: rdl
Tupolevs aren't exactly the safest of planes, and I think the President flew a
lot more hours than most people.
Also, I wouldn't be surprised if there was pressure put on the aircrew to not
divert from their schedule/destination, due to VIPs.
I wouldn't rule out conspiracy theories, but I would definitely say there was
a higher than baseline chance that the official story is what happened.
R: regularfry
Apparently it should be taken as significantly higher for Polish politicians,
given the safety record of their air fleet. They had a previous prime minister
survive a helicopter crash in 2003.
R: hga
Reasonably low ... but I can't see appointing Putin to head the investigation
as anything other than a slap in the face.
R: CamperBob
That's a complex question. The answer depends on whether there were more Poles
on the left side of the plane, or the right.
|
HACKER_NEWS
|
Add A SpawnPool Component
1. Create an empty GameObject, select it and add...
Component > Path-o-logical > PoolManager > SpawnPool.
You should now see this in the Inspector:
SETUP TIP: You can add a SpawnPool component to any GameObject, or even add several. For example, you could have all the pools you need for a particular level/map in one GameObject, then make that a prefab and load it with the level. However, it would probably be easier to work with if you kept 1 per GameObject and just made them all a child of a prefab.
You could also keep a prefab per SpawnPool as templates.
SETUP TIP: Keep your Pool
Names as generic as possible. Think about archetypes in your game, e.g.
"Enemies", "Projectiles", etc.
2. Enter the name you want to use to access the pool. If you don't enter a name, PoolManager will use the name of the GameObject (without the word "Pool" if found, so "MyPool" would automatically create a pool named "My" - this only happens when the field is left blank, otherwise the field value will be used as is).
When an instance is initially created, it will be scaled relative to the Spawn Pool's GameObject. This is great for sprite-based GUI systems, such as nGUI, which often uses a small scale.
When an instance is initially created, its layer will be set to match the Spawn Pool's GameObject.
Activates Unity's DontDestroyOnLoad behavior so the SpawnPool's GameObject is persistent. See the Unity documentation for more information.
Even if you use PoolManager exclusively through scripting, Log Messages
may come in handy during development. Turn this on and start the game to see what PoolManager is doing (messages are shown in the Unity Console). You can also turn this on and off during game-play using PoolManager.LogMessages
You can choose to log messages for the entire SpawnPool, or per-prefab (explained on the next page)
First, add the namespace 'using' line to the top of the script file, e.g. 'using PathologicalGames;'
Then you can access the code in PoolManager to begin integrating it with your project. For example, you would do this with Unity's methods:
// Create an instance
GameObject myInstance = Instantiate(myPrefab);
// Destroy an instance
Destroy(myInstance);
With PoolManager, using a pool named "Shapes", you would do this instead:
// Spawn an instance (with same argument options as Instantiate())
Transform myInstance = PoolManager.Pools["Shapes"].Spawn(myPrefab.transform);
// Despawn an instance (makes an instance available for reuse)
PoolManager.Pools["Shapes"].Despawn(myInstance);
Prefabs versus Instances
To clarify the difference, think of instances as copies, or clones, of a prefab. When you use Spawn() you are telling PoolManager you want a clone, so pass a prefab to Spawn(). When despawning, you are telling an instance you are done with it, so pass an instance to Despawn(). If you pass an instance to Spawn(), you will make a new instance, but it will also start a new PrefabPool because it is a copy of something new.
Manage Instances using the OnSpawned() and OnDespawned() Events
Awake() only runs when an object is instantiated and will not be called again when spawned by PoolManager. You can continue to use Awake() to initialize references and do any time-expensive initialization tasks, but two events are made (optionally) available to manage the behavior of instances as they spawn and de-spawn: "OnSpawned()" and "OnDespawned()".
For example, use Awake() to cache your Transform for internal use in your script, and use OnSpawned() to re-initialize state, such as the level of a monster.
|
OPCFW_CODE
|
SMF Packages Not Installing
On two Simple Machines Forum (SMF) forums, whenever I tried to install a package, I would see a message that the package installed successfully, but whenever I checked Installed Packages, I would see "No mods currently installed".
I would click on Browse Packages, then click on Apply Mod. In some cases, if there was no temp directory, I would see the message below:
An Error Has Occured!
You cannot download or install new packages because the Packages directory or
one of the files in it are not writable!
If I created the temp directory and gave it permissions of 777 with mkdir temp; chmod 777 temp, I would be able
to click on Apply Mod and see the Installation Readme
page where I could click on Install Now. I would then see
a message that I would be redirected to the application's configuration page,
if it had one, but I would instead just be redirected to the forum's main page
or, if the package didn't have a configuration page, I would see the message
The package was installed successfully. You should now be able to use whatever
functionality it adds or changes; or not be able to use functionality it removed.
But when I would check Installed Packages afterwards, the package
would never be there.
When I checked the Forum Error Log which is under the Admin
link for the forum, I would see lots of entries similar to the following:
fopen(/home/jdoe/example/forum/Sources/ManagePermissions.php) [function.fopen]: failed to open stream: Permission denied
Some entries might have gzwrite instead of fopen:

2: gzwrite(): supplied argument is not a valid stream resource
Under Download Packages, for FTP Information Required, I saw "To download packages, the Packages directory and files in it need to be writable - and they are not currently. The package manager can use your FTP information to fix this." I put in the FTP username, password, and forum information. I replaced "/forum" with the full path to the forum in the Local path to SMF field, "/home/jdoe/example/forum". The message went away when I tested the settings.
I was then able to select a package from beneath Browse Packages,
choose Apply Mod, then Install Now and then see the package
appear when I checked Installed Packages.
Adding httpBL to Block Forum Spammers
There is a Simple Machines Forum mod, httpBL, that can help you combat forum spammers. This mod uses the http:BL API from Project Honey Pot to stop spammers from accessing your forum. The mod is completely compatible with M-DVD's Stop Spammer mod. You can have both mods installed, or only one of them, to stop the spammers in your forum, but I would recommend you use both.
The developer of the httpBL module describes the differences between
the two modules as follows:
- MOD Stop Spammer checks the database from Stop Forum Spam, while MOD httpBL checks the database from Project Honey Pot. A lot of spammers are already in both databases, but some spammers are only in one of them, so it won't be a bad idea to check both databases anyway.
- MOD Stop Spammer checks whether the visitor is a spammer when they try to register on the forum, while MOD httpBL checks them as soon as they arrive at the forum and redirects them to a file called warning.php, making the whole site invisible to them. This way even harvesters (robots that never post in a forum, but search for email addresses to send them spam later) and any other kind of malicious web robots cannot even see any part of the site.
As recommended on the
httpBL mod webpage, you should read the well-written
tutorial prior to installing the module.
There are some steps you need to take prior to using the module as is
explained in the tutorial.
Once you've taken the above steps, you can install the
httpBL module the way you normally would install an SMF module.
E.g., you could take the following steps to install it after you download the httpBL module.
- Log into your SMF forum with an administrator account.
- Click on Admin.
- Click on Packages.
- Click on Download Packages.
- Under Upload a Package, click on the Browse button
to browse to where you've downloaded the module then click on the
Upload button once you've selected the zip file you downloaded.
- To the right of the module name, which is httpBL, you will
see an Apply Mod link; click on it.
- You will then see the installation readme file. Click on the
Install Now button at the bottom of the page. Provide the
password for the administrator account, if prompted. You should then
be taken to a forum webpage where you can configure http:BL. If
you need to reconfigure it later, you should see the option
Mod httpBL listed under Members when you click on Admin.
Note: if you aren't using the default theme, you may need to edit
index.template.php manually as explained in the tutorial.
|
OPCFW_CODE
|
Most of the people suggested that the problem is with the memory, which is not the case here. I had tested the memory before posting the problem to the list; to be doubly sure, I tested it again and found the memory is OK. I have tried to see the
POST messages by connecting the ttyA to a working machine serial port using
tip, but the POST doesn't show any errors. The m/c hangs after completing the
RAM test without showing any error. Finally I have come to the conclusion that
the Motherboard has gone BAD :((. Needs replacement. My case seems to be
same as Davidson's case.
I have tried the following also, but no use.
1. setenv auto-boot? false
2. test-all -- doesn't show any errors
3. As a routine check tried replacing the Powersupply unit.
4. Press StopA before the memory test and give 'boot net' -- just to try to boot from network
5. Press StopA before the memory test and give ' boot disk0'
If someone has any suggestions, please pass them on to me. I still want to know why the
hell it's hanging without giving any error message.
Thanks to the following people for their suggestions. Sorry if I have missed out anybody.
1. William Teo
Suggested to replace the RAM and try.
2. Brent Parish:
I just had this with a brand new Ultra 60. The Sun tech replaced the motherboard before
realizing it was the RAM. It's flaky - the banner would show 256 MB (the proper amount), but it
would not get to the ok prompt (if auto-boot? = true). If you can limp along on less RAM, try pulling
chipsets until it does boot (isolate bad chipsets) - be aware when pulling whether they need to be
in pairs or fours (Ultra 1 I think is pairs, Ultra 2 in fours).
Unix/Network Systems Administrator
26 Landsdowne Street
Cambridge, MA 02139
3. "Davidson, T. (CIV)" <email@example.com>:
I have had 3 Ultra 1's with openboot ver 3.0 go bad. I had to swap the
mother boards out with new ones under warranty. I believe the problem is
that the information in the openboot prom gets corrupted. The openboot
prom is soldered on the mother board, starting with the Ultras. The lower
models you could swap out the NVRAM and the OpenBoot prom. I Know the
Information in the soldered on openboot proms can be reloaded but I
haven't tried it yet. I need to find out exactly how to do this since we
have 30 of these ultras. Sorry I couldn't help much but thats my
experience with these ultra 1's. If someone sends you the info please let
me know. Thanks.
I don't know if you have already tried this:
You said it hangs at the memory test. Is it a memory problem? Try running test mem
at the OK prompt, or remove all memory and try with only one module, then
gradually add other modules after confirming that there is no problem with
the memory and the test goes through.
How about checking probe-scsi? Does the system recognise the disk?
If no, then either the disk is bad or the motherboard has some problem.
If yes, then try specifying the boot disk, as boot disk3 or boot disk0 etc., or if
the system has a CDROM, try booting from CD. If this is a new system, I guess
you have a jumpstart setup; try doing a boot net on the system - that will
confirm the motherboard and memory are working.
hope this helps, If not then you got a serious problem.
Good luck ...Sandeep
5. "Robert T. Clift" <firstname.lastname@example.org>:
Try setting the OBP variable selftest-#megs to all the RAM in your system and
let it check every memory SIMM upon boot. Later,
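At the ok prompt that looks something like the following (the 256 is illustrative - set it to the machine's installed RAM; diag-switch? additionally enables the fuller POST):

```
ok setenv selftest-#megs 256
ok setenv diag-switch? true
ok reset-all
```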
6. email@example.com and Janet Hoo suggested:
Have you tried doing a "tip hardwire" from a good machine (attach a null
modem cable to serial port A of the broken machine and serial port B of the
good machine and no keyboard attached to the bad machine - make sure bad
machine is powered down when you take it's keyboard away)? That way you
can see the PowerOnSelfTest (POST) run. There's lots of information
available there about the hardware....Robin
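On the good machine, the tip hardwire command relies on the stock Solaris /etc/remote "hardwire" entry, which points at serial port B and looks roughly like this (the device path can vary by release):

```
hardwire:\
	:dv=/dev/term/b:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:
```

Once the machines are cabled as described, running `tip hardwire` on the good machine shows the POST output from the broken one (~. ends the tip session).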
7. firstname.lastname@example.org (Bismark Espinoza):
Put it in diagnostic mode and perform all tests.
8. Jacques Rall <email@example.com>:
What about running the 'test*' commands at the OK prompt?
Maybe it will pick up something obscure.
9. Ade E Oyeyemi:
boot -as ====>> This should interactively take you to single user mode;
reply as appropriate to all prompts, mostly the default will suffice.
Let me know how you get on.
My problem posting:
> Two of our Ultra-1's (Ultra-1@143MHz, OpenBoot Ver 3.0) are hanging immediately
> after initializing the RAM. It doesn't even come to the OK prompt.
> Problem symptoms: After the banner message and memory initialisation the machine
> hangs without coming to the OK prompt. After that, it stops responding.
> If we abort the memory initialisation with Stop-A, it comes to the OK prompt. At this
> stage I can run the Forth commands. If I give boot, again it simply hangs.
> The problem is not with the EPROM. I have replaced the EPROM and tried. Could any
> of you suggest where the problem is? (Without replacing the motherboard :-) )
> email: firstname.lastname@example.org
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:12:38 CDT
...and a New VM for Mom
Christmas is the time for giving, and for many people this means technology. It's certainly no different in my family—Dad got a new Gigabit Ethernet rig, my little brother got some games, and my niece got a portable DVD player. Mom? She got a new VM.
This probably needs some explanation. My mother is a professor of ancient civilization, and being in the educational profession means that she does a lot of research writing, some of which involves citations of ancient languages in their original form. This isn't too much of a problem with modern operating systems and applications that are capable of using Unicode character sets and encodings, but until just a few years ago, most operating systems and applications had to rely on platform- or even program-specific encoding technologies, and this was particularly true with dead languages like Latin and ancient Greek (the latter of which has different glyphs than modern Greek). As such, most of the manuscripts and papers that she wrote just a few years ago used program-specific character technologies.
Some of these tools, such as the old WinGreek, were married to specific technologies, such as a particular version of Microsoft Word for Windows, which in turn only ran on Windows 3.x. Thus, my Christmas present for Mom was to migrate her old systems into VMware Player so that she could still work with the original materials and programs, but using her everyday (modern) system, with the legacy stuff running in a virtual machine.
This was actually a bit more challenging than it sounds, given that some of her equipment is around 15 years old. For example, she had a Toshiba T4850CT laptop and an IBM PS/2-E mini-desktop (a laptop in a nonmobile form factor, basically), both of which were limited to PCMCIA cards for system I/O, and neither of which had built-in support for CD-ROM drives or networking capabilities (never mind USB). Meanwhile, the Toshiba laptop had 8 Mbytes of RAM, while the PS/2-E had less than 4 Mbytes.
Normally, I would simply remove the hard drives and use imaging software to create a snapshot of the drive, but that was a no-go in this situation. The hard drive in the laptop uses an older thick form factor that physically won't work with any of my other laptops and requires a special controller to get power over the IDE interface. As for the PS/2-E, the case was locked and Mom had long since lost the key. If I was going to do this, I had to use software.
Even though I have a multiboot rescue CD that is capable of working on these classes of system, the lack of CD-ROM drive support meant that it wasn't possible to use those tools in that form factor. Next I considered using the imaging tools from floppy, but Acronis True Image required multiple floppy disks to create a bootable image and also required more memory than either system had (Acronis uses a customized Linux-based kernel, which is very nice, but it also has high system requirements). Eventually I was able to get a single-floppy install of Norton Ghost 2003 (since discontinued) working with a Belkin direct-connect cable (a bidirectional, host-to-host parallel cable) that I picked up at Staples on Christmas Eve. In this setup, I booted the Toshiba laptop under the Ghost agent, and then used her modern PC to suck the contents of the hard drive into an image file across the direct-connect link. From there it was a simple process of creating a new VM in VMware Workstation, and then using Ghost to restore the image to the new VM.
This didn't work on the PS/2-E system, however, because it did not have enough memory to run the Ghost agent. For that machine, I used the old DOS interlnk utility to create a host-to-host network connection across the direct-connect cable, then mounted the old PS/2-E hard drive from within the modern PC. Once that was done, I was then able to use the old msbackup utility to create a backup of the system, and then was able to restore that backup to another VM.
I spent another day or two doing cleanup work, installing networking, and doing other minor tweaks to get the systems integrated into their new homes, and I still need to do a lot more. For example, I need to upgrade the systems to Windows for Workgroups 3.11 (look out, eBay, here I come), without breaking her legacy applications. Another problem is that Windows 3.x doesn't have support for the HLT instruction that puts the CPU into "idle" mode when nothing else is happening, so these VMs run the CPU at 100% whenever they are "on," which prevents her from doing much of anything else even with her modern kit, so I need to find a way to tame the CPU under Win 3.x. I also need to consolidate her files and get her VMs integrated into their normal backup schedule. All told, this is shaping up to be quite the project.
But still, it's a lot better than worrying about the calls that would come whenever one of her ancient machines eventually took the inevitable nose-dive. As difficult as this has proven to be, it certainly would be a lot harder if the machines weren't working, so all told this is time well-spent.
This whole process also has got me to thinking about how common this kind of problem must be. There are millions of old systems out there running on ancient hardware without any of the luxury technologies that we take for granted today, such as old NetWare servers and OS/2 application servers and SCO servers, and ... and many of those systems date back to the early '90s, with system specs that are appropriate to that era. Yet recovery tools are becoming harder to find for this class of machine—imaging software now routinely requires more memory than the average workstation had back then, and new versions often assume technology that simply didn't exist back then.
Thanks - comments inline.
On 20-Jun-17 23:48, Barry Warsaw wrote:
On Jun 20, 2017, at 11:18 AM, tlhackque via Mailman-users wrote:
> I'd like to deploy a new project on Mailman 3 (which I prototyped on
> 2.1). Unfortunately, it's organized as one mailing list with ~ 120
> topics. (The project follows another source, and it's much easier to
> spin up (or down) a new topic than to instantiate & manage a new list.)
> I see that the latest Mailman 3 release (Congrats!) doesn't support
> topics yet.
> Where is that on your todo list?
It's not. We've discussed topics quite a bit and unfortunately, I don't think
it's something we've seen enough traction on to want to support in Mailman 3.
We see something along the lines of "dlists" (dynamic sublists, as
originally implemented for Mailman 2 by Systers) as a better overall feature
for supporting sub-conversations in parent lists.
That said, I think topics could be a good candidate for a plugin, and the work
being done for GSoC this year will greatly improve the plugin architecture.
So while topics may not be a built-in feature, it could be an interesting
third party add-on.
That's disappointing; I was expecting feature parity.
Topics are a good
fit for my use case - which is a bunch of announce-only lists on
related topics. E.g. Project foo news, meeting agendas, meeting
minutes, x several dozen projects.
I suppose I can map these to virtual e-mail addresses for Mailman's
benefit (I have a process between the MTA and Mailman), but I'm not sure
how that will play out with list creation via the API & the user
interface. Perhaps creating a list isn't as heavyweight in V3 as it is in V2.
In V2, all I have to do to spin up a topic is use config_list to extract
the configuration, add the topic name, regexp & description, and
then push it back, which is easy to do on the discovery path. Creating
a list requires updating the MTA (aliases) & establishing the list's
admin controls, templates, separate directories, user registrations, etc...
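For reference, the V2 step described above amounts to editing a single attribute in the config_list dump; a sketch (the topic name, regexp and description below are placeholders):

```
# Excerpt of `config_list -o listname.cfg listname` output for Mailman 2.
# Each topics entry is (name, pattern, description, emptied-flag); push the
# edited file back with `config_list -i listname.cfg listname`.
topics_enabled = 1
topics = [('foo-news', '(?i)^foo[- ]?news', 'Project foo news', False)]
```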
Creating a new list also doesn't provide the 'unless you do something,
you get everything' default of V2's topics.... But I'll keep an open
mind. Many of the other changes you've made seem, at first blush, to be improvements.
Once I get V3 to where I can experiment, I guess I have more work to
do... Hoping for maybe a 3rd party plugin to appear doesn't constitute a
plan on my end :-) And I don't want to switch UIs on my users soon
after initial deployment.
I don't mean to sound negative - I'm excited about trying to make this
work. Just trying to share a perspective from outside your usual circle.
I seem to be about 1/3 of the way to getting MM3 up -- I finally got the
core running in developer mode on Fedora. I'll post my "new user" notes
on the process... they're not refined, but you only do something for
the first time once :-)
Hopefully they'll be helpful.
> First bug: Looking at
> If I set "results per page" to a large number & switch months, results
> per page reverts to the default (10).
> It should stick with whatever I select...
> It also would be nice to have an account preference for this, so it
> doesn't have to be set on each visit.
The best place to capture this is on the HyperKitty tracker:
Done, Issue #138
Thanks again for all the good work.
Mailman-users mailing list
I am inserting some values into a SQL Server table using C#. I want to insert the values and return the ID of the last inserted record. I build the values from the form fields (string name = txtname.Text.Trim(); string gender = Gender.Text.Trim(); string citizen = Citizen.Text.Trim();) and then execute an INSERT command. Also: which of the following two ways of inserting records gives better performance? The second approach looks faster than the first because it sends the INSERT commands at once, while in the first there is a round trip to the SQL Server for each ExecuteNonQuery.
You can use SqlCommand.ExecuteScalar to execute the INSERT command and retrieve the new ID in one query. An INSERT with an OUTPUT clause can return a result set (among other things); see OUTPUT Clause (Transact-SQL). From C# the command looks something like this: INSERT INTO table (name, information, other) OUTPUT INSERTED.ID VALUES (@name, @information, @other). SQL also has functions that return the last identity generated by a command: @@IDENTITY, or SCOPE_IDENTITY(), which is more specific - two statements are in the same scope if they are in the same stored procedure, function, or batch.
If you need the result as an int, cast the value returned by ExecuteScalar to (Int32). Note that the Microsoft .NET Framework Data Provider for SQL Server does not support the question mark (?) placeholder; use named parameters instead. For large batches, consider sending the inserts together (e.g. 2000 inserts) and then running a stored procedure (or other SQL command) that does the cross join and calculations locally, to save round trips.
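For a runnable sketch of the same insert-and-return-the-new-ID pattern, here is the idea in Python with the standard-library sqlite3 module (an illustration only - with SQL Server you would use SqlCommand.ExecuteScalar plus an OUTPUT INSERTED.ID clause instead, and the table name here is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

def insert_and_get_id(name):
    # One parameterized INSERT per call; the driver exposes the
    # generated key on the cursor, so no second query is needed.
    cur = con.execute("INSERT INTO products (name) VALUES (?)", (name,))
    return cur.lastrowid

first_id = insert_and_get_id("widget")
second_id = insert_and_get_id("gadget")
```

Batching many rows through executemany (or, on SQL Server, a table-valued parameter or bulk insert) is what actually saves the per-statement round trips discussed above.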
Behind the scenes experience – Up close program at Uganda Wildlife Education Centre
Behind the scenes experience – The Uganda Wildlife Education Centre (Entebbe Zoo) behind the scenes program gives you a real-time experience and close-up contact with wildlife, in terms of how a visitor and the animal keepers interact. This close-up interaction covers how the keepers prepare food for the animals and how a visitor participates in these activities. As a visitor you will have a personal petting interaction with the rhinos, the famous shoebill stork and the giraffes. Each one of these interactions is unique: getting to feel the rough skin of the southern white rhinoceros and being able to feed them, while seeing them scramble for your attention, is simply awesome. Having the giraffes feed from your hand and being sandwiched in between for a “selfie” is simply unforgettable. Then comes the interaction with the friendliest shoebill stork in its semi-natural environment, which is amazing.
In an effort to save the African Rock python, the Entebbe Zoo has designed a special snake conservation education talk as one of the activities of the behind the scenes programs. Here the visitor gets to personally pet the already tame pythons at the Centre if they do not have a phobia for the biggest snake on the African continent. This gets even more interesting when a visitor is asked to figure words out of the natural patterns on the snake.
In addition, the visitor is allowed to take beautiful photos with the giant reptile after being educated about how it was rescued, its habitat, threats and conservation status. It is therefore thrilling to be able to participate. This program runs for one and a half to two hours, Monday to Sunday, from 9am to 6pm.
Best things to do in Entebbe Uganda
Best things to do in Entebbe Uganda. Entebbe town is Uganda's gateway town, being home to the country's only international airport. Anyone flying to Uganda will obviously go through Entebbe town before they head to the capital or any other wildlife or safari site in Uganda. Entebbe is nestled gracefully on the shores of the largest fresh water body in Africa - Lake Victoria. Entebbe is one of the places Ugandans think of when it comes to beach holidays. Apart from beaches and breathtaking views, there are a number of activities that you can participate in whilst in Entebbe. Below we have listed some of the places you can visit or activities you can participate in.
Day shoebill trips: For any birder's list of must-see birds, the iconic shoebill stork is highly rated, being a critically endangered species of bird due to loss of habitat. Mabamba swamp, which is 45 minutes by boat from Entebbe and a 1-hour-30-minute drive, is considered the best spot; day trips to the swamp can be arranged in the morning or in the afternoon.
Visit Ngamba island chimpanzee sanctuary
This island is part of the Jane Goodall Institute research centre; it is 23 km south of Entebbe and can be accessed by boat.
The island has more than 45 chimpanzees roaming freely in its 100 hectares of forest.
Visit Uganda wildlife education Centre
This was first set up in colonial times to provide custody for rescued wildlife and was called the UWEC zoo; later it was transformed into an educational site. Many Ugandan wildlife species can be seen, including the chimpanzees.
Tour the botanical gardens
These 40 hectares are the haven of Entebbe; here you are advised to hire a local site guide, who are always available every day.
Here you can see tree species which are 100 years old, monkeys, beautiful birds and nice views of Lake Victoria.
Boat cruises on Lake Victoria
Boat excursions such as a normal bird watching cruise, or a sunset or sunrise cruise, can be arranged with both a speed boat and a traditional motorized canoe. However, these activities must be pre-booked, as boats are not available all the time.
Fishing trips on Lake Victoria
Lake Victoria is where the Nile perch was first introduced by the British colonial masters. Over the years the breeding became successful, making it one of the best places to catch Nile perch; the fish also spread to the Nile river and now breeds below the Murchison Falls.
Uganda reptile’s village
This is a nonprofit organization set up by Mr. Kazibwe Yasin to help rescue reptiles from people's homes and properties. Entebbe being a bit forested, sometimes snakes, pythons and other reptiles encroach on homes, which the locals are afraid of, so he will come and catch them and keep them at the reptiles village before releasing them in the wild.
Craft & souvenir shopping
Entebbe being an entry and exit point for many travellers and an ideal town for foreign experts, a lot of craft markets have been set up. Here you can go and buy beautiful carvings, African fabric, authentic cow products, pygmy art and craft, beautiful traditional drums and extremely unbelievable ceramic products.
Tour fishing villages
Entebbe has about 3 fishing villages. These can be visited by tourists as you try to learn about the life of people living in the fishing villages; these areas are usually overpopulated, with temporary buildings.
Boda Boda tours
This is one of Uganda's top activities and great fun, as you will ride on a motorcycle behind the rider; this is the most common type of transport in Uganda's main cities.
The rider will take you to beautiful places and shopping centers.
You can also rent a bicycle and ride around Entebbe with a guide who will take you to different places of interest around Entebbe town. Entebbe is considered safe, so it's easy to move at any time of the day or late evening.
When to book Entebbe city tour?
The best things to do in Entebbe Uganda and related activities can be arranged at any time of your convenience, except late evening and night time.
How to get to the activities?
A local taxi can be hired, or the tour operator can arrange transfers and book the activities for you.
Mode of payment
Cash is much preferred, as many of the activities are cheap and use of a credit card can incur surcharges of up to 4.5%. Great Adventure Safaris organizes the best things to do in Entebbe Uganda, and arranges the behind the scenes experience at UWEC in Entebbe town.
In my experience it's not getting any better for PGP messages that are not
composed in a basic text editor. Users composing messages on a mobile
devices, for example, do not always default to UTF8, they use the
system-wide character encoding setting (or the charset encoding specified
by the composing app itself).
For example, on iOS Apple basically says that if you don't know the original
encoding, you have to "guess" by trying various encodings until
you find one that works.
Fortunately, it usually only takes a few tries to get it right if it's not UTF-8.
I agree that UTF-8 should be preferred and enforced wherever possible. But
in cases where it is not, it would help if the sender was able to provide a
hint as to what the encoding actually is, and do so in a standardized
manner that can be easily implemented.
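Absent such a standardized hint, the trial-decoding approach described above can be sketched as follows (a minimal illustration in Python; the candidate list and its ordering are assumptions, not anything OpenPGP specifies):

```python
def decode_with_fallback(data,
                         candidates=("utf-8", "shift_jis", "iso-8859-7", "latin-1")):
    """Try each candidate encoding in turn; return (text, encoding used).

    latin-1 maps every byte value, so it acts as a last-resort catch-all.
    """
    for enc in candidates:
        try:
            return data.decode(enc), enc
        except UnicodeDecodeError:
            continue
    raise ValueError("no candidate encoding matched")

# A UTF-8 message is recognized on the first try...
text, enc = decode_with_fallback("naïve".encode("utf-8"))

# ...but a Latin-1 message can "succeed" under the wrong charset
# (here it decodes as Greek ISO-8859-7), which is exactly the
# mojibake hazard a sender-supplied charset hint would avoid.
wrong, guessed = decode_with_fallback("café".encode("latin-1"))
```

Note that "success" here only means the bytes were decodable, not that the guess was right - which is why a definitive indication from the sender is so much more reliable than guessing.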
On Tue, Mar 17, 2015 at 3:00 PM Tim Bray <tbray(_at_)textuality(_dot_)com>
This would be a huge step backward. The proportion of text on the internet
that is UTF-8 is monotonically increasing toward 100%. Thank goodness.
On Mar 18, 2015 4:38 AM, "Wyllys Ingersoll" <wyllys(_at_)gmail(_dot_)com>
One area that I think needs some attention is the character encoding and
charsets for encrypted text messages.
RFC 4880 says that everything should be UTF-8. However, the reality is that
UTF8 is not used everywhere and there are lots of clients that compose
messages in their native preferred character set (Latin5, Greek, Kanji,
etc) and it's very difficult as an implementor to figure it out after the
fact without some indication from the sender.
The literal packet format only specifies 3 possible values - binary,
UTF8, or plain. The ASCII Armor header may specify a different charset
(though unfortunately very few agents add the "Charset" PGP header).
Additionally, if the message had MIME headers, there may be yet another
charset indicated in MIME that differs from the ASCII Armor charset and the
literal packet data format byte.
If the encrypting PGP software knows what character encoding was used to
compose the original message, there should be some way to communicate this
in the message that would be definitive so that the decrypting software can
present it the way it was originally intended. As an implementor, this is
one of the trickiest areas to get right so that the end user sees the
message as it was originally intended.
openpgp mailing list
Intro to computer studies, grade 10 (open): use correct terminology to describe computer hardware; intro to computer programming assignment. This unit is determined by coursework of three assignments, graded at pass, merit or distinction. The first assignment is about developing an understanding of the hardware components that are found in a computer system, the devices that can be attached to those components, and different ... Learn about the parts of a computer: CPU, monitor, keyboard, mouse, printer, and router; this page features printable worksheets for students. Organizations incur elevated expenses and quality issues on computer hardware maintenance; one of the common complaints is poor service by the IT supplier.
Looking for operating system assignment help? An operating system functions as an intermediary between a user and the computer hardware. A training programme, understanding computers: 1. the key components of a computer system (hardware, software, data); 2. the basics of how computers work; 3. ... Job roles that involve installing and maintaining computer hardware include computer technician; assignment 3 - repair and upgrade, P4, P5, M3. Information systems assignment help: an information system is a collection of hardware, software, ... What is computer hardware?
This is a complete BTEC unit 2, including the assignment and scheme of work: BTEC unit 2, hardware and software. For more course tutorials visit www.uophelp.com: review the details of the instructions in appendix D; complete the troubleshooting computer hardware assignment by writing a 150-word response to each question, and post appendix D as an attachment. Content: computer benefit percentage (2/15); three adverts [PC, projector + screen]; (2) identify and describe hardware; 2010 assignment 1. Best practices of component-based software engineering: a dependency occurs when a component must execute on a particular computer or must interact with a hardware component. Management information systems assignment help - evaluate computer hardware: record today's date and view the specifications of a computer below. Assignment point - solution for best e-commerce hardware and software: software customization tools, computer expertise required of the merchant. BTEC unit 9, computer networks (assignment 3 - services), which typically refers to computer information or hardware devices that can be simply accessed from a ...
Hardware - a generic term used to describe any component of a computer system with a physical presence, which can therefore be seen and touched. Input devices are hardware devices which take information from the user of the computer system, convert it into electrical signals and transmit it. Computer software, or simply software, is a generic term that refers to a collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built, which actually performs the work.
Computer hardware RAFT assignment, differentiated instruction teaching and learning examples, 2010 Ontario Ministry of Education - Student Success/Learning to 18, Implementation, Training and Evaluation Branch. Credit value: BTEC's own resources, 40; explain the function of computer hardware components - see assessment activity 2.1, page 19, P2.
Operating software, or an operating system, is the program that your computer uses to manage its resources; Windows is a prime example, as are Linux, FreeBSD, Unix, and macOS (to name a few). Purpose of assignment: this assignment will help you assess hardware and software specifications for different computer and business purposes, and provides the ... Computer assembly & configuration, lab & homework assignments: APM defines a layer between the hardware and the operating system of the computer, allowing it to ...
Computer hardware is the physical part of a computer, as distinguished from the computer software that executes or runs on the hardware. The hardware of a computer is infrequently changed, while software and data are modified frequently; the term "soft" refers to what is readily created, modified, or erased. Unit 2: computer systems (unit code ...): learners could research the different internal and external hardware components of a computer; assignment 1 - decoding the ... BTEC National in IT student assignment package, unit 2: computer systems, assignment 1 - guide to PC hardware. Free essay: computer hardware, HIT 1403, assignment 1, Bektemir Kassymov; table of contents; introduction.
Computer hardware: hardware basics; the system unit; peripheral devices; input devices - devices used to enter information or instructions into the computer, e.g. the keyboard (a free PowerPoint PPT presentation, displayed as a flash slide show, on powershow.com - id: 3b7068-otfho). Purpose of assignment: the purpose of this assignment is to understand what basic hardware and software components make up a computer; students will research hardware components, operating system software, and application software to determine how they work together to process information.
We use the best minds to help you secure great ranks in online examinations. It is difficult to learn the essential factors for taking an online exam. It is therefore that our experts provide solutions and appropriate guidance to solve all types of problems, and give students a sense of time management to compete well under the hardest timetable for all online exam related things. Commonly, our professionals offer end-to-end help for online exams to students.
We cover all well-known universities around the world to offer online exam, quiz, assignment and test help. Some of the well-known universities we cover are as follows:
You should plan to arrive at your test centre at least half an hour prior to your scheduled exam time and present your photo identification to the proctor.
Practice for the computer-delivered GRE General Test with POWERPREP® Online. Two free practice tests simulate the actual test and include the same test-taker friendly design features you'll encounter on exam day, like moving back and forth between questions, changing answers within a section, and the on-screen calculator.
Can I improve my chances of passing the actual social work exam if I buy a practice test and/or exam guide?
After due processing, the ODES results will be declared and published via the NIOS website during the last week of every month, for the examinations conducted during the preceding month.
If you arrive after your exam start time, you may not be admitted to the examination site. You may not use any of the following during the exam:
Our CDS online practice tests will be available very soon, live. Kindly keep visiting our webpage for the same.
The service includes the types of questions you answered correctly and incorrectly, organized by skill area, and the difficulty level and time spent on every question.
I've purchased the CDS GK set, but where do I go to practise it... Will they be delivered to me by post at home, or can I only practise online??? Plz help.
Once you have been made eligible and have received your ATT, you may schedule your examination by selecting Register from the menu.
These resources can help reduce pretest anxiety and can aid candidates in understanding their own strengths and weaknesses, but they do not provide the minimum knowledge needed to pass any ASWB social work examination.
It is for this reason that we have started online exam help, to guide students to ace their exams. This online exam help will help students complete their online exams faster, more accurately, and with better grades. We offer exam and quiz help for all subjects and topics, and all online classes like distance education, certification programs, etc.
On 29 Apr 2013, at 09:39, Peter Markou <markoupetr(a)gmail.com> wrote:
> Hello everyone in the community. As you may or may not know,
> I've submitted a proposal in Melange for the Web Posting Interface.
> The description of the project idea states: "the interface
> itself may not take the whole summer", so I've come up with
> some features that I would like to implement for Mailman web
> interface. These features are listed below:
> 3.) Keyword Summary,
> 6.) User Profiles(support for name, photo, bio, past posts & files,
> statistics about how people post),
> 10.) Top X Threads of all-time,
> 11.) Topics to be Wary Of,
> 12.) User's Filter Tools,
> 14.) Keyword-Based Thread Browse,
> 17.) List Monthly Health,
> 22.) Mentioned in thread refs.
> I would appreciate any feedback from developers/users, about
> which of them would be useful and fit well with the Mailman web interface.
The first thing to remember is that there are at least three types of user: site administrators, list administrators, and list members.
And, of course there will be a variety of levels of technical skill within each user set. I imagine that a Web Posting Interface will be for List Members. They'll need to be able to access their personal profile, list archives, and a form for composing and sending a message to the list. In my view, the essential features for that are:
1. A nice editing tool that a list admin can configure to send ONLY unformatted text. That should probably be the default for new lists, for new members, and for new messages.
2. The tool should do its utmost to preserve threads when a member is replying to a message, but not otherwise.
3. If the message is a reply, then the rest of the thread messages should have some visibility: to discourage members from replying before they've seen the whole thread.
4. There should be a link to a profile page, but it would be neat if the profile page could leverage some existing identity. Various options exist, but given that we have an email address, we can often get some profile information from the email service provider, for example when they're openid providers. Don't work on this until 1-3 are complete.
5. There should be a link to a list archive. An archive browser is probably a project of its own.
My view is that the keyword and topic type stuff all belong in (5), if anywhere at all. In the message posting page, they'd be unnecessary clutter.
Postmaster, University of Sussex
+44 (0) 1273 87-3148
|
OPCFW_CODE
|
While updating my work website recently, I came across an old page on a dark galaxy candidate, VIRGOHI 21, that I had worked on back in 2004 - 2007. The page had not been updated since then, despite things having changed in the meantime. While updating it, I decided to write this blog post, giving some personal recollections and an explanation of my current thinking on the subject.
In 2005, I was first author on a paper about VIRGOHI 21 called “A Dark Hydrogen Cloud in the Virgo Cluster”. We had found the 'dark cloud' in a neutral hydrogen survey of the Virgo Cluster carried out a few years earlier with the Lovell Telescope at Jodrell Bank, and had already published initial results in a paper by Jon Davies the year before, “A multibeam HI survey of the Virgo cluster - two isolated HI clouds?”. The second cloud, VIRGOHI 27, was observed with the Giant Metrewave Radio Telescope in India and found to be a very faint galaxy, but deep optical images of the area of VIRGOHI 21 from the Isaac Newton Telescope in the Canary Islands did not show anything. Another oddity was that VIRGOHI 21 looked as if it was, like most galaxies, rotating - in observations with Arecibo we could see that the velocity changed from north to south. This made it look like a galaxy without any stars - a dark galaxy!
|Isaac Newton Telescope image from our press release, the ellipse shows the extent of VIRGOHI 21 based on observations with Arecibo|
We considered whether VIRGOHI 21 could be tidal debris. At that time, it was generally thought that the only way to form long streams was through slow, tidal interactions. We were able to rule this out as there was no large galaxy in the right position to have pulled VIRGOHI 21 into the shape we saw.
|Sloan Digital Sky Survey image of VIRGOHI 21 from their Image of the Week gallery. The original caption reads: Radio telescopes at Arecibo and Jodrell Bank Observatory detect a large cloud of hydrogen gas at the center of the region of the sky covered by this image, but no corresponding objects can be seen in it. The rotation of this cloud indicates the presence of a significant mass of dark matter (matter that we cannot currently detect directly) as well.|
The discovery led to headlines such as "Astronomers claim first 'dark galaxy' find" (New Scientist), "Astronomers find star-less galaxy" (BBC) and "Not even a twinkle out of galaxy with no stars" (The Times). VIRGOHI 21 even got its own entry in Wikipedia. But the story was far from over...
At the time the first paper was published, we were already planning high-resolution observations with the Westerbork Synthesis Radio Telescope in the Netherlands. We hadn't been able to detect VIRGOHI 21 with the Indian telescope earlier, but this time we saw it. These observations showed that VIRGOHI 21 was linked to a nearby galaxy, NGC 4254, by a bridge of hydrogen. This galaxy has an unusual, lopsided structure, with one very large spiral arm, and we had already discussed internally whether this could be linked to VIRGOHI 21 and dismissed the idea as something we did not have enough evidence to speculate on. Now we had the evidence.
We also had even deeper optical imaging from the Hubble Space Telescope that still failed to reveal any visible galaxy. The Hubble observations also addressed an alternative scenario that some simulations suggested - a fast encounter that would rip gas out of NGC 4254 but would also, necessarily, pluck stars out of that galaxy and leave them floating freely in space in the same area as the gas. Had they been there, these stars would have been visible to Hubble - but we saw no evidence of them.
We published our Westerbork and Hubble results in a paper called “21-cm synthesis observations of VIRGOHI 21 - a possible dark galaxy in the Virgo Cluster”. At about the same time, the ALFALFA team published their map of VIRGOHI 21, which showed that the neutral hydrogen stream extended further to the north. This wasn't particularly shocking - seeing a 'leading arm' in front of an interacting galaxy is not uncommon.
Both the ALFALFA data and our Westerbork data were then used by a team in France who were modelling the system as a 'hyperbolic' interaction, a kind of cosmic 'hit and run' where another galaxy shot past NGC 4254 very quickly and then left the area before it could be identified. It may seem surprising that we gave our data freely to people who were trying to prove us wrong - but that's the way science works!
The French team found that: “High-speed collisions, although current in clusters of galaxies, have long been neglected, as they are believed to cause little damages to galaxies except when they are repeated, a process called ‘harassment.’ In fact, they are able to produce faint but extended gaseous tails.” In other words, it was possible to explain VIRGOHI 21 as part of a tidal tail, formed from a high-speed galaxy encounter rather than the low-speed encounters we had considered.
While this did not absolutely rule out the hypothesis that VIRGOHI 21 was a dark galaxy, it presented a less exotic alternative - and as a general rule (known as Occam's Razor, after the medieval monk William of Ockham), the least exotic idea is considered the most likely explanation. But the simulations were far from being a great match to the observations, and further observations (unpublished) of the northern end of the stream showed it continuing at the same velocity as VIRGOHI 21 - more consistent with it being a leading arm than with the simulations. Could these simulations really explain VIRGOHI 21?
You're probably wondering why we didn't publish the new observations of the northern end of the stream, and point out the other inconsistencies between the simulations and the data. There are two reasons for this. The first is that while the simulations may not have been a great match to the data, they established the principle that it was possible to get long streams of neutral hydrogen via fast interactions. Once the stream was drawn out, there were many different forces acting within the cluster environment that could have altered its shape in myriad ways, so no simulation could be expected to provide an exact match.
The second reason is more complex, and is (to my mind) the most conclusive evidence against VIRGOHI 21. To understand this, it is necessary to go back to earlier in the story and look at predictions we made around the time the Arecibo surveys were starting up.
When we announced the Westerbork results, Jon Davies said in our press release that “We’re going to be searching for more Dark Galaxies with the new ALFA instrument at Arecibo Observatory. We hope to find many more over the next few years – this is a very exciting time!”
This wasn't just idle speculation - a year earlier we had published a paper, "The existence and detection of optically dark galaxies by 21-cm surveys", where we had argued that if VIRGOHI 21 were a dark galaxy it implied the existence of many more dark galaxies. If these existed, they should be discovered by the next generation of neutral hydrogen surveys then getting started at Arecibo - and in large numbers. These dark galaxies would make up almost a quarter of the sources found in the deep AGES survey, while over a thousand would be seen by the shallower but much larger ALFALFA survey.
This meant that the final test of whether or not VIRGOHI 21 was likely to be a dark galaxy was whether other dark galaxies could be found. If there turned out to be a large population of dark galaxies then VIRGOHI 21 would be vindicated, but if no other examples were found it would imply that VIRGOHI 21 was nothing more than tidal debris.
So, what happened? To date, AGES has found very few confirmed HI sources without optical counterparts, and none like VIRGOHI 21. ALFALFA has found around 50 candidates - far fewer than required, and these are (as I understand it) potential mini-halos in the Local Group rather than objects similar to VIRGOHI 21. The only conclusion that can be drawn is that the predicted population of dark galaxies doesn't exist - which means, whatever the evidence for it individually might be, VIRGOHI 21 is highly unlikely to truly be a dark galaxy.
So - VIRGOHI 21 is almost certainly not a dark galaxy. Less exotic explanations have been given for its existence, and more recent surveys have failed to discover any similar dark galaxies. Does this mean we were wrong to publish what we did? No - the less exotic explanation was only discovered in response to our papers, and the surveys that should have uncovered more dark galaxies happened after our discovery. By publishing, we advanced science, even if those advances ended up showing that we were wrong!
|
OPCFW_CODE
|
Create new events on Google Calendar from Office 365, by Microsoft Power Automate Community: automatically create a Google Calendar event when a new Office 365 Calendar event is created. Sync events from Office 365 Calendar to Google Calendar.
Finally, how to connect it to Meeting Room Schedule.
Microsoft Flow: Office 365 to Google Calendar. Up until recently I could sync my Outlook calendar with my Google calendar. When an event is added, updated, or deleted in the Office 365 Outlook Calendar, update the Google Calendar and the Excel Online (Business) spreadsheet as per the action selected. In addition, there are numerous articles online which state that it can take some time for entries to sync.
Set up a Microsoft Flow account. Add Office 365 Calendars to Google Calendar, November 3, 2017, Josh Reichardt, Exchange, General, Productivity. I ran into a scenario recently where I wanted to be able to combine both my work (Office 365) and personal (Google Calendar) calendars, which I found to be a really painful process. My husband and I have an Outlook 365 account.
To look for these templates, select the Event and calendar category and/or type in keywords like Google, Office 365, calendar sync, and you'll find them. I am using the Flow templates for both creating and modifying events in Office 365 and having them flow to Google Calendar. Office 365 Calendar to Google Calendar 2020 (Microsoft Flow), December 14, 2019, 21 Comments. So it appears that Microsoft pulled all the Microsoft Flow templates that used to work flawlessly and give you two-way sync between Office 365 and Google Calendar.
Up to 12 hours, so it may not be a great experience. So far this is working well; however, when I delete an event in Office 365 the event does not get deleted in Google Calendar. The trigger only picks up new events added to your Outlook/Office 365 calendar and pushes them to your Google calendar.
Although you can also set up a Flow that maps Google Calendar events to Office 365, it's best to only use this type of operation from one master calendar to a secondary one. 10/05/2019: Updated the flow to fix a substring issue with the HTML to Text conversion in the Create an Event step. Sync Office 365 Outlook Calendar with Google Calendar and Excel (Business), by Microsoft Power Automate Community.
This might not sound like a massive issue. Copy new events in Office 365 to Google Calendar and send a notification, by Microsoft Power Automate Community: when you create a new calendar event in Office 365, this flow creates the event in your chosen Google Calendar and then sends a push notification. Google Calendar lets you organize your schedule and share events with co-workers and friends.
1881 Create a OneNote page for new Google calendar events. 4282 Add new events to a Google Calendar when I add tasks in a Trello list. By Microsoft Flow Community.
Why is there a limitation on this? Replaced with the full body of the conversion. By Microsoft Flow Community.
Article title does not match flow. By Microsoft: whenever a new event is created on your Google Calendar, get a copy of that event created on your Office 365 Calendar. Another way is visiting the templates via the direct links below.
By Microsoft Power Automate Community: if you previously used the Office 365 to Google Calendar template to copy new events from Office 365 to Google Calendar, then you can use this template to update those events. There's also a similar but less functional flow called Copy new events in Office 365 to Google Calendar and send a notification. Is this something that is coming for these two a.
Delete the Office 365 Outlook event when it is deleted from Google Calendar. Updates and deletes in Outlook/Office 365 do not get synced to Google. As a caveat, this is the personal calendar, not the calendar for the team in the shared mailbox attached to the Office 365 group.
Sync events from Google Calendar to Office 365 Calendar. How do I do this? If you don't have a Microsoft Flow account, read this article and watch the video.
Check Microsoft Flow Pricing for the latest pricing information. I have searched through the Microsoft Flow pages and been blocked when trying to create a new flow which accesses the Outlook calendar.
With Google's free online calendar, it's easy to keep track of your daily schedule.
|
OPCFW_CODE
|
Hi, I am using Vegas Pro 17 to edit videos, and I need to import .ass subtitles over them, but Vegas Pro doesn't support .ass subtitles. So I thought I could pre-render the ASS subtitles as a .png image sequence and then import that into Vegas and overlay it. I have Subtitle Edit, and I found that it can export subtitles to .png, but it exports each line as one PNG and adds the length of the PNG into an .xml file (which, again, Vegas doesn't support).
Is there a way to pre-render it into a sequence that has the correct length and framerate? (For example, if one subtitle line lasts one second, it would export the line as 24 PNGs if the video is 24 fps.)
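The frame-count arithmetic described in the question can be sketched as a small helper. This is a hypothetical illustration of the math only, not part of Subtitle Edit or Vegas:

```typescript
// Hypothetical helper: how many image frames a subtitle line spans when
// pre-rendering at a fixed frame rate, given start/end times in seconds.
function framesForSubtitle(startSec: number, endSec: number, fps: number): number {
  // Round outward so the rendered frames fully cover the subtitle's duration.
  return Math.ceil(endSec * fps) - Math.floor(startSec * fps);
}

// A line shown from 0s to 1s at 24 fps spans 24 frames.
console.log(framesForSubtitle(0, 1, 24)); // 24
```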
Try rendering them in front of a green screen
Converting to SRT may lose much of the formatting of ASS subs. You could instead use ffmpeg to create a black video with the ASS subs burned in, then use that video as an overlay (and alpha channel?) in Vegas.
ffmpeg -y -f lavfi -i color=c=black:s=720x480:r=29.97 -vf ass=subs.ass -to 00:02:00.000 -preset ultrafast subs.mp4
Yes, if the .ass is more than plain text (i.e. colors, fonts, effects/karaoke, formatting, etc.), then an .srt conversion will lose all of those.
Another option to render an .ass properly is using an alpha channel mask (essentially a "transparent video") via MaskSub in AviSynth, using one of the VSFilter derivatives. Most of them (or all of them) require adding FlipVertical() to the end of the script. In the link below there is an example that uses the clip properties to automatically fill in the width, height, length, and fps. You would use an appropriate source filter - e.g. if you had an MKV, you would use LWLibavVideoSource or FFVideoSource.
You can encode a video using Lagarith or UT Video in RGBA mode with VirtualDub2 or ffmpeg (or anything that accepts AVS scripts) - so you get a video that preserves the frame count, fps and timing. You may have to interpret the alpha channel in the Vegas media bin (instead of "none"). Or you can use AVFS - a "virtual" file - instead of encoding something. But if you have a large project with a massive number of edits, a "physical" intermediate can perform better.
Install AviSynth and download VSFilter.
Make a very simple AviSynth script that looks like this:
MaskSub("C:\Subtitles.ass", video_width, video_height, fps, movie_length_in_frames)
FlipVertical()
Use that AviSynth script as a video source in Vegas. You may have to tell Vegas it has an alpha channel.
|
OPCFW_CODE
|
Almost everybody who is developing Ionic applications will at some point or another run into a fun little error that looks something like this:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8100' is therefore not allowed access.
The Access-Control-Allow-Origin error you see here is the result of the browser's implementation of CORS (Cross-Origin Resource Sharing). This is an exceedingly common error, but it is also something that is widely misunderstood.
This is an issue I’ve seen popping up on forums more and more frequently lately (likely due to people upgrading from UIWebView to WKWebView). I wanted to write this quick guide to explain what CORS is, and how you can work with it (or sometimes, against it).
Once you understand what is going on, these errors become much less intimidating and much easier to solve.
What is CORS?
CORS stands for Cross-Origin Resource Sharing and it is a security protocol implemented by browsers that allow a server to determine what domains/origins should be allowed access to its resources.
Since an Ionic application runs inside of a browser, CORS will apply to requests that are launched from within an Ionic application.
By default, the same-origin security policy is used, which means that the browser will only allow the loading of resources from the server if the request was launched from that same origin. That means that if my application is running on coolfishieswithhats.com and I am trying to make a request to a server on dogswithfancyshoes.com, it will be blocked by CORS with an error that looks something like this:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://coolfishieswithhats.com' is therefore not allowed access.
The browser will allow cross-origin requests like this to succeed, but only if the server the request is being made to explicitly allows requests from that origin (or from all origins) using an appropriate header.
If we are developing an Ionic application on a desktop, then the origin in the browser will be http://localhost:8100.
Since our requests would be coming from the localhost origin, any server we are trying to request resources from would need to allow that origin.
If we were using Ionic and their web view plugin to run the application on a device, then the origin would also be localhost, because Ionic spins up a local web server to serve content on the device.
If you are just using the standard Cordova web view, then assets will be served from the file:// protocol instead.
If you're not too sure about what origin your application is running on, it doesn't really matter, because the Access-Control-Allow-Origin error will soon tell you if you're wrong!
The Solution for CORS Issues
The best way to deal with CORS is to abide by the rules of the browser and implement CORS correctly. That means enabling CORS on the server you are making a request to.
The reason not enabling CORS is an issue is because the server you are making a request to is saying “No, I don’t give permission for external domains to access my resources” in the eyes of the browser. The solution then is to modify your server so that it instead says “Yes, external domains are free to use my resources” or “Yes, it is OK for coolfishieswithhats.com to access my resources”.
Exactly how you enable CORS depends on your server. In general, you need to add a header to the server response that looks like this:
Access-Control-Allow-Origin: *
NOTE: This will allow access to all origins, but you can also just allow specific origins if you want.
But, the way in which you add headers will depend on what server-side technologies you are using. If you are not sure how to add the header, I would recommend taking a look at enable-cors.org. This is a fantastic resource that contains just about everything you would ever need to know about CORS, and also has a lot of example implementations for various server-side technologies.
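As a rough sketch of the decision a CORS-enabled server makes, independent of any particular server-side technology, the logic can be reduced to a small function. The names here are illustrative only, not from any real framework:

```typescript
// Given the request's Origin header and the server's configured allowlist,
// decide what Access-Control-Allow-Origin value to send back (null = no
// header, so the browser will block the cross-origin response).
function corsHeaderFor(origin: string | undefined, allowed: string[]): string | null {
  if (origin === undefined) return null;            // not a cross-origin request
  if (allowed.includes("*")) return "*";            // server allows every origin
  return allowed.includes(origin) ? origin : null;  // echo back only known origins
}

console.log(corsHeaderFor("http://localhost:8100", ["*"])); // "*"
```

A real server attaches this value as a response header; the framework recipes on enable-cors.org do exactly that step for you.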
Ultimately, the best way to deal with CORS is to add the appropriate header to the response from the server. However, you don’t always necessarily have the ability to add this header, perhaps for the following reasons:
- You are building a solution for an existing server that a client won’t give you permission to change
- You are using a service that does not allow you to modify the CORS settings
- You are attempting to pull in resources from somebody else’s server that does not have CORS enabled
As a first step to solving these, I think the “best” solutions are:
- Convince the client to make this change
- Use a different service that provides the ability to configure CORS correctly
- Consider whether you are using this server in an intended manner, and if you are, perhaps suggest that they enable CORS
However, we live in the real world and often we do end up in situations where we can’t always use the “best” solution, which brings me to potential workarounds…
Working Around CORS Issues
As I mentioned, CORS is implemented by browsers, and fortunately for us, there are some ways we can work around that. Again, I’d like to stress that these options should only be used if necessary – the fewer workarounds you use to build your app, the better.
I will list these workarounds in my order of preference from best to worst.
1. Proxy requests through an additional server
Since CORS is implemented by browsers, it won’t stop you making a request from a server you control to the server that does not implement CORS (the communication happening here is server to server, no browser is involved). Therefore, you can proxy a request through your own server to avoid CORS issues. Your application makes a request to your server, your server makes a request for the desired resource, and then your server returns that resource to your application.
Of course, you would need to make sure that your own server responds with the appropriate CORS header!
2. Proxy requests through native code
Similarly to the last solution, since CORS is implemented by browsers we can also avoid it by launching the request from native code on the device. You can do this using the native HTTP plugin in place of your normal HTTP requests.
This proxies the HTTP requests through native code, which completely circumvents CORS just like proxying the request through a server does. There isn’t really much difference to this approach and the previous one.
3. Downgrade to UIWebView (not recommended)
The reason a lot of people suddenly face CORS issues is that they upgrade from UIWebView to WKWebView (perhaps not knowingly). UIWebView is an old web view that is used to display your Ionic applications on iOS. WKWebView is a new web view that performs a lot better than its predecessor. However, UIWebView does not enforce CORS, whereas WKWebView does.
This means that you could downgrade to UIWebView to circumvent any CORS issues completely because UIWebView doesn’t care about CORS. However, this is not a good solution because you are sacrificing the overall performance of your application to fix an issue that can be solved in a better way.
Not a solution for CORS issues…
As well as all of the potential solutions for CORS issues that I have listed, I also wanted to specifically mention something that is not a solution.
I quite often see confusion from people who have installed some kind of CORS extension for the browser, or set some specific security flag to disable CORS. During development this will circumvent CORS issues, but then when they move to production they start facing CORS issues again.
These extensions or settings only modify your own browser to ignore CORS, so when other people start using the application, or you use it on a device, those CORS issues will still be present.
The only time using something like this is viable is if the CORS issues are only present for local development and you are trying to work around that. For example, maybe you are launching a PWA which works fine with CORS in production (since it is hosted on your own domain), but during development you need to make requests from localhost.
CORS is a major source of frustration for many developers. Although the errors can be somewhat confusing, and the concept seems to be a little intimidating, once you understand the basics it is a reasonably simple concept with a fixed set of possible solutions.
|
OPCFW_CODE
|
Multiple endpoints executed
When we have multiple endpoints that can be matched with the request path, all matching endpoints are executed, but I expected only the first one to be executed.
I made an example to demonstrate my problem:
import * as express from "express";
import { Server, Path, GET, PathParam } from "typescript-rest";
@Path("/api")
class ApiController {
@Path("/test")
@GET
f1(): string {
console.log('path: /test');
return 'path: /test';
}
@Path("/:name")
@GET
f2(@PathParam('name') name: string): string {
console.log('path: /:name (' + name + ')');
return 'path: /:name (' + name + ')';
}
}
let app: express.Application = express();
Server.buildServices(app);
app.listen(8080, function () {
console.log('Rest Server listening on port 8080!');
});
If I make a GET request to 'http://localhost:8080/api/test', it will execute both the f1 and f2 functions but will fail with the following error:
Error: Can't set headers after they are sent.
because response headers are set in the first one.
Using pure javascript I don't have this problem:
const express = require('express');
const app = express();
const router = express.Router();
router.get('/test', function (req, res, next) {
console.log('path: /test');
res.json({ message: 'path: /test' });
});
router.get('/:name', function (req, res, next) {
console.log('path: /:name');
res.json({ message: 'path: /:name' });
});
app.use('/api', router);
app.listen(8080);
console.log('app started...');
because after the matched endpoint, next is not called.
Using typescript-rest I don't have this control, and next is always called.
I traced a possible solution to server-container.ts, line 328 (part of the buildServiceMiddleware function), where I could replace
next();
with
if (!res.headersSent) next();
If this is intended by design, could this be made optional by adding another decorator, or adding an option to the Path decorator that indicates to only execute the first matched endpoint?
I'm not satisfied with the current workaround of changing paths so that this does not happen.
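To make the behaviour concrete, here is a toy model of the middleware chain. It is a simplified sketch, not the real Express or typescript-rest internals:

```typescript
// Toy model: every registered handler whose pattern matches runs, because
// next() is effectively called unconditionally after each one. Setting
// `guard` applies the proposed `if (!res.headersSent) next()` fix.
type Res = { headersSent: boolean };
type Handler = (res: Res) => void;

function runChain(handlers: Handler[], guard: boolean): string[] {
  const executed: string[] = [];
  const res: Res = { headersSent: false };
  for (const h of handlers) {
    if (guard && res.headersSent) break; // proposed fix: stop once a response went out
    h(res);
    executed.push(h.name);
  }
  return executed;
}

const f1 = function f1(res: Res) { res.headersSent = true; }; // matches GET /api/test
const f2 = function f2(res: Res) { res.headersSent = true; }; // matches GET /api/:name

console.log(runChain([f1, f2], false)); // [ 'f1', 'f2' ] - both run (the reported bug)
console.log(runChain([f1, f2], true));  // [ 'f1' ] - only the first match responds
```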
Hi @darko1979 ,
Yes, it is a problem... but I am not sure what the best solution is. We need to call next to ensure that any middleware placed after the typescript-rest middleware gets called.
Take a look at #68
I think we need to add something to give the developer more control to specify whether they want typescript-rest to send a response to the client... I will think about what this contract could look like...
And we are accepting suggestions
I have a suggestion: add option in Path decorator to control if next is called. Here are proposed changes:
feat: path decorator option to call next if headers are sent
In my solution, this option to call next if headers are sent is enabled by default, and if somebody does not want this they can set the option to false, for example:
@Path("/api", false)
This can be applied to whole class or only a single method.
Or this option can be disabled by default, so that the default behavior is not to call next if headers are sent, and if somebody wants this functionality they have to turn it on with this Path option.
Hello,
I landed here as we are experiencing this issue.
We have a few resources that follow the pattern "resource/:id" and "resource/someCommand" that are now broken.
The change linked to this issue looks more like a breaking change to me.
Besides, I don't see a good enough reason why you would add a middleware after building the services. I believe this makes the lib less predictable.
Adding an option to choose the strategy (nextAfterResponse) globally would fix the issue for me.
Cheers
Just want to add that if you follow the recommended pattern of doing an app.use(...) as the last route to capture and format 404 routes, you will get bitten by this as well. Every supposedly handled path will also trigger the 404 catch-all.
Hi,
I've put together the different suggestions made here and I think we have a solution:
Next function
By default, we call the next function after endpoint execution (even if headers are already sent). As discussed previously, there are use cases that need this behaviour (logging, for example). If we need to disable this, we must be able to state it explicitly.
So, the proposal is to have an annotation @IgnoreNextMiddlewares. Check here the documentation for the proposal.
@Path('test')
class TestService {
@GET
@IgnoreNextMiddlewares
test() {
//...
return 'OK';
}
}
Remember that we already have a way to explicitly call next if we need to do it according to a specific condition inside our service handler: we can use Context.next.
Service Return
By default, we serialize the service result and send it to the user as a response. If the method does not return anything (void), we send an empty response with a 204 status. This is useful to simplify all the situations where we have nothing to send as a response.
But if you need to handle the response yourself, you should be able to state this explicitly. The proposal here is to have a specific return value to indicate that you don't want typescript-rest to send anything.
The Return.NoResponse (check the proposal here) should be used for this:
import {Return} from "typescript-rest";
@Path("noresponse")
class TestNoResponse {
@GET
public test() {
return Return.NoResponse;
}
}
app.use("/noresponse", (req, res, next) => {
res.send("I am handling the response here, in other middleware");
});
or
import {Return} from "typescript-rest";
@Path("noresponse")
class TestNoResponse {
@GET
public test(@ContextResponse res: express.Response) {
res.send("I am handling the response here, don't do it automatically");
return Return.NoResponse;
}
}
I think that this proposal:
keeps compatibility with previous versions
keeps the usage simple for the most common cases
and allows customisations to handle all the cases reported here
Re: Next Function, I still believe the logging use case has better ways of being handled than altering the expected workflow for Express users (such as response event handling, which is an already existing solution). Or at least switching the default so most users don't have to annotate IgnoreNextMiddlewares on every service.
W.r.t. backwards compatibility, since #68 was itself a breaking change, I see this more as a bugfix than a backwards compatibility issue.
Hi @greghart ,
I still believe the logging use case has better ways of handling than altering the expected workflow for Express users
I did not disagree about the logging use case. But the question is what is the expected workflow for Express users?
I believe that if somebody adds a middleware after the typescript-rest handler, directly in the express router, that person expects it to be called after a service call.
let app: express.Application = express();
Server.buildServices(app);
app.use("/mypath", (req, res, next) => {
// If I add something here, with a middleware in express directly, I expect that it must be called.
});
That is why we handled #68 as a bug and not a change.
But we can put a configuration property in the Server to allow switching the default, avoiding IgnoreNextMiddlewares on every service if you need to disable it more than once.
something like:
Server.IgnoreNextMiddlewares(true);
I think your example highlights the mis-communication, as I think conventional Express users (and the other people in this thread) would expect that next middleware not to happen if Server.buildServices(app) is going to end the response by calling res.send.
This is based on documentation and examples of Express, and common usage. It's not prohibited to call next after send, but it's certainly not the expected norm.
Hi,
I think your example highlights the mis-communication, as I think conventional Express users (and the other people in this thread) would expect that next middleware not to happen if Server.buildServices(app) is going to end the response by calling res.send.
Maybe... but the point is that expressjs users have the chance to choose when and how to call the methods next and response.send, so I don't think we can reduce the problem to "Don't call next after a service method".
We need to add support to give more control to users of typescript-rest. The changes proposed here allow all possible use cases.
And with the top level switch (Server.ignoreNextMiddlewares(true);) it is possible to keep the previous behaviour.
But thanks a lot for all contributions on this topic
M3UA is the lower layer of the SS7 SIGTRAN protocol stack, which replaces MTP1, MTP2 and MTP3 in SS7. The advantage of using M3UA is that no additional, costly SS7 hardware is needed, as M3UA works over the existing Ethernet and IP network.
The Open Sigtran M3UA stack comes as a bundle of two applications, M3UA and SCTP, which share a common configuration file. Open Sigtran M3UA is fully compatible with the Dialogic SS7 stack and replaces the costly Dialogic SS7 M3UA protocol binary.
Open Sigtran M3UA stack has two modules listed below:
The SCTP module uses the SCTP development package in Linux, which includes the in-built SCTP stack. The main function of the SCTP module is to create SCTP links and send/receive data over SCTP sockets.
The M3UA module implements M3UA as defined in RFC 3332. Its main function is to initiate M3UA links and carry out the procedures defined in the RFC.
Please have a look at the following flow diagram :
As shown in the flow diagram, when the SCTP and M3UA modules start, the initial state is OS_SCTP_DOWN. For each link configured in the configuration file, the SCTP module tries to initiate the SCTP link. When an SCTP connection is made and is ready to send or receive data, the SCTP module sends an OS_IPC_SCTP_UP message to the M3UA module. On receiving it, the M3UA module sends an ASP_UP message and waits for its acknowledgement. When the acknowledgement is received, it sends an ASP_ACTIVE message, which is answered with ASP_ACTIVE_ACK. At this stage the M3UA link is up. Each M3UA end can send an M3UA HEARTBEAT message after a pre-configured interval, and the other end acknowledges it with an M3UA_HEARTBEAT_ACK message. When a PAYLOAD message is received from SCTP, it is decoded and important values such as OPC, DPC, SI and SLS are extracted. Based on the SI, the message is transferred to the Dialogic stack's SCCP queue or ISUP queue. Similarly, when a message is received from the Dialogic stack, it is encoded as Payload Data and sent to the SCTP module.
Whenever an SCTP link goes down due to network failure or any other issue (e.g. heartbeats not received for some interval), SCTP sends an OS_IPC_SCTP_DOWN message to M3UA. At this point the link is down and all messages received from the SS7 Dialogic stack are discarded. Whenever the peer becomes reachable again and the SCTP link is up, SCTP sends an OS_IPC_SCTP_UP message to M3UA. Upon receiving it, M3UA re-initiates the process of bringing the M3UA link up by sending ASP_UP and then ASP_ACTIVE messages. When the response to the ASP_ACTIVE message is received, the M3UA link is up again and normal M3UA messages can be exchanged.
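The link bring-up and teardown sequence described above can be sketched as a small state machine. This is an illustrative Python model only: the message names follow the text (OS_IPC_SCTP_UP, ASP_UP, etc.), but the class and state names are assumptions, not part of Open Sigtran:

```python
class M3uaLink:
    """Toy model of the ASP state transitions described in the text."""

    def __init__(self):
        self.state = "OS_SCTP_DOWN"

    def handle(self, msg):
        # Returns the message to send in response, if any.
        if msg == "OS_IPC_SCTP_UP" and self.state == "OS_SCTP_DOWN":
            self.state = "ASP_UP_SENT"      # send ASP_UP, await its ack
            return "ASP_UP"
        if msg == "ASP_UP_ACK" and self.state == "ASP_UP_SENT":
            self.state = "ASP_ACTIVE_SENT"  # send ASP_ACTIVE, await its ack
            return "ASP_ACTIVE"
        if msg == "ASP_ACTIVE_ACK" and self.state == "ASP_ACTIVE_SENT":
            self.state = "ACTIVE"           # link is up; payload may flow
            return None
        if msg == "OS_IPC_SCTP_DOWN":
            self.state = "OS_SCTP_DOWN"     # discard traffic until SCTP recovers
            return None
        return None

link = M3uaLink()
assert link.handle("OS_IPC_SCTP_UP") == "ASP_UP"
assert link.handle("ASP_UP_ACK") == "ASP_ACTIVE"
link.handle("ASP_ACTIVE_ACK")
assert link.state == "ACTIVE"
```

When SCTP later reports OS_IPC_SCTP_DOWN, the same machine falls back to the initial state and the handshake repeats, matching the recovery behaviour described above.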
SCTP and M3UA module share a common configuration file.
The file is shown below with description in comments:
Thread and mutable params
I'm new to Java, so please excuse me if the answer to the simple case below is obvious.
class A{
public void foo(Customer cust){
cust.setName(cust.getFirstName() + " " + cust.getLastName());
cust.setAddress(new Address("Rome"));
}
}
I've a Singleton object (objectA) created for class A.
Given I don't have any class variable, is it thread safe if I call objectA.foo(new Customer()) from different threads?
What if I change foo to static and call A.foo(new Customer()) from different threads?
is it still thread safe?
Making the method synchronized does not automatically make it thread-safe. In fact, in this example, it wouldn't do anything useful.
Given I don't have any class variable, is it thread safe if I call
objectA.foo(new Customer()) from different threads?
Of course it is. Your foo() method doesn't change any state of the A object (since it doesn't have any) and the object you pass, new Customer(), as an argument to the method is not available to any other thread.
What if I change foo to static and call A.foo(new Customer()) from
different threads? is it still thread safe?
As long as you don't have any mutable static state, you're still good.
Probably the person that gave you a down vote did not understand your answer.
Thanks for the answer, The premise I was asking this question was, http://stackoverflow.com/a/18547670/388889 , Not sure how accurate it is. But it says "if every object you pass in the o parameter is immutable" and here Customer is not immutable. Any hint?
@Aymer the keyword here is shared mutable data. If the objects aren't shared between various threads, than, despite it being mutable, it is still thread-safe
@Aymer Because you create the object (new Customer()) directly in the method invocation, there's no chance of a reference to it leaking to any other threads.
@SotiriosDelimanolis & John, Thanks for the clarification. On a side note, never expected this kind of super fast responses!
Yes, it will be thread-safe IF you call foo(new Customer()) from different threads. But this is only because each time you call new Customer() you are making a new (and therefore different) Customer object, and all that foo does is alter the state of the Customer that is passed to it. Thus these threads will not collide, because even though they are calling the same method, they will be manipulating different customers.
However, if you were to create a customer variable first
Customer bob = new Customer()
and then call foo(bob) from two different threads, it would not be thread safe. The first thread could be changing the address while the second thread is changing the name, causing inconsistent behavior and / or corrupt data.
If you want to make this method truly thread-safe, just declare the method synchronized:
public synchronized void foo(Customer cust) {...}
Thread safety is required where a function accesses a shared static variable — for example, a function that updates a shared document: if two threads update it in parallel, the changes of one thread may get lost. Another example is a static variable shared across the application, such as a singleton object.
These are some situations where thread safety is required. In your case you are not updating any shared resource, so this is thread-safe.
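The point made throughout this thread — per-thread objects need no synchronization, shared mutable objects do — can be demonstrated in any language. Here is a small Python analogue of the Java example (class and function names mirror the question but are otherwise illustrative): each thread builds and mutates its own Customer, so foo() is safe without locks.

```python
import threading

class Customer:
    def __init__(self, first, last):
        self.first_name = first
        self.last_name = last
        self.name = None

def foo(cust):
    # Stateless: touches only its argument, no shared state.
    cust.name = f"{cust.first_name} {cust.last_name}"

results = []
results_lock = threading.Lock()  # protects only the shared results list

def worker(i):
    c = Customer(f"First{i}", f"Last{i}")  # each thread gets its OWN object
    foo(c)
    with results_lock:
        results.append(c.name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread produced a consistent name: no collisions, no corruption.
assert sorted(results) == sorted(f"First{i} Last{i}" for i in range(8))
```

The only lock here guards the results list, which *is* shared; passing one Customer instance to several threads at once would reintroduce exactly the race described in the accepted answer.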
<?php
namespace mahara\blocktype\CaldavCalendarPlugin\libical;
use mahara\blocktype\CaldavCalendarPlugin\IcalRecur;
use mahara\blocktype\CaldavCalendarPlugin\IcalNumberedWeekday;
/**
* Describes RECUR elements, like the recurrence rule
* @author Tobias Zeuch
*/
class LibIcalRecurImpl implements IcalRecur {
/** = "SECONDLY" / "MINUTELY" / "HOURLY" / "DAILY" / "WEEKLY" / "MONTHLY" / "YEARLY"*/
const FREQ = 'FREQ';
/** 1*DIGIT */
const INTERVAL = 'INTERVAL';
/** 1*DIGIT */
const COUNT = 'COUNT';
/**
* date
* date-time ;An UTC value
*/
const UNTIL = 'UNTIL';
/**
* seconds / ( seconds *("," seconds) )
* seconds = 1DIGIT / 2DIGIT ;0 to 59
*/
const BYSECOND = 'BYSECOND';
/**
* minutes / ( minutes *("," minutes) )
* minutes = 1DIGIT / 2DIGIT ;0 to 59
*/
const BYMINUTE = 'BYMINUTE';
/**
* hour / ( hour *("," hour) )
* hour = 1DIGIT / 2DIGIT ;0 to 23
*/
const BYHOUR = 'BYHOUR';
/**
* weekdaynum / ( weekdaynum *("," weekdaynum) )
* weekdaynum = [([plus] ordwk / minus ordwk)] weekday
* plus = "+"
* minus = "-"
* ordwk = 1DIGIT / 2DIGIT ;1 to 53
* weekday = "SU" / "MO" / "TU" / "WE" / "TH" / "FR" / "SA"
* ;Corresponding to SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY,
* ;FRIDAY, SATURDAY and SUNDAY days of the week.
*/
const BYDAY = 'BYDAY';
/**
* monthdaynum / ( monthdaynum *("," monthdaynum) )
* monthdaynum = ([plus] ordmoday) / (minus ordmoday)
* ordmoday = 1DIGIT / 2DIGIT ;1 to 31
*/
const BYMONTHDAY = 'BYMONTHDAY';
/**
* yeardaynum / ( yeardaynum *("," yeardaynum) )
* yeardaynum = ([plus] ordyrday) / (minus ordyrday)
* ordyrday = 1DIGIT / 2DIGIT / 3DIGIT ;1 to 366
*/
const BYYEARDAY = 'BYYEARDAY';
/**
* weeknum / ( weeknum *("," weeknum) )
* weeknum = ([plus] ordwk) / (minus ordwk)
* ordwk = 1DIGIT / 2DIGIT ;1 to 53
*/
const BYWEEKNO = 'BYWEEKNO';
/**
* monthnum / ( monthnum *("," monthnum) )
* monthnum = 1DIGIT / 2DIGIT ;1 to 12
*/
const BYMONTH = 'BYMONTH';
/**
* setposday / ( setposday *("," setposday) )
* setposday = yeardaynum
*/
const BYSETPOS = 'BYSETPOS';
/**
* weekday ;the weekday on which the workweek starts
* weekday = "SU" / "MO" / "TU" / "WE" / "TH" / "FR" / "SA"
*/
const WKST = 'WKST';
/**
*the wrapped \Rrule
* @var Rrule
*/
private $rrule;
public function __construct(\Rrule $rrule) {
$this->rrule = $rrule;
}
/**
* returns the number of repetitions this rule defines. Can be empty. In
* that case, null is returned
* @return int
*/
public function get_count() {
if (array_key_exists(self::COUNT, $this->rrule->params)) {
return $this->rrule->params[self::COUNT];
}
return null;
}
/**
* returns the until-date, that is, the date when the last occurrence of
* the repetition will take place. Can be empty, in which case null is
* returned
* @return DateTime
*/
public function get_until() {
if (array_key_exists(self::UNTIL, $this->rrule->params)) {
return \RemoteCalendarUtil::ical_date_to_DateTime($this->rrule->params[self::UNTIL]);
}
return null;
}
/**
* returns the frequency, which is a value defined in class Frequencies
* @return string
*/
public function get_frequency() {
if (array_key_exists(self::FREQ, $this->rrule->params)) {
return $this->rrule->params[self::FREQ];
}
return null;
}
/**
* the interval specifies, how often an event occurs in a given time. It
* combines with the frequency (@see CaldavRecur::get_frequency)
* @return string
*/
public function get_interval() {
if (array_key_exists(self::INTERVAL, $this->rrule->params)) {
return $this->rrule->params[self::INTERVAL];
}
return null;
}
/**
* returns a list of numbers that define on which month(s) the event will
* take place. This can either be a filter, or expansion, depending on whether
* the frequency is bigger or smaller than Frequencies::MONTHLY
* @return array
*/
public function get_by_months() {
if (array_key_exists(self::BYMONTH, $this->rrule->params)) {
$monthlist = $this->rrule->params[self::BYMONTH];
return explode(',', $monthlist);
}
return null;
}
/**
* gets a list of days of the year on which the event takes place. Negative numbers mean
* that these days are excluded. <br/>
* @return array
*/
public function get_by_year_days() {
if (array_key_exists(self::BYYEARDAY, $this->rrule->params)) {
$daylist = $this->rrule->params[self::BYYEARDAY];
return explode(',', $daylist);
}
return null;
}
/**
* returns a list of positions of the day of the year that are always used
* as a filter for the expanded values
* @return array
*/
public function get_by_set_pos() {
if (array_key_exists(self::BYSETPOS, $this->rrule->params)) {
$poslist = $this->rrule->params[self::BYSETPOS];
return explode(',', $poslist);
}
return null;
}
/**
* returns a list of days of the week
* each of these can be positive or negative, which means occurrence or
* negative filter
* @return array
*/
public function get_by_days() {
if (array_key_exists(self::BYDAY, $this->rrule->params)) {
$daylist = $this->rrule->params[self::BYDAY];
$days = explode(',', $daylist);
$numberedWeekdays = array();
foreach ($days as $day) {
$number = null;
$weekday = $day;
if (strlen($day) > 2) {
$number = substr($day, 0, strlen($day) - 2);
$weekday = substr($day, -2);
}
$numberedWeekdays []= new IcalNumberedWeekday($number, $weekday);
}
return $numberedWeekdays;
}
return null;
}
/**
* returns a list of days of the month (1-31)
* @return array
*/
public function get_by_days_of_month() {
if (array_key_exists(self::BYMONTHDAY, $this->rrule->params)) {
$daylist = $this->rrule->params[self::BYMONTHDAY];
return explode(',', $daylist);
}
return null;
}
/**
* returns a list of numbers of hours
* @return array
*/
public function get_by_hours() {
if (array_key_exists(self::BYHOUR, $this->rrule->params)) {
$hourlist = $this->rrule->params[self::BYHOUR];
return explode(',', $hourlist);
}
return null;
}
/**
* returns a list of numbers of minutes
* @return array
*/
public function get_by_minutes() {
if (array_key_exists(self::BYMINUTE, $this->rrule->params)) {
$minutelist = $this->rrule->params[self::BYMINUTE];
return explode(',', $minutelist);
}
return null;
}
/**
* returns a list of numbers of seconds
* @return array
*/
public function get_by_seconds() {
if (array_key_exists(self::BYSECOND, $this->rrule->params)) {
$secondlist = $this->rrule->params[self::BYSECOND];
return explode(',', $secondlist);
}
return null;
}
/**
* returns a weekday, represented by class {@link WeekDays}
*/
public function get_week_start() {
if (array_key_exists(self::WKST, $this->rrule->params)) {
return $this->rrule->params[self::WKST];
}
return null;
}
public function get_by_week_no() {
if (array_key_exists(self::BYWEEKNO, $this->rrule->params)) {
return $this->rrule->params[self::BYWEEKNO];
}
return null;
}
}
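The trickiest parsing in the class above is get_by_days(), which splits RRULE BYDAY entries like "2MO" or "-1SU" into an ordinal plus a two-letter weekday. For illustration only, the same logic can be sketched in Python (the function name is made up):

```python
def parse_byday(byday: str):
    """Split a BYDAY value into (ordinal, weekday) pairs, mirroring
    get_by_days() above: the last two characters are always the weekday,
    anything before them is an optional signed ordinal (1 to 53)."""
    pairs = []
    for day in byday.split(','):
        number, weekday = None, day
        if len(day) > 2:
            number, weekday = day[:-2], day[-2:]
        pairs.append((number, weekday))
    return pairs

assert parse_byday("MO,2TU,-1SU") == [(None, "MO"), ("2", "TU"), ("-1", "SU")]
```

Slicing from the end works because RFC 2445 weekday codes are always exactly two characters, so the ordinal is whatever precedes them.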
How do I get the randomly generated primary key of an inserted row in MariaDB/MySQL?
Okay, I am currently developing a website that is supposed to have a searchable database of pool pumps. As part of this system, to prevent people from reading hidden data, I had the primary key of the pool pump stock randomly generated. Here's the code I wrote for the MariaDB backend:
DELIMITER $$
CREATE TRIGGER random_pump_id BEFORE INSERT ON tbl_stock FOR EACH ROW
BEGIN
DECLARE temp_id MEDIUMINT;
REPEAT
SET temp_id = FLOOR(RAND() * 16777216);
UNTIL (SELECT COUNT(*) FROM tbl_stock WHERE pump_id = temp_id) <= 0 END REPEAT;
SET NEW.pump_id = temp_id;
END
$$
But now I've run into a dilemma. Every time I want to insert a row, I need a way to retrieve the primary key I just generated. I know if I used AUTO_INCREMENT I could use the LAST_INSERT_ID function, or lastInsertId in PDO. But since I am not using AUTO_INCREMENT, and instead am using a separate trigger, these functions will only return a 0. I know I can do it in PostgreSQL by using the RETURNING clause, but I can't find a way to accomplish this in MariaDB.
Does anyone know of any solution? Perhaps some obscure trigger I don't know about? Please?
"to prevent people from reading hidden data". If they have direct access to the database they can read the complete table anyway :-?
What you're doing is a terrible idea. Instead of your idea, what you should have done is use auto_increment but when you show the data to the user - encrypt the id's. That will prevent people from guessing what the actual value is. What you did there will annihilate your db performance, and that's not even the biggest issue.
@ÁlvaroGonzález he means changing the URL or any other point of entry to another ID
@Mjh I don't understand what you mean by 'encrypt the id's'. The id's what? What exactly do the ids possess that I should encrypt. Unless you mean, 'encrypt the ids' but I don't know what you mean by that either. Unless you're referring to hashing, but that would only work for displaying the id. When I try to call a record using this 'encrypted id' it would be impossible unless you solved PvNP.
What I mean is encrypt the value of auto_increment when you show it to the end user in a URL or whatever kind of UI you have. When you receive the value back, decrypt it and use the database as it was meant to be used. Don't obfuscate or "randomize" the primary key, that's not a good idea. What you're after is disabling people from using sequential numbers to obtain data via a crawler or similar - your best approach is to encrypt this sensitive primary key information before you show it to the public. That's what I mean. Please don't randomize your primary keys, especially this way.
Yeah, you're not giving me a reason. You're saying it's a bad idea, without saying why. And if it's a problem with performance, I should point out encryption will also hurt performance, and will be happening for every element the visitors search for and view. Meanwhile I have one person who will be inserting items in the database on occasion.
InnoDB clusters based on primary key which is expected to be sequential (next one larger than previous). Using last_insert_id() won't work with your solution. You are prone to concurrency issues, how do you handle duplicates? To assert you're not getting a duplicate, you're performing a select - that won't prevent clashes at all, that will produce ERRORS because you will get false positives. Your numbers are still guessable since they're numbers. What you did there is introduce errors for no gain, and you made your writes slower. Encryption beats your solution by far.
You could have used UUID(), which is less-guessable than what you did (but still guessable). If you want to argue that encryption will be slower - no, it won't because encryption depends on CPU while your solution involves HDD subsystem into calculation - therefore, your solution will be I/O constrained while encryption will be CPU constrained. CPU speed >>>>>>> HDD speed, therefore encryption is by definition quicker method than yours, even though it's slower compared to no encryption. There's a few reasons, it's up to you if you'll listen. It's your project and good luck with that! :)
See, that's a reason. Thank you. I'm not a database expert by any stretch so I had no idea. The thing is, I also have no idea how to do what you suggest. Should I use AES or something? Is there a symmetrical encryption function in php? Because I couldn't find it, all I found were public key-based.
But it does strike me that a storage system that relies on sequential primary keys is really dumb since most of the time (at least, this is what I was taught) you don't want to use artificial primary keys, but instead something inherent to the subject. So Social Insurance Number, phone number, or email address. Are those ever sequential?
Storage engine wants to be fast, so it will happily create a hidden key if it can't use one from table definition (but that wastes space). However, since auto_increment does this job, they probably decided to use it so they don't waste too much space for no reason. Basically, using auto_increment should always be enough, using unique should be used for types of information you mentioned. PHP has libraries such as this one, Laravel implements its own and gives you encrypt() and decrypt() functions.
Just a note - don't implement encryption on your own, use the library I linked, it has all the fine details handled for you, all you need to do is set up your symmetric key and use the library. AES-128 should be more than fine for your purpose. As you see, this way you protect your data from prying eyes and you don't prevent yourself from using database-specific functions (last_insert_id() etc.).
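As a stdlib-only illustration of making database ids opaque in URLs: the sketch below *signs* the id with an HMAC rather than encrypting it — that prevents forgery and crawling by tampering, but still exposes the sequential number, so for actually hiding ids use a vetted encryption library as suggested above. All names here are hypothetical:

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: configured server-side, never shown to clients

def encode_id(pk: int) -> str:
    """Turn an auto_increment id into a URL-safe token that clients cannot forge."""
    raw = str(pk).encode()
    tag = hmac.new(SECRET, raw, hashlib.sha256).digest()[:8]  # 8-byte MAC
    return base64.urlsafe_b64encode(raw + tag).decode()

def decode_id(token: str) -> int:
    """Verify the MAC and recover the original id; reject tampered tokens."""
    data = base64.urlsafe_b64decode(token)
    raw, tag = data[:-8], data[-8:]           # MAC length is fixed, so slice it off
    expected = hmac.new(SECRET, raw, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("forged or corrupted id")
    return int(raw.decode())

assert decode_id(encode_id(12345)) == 12345
```

The database keeps its plain sequential primary key (so InnoDB clustering and last_insert_id() keep working), and only the public-facing representation is protected.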
Presumably you have some other way to locate the record? Probably via a UNIQUE key? If so, the fetch the row after adding the random id.
Don't use a trigger, instead write application code. (Triggers can't solve all problems.)
No, there's no reliable way to call the record using a UNIQUE field. Also, I like a trigger. It runs automatically on the server without me having to bother. PHP code is inherently less efficient. But none of it matters, I solved the problem by using AUTO_INCREMENT and encrypting the primary keys. Works much better for my purposes.
Thinking about map spaces that have multiple maps on the same grid, maybe there could be a pin that the GM can assign to a map and there is a table of contents (kind of like bookmarking a spot on a Google Doc and having a table of contents to jump to any of the bookmarks). Maybe this would be nice if an elaborate campaign develops over the same map space.
It could also be interesting if someone slowly builds out a full map (like Barovia) over the course of an entire campaign.
I personally would like this kind of feature - if I’ve prepped a large dungeon in advance on a big map that’s covered in fog of war, I’d like to be able to snap the player view to the spot where the action is at… or at least be able to say “click on ‘Entrance’ in the table of contents”. The default spot when you join a game is the center of the map, but the entrance to the dungeon might be in a corner. It’s not too much trouble to say “ok everybody pan to the bottom left corner”. I’m not sure how many people intend to use Shmeppy to prep maps in advance like me though; it seems to be more intended to let you start drawing quick maps as soon as you sit down.
I have a different way of addressing the same problem that this feature tries to tackle. I wrote about this in an email where someone asked a similar question. Here’s an excerpt:
Will you be adding a feature that allows us to jump from area to area on the map (if I pre-draw a dungeon and several other key rooms could I quickly “jump”) to that area on the grid?
Yes! In fact, quite soon. There are many pieces involved in the mobile support I’ve started on this month (March 2020 Roadmap). One of them is compensating for mobile device’s small screen size.
Currently it’s pretty easy to make a map that exceeds what a desktop monitor can display all of fairly quickly, but it’s generally manageable since monitors are really quite large. But I think the problem will be unmanageable with mobile devices, and “getting lost” in the map will be common.
To compensate for this, I have three features planned: (1) I’m going to add “pinging” so if you click with the laser tool it’ll mark a point temporarily, (2) I’m going to add little arrow hints when a user is pinging or using their laser out-of-view and if you click/tap the arrow it will snap you there, (3) I’m going to add a minimap that will let you quickly jump to different areas of the map and allow you to orient yourself.
I think this should tidily solve the problem of getting lost, and it sounds like, also satisfy the need you’re encountering.
Search results for "module:Tree"
Tree - An N-ary tree
This is meant to be a full-featured N-ary tree representation with configurable error-handling and a simple events system that allows for transparent persistence to a variety of datastores. It is derived from Tree::Simple, but has a simpler interface...RSAVAGE/Tree-1.16 - 24 Jul 2023 00:28:36 UTC
Tree::Fast - the fastest possible implementation of a tree in pure Perl
This is meant to be the core implementation for Tree, stripped down as much as possible. There is no error-checking, bounds-checking, event-handling, convenience methods, or anything else of the sort. If you want something fuller-featured, please loo...RSAVAGE/Tree-1.16 - 24 Jul 2023 00:28:36 UTC
Bio::Tree::Tree - An implementation of the TreeI interface.
This object holds handles to Nodes which make up a tree....CJFIELDS/BioPerl-1.7.8 - 03 Feb 2021 05:15:14 UTC
B::Tree - Simplified version of B::Graph for demonstration
This is a very cut-down version of "B::Graph"; it generates minimalist tree graphs of the op tree of a Perl program, merely connecting the op nodes and labelling each node with the type of op. It was written as an example of how to write compiler mod...SIMON/B-Tree-0.02 - 29 Nov 2000 12:36:59 UTC
Tree::M - implement M-trees for efficient "metric/multimedia-searches"
(not yet) Ever had the problem of managing multi-dimensional (spatial) data but your database only had one-dimensional indices (b-tree etc.)? Queries like select data from table where latitude > 40 and latitude < 50 and longitude> 50 and longitude< 6...MLEHMANN/Tree-M-0.031 - 03 Mar 2005 17:56:08 UTC
Tree::R - Perl extension for the R-tree data structure and algorithms
R-tree is a data structure for storing and indexing and efficiently looking up non-zero-size spatial objects. EXPORT None by default....AJOLMA/Tree-R-0.072 - 14 Sep 2015 08:50:59 UTC
Tree::RB - Perl implementation of the Red/Black tree, a type of balanced binary search tree.
This is a Perl implementation of the Red/Black tree, a type of balanced binary search tree. A tied hash interface is also provided to allow ordered hashes to be used. See the Wikipedia article at <http://en.wikipedia.org/wiki/Red-black_tree> for furt...ARUNBEAR/Tree-RB-0.500006 - 07 Oct 2017 13:34:31 UTC
Tk::Tree - Create and manipulate Tree widgets
The Tree method creates a new window and makes it into a Tree widget and return a reference to it. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the Tree widget such as its...CTDEAN/Tk-Tree-0.05 - 13 Jan 1998 08:43:11 UTC
Tree::BK - Structure for efficient fuzzy matching
The Burkhard-Keller, or BK tree, is a structure for efficiently performing fuzzy matching. By default, this module assumes string input and uses "distance" in Text::Levenshtein::XS to compare items and build the tree. However, a subroutine giving the...NGLENN/Tree-BK-0.02 - 11 Oct 2014 08:52:11 UTC
Tree::VP - Vantage-Point Tree builder and searcher.
11 Jul 2016 21:05:48 UTC
TM::Tree - Topic Maps, trait for induced tree retrieval
Obviously, topic maps can carry information which is tree structured. A family pedigree is a typical example of it; associations having a particular type, particular roles and you can derive a tree structure from that. This is exactly what this opera...DRRHO/TM-1.56 - 08 Nov 2010 06:58:01 UTC
Tree::AVL - An AVL (balanced binary) tree for time-efficient storage and retrieval of comparable objects
AVL Trees are balanced binary trees, first introduced in "An Algorithm for the Organization of Information" by Adelson-Velskii and Landis in 1962. Balance is kept in an AVL tree during insertion and deletion by maintaining a 'balance' factor in each ...MBEEBE/Tree-AVL-1.077 - 13 Nov 2014 19:45:54 UTC
Tree::Fat - Perl Extension to Implement Fat-Node Trees
Implements object-oriented trees using algorithms adapted from b-trees and AVL trees (without resorting to yucky C++). Fat-node trees are not the best for many niche applications but they do have excellent all-terrain performance. TYPE Speed Flexibil...JPRIT/Tree-Fat-1.111 - 10 Mar 1999 16:10:50 UTC
TAP::Tree - TAP (Test Anything Protocol) parser which supported the subtest
TAP::Tree is a simple parser of TAP which supported the subtest. It parses the data of a TAP format to the data of tree structure. Moreover, the iterator for complicated layered tree structure is also prepared....MAGNOLIA/TAP-Tree-v0.0.5 - 10 Jun 2014 13:20:26 UTC
SQL::Tree - Generate a trigger-based SQL tree implementation
SQL::Tree generates a herarchical data (tree) implementation for SQLite and PostgreSQL using triggers, as described here: http://www.depesz.com/index.php/2008/04/11/my-take-on-trees-in-sql/ A single subroutine is provided that returns a list of SQL s...MLAWREN/SQL-Tree-0.05 - 28 Jan 2021 15:12:31 UTC
Log::Tree - lightweight but highly configurable logging class
04 Nov 2016 22:24:20 UTC
Pod::Tree - Create a static syntax tree for a POD
"Pod::Tree" parses a POD into a static syntax tree. Applications walk the tree to recover the structure and content of the POD. See "Pod::Tree::Node" for a description of the tree....MANWAR/Pod-Tree-1.31 - 22 Feb 2019 10:53:09 UTC
PFT::Tree - Filesystem tree mapping a PFT site
The structure is the following: ├── build ├── content │ └── ... ├── inject ├── pft.yaml └── templates Where: "content" is a directory is handled with a "PFT::Content" instance. "pft.yaml" is a configuration file handled with "PFT::Conf" The remaining...DACAV/PFT-v1.4.1 - 23 Jul 2019 15:14:22 UTC
SVN::Tree - SVN::Fs plus Tree::Path::Class
This module marries Tree::Path::Class to the Perl Subversion bindings, enabling you to traverse the files and directories of Subversion revisions and transactions, termed roots in Subversion API parlance....MJGARDNER/SVN-Tree-0.005 - 15 Mar 2012 18:22:52 UTC
CI fail on two tests
The CI keeps failing on this test, I'm not sure why.
Example run: https://github.com/huggingface/huggingface_hub/runs/5628640361?check_suite_focus=true
Ping @LysandreJik @osanseviero
_______________________ HfApiPublicTest.test_model_info ________________________
self = <tests.test_hf_api.HfApiPublicTest testMethod=test_model_info>
@with_production_testing
Downloading: 100%|██████████| 2.00/2.00 [00:00<00:00, 1.42kB/s]
def test_model_info(self):
_api = HfApi()
model = _api.model_info(repo_id=DUMMY_MODEL_ID)
self.assertIsInstance(model, ModelInfo)
self.assertNotEqual(model.sha, DUMMY_MODEL_ID_REVISION_ONE_SPECIFIC_COMMIT)
# One particular commit (not the top of `main`)
model = _api.model_info(
repo_id=DUMMY_MODEL_ID, revision=DUMMY_MODEL_ID_REVISION_ONE_SPECIFIC_COMMIT
)
self.assertIsInstance(model, ModelInfo)
self.assertEqual(model.sha, DUMMY_MODEL_ID_REVISION_ONE_SPECIFIC_COMMIT)
model = _api.model_info(
repo_id=DUMMY_MODEL_ID,
revision=DUMMY_MODEL_ID_REVISION_ONE_SPECIFIC_COMMIT,
securityStatus=True,
)
self.assertEqual(
> getattr(model, "securityStatus"),
{"containsInfected": False, "infectionTypes": []},
)
E AttributeError: 'ModelInfo' object has no attribute 'securityStatus'
tests/test_hf_api.py:725: AttributeError
__________________ InferenceApiTest.test_inference_with_audio __________________
self = <tests.test_inference_api.InferenceApiTest testMethod=test_inference_with_audio>
@with_production_testing
def test_inference_with_audio(self):
api = InferenceApi("facebook/wav2vec2-large-960h-lv60-self")
dataset = datasets.load_dataset(
"patrickvonplaten/librispeech_asr_dummy", "clean", split="validation"
)
data = self.read(dataset["file"][0])
result = api(data=data)
self.assertIsInstance(result, dict)
> self.assertTrue("text" in result)
E AssertionError: False is not true
tests/test_inference_api.py:73: AssertionError
@McPatate @Pierrci was the security status added to the staging endpoint? It seems as if the field was empty.
I would say no, but I'll let @Pierrci confirm.
For the second test, it seems the API is returning a 'Malformed soundfile' error
api = InferenceApi("facebook/wav2vec2-large-960h-lv60-self")
dataset = datasets.load_dataset(
"patrickvonplaten/librispeech_asr_dummy", "clean", split="validation"
)
def read(filename: str) -> bytes:
with open(filename, "rb") as f:
bpayload = f.read()
return
data = read(dataset["file"][0])
result = api(data=data)
result
>>> {'error': 'Malformed soundfile'}
@Narsil, on the error above, I think the code snippet above was working some time ago. The dataset is unchanged, so I'm a bit surprised about this. Was there any change in the API?
For the second test, it seems to be flaky; re-running it a couple of times locally makes it succeed. Couldn't reproduce on the CI, where it always fails.
In my case running the code of the second test in a clean colab always led to the malformed soundfile unfortunately. I'm surprised it passed locally for you.
@osanseviero @adrinjalali ,
facebook/wav2vec2-large-960h-lv60-self is not a pinned model anymore, so you're just hitting the load message.
Using facebook/wav2vec2-base-960h, which is pinned (it's used for most API testing purposes too), will hopefully fix that.
I also suggest using a different test call
self.assertTrue("text" in result, f"We received {result} instead")
Would help understand what's going on earlier.
Another possibility would be to test the full output:
self.assertEqual(result, {"text": "MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL"})
That should produce a better error message by default, and we would also be testing the full output.
Note @osanseviero your malformed soundfile is because your are returning None in your read function :)
Thanks @Narsil! The copy paste from the test went wrong :man_facepalming:. I will be testing this out in https://github.com/huggingface/huggingface_hub/pull/788
@McPatate @Pierrci was the security status added to the staging endpoint? It seems as if setting securityStatus in the params has no effect
It's using the same security scanner endpoint/instance as in prod (meaning that if you test with a model also in prod you should see something), but the webhook isn't plugged on it (meaning any random model added in staging will not be processed)
Also I'm not sure the scanner would manage doing its job with staging stuff as it only uses the Hub in production.
|
GITHUB_ARCHIVE
|
[AERIE 1885] Add enabled/disabled to goal in scheduling specification
Tickets addressed: AERIE-1885
Review: By commit
Merge strategy: Merge (no squash)
Description
This PR adds the capability to toggle a scheduling goal within a scheduling specification as either enabled or disabled. A disabled scheduling goal is ignored during scheduling execution.
Verification
A test, testSingleActivityPlanGoalDisabled(), is added in SchedulingIntegrationTests.java. Also tested through the UI by adding three goals, disabling the second goal directly via the db, and observing correct behavior.
Documentation
Two sentences have been added to https://github.com/NASA-AMMOS/aerie/wiki/Scheduling-Guide#specifying-the-order-of-goals noting the effect of enabled/disabled goals.
Future work
There have been discussions around how the ordering/priority of goals is implemented in the UI. These discussions are some ways off resulting in tickets, so for now there are no clear next steps resulting from this PR.
First off, thank you for your thoughtfulness!
I had some similar thoughts as well. I think the solution lies somewhere between option 1 and option 2. A refactoring may be in order.
It would seem that PostgresSpecificationRepository#getSpecification should return the full specification, disabled goals and all. Previously, it made sense that the goals were being compiled in getSpecification because all goals in the specification were used in a scheduling execution. It also makes sense that the goal's compilation is orchestrated by the PostgresSpecificationRepository as the output type Specification only makes sense with the Java representation of goals. With the addition of enabled/disabled, we'd like to not compile disabled goals (as you identified).
Okay, so I started writing thinking the answer lay between options 1 and 2, but now I'm veering towards option 2 as the better solution. Right now the system only needs to retrieve a Specification containing enabled goals, so disabled goals need not be compiled. If a future use case arises that needs the full Specification, we can refactor.
Again, thank you for your thoughtful response.
Previously, it made sense that the goals were being compiled in getSpecification because all goals in the specification were used in a scheduling execution. It also makes sense that the goal's compilation is orchestrated by the PostgresSpecificationRepository as the output type Specification only makes sense with the Java representation of goals.
Am I reading correctly that the database driver is also responsible for compiling scheduling TypeScript into Java rules? 🤔
The PostgresSpecificationRepository currently makes the call to compile the goals, with the Specification being returned to the Scheduler. PostgresSpecificationRepository
Am I reading correctly that the database driver is also responsible for compiling scheduling TypeScript into Java rules? 🤔
Yes, and I don't love it either 😅 Originally this decision was made because the Specification was defined in terms of the scheduler's Goal type, so in order to create a Specification object, you needed to turn the typescript strings into Goal objects. I would love to decouple that, but I'm having trouble envisioning exactly how.
In some ways, the Specification doesn't need to be concerned with the contents of the goals - it merely needs to refer to those goals (whether by id, or by reference to a java object, perhaps hidden behind an interface).
We could have the specification store a list of thunks, () -> Goal, (or, in what I think is a morally equivalent way, a list of GoalIds) which would allow us to defer compilation until after a specification has been created, without the specification needing to know that the goals are represented as typescript.
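As an illustrative sketch of that idea (in Python rather than the project's Java, with hypothetical names throughout), deferring goal compilation behind thunks could look like:

```python
from typing import Callable, List

Goal = dict  # stand-in for the scheduler's Goal type


def compile_goal(typescript_source: str) -> Goal:
    """Hypothetical stand-in for the real TypeScript-to-Goal compiler."""
    return {"source": typescript_source, "compiled": True}


class Specification:
    """Holds thunks (() -> Goal), so compilation is deferred until goals() is called."""

    def __init__(self, goal_thunks: List[Callable[[], Goal]]):
        self._thunks = goal_thunks

    def goals(self) -> List[Goal]:
        # Compilation happens here, not when the repository builds the spec,
        # so the spec never needs to know goals are stored as TypeScript.
        return [thunk() for thunk in self._thunks]


# The repository can construct the spec without compiling anything:
sources = ["export default () => Goal.ActivityRecurrenceGoal(...)"]
spec = Specification([lambda src=src: compile_goal(src) for src in sources])
```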
without the specification needing to know that the goals are represented as typescript.
:thinking: I think the domain model is a little thin here, specifically regarding what a "specification" is. Because the scheduler wants to work with concrete goals, and there's nowhere else between the scheduler and the repository to turn specifications into goals, the responsibility for performing this interpretation has fallen to the repository. (Let me know if I've misunderstood the domain here.)
It seems to me that the database should be returning the source-level data, and whoever receives it should then pass the specification on to some other component to turn the specification into goals, before then passing the goals on to whoever needed them. At a very abstract level, this could be done by dependency injection, giving whoever calls the repository a component that knows how to interpret specifications as goals.
Put slightly differently, it's not a Postgres abstraction that goals are stored as typescript -- it's a high-level design decision. Individual components may not need to know about this, but that decision does need to be located somewhere. Imagine, for the sake of probing the architecture, if we used the filesystem instead of Postgres to store TypeScript specifications -- would we really want to duplicate that code between multiple database drivers, or couple two drivers to the same post-processing concern? This is really the remit of the caller to interpret the stored data.
|
GITHUB_ARCHIVE
|
import struct


class WAL:
    """Parser for a SQLite write-ahead log file.

    Format reference:
    http://www.cclgroupltd.com/the-forensic-implications-of-sqlites-write-ahead-log/
    """

    HEADER = ">LLLLLLLL"  # 32-byte file header: eight big-endian u32 fields
    FRAME = ">LLLLLL"     # 24-byte frame header: six big-endian u32 fields

    def __init__(self, f):
        self.f = f
        data = self.f.read(struct.calcsize(self.HEADER))
        (self.signature, self.version, self.page_size, self.sequence,
         self.salt1, self.salt2, self.checksum1, self.checksum2) = \
            struct.unpack(self.HEADER, data)
        # 0x377F0682 = little-endian checksums, 0x377F0683 = big-endian
        if self.signature not in (0x377F0682, 0x377F0683):
            raise Exception("Invalid signature ({:02x})".format(self.signature))

    def frames(self):
        """Yield (page_number, size_in_pages, salt1, salt2, checksum1, checksum2, page)."""
        while True:
            data = self.f.read(struct.calcsize(self.FRAME))
            if len(data) == 0:
                break
            (page_number, size_in_pages, salt1, salt2,
             checksum1, checksum2) = struct.unpack(self.FRAME, data)
            page = self.f.read(self.page_size)
            yield (page_number, size_in_pages, salt1, salt2,
                   checksum1, checksum2, page)
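To make the layout concrete, here is a self-contained sketch that builds a synthetic one-frame WAL in memory (made-up page size and salt values) and walks it with the same struct formats the class above uses:

```python
import io
import struct

PAGE_SIZE = 512  # made-up page size for the synthetic file

# 32-byte WAL header: magic, format version, page size, checkpoint
# sequence, salt1, salt2, checksum1, checksum2 (all big-endian u32).
header = struct.pack(">LLLLLLLL", 0x377F0682, 3007000, PAGE_SIZE, 1, 11, 22, 0, 0)

# 24-byte frame header: page number, db size in pages (non-zero only on
# commit frames), salt1, salt2, checksum1, checksum2 -- then the page itself.
frame = struct.pack(">LLLLLL", 1, 1, 11, 22, 0, 0) + b"\x00" * PAGE_SIZE

f = io.BytesIO(header + frame)
magic, _version, page_size, *_ = struct.unpack(">LLLLLLLL", f.read(32))
assert magic == 0x377F0682
page_number, db_size, *_ = struct.unpack(">LLLLLL", f.read(24))
page = f.read(page_size)
print(page_number, db_size, len(page))  # → 1 1 512
```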
|
STACK_EDU
|
[Resolved] All websites that use Apache (Nginx forwards requests to Apache) show 502 Bad Gateway after upgrade
The upgrade from CentOS 7.9 to AlmaLinux 8 went smoothly, and the upgrade tool did not indicate any problems. After the upgrade was complete, however, all websites that used NGINX in proxy mode (forwarding requests to Apache) were no longer accessible. Only the websites that used NGINX alone still worked.
We have already tried to check for errors and repair them using the Plesk "Diagnostics & Repair" function. No improvement or change could be found here.
We also tried to follow these instructions, which also did not work: https://support.plesk.com/hc/en-us/articles/12377559249559-Site-does-not-work-on-PHP-FPM-handler-on-Plesk-server-503-service-unavailable
We also checked fail2ban. All server IP addresses are on the whitelist. We also deleted/activated all blocked IP addresses again for testing purposes.
Attached are screenshots of the problem and the errors and the generated centos2alma_feedback.zip
I hope there is a quick fix for this. I had to reset the server to a snapshot before the upgrade.
Best regards
Falk
2024-10-14_centos2alma_feedback.zip
@SandakovMM Do you have any ideas about that problem/bug?
@SandakovMM
After another attempt with migration script 1.4.3, the same error occurred again. Short solution: the Apache module mod_watchdog was not enabled after the upgrade to AlmaLinux 8.
After careful analysis, I found that the Apache service had not started and was in an error state.
I checked the status of the Apache service on AlmaLinux 8 with the following command:
systemctl status httpd
After this service could not be set to the status "Active: active (running)" even after a restart, I checked the log file:
cat /etc/httpd/logs/error_log
The output provided a clue as to what the problem was:
[Sun Oct 27 20:53:29.495534 2024] [lbmethod_heartbeat:notice] [pid 93807:tid<PHONE_NUMBER>60352] AH02282: No slotmem from mod_heartmonitor
[Sun Oct 27 20:53:29.617938 2024] [proxy_hcheck:crit] [pid 93807:tid<PHONE_NUMBER>60352] AH03262: mod_watchdog is required
[Sun Oct 27 20:53:29.618025 2024] [:emerg] [pid 93807:tid<PHONE_NUMBER>60352] AH00020: Configuration Failed, exiting
After the command to monitor the ports on which Apache must listen also produced no output:
netstat -tunap | grep httpd
Searching the Internet, I found this: https://support.plesk.com/hc/en-us/articles/12377651410839-Unable-to-start-Apache-on-a-Plesk-server-AH02093-mod-watchdog-is-required
After I applied this KB article (Tools & Settings --> Apache Webserver --> check the watchdog box), the problem was solved.
By the way, temporarily disabling and re-enabling nginx reverse proxy didn't help before finding a solution. The problem remained. Nevertheless, this test may be helpful for one or the other, so I am also linking this KB article:
https://support.plesk.com/hc/en-us/articles/12377745816599-How-to-install-and-enable-nginx-reverse-proxy-on-a-Plesk-for-Linux-server
nginx status: /usr/local/psa/admin/sbin/nginxmng --status
temporarily disable nginx: /usr/local/psa/admin/sbin/nginxmng --disable
re-enable nginx: /usr/local/psa/admin/sbin/nginxmng --enable
Hello @fbroen
I apologize for the delayed response, was occupied with publishing tasks.
Thank you for your thorough investigation! Let's keep this issue open until we add an action to re-enable the watchdog module during conversion, in case someone else faces the same problem.
|
GITHUB_ARCHIVE
|
I'm still interested to know how they knew you were using linux.
What is interesting is that they require java considering it is basically unsafe to use java at this point. Mozilla disables it by default. Flash is also a total disaster, but I guess we are stuck with it until some better standardized technology comes along to replace it.
Somehow or other, they were able to make that determination. I looked into user agent switchers, but in the interests of getting my homework done, I found…
Theoretically, a website shouldn't even know what OS you are using. It is more likely to be looking at which browser you are using via the user agent, and sometimes this can be a giveaway as to which OS you are using (e.g. you are probably running Linux if you are running iceweasel). This user agent field is changeable to whatever you want, it is usually there to help web servers send you stuff that is going to work in your browser. Usually websites either give you a version of their website that is customized for your browser or some generic default if they don't recognize the browser you are using. Maybe the McGraw Hill Connect website is just written sloppily and it is rejecting your user agent. Depending on your browser, there is probably a way to change it. There may even be a way to change it for just that one web site.
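As a rough illustration of the kind of check such a site might do (hypothetical code, not McGraw Hill's actual logic), OS sniffing from the User-Agent header is usually just substring matching:

```python
def guess_os(user_agent: str) -> str:
    """Naive OS detection of the sort sites use to gate access."""
    ua = user_agent.lower()
    if "windows" in ua:
        return "Windows"
    if "mac os" in ua or "macintosh" in ua:
        return "macOS"
    if "linux" in ua or "x11" in ua:
        return "Linux"
    return "unknown"


# Iceweasel on Debian gives itself away via both the browser token and "Linux":
print(guess_os("Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Iceweasel/38.0"))
# → Linux
```

Because the header is client-supplied, lying about it (a "user agent switcher") is all it takes to pass a check like this.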
There is also a possibility that you are using a browser that is blocking popups by default. Some websites assume you are using IE and therefore likely have popups enabled. This is kind of dumb, because I am not even sure IE allows them anymore.
TELL me about it. There was a page in the support section to test a computer's suitability: compatible browser, Java installed, Flash installed, pop-up blocker turned off, COMPATIBLE OS. I hit every mark, except that Linux got red-flagged. It pissed me off immensely, and their support agent was, as expected, no help. I found another page on McGraw Hill's Connect web site that asked for log-in info, and it sent me right to the courseware. Been completing lessons ever since. Also taking advantage of every soapbox I can find to warn folks that McGraw Hill is unfair to Linux users.
Like I said in an earlier post, McGraw Hill online courseware wouldn't let me log in simply because I was running Linux. When I found a way around the course's log-in page, everything ran fine.
Or check out this site: http://www.ubuntu.com/download/desktop
...pick a job in which you can't be replaced by a computer.
From the article: 'One reason for the underwhelming performance on the desktop is that the Bulldozer architecture emphasizes multithreaded performance over single-threaded performance. For desktop applications, where single-threaded performance is still king, this is a problem. Server workloads, in contrast, typically have to handle multiple users, network connections, and virtual machines concurrently. This makes them a much better fit for processors that support lots of concurrent threads. Some commentators have even suggested that Bulldozer was, first and foremost, a server processor; relatively weak desktop performance was to be expected, but it would all come good in the server room.
Unfortunately for AMD, it looks as though the decisions that hurt Bulldozer on the desktop continue to hurt it in the server room. Although the server benchmarks don't show the same regressions as were found on the desktop, they do little to justify the design of the new architecture.'
It's probably much too early to start editorializing about the end of AMD, or even to say with certainty that Bulldozer has failed, but my untrained eye can't yet see any possible silver lining in these new processors.
Link to Original Source
|
OPCFW_CODE
|
Enabled the share buttons on desktop version
added display block
:)
Haha, I wish it was that easy… The main question is how to integrate it in the layout.
How about enabling the share button in the post's page? Hide the comment
icon, and display the share button?
—
Reply to this email directly or view it on GitHub
https://github.com/TelescopeJS/Telescope/pull/1090#issuecomment-126853382
.
--
Regards,
Val Galin
Let's be friends! :)
t: @valgalin http://twitter.com/valgalin
g+: +ValGalin https://plus.google.com/+ValGalin
It's just like how it's implemented on Screenings.io: sharing buttons are on the post's page. Thoughts?
in the post's page. Thoughts?
Yeah that could work in theory, but in practice both pages just include the same template. I wonder if it's worth making the logic more complex just for this…
Hi Sacha, that was my first thought too: logically it's a hassle. But it occurred to me that I have successfully hidden the discuss icon on the Iris theme's single post via CSS.
So I made a PR where I show the share button on the post's page and hide the discuss icon. Thoughts? :)
|
GITHUB_ARCHIVE
|
Threshold is the team leader of Halcyon's Seed Nineteen. She and her unit went on their own to rescue Halcyon's most powerful Seedling warrior, Tesseract, from the Acheron Empire. She and her team rescued Tesseract and narrowly escaped to the lower, uncharted dimensions, finding themselves in Earth's New York City. At first, Threshold and her team mistook the primitive Earth for a hostile world just like their own home dimension and came into a misunderstanding with the Fantastic Four. The misunderstanding was defused by Dreamcatcher's revelation, but Threshold learned from Reed Richards about Gallowglass and realized that Tesseract was in danger.
Seed Nineteen and the Fantastic Four teamed up and chased after Gallowglass, who had captured Tesseract, to the firedrake Redeemer. When they came into conflict with Gallowglass and his forces, both Seed Nineteen and the Four were on the verge of defeat until Susan Storm caused Gallowglass' apparent death by exploding him and destroying the Redeemer; Threshold and her team (minus Dreamcatcher) escaped, separated from the Fantastic Four and Tesseract, to Pyx's Ironwater City.
Threshold and her unit later spied on Ronan the Accuser, who had been informed of the Fantastic Four's location. Threshold and Seed Nineteen followed Ronan back to Earth and battled him to rescue the Fantastic Four. Though Ronan proved more powerful than even Tesseract, Reed Richards coordinated both his team and Seed Nineteen to defeat him, with Threshold using her ability to absorb the powers of her allies and Ronan's cosmic power against the Accuser. After Ronan's defeat, Threshold was impressed with Reed Richards and considered him a warrior without equal, embracing him with a passionate kiss (considering his "seed" to be "precious"). Susan Storm immediately stopped her, and Threshold and Seed Nineteen departed back to their dimension, planning to liberate Pyx from the Acheron Empire with help from other Seed units.
Threshold can absorb objects and energies, such as bullets and explosions, which she converts into kinetic attacks.
Threshold, as a team leader, is Seed Nineteen's voice of authority and tactical coordinator.
|
OPCFW_CODE
|
The next big change that’s in the works for the arcade app is an upgrade of the version of the Starling framework that we use. This has involved a complete overhaul of most of the menus and views in the app, and also to some of the background systems that are used. However, this work has been extremely beneficial to the overall quality of the app and its codebase.
There are a few main advantages to updating to Starling 2.0:
- Any new or updated libraries we use were beginning to drop support for older versions of starling, meaning we couldn’t keep up to date with bug fixes or new features we might want to use.
- Improved memory management with the introduction of an automatic pooling system for some of the more commonly used objects.
- The new Skip Unchanged Frames feature that was added into Starling 2.0.
Being able to upgrade to the latest version of any libraries we used is always a good idea, so I won't stay on that subject. A greater benefit came from the object pooling system that was added. In the app we use a lot of Tweens and Points; the object pools mean that we can reuse these objects without constantly allocating and de-allocating new ones, which is a slow process. This also stops the ActionScript garbage collector from triggering too frequently, which is a big cause of stuttering or slowdown in our apps.
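The pooling idea itself is language-agnostic (Starling 2.0 ships its own helper in starling.utils.Pool); here is a minimal sketch of the core mechanism in Python, not Starling's actual implementation:

```python
class Pool:
    """Reuse objects instead of allocating fresh ones each frame."""

    def __init__(self, factory):
        self._factory = factory
        self._free = []

    def get(self):
        # Reuse a freed object when possible; allocate only when the pool is empty.
        return self._free.pop() if self._free else self._factory()

    def put(self, obj):
        self._free.append(obj)


point_pool = Pool(lambda: [0.0, 0.0])  # a Point stand-in
p = point_pool.get()   # allocates the first time
point_pool.put(p)      # return it to the pool instead of letting the GC reclaim it
q = point_pool.get()   # reuses the exact same object: no new allocation
assert p is q
```

Because allocation only happens when the pool is empty, a steady-state frame loop churning through Tweens and Points produces almost no garbage.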
The single biggest reason we had for upgrading to the Starling 2.0 was the new feature that was added that allows the renderer to skip redrawing frames that haven’t changed. Given that parts of the app are almost always static, this is a feature that could deliver a huge improvement in performance.
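Conceptually, skipping unchanged frames amounts to a dirty flag on the render loop; here is a toy sketch of the idea (not Starling's internals):

```python
class Renderer:
    """Toy render loop that redraws only when something has changed."""

    def __init__(self):
        self.dirty = True      # first frame always draws
        self.draw_calls = 0

    def invalidate(self):
        """Called whenever something on screen changes."""
        self.dirty = True

    def render_frame(self):
        if not self.dirty:     # nothing changed: skip the redraw entirely
            return False
        self.draw_calls += 1
        self.dirty = False
        return True


r = Renderer()
r.render_frame()               # first frame draws
for _ in range(39):            # a static second's worth of 40 fps frames...
    r.render_frame()           # ...costs no additional draw calls
assert r.draw_calls == 1
```

A static home screen maps to the loop where the flag stays clean, which is why the profiles later in the post show near-zero frame times when nothing is moving.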
Testing the benefits
To test the possible benefits of this frame skipping feature we built a test application that took one of the backgrounds from bubble and profiled the results with the new feature both enabled and disabled using Adobe’s profiler, Scout.
The device that we used was an older phone, the Galaxy Nexus, and the sea background from the bubble game. The phone was chosen because while it is an older phone it is still equivalent to a low budget device and this background because it has full screen animations and the billowing clouds are quite complex.
The scene was setup to match the target frame rate of the arcade app, which is 40fps; the animation was exported to target 24fps. This is what the scene looked like on the device:
The first test that we performed was a baseline, using the older version of Starling and the background animations playing and looping. This was the result, taken directly from Scout (The blue bars represent the amount of time spent processing each frame and the red line is the target to maintain a consistent 40 frames per second):
This shows that the average time to render the scene was just over 32 milliseconds, almost twice the time that we have allocated for each frame at our desired frame rate.
I then updated the project to use Starling 2.0 and enabled the Skip Unchanged Frames feature. The scene and the device remained exactly the same. This is the result:
This shows a vast improvement in how long each frame was taking to render. Even the slowest frames are only taking 18 milliseconds. It also shows when the frames are skipped because there is no change between individual frames.
As a further test to see what the difference would be on a scene that was entirely static, I repeated the above tests with the animation paused. Old version on the top, Starling 2.0 on the bottom:
This is a fantastic difference, with frames taking less than 1 millisecond to render. This is quite a significant result in relation to the arcade app because there are a lot of times when we just display a static screen, especially on the home screen.
Of course these tests are purely focused on profiling the time it takes to render a frame, there is no background processing being performed outside of the animation system, but given that rendering now takes considerably less time there is a knock on effect for when we do have more processing to perform.
Updating the App
Updating the arcade app to take full advantage of these new features was a large task, despite our efforts to create all of the menus used in the app with Starling components alongside the feathers framework we still had some parts of the app that were built with traditional flash components. Given that skipping unchanged frames is not possible where there are objects on the native flash stage we had to start by converting these views over to starling.
The biggest of these was the in-app chat client. This has had a complete overhaul so that it takes full advantage of the hardware-accelerated rendering offered by using Starling. It also allows us to share fonts between the games and the chat client, reducing the overall memory footprint of the app and reducing the number of texture swaps that need to occur on the GPU.
While converting the flash-based views we found that some of the Starling-based views were resizing and refreshing constantly, even when stationary. This was something that stopped the frames from skipping, even though they looked like they hadn’t changed. Once this had been corrected we then made a pass of the other views in the game, to make sure there were no other occurrences of this.
During this upgrade we took the opportunity to strip out any old code or libraries that we are no longer using, this not only simplifies the code it also considerably reduces the time it takes to fully build the app and slightly reduced the size of the generated SWF file that gets packaged into the final app.
Results in the full app
After upgrading the arcade app to Starling 2.0 and updating the menus to fully support skipping unchanged frames I performed some more profiling to see what effect the upgrade had in the full app.
This test was performed on a OnePlus Three on the home screen of the app. This includes scrollable containers for the blog posts, game launcher and centre panel banners. The test included scrolling through the blog posts, swiping through the centre panels and game list and opening the navigation menu by tapping on the toolbar.
This is an example of what the scene would look like:
The first test was a baseline, using a version of the app built with the old version of Starling.
While on the home screen, this is pretty much the pattern no matter what I do; it's well within the target line, but processing is happening constantly in every frame.
The second test was just a static home screen in an app built using Starling 2.0
This is as we expected from the results of the initial tests that were performed on the bubble background earlier. Since nothing is changing there is very little processing that needs to be done on each frame.
The third test was to see what the performance looked like when scrolling through the blog posts and swiping through the game launcher.
This shows that between scrolls and swipes there is very little processing going on, but as soon as you start to scroll it jumps up to what it was in the initial baseline test, before dropping back down when the screen becomes static again.
The final test on the home screen was to open the navigation menu by tapping on the toolbar. This is an important test because this is one of the few components of the app that is still rendered using flash’s normal vector renderer, so it will show how skip unchanged frames works when paired with native stage objects.
You can see that as the menu opens it stops skipping frames, even though the starling scene hasn’t changed. This is because the native flash stage is always rendered on top of the starling stage, which means that it needs to be redrawn every frame to avoid any flash content leaving a trail wherever it moves on the screen.
The following image shows that the frames start to skip again once the navigation menu gets removed from the stage.
To see what difference these changes had made in the games tests were run in spin, on the same device as previously used. This was the result of the old version of starling:
And then the same test ran again with Starling 2.0:
This graph shows the frame time between spins, there is very little happening on screen so there are the frames that do nothing, with some redraws happening every other frame, these are likely caused by the chat client updating itself.
During a spin it looks exactly as we would expect, since there is a lot of movement on screen it needs to redraw itself constantly, once the spin completes it returns to the same pattern as the previous image.
While there has been a lot of work involved in this upgrade, it has been more than worth it. A lot of the internal systems and many of the menus have been updated or re-written, but there shouldn’t be a need for such a major update for a long time and anything new that we might need to use will be much easier to implement given that starling 2.0 now has more support than the older version we were using.
I have mainly focused on the skipping frames feature, but there have been other benefits as well. The blog view alone now uses an average of 10MB less memory thanks to the way textures are handled in Starling 2.0 and the new memory pooling features.
A side effect of upgrading has also been a boost to the time it takes to change orientation on Android devices. Orientation change on Android causes the render context to be lost, meaning that all textures need to be restored to the GPU. Previously this took a long time, but the changes made in the latest versions of Adobe AIR (required for Starling 2.0) allow asynchronous texture uploading, which has reduced this time greatly.
The changes and refactoring that we made to the code has made it much easier to extend and improve, so any big changes like this upgrade will be easier to do in the future.
We’ve only really used a few of the new features that were added in this new version of starling. There are some other interesting new features, such as being able to add dynamic lights using bump-maps and more filters and masking options that could be used to make the clients look even more visually rich. But that’s all something for the future…
|
OPCFW_CODE
|
URL of experiment on Pavlovia: Pavlovia
URL of experiment on Gitlab: Milestones of Early Cognitive Development / PlaymoJoseph · GitLab
Description of the problem: After encountering problems running a more complex experiment online, I created a completely new project on gitlab.pavlovia.org and synced an experiment that ONLY shows a text message (no additional code, absolutely nothing fancy). There seem to be no problems synchronising.
But when I try to run the experiment online just a blank screen appears. This happens if I try to run it online via PsychoPy (button “Run the study online (with pavlovia.org)”) and also if I try to pilot it via the Pavlovia website. If I try running it online via the “runner” window (button “Run PsychoJS task from Pavlovia”), the error message “404 Not Found nginx” is displayed in the browser.
I checked that:
- experiment name, folder name and project name are the same.
- ssh keys are working
- I am using the last version of PsychoPy
I tried it on different browsers (chrome and safari) and a new MacBook (2020) with the current operating system. I am NOT new to PsychoPy and I tried everything I could find in different threads on the problem, but nothing is working and I would gladly appreciate any help you could offer. Thanks in advance!
You have a folder called index.html in your repository. Delete this folder and try syncing again. Pavlovia expects an index.html file and two .js-files (experimentname-legacy-browsers.js, experimentname.js) in the root folder of the experiment.
Did you specify an Output path in the online tab of the Experiment properties? If so, leave this line blank.
As long as Pavlovia does not know the software platform and the platform version, your experiment won’t run online.
Best wishes Jens
Thank you SO much. The output path was empty, but deleting the index folder did the trick. Thanks!!!
@KatrinR - thanks for posting this. When you initially synced your experiment for the first time, did you get any kind of 403 error in your browser? I am repeatedly having this issue today. I try to add a study made in v2022.2.4 Builder to Pavlovia, and get as far as pressing ‘OK’ on the Committing changes box, then I get a 403 error in my browser, followed by what looks like the same issue you had - a folder called index.html appears in the local folder and repository and the study won’t run in Pilot / Run mode, as you describe.
I’m having other issues today as well that may / may not be linked, but could you or @JensBoelte please help me - how do I best delete the index.html folder? I can’t see how to delete it from the repository - am I missing a button / option somewhere? Help I’ve found online doesn’t align with what I see when I go into the repository.
If I try to delete it locally and then sync again I get the following error:
Traceback (most recent call last):
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\app\builder\builder.py", line 1369, in onPavloviaSync
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\app\builder\builder.py", line 804, in fileExport
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\scripts\psyexpCompile.py", line 71, in generateScript
    compileScript(infile=exp, version=None, outfile=filename)
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\scripts\psyexpCompile.py", line 245, in compileScript
    _makeTarget(thisExp, outfile, targetOutput)
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\scripts\psyexpCompile.py", line 217, in _makeTarget
    script = thisExp.writeScript(outfile, target=targetOutput, modular=True)
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\experiment\_experiment.py", line 286, in writeScript
  File "C:\Users\clarele\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\experiment\components\settings\__init__.py", line 857, in writeInitCodeJS
    with open(os.path.join(folder, "index.html"), 'wb') as html:
PermissionError: [Errno 13] Permission denied: 'C:\Users\clarele\OneDrive - Edge Hill University\Package 3\Pilot_Valence_Task\index.html'
I deleted the html folder in my local folder on my Mac and I checked in PsychoPy under “Settings/Online” that the entry “Output path” was empty (the entry was html, although I am pretty sure I did not select that, but it might be an issue when changing from an older version of PsychoPy to the newest one). Then, I synchronised my experiment to Pavlovia and it worked. I was not able to repair the more complex version, though, and I decided to rebuild it step by step (which makes sense in my case as this more complex experimental version was built with a much older version of PsychoPy). I hope this helps and you can resolve your issue soon!
Thanks! I’m following the same process as you in that case, only I am not able to re-sync my experiment once I have deleted the folder. I’ll see if I can find out why that might be the case
Delete the index.html folder locally and sync again. AFAIK, online you can only delete files from a git repository, not folders.
You could try to delete the folder using git itself, see
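For reference, a minimal sketch of removing the folder with git on the command line. This assumes you run it inside your local clone of the project and that the stray folder sits at the repository root, as described in this thread; `git rm -r` stages the deletion of a whole directory, which the GitLab web UI can't do:

```shell
# Sketch — run inside your local project clone; folder name taken from this thread.
git rm -r index.html                            # remove the folder and stage the deletion
git commit -m "Remove stray index.html folder"  # record the deletion
git push                                        # publish it to gitlab.pavlovia.org
```

After the push, PsychoPy's sync should no longer see the folder and can regenerate a proper index.html file.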
BTW, I have seen that you have your PsychoPy experiment on a OneDrive. Cloud-based storage and git don’t seem to get along well. So rather than storing your PsychoPy experiment in your OneDrive (Google Drive, Dropbox, etc.), rely on syncing via git to move an experiment from one computer to another.
Best wishes Jens.
Thank you! I have indeed been working on it with a PhD student, on a university machine where we have to use OneDrive. I will take yours and KatrinR’s advice and try moving the experiment to a new machine where we can build it on the hard disk. I’ll have a go at this now. Thanks so much for the guidance on dealing with deleting the folder. I’ll report back on how it goes.
|
OPCFW_CODE
|
Setting up a Node.js package registry is often the first step towards scaling code-sharing and making life a little bit easier for your team.
Sharing more modules means duplicating less code. It also helps in building more modular and maintainable software. However, the overhead around setting up and maintaining a private NPM registry can be massive.
Using Bit, you can remove most of the overhead around a private registry while reducing the overhead around the packaging and publishing process.
In this short tutorial, I’ll show you how, using Bit, you can set up a private Node.js registry and publish dozens of components and modules in just a few minutes, in 3 steps.
Let’s get started.
Let’s set a private package registry for your team.
We’ll use Bit’s web platform to host the modules we share and the native NPM/Yarn client to install them.
First things first, set up a registry.
a. Head over to bit.dev and click on get started.
b. Sign-Up. It’s free.
c. Create a collection:
To set a private collection, just select “private”. That’s it!
You now have a collection in Bit’s web platform, which also functions as a package registry. Let’s see how to publish packages to this registry.
Now let’s publish modules and components to our newly created registry. Since we set up the registry on Bit’s platform, we can leverage Bit for this workflow as well, to save precious time and effort.
First, install Bit. Then, head over to the project in which you have the packages you want to publish. Note that since we are using Bit, you can publish packages right from any existing project without refactoring.
# 1 Install Bit
npm install bit-bin -g

# 2 Create a local workspace for your project
$ cd project-directory
$ bit init
Instead of having to create a new repository, configure the package etc, let’s use Bit to isolate components and modules from existing projects and publish them as packages.
Let’s point Bit to the right packages in the project using the bit add command. Let’s track the components button, login and logo in the following project’s directory structure.
$ tree
.
├── App.js
├── App.test.js
├── favicon.ico
├── index.js
└── src
    └── components
        ├── button
        │   ├── Button.js
        │   ├── Button.spec.js
        │   └── index.js
        ├── login
        │   ├── Login.js
        │   ├── Login.spec.js
        │   └── index.js
        └── logo
            ├── Logo.js
            ├── Logo.spec.js
            └── index.js

5 directories, 13 files
To track these files as components we can use bit add with a glob pattern, pointing Bit to the path in which the modules we want to publish are found.
$ bit add src/components/*
tracking 3 new components
Note that Bit will automatically run through the module’s file & package dependencies, and create an isolated environment for the code which contains everything it needs in order to run in other projects.
Here’s a recommended example for React components.
$ bit import bit.envs/compilers/babel --compiler
$ bit import bit.envs/testers/mocha --tester
Now, let’s tag a version for the packages we are about to publish (following the previous example).
$ bit tag --all 1.0.0
3 components tagged | 3 added, 0 changed, 0 auto-tagged
added components: components/button@1.0.0, components/login@1.0.0, components/logo@1.0.0
Run bit login to authenticate your machine to Bit’s platform.
$ bit login
Your browser has been opened to visit: http://bit.dev/bit-login?redirect_uri=http://localhost:8085...
Finally, export (publish) the packages.
$ bit export user-name.collection-name
exported 3 components to scope user-name.collection-name
All your packages will now be available in your collection, ready to install using NPM/Yarn in any project. Piece of cake, and we can use this workflow to quickly publish large numbers of packages in very little time.
Now that our packages are ready, let’s learn how to install them.
First, configure bit.dev as a scoped registry to your NPM client.
npm config set '@bit:registry' https://node.bit.dev
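Under the hood this just adds a scoped-registry line to your `.npmrc`; here is a sketch of the equivalent manual edit (assuming a user-level `~/.npmrc` — a per-project `.npmrc` works the same way):

```shell
# Equivalent manual edit (sketch): map the @bit scope to Bit's registry,
# so any `npm i @bit/...` resolves against https://node.bit.dev.
echo '@bit:registry=https://node.bit.dev' >> ~/.npmrc
```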
That’s it :)
Any package can now be installed using your native NPM/Yarn client.
Head over to the component/module page (Example).
Check out the pane on the top-right side. Choose the “NPM” tab and copy the command:
npm i @bit/user-name.collection-name.namespace.packagename
Let’s see an example.
Here’s a React Hero component shared as a package. Let’s use the following command to install it (user = bit, collection = movie-app, name space = components, package name = hero):
npm i @bit/bit.movie-app.components.hero
That’s it. You can now freely share and install these packages just as if you published them to any other NPM registry.
Another advantage of installing packages via Bit’s registry is that you can use Bit to import and make changes to a version of the actual source code of the packages right from any project you’re working on.
Unlike other registries, which require a cumbersome process for cloning and publishing changes to a package, Bit lets different team members import and modify packages from different projects.
For example, let’s look at this repo structure.
$ tree .
.
├── bit.json
├── package.json
└── src
    ├── hello-world
    │   ├── hello-world.js
    │   └── index.js
    └── utils
        ├── noop.js
        └── left-pad.js
We’ll use Bit to import the left-pad component into your local project.
$ bit init
$ bit import bit.utils/string/left-pad --path src/left-pad
We can now make the required changes, and export them back to the collection (creating a new version) or to a new collection to share.
In this short tutorial we learned how to set up a private package registry, publish components and modules to it, and install them with the native NPM/Yarn client.
Sharing more code in a managed way, while reducing the overhead and time involved in this process, means your team can speed up development and simplify the maintenance of your codebase.
|
OPCFW_CODE
|
M: Ask HN: What are some ways to gamify writing? - rayalez
I'm practicing writing fiction, and it's gradually coming along, but it's pretty hard for me. I really want to get good at it, but my reddit-addicted brain just seems to refuse to engage in this activity and doesn't enter the flow. Programming, on the other hand, works very well (it has an immediate feedback/gratification loop and clear goals). So I'm trying to figure out what kind of system would help me to experience the same thing in writing.<p>/r/WritingPrompts is pretty helpful, and blogging has a sort of natural gamification (traffic/upvotes/comments) embedded in it, but these are misleading; my brain is getting dopamine spikes out of seeing upvotes or refreshing the stats, not out of the writing process itself.<p>I've been thinking about making a website with daily flash-fiction challenges (100-1000 words, winners determined by voting, leaderboard of the best writers), but it's not that different from writingprompts. Github-like streaks could help perhaps (a visual representation of how many words you have written every day, and how many days in a row you write). Or maybe a text editor with a progress bar that would show how many words you have written until reaching a daily goal....<p>There's gotta be a way to design a feedback loop that would make writing fiction addictive.<p>What are some ways to make the writing process more fun and engaging? Are there some tools/techniques I can use?
R: exolymph
Something I love doing is collaborative writing -- my chat group [1] has a
#storytelling channel and people toss around ideas in there. Or I'll chat back
and forth with one individual and we'll shape the story together. Not exactly
gamified, but when you're working with another person there's a pressure to
respond quickly that helps me loosen up and just write.
Another idea: use
[http://www.themostdangerouswritingapp.com/](http://www.themostdangerouswritingapp.com/)
and literally reward yourself after each sprint -- try chocolate chips or
something.
[1] [http://exolymph.com/cyberpunk-futurism-chat-group/](http://exolymph.com/cyberpunk-futurism-chat-group/)
R: bayonetz
A sort of tangent to this was when I would play writing/poetry golf with
someone else. Literally, I write a line, then they write a line, and so on.
Our products would often lead to ideas I'd take on solo later.
|
HACKER_NEWS
|
How can I make spring-boot-devtools work without @EnableAutoConfiguration or @SpringBootApplication?
I'm working on an application using spring-boot v1.3.5.RELEASE, which doesn't use @SpringBootApplication nor @EnableAutoConfiguration (a customer requirement).
I want to enable spring-boot-devtools to have the "restart" feature in my IDE. I've put it as dependency, hit mvn spring-boot:run. the restarter works:
DEBUG o.s.b.d.r.Restarter - Creating new Restarter for thread Thread[main,5,main]
DEBUG o.s.b.d.r.Restarter - Immediately restarting application
DEBUG o.s.b.d.r.Restarter - Created RestartClassLoader org.springframework.boot.devtools.restart.classloader.RestartClassLoader@22843e39
DEBUG o.s.b.d.r.Restarter - Starting application application.Application with URLs [file:/D:/git/repos/...]
But it doesn't reload after IDE code modification and rebluid (Ctrl+B on eclipse).
The problem seems to be that devtools relies on @EnableAutoConfiguration (factories loaded with the META-INF/spring.factories) to be configured (and I can't use this annotation).
Basically, I need to do that by myself (see below the content of the devtools spring.factories file):
# Application Initializers
org.springframework.context.ApplicationContextInitializer=\
org.springframework.boot.devtools.restart.RestartScopeInitializer
# Application Listeners
org.springframework.context.ApplicationListener=\
org.springframework.boot.devtools.restart.RestartApplicationListener
# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.devtools.autoconfigure.DevToolsDataSourceAutoConfiguration,\
org.springframework.boot.devtools.autoconfigure.LocalDevToolsAutoConfiguration,\
org.springframework.boot.devtools.autoconfigure.RemoteDevToolsAutoConfiguration
# Environment Post Processors
org.springframework.boot.env.EnvironmentPostProcessor=\
org.springframework.boot.devtools.env.DevToolsHomePropertiesPostProcessor,\
org.springframework.boot.devtools.env.DevToolsPropertyDefaultsPostProcessor
# Restart Listeners
org.springframework.boot.devtools.restart.RestartListener=\
org.springframework.boot.devtools.log4j2.Log4J2RestartListener
How can I do that (I'm not particulary fluent in spring-boot lingua) ?
You should be able to @Import the DevTools auto-configuration classes that you want to use via a @Configuration class of your own. You may not want the remote support, in which case the following should suffice:
@Configuration
@Import({LocalDevToolsAutoConfiguration.class, DevToolsDataSourceAutoConfiguration.class})
class ManualDevToolsConfiguration {
}
In the unlikely event that you want to use a bean of your own in place of any of DevTools' auto-configured beans, you'll need to order things carefully to ensure that your beans are defined before the DevTools auto-configuration classes are imported and the evaluation of any on missing bean conditions is performed.
Thank you for your response. I've tried the solution but it doesn't work: It seems that the restarter feature needs also the application initializer and listeners to work.
So I've made a spring "profile" and added in properties "context.initializer.classes" and "context.listener.classes". That's doing the trick.
But I have no solutions for Environment post processors (need this to override "spring-devtools.properties" in my HOME) and the restart listener (don't know if it is useful - we use log4j in our app).
Have you an hint for these (register env. post processor & restart listener) ? That would be great.
They should be loaded without using auto-configuration via SpringApplication or SpringApplicationBuilder.
I agree with you for ApplicationContextInitializer and ApplicationListener (I used properties because on this project I can't modify SpringApplication), but for the others (EnvironmentPostProcessor and RestartListener), I'm stuck...
|
STACK_EXCHANGE
|
Hopefully I can explain this problem. I have an extension forwarded to voice mail; when voice mail answers I want it to transfer to another extension. So first I set up a call handler and in the extension field I put the extension of the forwarded phone, then on the call transfer page I set it up to transfer to the other extension. However I kept getting the "System is temporarily unavailable" message. So I searched this forum and found a bunch of posts on the matter, and I followed most of the suggestions, but to no avail. I then added a subscriber mailbox doing the same thing, and I get the same message. However if I call the voice mail pilot number and dial the extension that is forwarded, it works fine (it transfers to the other extension).
I am also getting these errors.
GetCallHandlerProperties returned [0x8004010f] on line 433 of file e:\views\cs_UE184.108.40.206\un_Conv2\AvConvPhoneHandler\AvConvPHGreetingSvr\AvSPlayGreeting.cpp
Running conversation PHGreeting on Port 1
I noticed on another post to run configmgr and set up the default database, which I did, but again it did not help.
Sounds like the callhandler might be a little messed up. However, if the callhandler wasn't messed up, I think this still wouldn't do what you want it to do (at least with the way it sounds like this is set up).
When a fowarded call comes into Unity, it's going to act on the forwarding call rule. That rule is going to be "send to greeting" for a matching DTMF ID of a call handler or subscriber. So, you'd actually hit the greeting of the call hander as opposed to getting transferred back out.
You can get pretty close to what you want by making the greeting in that call handler a blank greeting, and making the "after greeting" action to be "send caller to" a call handler (and choose the same call handler that you're already in) and make sure the "conversation" is set to "attempt transfer". It sounds like the call handler already has the correct transfer rules set up (since it transfers to where you'd like it to when its DTMF ID is simply dialed inside Unity).
Now, for that call handler, you can try changing the "owner" field on the SA to some valid subscriber. It doesn't matter who it is, just as long as they are a valid subscriber. This setting isn't really used quite yet, but it's still required. Give that a try. If that does not work, you can try running DBWalker.exe.
Thanks for your quick reply. I set up a subscriber, made a blank greeting, then set it so that after the greeting it attempts a transfer to the call handler I set up. It had a valid owner (e.g. Administrator) but I changed it to another user. And I am still getting the same issue. So I ran DBWalker.exe and these are the errors I received:
Handler Text Name=New York Operator
Location Object Alias=default
Handler owner alias=(error) Invalid Handler Owner <--- owner is, for example, Administrator
Can you try changing the call handler's (if New York Operator is the call handler that handles the forwarding call to transfer back out) owner and message recipient to something other than EAdmin? Maybe EAdmin isn't as kosher as we think.
That's pretty surprising... no, I have no clue what's going on then. I'd need to dial into your system and take a look around to offer any more here. dbWalker is making a very, very simple check there and if it's failing it would have to mean the link is bad. It's simply searching for the ObjectID of the subscriber marked in the recipient and administrator fields on the primary call handler and seeing if that user exists in the subscriber collection. That's it. There aren't very many reasons that would be failing.
The only thing I can think of is you could have two handlers named something similar and you're editing a different one than dbWalker is complaining about - I've seen that before. Beyond that I'd have to look around to see what's going on. If you have WTS access you can ping me at firstname.lastname@example.org and I can take a peek.
Thanks for the offer to look around a bit, but I have opened a TAC case and I had the firewall administrator allow only 1 external IP address (the TAC engineer's) to the Unity server (what a hassle that was). However, I really think something is messed up in Unity. The TAC engineer had me set up a dialing rule to fix my original issue and that seems to be working, but there are more serious underlying issues that are still happening, such as I cannot add a mailbox, however I was able to just a few days ago. Also all of my templates have issues with them; for example, on the greeting page of any of my subscriber templates, the drop-down list to choose which greeting (Standard, Internal, Closed, etc.) is empty. The same thing happened when I added a call handler: there was nothing in the greeting field. However, when I based the call handler off an existing call handler that is working, the new call handler worked fine. So anyway, the TAC engineer is supposed to get into Unity to try and see why that is happening.
By the way Jeff, great documents on Answermonkey, and I look forward to getting your book when it comes out.
|
OPCFW_CODE
|