Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 9 new columns ({'backported_patch', 'repo', 'source_file_path', 'source_file_content', 'metadata', 'upstream_commit', 'id', 'upstream_patch', 'cve'}) and 5 missing columns ({'patch_name', 'cves', 'files_modified', 'patch_content', 'incident_id'}).

This happened while the json dataset builder was generating data using

hf://datasets/anicka/opensuse-cve-backport-dataset/data/training-data-v1.jsonl (at revision bc6d9020d693b7847d9463bac70ca791ad8a2a67), [/tmp/hf-datasets-cache/medium/datasets/14962980716974-config-parquet-and-info-anicka-opensuse-cve-backp-6b20bea8/hub/datasets--anicka--opensuse-cve-backport-dataset/snapshots/bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/train.jsonl (origin=hf://datasets/anicka/opensuse-cve-backport-dataset@bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/train.jsonl), /tmp/hf-datasets-cache/medium/datasets/14962980716974-config-parquet-and-info-anicka-opensuse-cve-backp-6b20bea8/hub/datasets--anicka--opensuse-cve-backport-dataset/snapshots/bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/training-data-v1.jsonl (origin=hf://datasets/anicka/opensuse-cve-backport-dataset@bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/training-data-v1.jsonl)]

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 674, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              id: string
              cve: string
              package: string
              upstream_commit: string
              repo: string
              source_file_path: string
              source_file_content: string
              upstream_patch: string
              backported_patch: string
              metadata: struct<cherry_picked: bool, upstream_pr: string, files_count: int64>
                child 0, cherry_picked: bool
                child 1, upstream_pr: string
                child 2, files_count: int64
              license: string
              to
              {'incident_id': Value('int64'), 'package': Value('string'), 'patch_name': Value('string'), 'patch_content': Value('string'), 'cves': List(Value('string')), 'files_modified': List(Value('string')), 'license': Value('string')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 9 new columns ({'backported_patch', 'repo', 'source_file_path', 'source_file_content', 'metadata', 'upstream_commit', 'id', 'upstream_patch', 'cve'}) and 5 missing columns ({'patch_name', 'cves', 'files_modified', 'patch_content', 'incident_id'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/anicka/opensuse-cve-backport-dataset/data/training-data-v1.jsonl (at revision bc6d9020d693b7847d9463bac70ca791ad8a2a67), [/tmp/hf-datasets-cache/medium/datasets/14962980716974-config-parquet-and-info-anicka-opensuse-cve-backp-6b20bea8/hub/datasets--anicka--opensuse-cve-backport-dataset/snapshots/bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/train.jsonl (origin=hf://datasets/anicka/opensuse-cve-backport-dataset@bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/train.jsonl), /tmp/hf-datasets-cache/medium/datasets/14962980716974-config-parquet-and-info-anicka-opensuse-cve-backp-6b20bea8/hub/datasets--anicka--opensuse-cve-backport-dataset/snapshots/bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/training-data-v1.jsonl (origin=hf://datasets/anicka/opensuse-cve-backport-dataset@bc6d9020d693b7847d9463bac70ca791ad8a2a67/data/training-data-v1.jsonl)]
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
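The second option the message suggests — separating the files into different configurations — is done in the dataset's README front matter. A hedged sketch, assuming the two files should become two configs (the config names `default` and `training-v1` are illustrative, not taken from the dataset):

```yaml
# README.md front matter (illustrative config names)
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train.jsonl
  - config_name: training-v1
    data_files:
      - split: train
        path: data/training-data-v1.jsonl
```

With each config pointing at a single schema-consistent file, the cast error above no longer applies, because the builder never tries to merge the two column sets.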


Preview schema (column: type):
incident_id: int64
package: string
patch_name: string
patch_content: string
cves: list
files_modified: list
license: string
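The schema clash reported above can be caught before upload by checking that every JSONL row carries exactly these columns. A minimal sketch, assuming rows are one JSON object per line (the reference column set comes from the preview schema; the checker itself and its toy rows are illustrative):

```python
import json

# Expected columns, per the preview schema above.
EXPECTED = {"incident_id", "package", "patch_name", "patch_content",
            "cves", "files_modified", "license"}

def check_rows(lines):
    """Return (line_number, extra_cols, missing_cols) for the first
    row whose keys differ from EXPECTED, or None if all rows match."""
    for n, line in enumerate(lines, 1):
        keys = set(json.loads(line))
        if keys != EXPECTED:
            return n, sorted(keys - EXPECTED), sorted(EXPECTED - keys)
    return None

# Two toy rows: the first matches the preview schema, the second uses
# the other file's schema from the error message above.
good = json.dumps({k: None for k in EXPECTED})
bad = json.dumps({"id": "x", "cve": "CVE-0000-0000", "package": "p",
                  "upstream_commit": "c", "repo": "r",
                  "source_file_path": "f", "source_file_content": "s",
                  "upstream_patch": "u", "backported_patch": "b",
                  "metadata": {}, "license": "l"})
print(check_rows([good, bad]))
```

Run against the toy rows, the checker flags row 2 with nine extra and five missing columns, matching the counts in the error message.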
incident_id: 16027
package: python-aiohttp
patch_name: invalid-escapes-in-tests.patch
patch_content:
--- a/tests/test_http_parser.py +++ b/tests/test_http_parser.py @@ -369,7 +369,7 @@ def test_max_header_field_size(parser, s name = b't' * size text = (b'GET /test HTTP/1.1\r\n' + name + b':data\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -399,7 +399,7 @@ def test_max_header_value_size(parser, s text = (b'GET /test HTTP/1.1\r\n' b'data:' + name + b'\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -430,7 +430,7 @@ def test_max_header_value_size_continuat text = (b'GET /test HTTP/1.1\r\n' b'data: test\r\n ' + name + b'\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -551,7 +551,7 @@ def test_http_request_parser_bad_version @pytest.mark.parametrize('size', [40965, 8191]) def test_http_request_max_status_line(parser, size): path = b't' * (size - 5) - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data( @@ -595,7 +595,7 @@ def test_http_response_parser_utf8(respo @pytest.mark.parametrize('size', [40962, 8191]) def test_http_response_parser_bad_status_line_too_long(response, size): reason = b't' * (size - 2) - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) 
with pytest.raises(http_exceptions.LineTooLong, match=match): response.feed_data( --- a/tests/test_streams.py +++ b/tests/test_streams.py @@ -721,7 +721,7 @@ class TestStreamReader: async def test___repr__waiter(self, loop): stream = self._make_one() stream._waiter = loop.create_future() - assert re.search("<StreamReader w=<Future pending[\S ]*>>", + assert re.search(r"<StreamReader w=<Future pending[\S ]*>>", repr(stream)) stream._waiter.set_result(None) await stream._waiter --- a/tests/test_urldispatch.py +++ b/tests/test_urldispatch.py @@ -586,7 +586,7 @@ def test_add_route_with_invalid_re(route def test_route_dynamic_with_regex_spec(router): handler = make_handler() - route = router.add_route('GET', '/get/{num:^\d+}', handler, + route = router.add_route('GET', r'/get/{num:^\d+}', handler, name='name') url = route.url_for(num='123') @@ -595,7 +595,7 @@ def test_route_dynamic_with_regex_spec(r def test_route_dynamic_with_regex_spec_and_trailing_slash(router): handler = make_handler() - route = router.add_route('GET', '/get/{num:^\d+}/', handler, + route = router.add_route('GET', r'/get/{num:^\d+}/', handler, name='name') url = route.url_for(num='123') @@ -1125,7 +1125,7 @@ def test_plain_resource_canonical(): def test_dynamic_resource_canonical(): canonicals = { '/get/{name}': '/get/{name}', - '/get/{num:^\d+}': '/get/{num}', + r'/get/{num:^\d+}': '/get/{num}', r'/handler/{to:\d+}': r'/handler/{to}', r'/{one}/{two:.+}': r'/{one}/{two}', } --- a/tests/test_web_request.py +++ b/tests/test_web_request.py @@ -340,7 +340,7 @@ def test_single_forwarded_header_multipl def test_single_forwarded_header_quoted_escaped(): - header = 'BY=identifier;pROTO="\lala lan\d\~ 123\!&"' + header = r'BY=identifier;pROTO="\lala lan\d\~ 123\!&"' req = make_mocked_request('GET', '/', headers=CIMultiDict({'Forwarded': header})) assert req.forwarded[0]['by'] == 'identifier'
cves: ["CVE-2020-14343", "CVE-2020-25659"]
files_modified: []
license: Apache-2.0
incident_id: 16008
package: python-aiohttp
patch_name: invalid-escapes-in-tests.patch
patch_content:
--- a/tests/test_http_parser.py +++ b/tests/test_http_parser.py @@ -369,7 +369,7 @@ def test_max_header_field_size(parser, s name = b't' * size text = (b'GET /test HTTP/1.1\r\n' + name + b':data\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -399,7 +399,7 @@ def test_max_header_value_size(parser, s text = (b'GET /test HTTP/1.1\r\n' b'data:' + name + b'\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -430,7 +430,7 @@ def test_max_header_value_size_continuat text = (b'GET /test HTTP/1.1\r\n' b'data: test\r\n ' + name + b'\r\n\r\n') - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data(text) @@ -551,7 +551,7 @@ def test_http_request_parser_bad_version @pytest.mark.parametrize('size', [40965, 8191]) def test_http_request_max_status_line(parser, size): path = b't' * (size - 5) - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) with pytest.raises(http_exceptions.LineTooLong, match=match): parser.feed_data( @@ -595,7 +595,7 @@ def test_http_response_parser_utf8(respo @pytest.mark.parametrize('size', [40962, 8191]) def test_http_response_parser_bad_status_line_too_long(response, size): reason = b't' * (size - 2) - match = ("400, message='Got more than 8190 bytes \({}\) when reading" + match = (r"400, message='Got more than 8190 bytes \({}\) when reading" .format(size)) 
with pytest.raises(http_exceptions.LineTooLong, match=match): response.feed_data( --- a/tests/test_streams.py +++ b/tests/test_streams.py @@ -721,7 +721,7 @@ class TestStreamReader: async def test___repr__waiter(self, loop): stream = self._make_one() stream._waiter = loop.create_future() - assert re.search("<StreamReader w=<Future pending[\S ]*>>", + assert re.search(r"<StreamReader w=<Future pending[\S ]*>>", repr(stream)) stream._waiter.set_result(None) await stream._waiter --- a/tests/test_urldispatch.py +++ b/tests/test_urldispatch.py @@ -586,7 +586,7 @@ def test_add_route_with_invalid_re(route def test_route_dynamic_with_regex_spec(router): handler = make_handler() - route = router.add_route('GET', '/get/{num:^\d+}', handler, + route = router.add_route('GET', r'/get/{num:^\d+}', handler, name='name') url = route.url_for(num='123') @@ -595,7 +595,7 @@ def test_route_dynamic_with_regex_spec(r def test_route_dynamic_with_regex_spec_and_trailing_slash(router): handler = make_handler() - route = router.add_route('GET', '/get/{num:^\d+}/', handler, + route = router.add_route('GET', r'/get/{num:^\d+}/', handler, name='name') url = route.url_for(num='123') @@ -1125,7 +1125,7 @@ def test_plain_resource_canonical(): def test_dynamic_resource_canonical(): canonicals = { '/get/{name}': '/get/{name}', - '/get/{num:^\d+}': '/get/{num}', + r'/get/{num:^\d+}': '/get/{num}', r'/handler/{to:\d+}': r'/handler/{to}', r'/{one}/{two:.+}': r'/{one}/{two}', } --- a/tests/test_web_request.py +++ b/tests/test_web_request.py @@ -340,7 +340,7 @@ def test_single_forwarded_header_multipl def test_single_forwarded_header_quoted_escaped(): - header = 'BY=identifier;pROTO="\lala lan\d\~ 123\!&"' + header = r'BY=identifier;pROTO="\lala lan\d\~ 123\!&"' req = make_mocked_request('GET', '/', headers=CIMultiDict({'Forwarded': header})) assert req.forwarded[0]['by'] == 'identifier'
cves: ["CVE-2020-14343", "CVE-2020-25659"]
files_modified: []
license: Apache-2.0
incident_id: 16027
package: python-aiohttp
patch_name: rename-request-fixture.patch
patch_content:
diff -Nru aiohttp-3.4.4.orig/aiohttp/web_response.py aiohttp-3.4.4/aiohttp/web_response.py --- aiohttp-3.4.4.orig/aiohttp/web_response.py 2018-09-05 09:40:54.000000000 +0200 +++ aiohttp-3.4.4/aiohttp/web_response.py 2018-11-20 13:49:49.197173589 +0100 @@ -279,27 +279,27 @@ # remove the header self._headers.popall(hdrs.CONTENT_LENGTH, None) - def _start_compression(self, request): + def _start_compression(self, mock_request): if self._compression_force: self._do_start_compression(self._compression_force) else: - accept_encoding = request.headers.get( + accept_encoding = mock_request.headers.get( hdrs.ACCEPT_ENCODING, '').lower() for coding in ContentCoding: if coding.value in accept_encoding: self._do_start_compression(coding) return - async def prepare(self, request): + async def prepare(self, mock_request): if self._eof_sent: return if self._payload_writer is not None: return self._payload_writer - await request._prepare_hook(self) - return await self._start(request) + await mock_request._prepare_hook(self) + return await self._start(mock_request) - async def _start(self, request, + async def _start(self, mock_request, HttpVersion10=HttpVersion10, HttpVersion11=HttpVersion11, CONNECTION=hdrs.CONNECTION, @@ -310,15 +310,15 @@ SET_COOKIE=hdrs.SET_COOKIE, SERVER_SOFTWARE=SERVER_SOFTWARE, TRANSFER_ENCODING=hdrs.TRANSFER_ENCODING): - self._req = request + self._req = mock_request keep_alive = self._keep_alive if keep_alive is None: - keep_alive = request.keep_alive + keep_alive = mock_request.keep_alive self._keep_alive = keep_alive - version = request.version - writer = self._payload_writer = request._payload_writer + version = mock_request.version + writer = self._payload_writer = mock_request._payload_writer headers = self._headers for cookie in self._cookies.values(): @@ -326,13 +326,13 @@ headers.add(SET_COOKIE, value) if self._compression: - self._start_compression(request) + self._start_compression(mock_request) if self._chunked: if version != HttpVersion11: 
raise RuntimeError( "Using chunked encoding is forbidden " - "for HTTP/{0.major}.{0.minor}".format(request.version)) + "for HTTP/{0.major}.{0.minor}".format(mock_request.version)) writer.enable_chunking() headers[TRANSFER_ENCODING] = 'chunked' if CONTENT_LENGTH in headers: @@ -597,7 +597,7 @@ else: await super().write_eof() - async def _start(self, request): + async def _start(self, mock_request): if not self._chunked and hdrs.CONTENT_LENGTH not in self._headers: if not self._body_payload: if self._body is not None: @@ -605,7 +605,7 @@ else: self._headers[hdrs.CONTENT_LENGTH] = '0' - return await super()._start(request) + return await super()._start(mock_request) def _do_start_compression(self, coding): if self._body_payload or self._chunked: diff -Nru aiohttp-3.4.4.orig/tests/test_client_connection.py aiohttp-3.4.4/tests/test_client_connection.py --- aiohttp-3.4.4.orig/tests/test_client_connection.py 2018-09-05 09:40:55.000000000 +0200 +++ aiohttp-3.4.4/tests/test_client_connection.py 2018-11-20 13:38:21.602987474 +0100 @@ -12,7 +12,7 @@ @pytest.fixture -def request(): +def mock_request(): return mock.Mock() diff -Nru aiohttp-3.4.4.orig/tests/test_web_exceptions.py aiohttp-3.4.4/tests/test_web_exceptions.py --- aiohttp-3.4.4.orig/tests/test_web_exceptions.py 2018-09-05 09:40:55.000000000 +0200 +++ aiohttp-3.4.4/tests/test_web_exceptions.py 2018-11-20 14:04:51.565410583 +0100 @@ -15,7 +15,7 @@ @pytest.fixture -def request(buf): +def mock_request(buf): method = 'GET' path = '/' writer = mock.Mock() @@ -54,9 +54,9 @@ assert name in web.__all__ -async def test_HTTPOk(buf, request): +async def test_HTTPOk(buf, mock_request): resp = web.HTTPOk() - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match(('HTTP/1.1 200 OK\r\n' @@ -87,11 +87,11 @@ assert 1 == codes.most_common(1)[0][1] -async def test_HTTPFound(buf, request): +async def test_HTTPFound(buf, mock_request): resp = 
web.HTTPFound(location='/redirect') assert '/redirect' == resp.location assert '/redirect' == resp.headers['location'] - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match('HTTP/1.1 302 Found\r\n' @@ -111,12 +111,12 @@ web.HTTPFound(location=None) -async def test_HTTPMethodNotAllowed(buf, request): +async def test_HTTPMethodNotAllowed(buf, mock_request): resp = web.HTTPMethodNotAllowed('get', ['POST', 'PUT']) assert 'GET' == resp.method assert ['POST', 'PUT'] == resp.allowed_methods assert 'POST,PUT' == resp.headers['allow'] - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match('HTTP/1.1 405 Method Not Allowed\r\n' @@ -168,7 +168,7 @@ resp.body is None -def test_link_header_451(buf, request): +def test_link_header_451(buf, mock_request): resp = web.HTTPUnavailableForLegalReasons(link='http://warning.or.kr/') assert 'http://warning.or.kr/' == resp.link
cves: ["CVE-2020-14343", "CVE-2020-25659"]
files_modified: []
license: Apache-2.0
incident_id: 16008
package: python-aiohttp
patch_name: rename-request-fixture.patch
patch_content:
diff -Nru aiohttp-3.4.4.orig/aiohttp/web_response.py aiohttp-3.4.4/aiohttp/web_response.py --- aiohttp-3.4.4.orig/aiohttp/web_response.py 2018-09-05 09:40:54.000000000 +0200 +++ aiohttp-3.4.4/aiohttp/web_response.py 2018-11-20 13:49:49.197173589 +0100 @@ -279,27 +279,27 @@ # remove the header self._headers.popall(hdrs.CONTENT_LENGTH, None) - def _start_compression(self, request): + def _start_compression(self, mock_request): if self._compression_force: self._do_start_compression(self._compression_force) else: - accept_encoding = request.headers.get( + accept_encoding = mock_request.headers.get( hdrs.ACCEPT_ENCODING, '').lower() for coding in ContentCoding: if coding.value in accept_encoding: self._do_start_compression(coding) return - async def prepare(self, request): + async def prepare(self, mock_request): if self._eof_sent: return if self._payload_writer is not None: return self._payload_writer - await request._prepare_hook(self) - return await self._start(request) + await mock_request._prepare_hook(self) + return await self._start(mock_request) - async def _start(self, request, + async def _start(self, mock_request, HttpVersion10=HttpVersion10, HttpVersion11=HttpVersion11, CONNECTION=hdrs.CONNECTION, @@ -310,15 +310,15 @@ SET_COOKIE=hdrs.SET_COOKIE, SERVER_SOFTWARE=SERVER_SOFTWARE, TRANSFER_ENCODING=hdrs.TRANSFER_ENCODING): - self._req = request + self._req = mock_request keep_alive = self._keep_alive if keep_alive is None: - keep_alive = request.keep_alive + keep_alive = mock_request.keep_alive self._keep_alive = keep_alive - version = request.version - writer = self._payload_writer = request._payload_writer + version = mock_request.version + writer = self._payload_writer = mock_request._payload_writer headers = self._headers for cookie in self._cookies.values(): @@ -326,13 +326,13 @@ headers.add(SET_COOKIE, value) if self._compression: - self._start_compression(request) + self._start_compression(mock_request) if self._chunked: if version != HttpVersion11: 
raise RuntimeError( "Using chunked encoding is forbidden " - "for HTTP/{0.major}.{0.minor}".format(request.version)) + "for HTTP/{0.major}.{0.minor}".format(mock_request.version)) writer.enable_chunking() headers[TRANSFER_ENCODING] = 'chunked' if CONTENT_LENGTH in headers: @@ -597,7 +597,7 @@ else: await super().write_eof() - async def _start(self, request): + async def _start(self, mock_request): if not self._chunked and hdrs.CONTENT_LENGTH not in self._headers: if not self._body_payload: if self._body is not None: @@ -605,7 +605,7 @@ else: self._headers[hdrs.CONTENT_LENGTH] = '0' - return await super()._start(request) + return await super()._start(mock_request) def _do_start_compression(self, coding): if self._body_payload or self._chunked: diff -Nru aiohttp-3.4.4.orig/tests/test_client_connection.py aiohttp-3.4.4/tests/test_client_connection.py --- aiohttp-3.4.4.orig/tests/test_client_connection.py 2018-09-05 09:40:55.000000000 +0200 +++ aiohttp-3.4.4/tests/test_client_connection.py 2018-11-20 13:38:21.602987474 +0100 @@ -12,7 +12,7 @@ @pytest.fixture -def request(): +def mock_request(): return mock.Mock() diff -Nru aiohttp-3.4.4.orig/tests/test_web_exceptions.py aiohttp-3.4.4/tests/test_web_exceptions.py --- aiohttp-3.4.4.orig/tests/test_web_exceptions.py 2018-09-05 09:40:55.000000000 +0200 +++ aiohttp-3.4.4/tests/test_web_exceptions.py 2018-11-20 14:04:51.565410583 +0100 @@ -15,7 +15,7 @@ @pytest.fixture -def request(buf): +def mock_request(buf): method = 'GET' path = '/' writer = mock.Mock() @@ -54,9 +54,9 @@ assert name in web.__all__ -async def test_HTTPOk(buf, request): +async def test_HTTPOk(buf, mock_request): resp = web.HTTPOk() - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match(('HTTP/1.1 200 OK\r\n' @@ -87,11 +87,11 @@ assert 1 == codes.most_common(1)[0][1] -async def test_HTTPFound(buf, request): +async def test_HTTPFound(buf, mock_request): resp = 
web.HTTPFound(location='/redirect') assert '/redirect' == resp.location assert '/redirect' == resp.headers['location'] - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match('HTTP/1.1 302 Found\r\n' @@ -111,12 +111,12 @@ web.HTTPFound(location=None) -async def test_HTTPMethodNotAllowed(buf, request): +async def test_HTTPMethodNotAllowed(buf, mock_request): resp = web.HTTPMethodNotAllowed('get', ['POST', 'PUT']) assert 'GET' == resp.method assert ['POST', 'PUT'] == resp.allowed_methods assert 'POST,PUT' == resp.headers['allow'] - await resp.prepare(request) + await resp.prepare(mock_request) await resp.write_eof() txt = buf.decode('utf8') assert re.match('HTTP/1.1 405 Method Not Allowed\r\n' @@ -168,7 +168,7 @@ resp.body is None -def test_link_header_451(buf, request): +def test_link_header_451(buf, mock_request): resp = web.HTTPUnavailableForLegalReasons(link='http://warning.or.kr/') assert 'http://warning.or.kr/' == resp.link
cves: ["CVE-2020-14343", "CVE-2020-25659"]
files_modified: []
license: Apache-2.0
incident_id: 7497
package: tboot
patch_name: tboot-grub2-fix-xen-submenu-name.patch
patch_content:
From: Michael Chang <mchang@suse.com> Subject: fix xen submenu name to show tboot version References: bnc#865815 Patch-Mainline: no Index: tboot-1.9.6/tboot/20_linux_xen_tboot =================================================================== --- tboot-1.9.6.orig/tboot/20_linux_xen_tboot +++ tboot-1.9.6/tboot/20_linux_xen_tboot @@ -232,7 +232,7 @@ while [ "x${xen_list}" != "x" ] ; do rel_tboot_dirname=`make_system_path_relative_to_its_root $tboot_dirname` tboot_version="1.9.6" list="${linux_list}" - echo "submenu \"Xen ${xen_version}\" \"Tboot ${tboot_version}\"{" + echo "submenu \"Xen ${xen_version} with Tboot ${tboot_version}\"{" while [ "x$list" != "x" ] ; do linux=`version_find_latest $list` echo "Found linux image: $linux" >&2
cves: ["CVE-2017-16837"]
files_modified: ["tboot/20_linux_xen_tboot"]
license: BSD-3-Clause
incident_id: 7497
package: tboot
patch_name: tboot-grub2-fix-menu-in-xen-host-server.patch
patch_content:
From: Michael Chang <mchang@suse.com> Subject: [PATCH] fix menu in xen host server References: bnc#771689, bnc#757895 Patch-Mainline: no When system is configred as "Xen Virtual Machines Host Server", the grub2 menu is not well organized. We could see some issues on it. - Many duplicated xen entries generated by links to xen hypervisor - Non bootable kernel entries trying to boot xen kernel natively - The -dbg xen hypervisor takes precedence over release version This patch fixes above three issues. v2: References: bnc#877040 Create only hypervisor pointed by /boot/xen.gz symlink to not clutter the menu with multiple versions and also not include -dbg. Use custom.cfg if you need any other custom entries. v3: References: bnc#865815 Porting to tboot in order to fix duplicated xen entries Index: tboot-1.9.6/tboot/20_linux_tboot =================================================================== --- tboot-1.9.6.orig/tboot/20_linux_tboot +++ tboot-1.9.6/tboot/20_linux_tboot @@ -225,6 +225,49 @@ while [ "x${tboot_list}" != "x" ] && [ " break fi done + + config= + for i in "${dirname}/config-${version}" "${dirname}/config-${alt_version}" "/etc/kernels/kernel-config-${version}" ; do + if test -e "${i}" ; then + config="${i}" + break + fi + done + + # try to get the kernel config if $linux is a symlink + if test -z "${config}" ; then + lnk_version=`basename \`readlink -f $linux\` | sed -e "s,^[^0-9]*-,,g"` + if (test -n ${lnk_version} && test -e "${dirname}/config-${lnk_version}") ; then + config="${dirname}/config-${lnk_version}" + fi + fi + + # check if we are in xen domU + if [ ! 
-e /proc/xen/xsd_port -a -e /proc/xen ]; then + # we're running on xen domU guest + dmi=/sys/class/dmi/id + if [ -r "${dmi}/product_name" -a -r "${dmi}/sys_vendor" ]; then + product_name=`cat ${dmi}/product_name` + sys_vendor=`cat ${dmi}/sys_vendor` + if test "${sys_vendor}" = "Xen" -a "${product_name}" = "HVM domU"; then + # xen HVM guest + xen_pv_domU=false + fi + fi + else + # we're running on baremetal or xen dom0 + xen_pv_domU=false + fi + + if test "$xen_pv_domU" = "false" ; then + # prevent xen kernel without pv_opt support from booting + if (grep -qx "CONFIG_XEN=y" "${config}" 2> /dev/null && ! grep -qx "CONFIG_PARAVIRT=y" "${config}" 2> /dev/null); then + echo "Skip xenlinux kernel $linux" >&2 + list=`echo $list | tr ' ' '\n' | grep -vx $linux | tr '\n' ' '` + continue + fi + fi + if test -n "${initrd}" ; then echo "Found initrd image: ${dirname}/${initrd}" >&2 else Index: tboot-1.9.6/tboot/20_linux_xen_tboot =================================================================== --- tboot-1.9.6.orig/tboot/20_linux_xen_tboot +++ tboot-1.9.6/tboot/20_linux_xen_tboot @@ -52,6 +52,12 @@ fi export TEXTDOMAIN=grub export TEXTDOMAINDIR=${prefix}/share/locale +if [ ! 
-e /proc/xen/xsd_port -a -e /proc/xen ]; then +# we're running on xen domU guest +# prevent setting up nested virt on HVM or PV domU guest + exit 0 +fi + CLASS="--class gnu-linux --class gnu --class os --class xen" if [ "x${GRUB_DISTRIBUTOR}" = "x" ] ; then @@ -185,9 +191,17 @@ linux_list=`for i in /boot/vmlinu[xz]-* if [ "x${linux_list}" = "x" ] ; then exit 0 fi -xen_list=`for i in /boot/xen*; do - if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi - done` +# bnc#877040 - Duplicate entries for boot menu created +# only create /boot/xen.gz symlink boot entry +if test -L /boot/xen.gz; then + xen_list=`readlink -f /boot/xen.gz` +else + # bnc#757895 - Grub2 menu items incorrect when "Xen Virtual Machines Host Server" selected + # wildcard expasion with correct suffix (.gz) for not generating many duplicated menu entries + xen_list=`for i in /boot/xen*.gz; do + if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then echo -n "$i " ; fi + done` +fi tboot_list=`for i in /boot/tboot*.gz; do if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi done`
cves: ["CVE-2017-16837"]
files_modified: ["tboot/20_linux_tboot", "tboot/20_linux_xen_tboot"]
license: BSD-3-Clause
incident_id: 11147
package: u-boot-jetson-tk1
patch_name: 0010-CVE-2019-13106-ext4-fix-out-of-boun.patch
patch_content:
From 323c3196640bbadb8d2817ca6ec9ec7833381cb2 Mon Sep 17 00:00:00 2001 From: Paul Emge <paulemge@forallsecure.com> Date: Mon, 8 Jul 2019 16:37:07 -0700 Subject: [PATCH] CVE-2019-13106: ext4: fix out-of-bounds memset In ext4fs_read_file in ext4fs.c, a memset can overwrite the bounds of the destination memory region. This patch adds a check to disallow this. This fixes bsc#1144656. Signed-off-by: Paul Emge <paulemge@forallsecure.com> (cherry picked from commit e205896c5383c938274262524adceb2775fb03ba) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- fs/ext4/ext4fs.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/fs/ext4/ext4fs.c b/fs/ext4/ext4fs.c index 2a28031d14..54f65e7e11 100644 --- a/fs/ext4/ext4fs.c +++ b/fs/ext4/ext4fs.c @@ -61,6 +61,7 @@ int ext4fs_read_file(struct ext2fs_node *node, loff_t pos, lbaint_t delayed_skipfirst = 0; lbaint_t delayed_next = 0; char *delayed_buf = NULL; + char *start_buf = buf; short status; if (blocksize <= 0) @@ -130,6 +131,7 @@ int ext4fs_read_file(struct ext2fs_node *node, loff_t pos, } } else { int n; + int n_left; if (previous_block_number != -1) { /* spill */ status = ext4fs_devread(delayed_start, @@ -142,8 +144,9 @@ int ext4fs_read_file(struct ext2fs_node *node, loff_t pos, } /* Zero no more than `len' bytes. */ n = blocksize - skipfirst; - if (n > len) - n = len; + n_left = len - ( buf - start_buf ); + if (n > n_left) + n = n_left; memset(buf, 0, n); } buf += blocksize - skipfirst;
cves: ["CVE-2019-13104", "CVE-2019-13106"]
files_modified: ["fs/ext4/ext4fs.c"]
license: SUSE-Proprietary-or-OSS
incident_id: 11147
package: u-boot-jetson-tk1
patch_name: 0011-CVE-2019-13104-ext4-check-for-under.patch
patch_content:
From 4c97c0c06a31b7e15d4acbe9986968e8dbf47d14 Mon Sep 17 00:00:00 2001 From: Paul Emge <paulemge@forallsecure.com> Date: Mon, 8 Jul 2019 16:37:05 -0700 Subject: [PATCH] CVE-2019-13104: ext4: check for underflow in ext4fs_read_file in ext4fs_read_file, it is possible for a broken/malicious file system to cause a memcpy of a negative number of bytes, which overflows all memory. This patch fixes the issue by checking for a negative length. This fixes bsc#1144675. Signed-off-by: Paul Emge <paulemge@forallsecure.com> (cherry picked from commit 878269dbe74229005dd7f27aca66c554e31dad8e) [mb: delete ext_cache_fini() call as we don't implent caches for ext4] Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- fs/ext4/ext4fs.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/ext4/ext4fs.c b/fs/ext4/ext4fs.c index 54f65e7e11..0b4dc3157c 100644 --- a/fs/ext4/ext4fs.c +++ b/fs/ext4/ext4fs.c @@ -64,13 +64,13 @@ int ext4fs_read_file(struct ext2fs_node *node, loff_t pos, char *start_buf = buf; short status; - if (blocksize <= 0) - return -1; - /* Adjust len so it we can't read past the end of the file. */ if (len + pos > filesize) len = (filesize - pos); + if (blocksize <= 0 || len <= 0) + return -1; + blockcnt = lldiv(((len + pos) + blocksize - 1), blocksize); for (i = lldiv(pos, blocksize); i < blockcnt; i++) {
[ "CVE-2019-13104", "CVE-2019-13106" ]
[ "fs/ext4/ext4fs.c" ]
SUSE-Proprietary-or-OSS
11,147
u-boot-jetson-tk1
0001-XXX-openSUSE-XXX-Prepend-partition-.patch
From f157d20e42d1f151c9fa402dc635c2879539ee2f Mon Sep 17 00:00:00 2001 From: Guillaume GARDET <guillaume.gardet@free.fr> Date: Wed, 13 Apr 2016 13:44:29 +0200 Subject: [PATCH] XXX openSUSE XXX: Prepend partition 2 (and 3 fo chromebook snow) to list of boot partition to load DTB before EFI Also add new folders to find DTB --- include/config_distro_bootcmd.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/config_distro_bootcmd.h b/include/config_distro_bootcmd.h index 555efb7433..6579ffe107 100644 --- a/include/config_distro_bootcmd.h +++ b/include/config_distro_bootcmd.h @@ -141,7 +141,7 @@ "load ${devtype} ${devnum}:${distro_bootpart} " \ "${fdt_addr_r} ${prefix}${efi_fdtfile}\0" \ \ - "efi_dtb_prefixes=/ /dtb/ /dtb/current/\0" \ + "efi_dtb_prefixes=/ /dtb/ /dtb/current/ /boot/ /boot/dtb/ /boot/dtb/current/\0" \ "scan_dev_for_efi=" \ "setenv efi_fdtfile ${fdtfile}; " \ BOOTENV_EFI_SET_FDTFILE_FALLBACK \ @@ -412,7 +412,7 @@ "scan_dev_for_boot_part=" \ "part list ${devtype} ${devnum} -bootable devplist; " \ "env exists devplist || setenv devplist 1; " \ - "for distro_bootpart in ${devplist}; do " \ + "for distro_bootpart in 2 3 ${devplist}; do " \ "if fstype ${devtype} " \ "${devnum}:${distro_bootpart} " \ "bootfstype; then " \
[ "CVE-2019-13104", "CVE-2019-13106" ]
[ "include/config_distro_bootcmd.h" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-jetson-tk1
0013-usb_kdb-only-process-events-success.patch
From 2a332dbdb3258831e16eabb9e89204c6bb041067 Mon Sep 17 00:00:00 2001 From: Michal Suchanek <msuchanek@suse.de> Date: Sun, 18 Aug 2019 10:55:24 +0200 Subject: [PATCH] usb_kdb: only process events successfully received Causes unbound key repeat on error otherwise. Signed-off-by: Michal Suchanek <msuchanek@suse.de> (cherry picked from commit 3e816a2424c428f452f2003aff105511dc0cd9d4) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- common/usb_kbd.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/common/usb_kbd.c b/common/usb_kbd.c index 020f0d4117..b793dbe1f6 100644 --- a/common/usb_kbd.c +++ b/common/usb_kbd.c @@ -317,10 +317,9 @@ static inline void usb_kbd_poll_for_event(struct usb_device *dev) struct usb_kbd_pdata *data = dev->privptr; /* Submit a interrupt transfer request */ - usb_submit_int_msg(dev, data->intpipe, &data->new[0], data->intpktsize, - data->intinterval); - - usb_kbd_irq_worker(dev); + if (usb_submit_int_msg(dev, data->intpipe, &data->new[0], + data->intpktsize, data->intinterval) >= 0) + usb_kbd_irq_worker(dev); #elif defined(CONFIG_SYS_USB_EVENT_POLL_VIA_CONTROL_EP) || \ defined(CONFIG_SYS_USB_EVENT_POLL_VIA_INT_QUEUE) #if defined(CONFIG_SYS_USB_EVENT_POLL_VIA_CONTROL_EP)
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "common/usb_kbd.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-jetson-tk1
0004-Temp-workaround-for-Chromebook-snow.patch
From b9c7341bde4c8eb3f60bf20a3198055a24490503 Mon Sep 17 00:00:00 2001 From: Guillaume GARDET <guillaume.gardet@free.fr> Date: Mon, 9 Apr 2018 10:28:26 +0200 Subject: [PATCH] Temp workaround for Chromebook snow to avoid the 'unable to select a mode' error --- drivers/mmc/dw_mmc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/mmc/dw_mmc.c b/drivers/mmc/dw_mmc.c index 7544b84ab6..b10f6bb2f4 100644 --- a/drivers/mmc/dw_mmc.c +++ b/drivers/mmc/dw_mmc.c @@ -539,7 +539,8 @@ void dwmci_setup_cfg(struct mmc_config *cfg, struct dwmci_host *host, cfg->host_caps |= MMC_MODE_4BIT; cfg->host_caps &= ~MMC_MODE_8BIT; } - cfg->host_caps |= MMC_MODE_HS | MMC_MODE_HS_52MHz; + /* Temp workaround for Chromebook snow to avoid the 'unable to select a mode' error */ +// cfg->host_caps |= MMC_MODE_HS | MMC_MODE_HS_52MHz; cfg->b_max = CONFIG_SYS_MMC_MAX_BLK_COUNT; }
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "drivers/mmc/dw_mmc.c" ]
SUSE-Proprietary-or-OSS
7,303
cross-s390-gcc48-icecream-backend
m68k-notice-update-cc.patch
diff --git a/gcc/config/m68k/m68k.c b/gcc/config/m68k/m68k.c index 5e3236f..7035504 100644 --- a/gcc/config/m68k/m68k.c +++ b/gcc/config/m68k/m68k.c @@ -4209,6 +4209,13 @@ notice_update_cc (rtx exp, rtx insn) && cc_status.value2 && reg_overlap_mentioned_p (cc_status.value1, cc_status.value2)) cc_status.value2 = 0; + /* Check for PRE_DEC in dest modifying a register used in src. */ + if (cc_status.value1 && GET_CODE (cc_status.value1) == MEM + && GET_CODE (XEXP (cc_status.value1, 0)) == PRE_DEC + && cc_status.value2 + && reg_overlap_mentioned_p (XEXP (XEXP (cc_status.value1, 0), 0), + cc_status.value2)) + cc_status.value2 = 0; if (((cc_status.value1 && FP_REG_P (cc_status.value1)) || (cc_status.value2 && FP_REG_P (cc_status.value2)))) cc_status.flags = CC_IN_68881;
[ "CVE-2017-11671" ]
[ "gcc/config/m68k/m68k.c" ]
SUSE-Proprietary-or-OSS
7,303
cross-s390-gcc48-icecream-backend
gcc48-bnc1050947.patch
CVE-2017-11671 2017-03-25 Uros Bizjak <ubizjak@gmail.com> PR target/80180 * config/i386/i386.c (ix86_expand_builtin) <IX86_BUILTIN_RDSEED{16,32,64}_STEP>: Do not expand arg0 between flags reg setting and flags reg using instructions. <IX86_BUILTIN_RDRAND{16,32,64}_STEP>: Ditto. Use non-flags reg clobbering instructions to zero extend op2. Index: gcc/config/i386/i386.c =================================================================== --- gcc/config/i386/i386.c (revision 246478) +++ gcc/config/i386/i386.c (revision 246479) @@ -39533,9 +39533,6 @@ ix86_expand_builtin (tree exp, rtx targe mode0 = DImode; rdrand_step: - op0 = gen_reg_rtx (mode0); - emit_insn (GEN_FCN (icode) (op0)); - arg0 = CALL_EXPR_ARG (exp, 0); op1 = expand_normal (arg0); if (!address_operand (op1, VOIDmode)) @@ -39543,6 +39540,10 @@ rdrand_step: op1 = convert_memory_address (Pmode, op1); op1 = copy_addr_to_reg (op1); } + + op0 = gen_reg_rtx (mode0); + emit_insn (GEN_FCN (icode) (op0)); + emit_move_insn (gen_rtx_MEM (mode0, op1), op0); op1 = gen_reg_rtx (SImode); @@ -39551,8 +39552,20 @@ rdrand_step: /* Emit SImode conditional move. */ if (mode0 == HImode) { - op2 = gen_reg_rtx (SImode); - emit_insn (gen_zero_extendhisi2 (op2, op0)); + if (TARGET_ZERO_EXTEND_WITH_AND + && optimize_function_for_speed_p (cfun)) + { + op2 = force_reg (SImode, const0_rtx); + + emit_insn (gen_movstricthi + (gen_lowpart (HImode, op2), op0)); + } + else + { + op2 = gen_reg_rtx (SImode); + + emit_insn (gen_zero_extendhisi2 (op2, op0)); + } } else if (mode0 == SImode) op2 = op0; @@ -39584,9 +39597,6 @@ rdrand_step: mode0 = DImode; rdseed_step: - op0 = gen_reg_rtx (mode0); - emit_insn (GEN_FCN (icode) (op0)); - arg0 = CALL_EXPR_ARG (exp, 0); op1 = expand_normal (arg0); if (!address_operand (op1, VOIDmode)) @@ -39594,6 +39604,10 @@ rdseed_step: op1 = convert_memory_address (Pmode, op1); op1 = copy_addr_to_reg (op1); } + + op0 = gen_reg_rtx (mode0); + emit_insn (GEN_FCN (icode) (op0)); + emit_move_insn (gen_rtx_MEM (mode0, op1), op0); op2 = gen_reg_rtx (QImode);
[ "CVE-2017-11671" ]
[ "gcc/config/i386/i386.c" ]
SUSE-Proprietary-or-OSS
7,303
cross-s390-gcc48-icecream-backend
libjava-no-multilib.diff
Index: libjava/configure =================================================================== --- libjava/configure.orig 2011-11-03 16:31:17.000000000 +0100 +++ libjava/configure 2011-11-03 17:08:27.000000000 +0100 @@ -3364,6 +3364,26 @@ else fi +# Default to --enable-libjava-multilib +# Check whether --enable-libjava-multilib or --disable-libjava-multilib was given. +if test "${enable_libjava_multilib+set}" = set; then + enableval="$enable_libjava_multilib" + case "${enableval}" in + yes) multilib=yes ;; + no) multilib=no ;; + *) { { echo "$as_me:$LINENO: error: bad value ${enableval} for libjava-multilib option" >&5 +echo "$as_me: error: bad value ${enableval} for libjava-multilib option" >&2;} + { (exit 1); exit 1; }; } ;; + esac +else + multilib=yes +fi; +if test "$multilib" = no; then +# Reset also --enable-multilib state, as that is what is looked at +# by config-ml.in + ac_configure_args="$ac_configure_args --disable-multilib" +fi + # It may not be safe to run linking tests in AC_PROG_CC/AC_PROG_CXX.
[ "CVE-2017-11671" ]
[ "libjava/configure" ]
SUSE-Proprietary-or-OSS
7,303
cross-s390-gcc48-icecream-backend
gcc48-bnc955382.patch
Backport from mainline 2015-10-14 Peter Bergner <bergner@vnet.ibm.com> Torvald Riegel <triegel@redhat.com> PR target/67281 * config/rs6000/htm.md (UNSPEC_HTM_FENCE): New. (tabort, tabort<wd>c, tabort<wd>ci, tbegin, tcheck, tend, trechkpt, treclaim, tsr, ttest): Rename define_insns from this... (*tabort, *tabort<wd>c, *tabort<wd>ci, *tbegin, *tcheck, *tend, *trechkpt, *treclaim, *tsr, *ttest): ...to this. Add memory barrier. (tabort, tabort<wd>c, tabort<wd>ci, tbegin, tcheck, tend, trechkpt, treclaim, tsr, ttest): New define_expands. * config/rs6000/rs6000-c.c (rs6000_target_modify_macros): Define __TM_FENCE__ for htm. * doc/extend.texi: Update documentation for htm builtins. Backport from mainline: 2015-08-03 Peter Bergner <bergner@vnet.ibm.com> * config/rs6000/htm.md (tabort.): Restrict the source operand to using a base register. Index: gcc/config/rs6000/htm.md =================================================================== --- gcc/config/rs6000/htm.md (revision 228847) +++ gcc/config/rs6000/htm.md (working copy) @@ -27,6 +27,14 @@ ]) ;; +;; UNSPEC usage +;; + +(define_c_enum "unspec" + [UNSPEC_HTM_FENCE + ]) + +;; ;; UNSPEC_VOLATILE usage ;; @@ -45,96 +53,223 @@ UNSPECV_HTM_MTSPR ]) +(define_expand "tabort" + [(parallel + [(set (match_operand:CC 1 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand:SI 0 "base_reg_operand" "b")] + UNSPECV_HTM_TABORT)) + (set (match_dup 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[2] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[2]) = 1; +}) -(define_insn "tabort" +(define_insn "*tabort" [(set (match_operand:CC 1 "cc_reg_operand" "=x") - (unspec_volatile:CC [(match_operand:SI 0 "gpc_reg_operand" "r")] - UNSPECV_HTM_TABORT))] + (unspec_volatile:CC [(match_operand:SI 0 "base_reg_operand" "b")] + UNSPECV_HTM_TABORT)) + (set (match_operand:BLK 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tabort. 
%0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tabort<wd>c" +(define_expand "tabort<wd>c" + [(parallel + [(set (match_operand:CC 3 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand 0 "u5bit_cint_operand" "n") + (match_operand:GPR 1 "gpc_reg_operand" "r") + (match_operand:GPR 2 "gpc_reg_operand" "r")] + UNSPECV_HTM_TABORTXC)) + (set (match_dup 4) (unspec:BLK [(match_dup 4)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[4] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[4]) = 1; +}) + +(define_insn "*tabort<wd>c" [(set (match_operand:CC 3 "cc_reg_operand" "=x") (unspec_volatile:CC [(match_operand 0 "u5bit_cint_operand" "n") (match_operand:GPR 1 "gpc_reg_operand" "r") (match_operand:GPR 2 "gpc_reg_operand" "r")] - UNSPECV_HTM_TABORTXC))] + UNSPECV_HTM_TABORTXC)) + (set (match_operand:BLK 4) (unspec:BLK [(match_dup 4)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tabort<wd>c. %0,%1,%2" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tabort<wd>ci" +(define_expand "tabort<wd>ci" + [(parallel + [(set (match_operand:CC 3 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand 0 "u5bit_cint_operand" "n") + (match_operand:GPR 1 "gpc_reg_operand" "r") + (match_operand 2 "s5bit_cint_operand" "n")] + UNSPECV_HTM_TABORTXCI)) + (set (match_dup 4) (unspec:BLK [(match_dup 4)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[4] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[4]) = 1; +}) + +(define_insn "*tabort<wd>ci" [(set (match_operand:CC 3 "cc_reg_operand" "=x") (unspec_volatile:CC [(match_operand 0 "u5bit_cint_operand" "n") (match_operand:GPR 1 "gpc_reg_operand" "r") (match_operand 2 "s5bit_cint_operand" "n")] - UNSPECV_HTM_TABORTXCI))] + UNSPECV_HTM_TABORTXCI)) + (set (match_operand:BLK 4) (unspec:BLK [(match_dup 4)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tabort<wd>ci. 
%0,%1,%2" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tbegin" +(define_expand "tbegin" + [(parallel + [(set (match_operand:CC 1 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand 0 "const_0_to_1_operand" "n")] + UNSPECV_HTM_TBEGIN)) + (set (match_dup 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[2] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[2]) = 1; +}) + +(define_insn "*tbegin" [(set (match_operand:CC 1 "cc_reg_operand" "=x") (unspec_volatile:CC [(match_operand 0 "const_0_to_1_operand" "n")] - UNSPECV_HTM_TBEGIN))] + UNSPECV_HTM_TBEGIN)) + (set (match_operand:BLK 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tbegin. %0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tcheck" +(define_expand "tcheck" + [(parallel + [(set (match_operand:CC 0 "cc_reg_operand" "=y") + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TCHECK)) + (set (match_dup 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[1] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[1]) = 1; +}) + +(define_insn "*tcheck" [(set (match_operand:CC 0 "cc_reg_operand" "=y") - (unspec_volatile:CC [(const_int 0)] - UNSPECV_HTM_TCHECK))] + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TCHECK)) + (set (match_operand:BLK 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tcheck %0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tend" +(define_expand "tend" + [(parallel + [(set (match_operand:CC 1 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand 0 "const_0_to_1_operand" "n")] + UNSPECV_HTM_TEND)) + (set (match_dup 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[2] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[2]) = 1; +}) + +(define_insn "*tend" [(set (match_operand:CC 1 "cc_reg_operand" "=x") (unspec_volatile:CC 
[(match_operand 0 "const_0_to_1_operand" "n")] - UNSPECV_HTM_TEND))] + UNSPECV_HTM_TEND)) + (set (match_operand:BLK 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tend. %0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "trechkpt" +(define_expand "trechkpt" + [(parallel + [(set (match_operand:CC 0 "cc_reg_operand" "=x") + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TRECHKPT)) + (set (match_dup 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[1] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[1]) = 1; +}) + +(define_insn "*trechkpt" [(set (match_operand:CC 0 "cc_reg_operand" "=x") - (unspec_volatile:CC [(const_int 0)] - UNSPECV_HTM_TRECHKPT))] + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TRECHKPT)) + (set (match_operand:BLK 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "trechkpt." [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "treclaim" +(define_expand "treclaim" + [(parallel + [(set (match_operand:CC 1 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand:SI 0 "gpc_reg_operand" "r")] + UNSPECV_HTM_TRECLAIM)) + (set (match_dup 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[2] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[2]) = 1; +}) + +(define_insn "*treclaim" [(set (match_operand:CC 1 "cc_reg_operand" "=x") (unspec_volatile:CC [(match_operand:SI 0 "gpc_reg_operand" "r")] - UNSPECV_HTM_TRECLAIM))] + UNSPECV_HTM_TRECLAIM)) + (set (match_operand:BLK 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "treclaim. 
%0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "tsr" +(define_expand "tsr" + [(parallel + [(set (match_operand:CC 1 "cc_reg_operand" "=x") + (unspec_volatile:CC [(match_operand 0 "const_0_to_1_operand" "n")] + UNSPECV_HTM_TSR)) + (set (match_dup 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[2] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[2]) = 1; +}) + +(define_insn "*tsr" [(set (match_operand:CC 1 "cc_reg_operand" "=x") (unspec_volatile:CC [(match_operand 0 "const_0_to_1_operand" "n")] - UNSPECV_HTM_TSR))] + UNSPECV_HTM_TSR)) + (set (match_operand:BLK 2) (unspec:BLK [(match_dup 2)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tsr. %0" [(set_attr "type" "htm") (set_attr "length" "4")]) -(define_insn "ttest" +(define_expand "ttest" + [(parallel + [(set (match_operand:CC 0 "cc_reg_operand" "=x") + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TTEST)) + (set (match_dup 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))])] + "TARGET_HTM" +{ + operands[1] = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode)); + MEM_VOLATILE_P (operands[1]) = 1; +}) + +(define_insn "*ttest" [(set (match_operand:CC 0 "cc_reg_operand" "=x") - (unspec_volatile:CC [(const_int 0)] - UNSPECV_HTM_TTEST))] + (unspec_volatile:CC [(const_int 0)] UNSPECV_HTM_TTEST)) + (set (match_operand:BLK 1) (unspec:BLK [(match_dup 1)] UNSPEC_HTM_FENCE))] "TARGET_HTM" "tabortwci. 
0,1,0" [(set_attr "type" "htm") Index: gcc/config/rs6000/rs6000-c.c =================================================================== --- gcc/config/rs6000/rs6000-c.c (revision 228847) +++ gcc/config/rs6000/rs6000-c.c (working copy) @@ -351,7 +351,11 @@ if ((flags & OPTION_MASK_VSX) != 0) rs6000_define_or_undefine_macro (define_p, "__VSX__"); if ((flags & OPTION_MASK_HTM) != 0) - rs6000_define_or_undefine_macro (define_p, "__HTM__"); + { + rs6000_define_or_undefine_macro (define_p, "__HTM__"); + /* Tell the user that our HTM insn patterns act as memory barriers. */ + rs6000_define_or_undefine_macro (define_p, "__TM_FENCE__"); + } if ((flags & OPTION_MASK_P8_VECTOR) != 0) rs6000_define_or_undefine_macro (define_p, "__POWER8_VECTOR__"); if ((flags & OPTION_MASK_QUAD_MEMORY) != 0) Index: gcc/doc/extend.texi =================================================================== --- gcc/doc/extend.texi (revision 228847) +++ gcc/doc/extend.texi (working copy) @@ -14573,6 +14573,28 @@ unsigned int __builtin_tsuspend (void) @end smallexample +Note that the semantics of the above HTM builtins are required to mimic +the locking semantics used for critical sections. Builtins that are used +to create a new transaction or restart a suspended transaction must have +lock acquisition like semantics while those builtins that end or suspend a +transaction must have lock release like semantics. Specifically, this must +mimic lock semantics as specified by C++11, for example: Lock acquisition is +as-if an execution of __atomic_exchange_n(&globallock,1,__ATOMIC_ACQUIRE) +that returns 0, and lock release is as-if an execution of +__atomic_store(&globallock,0,__ATOMIC_RELEASE), with globallock being an +implicit implementation-defined lock used for all transactions. The HTM +instructions associated with with the builtins inherently provide the +correct acquisition and release hardware barriers required. 
However, +the compiler must also be prohibited from moving loads and stores across +the builtins in a way that would violate their semantics. This has been +accomplished by adding memory barriers to the associated HTM instructions +(which is a conservative approach to provide acquire and release semantics). +Earlier versions of the compiler did not treat the HTM instructions as +memory barriers. A @code{__TM_FENCE__} macro has been added, which can +be used to determine whether the current compiler treats HTM instructions +as memory barriers or not. This allows the user to explicitly add memory +barriers to their code when using an older version of the compiler. + The following set of built-in functions are available to gain access to the HTM specific special purpose registers. Index: gcc/testsuite/gcc.target/powerpc/htm-tabort-no-r0.c =================================================================== --- gcc/testsuite/gcc.target/powerpc/htm-tabort-no-r0.c (revision 0) +++ gcc/testsuite/gcc.target/powerpc/htm-tabort-no-r0.c (working copy) @@ -0,0 +1,12 @@ +/* { dg-do compile { target { powerpc*-*-* } } } */ +/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */ +/* { dg-require-effective-target powerpc_htm_ok } */ +/* { dg-options "-O2 -mhtm -ffixed-r3 -ffixed-r4 -ffixed-r5 -ffixed-r6 -ffixed-r7 -ffixed-r8 -ffixed-r9 -ffixed-r10 -ffixed-r11 -ffixed-r12" } */ + +/* { dg-final { scan-assembler-not "tabort\\.\[ \t\]0" } } */ + +int +foo (void) +{ + return __builtin_tabort (10); +}
[ "CVE-2017-11671" ]
[ "gcc/config/rs6000/htm.md", "gcc/config/rs6000/rs6000-c.c", "gcc/doc/extend.texi", "gcc/testsuite/gcc.target/powerpc/htm-tabort-no-r0.c" ]
SUSE-Proprietary-or-OSS
7,303
cross-s390-gcc48-icecream-backend
libgcj_bc-install.patch
From c8422f7c18a774e46360575c6ab93cb8ff14dc89 Mon Sep 17 00:00:00 2001 From: Andreas Schwab <schwab@suse.de> Date: Tue, 16 Apr 2013 17:57:27 +0200 Subject: [PATCH] Properly install libgcc_bc dummy library * Makefile.am (toolexeclib_LTLIBRARIES) [USE_LIBGCJ_BC]: Use install/libgcj_bc.la instead of libgcj_bc.la. (noinst_LTLIBRARIES) [USE_LIBGCJ_BC]: Define. (install_libgcj_bc_la_SOURCES): Define. (install/libgcj_bc.la): New rule. --- libjava/Makefile.am | 9 +++++- libjava/Makefile.in | 92 +++++++++++++++++++++++++++++++++++------------------ 2 files changed, 69 insertions(+), 32 deletions(-) diff --git a/libjava/Makefile.am b/libjava/Makefile.am index a4941a9..208e632 100644 --- a/libjava/Makefile.am +++ b/libjava/Makefile.am @@ -212,7 +212,8 @@ LIBJAVA_CORE_EXTRA = endif if USE_LIBGCJ_BC -toolexeclib_LTLIBRARIES += libgcj_bc.la +toolexeclib_LTLIBRARIES += install/libgcj_bc.la +noinst_LTLIBRARIES = libgcj_bc.la endif if XLIB_AWT @@ -606,6 +607,7 @@ lib_gnu_awt_xlib_la_LINK = $(LIBLINK) $(lib_gnu_awt_xlib_la_LDFLAGS) \ ## This lets us have one soname in BC objects and another in C++ ABI objects. ## This library is not linked against libgcj. libgcj_bc_la_SOURCES = libgcj_bc.c +install_libgcj_bc_la_SOURCES = $(libgcj_bc_la_SOURCES) libgcj_bc_la_LDFLAGS = -rpath $(toolexeclibdir) -no-static -version-info 1:0:0 \ $(LIBGCJ_LD_SYMBOLIC_FUNCTIONS) $(LIBJAVA_LDFLAGS_NOUNDEF) libgcj_bc_la_DEPENDENCIES = libgcj.la $(libgcj_bc_la_version_dep) @@ -628,6 +630,11 @@ libgcj_bc.la: $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_DEPENDENCIES) rm .libs/libgcj_bc.so.1; \ $(LN_S) libgcj_bc.so.1.0.0 .libs/libgcj_bc.so.1 +## This rule creates the libgcj_bc library that is actually installed. +install/libgcj_bc.la: $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_DEPENDENCIES) install/$(am__dirstamp) + $(libgcj_bc_la_LINK) $(am_libgcj_bc_la_rpath) $(libgcj_bc_la_LDFLAGS) \ + $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_LIBADD) $(LIBS) + ## Note that property_files is defined in sources.am. 
propertyo_files = $(patsubst classpath/resource/%,%,$(addsuffix .lo,$(property_files))) diff --git a/libjava/Makefile.in b/libjava/Makefile.in index c3c471c..e6f6c67 100644 --- a/libjava/Makefile.in +++ b/libjava/Makefile.in @@ -40,7 +40,7 @@ host_triplet = @host@ target_triplet = @target@ @TESTSUBDIR_TRUE@am__append_1 = testsuite @BUILD_SUBLIBS_TRUE@am__append_2 = libgcj-noncore.la -@USE_LIBGCJ_BC_TRUE@am__append_3 = libgcj_bc.la +@USE_LIBGCJ_BC_TRUE@am__append_3 = install/libgcj_bc.la @XLIB_AWT_TRUE@am__append_4 = lib-gnu-awt-xlib.la @INSTALL_ECJ_JAR_TRUE@am__append_5 = $(ECJ_BUILD_JAR) @CREATE_GJDOC_TRUE@@NATIVE_TRUE@am__append_6 = gjdoc @@ -156,9 +156,16 @@ am__installdirs = "$(DESTDIR)$(dbexecdir)" \ "$(DESTDIR)$(libexecsubdir)" "$(DESTDIR)$(bindir)" \ "$(DESTDIR)$(dbexecdir)" "$(DESTDIR)$(jardir)" \ "$(DESTDIR)$(toolexecmainlibdir)" -LTLIBRARIES = $(dbexec_LTLIBRARIES) $(toolexeclib_LTLIBRARIES) +LTLIBRARIES = $(dbexec_LTLIBRARIES) $(noinst_LTLIBRARIES) \ + $(toolexeclib_LTLIBRARIES) +install_libgcj_bc_la_LIBADD = +am__objects_1 = libgcj_bc.lo +am_install_libgcj_bc_la_OBJECTS = $(am__objects_1) +install_libgcj_bc_la_OBJECTS = $(am_install_libgcj_bc_la_OBJECTS) +@USE_LIBGCJ_BC_TRUE@am_install_libgcj_bc_la_rpath = -rpath \ +@USE_LIBGCJ_BC_TRUE@ $(toolexeclibdir) am__dirstamp = $(am__leading_dot)dirstamp -am__objects_1 = gnu/gcj/xlib/lib_gnu_awt_xlib_la-natClip.lo \ +am__objects_2 = gnu/gcj/xlib/lib_gnu_awt_xlib_la-natClip.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natColormap.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natDisplay.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natDrawable.lo \ @@ -178,7 +185,7 @@ am__objects_1 = gnu/gcj/xlib/lib_gnu_awt_xlib_la-natClip.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natXExposeEvent.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natXImage.lo \ gnu/gcj/xlib/lib_gnu_awt_xlib_la-natXUnmapEvent.lo -am_lib_gnu_awt_xlib_la_OBJECTS = $(am__objects_1) +am_lib_gnu_awt_xlib_la_OBJECTS = $(am__objects_2) lib_gnu_awt_xlib_la_OBJECTS = 
$(am_lib_gnu_awt_xlib_la_OBJECTS) @XLIB_AWT_TRUE@am_lib_gnu_awt_xlib_la_rpath = -rpath $(toolexeclibdir) am_libgcj_noncore_la_OBJECTS = @@ -320,13 +327,13 @@ am__DEPENDENCIES_3 = $(am__DEPENDENCIES_2) $(propertyo_files) \ @BUILD_SUBLIBS_TRUE@am__DEPENDENCIES_4 = \ @BUILD_SUBLIBS_TRUE@ $(CORE_PACKAGE_SOURCE_FILES_LO) am__DEPENDENCIES_5 = -@INTERPRETER_TRUE@am__objects_2 = jvmti.lo interpret.lo -@INTERPRETER_TRUE@am__objects_3 = gnu/classpath/jdwp/natVMFrame.lo \ +@INTERPRETER_TRUE@am__objects_3 = jvmti.lo interpret.lo +@INTERPRETER_TRUE@am__objects_4 = gnu/classpath/jdwp/natVMFrame.lo \ @INTERPRETER_TRUE@ gnu/classpath/jdwp/natVMMethod.lo \ @INTERPRETER_TRUE@ gnu/classpath/jdwp/natVMVirtualMachine.lo -@INTERPRETER_TRUE@am__objects_4 = gnu/gcj/jvmti/natBreakpoint.lo \ +@INTERPRETER_TRUE@am__objects_5 = gnu/gcj/jvmti/natBreakpoint.lo \ @INTERPRETER_TRUE@ gnu/gcj/jvmti/natNormalBreakpoint.lo -am__objects_5 = $(am__objects_3) gnu/classpath/natConfiguration.lo \ +am__objects_6 = $(am__objects_4) gnu/classpath/natConfiguration.lo \ gnu/classpath/natSystemProperties.lo \ gnu/classpath/natVMStackWalker.lo gnu/gcj/natCore.lo \ gnu/gcj/convert/JIS0208_to_Unicode.lo \ @@ -337,7 +344,7 @@ am__objects_5 = $(am__objects_3) gnu/classpath/natConfiguration.lo \ gnu/gcj/convert/natOutput_EUCJIS.lo \ gnu/gcj/convert/natOutput_SJIS.lo \ gnu/gcj/io/natSimpleSHSStream.lo gnu/gcj/io/shs.lo \ - $(am__objects_4) gnu/gcj/runtime/natFinalizerThread.lo \ + $(am__objects_5) gnu/gcj/runtime/natFinalizerThread.lo \ gnu/gcj/runtime/natSharedLibLoader.lo \ gnu/gcj/runtime/natSystemClassLoader.lo \ gnu/gcj/runtime/natStringBuffer.lo gnu/gcj/util/natDebug.lo \ @@ -384,24 +391,24 @@ am__objects_5 = $(am__objects_3) gnu/classpath/natConfiguration.lo \ java/util/concurrent/atomic/natAtomicLong.lo \ java/util/logging/natLogger.lo java/util/zip/natDeflater.lo \ java/util/zip/natInflater.lo sun/misc/natUnsafe.lo -@USING_BOEHMGC_TRUE@am__objects_6 = boehm.lo -@USING_NOGC_TRUE@am__objects_7 = nogc.lo 
-@USING_POSIX_PLATFORM_TRUE@am__objects_8 = posix.lo -@USING_WIN32_PLATFORM_TRUE@am__objects_9 = win32.lo -@USING_DARWIN_CRT_TRUE@am__objects_10 = darwin.lo -@USING_POSIX_THREADS_TRUE@am__objects_11 = posix-threads.lo -@USING_WIN32_THREADS_TRUE@am__objects_12 = win32-threads.lo -@USING_NO_THREADS_TRUE@am__objects_13 = no-threads.lo +@USING_BOEHMGC_TRUE@am__objects_7 = boehm.lo +@USING_NOGC_TRUE@am__objects_8 = nogc.lo +@USING_POSIX_PLATFORM_TRUE@am__objects_9 = posix.lo +@USING_WIN32_PLATFORM_TRUE@am__objects_10 = win32.lo +@USING_DARWIN_CRT_TRUE@am__objects_11 = darwin.lo +@USING_POSIX_THREADS_TRUE@am__objects_12 = posix-threads.lo +@USING_WIN32_THREADS_TRUE@am__objects_13 = win32-threads.lo +@USING_NO_THREADS_TRUE@am__objects_14 = no-threads.lo am_libgcj_la_OBJECTS = prims.lo jni.lo exception.lo stacktrace.lo \ - link.lo defineclass.lo verify.lo $(am__objects_2) \ - $(am__objects_5) $(am__objects_6) $(am__objects_7) \ - $(am__objects_8) $(am__objects_9) $(am__objects_10) \ - $(am__objects_11) $(am__objects_12) $(am__objects_13) + link.lo defineclass.lo verify.lo $(am__objects_3) \ + $(am__objects_6) $(am__objects_7) $(am__objects_8) \ + $(am__objects_9) $(am__objects_10) $(am__objects_11) \ + $(am__objects_12) $(am__objects_13) $(am__objects_14) libgcj_la_OBJECTS = $(am_libgcj_la_OBJECTS) libgcj_bc_la_LIBADD = am_libgcj_bc_la_OBJECTS = libgcj_bc.lo libgcj_bc_la_OBJECTS = $(am_libgcj_bc_la_OBJECTS) -@USE_LIBGCJ_BC_TRUE@am_libgcj_bc_la_rpath = -rpath $(toolexeclibdir) +@USE_LIBGCJ_BC_TRUE@am_libgcj_bc_la_rpath = am_libgij_la_OBJECTS = gij.lo libgij_la_OBJECTS = $(am_libgij_la_OBJECTS) am_libjvm_la_OBJECTS = jni-libjvm.lo @@ -486,7 +493,8 @@ GCJCOMPILE = $(GCJ) $(AM_GCJFLAGS) $(GCJFLAGS) LTGCJCOMPILE = $(LIBTOOL) --tag=GCJ $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \ --mode=compile $(GCJ) $(AM_GCJFLAGS) $(GCJFLAGS) GCJLD = $(GCJ) -SOURCES = $(lib_gnu_awt_xlib_la_SOURCES) $(libgcj_noncore_la_SOURCES) \ +SOURCES = $(install_libgcj_bc_la_SOURCES) \ + 
$(lib_gnu_awt_xlib_la_SOURCES) $(libgcj_noncore_la_SOURCES) \ $(libgcj_tools_la_SOURCES) $(libgcj_la_SOURCES) \ $(EXTRA_libgcj_la_SOURCES) $(libgcj_bc_la_SOURCES) \ $(libgij_la_SOURCES) $(libjvm_la_SOURCES) $(ecjx_SOURCES) \ @@ -940,6 +948,7 @@ CORE_PACKAGE_SOURCE_FILES_LO = $(filter-out $(LOWER_PACKAGE_FILES_LO),$(ALL_PACK @BUILD_SUBLIBS_TRUE@LIBJAVA_LDFLAGS_NOUNDEF = $(LIBGCJ_SUBLIB_LTFLAGS) @BUILD_SUBLIBS_FALSE@LIBJAVA_CORE_EXTRA = @BUILD_SUBLIBS_TRUE@LIBJAVA_CORE_EXTRA = @LIBGCJ_SUBLIB_CORE_EXTRA_DEPS@ +@USE_LIBGCJ_BC_TRUE@noinst_LTLIBRARIES = libgcj_bc.la dbexec_LTLIBRARIES = libjvm.la pkgconfigdir = $(libdir)/pkgconfig jardir = $(datadir)/java @@ -1147,6 +1156,7 @@ lib_gnu_awt_xlib_la_LINK = $(LIBLINK) $(lib_gnu_awt_xlib_la_LDFLAGS) \ $(lib_gnu_awt_xlib_la_version_arg) libgcj_bc_la_SOURCES = libgcj_bc.c +install_libgcj_bc_la_SOURCES = $(libgcj_bc_la_SOURCES) libgcj_bc_la_LDFLAGS = -rpath $(toolexeclibdir) -no-static -version-info 1:0:0 \ $(LIBGCJ_LD_SYMBOLIC_FUNCTIONS) $(LIBJAVA_LDFLAGS_NOUNDEF) @@ -8821,6 +8831,15 @@ clean-dbexecLTLIBRARIES: echo "rm -f \"$${dir}/so_locations\""; \ rm -f "$${dir}/so_locations"; \ done + +clean-noinstLTLIBRARIES: + -test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES) + @list='$(noinst_LTLIBRARIES)'; for p in $$list; do \ + dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ + test "$$dir" != "$$p" || dir=.; \ + echo "rm -f \"$${dir}/so_locations\""; \ + rm -f "$${dir}/so_locations"; \ + done install-toolexeclibLTLIBRARIES: $(toolexeclib_LTLIBRARIES) @$(NORMAL_INSTALL) test -z "$(toolexeclibdir)" || $(MKDIR_P) "$(DESTDIR)$(toolexeclibdir)" @@ -8852,6 +8871,9 @@ clean-toolexeclibLTLIBRARIES: echo "rm -f \"$${dir}/so_locations\""; \ rm -f "$${dir}/so_locations"; \ done +install/$(am__dirstamp): + @$(MKDIR_P) install + @: > install/$(am__dirstamp) gnu/gcj/xlib/$(am__dirstamp): @$(MKDIR_P) gnu/gcj/xlib @: > gnu/gcj/xlib/$(am__dirstamp) @@ -10129,6 +10151,7 @@ clean-libtool: -rm -rf gnu/java/nio/.libs gnu/java/nio/_libs -rm -rf 
gnu/java/nio/channels/.libs gnu/java/nio/channels/_libs -rm -rf gnu/java/security/jce/prng/.libs gnu/java/security/jce/prng/_libs + -rm -rf install/.libs install/_libs -rm -rf java/io/.libs java/io/_libs -rm -rf java/lang/.libs java/lang/_libs -rm -rf java/lang/ref/.libs java/lang/ref/_libs @@ -10408,6 +10431,7 @@ distclean-generic: -rm -f gnu/java/nio/channels/$(am__dirstamp) -rm -f gnu/java/security/jce/prng/$(DEPDIR)/$(am__dirstamp) -rm -f gnu/java/security/jce/prng/$(am__dirstamp) + -rm -f install/$(am__dirstamp) -rm -f java/io/$(DEPDIR)/$(am__dirstamp) -rm -f java/io/$(am__dirstamp) -rm -f java/lang/$(DEPDIR)/$(am__dirstamp) @@ -10444,8 +10468,9 @@ maintainer-clean-generic: clean: clean-multi clean-recursive clean-am: clean-binPROGRAMS clean-dbexecLTLIBRARIES clean-generic \ - clean-libexecsubPROGRAMS clean-libtool clean-noinstPROGRAMS \ - clean-toolexeclibLTLIBRARIES mostlyclean-am + clean-libexecsubPROGRAMS clean-libtool clean-noinstLTLIBRARIES \ + clean-noinstPROGRAMS clean-toolexeclibLTLIBRARIES \ + mostlyclean-am distclean: distclean-multi distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) @@ -10532,12 +10557,13 @@ uninstall-am: uninstall-binPROGRAMS uninstall-binSCRIPTS \ all all-am all-multi am--refresh check check-am clean \ clean-binPROGRAMS clean-dbexecLTLIBRARIES clean-generic \ clean-libexecsubPROGRAMS clean-libtool clean-multi \ - clean-noinstPROGRAMS clean-toolexeclibLTLIBRARIES ctags \ - ctags-recursive distclean distclean-compile distclean-generic \ - distclean-libtool distclean-local distclean-multi \ - distclean-tags dvi dvi-am html html-am info info-am install \ - install-am install-binPROGRAMS install-binSCRIPTS install-data \ - install-data-am install-data-local install-dbexecDATA \ + clean-noinstLTLIBRARIES clean-noinstPROGRAMS \ + clean-toolexeclibLTLIBRARIES ctags ctags-recursive distclean \ + distclean-compile distclean-generic distclean-libtool \ + distclean-local distclean-multi distclean-tags dvi dvi-am html \ + html-am info 
info-am install install-am install-binPROGRAMS \ + install-binSCRIPTS install-data install-data-am \ + install-data-local install-dbexecDATA \ install-dbexecLTLIBRARIES install-dvi install-dvi-am \ install-exec install-exec-am install-exec-hook install-html \ install-html-am install-info install-info-am install-jarDATA \ @@ -10575,6 +10601,10 @@ libgcj_bc.la: $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_DEPENDENCIES) rm .libs/libgcj_bc.so.1; \ $(LN_S) libgcj_bc.so.1.0.0 .libs/libgcj_bc.so.1 +install/libgcj_bc.la: $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_DEPENDENCIES) install/$(am__dirstamp) + $(libgcj_bc_la_LINK) $(am_libgcj_bc_la_rpath) $(libgcj_bc_la_LDFLAGS) \ + $(libgcj_bc_la_OBJECTS) $(libgcj_bc_la_LIBADD) $(LIBS) + $(propertyo_files): %.lo: classpath/resource/% $(mkinstalldirs) `dirname $@`; \ $(LTGCJCOMPILE) -o $@ -c $< -Wc,--resource,$(@:.lo=) -- 1.8.2.1
[ "CVE-2017-11671" ]
[ "libjava/Makefile.am", "libjava/Makefile.in" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-hikey
0024-cmd-gpt-Address-error-cases-during-.patch
From def94a760f5171f3e9d5cf70af6690c5c62f1e63 Mon Sep 17 00:00:00 2001 From: Tom Rini <trini@konsulko.com> Date: Tue, 21 Jan 2020 11:53:38 -0500 Subject: [PATCH] cmd/gpt: Address error cases during gpt rename more correctly New analysis by the tool has shown that we have some cases where we weren't handling the error exit condition correctly. When we ran into the ENOMEM case we wouldn't exit the function and thus incorrect things could happen. Rework the unwinding such that we don't need a helper function now and free what we may have allocated. Fixes: 18030d04d25d ("GPT: fix memory leaks identified by Coverity") Reported-by: Coverity (CID: 275475, 275476) Cc: Alison Chaiken <alison@she-devel.com> Cc: Simon Goldschmidt <simon.k.r.goldschmidt@gmail.com> Cc: Jordy <jordy@simplyhacker.com> Signed-off-by: Tom Rini <trini@konsulko.com> Reviewed-by: Simon Goldschmidt <simon.k.r.goldschmidt@gmail.com> (cherry picked from commit 5749faa3d6837d6dbaf2119fc3ec49a326690c8f) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- cmd/gpt.c | 47 ++++++++++++----------------------------------- 1 file changed, 12 insertions(+), 35 deletions(-) diff --git a/cmd/gpt.c b/cmd/gpt.c index 638870352f..cfd649d120 100644 --- a/cmd/gpt.c +++ b/cmd/gpt.c @@ -632,21 +632,6 @@ static int do_disk_guid(struct blk_desc *dev_desc, char * const namestr) } #ifdef CONFIG_CMD_GPT_RENAME -/* - * There are 3 malloc() calls in set_gpt_info() and there is no info about which - * failed. 
- */ -static void set_gpt_cleanup(char **str_disk_guid, - disk_partition_t **partitions) -{ -#ifdef CONFIG_RANDOM_UUID - if (str_disk_guid) - free(str_disk_guid); -#endif - if (partitions) - free(partitions); -} - static int do_rename_gpt_parts(struct blk_desc *dev_desc, char *subcomm, char *name1, char *name2) { @@ -654,7 +639,7 @@ static int do_rename_gpt_parts(struct blk_desc *dev_desc, char *subcomm, struct disk_part *curr; disk_partition_t *new_partitions = NULL; char disk_guid[UUID_STR_LEN + 1]; - char *partitions_list, *str_disk_guid; + char *partitions_list, *str_disk_guid = NULL; u8 part_count = 0; int partlistlen, ret, numparts = 0, partnum, i = 1, ctr1 = 0, ctr2 = 0; @@ -696,14 +681,8 @@ static int do_rename_gpt_parts(struct blk_desc *dev_desc, char *subcomm, /* set_gpt_info allocates new_partitions and str_disk_guid */ ret = set_gpt_info(dev_desc, partitions_list, &str_disk_guid, &new_partitions, &part_count); - if (ret < 0) { - del_gpt_info(); - free(partitions_list); - if (ret == -ENOMEM) - set_gpt_cleanup(&str_disk_guid, &new_partitions); - else - goto out; - } + if (ret < 0) + goto out; if (!strcmp(subcomm, "swap")) { if ((strlen(name1) > PART_NAME_LEN) || (strlen(name2) > PART_NAME_LEN)) { @@ -765,14 +744,8 @@ static int do_rename_gpt_parts(struct blk_desc *dev_desc, char *subcomm, * Even though valid pointers are here passed into set_gpt_info(), * it mallocs again, and there's no way to tell which failed. 
*/ - if (ret < 0) { - del_gpt_info(); - free(partitions_list); - if (ret == -ENOMEM) - set_gpt_cleanup(&str_disk_guid, &new_partitions); - else - goto out; - } + if (ret < 0) + goto out; debug("Writing new partition table\n"); ret = gpt_restore(dev_desc, disk_guid, new_partitions, numparts); @@ -794,10 +767,14 @@ static int do_rename_gpt_parts(struct blk_desc *dev_desc, char *subcomm, } printf("new partition table with %d partitions is:\n", numparts); print_gpt_info(); - del_gpt_info(); out: - free(new_partitions); - free(str_disk_guid); + del_gpt_info(); +#ifdef CONFIG_RANDOM_UUID + if (str_disk_guid) + free(str_disk_guid); +#endif + if (new_partitions) + free(new_partitions); free(partitions_list); return ret; }
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "cmd/gpt.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-hikey
0018-CVE-net-fix-unbounded-memcpy-of-UDP.patch
From 4f40ad5f63ca100dc3f0d75bca4287611a7c7514 Mon Sep 17 00:00:00 2001 From: "liucheng (G)" <liucheng32@huawei.com> Date: Thu, 29 Aug 2019 13:47:33 +0000 Subject: [PATCH] CVE: net: fix unbounded memcpy of UDP packet MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This patch adds a check to udp_len to fix unbounded memcpy for CVE-2019-14192, CVE-2019-14193 and CVE-2019-14199. Signed-off-by: Cheng Liu <liucheng32@huawei.com> Reviewed-by: Simon Goldschmidt <simon.k.r.goldschmidt@gmail.com> Reported-by: Fermín Serna <fermin@semmle.com> Acked-by: Joe Hershberger <joe.hershberger@ni.com> (cherry picked from commit fe7288069d2e6659117049f7d27e261b550bb725) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- net/net.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/net/net.c b/net/net.c index a5a216c3ee..ecc1649d74 100644 --- a/net/net.c +++ b/net/net.c @@ -1258,6 +1258,9 @@ void net_process_received_packet(uchar *in_packet, int len) return; } + if (ntohs(ip->udp_len) < UDP_HDR_SIZE || ntohs(ip->udp_len) > ntohs(ip->ip_len)) + return; + debug_cond(DEBUG_DEV_PKT, "received UDP (to=%pI4, from=%pI4, len=%d)\n", &dst_ip, &src_ip, len);
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "net/net.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-hikey
0002-Revert-Revert-omap3-Use-raw-SPL-by-.patch
From 2f9efa00db72a83d3591392735f67a6a559f4b14 Mon Sep 17 00:00:00 2001 From: Alexander Graf <agraf@suse.de> Date: Mon, 2 May 2016 23:25:07 +0200 Subject: [PATCH] Revert "Revert "omap3: Use raw SPL by default for mmc1"" This reverts commit 7fa75d0ac5502db813d109c1df7bd0da34688685. --- arch/arm/mach-omap2/boot-common.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/arm/mach-omap2/boot-common.c b/arch/arm/mach-omap2/boot-common.c index 176d4f67cb..6fba378af9 100644 --- a/arch/arm/mach-omap2/boot-common.c +++ b/arch/arm/mach-omap2/boot-common.c @@ -133,8 +133,6 @@ void save_omap_boot_params(void) (boot_device <= MMC_BOOT_DEVICES_END)) { switch (boot_device) { case BOOT_DEVICE_MMC1: - boot_mode = MMCSD_MODE_FS; - break; case BOOT_DEVICE_MMC2: boot_mode = MMCSD_MODE_RAW; break;
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "arch/arm/mach-omap2/boot-common.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-hikey
0012-rpi-add-Compute-Module-3.patch
From b013e1caffdf10d0eb6c046630a4000e8fbd6ffb Mon Sep 17 00:00:00 2001 From: Jonathan Gray <jsg@jsg.id.au> Date: Thu, 31 Jan 2019 09:24:44 +1100 Subject: [PATCH] rpi: add Compute Module 3+ Add Raspberry Pi Compute Module 3+ to list of models, the revision code is 0x10 according to the list on raspberrypi.org. v2: Use the same dtb name as CM3 as CM3+ is a drop in replacement for CM3. Signed-off-by: Jonathan Gray <jsg@jsg.id.au> Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Matthias Brugger <mbrugger@suse.com> (cherry picked from commit 7e2ae620e15ef578b8f2812ec21ec07fae6c1e2f) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- board/raspberrypi/rpi/rpi.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/board/raspberrypi/rpi/rpi.c b/board/raspberrypi/rpi/rpi.c index 153a1fdcb7..617c892dde 100644 --- a/board/raspberrypi/rpi/rpi.c +++ b/board/raspberrypi/rpi/rpi.c @@ -143,6 +143,11 @@ static const struct rpi_model rpi_models_new_scheme[] = { DTB_DIR "bcm2837-rpi-3-a-plus.dtb", false, }, + [0x10] = { + "Compute Module 3+", + DTB_DIR "bcm2837-rpi-cm3.dtb", + false, + }, }; static const struct rpi_model rpi_models_old_scheme[] = {
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "board/raspberrypi/rpi/rpi.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-hikey
0015-usb-storage-submit_int_msg-usb_int_.patch
From 7f458838efed76faa23dfc9254e914b18049faf0 Mon Sep 17 00:00:00 2001 From: Michal Suchanek <msuchanek@suse.de> Date: Sun, 18 Aug 2019 10:55:26 +0200 Subject: [PATCH] usb: storage: submit_int_msg -> usb_int_msg Use the wrapper as other callers do. Signed-off-by: Michal Suchanek <msuchanek@suse.de> (cherry picked from commit 50dce8fbf0c8b6f55e32c8d2d08ccf6e58168027) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- common/usb_storage.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/common/usb_storage.c b/common/usb_storage.c index 8c889bb1a6..9a4155c08a 100644 --- a/common/usb_storage.c +++ b/common/usb_storage.c @@ -650,8 +650,8 @@ static int usb_stor_CBI_get_status(struct scsi_cmd *srb, struct us_data *us) int timeout; us->ip_wanted = 1; - submit_int_msg(us->pusb_dev, us->irqpipe, - (void *) &us->ip_data, us->irqmaxp, us->irqinterval); + usb_int_msg(us->pusb_dev, us->irqpipe, + (void *)&us->ip_data, us->irqmaxp, us->irqinterval); timeout = 1000; while (timeout--) { if (us->ip_wanted == 0)
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "common/usb_storage.c" ]
SUSE-Proprietary-or-OSS
16,044
libostree
ostree-grub2-location.patch
Index: libostree-2018.8/src/boot/grub2/grub2-15_ostree =================================================================== --- libostree-2018.8.orig/src/boot/grub2/grub2-15_ostree +++ libostree-2018.8/src/boot/grub2/grub2-15_ostree @@ -39,7 +39,7 @@ set -e # it's a lot better than reimplementing the config-generating bits of # OSTree in shell script. -. /usr/share/grub/grub-mkconfig_lib +. /usr/share/grub2/grub-mkconfig_lib DEVICE=${GRUB_DEVICE_BOOT:-${GRUB_DEVICE}}
[ "CVE-2021-21261" ]
[ "src/boot/grub2/grub2-15_ostree" ]
LGPL-2.0-or-later
14,956
u-boot-orangepipc2
0019-CVE-nfs-fix-stack-based-buffer-over.patch
From f185ad7069f86e90af5fe0a52b698c6e7ce954f1 Mon Sep 17 00:00:00 2001 From: "liucheng (G)" <liucheng32@huawei.com> Date: Thu, 29 Aug 2019 13:47:40 +0000 Subject: [PATCH] CVE: nfs: fix stack-based buffer overflow in some nfs_handler reply helper functions MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This patch adds a check to nfs_handler to fix buffer overflow for CVE-2019-14197, CVE-2019-14200, CVE-2019-14201, CVE-2019-14202, CVE-2019-14203 and CVE-2019-14204. Signed-off-by: Cheng Liu <liucheng32@huawei.com> Reported-by: Fermín Serna <fermin@semmle.com> Acked-by: Joe Hershberger <joe.hershberger@ni.com> (cherry picked from commit 741a8a08ebe5bc3ccfe3cde6c2b44ee53891af21) Signed-off-by: Matthias Brugger <mbrugger@suse.com> --- net/nfs.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/net/nfs.c b/net/nfs.c index d6a7f8e827..b7cf3b3a18 100644 --- a/net/nfs.c +++ b/net/nfs.c @@ -732,6 +732,9 @@ static void nfs_handler(uchar *pkt, unsigned dest, struct in_addr sip, debug("%s\n", __func__); + if (len > sizeof(struct rpc_t)) + return; + if (dest != nfs_our_port) return;
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "net/nfs.c" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-orangepipc2
0001-XXX-openSUSE-XXX-Prepend-partition-.patch
From f157d20e42d1f151c9fa402dc635c2879539ee2f Mon Sep 17 00:00:00 2001 From: Guillaume GARDET <guillaume.gardet@free.fr> Date: Wed, 13 Apr 2016 13:44:29 +0200 Subject: [PATCH] XXX openSUSE XXX: Prepend partition 2 (and 3 fo chromebook snow) to list of boot partition to load DTB before EFI Also add new folders to find DTB --- include/config_distro_bootcmd.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/config_distro_bootcmd.h b/include/config_distro_bootcmd.h index 555efb7433..6579ffe107 100644 --- a/include/config_distro_bootcmd.h +++ b/include/config_distro_bootcmd.h @@ -141,7 +141,7 @@ "load ${devtype} ${devnum}:${distro_bootpart} " \ "${fdt_addr_r} ${prefix}${efi_fdtfile}\0" \ \ - "efi_dtb_prefixes=/ /dtb/ /dtb/current/\0" \ + "efi_dtb_prefixes=/ /dtb/ /dtb/current/ /boot/ /boot/dtb/ /boot/dtb/current/\0" \ "scan_dev_for_efi=" \ "setenv efi_fdtfile ${fdtfile}; " \ BOOTENV_EFI_SET_FDTFILE_FALLBACK \ @@ -412,7 +412,7 @@ "scan_dev_for_boot_part=" \ "part list ${devtype} ${devnum} -bootable devplist; " \ "env exists devplist || setenv devplist 1; " \ - "for distro_bootpart in ${devplist}; do " \ + "for distro_bootpart in 2 3 ${devplist}; do " \ "if fstype ${devtype} " \ "${devnum}:${distro_bootpart} " \ "bootfstype; then " \
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "include/config_distro_bootcmd.h" ]
SUSE-Proprietary-or-OSS
11,150
u-boot-orangepipc2
0009-rpi-Allow-to-boot-without-serial.patch
From b18857389b6fc8641afec7077c16bbdea80febf7 Mon Sep 17 00:00:00 2001 From: Alexander Graf <agraf@suse.de> Date: Thu, 5 Apr 2018 11:36:22 +0200 Subject: [PATCH] rpi: Allow to boot without serial When we enable CONFIG_OF_BOARD on Raspberry Pis, we may end up without serial console support in early boot. Hence we need to make the serial port optional, otherwise we will never get to the point where serial would be probed. Signed-off-by: Alexander Graf <agraf@suse.de> --- configs/rpi_0_w_defconfig | 1 + configs/rpi_2_defconfig | 1 + configs/rpi_defconfig | 1 + 3 files changed, 3 insertions(+) diff --git a/configs/rpi_0_w_defconfig b/configs/rpi_0_w_defconfig index 3b7e4f7ad9..3958b768d1 100644 --- a/configs/rpi_0_w_defconfig +++ b/configs/rpi_0_w_defconfig @@ -34,3 +34,4 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_CONSOLE_SCROLL_LINES=10 CONFIG_PHYS_TO_BUS=y CONFIG_OF_LIBFDT_OVERLAY=y +# CONFIG_REQUIRE_SERIAL_CONSOLE is not set diff --git a/configs/rpi_2_defconfig b/configs/rpi_2_defconfig index de9c0e1937..e915c14219 100644 --- a/configs/rpi_2_defconfig +++ b/configs/rpi_2_defconfig @@ -34,3 +34,4 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_CONSOLE_SCROLL_LINES=10 CONFIG_PHYS_TO_BUS=y CONFIG_OF_LIBFDT_OVERLAY=y +# CONFIG_REQUIRE_SERIAL_CONSOLE is not set diff --git a/configs/rpi_defconfig b/configs/rpi_defconfig index d75032c420..62d37c0afb 100644 --- a/configs/rpi_defconfig +++ b/configs/rpi_defconfig @@ -34,3 +34,4 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_CONSOLE_SCROLL_LINES=10 CONFIG_PHYS_TO_BUS=y CONFIG_OF_LIBFDT_OVERLAY=y +# CONFIG_REQUIRE_SERIAL_CONSOLE is not set
[ "CVE-2019-13104", "CVE-2019-13106" ]
[ "configs/rpi_0_w_defconfig", "configs/rpi_2_defconfig", "configs/rpi_defconfig" ]
SUSE-Proprietary-or-OSS
14,956
u-boot-orangepipc2
0006-tools-zynqmpbif-Add-support-for-loa.patch
From eaabb2031f87e485b6b13d9af454532e76dc69ba Mon Sep 17 00:00:00 2001 From: Alexander Graf <agraf@suse.de> Date: Thu, 26 Apr 2018 13:30:32 +0200 Subject: [PATCH] tools: zynqmpbif: Add support for load=after Some times it's handy to have a partition loaded immediately after the end of the previous blob. The most obvious example for this is a U-Boot binary (coming from .elf) and a device tree file. This patch adds that logic. With this, the following bif snippet does what you would expect: [destination_cpu=a5x-0, exception_level=el-2] u-boot.elf [load=after] u-boot.dtb converts to FSBL payload on CPU a5x-0 (PS): Offset : 0x00590500 Size : 577768 (0x8d0e8) bytes Load : 0x08000000 Attributes : EL2 Checksum : 0xefca2cad FSBL payload on CPU none (PS): Offset : 0x0061d640 Size : 129760 (0x1fae0) bytes Load : 0x0808d0e8 (entry=0x00000000) Attributes : EL3 Checksum : 0xf7dd3d49 Signed-off-by: Alexander Graf <agraf@suse.de> --- tools/zynqmpbif.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/tools/zynqmpbif.c b/tools/zynqmpbif.c index 6c8f66055d..47c233c15f 100644 --- a/tools/zynqmpbif.c +++ b/tools/zynqmpbif.c @@ -42,6 +42,7 @@ enum bif_flag { BIF_FLAG_PUF_FILE, BIF_FLAG_AARCH32, BIF_FLAG_PART_OWNER_UBOOT, + BIF_FLAG_LOAD_AFTER, /* Internal flags */ BIF_FLAG_BIT_FILE, @@ -151,6 +152,11 @@ static char *parse_load(char *line, struct bif_entry *bf) { char *endptr; + if (!strncmp(line, "after", strlen("after"))) { + bf->flags |= (1ULL << BIF_FLAG_LOAD_AFTER); + return line + strlen("after"); + } + bf->load = strtoll(line, &endptr, 0); return endptr; @@ -336,6 +342,15 @@ static int bif_add_part(struct bif_entry *bf, const char *data, size_t len) if (r) return r; + if (bf->flags & (1ULL << BIF_FLAG_LOAD_AFTER) && + bif_output.last_part) { + struct partition_header *p = bif_output.last_part; + uint64_t load = le64_to_cpu(p->load_address); + + load += le32_to_cpu(p->len) * 4; + parthdr.load_address = cpu_to_le64(load); + } + parthdr.offset = 
cpu_to_le32(bf->offset / 4); if (bf->flags & (1ULL << BIF_FLAG_BOOTLOADER)) {
[ "CVE-2020-10648", "CVE-2019-14198", "CVE-2019-11059", "CVE-2019-11690", "CVE-2019-14195", "CVE-2019-14201", "CVE-2019-14200", "CVE-2019-14203", "CVE-2019-14196", "CVE-2019-14192", "CVE-2020-8432", "CVE-2019-14204", "CVE-2019-14202", "CVE-2019-14194", "CVE-2019-13103", "CVE-2019-14197", "CVE-2019-14193", "CVE-2019-14199" ]
[ "tools/zynqmpbif.c" ]
SUSE-Proprietary-or-OSS
11,150
u-boot-orangepipc2
0005-rpi3-Enable-lan78xx-driver.patch
From e75d103c47d687ecc49cb522833084ebf5da04fd Mon Sep 17 00:00:00 2001 From: Alexander Graf <agraf@suse.de> Date: Thu, 15 Mar 2018 09:54:10 +0100 Subject: [PATCH] rpi3: Enable lan78xx driver The new Raspberry Pi B 3+ has a lan78xx device attached to it. Let's add driver support in U-Boot for it. Signed-off-by: Alexander Graf <agraf@suse.de> --- configs/rpi_3_defconfig | 2 ++ 1 file changed, 2 insertions(+) diff --git a/configs/rpi_3_defconfig b/configs/rpi_3_defconfig index ca55f8dc66..1ee04c68f8 100644 --- a/configs/rpi_3_defconfig +++ b/configs/rpi_3_defconfig @@ -36,3 +36,5 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_CONSOLE_SCROLL_LINES=10 CONFIG_PHYS_TO_BUS=y CONFIG_OF_LIBFDT_OVERLAY=y +CONFIG_PHYLIB=y +CONFIG_USB_ETHER_LAN78XX=y
[ "CVE-2019-13104", "CVE-2019-13106" ]
[ "configs/rpi_3_defconfig" ]
SUSE-Proprietary-or-OSS
16,008
python-uamqp
u_strip-werror.patch
diff -Nru uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/configs/azure_iot_build_rules.cmake uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/configs/azure_iot_build_rules.cmake --- uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/configs/azure_iot_build_rules.cmake 2019-07-03 18:02:50.000000000 +0200 +++ uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/configs/azure_iot_build_rules.cmake 2019-09-24 15:44:41.776180260 +0200 @@ -77,8 +77,6 @@ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /W4") endif() elseif(UNIX) #LINUX OR APPLE - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror") if(NOT (IN_OPENWRT OR APPLE)) set (CMAKE_C_FLAGS "-D_POSIX_C_SOURCE=200112L ${CMAKE_C_FLAGS}") endif() @@ -204,10 +202,6 @@ # Make warning as error add_definitions(/WX) ENDIF() -ELSE() - # Make warning as error - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror") ENDIF() diff -Nru uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/ctest/CMakeLists.txt uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/ctest/CMakeLists.txt --- uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/ctest/CMakeLists.txt 2019-07-03 18:02:50.000000000 +0200 +++ uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/ctest/CMakeLists.txt 2019-09-24 15:41:34.789932583 +0200 @@ -43,8 +43,6 @@ endif() elseif(LINUX) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror") if(NOT IN_OPENWRT) set (CMAKE_C_FLAGS "-D_POSIX_C_SOURCE=200112L ${CMAKE_C_FLAGS}") endif() diff -Nru uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/umock-c/CMakeLists.txt uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/umock-c/CMakeLists.txt --- 
uamqp-1.2.2.orig/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/umock-c/CMakeLists.txt 2019-07-03 18:02:50.000000000 +0200 +++ uamqp-1.2.2/src/vendor/azure-uamqp-c/deps/azure-c-shared-utility/testtools/umock-c/CMakeLists.txt 2019-09-24 15:44:14.731867458 +0200 @@ -26,9 +26,6 @@ if(MSVC) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /W4") -elseif(UNIX) # LINUX OR APPLE - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror") - set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror") endif() include (CTest)
[ "CVE-2020-14343", "CVE-2020-25659" ]
[]
SUSE-Proprietary-or-OSS
6,366
guile
guile-64bit.patch
Index: libguile/hash.c =================================================================== --- libguile/hash.c.orig +++ libguile/hash.c @@ -273,7 +273,7 @@ scm_hasher(SCM obj, unsigned long n, siz unsigned long scm_ihashq (SCM obj, unsigned long n) { - return (SCM_UNPACK (obj) >> 1) % n; + return ((unsigned long) SCM_UNPACK (obj) >> 1) % n; } @@ -309,7 +309,7 @@ scm_ihashv (SCM obj, unsigned long n) if (SCM_NUMP(obj)) return (unsigned long) scm_hasher(obj, n, 10); else - return SCM_UNPACK (obj) % n; + return (unsigned long) SCM_UNPACK (obj) % n; } Index: libguile/struct.c =================================================================== --- libguile/struct.c.orig +++ libguile/struct.c @@ -919,7 +919,7 @@ scm_struct_ihashq (SCM obj, unsigned lon { /* The length of the hash table should be a relative prime it's not necessary to shift down the address. */ - return SCM_UNPACK (obj) % n; + return (unsigned long) SCM_UNPACK (obj) % n; } /* Return the hash of struct OBJ, modulo N. Traverse OBJ's fields to
[ "CVE-2016-8605" ]
[ "libguile/hash.c", "libguile/struct.c" ]
GFDL-1.3-only AND GPL-3.0-or-later AND LGPL-3.0-or-later
5,728
guile
guile-net-db-test.patch
Index: guile-2.0.7/test-suite/tests/net-db.test =================================================================== --- guile-2.0.7.orig/test-suite/tests/net-db.test +++ guile-2.0.7/test-suite/tests/net-db.test @@ -79,6 +79,7 @@ (and (defined? 'EAI_NODATA) ; GNU extension (= errcode EAI_NODATA)) (= errcode EAI_AGAIN) + (= errcode EAI_SYSTEM) (begin (format #t "unexpected error code: ~a ~s~%" errcode (gai-strerror errcode)) @@ -105,6 +106,7 @@ ;; `EAI_NONAME'.) (and (or (= errcode EAI_SERVICE) (= errcode EAI_NONAME) + (= errcode EAI_SYSTEM) (and (defined? 'EAI_NODATA) (= errcode EAI_NODATA))) (string? (gai-strerror errcode))))))))
[ "CVE-2016-8606", "CVE-2016-8605" ]
[ "test-suite/tests/net-db.test" ]
GFDL-1.3-only AND GPL-3.0-or-later AND LGPL-3.0-or-later
6,366
guile
guile-threads-test.patch
Index: guile-2.0.5/test-suite/tests/threads.test =================================================================== --- guile-2.0.5.orig/test-suite/tests/threads.test +++ guile-2.0.5/test-suite/tests/threads.test @@ -414,8 +414,10 @@ (gc) (gc) (let ((m (g))) - (and (mutex? m) - (eq? (mutex-owner m) (current-thread))))))) + (or + (and (mutex? m) + (eq? (mutex-owner m) (current-thread))) + (throw 'unresolved)))))) ;; ;; mutex lock levels
[ "CVE-2016-8605" ]
[ "test-suite/tests/threads.test" ]
GFDL-1.3-only AND GPL-3.0-or-later AND LGPL-3.0-or-later
5,728
guile
guile-64bit.patch
Index: libguile/hash.c =================================================================== --- libguile/hash.c.orig +++ libguile/hash.c @@ -273,7 +273,7 @@ scm_hasher(SCM obj, unsigned long n, siz unsigned long scm_ihashq (SCM obj, unsigned long n) { - return (SCM_UNPACK (obj) >> 1) % n; + return ((unsigned long) SCM_UNPACK (obj) >> 1) % n; } @@ -309,7 +309,7 @@ scm_ihashv (SCM obj, unsigned long n) if (SCM_NUMP(obj)) return (unsigned long) scm_hasher(obj, n, 10); else - return SCM_UNPACK (obj) % n; + return (unsigned long) SCM_UNPACK (obj) % n; } Index: libguile/struct.c =================================================================== --- libguile/struct.c.orig +++ libguile/struct.c @@ -919,7 +919,7 @@ scm_struct_ihashq (SCM obj, unsigned lon { /* The length of the hash table should be a relative prime it's not necessary to shift down the address. */ - return SCM_UNPACK (obj) % n; + return (unsigned long) SCM_UNPACK (obj) % n; } /* Return the hash of struct OBJ, modulo N. Traverse OBJ's fields to
[ "CVE-2016-8606", "CVE-2016-8605" ]
[ "libguile/hash.c", "libguile/struct.c" ]
GFDL-1.3-only AND GPL-3.0-or-later AND LGPL-3.0-or-later
6,366
guile
guile-1.6.10-mktemp.patch
Index: libguile/guile-snarf.in =================================================================== --- libguile/guile-snarf.in.orig 2011-05-05 18:14:35.000000000 +0200 +++ libguile/guile-snarf.in 2011-09-22 17:56:41.010417735 +0200 @@ -84,8 +84,7 @@ fi cpp_ok_p=false if [ x"$TMPDIR" = x ]; then TMPDIR="/tmp" ; else : ; fi -tempdir="$TMPDIR/guile-snarf.$$" -(umask 077 && mkdir $tempdir) || exit 1 +tempdir=$(mktemp -d -q "$TMPDIR/snarf.XXXXXX") || { echo >&2 "guile-snarf: can not create temporary file"; exit 1; } temp="$tempdir/tmp" if [ x"$CPP" = x ] ; then cpp="@CPP@" ; else cpp="$CPP" ; fi
[ "CVE-2016-8605" ]
[ "libguile/guile-snarf.in" ]
GFDL-1.3-only AND GPL-3.0-or-later AND LGPL-3.0-or-later
16,027
python-jsondiff
remove_nose.patch
--- /dev/null +++ b/tests/_random.py @@ -0,0 +1,33 @@ +import sys +from random import Random + +PY3 = (sys.version_info[0] == 3) + + +def _generate_tag(n, rng): + return ''.join(rng.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789') + for _ in range(n)) + + +def randomize(n, scenario_generator, seed=12038728732): + def decorator(test): + def randomized_test(self): + rng_seed = Random(seed) + nseeds = n + # (rng_seed.getrandbits(32) for i in range(n)) + seeds = (_generate_tag(12, rng_seed) for i in range(n)) + for i, rseed in enumerate(seeds): + rng = Random(rseed) + scenario = scenario_generator(self, rng) + try: + test(self, scenario) + except Exception as e: + import sys + if PY3: + raise type(e).with_traceback(type(e)('%s with scenario %s (%i of %i)' % + (e.message, rseed, i+1, nseeds)), sys.exc_info()[2]) + else: + raise (type(e), type(e)('%s with scenario %s (%i of %i)' + % (e.message, rseed, i+1, nseeds)), sys.exc_info()[2]) + return randomized_test + return decorator --- a/tests/__init__.py +++ b/tests/__init__.py @@ -1,115 +0,0 @@ -import unittest - -from jsondiff import diff, replace, add, discard, insert, delete, update, JsonDiffer - -from .utils import generate_random_json, perturbate_json - -from nose_random import randomize - - -class JsonDiffTests(unittest.TestCase): - - def test_a(self): - - self.assertEqual({}, diff(1, 1)) - self.assertEqual({}, diff(True, True)) - self.assertEqual({}, diff('abc', 'abc')) - self.assertEqual({}, diff([1, 2], [1, 2])) - self.assertEqual({}, diff((1, 2), (1, 2))) - self.assertEqual({}, diff({1, 2}, {1, 2})) - self.assertEqual({}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2})) - self.assertEqual({}, diff([], [])) - self.assertEqual({}, diff(None, None)) - self.assertEqual({}, diff({}, {})) - self.assertEqual({}, diff(set(), set())) - - self.assertEqual(2, diff(1, 2)) - self.assertEqual(False, diff(True, False)) - self.assertEqual('def', diff('abc', 'def')) - self.assertEqual([3, 4], diff([1, 2], [3, 4])) - self.assertEqual((3, 
4), diff((1, 2), (3, 4))) - self.assertEqual({3, 4}, diff({1, 2}, {3, 4})) - self.assertEqual({replace: {'c': 3, 'd': 4}}, diff({'a': 1, 'b': 2}, {'c': 3, 'd': 4})) - - self.assertEqual({replace: {'c': 3, 'd': 4}}, diff([1, 2], {'c': 3, 'd': 4})) - self.assertEqual(123, diff({'a': 1, 'b': 2}, 123)) - - self.assertEqual({delete: ['b']}, diff({'a': 1, 'b': 2}, {'a': 1})) - self.assertEqual({'b': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 3})) - self.assertEqual({'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3})) - self.assertEqual({delete: ['b'], 'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'c': 3})) - - self.assertEqual({add: {3}}, diff({1, 2}, {1, 2, 3})) - self.assertEqual({add: {3}, discard: {4}}, diff({1, 2, 4}, {1, 2, 3})) - self.assertEqual({discard: {4}}, diff({1, 2, 4}, {1, 2})) - - self.assertEqual({insert: [(1, 'b')]}, diff(['a', 'c'], ['a', 'b', 'c'])) - self.assertEqual({insert: [(1, 'b')], delete: [3, 0]}, diff(['x', 'a', 'c', 'x'], ['a', 'b', 'c'])) - self.assertEqual( - {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, - diff(['x', 'a', {'v': 11}, 'c', 'x'], ['a', {'v': 20}, 'b', 'c']) - ) - self.assertEqual( - {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, - diff(['x', 'a', {'u': 10, 'v': 11}, 'c', 'x'], ['a', {'u': 10, 'v': 20}, 'b', 'c']) - ) - - def test_marshal(self): - differ = JsonDiffer() - - d = { - delete: 3, - '$delete': 4, - insert: 4, - '$$something': 1 - } - - dm = differ.marshal(d) - - self.assertEqual(d, differ.unmarshal(dm)) - - def generate_scenario(self, rng): - a = generate_random_json(rng, sets=True) - b = perturbate_json(a, rng, sets=True) - return a, b - - def generate_scenario_no_sets(self, rng): - a = generate_random_json(rng, sets=False) - b = perturbate_json(a, rng, sets=False) - return a, b - - @randomize(1000, generate_scenario_no_sets) - def test_dump(self, scenario): - a, b = scenario - diff(a, b, syntax='compact', dump=True) - diff(a, b, syntax='explicit', dump=True) - diff(a, b, syntax='symmetric', dump=True) 
- - @randomize(1000, generate_scenario) - def test_compact_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='compact') - d = differ.diff(a, b) - self.assertEqual(b, differ.patch(a, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) - - - @randomize(1000, generate_scenario) - def test_explicit_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='explicit') - d = differ.diff(a, b) - # self.assertEqual(b, differ.patch(a, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) - - @randomize(1000, generate_scenario) - def test_symmetric_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='symmetric') - d = differ.diff(a, b) - self.assertEqual(b, differ.patch(a, d)) - self.assertEqual(a, differ.unpatch(b, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) --- /dev/null +++ b/tests/test_jsondiff.py @@ -0,0 +1,112 @@ +import unittest + +from jsondiff import diff, replace, add, discard, insert, delete, update, JsonDiffer +from tests.utils import generate_random_json, perturbate_json +from tests._random import randomize + + +class JsonDiffTests(unittest.TestCase): + + def test_a(self): + + self.assertEqual({}, diff(1, 1)) + self.assertEqual({}, diff(True, True)) + self.assertEqual({}, diff('abc', 'abc')) + self.assertEqual({}, diff([1, 2], [1, 2])) + self.assertEqual({}, diff((1, 2), (1, 2))) + self.assertEqual({}, diff({1, 2}, {1, 2})) + self.assertEqual({}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2})) + self.assertEqual({}, diff([], [])) + self.assertEqual({}, diff(None, None)) + self.assertEqual({}, diff({}, {})) + self.assertEqual({}, diff(set(), set())) + + self.assertEqual(2, diff(1, 2)) + self.assertEqual(False, diff(True, False)) + self.assertEqual('def', diff('abc', 'def')) + self.assertEqual([3, 4], diff([1, 2], [3, 4])) + self.assertEqual((3, 4), diff((1, 2), (3, 4))) + self.assertEqual({3, 4}, diff({1, 2}, {3, 4})) + self.assertEqual({replace: 
{'c': 3, 'd': 4}}, diff({'a': 1, 'b': 2}, {'c': 3, 'd': 4})) + + self.assertEqual({replace: {'c': 3, 'd': 4}}, diff([1, 2], {'c': 3, 'd': 4})) + self.assertEqual(123, diff({'a': 1, 'b': 2}, 123)) + + self.assertEqual({delete: ['b']}, diff({'a': 1, 'b': 2}, {'a': 1})) + self.assertEqual({'b': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 3})) + self.assertEqual({'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3})) + self.assertEqual({delete: ['b'], 'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'c': 3})) + + self.assertEqual({add: {3}}, diff({1, 2}, {1, 2, 3})) + self.assertEqual({add: {3}, discard: {4}}, diff({1, 2, 4}, {1, 2, 3})) + self.assertEqual({discard: {4}}, diff({1, 2, 4}, {1, 2})) + + self.assertEqual({insert: [(1, 'b')]}, diff(['a', 'c'], ['a', 'b', 'c'])) + self.assertEqual({insert: [(1, 'b')], delete: [3, 0]}, diff(['x', 'a', 'c', 'x'], ['a', 'b', 'c'])) + self.assertEqual( + {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, + diff(['x', 'a', {'v': 11}, 'c', 'x'], ['a', {'v': 20}, 'b', 'c']) + ) + self.assertEqual( + {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, + diff(['x', 'a', {'u': 10, 'v': 11}, 'c', 'x'], ['a', {'u': 10, 'v': 20}, 'b', 'c']) + ) + + def test_marshal(self): + differ = JsonDiffer() + + d = { + delete: 3, + '$delete': 4, + insert: 4, + '$$something': 1 + } + + dm = differ.marshal(d) + + self.assertEqual(d, differ.unmarshal(dm)) + + def generate_scenario(self, rng): + a = generate_random_json(rng, sets=True) + b = perturbate_json(a, rng, sets=True) + return a, b + + def generate_scenario_no_sets(self, rng): + a = generate_random_json(rng, sets=False) + b = perturbate_json(a, rng, sets=False) + return a, b + + @randomize(1000, generate_scenario_no_sets) + def test_dump(self, scenario): + a, b = scenario + diff(a, b, syntax='compact', dump=True) + diff(a, b, syntax='explicit', dump=True) + diff(a, b, syntax='symmetric', dump=True) + + @randomize(1000, generate_scenario) + def test_compact_syntax(self, scenario): + a, b = scenario + 
differ = JsonDiffer(syntax='compact') + d = differ.diff(a, b) + self.assertEqual(b, differ.patch(a, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm)) + + @randomize(1000, generate_scenario) + def test_explicit_syntax(self, scenario): + a, b = scenario + differ = JsonDiffer(syntax='explicit') + d = differ.diff(a, b) + # self.assertEqual(b, differ.patch(a, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm)) + + @randomize(1000, generate_scenario) + def test_symmetric_syntax(self, scenario): + a, b = scenario + differ = JsonDiffer(syntax='symmetric') + d = differ.diff(a, b) + self.assertEqual(b, differ.patch(a, d)) + self.assertEqual(a, differ.unpatch(b, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm))
[ "CVE-2020-14343", "CVE-2020-25659" ]
[]
MIT
16,008
python-jsondiff
remove_nose.patch
--- /dev/null +++ b/tests/_random.py @@ -0,0 +1,33 @@ +import sys +from random import Random + +PY3 = (sys.version_info[0] == 3) + + +def _generate_tag(n, rng): + return ''.join(rng.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789') + for _ in range(n)) + + +def randomize(n, scenario_generator, seed=12038728732): + def decorator(test): + def randomized_test(self): + rng_seed = Random(seed) + nseeds = n + # (rng_seed.getrandbits(32) for i in range(n)) + seeds = (_generate_tag(12, rng_seed) for i in range(n)) + for i, rseed in enumerate(seeds): + rng = Random(rseed) + scenario = scenario_generator(self, rng) + try: + test(self, scenario) + except Exception as e: + import sys + if PY3: + raise type(e).with_traceback(type(e)('%s with scenario %s (%i of %i)' % + (e.message, rseed, i+1, nseeds)), sys.exc_info()[2]) + else: + raise (type(e), type(e)('%s with scenario %s (%i of %i)' + % (e.message, rseed, i+1, nseeds)), sys.exc_info()[2]) + return randomized_test + return decorator --- a/tests/__init__.py +++ b/tests/__init__.py @@ -1,115 +0,0 @@ -import unittest - -from jsondiff import diff, replace, add, discard, insert, delete, update, JsonDiffer - -from .utils import generate_random_json, perturbate_json - -from nose_random import randomize - - -class JsonDiffTests(unittest.TestCase): - - def test_a(self): - - self.assertEqual({}, diff(1, 1)) - self.assertEqual({}, diff(True, True)) - self.assertEqual({}, diff('abc', 'abc')) - self.assertEqual({}, diff([1, 2], [1, 2])) - self.assertEqual({}, diff((1, 2), (1, 2))) - self.assertEqual({}, diff({1, 2}, {1, 2})) - self.assertEqual({}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2})) - self.assertEqual({}, diff([], [])) - self.assertEqual({}, diff(None, None)) - self.assertEqual({}, diff({}, {})) - self.assertEqual({}, diff(set(), set())) - - self.assertEqual(2, diff(1, 2)) - self.assertEqual(False, diff(True, False)) - self.assertEqual('def', diff('abc', 'def')) - self.assertEqual([3, 4], diff([1, 2], [3, 4])) - self.assertEqual((3, 
4), diff((1, 2), (3, 4))) - self.assertEqual({3, 4}, diff({1, 2}, {3, 4})) - self.assertEqual({replace: {'c': 3, 'd': 4}}, diff({'a': 1, 'b': 2}, {'c': 3, 'd': 4})) - - self.assertEqual({replace: {'c': 3, 'd': 4}}, diff([1, 2], {'c': 3, 'd': 4})) - self.assertEqual(123, diff({'a': 1, 'b': 2}, 123)) - - self.assertEqual({delete: ['b']}, diff({'a': 1, 'b': 2}, {'a': 1})) - self.assertEqual({'b': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 3})) - self.assertEqual({'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3})) - self.assertEqual({delete: ['b'], 'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'c': 3})) - - self.assertEqual({add: {3}}, diff({1, 2}, {1, 2, 3})) - self.assertEqual({add: {3}, discard: {4}}, diff({1, 2, 4}, {1, 2, 3})) - self.assertEqual({discard: {4}}, diff({1, 2, 4}, {1, 2})) - - self.assertEqual({insert: [(1, 'b')]}, diff(['a', 'c'], ['a', 'b', 'c'])) - self.assertEqual({insert: [(1, 'b')], delete: [3, 0]}, diff(['x', 'a', 'c', 'x'], ['a', 'b', 'c'])) - self.assertEqual( - {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, - diff(['x', 'a', {'v': 11}, 'c', 'x'], ['a', {'v': 20}, 'b', 'c']) - ) - self.assertEqual( - {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, - diff(['x', 'a', {'u': 10, 'v': 11}, 'c', 'x'], ['a', {'u': 10, 'v': 20}, 'b', 'c']) - ) - - def test_marshal(self): - differ = JsonDiffer() - - d = { - delete: 3, - '$delete': 4, - insert: 4, - '$$something': 1 - } - - dm = differ.marshal(d) - - self.assertEqual(d, differ.unmarshal(dm)) - - def generate_scenario(self, rng): - a = generate_random_json(rng, sets=True) - b = perturbate_json(a, rng, sets=True) - return a, b - - def generate_scenario_no_sets(self, rng): - a = generate_random_json(rng, sets=False) - b = perturbate_json(a, rng, sets=False) - return a, b - - @randomize(1000, generate_scenario_no_sets) - def test_dump(self, scenario): - a, b = scenario - diff(a, b, syntax='compact', dump=True) - diff(a, b, syntax='explicit', dump=True) - diff(a, b, syntax='symmetric', dump=True) 
- - @randomize(1000, generate_scenario) - def test_compact_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='compact') - d = differ.diff(a, b) - self.assertEqual(b, differ.patch(a, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) - - - @randomize(1000, generate_scenario) - def test_explicit_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='explicit') - d = differ.diff(a, b) - # self.assertEqual(b, differ.patch(a, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) - - @randomize(1000, generate_scenario) - def test_symmetric_syntax(self, scenario): - a, b = scenario - differ = JsonDiffer(syntax='symmetric') - d = differ.diff(a, b) - self.assertEqual(b, differ.patch(a, d)) - self.assertEqual(a, differ.unpatch(b, d)) - dm = differ.marshal(d) - self.assertEqual(d, differ.unmarshal(dm)) --- /dev/null +++ b/tests/test_jsondiff.py @@ -0,0 +1,112 @@ +import unittest + +from jsondiff import diff, replace, add, discard, insert, delete, update, JsonDiffer +from tests.utils import generate_random_json, perturbate_json +from tests._random import randomize + + +class JsonDiffTests(unittest.TestCase): + + def test_a(self): + + self.assertEqual({}, diff(1, 1)) + self.assertEqual({}, diff(True, True)) + self.assertEqual({}, diff('abc', 'abc')) + self.assertEqual({}, diff([1, 2], [1, 2])) + self.assertEqual({}, diff((1, 2), (1, 2))) + self.assertEqual({}, diff({1, 2}, {1, 2})) + self.assertEqual({}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2})) + self.assertEqual({}, diff([], [])) + self.assertEqual({}, diff(None, None)) + self.assertEqual({}, diff({}, {})) + self.assertEqual({}, diff(set(), set())) + + self.assertEqual(2, diff(1, 2)) + self.assertEqual(False, diff(True, False)) + self.assertEqual('def', diff('abc', 'def')) + self.assertEqual([3, 4], diff([1, 2], [3, 4])) + self.assertEqual((3, 4), diff((1, 2), (3, 4))) + self.assertEqual({3, 4}, diff({1, 2}, {3, 4})) + self.assertEqual({replace: 
{'c': 3, 'd': 4}}, diff({'a': 1, 'b': 2}, {'c': 3, 'd': 4})) + + self.assertEqual({replace: {'c': 3, 'd': 4}}, diff([1, 2], {'c': 3, 'd': 4})) + self.assertEqual(123, diff({'a': 1, 'b': 2}, 123)) + + self.assertEqual({delete: ['b']}, diff({'a': 1, 'b': 2}, {'a': 1})) + self.assertEqual({'b': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 3})) + self.assertEqual({'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3})) + self.assertEqual({delete: ['b'], 'c': 3}, diff({'a': 1, 'b': 2}, {'a': 1, 'c': 3})) + + self.assertEqual({add: {3}}, diff({1, 2}, {1, 2, 3})) + self.assertEqual({add: {3}, discard: {4}}, diff({1, 2, 4}, {1, 2, 3})) + self.assertEqual({discard: {4}}, diff({1, 2, 4}, {1, 2})) + + self.assertEqual({insert: [(1, 'b')]}, diff(['a', 'c'], ['a', 'b', 'c'])) + self.assertEqual({insert: [(1, 'b')], delete: [3, 0]}, diff(['x', 'a', 'c', 'x'], ['a', 'b', 'c'])) + self.assertEqual( + {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, + diff(['x', 'a', {'v': 11}, 'c', 'x'], ['a', {'v': 20}, 'b', 'c']) + ) + self.assertEqual( + {insert: [(2, 'b')], delete: [4, 0], 1: {'v': 20}}, + diff(['x', 'a', {'u': 10, 'v': 11}, 'c', 'x'], ['a', {'u': 10, 'v': 20}, 'b', 'c']) + ) + + def test_marshal(self): + differ = JsonDiffer() + + d = { + delete: 3, + '$delete': 4, + insert: 4, + '$$something': 1 + } + + dm = differ.marshal(d) + + self.assertEqual(d, differ.unmarshal(dm)) + + def generate_scenario(self, rng): + a = generate_random_json(rng, sets=True) + b = perturbate_json(a, rng, sets=True) + return a, b + + def generate_scenario_no_sets(self, rng): + a = generate_random_json(rng, sets=False) + b = perturbate_json(a, rng, sets=False) + return a, b + + @randomize(1000, generate_scenario_no_sets) + def test_dump(self, scenario): + a, b = scenario + diff(a, b, syntax='compact', dump=True) + diff(a, b, syntax='explicit', dump=True) + diff(a, b, syntax='symmetric', dump=True) + + @randomize(1000, generate_scenario) + def test_compact_syntax(self, scenario): + a, b = scenario + 
differ = JsonDiffer(syntax='compact') + d = differ.diff(a, b) + self.assertEqual(b, differ.patch(a, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm)) + + @randomize(1000, generate_scenario) + def test_explicit_syntax(self, scenario): + a, b = scenario + differ = JsonDiffer(syntax='explicit') + d = differ.diff(a, b) + # self.assertEqual(b, differ.patch(a, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm)) + + @randomize(1000, generate_scenario) + def test_symmetric_syntax(self, scenario): + a, b = scenario + differ = JsonDiffer(syntax='symmetric') + d = differ.diff(a, b) + self.assertEqual(b, differ.patch(a, d)) + self.assertEqual(a, differ.unpatch(b, d)) + dm = differ.marshal(d) + self.assertEqual(d, differ.unmarshal(dm))
[ "CVE-2020-14343", "CVE-2020-25659" ]
[]
MIT
17,700
libmad
libmad-0.15.1b-ppc.patch
Index: libmad-0.15.1b/fixed.h
===================================================================
--- libmad-0.15.1b.orig/fixed.h
+++ libmad-0.15.1b/fixed.h
@@ -392,8 +392,8 @@ mad_fixed_t mad_f_mul_inline(mad_fixed_t
     asm ("addc %0,%2,%3\n\t" \
          "adde %1,%4,%5" \
          : "=r" (lo), "=r" (hi) \
-         : "%r" (lo), "r" (__lo), \
-           "%r" (hi), "r" (__hi) \
+         : "0" (lo), "r" (__lo), \
+           "1" (hi), "r" (__hi) \
          : "xer"); \
 })
 # endif
[ "CVE-2017-8373" ]
[ "libmad-0.15.1b/fixed.h" ]
GPL-2.0-or-later
7,826
libmad
libmad-0.15.1b-pkgconfig.patch
--- libmad-0.15.1b.old/configure.ac	2004-01-23 10:41:32.000000000 +0100
+++ libmad-0.15.1b/configure.ac	2004-08-07 02:25:24.633462168 +0200
@@ -429,5 +429,5 @@
 dnl AC_SUBST(LTLIBOBJS)
 
 AC_CONFIG_FILES([Makefile msvc++/Makefile \
-	libmad.list])
+	libmad.list mad.pc])
 AC_OUTPUT
--- libmad-0.15.1b.old/mad.pc.in	1970-01-01 01:00:00.000000000 +0100
+++ libmad-0.15.1b/mad.pc.in	2004-08-07 02:04:59.617692872 +0200
@@ -0,0 +1,14 @@
+# libmad pkg-config source file
+
+prefix=@prefix@
+exec_prefix=@exec_prefix@
+libdir=@libdir@
+includedir=@includedir@
+
+Name: mad
+Description: MPEG Audio Decoder
+Version: @VERSION@
+Requires:
+Conflicts:
+Libs: -L${libdir} -lmad -lm
+Cflags: -I${includedir}
--- libmad-0.15.1b.old/Makefile.am	2004-02-17 03:02:03.000000000 +0100
+++ libmad-0.15.1b/Makefile.am	2004-08-07 02:03:19.859858368 +0200
@@ -24,6 +24,9 @@
 SUBDIRS =
 DIST_SUBDIRS = msvc++
 
+pkgconfigdir = $(libdir)/pkgconfig
+pkgconfig_DATA = mad.pc
+
 lib_LTLIBRARIES = libmad.la
 include_HEADERS = mad.h
@@ -34,7 +37,8 @@
 minimad_LDADD = libmad.la
 
 EXTRA_DIST = mad.h.sed \
-	CHANGES COPYRIGHT CREDITS README TODO VERSION
+	CHANGES COPYRIGHT CREDITS README TODO VERSION \
+	mad.pc.in
 
 exported_headers = version.h fixed.h bit.h timer.h stream.h frame.h \
 	synth.h decoder.h
[ "CVE-2017-8374" ]
[]
GPL-2.0-or-later
17,700
libmad
libmad-0.15.1b-automake.patch
--- Makefile.am-dist	2004-02-26 12:49:05.000000000 +0100
+++ Makefile.am	2004-02-26 12:49:13.000000000 +0100
@@ -21,6 +21,8 @@
 
 ## Process this file with automake to produce Makefile.in
 
+AUTOMAKE_OPTIONS = foreign
+
 SUBDIRS =
 DIST_SUBDIRS = msvc++
[ "CVE-2017-8373" ]
[]
GPL-2.0-or-later
7,826
libmad
libmad-0.15.1b-automake.patch
--- Makefile.am-dist	2004-02-26 12:49:05.000000000 +0100
+++ Makefile.am	2004-02-26 12:49:13.000000000 +0100
@@ -21,6 +21,8 @@
 
 ## Process this file with automake to produce Makefile.in
 
+AUTOMAKE_OPTIONS = foreign
+
 SUBDIRS =
 DIST_SUBDIRS = msvc++
[ "CVE-2017-8374" ]
[]
GPL-2.0-or-later
17,700
libmad
Provide-Thumb-2-alternative-code-for-MAD_F_MLN.diff
From: Dave Martin
Subject: "rsc" doesnt exist anymore in thumb2

Index: libmad-0.15.1b/fixed.h
===================================================================
--- libmad-0.15.1b.orig/fixed.h
+++ libmad-0.15.1b/fixed.h
@@ -275,12 +275,25 @@ mad_fixed_t mad_f_mul_inline(mad_fixed_t
     : "+r" (lo), "+r" (hi) \
     : "%r" (x), "r" (y))
 
+#ifdef __thumb__
+/* In Thumb-2, the RSB-immediate instruction is only allowed with a zero
+   operand.  If needed this code can also support Thumb-1
+   (simply append "s" to the end of the second two instructions). */
+# define MAD_F_MLN(hi, lo) \
+    asm ("rsbs %0, %0, #0\n\t" \
+         "sbc %1, %1, %1\n\t" \
+         "sub %1, %1, %2" \
+         : "+&r" (lo), "=&r" (hi) \
+         : "r" (hi) \
+         : "cc")
+#else /* ! __thumb__ */
 # define MAD_F_MLN(hi, lo) \
     asm ("rsbs %0, %2, #0\n\t" \
          "rsc %1, %3, #0" \
-         : "=r" (lo), "=r" (hi) \
+         : "=&r" (lo), "=r" (hi) \
          : "0" (lo), "1" (hi) \
          : "cc")
+#endif /* __thumb__ */
 
 # define mad_f_scale64(hi, lo) \
     ({ mad_fixed_t __result; \
[ "CVE-2017-8373" ]
[ "libmad-0.15.1b/fixed.h" ]
GPL-2.0-or-later
5,666
sgmltool
sgml-tools-retval.diff
--- sgmls-1.1/replace.c
+++ sgmls-1.1/replace.c
@@ -274,6 +274,8 @@
     default:
       parse_error("bad input character `%c'", c);
     }
+  /* unreached */
+  return 0;
 }
 
 static
[ "CVE-2016-6354" ]
[]
SUSE-Public-Domain
5,666
sgmltool
cflags-sgml-tools-1.0.9.diff
--- sgml-tools-1.0.9/Makefile.in.~1~	2005-10-10 11:46:17.000000000 +0200
+++ sgml-tools-1.0.9/Makefile.in	2005-10-10 14:05:24.000000000 +0200
@@ -49,10 +49,10 @@
 endif
 	@echo "Compiling preprocessor (in sgmlpre/)..."
 	( cd sgmlpre ; \
-	  $(MAKE) CFLAGS="$(OPTIMIZE)" LEX=flex sgmlpre || exit -1 )
+	  $(MAKE) OPTIMIZE="$(OPTIMIZE)" LEX=flex sgmlpre || exit -1 )
 	@echo "Compiling RTF conversion tools (in rtf-fix/)..."
 	( cd rtf-fix ; \
-	  $(MAKE) CFLAGS="$(OPTIMIZE)" || exit -1 )
+	  $(MAKE) OPTIMIZE="$(OPTIMIZE)" || exit -1 )
 
 install::
 	@echo "Installing binaries in $(bindir) ..."

Diff finished.  Mon Oct 10 14:05:44 2005
[ "CVE-2016-6354" ]
[]
SUSE-Public-Domain
5,666
sgmltool
sgmltool-1.0.9-expandsyntax.diff
--- sgml-tools-1.0.9/lib/SGMLTools.pm
+++ sgml-tools-1.0.9/lib/SGMLTools.pm
@@ -440,7 +440,7 @@
     create_temp("$tmpbase.3");
 
     system ("$main::progs->{SGMLSASP} $style $mapping <$tmpbase.2|
-	    expand -$global->{tabsize} >$tmpbase.3");
+	    expand -t $global->{tabsize} >$tmpbase.3");
 
     #
     #  If a postASP stage is defined, let the format handle it.
[ "CVE-2016-6354" ]
[]
SUSE-Public-Domain
5,666
sgmltool
sgmltool-man-entities.diff
--- sgml-tools-1.0.9/lib/dist/fmt_txt.pl	Mon Jun 29 21:59:17 1998
+++ sgml-tools-1.0.9/lib/dist/fmt_txt.pl	Mon Jun 29 21:59:17 1998
@@ -81,11 +81,12 @@
 {
     my ($infile, $outfile) = @_;
     my (@toc, @lines);
-    if ($txt->{manpage})
-      {
-	copy ($infile, $outfile);
-	return;
-      }
+    #ke
+    #if ($txt->{manpage})
+    #  {
+    #	copy ($infile, $outfile);
+    #	return;
+    #  }
 
     # note the conversion of `sdata_dirs' list to an anonymous array to
     # make a single argument

Diff finished at Wed Jun 19 14:10:25
[ "CVE-2016-6354" ]
[]
SUSE-Public-Domain
16,042
bbswitch
0001-Update-proc_create_call-for-5.6-kernel.patch
From 314d223a1d1bab86370c2da3771b76bf8ac93e3b Mon Sep 17 00:00:00 2001
From: Adrian Curless <awcurless@wpi.edu>
Date: Tue, 31 Mar 2020 22:43:28 -0400
Subject: [PATCH] Update proc_create call to pass proc_ops instead of
 file_operations, a change made in 5.6

Modified by Antonio Larrosa <alarrosa@suse.com> to not bump the version
number since the patch was done by a fork of the upstream project [1]
and the only change it does is to fix it to build with the new kernel.

[1] https://github.com/awcurless/bbswitch/commit/314d223a1d1bab86370c2da3771b76bf8ac93e3b
---
 bbswitch.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/bbswitch.c b/bbswitch.c
index 341608f..2934b5a 100644
--- a/bbswitch.c
+++ b/bbswitch.c
@@ -35,8 +35,9 @@
 #include <linux/suspend.h>
 #include <linux/seq_file.h>
 #include <linux/pm_runtime.h>
+#include <linux/proc_fs.h>
 
-#define BBSWITCH_VERSION "0.8"
#+#define BBSWITCH_VERSION "0.9"
+#define BBSWITCH_VERSION "0.8"
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Toggle the discrete graphics card");
@@ -375,12 +376,12 @@ static int bbswitch_pm_handler(struct notifier_block *nbp,
     return 0;
 }
 
-static struct file_operations bbswitch_fops = {
-    .open = bbswitch_proc_open,
-    .read = seq_read,
-    .write = bbswitch_proc_write,
-    .llseek = seq_lseek,
-    .release= single_release
+static struct proc_ops bbswitch_ops = {
+    .proc_open = bbswitch_proc_open,
+    .proc_read = seq_read,
+    .proc_write = bbswitch_proc_write,
+    .proc_lseek = seq_lseek,
+    .proc_release = single_release,
 };
 
 static struct notifier_block nb = {
@@ -457,7 +458,7 @@ static int __init bbswitch_init(void) {
         }
     }
 
-    acpi_entry = proc_create("bbswitch", 0664, acpi_root_dir, &bbswitch_fops);
+    acpi_entry = proc_create("bbswitch", 0664, acpi_root_dir, &bbswitch_ops);
     if (acpi_entry == NULL) {
         pr_err("Couldn't create proc entry\n");
         return -ENOMEM;
[ "CVE-2021-3444", "CVE-2021-3428", "CVE-2021-29647", "CVE-2021-29265", "CVE-2021-29264", "CVE-2021-28972", "CVE-2021-28971", "CVE-2021-28964", "CVE-2021-28688", "CVE-2021-28660", "CVE-2021-28375", "CVE-2021-28038", "CVE-2021-27365", "CVE-2021-27364", "CVE-2021-27363", "CVE-2020-35519", "CVE-2020-27815", "CVE-2020-27171", "CVE-2020-27170", "CVE-2019-19769", "CVE-2019-18814" ]
[ "bbswitch.c" ]
GPL-2.0-or-later
13,737
bbswitch
0001-Update-proc_create_call-for-5.6-kernel.patch
From 314d223a1d1bab86370c2da3771b76bf8ac93e3b Mon Sep 17 00:00:00 2001
From: Adrian Curless <awcurless@wpi.edu>
Date: Tue, 31 Mar 2020 22:43:28 -0400
Subject: [PATCH] Update proc_create call to pass proc_ops instead of
 file_operations, a change made in 5.6

Modified by Antonio Larrosa <alarrosa@suse.com> to not bump the version
number since the patch was done by a fork of the upstream project [1]
and the only change it does is to fix it to build with the new kernel.

[1] https://github.com/awcurless/bbswitch/commit/314d223a1d1bab86370c2da3771b76bf8ac93e3b
---
 bbswitch.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/bbswitch.c b/bbswitch.c
index 341608f..2934b5a 100644
--- a/bbswitch.c
+++ b/bbswitch.c
@@ -35,8 +35,9 @@
 #include <linux/suspend.h>
 #include <linux/seq_file.h>
 #include <linux/pm_runtime.h>
+#include <linux/proc_fs.h>
 
-#define BBSWITCH_VERSION "0.8"
#+#define BBSWITCH_VERSION "0.9"
+#define BBSWITCH_VERSION "0.8"
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Toggle the discrete graphics card");
@@ -375,12 +376,12 @@ static int bbswitch_pm_handler(struct notifier_block *nbp,
     return 0;
 }
 
-static struct file_operations bbswitch_fops = {
-    .open = bbswitch_proc_open,
-    .read = seq_read,
-    .write = bbswitch_proc_write,
-    .llseek = seq_lseek,
-    .release= single_release
+static struct proc_ops bbswitch_ops = {
+    .proc_open = bbswitch_proc_open,
+    .proc_read = seq_read,
+    .proc_write = bbswitch_proc_write,
+    .proc_lseek = seq_lseek,
+    .proc_release = single_release,
 };
 
 static struct notifier_block nb = {
@@ -457,7 +458,7 @@ static int __init bbswitch_init(void) {
         }
     }
 
-    acpi_entry = proc_create("bbswitch", 0664, acpi_root_dir, &bbswitch_fops);
+    acpi_entry = proc_create("bbswitch", 0664, acpi_root_dir, &bbswitch_ops);
     if (acpi_entry == NULL) {
         pr_err("Couldn't create proc entry\n");
         return -ENOMEM;
[ "CVE-2020-14356", "CVE-2020-14331", "CVE-2020-16166", "CVE-2020-10135", "CVE-2020-0305", "CVE-2020-15780", "CVE-2020-10781" ]
[ "bbswitch.c" ]
GPL-2.0-or-later
10,311
NetworkManager
systemd-network-config.patch
Index: NetworkManager-1.4.0/data/NetworkManager.service.in
===================================================================
--- NetworkManager-1.4.0.orig/data/NetworkManager.service.in
+++ NetworkManager-1.4.0/data/NetworkManager.service.in
@@ -1,7 +1,7 @@
 [Unit]
 Description=Network Manager
 Documentation=man:NetworkManager(8)
-Wants=network.target
+Wants=remote-fs.target network.target
 After=network-pre.target dbus.service
 Before=network.target @DISTRO_NETWORK_SERVICE@
 
@@ -20,6 +20,6 @@ ProtectHome=read-only
 
 [Install]
 WantedBy=multi-user.target
-Alias=dbus-org.freedesktop.NetworkManager.service
+Alias=network.service
 Also=NetworkManager-dispatcher.service
Index: NetworkManager-1.4.0/data/NetworkManager-wait-online.service.in
===================================================================
--- NetworkManager-1.4.0.orig/data/NetworkManager-wait-online.service.in
+++ NetworkManager-1.4.0/data/NetworkManager-wait-online.service.in
@@ -7,7 +7,9 @@ Before=network-online.target
 
 [Service]
 Type=oneshot
-ExecStart=@bindir@/nm-online -s -q --timeout=30
+Environment=NM_ONLINE_TIMEOUT=0
+EnvironmentFile=-/etc/sysconfig/network/config
+ExecStart=/bin/bash -c "if [ ${NM_ONLINE_TIMEOUT} -gt 0 ]; then @bindir@/nm-online -s -q --timeout=${NM_ONLINE_TIMEOUT} ; else /bin/true ; fi"
 RemainAfterExit=yes
 
 [Install]
[ "CVE-2018-1000135" ]
[ "data/NetworkManager.service.in", "data/NetworkManager-wait-online.service.in" ]
GPL-2.0-or-later AND LGPL-2.1-or-later
851
NetworkManager
nm-treat-not-saved-secrets-just-like-agent-owned-when-cl.diff
From b4ccaf268f1c32d09df8f678dcf4c296f9b2b213 Mon Sep 17 00:00:00 2001
From: Ludwig Nussel <ludwig.nussel@suse.de>
Date: Tue, 27 Sep 2011 12:34:11 +0200
Subject: [PATCH 2/3] treat not saved secrets just like agent owned when
 cleaning

---
 src/settings/nm-settings-connection.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/settings/nm-settings-connection.c b/src/settings/nm-settings-connection.c
index cdad832..4cd9395 100644
--- a/src/settings/nm-settings-connection.c
+++ b/src/settings/nm-settings-connection.c
@@ -592,7 +592,7 @@ clear_nonagent_secrets (GHashTableIter *iter,
                         NMSettingSecretFlags flags,
                         gpointer user_data)
 {
-	if (flags != NM_SETTING_SECRET_FLAG_AGENT_OWNED)
+	if (!(flags & (NM_SETTING_SECRET_FLAG_AGENT_OWNED | NM_SETTING_SECRET_FLAG_NOT_SAVED)))
 		g_hash_table_iter_remove (iter);
 	return TRUE;
 }
-- 
1.7.3.4
[ "CVE-2012-2736" ]
[ "src/settings/nm-settings-connection.c" ]
GPL-2.0-or-later AND LGPL-2.1-or-later
851
NetworkManager
nm-disable-adhoc-wpa.patch
commit 69247a00eacd00617acbf1dfcee8497437b8ad39 Author: Dan Williams <dcbw@redhat.com> Date: Fri Mar 16 17:56:32 2012 -0500 wifi: disable Ad-Hoc WPA connections (lp:905748) The kernel is broken for Ad-Hoc WPA, and creates the connections as open connections instead. Yeah, eventually we can use wpa_supplicant with RSN support, but for now we just have to disable Ad-Hoc WPA because it's a problem to say we're creating a protected network but then have the kernel not do that for us. Will be re-enabled once all the necessary bits have been fixed. Note that Ad-Hoc WPA has been broken since at least 2.6.32 with mac80211-based drivers, which is what most users will be using. Index: NetworkManager-0.9.1.90/libnm-util/nm-utils.c =================================================================== --- NetworkManager-0.9.1.90.orig/libnm-util/nm-utils.c +++ NetworkManager-0.9.1.90/libnm-util/nm-utils.c @@ -1249,6 +1249,8 @@ nm_utils_security_valid (NMUtilsSecurity } break; case NMU_SEC_WPA_PSK: + if (adhoc) + return FALSE; /* FIXME: Kernel WPA Ad-Hoc support is buggy */ if (!(wifi_caps & NM_WIFI_DEVICE_CAP_WPA)) return FALSE; if (have_ap) { @@ -1275,6 +1277,8 @@ nm_utils_security_valid (NMUtilsSecurity } break; case NMU_SEC_WPA2_PSK: + if (adhoc) + return FALSE; /* FIXME: Kernel WPA Ad-Hoc support is buggy */ if (!(wifi_caps & NM_WIFI_DEVICE_CAP_RSN)) return FALSE; if (have_ap) { Index: NetworkManager-0.9.1.90/src/nm-device-wifi.c =================================================================== --- NetworkManager-0.9.1.90.orig/src/nm-device-wifi.c +++ NetworkManager-0.9.1.90/src/nm-device-wifi.c @@ -1312,6 +1312,36 @@ real_deactivate (NMDevice *dev) } static gboolean +is_adhoc_wpa (NMConnection *connection) +{ + NMSettingWireless *s_wifi; + NMSettingWirelessSecurity *s_wsec; + const char *mode, *key_mgmt; + + /* The kernel doesn't support Ad-Hoc WPA connections well at this time, + * and turns them into open networks. 
It's been this way since at least + * 2.6.30 or so; until that's fixed, disable WPA-protected Ad-Hoc networks. + */ + + s_wifi = nm_connection_get_setting_wireless (connection); + g_return_val_if_fail (s_wifi != NULL, FALSE); + + mode = nm_setting_wireless_get_mode (s_wifi); + if (g_strcmp0 (mode, NM_SETTING_WIRELESS_MODE_ADHOC) != 0) + return FALSE; + + s_wsec = nm_connection_get_setting_wireless_security (connection); + if (!s_wsec) + return FALSE; + + key_mgmt = nm_setting_wireless_security_get_key_mgmt (s_wsec); + if (g_strcmp0 (key_mgmt, "wpa-none") != 0) + return FALSE; + + return TRUE; +} + +static gboolean real_check_connection_compatible (NMDevice *device, NMConnection *connection, GError **error) @@ -1368,6 +1398,14 @@ real_check_connection_compatible (NMDevi } } + if (is_adhoc_wpa (connection)) { + g_set_error_literal (error, + NM_WIFI_ERROR, + NM_WIFI_ERROR_CONNECTION_INCOMPATIBLE, + "WPA Ad-Hoc disabled due to kernel bugs"); + return FALSE; + } + // FIXME: check channel/freq/band against bands the hardware supports // FIXME: check encryption against device capabilities // FIXME: check bitrate against device capabilities @@ -1528,6 +1566,18 @@ real_complete_connection (NMDevice *devi return FALSE; } + /* The kernel doesn't support Ad-Hoc WPA connections well at this time, + * and turns them into open networks. It's been this way since at least + * 2.6.30 or so; until that's fixed, disable WPA-protected Ad-Hoc networks. 
+ */ + if (is_adhoc_wpa (connection)) { + g_set_error_literal (error, + NM_SETTING_WIRELESS_ERROR, + NM_SETTING_WIRELESS_ERROR_INVALID_PROPERTY, + "WPA Ad-Hoc disabled due to kernel bugs"); + return FALSE; + } + g_assert (ssid); str_ssid = nm_utils_ssid_to_utf8 (ssid); format = g_strdup_printf ("%s %%d", str_ssid); @@ -3223,6 +3273,16 @@ real_act_stage1_prepare (NMDevice *dev, connection = nm_act_request_get_connection (req); g_return_val_if_fail (connection != NULL, NM_ACT_STAGE_RETURN_FAILURE); + /* The kernel doesn't support Ad-Hoc WPA connections well at this time, + * and turns them into open networks. It's been this way since at least + * 2.6.30 or so; until that's fixed, disable WPA-protected Ad-Hoc networks. + */ + if (is_adhoc_wpa (connection)) { + nm_log_warn (LOGD_WIFI, "Ad-Hoc WPA disabled due to kernel bugs"); + *reason = NM_DEVICE_STATE_REASON_SUPPLICANT_CONFIG_FAILED; + return NM_ACT_STAGE_RETURN_FAILURE; + } + /* Set spoof MAC to the interface */ s_wireless = (NMSettingWireless *) nm_connection_get_setting (connection, NM_TYPE_SETTING_WIRELESS); g_assert (s_wireless); Index: NetworkManager-0.9.1.90/src/settings/nm-settings.c =================================================================== --- NetworkManager-0.9.1.90.orig/src/settings/nm-settings.c +++ NetworkManager-0.9.1.90/src/settings/nm-settings.c @@ -977,6 +977,38 @@ add_cb (NMSettings *self, dbus_g_method_return (context, nm_connection_get_path (NM_CONNECTION (connection))); } +/* FIXME: remove if/when kernel supports adhoc wpa */ +static gboolean +is_adhoc_wpa (NMConnection *connection) +{ + NMSettingWireless *s_wifi; + NMSettingWirelessSecurity *s_wsec; + const char *mode, *key_mgmt; + + /* The kernel doesn't support Ad-Hoc WPA connections well at this time, + * and turns them into open networks. It's been this way since at least + * 2.6.30 or so; until that's fixed, disable WPA-protected Ad-Hoc networks. 
+ */ + + s_wifi = nm_connection_get_setting_wireless (connection); + if (!s_wifi) + return FALSE; + + mode = nm_setting_wireless_get_mode (s_wifi); + if (g_strcmp0 (mode, NM_SETTING_WIRELESS_MODE_ADHOC) != 0) + return FALSE; + + s_wsec = nm_connection_get_setting_wireless_security (connection); + if (!s_wsec) + return FALSE; + + key_mgmt = nm_setting_wireless_security_get_key_mgmt (s_wsec); + if (g_strcmp0 (key_mgmt, "wpa-none") != 0) + return FALSE; + + return TRUE; +} + void nm_settings_add_connection (NMSettings *self, NMConnection *connection, @@ -1002,6 +1034,19 @@ nm_settings_add_connection (NMSettings * callback (self, NULL, error, context, user_data); g_error_free (error); return; + } + + /* The kernel doesn't support Ad-Hoc WPA connections well at this time, + * and turns them into open networks. It's been this way since at least + * 2.6.30 or so; until that's fixed, disable WPA-protected Ad-Hoc networks. + */ + if (is_adhoc_wpa (connection)) { + error = g_error_new_literal (NM_SETTINGS_ERROR, + NM_SETTINGS_ERROR_INVALID_CONNECTION, + "WPA Ad-Hoc disabled due to kernel bugs"); + callback (self, NULL, error, context, user_data); + g_error_free (error); + return; } /* Do any of the plugins support adding? */
[ "CVE-2012-2736" ]
[ "libnm-util/nm-utils.c", "src/nm-device-wifi.c", "src/settings/nm-settings.c" ]
GPL-2.0-or-later AND LGPL-2.1-or-later
851
NetworkManager
nm-probe-radius-server-cert.patch
diff --git a/introspection/nm-device-wifi.xml b/introspection/nm-device-wifi.xml
index fb50762..fdff623 100644
--- a/introspection/nm-device-wifi.xml
+++ b/introspection/nm-device-wifi.xml
@@ -14,6 +14,18 @@
       </tp:docstring>
     </method>
 
+    <method name="ProbeCert">
+      <annotation name="org.freedesktop.DBus.GLib.CSymbol" value="impl_device_probe_cert"/>
+      <arg name="ssid" type="ay" direction="in">
+        <tp:docstring>
+          The SSID of the AP to be probed
+        </tp:docstring>
+      </arg>
+      <tp:docstring>
+        Probe the certificate of the RADIUS server.
+      </tp:docstring>
+    </method>
+
     <property name="HwAddress" type="s" access="read">
       <tp:docstring>
         The active hardware address of the device.
@@ -81,6 +93,17 @@
       </tp:docstring>
     </signal>
 
+    <signal name="CertReceived">
+      <arg name="cert" type="a{sv}" tp:type="String_Variant_Map">
+        <tp:docstring>
+          The certificate of the RADIUS server
+        </tp:docstring>
+      </arg>
+      <tp:docstring>
+        Emitted when wpa_supplicant replies the certificate of the RADIUS server.
+      </tp:docstring>
+    </signal>
+
     <tp:flags name="NM_802_11_DEVICE_CAP" type="u">
       <tp:docstring>
         Flags describing the capabilities of a wireless device.
diff --git a/libnm-glib/libnm-glib.ver b/libnm-glib/libnm-glib.ver
index 1a4a68c..51b8c50 100644
--- a/libnm-glib/libnm-glib.ver
+++ b/libnm-glib/libnm-glib.ver
@@ -98,6 +98,7 @@ global:
 	nm_device_wifi_get_permanent_hw_address;
 	nm_device_wifi_get_type;
 	nm_device_wifi_new;
+	nm_device_wifi_probe_cert;
 	nm_device_wimax_get_active_nsp;
 	nm_device_wimax_get_bsid;
 	nm_device_wimax_get_center_frequency;
diff --git a/libnm-glib/nm-device-wifi.c b/libnm-glib/nm-device-wifi.c
index 7d0e1b9..2bf4e3b 100644
--- a/libnm-glib/nm-device-wifi.c
+++ b/libnm-glib/nm-device-wifi.c
@@ -84,6 +84,7 @@ enum {
 enum {
 	ACCESS_POINT_ADDED,
 	ACCESS_POINT_REMOVED,
+	CERT_RECEIVED,
 	LAST_SIGNAL
 };
@@ -385,6 +386,26 @@ nm_device_wifi_get_access_point_by_path (NMDeviceWifi *device,
 	return ap;
 }
 
+gboolean
+nm_device_wifi_probe_cert (NMDeviceWifi *device,
+                           const GByteArray *ssid)
+{
+	NMDeviceWifiPrivate *priv;
+	GError *error = NULL;
+
+	g_return_val_if_fail (NM_IS_DEVICE_WIFI (device), FALSE);
+
+	priv = NM_DEVICE_WIFI_GET_PRIVATE (device);
+
+	if (!org_freedesktop_NetworkManager_Device_Wireless_probe_cert (priv->proxy, ssid, &error)) {
+		g_warning ("%s: error probe certificate: %s", __func__, error->message);
+		g_error_free (error);
+		return FALSE;
+	}
+
+	return TRUE;
+}
+
 static void
 access_point_added_proxy (DBusGProxy *proxy, char *path, gpointer user_data)
 {
@@ -440,6 +461,14 @@ access_point_removed_proxy (DBusGProxy *proxy, char *path, gpointer user_data)
 }
 
 static void
+cert_received_proxy (DBusGProxy *proxy, GHashTable *cert, gpointer user_data)
+{
+	NMDeviceWifi *self = NM_DEVICE_WIFI (user_data);
+
+	g_signal_emit (self, signals[CERT_RECEIVED], 0, cert);
+}
+
+static void
 clean_up_aps (NMDeviceWifi *self, gboolean notify)
 {
 	NMDeviceWifiPrivate *priv;
@@ -719,6 +748,13 @@ constructor (GType type,
 	                             G_CALLBACK (access_point_removed_proxy),
 	                             object, NULL);
 
+	dbus_g_proxy_add_signal (priv->proxy, "CertReceived",
+	                         DBUS_TYPE_G_MAP_OF_VARIANT,
+	                         G_TYPE_INVALID);
+	dbus_g_proxy_connect_signal (priv->proxy, "CertReceived",
+	                             G_CALLBACK (cert_received_proxy),
+	                             object, NULL);
+
 	register_for_property_changed (NM_DEVICE_WIFI (object));
 
 	g_signal_connect (NM_DEVICE (object),
@@ -888,4 +924,22 @@ nm_device_wifi_class_init (NMDeviceWifiClass *wifi_class)
 		              g_cclosure_marshal_VOID__OBJECT,
 		              G_TYPE_NONE, 1,
 		              G_TYPE_OBJECT);
+
+	/**
+	 * NMDeviceWifi::cert-received:
+	 * @device: the wifi device that received the signal
+	 * @subject: the subject of the RADIUS server
+	 * @hash: the hash of the RADIUS server
+	 *
+	 * Notifies that a certificate of a RADIUS server is received.
+	 **/
+	signals[CERT_RECEIVED] =
+		g_signal_new ("cert-received",
+		              G_OBJECT_CLASS_TYPE (object_class),
+		              G_SIGNAL_RUN_FIRST,
+		              G_STRUCT_OFFSET (NMDeviceWifiClass, cert_received),
+		              NULL, NULL,
+		              g_cclosure_marshal_VOID__BOXED,
+		              G_TYPE_NONE, 1,
+		              G_TYPE_HASH_TABLE);
 }
diff --git a/libnm-glib/nm-device-wifi.h b/libnm-glib/nm-device-wifi.h
index fb2ab27..313a3f3 100644
--- a/libnm-glib/nm-device-wifi.h
+++ b/libnm-glib/nm-device-wifi.h
@@ -53,6 +53,7 @@ typedef struct {
 	/* Signals */
 	void (*access_point_added) (NMDeviceWifi *device, NMAccessPoint *ap);
 	void (*access_point_removed) (NMDeviceWifi *device, NMAccessPoint *ap);
+	void (*cert_received) (NMDeviceWifi *device, GHashTable *cert);
 
 	/* Padding for future expansion */
 	void (*_reserved1) (void);
@@ -60,7 +61,6 @@ typedef struct {
 	void (*_reserved3) (void);
 	void (*_reserved4) (void);
 	void (*_reserved5) (void);
-	void (*_reserved6) (void);
 } NMDeviceWifiClass;
 
 GType nm_device_wifi_get_type (void);
@@ -79,6 +79,9 @@ NMAccessPoint * nm_device_wifi_get_access_point_by_path (NMDeviceWifi *
 const GPtrArray * nm_device_wifi_get_access_points (NMDeviceWifi *device);
 
+gboolean nm_device_wifi_probe_cert (NMDeviceWifi *device,
+                                    const GByteArray *ssid);
+
 G_END_DECLS
 
 #endif /* NM_DEVICE_WIFI_H */
diff --git a/libnm-util/libnm-util.ver b/libnm-util/libnm-util.ver
index 53c2482..54a70e0 100644
--- a/libnm-util/libnm-util.ver
+++ b/libnm-util/libnm-util.ver
@@ -108,6 +108,7 @@ global:
 	nm_setting_802_1x_get_anonymous_identity;
 	nm_setting_802_1x_get_ca_cert_blob;
 	nm_setting_802_1x_get_ca_cert_path;
+	nm_setting_802_1x_get_ca_cert_hash;
 	nm_setting_802_1x_get_ca_cert_scheme;
 	nm_setting_802_1x_get_ca_path;
 	nm_setting_802_1x_get_client_cert_blob;
diff --git a/libnm-util/nm-setting-8021x.c b/libnm-util/nm-setting-8021x.c
index 07fdcc2..70c668d 100644
--- a/libnm-util/nm-setting-8021x.c
+++ b/libnm-util/nm-setting-8021x.c
@@ -64,6 +64,7 @@
 **/
 
 #define SCHEME_PATH "file://"
+#define SCHEME_HASH "hash://server/sha256/"
 
 /**
  * nm_setting_802_1x_error_quark:
@@ -390,6 +391,9 @@ get_cert_scheme (GByteArray *array)
 	if (   (array->len > strlen (SCHEME_PATH))
 	    && !memcmp (array->data, SCHEME_PATH, strlen (SCHEME_PATH)))
 		return NM_SETTING_802_1X_CK_SCHEME_PATH;
+	else if (   (array->len > strlen (SCHEME_HASH))
+	         && !memcmp (array->data, SCHEME_HASH, strlen (SCHEME_HASH)))
+		return NM_SETTING_802_1X_CK_SCHEME_HASH;
 
 	return NM_SETTING_802_1X_CK_SCHEME_BLOB;
 }
@@ -400,7 +404,8 @@ get_cert_scheme (GByteArray *array)
  *
  * Returns the scheme used to store the CA certificate.  If the returned scheme
  * is %NM_SETTING_802_1X_CK_SCHEME_BLOB, use nm_setting_802_1x_get_ca_cert_blob();
- * if %NM_SETTING_802_1X_CK_SCHEME_PATH, use nm_setting_802_1x_get_ca_cert_path().
+ * if %NM_SETTING_802_1X_CK_SCHEME_PATH, use nm_setting_802_1x_get_ca_cert_path();
+ * if %NM_SETTING_802_1X_CK_SCHEME_HASH, use nm_setting_802_1x_get_ca_cert_hash().
 *
 * Returns: scheme used to store the CA certificate (blob or path)
 **/
@@ -464,6 +469,32 @@ nm_setting_802_1x_get_ca_cert_path (NMSetting8021x *setting)
 	return (const char *) (NM_SETTING_802_1X_GET_PRIVATE (setting)->ca_cert->data + strlen (SCHEME_PATH));
 }
 
+/**
+ * nm_setting_802_1x_get_ca_cert_hash:
+ * @setting: the #NMSetting8021x
+ *
+ * Returns the CA certificate path if the CA certificate is stored using the
+ * %NM_SETTING_802_1X_CK_SCHEME_HASH scheme.  Not all EAP methods use a
+ * CA certificate (LEAP for example), and those that can take advantage of the
+ * CA certificate allow it to be unset.  Note that lack of a CA certificate
+ * reduces security by allowing man-in-the-middle attacks, because the identity
+ * of the network cannot be confirmed by the client.
+ *
+ * Returns: hash of the RADIUS server
+ **/
+const char *
+nm_setting_802_1x_get_ca_cert_hash (NMSetting8021x *setting)
+{
+	NMSetting8021xCKScheme scheme;
+
+	g_return_val_if_fail (NM_IS_SETTING_802_1X (setting), NULL);
+
+	scheme = nm_setting_802_1x_get_ca_cert_scheme (setting);
+	g_return_val_if_fail (scheme == NM_SETTING_802_1X_CK_SCHEME_HASH, NULL);
+
+	return (const char *) (NM_SETTING_802_1X_GET_PRIVATE (setting)->ca_cert->data);
+}
+
 static GByteArray *
 path_to_scheme_value (const char *path)
 {
@@ -515,7 +546,8 @@ nm_setting_802_1x_set_ca_cert (NMSetting8021x *self,
 	if (cert_path) {
 		g_return_val_if_fail (g_utf8_validate (cert_path, -1, NULL), FALSE);
 		g_return_val_if_fail (   scheme == NM_SETTING_802_1X_CK_SCHEME_BLOB
-		                      || scheme == NM_SETTING_802_1X_CK_SCHEME_PATH,
+		                      || scheme == NM_SETTING_802_1X_CK_SCHEME_PATH
+		                      || scheme == NM_SETTING_802_1X_CK_SCHEME_HASH,
 		                      FALSE);
 	}
 
@@ -533,6 +565,17 @@ nm_setting_802_1x_set_ca_cert (NMSetting8021x *self,
 	if (!cert_path)
 		return TRUE;
 
+	if (scheme == NM_SETTING_802_1X_CK_SCHEME_HASH) {
+		int length = strlen (cert_path);
+		if (   length == (strlen (SCHEME_HASH) + 64)
+		    && !g_str_has_prefix (cert_path, SCHEME_HASH))
+			return FALSE;
+		data = g_byte_array_sized_new (length + 1);
+		g_byte_array_append (data, (guint8 *) cert_path, length + 1);
+		priv->ca_cert = data;
+		return TRUE;
+	}
+
 	data = crypto_load_and_verify_certificate (cert_path, &format, error);
 	if (data) {
 		/* wpa_supplicant can only use raw x509 CA certs */
@@ -2397,6 +2440,13 @@ verify_cert (GByteArray *array, const char *prop_name, GError **error)
 			return TRUE;
 		}
 		break;
+	case NM_SETTING_802_1X_CK_SCHEME_HASH:
+		/* For hash-based schemes, verify that the has is zero-terminated */
+		if (array->data[array->len - 1] == '\0') {
+			if (g_str_has_prefix ((char *) array->data, SCHEME_HASH))
+				return TRUE;
+		}
+		break;
 	default:
 		break;
 	}
diff --git a/libnm-util/nm-setting-8021x.h b/libnm-util/nm-setting-8021x.h
index a6016ae..443822c 100644
--- a/libnm-util/nm-setting-8021x.h
+++ b/libnm-util/nm-setting-8021x.h
@@ -57,6 +57,8 @@ typedef enum {
 *   item data
 * @NM_SETTING_802_1X_CK_SCHEME_PATH: certificate or key is stored as a path
 *   to a file containing the certificate or key data
+ * @NM_SETTING_802_1X_CK_SCHEME_HASH: certificate or key is stored as a path
+ *   of the CA server hash
 *
 * #NMSetting8021xCKScheme values indicate how a certificate or private key is
 * stored in the setting properties, either as a blob of the item's data, or as
@@ -65,7 +67,8 @@ typedef enum {
 typedef enum {
 	NM_SETTING_802_1X_CK_SCHEME_UNKNOWN = 0,
 	NM_SETTING_802_1X_CK_SCHEME_BLOB,
-	NM_SETTING_802_1X_CK_SCHEME_PATH
+	NM_SETTING_802_1X_CK_SCHEME_PATH,
+	NM_SETTING_802_1X_CK_SCHEME_HASH
 } NMSetting8021xCKScheme;
 
@@ -183,6 +186,7 @@ const char *           nm_setting_802_1x_get_phase2_ca_path         (NMSetting8
 NMSetting8021xCKScheme nm_setting_802_1x_get_ca_cert_scheme        (NMSetting8021x *setting);
 const GByteArray *     nm_setting_802_1x_get_ca_cert_blob          (NMSetting8021x *setting);
 const char *           nm_setting_802_1x_get_ca_cert_path          (NMSetting8021x *setting);
+const char *           nm_setting_802_1x_get_ca_cert_hash          (NMSetting8021x *setting);
 gboolean               nm_setting_802_1x_set_ca_cert               (NMSetting8021x *setting,
                                                                    const char *cert_path,
                                                                    NMSetting8021xCKScheme scheme,
diff --git a/src/nm-device-wifi.c b/src/nm-device-wifi.c
index 9695c07..e186aad 100644
--- a/src/nm-device-wifi.c
+++ b/src/nm-device-wifi.c
@@ -57,10 +57,14 @@
 #include "nm-setting-ip6-config.h"
 #include "nm-system.h"
 #include "nm-settings-connection.h"
+#include "nm-dbus-glib-types.h"
 
 static gboolean impl_device_get_access_points (NMDeviceWifi *device,
                                                GPtrArray **aps,
                                                GError **err);
+static gboolean impl_device_probe_cert (NMDeviceWifi *device,
+                                        GByteArray *ssid,
+                                        GError **err);
 
 #include "nm-device-wifi-glue.h"
@@ -100,6 +104,7 @@ enum {
 	HIDDEN_AP_FOUND,
 	PROPERTIES_CHANGED,
 	SCANNING_ALLOWED,
+	CERT_RECEIVED,
 	LAST_SIGNAL
 };
@@ -114,6 +119,7 @@ typedef struct Supplicant {
 	guint sig_ids[SUP_SIG_ID_LEN];
 	guint iface_error_id;
+	guint iface_cert_id;
 
 	/* Timeouts and idles */
 	guint iface_con_error_cb_id;
@@ -200,6 +206,7 @@ typedef enum {
 	NM_WIFI_ERROR_CONNECTION_INVALID,
 	NM_WIFI_ERROR_CONNECTION_INCOMPATIBLE,
 	NM_WIFI_ERROR_ACCESS_POINT_NOT_FOUND,
+	NM_WIFI_ERROR_INVALID_CERT_PROBE,
 } NMWifiError;
 
 #define NM_WIFI_ERROR (nm_wifi_error_quark ())
@@ -232,6 +239,8 @@ nm_wifi_error_get_type (void)
 			ENUM_ENTRY (NM_WIFI_ERROR_CONNECTION_INCOMPATIBLE, "ConnectionIncompatible"),
 			/* Given access point was not in this device's scan list. */
 			ENUM_ENTRY (NM_WIFI_ERROR_ACCESS_POINT_NOT_FOUND, "AccessPointNotFound"),
+			/* CA Probe was not valid. */
+			ENUM_ENTRY (NM_WIFI_ERROR_INVALID_CERT_PROBE, "InvalidCertProbe"),
 			{ 0, 0, 0 }
 		};
 		etype = g_enum_register_static ("NMWifiError", values);
@@ -1725,6 +1734,89 @@ impl_device_get_access_points (NMDeviceWifi *self,
 	return TRUE;
 }
 
+static void
+supplicant_iface_certification_cb (NMSupplicantInterface * iface,
+                                   GHashTable *cert,
+                                   NMDeviceWifi * self)
+{
+	NMDeviceWifiPrivate *priv = NM_DEVICE_WIFI_GET_PRIVATE (self);
+	GValue *value;
+	const char *subject, *hash;
+	guint depth;
+
+	value = g_hash_table_lookup (cert, "depth");
+	if (!value || !G_VALUE_HOLDS_UINT(value)) {
+		nm_log_dbg (LOGD_WIFI_SCAN, "Depth was not set");
+		return;
+	}
+	depth = g_value_get_uint (value);
+
+	value = g_hash_table_lookup (cert, "subject");
+	if (!value || !G_VALUE_HOLDS_STRING(value))
+		return;
+	subject = g_value_get_string (value);
+
+	value = g_hash_table_lookup (cert, "cert_hash");
+	if (!value || !G_VALUE_HOLDS_STRING(value))
+		return;
+	hash = g_value_get_string (value);
+
+	nm_log_info (LOGD_WIFI_SCAN, "Got Server Certificate %u, subject %s, hash %s", depth, subject, hash);
+
+	if (depth != 0)
+		return;
+
+	g_signal_emit (self, signals[CERT_RECEIVED], 0, cert);
+
+	if (priv->supplicant.iface_cert_id > 0) {
+		g_signal_handler_disconnect (priv->supplicant.iface, priv->supplicant.iface_cert_id);
+		priv->supplicant.iface_cert_id = 0;
+	}
+
+	nm_supplicant_interface_disconnect (iface);
+}
+
+static gboolean
+impl_device_probe_cert (NMDeviceWifi *self,
+                        GByteArray *ssid,
+                        GError **err)
+{
+	NMDeviceWifiPrivate *priv = NM_DEVICE_WIFI_GET_PRIVATE (self);
+	NMSupplicantConfig *config = NULL;
+	guint id;
+	gboolean ret = FALSE;
+
+	config = nm_supplicant_config_new_probe (ssid);
+	if (!config)
+		goto error;
+
+	/* Hook up signal handler to capture certification signal */
+	id = g_signal_connect (priv->supplicant.iface,
+	                       "certification",
+	                       G_CALLBACK (supplicant_iface_certification_cb),
+	                       self);
+	priv->supplicant.iface_cert_id = id;
+
+	if (!nm_supplicant_interface_set_config (priv->supplicant.iface, config))
+		goto error;
+
+	ret = TRUE;
+
+error:
+	if (!ret) {
+		g_set_error_literal (err,
+		                     NM_WIFI_ERROR,
+		                     NM_WIFI_ERROR_INVALID_CERT_PROBE,
+		                     "Couldn't probe RADIUS server certificate");
+		if (priv->supplicant.iface_cert_id) {
+			g_signal_handler_disconnect (priv->supplicant.iface, priv->supplicant.iface_cert_id);
+			priv->supplicant.iface_cert_id = 0;
+		}
+	}
+
+	return ret;
+}
+
 /*
 * nm_device_get_mode
 *
@@ -4021,6 +4113,16 @@ nm_device_wifi_class_init (NMDeviceWifiClass *klass)
 		              _nm_marshal_BOOLEAN__VOID,
 		              G_TYPE_BOOLEAN, 0);
 
+	signals[CERT_RECEIVED] =
+		g_signal_new ("cert-received",
+		              G_OBJECT_CLASS_TYPE (object_class),
+		              G_SIGNAL_RUN_FIRST,
+		              G_STRUCT_OFFSET (NMDeviceWifiClass, cert_received),
+		              NULL, NULL,
+		              g_cclosure_marshal_VOID__BOXED,
+		              G_TYPE_NONE, 1,
+		              DBUS_TYPE_G_MAP_OF_VARIANT);
+
 	dbus_g_object_type_install_info (G_TYPE_FROM_CLASS (klass),
 	                                 &dbus_glib_nm_device_wifi_object_info);
 
 	dbus_g_error_domain_register (NM_WIFI_ERROR, NULL, NM_TYPE_WIFI_ERROR);
diff --git a/src/nm-device-wifi.h b/src/nm-device-wifi.h
index 31ac5ad..916c2b9 100644
--- a/src/nm-device-wifi.h
+++ b/src/nm-device-wifi.h
@@ -77,6 +77,7 @@ struct _NMDeviceWifiClass
 	void (*hidden_ap_found) (NMDeviceWifi *device, NMAccessPoint *ap);
 	void (*properties_changed) (NMDeviceWifi *device, GHashTable *properties);
 	gboolean (*scanning_allowed) (NMDeviceWifi *device);
+	void (*cert_received) (NMDeviceWifi *device, GHashTable *cert);
 };
diff --git a/src/settings/plugins/ifnet/connection_parser.c b/src/settings/plugins/ifnet/connection_parser.c
index b4aaa8d..9a52baa 100644
--- a/src/settings/plugins/ifnet/connection_parser.c
+++ b/src/settings/plugins/ifnet/connection_parser.c
@@ -292,6 +292,8 @@ done:
 	return success;
 }
 
+#define SCHEME_HASH "hash://server/sha256/"
+
 static gboolean
 eap_peap_reader (const char *eap_method,
                  const char *ssid,
@@ -307,11 +309,18 @@ eap_peap_reader (const char *eap_method,
 
 	ca_cert = wpa_get_value (ssid, "ca_cert");
 	if (ca_cert) {
-		if (!nm_setting_802_1x_set_ca_cert (s_8021x,
-						    ca_cert,
-						    NM_SETTING_802_1X_CK_SCHEME_PATH,
-						    NULL, error))
-			goto done;
+		if (g_str_has_prefix (ca_cert, SCHEME_HASH))
+			if (!nm_setting_802_1x_set_ca_cert (s_8021x,
+							    ca_cert,
+							    NM_SETTING_802_1X_CK_SCHEME_HASH,
+							    NULL, error))
+				goto done;
+		else
+			if (!nm_setting_802_1x_set_ca_cert (s_8021x,
+							    ca_cert,
+							    NM_SETTING_802_1X_CK_SCHEME_PATH,
+							    NULL, error))
+				goto done;
 	} else {
 		PLUGIN_WARN (IFNET_PLUGIN_NAME, "    warning: missing "
 			     "IEEE_8021X_CA_CERT for EAP method '%s'; this is"
@@ -409,11 +418,18 @@ eap_ttls_reader (const char *eap_method,
 
 	/* ca cert */
 	ca_cert = wpa_get_value (ssid, "ca_cert");
 	if (ca_cert) {
-		if (!nm_setting_802_1x_set_ca_cert (s_8021x,
-						    ca_cert,
-						    NM_SETTING_802_1X_CK_SCHEME_PATH,
-						    NULL, error))
-			goto done;
+		if (g_str_has_prefix (ca_cert, SCHEME_HASH))
+			if (!nm_setting_802_1x_set_ca_cert (s_8021x,
+							    ca_cert,
+							    NM_SETTING_802_1X_CK_SCHEME_HASH,
+							    NULL, error))
+				goto done;
+		else
+			if (!nm_setting_802_1x_set_ca_cert (s_8021x,
+							    ca_cert,
+							    NM_SETTING_802_1X_CK_SCHEME_PATH,
+							    NULL, error))
+				goto done;
 	} else {
 		PLUGIN_WARN (IFNET_PLUGIN_NAME, "    warning: missing "
 			     "IEEE_8021X_CA_CERT for EAP method '%s'; this is"
@@ -1769,12 +1785,14 @@ error:
 
 typedef NMSetting8021xCKScheme (*SchemeFunc) (NMSetting8021x * setting);
 typedef const char *(*PathFunc) (NMSetting8021x * setting);
+typedef const char *(*HashFunc) (NMSetting8021x * setting);
 typedef const GByteArray *(*BlobFunc) (NMSetting8021x * setting);
 
 typedef struct ObjectType {
 	const char *setting_key;
 	SchemeFunc scheme_func;
 	PathFunc path_func;
+	HashFunc hash_func;
 	BlobFunc blob_func;
 	const char *conn_name_key;
 	const char *suffix;
@@ -1784,6 +1802,7 @@ static const ObjectType ca_type = {
 	NM_SETTING_802_1X_CA_CERT,
 	nm_setting_802_1x_get_ca_cert_scheme,
 	nm_setting_802_1x_get_ca_cert_path,
+	nm_setting_802_1x_get_ca_cert_hash,
 	nm_setting_802_1x_get_ca_cert_blob,
 	"ca_cert",
 	"ca-cert.der"
@@ -1793,6 +1812,7 @@ static const ObjectType phase2_ca_type = {
 	NM_SETTING_802_1X_PHASE2_CA_CERT,
 	nm_setting_802_1x_get_phase2_ca_cert_scheme,
 	nm_setting_802_1x_get_phase2_ca_cert_path,
+	NULL,
 	nm_setting_802_1x_get_phase2_ca_cert_blob,
 	"ca_cert2",
 	"inner-ca-cert.der"
@@ -1802,6 +1822,7 @@ static const ObjectType client_type = {
 	NM_SETTING_802_1X_CLIENT_CERT,
 	nm_setting_802_1x_get_client_cert_scheme,
 	nm_setting_802_1x_get_client_cert_path,
+	NULL,
 	nm_setting_802_1x_get_client_cert_blob,
 	"client_cert",
 	"client-cert.der"
@@ -1811,6 +1832,7 @@ static const ObjectType phase2_client_type = {
 	NM_SETTING_802_1X_PHASE2_CLIENT_CERT,
 	nm_setting_802_1x_get_phase2_client_cert_scheme,
 	nm_setting_802_1x_get_phase2_client_cert_path,
+	NULL,
 	nm_setting_802_1x_get_phase2_client_cert_blob,
 	"client_cert2",
 	"inner-client-cert.der"
@@ -1820,6 +1842,7 @@ static const ObjectType pk_type = {
 	NM_SETTING_802_1X_PRIVATE_KEY,
 	nm_setting_802_1x_get_private_key_scheme,
 	nm_setting_802_1x_get_private_key_path,
+	NULL,
 	nm_setting_802_1x_get_private_key_blob,
 	"private_key",
 	"private-key.pem"
@@ -1829,6 +1852,7 @@ static const ObjectType phase2_pk_type = {
 	NM_SETTING_802_1X_PHASE2_PRIVATE_KEY,
 	nm_setting_802_1x_get_phase2_private_key_scheme,
 	nm_setting_802_1x_get_phase2_private_key_path,
+	NULL,
 	nm_setting_802_1x_get_phase2_private_key_blob,
 	"private_key2",
 	"inner-private-key.pem"
@@ -1838,6 +1862,7 @@ static const ObjectType p12_type = {
 	NM_SETTING_802_1X_PRIVATE_KEY,
 	nm_setting_802_1x_get_private_key_scheme,
 	nm_setting_802_1x_get_private_key_path,
+	NULL,
 	nm_setting_802_1x_get_private_key_blob,
 	"private_key",
 	"private-key.p12"
@@ -1847,6 +1872,7 @@ static const ObjectType phase2_p12_type = {
 	NM_SETTING_802_1X_PHASE2_PRIVATE_KEY,
 	nm_setting_802_1x_get_phase2_private_key_scheme,
 	nm_setting_802_1x_get_phase2_private_key_path,
+	NULL,
 	nm_setting_802_1x_get_phase2_private_key_blob,
 	"private_key2",
 	"inner-private-key.p12"
@@ -1861,6 +1887,7 @@ write_object (NMSetting8021x *s_8021x,
 {
 	NMSetting8021xCKScheme scheme;
 	const char *path = NULL;
+	const char *hash = NULL;
 	const GByteArray *blob = NULL;
 
 	g_return_val_if_fail (conn_name != NULL, FALSE);
@@ -1879,6 +1906,9 @@ write_object (NMSetting8021x *s_8021x,
 	case NM_SETTING_802_1X_CK_SCHEME_PATH:
 		path = (*(objtype->path_func)) (s_8021x);
 		break;
+	case NM_SETTING_802_1X_CK_SCHEME_HASH:
+		hash = (*(objtype->hash_func)) (s_8021x);
+		break;
 	default:
 		break;
 	}
@@ -1893,6 +1923,15 @@ write_object (NMSetting8021x *s_8021x,
 		return TRUE;
 	}
 
+	/* If the object hash was specified, prefer that over any raw cert data that
+	 * may have been sent.
+	 */
+	if (hash) {
+		wpa_set_data (conn_name, (gchar *) objtype->conn_name_key,
+			      (gchar *) hash);
+		return TRUE;
+	}
+
 	/* does not support writing encryption data now */
 	if (blob) {
 		PLUGIN_WARN (IFNET_PLUGIN_NAME,
diff --git a/src/settings/plugins/keyfile/reader.c b/src/settings/plugins/keyfile/reader.c
index 4128b9f..7cc7a32 100644
--- a/src/settings/plugins/keyfile/reader.c
+++ b/src/settings/plugins/keyfile/reader.c
@@ -852,6 +852,7 @@ get_cert_path (const char *keyfile_path, GByteArray *cert_path)
 }
 
 #define SCHEME_PATH "file://"
+#define SCHEME_HASH "hash://server/sha256/"
 
 static const char *certext[] = { ".pem", ".cert", ".crt", ".cer", ".p12", ".der", ".key" };
 
@@ -876,6 +877,12 @@ handle_as_scheme (GByteArray *array, NMSetting *setting, const char *key)
 	    && (array->data[array->len - 1] == '\0')) {
 		g_object_set (setting, key, array, NULL);
 		return TRUE;
+	} else if (   (array->len > strlen (SCHEME_HASH))
+	           && g_str_has_prefix ((const char *) array->data, SCHEME_HASH)
+	           && (array->data[array->len - 1] == '\0')) {
+		/* It's the HASH scheme, can just set plain data */
+		g_object_set (setting, key, array, NULL);
+		return TRUE;
 	}
 	return FALSE;
 }
diff --git a/src/settings/plugins/keyfile/writer.c b/src/settings/plugins/keyfile/writer.c
index db43b23..6da0876 100644
--- a/src/settings/plugins/keyfile/writer.c
+++ b/src/settings/plugins/keyfile/writer.c
@@ -539,6 +539,7 @@ typedef struct ObjectType {
 	NMSetting8021xCKScheme (*scheme_func) (NMSetting8021x *setting);
 	NMSetting8021xCKFormat (*format_func) (NMSetting8021x *setting);
 	const char *           (*path_func)   (NMSetting8021x *setting);
+	const char *           (*hash_func)   (NMSetting8021x *setting);
 	const GByteArray *     (*blob_func)   (NMSetting8021x *setting);
 } ObjectType;
 
@@ -549,6 +550,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_ca_cert_scheme,
 	  NULL,
 	  nm_setting_802_1x_get_ca_cert_path,
+	  nm_setting_802_1x_get_ca_cert_hash,
 	  nm_setting_802_1x_get_ca_cert_blob },
 
 	{ NM_SETTING_802_1X_PHASE2_CA_CERT,
@@ -557,6 +559,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_phase2_ca_cert_scheme,
 	  NULL,
 	  nm_setting_802_1x_get_phase2_ca_cert_path,
+	  NULL,
 	  nm_setting_802_1x_get_phase2_ca_cert_blob },
 
 	{ NM_SETTING_802_1X_CLIENT_CERT,
@@ -565,6 +568,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_client_cert_scheme,
 	  NULL,
 	  nm_setting_802_1x_get_client_cert_path,
+	  NULL,
 	  nm_setting_802_1x_get_client_cert_blob },
 
 	{ NM_SETTING_802_1X_PHASE2_CLIENT_CERT,
@@ -573,6 +577,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_phase2_client_cert_scheme,
 	  NULL,
 	  nm_setting_802_1x_get_phase2_client_cert_path,
+	  NULL,
 	  nm_setting_802_1x_get_phase2_client_cert_blob },
 
 	{ NM_SETTING_802_1X_PRIVATE_KEY,
@@ -581,6 +586,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_private_key_scheme,
 	  nm_setting_802_1x_get_private_key_format,
 	  nm_setting_802_1x_get_private_key_path,
+	  NULL,
 	  nm_setting_802_1x_get_private_key_blob },
 
 	{ NM_SETTING_802_1X_PHASE2_PRIVATE_KEY,
@@ -589,6 +595,7 @@ static const ObjectType objtypes[10] = {
 	  nm_setting_802_1x_get_phase2_private_key_scheme,
 	  nm_setting_802_1x_get_phase2_private_key_format,
 	  nm_setting_802_1x_get_phase2_private_key_path,
+	  NULL,
 	  nm_setting_802_1x_get_phase2_private_key_blob },
 
 	{ NULL },
@@ -667,7 +674,7 @@ cert_writer (GKeyFile *file,
 	const char *setting_name = nm_setting_get_name (setting);
 	NMSetting8021xCKScheme scheme;
 	NMSetting8021xCKFormat format;
-	const char *path = NULL, *ext = "pem";
+	const char *path = NULL, *hash = NULL, *ext = "pem";
 	const ObjectType *objtype = NULL;
 	int i;
 
@@ -729,6 +736,11 @@ cert_writer (GKeyFile *file,
 			g_error_free (error);
 		}
 		g_free (new_path);
+	} else if (scheme == NM_SETTING_802_1X_CK_SCHEME_HASH) {
+		hash = objtype->hash_func (NM_SETTING_802_1X (setting));
+		g_assert (hash);
+
+		g_key_file_set_string (file, setting_name, key, hash);
 	} else
 		g_assert_not_reached ();
 }
diff --git a/src/supplicant-manager/nm-supplicant-config.c b/src/supplicant-manager/nm-supplicant-config.c
index 4860314..4b6588e 100644
--- a/src/supplicant-manager/nm-supplicant-config.c
+++ b/src/supplicant-manager/nm-supplicant-config.c
@@ -173,6 +173,25 @@ nm_supplicant_config_add_option (NMSupplicantConfig *self,
 	return nm_supplicant_config_add_option_with_type (self, key, value, len, TYPE_INVALID, secret);
 }
 
+NMSupplicantConfig *
+nm_supplicant_config_new_probe (const GByteArray *ssid)
+{
+	NMSupplicantConfig *probe_config;
+
+	if (!ssid)
+		return NULL;
+
+	probe_config = (NMSupplicantConfig *)g_object_new (NM_TYPE_SUPPLICANT_CONFIG, NULL);
+
+	nm_supplicant_config_add_option (probe_config, "ssid", (char *)ssid->data, ssid->len, FALSE);
+	nm_supplicant_config_add_option (probe_config, "key_mgmt", "WPA-EAP", -1, FALSE);
+	nm_supplicant_config_add_option (probe_config, "eap", "TTLS PEAP TLS", -1, FALSE);
+	nm_supplicant_config_add_option (probe_config, "identity", " ", -1, FALSE);
+	nm_supplicant_config_add_option (probe_config, "ca_cert", "probe://", -1, FALSE);
+
+	return probe_config;
+}
+
 static gboolean
 nm_supplicant_config_add_blob (NMSupplicantConfig *self,
                               const char *key,
@@ -845,6 +864,11 @@ nm_supplicant_config_add_setting_8021x (NMSupplicantConfig *self,
 		if (!add_string_val (self, path, "ca_cert", FALSE, FALSE))
 			return FALSE;
 		break;
+	case NM_SETTING_802_1X_CK_SCHEME_HASH:
+		path = nm_setting_802_1x_get_ca_cert_hash (setting);
+		if (!add_string_val (self, path, "ca_cert", FALSE, FALSE))
+			return FALSE;
+		break;
 	default:
 		break;
 	}
diff --git a/src/supplicant-manager/nm-supplicant-config.h b/src/supplicant-manager/nm-supplicant-config.h
index dad23e2..8886a91 100644
--- a/src/supplicant-manager/nm-supplicant-config.h
+++ b/src/supplicant-manager/nm-supplicant-config.h
@@ -52,6 +52,8 @@ GType nm_supplicant_config_get_type (void);
 
 NMSupplicantConfig *nm_supplicant_config_new (void);
 
+NMSupplicantConfig *nm_supplicant_config_new_probe (const GByteArray *ssid);
+
 guint32 nm_supplicant_config_get_ap_scan (NMSupplicantConfig *self);
 
 void nm_supplicant_config_set_ap_scan (NMSupplicantConfig *self,
diff --git a/src/supplicant-manager/nm-supplicant-interface.c b/src/supplicant-manager/nm-supplicant-interface.c
index 857cde5..28ad780 100644
--- a/src/supplicant-manager/nm-supplicant-interface.c
+++ b/src/supplicant-manager/nm-supplicant-interface.c
@@ -60,6 +60,7 @@ enum {
 	NEW_BSS,           /* interface saw a new access point from a scan */
 	SCAN_DONE,         /* wifi scan is complete */
 	CONNECTION_ERROR,  /* an error occurred during a connection request */
+	CERTIFICATION,     /* a RADIUS server certificate was received */
 	LAST_SIGNAL
 };
 static guint signals[LAST_SIGNAL] = { 0 };
@@ -387,6 +388,17 @@ wpas_iface_scan_done (DBusGProxy *proxy,
 }
 
 static void
+wpas_iface_got_certification (DBusGProxy *proxy,
+                              const GHashTable *cert_table,
+                              gpointer user_data)
+{
+	g_signal_emit (user_data,
+	               signals[CERTIFICATION],
+	               0,
+	               cert_table);
+}
+
+static void
 wpas_iface_properties_changed (DBusGProxy *proxy,
                                GHashTable *props,
                                gpointer user_data)
@@ -486,6 +498,18 @@ interface_add_done (NMSupplicantInterface *self, char *path)
 	                             self,
 	                             NULL);
 
+	dbus_g_object_register_marshaller (g_cclosure_marshal_VOID__BOXED,
+	                                   G_TYPE_NONE,
+	                                   DBUS_TYPE_G_MAP_OF_VARIANT,
+	                                   G_TYPE_INVALID);
+	dbus_g_proxy_add_signal (priv->iface_proxy, "Certification",
+	                         DBUS_TYPE_G_MAP_OF_VARIANT,
+	                         G_TYPE_INVALID);
+	dbus_g_proxy_connect_signal (priv->iface_proxy, "Certification",
+	                             G_CALLBACK (wpas_iface_got_certification),
+	                             self,
+	                             NULL);
+
 	priv->props_proxy = dbus_g_proxy_new_for_name (nm_dbus_manager_get_connection (priv->dbus_mgr),
 	                                               WPAS_DBUS_SERVICE,
 	                                               path,
@@ -1207,5 +1231,13 @@ nm_supplicant_interface_class_init (NMSupplicantInterfaceClass *klass)
 		              NULL, NULL,
 		              _nm_marshal_VOID__STRING_STRING,
 		              G_TYPE_NONE, 2, G_TYPE_STRING, G_TYPE_STRING);
+
+	signals[CERTIFICATION] =
+		g_signal_new ("certification",
+		              G_OBJECT_CLASS_TYPE (object_class),
+		              G_SIGNAL_RUN_LAST,
+		              G_STRUCT_OFFSET (NMSupplicantInterfaceClass, certification),
+		              NULL, NULL,
+		              g_cclosure_marshal_VOID__BOXED,
+		              G_TYPE_NONE, 1, DBUS_TYPE_G_MAP_OF_VARIANT);
 }
diff --git a/src/supplicant-manager/nm-supplicant-interface.h b/src/supplicant-manager/nm-supplicant-interface.h
index e32411d..b65cd87 100644
--- a/src/supplicant-manager/nm-supplicant-interface.h
+++ b/src/supplicant-manager/nm-supplicant-interface.h
@@ -89,6 +89,10 @@ typedef struct {
 	void (*connection_error) (NMSupplicantInterface * iface,
 	                          const char * name,
 	                          const char * message);
+
+	/* a RADIUS server certificate was received */
+	void (*certification) (NMSupplicantInterface * iface,
+	                       const GHashTable * ca_cert);
 } NMSupplicantInterfaceClass;
[ "CVE-2012-2736" ]
[ "introspection/nm-device-wifi.xml", "libnm-glib/libnm-glib.ver", "libnm-glib/nm-device-wifi.c", "libnm-glib/nm-device-wifi.h", "libnm-util/libnm-util.ver", "libnm-util/nm-setting-8021x.c", "libnm-util/nm-setting-8021x.h", "src/nm-device-wifi.c", "src/nm-device-wifi.h", "src/settings/plugins/ifnet/connection_parser.c", "src/settings/plugins/keyfile/reader.c", "src/settings/plugins/keyfile/writer.c", "src/supplicant-manager/nm-supplicant-config.c", "src/supplicant-manager/nm-supplicant-config.h", "src/supplicant-manager/nm-supplicant-interface.c", "src/supplicant-manager/nm-supplicant-interface.h" ]
GPL-2.0-or-later AND LGPL-2.1-or-later
3,628
NetworkManager
systemd-network-config.patch
Index: NetworkManager-0.9.10.0/data/NetworkManager.service.in
===================================================================
--- NetworkManager-0.9.10.0.orig/data/NetworkManager.service.in
+++ NetworkManager-0.9.10.0/data/NetworkManager.service.in
@@ -1,6 +1,6 @@
 [Unit]
 Description=Network Manager
-Wants=network.target
+Wants=remote-fs.target network.target
 Before=network.target @DISTRO_NETWORK_SERVICE@
 
 [Service]
@@ -12,6 +12,6 @@ KillMode=process
 
 [Install]
 WantedBy=multi-user.target
-Alias=dbus-org.freedesktop.NetworkManager.service
+Alias=network.service
 Also=NetworkManager-dispatcher.service
Index: NetworkManager-0.9.10.0/data/NetworkManager-wait-online.service.in
===================================================================
--- NetworkManager-0.9.10.0.orig/data/NetworkManager-wait-online.service.in
+++ NetworkManager-0.9.10.0/data/NetworkManager-wait-online.service.in
@@ -7,7 +7,9 @@ Before=network.target network-online.tar
 
 [Service]
 Type=oneshot
-ExecStart=@bindir@/nm-online -s -q --timeout=30
+Environment=NM_ONLINE_TIMEOUT=0
+EnvironmentFile=-/etc/sysconfig/network/config
+ExecStart=/bin/bash -c "if [ ${NM_ONLINE_TIMEOUT} -gt 0 ]; then @bindir@/nm-online -s -q --timeout=${NM_ONLINE_TIMEOUT} ; else /bin/true ; fi"
 
 [Install]
 WantedBy=multi-user.target
[ "CVE-2014-8154" ]
[ "data/NetworkManager.service.in", "data/NetworkManager-wait-online.service.in" ]
GPL-2.0-or-later AND LGPL-2.1-or-later
15,296
cross-arm-none-gcc7
gcc7-pr94148.patch
commit 5c7e6d4bdf879b437b43037e10453275acabf521
Author: Segher Boessenkool <segher@kernel.crashing.org>
Date:   Thu Mar 12 07:12:50 2020 +0000

    df: Don't abuse bb->aux (PR94148, PR94042)
    
    The df dataflow solvers use the aux field in the basic_block struct,
    although that is reserved for any use by passes.  And not only that, it is
    required that you set all such fields to NULL before calling the solvers,
    or you quietly get wrong results.
    
    This changes the solvers to use a local array for last_change_age instead,
    just like it already had a local array for last_visit_age.
    
    	PR rtl-optimization/94148
    	PR rtl-optimization/94042
    	* df-core.c (BB_LAST_CHANGE_AGE): Delete.
    	(df_worklist_propagate_forward): New parameter last_change_age, use
    	that instead of bb->aux.
    	(df_worklist_propagate_backward): Ditto.
    	(df_worklist_dataflow_doublequeue): Use a local array last_change_age.

diff --git a/gcc/df-core.c b/gcc/df-core.c
index 346849e31be..9875a26895c 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -871,9 +871,6 @@ make_pass_df_finish (gcc::context *ctxt)
    The general data flow analysis engine.
 ----------------------------------------------------------------------------*/
 
-/* Return time BB when it was visited for last time.  */
-#define BB_LAST_CHANGE_AGE(bb) ((ptrdiff_t)(bb)->aux)
-
 /* Helper function for df_worklist_dataflow.
    Propagate the dataflow forward.
    Given a BB_INDEX, do the dataflow propagation
@@ -897,7 +894,8 @@ df_worklist_propagate_forward (struct dataflow *dataflow,
                                unsigned *bbindex_to_postorder,
                                bitmap pending,
                                sbitmap considered,
-                               ptrdiff_t age)
+                               vec<int> &last_change_age,
+                               int age)
 {
   edge e;
   edge_iterator ei;
@@ -908,7 +906,8 @@ df_worklist_propagate_forward (struct dataflow *dataflow,
   if (EDGE_COUNT (bb->preds) > 0)
     FOR_EACH_EDGE (e, ei, bb->preds)
       {
-	if (age <= BB_LAST_CHANGE_AGE (e->src)
+	if (bbindex_to_postorder[e->src->index] < last_change_age.length ()
+	    && age <= last_change_age[bbindex_to_postorder[e->src->index]]
	    && bitmap_bit_p (considered, e->src->index))
	  changed |= dataflow->problem->con_fun_n (e);
      }
@@ -942,7 +941,8 @@ df_worklist_propagate_backward (struct dataflow *dataflow,
                                unsigned *bbindex_to_postorder,
                                bitmap pending,
                                sbitmap considered,
-                               ptrdiff_t age)
+                               vec<int> &last_change_age,
+                               int age)
 {
   edge e;
   edge_iterator ei;
@@ -953,7 +953,8 @@ df_worklist_propagate_backward (struct dataflow *dataflow,
   if (EDGE_COUNT (bb->succs) > 0)
     FOR_EACH_EDGE (e, ei, bb->succs)
      {
-	if (age <= BB_LAST_CHANGE_AGE (e->dest)
+	if (bbindex_to_postorder[e->dest->index] < last_change_age.length ()
+	    && age <= last_change_age[bbindex_to_postorder[e->dest->index]]
	    && bitmap_bit_p (considered, e->dest->index))
	  changed |= dataflow->problem->con_fun_n (e);
      }
@@ -991,10 +992,10 @@ df_worklist_propagate_backward (struct dataflow *dataflow,
    worklists (we are processing WORKLIST and storing new BBs to
    visit in PENDING).
 
-   As an optimization we maintain ages when BB was changed (stored in bb->aux)
-   and when it was last visited (stored in last_visit_age).  This avoids need
-   to re-do confluence function for edges to basic blocks whose source
-   did not change since destination was visited last time.  */
+   As an optimization we maintain ages when BB was changed (stored in
+   last_change_age) and when it was last visited (stored in last_visit_age).
+   This avoids need to re-do confluence function for edges to basic blocks
+   whose source did not change since destination was visited last time.  */
 
 static void
 df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
@@ -1010,11 +1011,11 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
   int age = 0;
   bool changed;
   vec<int> last_visit_age = vNULL;
+  vec<int> last_change_age = vNULL;
   int prev_age;
-  basic_block bb;
-  int i;
 
   last_visit_age.safe_grow_cleared (n_blocks);
+  last_change_age.safe_grow_cleared (n_blocks);
 
   /* Double-queueing. Worklist is for the current iteration,
      and pending is for the next. */
@@ -1032,30 +1033,30 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
 	  bitmap_clear_bit (pending, index);
 	  bb_index = blocks_in_postorder[index];
-	  bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 	  prev_age = last_visit_age[index];
 	  if (dir == DF_FORWARD)
 	    changed = df_worklist_propagate_forward (dataflow, bb_index,
 						     bbindex_to_postorder,
 						     pending, considered,
+						     last_change_age,
 						     prev_age);
 	  else
 	    changed = df_worklist_propagate_backward (dataflow, bb_index,
 						      bbindex_to_postorder,
 						      pending, considered,
+						      last_change_age,
 						      prev_age);
 	  last_visit_age[index] = ++age;
 	  if (changed)
-	    bb->aux = (void *)(ptrdiff_t)age;
+	    last_change_age[index] = age;
 	}
       bitmap_clear (worklist);
     }
 
-  for (i = 0; i < n_blocks; i++)
-    BASIC_BLOCK_FOR_FN (cfun, blocks_in_postorder[i])->aux = NULL;
-
   BITMAP_FREE (worklist);
   BITMAP_FREE (pending);
   last_visit_age.release ();
+  last_change_age.release ();
 
   /* Dump statistics. */
   if (dump_file)
[ "CVE-2020-13844" ]
[ "gcc/df-core.c" ]
SUSE-Proprietary-or-OSS
15,296
cross-arm-none-gcc7
gcc7-pr85887.patch
2019-10-22 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/85887 * decl.c (expand_static_init): Drop ECF_LEAF from __cxa_guard_acquire and __cxa_guard_release. Index: gcc/cp/decl.c =================================================================== --- gcc/cp/decl.c (revision 277292) +++ gcc/cp/decl.c (revision 277293) @@ -8589,14 +8589,14 @@ expand_static_init (tree decl, tree init (acquire_name, build_function_type_list (integer_type_node, TREE_TYPE (guard_addr), NULL_TREE), - NULL_TREE, ECF_NOTHROW | ECF_LEAF); + NULL_TREE, ECF_NOTHROW); if (!release_fn || !abort_fn) vfntype = build_function_type_list (void_type_node, TREE_TYPE (guard_addr), NULL_TREE); if (!release_fn) release_fn = push_library_fn (release_name, vfntype, NULL_TREE, - ECF_NOTHROW | ECF_LEAF); + ECF_NOTHROW); if (!abort_fn) abort_fn = push_library_fn (abort_name, vfntype, NULL_TREE, ECF_NOTHROW | ECF_LEAF);
[ "CVE-2020-13844" ]
[ "gcc/cp/decl.c" ]
SUSE-Proprietary-or-OSS
15,297
cross-arm-none-gcc7
gcc7-testsuite-fixes.patch
diff --git a/gcc/testsuite/gcc.dg/strncmp-2.c b/gcc/testsuite/gcc.dg/strncmp-2.c index ed6c5fa0880..db46d0af4e0 100644 --- a/gcc/testsuite/gcc.dg/strncmp-2.c +++ b/gcc/testsuite/gcc.dg/strncmp-2.c @@ -40,6 +40,7 @@ static void test_driver_strncmp (void (test_strncmp)(const char *, const char *, e = lib_memcmp(buf1,p2,sz); (*test_memcmp)(buf1,p2,e); } + mprotect (buf2+pgsz,pgsz,PROT_READ|PROT_WRITE); free(buf2); } diff --git a/libstdc++-v3/testsuite/ext/stdio_filebuf/char/79820.cc b/libstdc++-v3/testsuite/ext/stdio_filebuf/char/79820.cc index ba566f869c6..ca51d6d1a78 100644 --- a/libstdc++-v3/testsuite/ext/stdio_filebuf/char/79820.cc +++ b/libstdc++-v3/testsuite/ext/stdio_filebuf/char/79820.cc @@ -26,10 +26,12 @@ void test01() { FILE* f = std::fopen("79820.txt", "w"); + { + errno = 127; + __gnu_cxx::stdio_filebuf<char> b(f, std::ios::out, BUFSIZ); + VERIFY(errno == 127); // PR libstdc++/79820 + } std::fclose(f); - errno = 127; - __gnu_cxx::stdio_filebuf<char> b(f, std::ios::out, BUFSIZ); - VERIFY(errno == 127); // PR libstdc++/79820 } int diff --git a/gcc/testsuite/gcc.target/i386/xop-hsubX.c b/gcc/testsuite/gcc.target/i386/xop-hsubX.c index f0fa9b312f2..dc7944d8bb7 100644 --- a/gcc/testsuite/gcc.target/i386/xop-hsubX.c +++ b/gcc/testsuite/gcc.target/i386/xop-hsubX.c @@ -58,6 +58,7 @@ check_sbyte2word () check_fails++; } } + return check_fails; } static int @@ -75,6 +76,7 @@ check_sword2dword () check_fails++; } } + return check_fails; } static int @@ -92,6 +94,7 @@ check_dword2qword () check_fails++; } } + return check_fails; } static void
[ "CVE-2020-13844" ]
[ "gcc/testsuite/gcc.dg/strncmp-2.c", "libstdc++-v3/testsuite/ext/stdio_filebuf/char/79820.cc", "gcc/testsuite/gcc.target/i386/xop-hsubX.c" ]
SUSE-Proprietary-or-OSS
15,296
cross-arm-none-gcc7
gcc48-libstdc++-api-reference.patch
Index: libstdc++-v3/doc/html/index.html =================================================================== --- libstdc++-v3/doc/html/index.html (revision 210144) +++ libstdc++-v3/doc/html/index.html (working copy) @@ -18,7 +18,7 @@ </p></li><li class="listitem"><p> <a class="link" href="faq.html" title="Frequently Asked Questions">Frequently Asked Questions</a> </p></li><li class="listitem"><p> - <a class="link" href="api.html" title="The GNU C++ Library API Reference">API and Source Documentation</a> + <a class="link" href="api/index.html" title="The GNU C++ Library API Reference">API and Source Documentation</a> </p></li></ul></div><p> </p></div></div></div><hr /></div><div class="toc"><p><strong>Table of Contents</strong></p><dl class="toc"><dt><span class="book"><a href="manual/index.html">The GNU C++ Library Manual</a></span></dt><dd><dl><dt><span class="part"><a href="manual/intro.html">I. Introduction
[ "CVE-2020-13844" ]
[ "libstdc++-v3/doc/html/index.html" ]
SUSE-Proprietary-or-OSS
11,295
cross-arm-none-gcc7
tls-no-direct.diff
For i?86 negative offsets to %fs segment accesses cause a hypervisor trap for Xen. Avoid this by making accesses indirect. ??? Note that similar to the behavior on SLE11 this only affects the compiler built on %ix86, not that on x86_64, even with -m32. Index: gcc/config/i386/linux.h =================================================================== --- gcc/config/i386/linux.h.orig 2015-12-17 15:07:37.785650062 +0100 +++ gcc/config/i386/linux.h 2015-12-17 15:08:06.393983290 +0100 @@ -24,3 +24,9 @@ along with GCC; see the file COPYING3. #undef MUSL_DYNAMIC_LINKER #define MUSL_DYNAMIC_LINKER "/lib/ld-musl-i386.so.1" + +/* This slows down Xen, so take a very small general performance hit + for not accessing the %fs segment with negative offsets by making + GCC not emit direct accesses to %fs at all. */ +#undef TARGET_TLS_DIRECT_SEG_REFS_DEFAULT +#define TARGET_TLS_DIRECT_SEG_REFS_DEFAULT 0
[ "CVE-2019-15847", "CVE-2019-14250" ]
[ "gcc/config/i386/linux.h" ]
SUSE-Proprietary-or-OSS
End of preview.

openSUSE CVE Backport Dataset

Security patch backports from openSUSE maintenance incidents, with per-patch license information.

Created by an openSUSE community member to support research in automated security patch backporting.

Files

File                         Examples  Description
data/training-data-v1.jsonl  5,601     Original training data used for the v1 model
data/train.jsonl             20,000    Curated dataset with per-patch licenses for future training
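The two files have different schemas, so they should be read separately rather than combined into one dataset. A minimal sketch of reading one file record by record with the standard library (the field names follow the Dataset Fields section; the sample line below is illustrative, not a real record):

```python
import json

# Hypothetical JSONL line standing in for a real record from data/train.jsonl.
sample_jsonl = (
    '{"patch_name": "fix-overflow.patch", "cves": ["CVE-2020-13844"], '
    '"package": "gcc7", "files_modified": ["gcc/df-core.c"], '
    '"license": "GPL-2.0-or-later", "incident_id": "15296", '
    '"patch_content": "--- a/gcc/df-core.c\\n+++ b/gcc/df-core.c\\n"}'
)

def iter_records(lines):
    """Yield one dict per non-empty JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# In practice, pass an open file object: iter_records(open("data/train.jsonl"))
records = list(iter_records([sample_jsonl]))
print(records[0]["cves"])  # ['CVE-2020-13844']
```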

Source and Licensing

This dataset is derived from publicly available patches in the openSUSE Open Build Service, distributed under the terms of the openSUSE License.

Per-patch licensing: Each patch retains the license of its original upstream project. The license field in each record indicates the applicable license (GPL-2.0, MIT, Apache-2.0, etc.). Users must comply with individual patch licenses.

License Distribution (curated set)

License Family  Count  Percentage
GPL-2.x         8,662  43.3%
MIT             1,656  8.3%
MPL             1,046  5.2%
GPL-3.x         905    4.5%
Apache          588    2.9%
BSD             527    2.6%
Other/Mixed     6,616  33.1%
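A tally like the one above can be reproduced by bucketing each record's SPDX identifier into a coarse family. The bucketing rule below is an assumption for illustration, not the exact rule used to build the table:

```python
from collections import Counter

def license_family(spdx):
    """Map an SPDX identifier to a coarse family bucket (illustrative rule)."""
    for prefix, family in (
        ("GPL-2", "GPL-2.x"),
        ("GPL-3", "GPL-3.x"),
        ("MIT", "MIT"),
        ("MPL", "MPL"),
        ("Apache", "Apache"),
        ("BSD", "BSD"),
    ):
        if spdx.startswith(prefix):
            return family
    return "Other/Mixed"

# Hypothetical license values standing in for the real records.
licenses = ["GPL-2.0-only", "GPL-2.0-or-later", "MIT",
            "Apache-2.0", "SUSE-Proprietary-or-OSS"]
counts = Counter(license_family(l) for l in licenses)
print(dict(counts))
```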

Dataset Fields

Each example includes:

  • patch_content: The actual patch diff
  • cves: List of CVE identifiers addressed
  • package: The software package being patched
  • files_modified: List of files changed
  • license: SPDX license identifier (curated set only)
  • incident_id: openSUSE maintenance incident reference
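Given these fields, a common access pattern is selecting every patch that addresses a particular CVE via the `cves` list. A small sketch (the record dicts are hypothetical stand-ins, not real dataset rows):

```python
# Hypothetical records with the fields listed above.
records = [
    {"patch_name": "a.patch", "cves": ["CVE-2020-13844"], "package": "gcc7"},
    {"patch_name": "b.patch", "cves": ["CVE-2019-15847", "CVE-2019-14250"],
     "package": "gcc48"},
]

def patches_for_cve(records, cve):
    """Return all records whose cves list contains the given identifier."""
    return [r for r in records if cve in r.get("cves", [])]

hits = patches_for_cve(records, "CVE-2019-15847")
print([r["patch_name"] for r in hits])  # ['b.patch']
```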

Intended Use

Training models to assist with security patch backporting, i.e. adapting upstream security fixes to older or different codebases.

References
