reset history

There is about four weeks' worth of history, the interesting parts of
which I've documented in `CONTRIBUTING.md`. I'm now throwing this
history away because there is a lot of messing with data files in there
that bloats the repo unnecessarily, and this is my last chance to get
rid of that bloat before other people start pulling it.
Stefan Majewsky 2021-04-18 14:13:28 +02:00
commit 5ceeec3acc
26 changed files with 195162 additions and 0 deletions

src/lib.rs (new file, 390 lines)
@@ -0,0 +1,390 @@
/*******************************************************************************
* Copyright 2021 Stefan Majewsky <majewsky@gmx.net>
* SPDX-License-Identifier: Apache-2.0
* Refer to the file "LICENSE" for details.
*******************************************************************************/
//! The [JMdict file](https://www.edrdg.org/jmdict/j_jmdict.html) is a comprehensive multilingual
//! dictionary of the Japanese language. The original JMdict file, included in this repository (and
//! hence, in releases of this crate) comes as XML. Instead of stuffing the XML in the binary
//! directly, this crate parses the XML at compile-time and generates an optimized representation
//! that is compiled into the binary. The crate's API affords type-safe access to this embedded
//! database.
//!
//! # WARNING: Licensing on database files
//!
//! The database files compiled into the crate are licensed from the Electronic Dictionary Research
//! and Development Group under Creative Commons licenses. Applications linking this crate directly
//! or indirectly must display appropriate copyright notices to users. Please refer to the
//! [EDRDG's license statement](https://www.edrdg.org/edrdg/licence.html) for details.
//!
//! # Basic usage
//!
//! The database is accessed through the [entries() function](entries) which provides an iterator
//! over all database entries compiled into the application. While traversing the database and its
//! entries, you will find that, whenever you expect a list of something, you will get an iterator
//! instead. These iterators provide an abstraction between you as the user of the library, and the
//! physical representation of the database as embedded in the binary.
//!
//! The following example looks up the reading for お母さん in the database:
//!
//! ```
//! let kanji_form = "お母さん";
//!
//! let entry = jmdict::entries().find(|e| {
//! e.kanji_elements().any(|k| k.text == kanji_form)
//! }).unwrap();
//!
//! let reading_form = entry.reading_elements().next().unwrap().text;
//! assert_eq!(reading_form, "おかあさん");
//! ```
//!
//! # Cargo features
//!
//! ### Common configurations
//!
//! * The `default` feature includes the most common words (about 30000 entries) and only their
//! English translations.
//! * The `full` feature includes everything in the JMdict.
//!
//! ### Entry selection
//!
//! * The `scope-uncommon` feature includes uncommon words and glosses.
//! * The `scope-archaic` feature includes glosses with the "archaic" label. If disabled, the
//! [PartOfSpeech] enum will not include variants that are only relevant for archaic vocabulary,
//! such as obsolete conjugation patterns. (The [AllPartOfSpeech] enum always contains all
//! variants.)
//!
//! ### Target languages
//!
//! At least one target language must be selected. Selecting a target language will include all
//! available translations in that language. Entries that do not have any translation in any of the
//! selected languages will be skipped.
//!
//! * `translations-eng`: English (included in `default`)
//! * `translations-dut`: Dutch
//! * `translations-fre`: French
//! * `translations-ger`: German
//! * `translations-hun`: Hungarian
//! * `translations-rus`: Russian
//! * `translations-slv`: Slovenian
//! * `translations-spa`: Spanish
//! * `translations-swe`: Swedish
//!
//! The [GlossLanguage] enum will only contain variants corresponding to the enabled target
//! languages. For example, in the default configuration, `GlossLanguage::English` will be the only
//! variant. (The [AllGlossLanguage] enum always contains all variants.)
//!
//! ### Crippled builds: `db-minimal`
//!
//! When the `db-minimal` feature is enabled, only a severely reduced portion of the JMdict will
//! be parsed (to be exact, only chunks 000, 100 and 999). The result is useless for actual
//! applications, but allows for quick edit-compile-test cycles while working on this crate's
//! code.
//!
//! ### Crippled builds: `db-empty`
//!
//! When the `db-empty` feature is enabled, downloading and parsing of the JMdict contents is
//! disabled entirely. The crate compiles as usual, but the `entries()` iterator will yield
//! nothing. This is useful for documentation builds like those on `docs.rs`, where
//! `--all-features` is given.
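//!
//! For example, to add German glosses and uncommon vocabulary on top of the defaults, a
//! dependency declaration along these lines should work (a sketch; the version requirement
//! is a placeholder):
//!
//! ```toml
//! [dependencies.jmdict]
//! version = "*"
//! features = ["translations-ger", "scope-uncommon"]
//! ```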
pub use jmdict_enums::{
AllGlossLanguage, AllPartOfSpeech, Dialect, DisabledVariant, Enum, GlossLanguage, GlossType,
KanjiInfo, PartOfSpeech, Priority, PriorityInCorpus, ReadingInfo, SenseInfo, SenseTopic,
};
mod payload;
use payload::*;
#[cfg(test)]
mod test_consistency;
#[cfg(test)]
mod test_feature_matrix;
#[cfg(test)]
mod test_ordering;
///Returns an iterator over all entries in the database.
pub fn entries() -> Entries {
Entries::new()
}
///An entry in the JMdict dictionary.
///
///Each entry has zero or more [kanji elements](KanjiElement), one or more
///[reading elements](ReadingElement) and one or more [senses](Sense). Elements contain the
///Japanese representation of the vocabulary or phrase. Whereas reading elements consist of only
///kana, kanji elements will contain characters from non-kana scripts, most commonly kanji. Senses
///contain the translation of the vocabulary or phrase in other languages, most commonly English.
#[derive(Clone, Copy, Debug)]
pub struct Entry {
///The sequence number for this Entry as it appears in the JMdict. Numbers start around 1000000
///and typically increment in steps of 5 or 10. (It's like BASIC line numbers, if you're old
///enough to understand that reference.) The [Entries] iterator guarantees entries to appear
///ordered by sequence number.
pub number: u32,
kanji_elements_iter: KanjiElements,
reading_elements_iter: ReadingElements,
senses_iter: Senses,
}
impl Entry {
pub fn kanji_elements(&self) -> KanjiElements {
self.kanji_elements_iter
}
pub fn reading_elements(&self) -> ReadingElements {
self.reading_elements_iter
}
pub fn senses(&self) -> Senses {
self.senses_iter
}
}
///A representation of a dictionary entry using kanji or other non-kana scripts.
///
///Each [Entry] may have any number of these (including none). For each kanji element, the entry
///will also have [reading elements](ReadingElement) to indicate how to read this kanji element.
#[derive(Clone, Copy, Debug)]
pub struct KanjiElement {
pub text: &'static str,
pub priority: Priority,
info_iter: KanjiInfos,
}
impl KanjiElement {
pub fn infos(&self) -> KanjiInfos {
self.info_iter
}
}
///A representation of a dictionary entry using only kana.
///
///Each [Entry] will have one or more of these. When an entry has both kanji elements and reading
///elements, the kana usage will be consistent between them, that is: If the kanji element contains
///katakana, there is also a corresponding reading element that contains katakana as well.
#[derive(Clone, Copy, Debug)]
pub struct ReadingElement {
pub text: &'static str,
pub priority: Priority,
info_iter: ReadingInfos,
}
impl ReadingElement {
pub fn infos(&self) -> ReadingInfos {
self.info_iter
}
}
///The translational equivalent of a Japanese word or phrase.
///
///Where there are several distinctly different meanings of the word, its [Entry] will have
///multiple senses. Each particular translation is a [Gloss], of which there may be multiple within
///a single sense.
///
///For instance, the entry for 折角 contains one sense with the glosses "with trouble" and "at
///great pains". Those glosses all represent the same meaning, so they appear in one sense. There
///is also a sense with the glosses "rare", "precious", "valuable" and "long-awaited". Those
///glosses represent a different meaning from "with trouble" or "at great pains", so they appear in
///a separate sense. (And in fact, 折角 has even more senses.)
#[derive(Clone, Copy, Debug)]
pub struct Sense {
stagk_iter: Strings,
stagr_iter: Strings,
pos_iter: PartsOfSpeech,
cross_refs_iter: Strings,
antonyms_iter: Strings,
topics_iter: SenseTopics,
info_iter: SenseInfos,
freetext_info_iter: Strings,
loanword_sources_iter: LoanwordSources,
dialects_iter: Dialects,
glosses_iter: Glosses,
}
impl Sense {
///If not empty, this sense only applies to these [KanjiElements] out of all the
///[KanjiElements] in this [Entry].
pub fn applicable_kanji_elements(&self) -> Strings {
self.stagk_iter
}
///If not empty, this sense only applies to these [ReadingElements] out of all the
///[ReadingElements] in this [Entry].
pub fn applicable_reading_elements(&self) -> Strings {
self.stagr_iter
}
pub fn parts_of_speech(&self) -> PartsOfSpeech {
self.pos_iter
}
///If not empty, contains the text of [KanjiElements] or [ReadingElements] of other [Entries]
///with a similar meaning or sense. In some cases, a [KanjiElement]'s text will be followed by
///a [ReadingElement]'s text and/or a sense number to provide a precise target for the
///cross-reference. Where this happens, a katakana middle dot (`・`, U+30FB) is placed between
///the components of the cross-reference.
///
///TODO: Provide a structured type for these kinds of references.
pub fn cross_references(&self) -> Strings {
self.cross_refs_iter
}
///If not empty, contains the text of [KanjiElements] or [ReadingElements] of other [Entries]
///which are antonyms of this sense.
pub fn antonyms(&self) -> Strings {
self.antonyms_iter
}
pub fn topics(&self) -> SenseTopics {
self.topics_iter
}
pub fn infos(&self) -> SenseInfos {
self.info_iter
}
///If not empty, contains additional information about this sense (e.g. level of currency or
///other nuances) that cannot be expressed by the other, more structured fields.
pub fn freetext_infos(&self) -> Strings {
self.freetext_info_iter
}
///If not empty, contains source words in other languages from which this vocabulary has been
///borrowed in this sense.
pub fn loanword_sources(&self) -> LoanwordSources {
self.loanword_sources_iter
}
///If not empty, this [Sense] of the [Entry] only appears in the given [Dialects] of Japanese.
pub fn dialects(&self) -> Dialects {
self.dialects_iter
}
pub fn glosses(&self) -> Glosses {
self.glosses_iter
}
}
///A source word in another language from which a particular [Sense] of an [Entry] has been borrowed.
///
///There may be multiple sources for a single [Sense] when it is not clear from which language a
///word has been borrowed (e.g. "セレナーデ" lists both the French word "sérénade" and the German
///word "Serenade" as loanword sources), or if the vocabulary is a composite word with multiple
///distinct sources (e.g. "サブリュック" is a combination of the English prefix "sub-" and the
///German word "Rucksack").
///
///Within an [Entry], loanword sources appear in the [Sense].
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct LoanwordSource {
pub text: &'static str,
///The [ISO 639-2/B code](https://en.wikipedia.org/wiki/List_of_ISO_639-2_codes) for the
///language from which the word was borrowed, e.g. "ger" for German or "chi" for Chinese.
pub language: &'static str,
///Whether this source applies only to part of the loanword. Note that this flag is not always
///present in the JMdict when it should be.
pub is_partial: bool,
///Whether this loanword is a [Wasei-eigo](https://en.wikipedia.org/wiki/Wasei-eigo).
pub is_wasei: bool,
}
///A particular translation or explanation for a Japanese word or phrase in a different language.
///
///Within an [Entry], glosses appear in the [Sense].
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Gloss {
pub language: GlossLanguage,
pub text: &'static str,
pub gloss_type: GlossType,
}
///We cannot do `pub type KanjiElements = Range<KanjiElement, N>` etc. because Range<T, N> is
///private to the crate, so instead we declare a bunch of iterator types that wrap Range<T, N>.
macro_rules! wrap_iterator {
($val: ty, $size: literal, $iter: ident) => {
///An iterator providing fast access to objects in the database. Instances of this iterator
///can be copied cheaply.
#[derive(Clone, Copy, Debug)]
pub struct $iter(Range<$val, $size>);
impl From<Range<$val, $size>> for $iter {
fn from(r: Range<$val, $size>) -> $iter {
$iter(r)
}
}
impl std::iter::Iterator for $iter {
type Item = $val;
fn next(&mut self) -> Option<Self::Item> {
self.0.next()
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
}
impl std::iter::ExactSizeIterator for $iter {
fn len(&self) -> usize {
self.0.len()
}
}
};
}
wrap_iterator!(KanjiElement, 5, KanjiElements);
wrap_iterator!(KanjiInfo, 1, KanjiInfos);
wrap_iterator!(ReadingElement, 5, ReadingElements);
wrap_iterator!(ReadingInfo, 1, ReadingInfos);
wrap_iterator!(Sense, 5, Senses);
wrap_iterator!(&'static str, 2, Strings);
wrap_iterator!(PartOfSpeech, 1, PartsOfSpeech);
wrap_iterator!(SenseTopic, 1, SenseTopics);
wrap_iterator!(SenseInfo, 1, SenseInfos);
wrap_iterator!(LoanwordSource, 5, LoanwordSources);
wrap_iterator!(Dialect, 1, Dialects);
wrap_iterator!(Gloss, 3, Glosses);
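The wrapper types generated above follow the classic newtype-iterator pattern: a public type hides the private inner iterator and forwards `next`, `size_hint`, and `len`. A standalone sketch of the same shape, without the crate's private `Range<T, N>` (all names invented for illustration; unlike the real wrappers, this one is not `Copy` because `std::ops::Range` is not):

```rust
// Newtype wrapper around a private iterator; only the wrapper is public.
#[derive(Clone, Debug)]
pub struct Numbers(std::ops::Range<u32>);

impl Iterator for Numbers {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0.next()
    }
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.0.size_hint()
    }
}

// Range<u32> reports exact bounds, so the default len() works.
impl ExactSizeIterator for Numbers {}

fn main() {
    let it = Numbers(3..6);
    assert_eq!(it.len(), 3);
    assert_eq!(it.collect::<Vec<u32>>(), vec![3, 4, 5]);
    println!("ok");
}
```

Hiding the inner type this way keeps the payload format out of the public API, which is exactly why the macro exists.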
///An iterator providing fast access to objects in the database. Instances of this iterator
///can be copied cheaply.
#[derive(Clone, Copy)]
pub struct Entries {
//This iterator is very similar to Range<T, N>, but cannot be implemented in terms of it
//because it iterates over ALL_ENTRY_OFFSETS instead of ALL_DATA.
start: usize,
end: usize,
}
impl Entries {
fn new() -> Self {
Self {
start: 0,
end: entry_count(),
}
}
}
impl std::iter::Iterator for Entries {
type Item = Entry;
fn next(&mut self) -> Option<Self::Item> {
if self.start < self.end {
let entry = get_entry(self.start);
self.start += 1;
Some(entry)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let count = self.end - self.start;
(count, Some(count))
}
}
impl std::iter::ExactSizeIterator for Entries {
fn len(&self) -> usize {
self.end - self.start
}
}

src/payload.rs (new file, 232 lines)
@@ -0,0 +1,232 @@
/*******************************************************************************
* Copyright 2021 Stefan Majewsky <majewsky@gmx.net>
* SPDX-License-Identifier: Apache-2.0
* Refer to the file "LICENSE" for details.
*******************************************************************************/
//! This file contains the type definitions for the database payload. Because we want the payload
//! format to be an implementation detail, the entire module is private and hence these types are
//! not part of the public API.
use crate::*;
use std::convert::TryInto;
use std::marker::PhantomData;
////////////////////////////////////////////////////////////////////////////////
// generic machinery for iterating over ALL_DATA
pub(crate) trait FromPayload<const N: usize> {
///Given the `N` u32 values at `&ALL_DATA[offset..(offset + N)]`, unmarshals them into a value
///of `Self`.
fn get(data: &[u32; N]) -> Self;
}
#[derive(Clone, Copy, Debug)]
pub(crate) struct Range<T: FromPayload<N>, const N: usize> {
pub start: usize,
pub end: usize,
pub phantom: PhantomData<T>,
}
impl<T: FromPayload<N>, const N: usize> Range<T, N> {
pub(crate) fn new(start: u32, end: u32) -> Self {
Self {
start: start.try_into().unwrap(),
end: end.try_into().unwrap(),
phantom: PhantomData,
}
}
}
impl<T: FromPayload<N>, const N: usize> std::iter::Iterator for Range<T, N> {
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
if self.start < self.end {
let data = &as_u32_slice(ALL_DATA)[self.start..(self.start + N)];
let item = T::get(data.try_into().unwrap());
self.start += N;
Some(item)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let count = (self.end - self.start) / N;
(count, Some(count))
}
}
impl<T: FromPayload<N>, const N: usize> std::iter::ExactSizeIterator for Range<T, N> {
fn len(&self) -> usize {
(self.end - self.start) / N
}
}
////////////////////////////////////////////////////////////////////////////////
// concrete types
pub(crate) fn entry_count() -> usize {
as_u32_slice(ALL_ENTRY_OFFSETS).len()
}
pub(crate) fn get_entry(idx: usize) -> Entry {
let offset: usize = as_u32_slice(ALL_ENTRY_OFFSETS)[idx].try_into().unwrap();
let data = &as_u32_slice(ALL_DATA)[offset..(offset + 4)];
let (start, end) = (data[0], data[1]);
let mid1 = start + (data[2] & 0x0000FFFF);
let mid2 = start + ((data[2] & 0xFFFF0000) >> 16);
Entry {
number: data[3],
kanji_elements_iter: Range::new(start, mid1).into(),
reading_elements_iter: Range::new(mid1, mid2).into(),
senses_iter: Range::new(mid2, end).into(),
}
}
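The offset word `data[2]` decoded above is simple bit-packing: two 16-bit deltas share one u32, low half first (the `Sense` decoder applies the same idea with 8-bit fields). A standalone round-trip sketch with invented values:

```rust
// Pack two 16-bit offset deltas into one u32, as get_entry expects.
fn pack(mid1_delta: u16, mid2_delta: u16) -> u32 {
    (mid1_delta as u32) | ((mid2_delta as u32) << 16)
}

// Unpack with the same masks and shift that get_entry uses.
fn unpack(word: u32) -> (u32, u32) {
    (word & 0x0000_FFFF, (word & 0xFFFF_0000) >> 16)
}

fn main() {
    // Invented example: e.g. 3 kanji elements, then 4 reading elements.
    let word = pack(3, 7);
    assert_eq!(unpack(word), (3, 7));
    assert_eq!(word, 0x0007_0003);
    println!("ok");
}
```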
impl FromPayload<5> for KanjiElement {
fn get(data: &[u32; 5]) -> Self {
Self {
priority: jmdict_enums::EnumPayload::from_u32(data[0]),
text: get_str(data[1], data[2]),
info_iter: Range::new(data[3], data[4]).into(),
}
}
}
impl FromPayload<1> for KanjiInfo {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<5> for ReadingElement {
fn get(data: &[u32; 5]) -> Self {
Self {
priority: jmdict_enums::EnumPayload::from_u32(data[0]),
text: get_str(data[1], data[2]),
info_iter: Range::new(data[3], data[4]).into(),
}
}
}
impl FromPayload<1> for ReadingInfo {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<5> for Sense {
fn get(data: &[u32; 5]) -> Self {
let (start, end) = (data[0], data[1]);
let mid1 = start + (data[2] & 0x000000FF);
let mid2 = start + ((data[2] & 0x0000FF00) >> 8);
let mid3 = start + ((data[2] & 0x00FF0000) >> 16);
let mid4 = start + ((data[2] & 0xFF000000) >> 24);
let mid5 = start + (data[3] & 0x000000FF);
let mid6 = start + ((data[3] & 0x0000FF00) >> 8);
let mid7 = start + ((data[3] & 0x00FF0000) >> 16);
let mid8 = start + ((data[3] & 0xFF000000) >> 24);
let mid9 = start + (data[4] & 0x000000FF);
let mid10 = start + ((data[4] & 0x0000FF00) >> 8);
Self {
stagk_iter: Range::new(start, mid1).into(),
stagr_iter: Range::new(mid1, mid2).into(),
pos_iter: Range::new(mid2, mid3).into(),
cross_refs_iter: Range::new(mid3, mid4).into(),
antonyms_iter: Range::new(mid4, mid5).into(),
topics_iter: Range::new(mid5, mid6).into(),
info_iter: Range::new(mid6, mid7).into(),
freetext_info_iter: Range::new(mid7, mid8).into(),
loanword_sources_iter: Range::new(mid8, mid9).into(),
dialects_iter: Range::new(mid9, mid10).into(),
glosses_iter: Range::new(mid10, end).into(),
}
}
}
impl FromPayload<1> for PartOfSpeech {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<1> for SenseTopic {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<1> for SenseInfo {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<5> for LoanwordSource {
fn get(data: &[u32; 5]) -> Self {
Self {
text: get_str(data[0], data[1]),
language: get_str(data[2], data[3]),
is_partial: (data[4] & 0x1) == 0x1,
is_wasei: (data[4] & 0x2) == 0x2,
}
}
}
impl FromPayload<1> for Dialect {
fn get(data: &[u32; 1]) -> Self {
jmdict_enums::EnumPayload::from_u32(data[0])
}
}
impl FromPayload<3> for Gloss {
fn get(data: &[u32; 3]) -> Self {
let lang_code = data[2] & 0x0000FFFF;
let type_code = (data[2] & 0xFFFF0000) >> 16;
Gloss {
text: get_str(data[0], data[1]),
language: jmdict_enums::EnumPayload::from_u32(lang_code),
gloss_type: jmdict_enums::EnumPayload::from_u32(type_code),
}
}
}
impl FromPayload<2> for &'static str {
fn get(data: &[u32; 2]) -> Self {
get_str(data[0], data[1])
}
}
fn get_str(start: u32, end: u32) -> &'static str {
let start = start.try_into().unwrap();
let end = end.try_into().unwrap();
&ALL_TEXTS[start..end]
}
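`get_str` works because every string in the database is stored as a byte range into one big concatenated text blob, so a string reference costs two integers instead of a heap allocation. A self-contained sketch of the technique (the blob contents here are invented; the ranges must fall on UTF-8 character boundaries):

```rust
// One concatenated blob standing in for the generated strings.txt.
static ALL_TEXTS: &str = "おかあさんmotherMutter";

// Mirror of get_str: a string is just a byte range into the blob.
fn get_str(start: usize, end: usize) -> &'static str {
    &ALL_TEXTS[start..end]
}

fn main() {
    // "おかあさん" is 5 kana × 3 bytes = 15 bytes.
    assert_eq!(get_str(0, 15), "おかあさん");
    assert_eq!(get_str(15, 21), "mother");
    assert_eq!(get_str(21, 27), "Mutter");
    println!("ok");
}
```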
////////////////////////////////////////////////////////////////////////////////
// embedded data
//NOTE: We would only need 4-byte alignment, but 16-byte is the smallest alignment interval that
//the align_data crate offers.
//
//NOTE 2: as_u32_slice() cannot be made const because from_raw_parts() is not const, so we have to
//use it on every read access to the respective arrays.
use align_data::{include_aligned, Align16};
fn as_u32_slice(input: &'static [u8]) -> &'static [u32] {
unsafe {
let ptr = input.as_ptr() as *const u32;
std::slice::from_raw_parts(ptr, input.len() / 4)
}
}
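For comparison, the same decoding can be done without `unsafe`, at the cost of copying and of fixing a byte order (the pointer cast above is zero-copy and reads in native endianness). A hedged sketch assuming little-endian input:

```rust
// Safe alternative sketch to as_u32_slice: decode 4-byte groups explicitly.
fn read_u32s(input: &[u8]) -> Vec<u32> {
    input
        .chunks_exact(4)
        .map(|b| u32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect()
}

fn main() {
    let bytes = [0x01, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00];
    assert_eq!(read_u32s(&bytes), vec![1, 255]);
    println!("ok");
}
```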
static ALL_ENTRY_OFFSETS: &[u8] =
include_aligned!(Align16, concat!(env!("OUT_DIR"), "/entry_offsets.dat"));
static ALL_DATA: &[u8] = include_aligned!(Align16, concat!(env!("OUT_DIR"), "/payload.dat"));
static ALL_TEXTS: &str = include_str!(concat!(env!("OUT_DIR"), "/strings.txt"));

src/test_consistency.rs (new file, 115 lines)
@@ -0,0 +1,115 @@
/*******************************************************************************
* Copyright 2021 Stefan Majewsky <majewsky@gmx.net>
* SPDX-License-Identifier: Apache-2.0
* Refer to the file "LICENSE" for details.
*******************************************************************************/
use std::fmt::Debug;
#[test]
fn check_consistency() {
//This test runs through the data files in the repository a second time and checks that
//entries() contains exactly what we want. This test especially verifies that all indexes into
//omniarrays are within bounds and point to the right stuff.
struct Visitor(crate::Entries);
impl jmdict_traverse::Visitor for Visitor {
fn process_entry(&mut self, entry: &jmdict_traverse::RawEntry) {
match self.0.next() {
None => panic!("jmdict::entries() exhausted before end of traversal"),
Some(actual) => entry.check(&actual),
};
}
}
let opts = jmdict_traverse::Options {
is_db_minimal: cfg!(feature = "db-minimal"),
with_uncommon: cfg!(feature = "scope-uncommon"),
with_archaic: cfg!(feature = "scope-archaic"),
};
let mut v = Visitor(crate::entries());
jmdict_traverse::process_dictionary(&mut v, opts);
assert!(v.0.next().is_none(), "not all entries were exhausted");
}
trait Check<A> {
fn check(&self, actual: &A);
}
fn check_vec<A, E: Check<A>>(
expected: &Vec<E>,
actual: impl Iterator<Item = A> + ExactSizeIterator,
) {
assert_eq!(expected.len(), actual.len());
for (expected, actual) in expected.iter().zip(actual) {
expected.check(&actual);
}
}
impl<E: Debug + PartialEq<A>, A: Debug + PartialEq<E>> Check<A> for E {
fn check(&self, actual: &A) {
assert_eq!(self, actual);
}
}
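This blanket impl is the interesting trick: any expected type with a cross-type `PartialEq` against the actual type gets `check` for free, so hand-written impls are only needed for structured types like `RawEntry`. In isolation, the pattern looks like this (example types chosen for illustration):

```rust
use std::fmt::Debug;

// Generic checker: compare an "expected" value against an "actual" value of a
// possibly different type.
trait Check<A> {
    fn check(&self, actual: &A);
}

// Blanket impl: any E comparable to A checks itself via assert_eq!.
impl<E: Debug + PartialEq<A>, A: Debug> Check<A> for E {
    fn check(&self, actual: &A) {
        assert_eq!(self, actual);
    }
}

fn main() {
    // &str implements PartialEq<String>, so a &str can check a String directly.
    "mother".check(&"mother".to_string());
    42i32.check(&42);
    println!("ok");
}
```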
impl Check<crate::Entry> for jmdict_traverse::RawEntry<'_> {
fn check(&self, actual: &crate::Entry) {
let expected = self;
check_vec(&expected.k_ele, actual.kanji_elements());
check_vec(&expected.r_ele, actual.reading_elements());
check_vec(&expected.sense, actual.senses());
}
}
impl Check<crate::KanjiElement> for jmdict_traverse::RawKanjiElement<'_> {
fn check(&self, actual: &crate::KanjiElement) {
let expected = self;
assert_eq!(expected.keb, actual.text);
check_vec(&expected.ke_inf, actual.infos());
}
}
impl Check<crate::ReadingElement> for jmdict_traverse::RawReadingElement<'_> {
fn check(&self, actual: &crate::ReadingElement) {
let expected = self;
assert_eq!(expected.reb, actual.text);
check_vec(&expected.re_inf, actual.infos());
}
}
impl Check<crate::Sense> for jmdict_traverse::RawSense<'_> {
fn check(&self, actual: &crate::Sense) {
let expected = self;
check_vec(&expected.stagk, actual.applicable_kanji_elements());
check_vec(&expected.stagr, actual.applicable_reading_elements());
check_vec(&expected.pos, actual.parts_of_speech());
check_vec(&expected.xref, actual.cross_references());
check_vec(&expected.ant, actual.antonyms());
check_vec(&expected.field, actual.topics());
check_vec(&expected.misc, actual.infos());
check_vec(&expected.s_inf, actual.freetext_infos());
check_vec(&expected.lsource, actual.loanword_sources());
check_vec(&expected.dial, actual.dialects());
check_vec(&expected.gloss, actual.glosses());
}
}
impl Check<crate::LoanwordSource> for jmdict_traverse::RawLSource<'_> {
fn check(&self, actual: &crate::LoanwordSource) {
let expected = self;
assert_eq!(expected.lang, actual.language);
assert_eq!(expected.text, actual.text);
assert_eq!(expected.is_partial, actual.is_partial);
assert_eq!(expected.is_wasei, actual.is_wasei);
}
}
impl Check<crate::Gloss> for jmdict_traverse::RawGloss<'_> {
fn check(&self, actual: &crate::Gloss) {
let expected = self;
assert_eq!(expected.lang, actual.language);
assert_eq!(expected.text, actual.text);
assert_eq!(expected.g_type, actual.gloss_type);
}
}

src/test_feature_matrix.rs (new file, 292 lines)
@@ -0,0 +1,292 @@
/*******************************************************************************
* Copyright 2021 Stefan Majewsky <majewsky@gmx.net>
* SPDX-License-Identifier: Apache-2.0
* Refer to the file "LICENSE" for details.
*******************************************************************************/
use crate::*;
//NOTE: Choose test words such that tests work with the `db-minimal` feature.
//We want the CI run to complete before we retire.
///Checks that glosses for the selected target languages are available.
#[test]
fn test_gloss_availability() {
let entry = entries()
.find(|e| e.kanji_elements().any(|k| k.text == "お母さん"))
.unwrap();
//while we're at it, test the decoding of entry numbers
assert_eq!(entry.number, 1002650);
let test_cases = &[
("eng", cfg!(feature = "translations-eng"), "mom"),
(
"dut",
cfg!(feature = "translations-dut"),
"moeder {honorifieke term}",
),
("fre", cfg!(feature = "translations-fre"), "mère"),
("ger", cfg!(feature = "translations-ger"), "Mama"),
("hun", cfg!(feature = "translations-hun"), "anya-"),
("rus", cfg!(feature = "translations-rus"), "мама, мамочка"),
("slv", cfg!(feature = "translations-slv"), "mati"),
("spa", cfg!(feature = "translations-spa"), "madre"),
("swe", cfg!(feature = "translations-swe"), "mamma"),
];
for (lang_code, selected, gloss) in test_cases {
let glosses: Vec<_> = entry
.senses()
.flat_map(|s| s.glosses())
.filter(|g| g.language.code() == *lang_code)
.map(|g| g.text)
.collect();
assert_eq!(
*selected,
!glosses.is_empty(),
"language code was {}",
*lang_code
);
if *selected {
assert!(glosses.contains(gloss), "glosses were {:?}", glosses);
}
}
}
///Spot checks for correct decoding of priorities.
#[test]
fn test_priorities() {
//Tests may be skipped if the test entry is not available, since entry
//availability depends on the selection of target languages.
if let Some((_, ke)) = find_by_keb("お参り") {
assert_eq!(
ke.priority,
Priority {
ichimango: PriorityInCorpus::Primary,
news: PriorityInCorpus::Secondary,
frequency_bucket: 36,
..Default::default()
}
);
}
if let Some((_, _, re)) = find_by_keb_reb("あの方", "あのかた") {
assert_eq!(
re.priority,
Priority {
additional: PriorityInCorpus::Primary,
..Default::default()
}
);
}
//`db-minimal` does not contain any gai1/gai2 vocabs
#[cfg(not(feature = "db-minimal"))]
{
if let Some((_, re)) = find_by_reb("アーク") {
assert_eq!(
re.priority,
Priority {
loanwords: PriorityInCorpus::Primary,
..Default::default()
}
);
}
}
}
///Spot checks for correct decoding of enums.
#[test]
fn test_enums() {
//Tests may be skipped if the test entry is not available, since entry
//availability depends on the selection of target languages.
//check for KanjiInfo
if let Some((_, ke)) = find_by_keb("屹度") {
assert_eq!(enum2str(ke.infos()), "Ateji");
}
//check for ReadingInfo (There are no entries with ReadingInfo in "db-minimal"
//unless we include "scope-uncommon".)
let (keb, reb, expected_infos) = if cfg!(feature = "db-minimal") {
if cfg!(feature = "scope-uncommon") {
("彼処", "あしこ", "OutdatedKanaUsage")
} else {
("", "", "")
}
} else {
("発条", "ばね", "GikunOrJukujikun")
};
if keb != "" {
if let Some((_, _, re)) = find_by_keb_reb(keb, reb) {
assert_eq!(enum2str(re.infos()), expected_infos);
}
}
//All Sense lookups rely on a certain gloss, so we need to feature-gate on the gloss language.
#[cfg(feature = "translations-eng")]
{
//check for PartOfSpeech
let sense = find_sense("あっさり", "easily");
assert_eq!(
enum2str(sense.parts_of_speech()),
"Adverb,AdverbTakingToParticle,SuruVerb"
);
//check for SenseTopic
let sense = find_sense("御田", "oden");
assert_eq!(enum2str(sense.topics()), "Food");
//check for SenseInfo
let sense = find_sense("うんこ", "poop");
assert_eq!(enum2str(sense.infos()), "Colloquialism,ChildrensLanguage");
//check for Dialect
let sense = find_sense("ええ", "good");
assert_eq!(enum2str(sense.dialects()), "Kansai");
//check for GlossType
let gloss_text = "in the time it takes to say \"ah!\"";
let sense = find_sense("あっという間に", gloss_text);
let gloss = sense.glosses().find(|g| g.text == gloss_text).unwrap();
assert_eq!(gloss.gloss_type, GlossType::LiteralTranslation);
}
}
///Spot checks for correct inclusion of various string fields.
#[test]
fn test_strings() {
//All Sense lookups rely on a certain gloss, so we need to feature-gate on the gloss language.
#[cfg(feature = "translations-eng")]
{
//check for stagk
let (sense, expected_stagk) = if cfg!(feature = "db-minimal") {
if cfg!(feature = "scope-uncommon") {
(Some(find_sense("遇う", "to treat")), "遇う")
} else {
(None, "")
}
} else {
(
Some(find_sense("アンド", "AND (boolean operator)")),
"",
)
};
if let Some(sense) = sense {
assert_eq!(strs2str(sense.applicable_kanji_elements()), expected_stagk);
}
//check for stagr
let sense = find_sense("彼処", "genitals");
assert_eq!(
strs2str(sense.applicable_reading_elements()),
"あそこ,あすこ,アソコ"
);
//check for xref
let sense = find_sense("彼の", "the");
assert_eq!(strs2str(sense.cross_references()), "どの,この・1,その・1");
//check for ant (`db-minimal` has absolutely none of those)
#[cfg(not(feature = "db-minimal"))]
{
let sense = find_sense("アンダー", "under");
assert_eq!(strs2str(sense.antonyms()), "オーバー・2");
}
//check for s_inf
let sense = find_sense("如何にも", "indeed");
assert_eq!(
strs2str(sense.freetext_infos()),
"indicating emotive conviction"
);
}
}
///Spot checks for correct encoding of loanword sources.
#[test]
fn test_loanword_sources() {
//All Sense lookups rely on a certain gloss, so we need to feature-gate on the gloss language.
//Also, `db-minimal` has nearly no loanword sources to work with.
#[cfg(all(feature = "translations-eng", not(feature = "db-minimal")))]
{
let sense = find_sense("アイメート", "seeing-eye dog");
assert_eq!(
&sense.loanword_sources().collect::<Vec<_>>(),
&[LoanwordSource {
text: "eye mate",
language: "eng",
is_partial: false,
is_wasei: true,
}]
);
//test with partial loanword sources
#[cfg(feature = "scope-uncommon")]
{
let sense = find_sense("サブザック", "small knapsack");
assert_eq!(
&sense.loanword_sources().collect::<Vec<_>>(),
&[
LoanwordSource {
text: "sub",
language: "eng",
is_partial: true,
is_wasei: true,
},
LoanwordSource {
text: "Sack",
language: "ger",
is_partial: true,
is_wasei: true,
}
]
);
}
}
}
fn enum2str<E: Enum>(vals: impl Iterator<Item = E>) -> String {
strs2str(vals.map(|v| v.constant_name()))
}
fn strs2str<'a>(vals: impl Iterator<Item = &'a str>) -> String {
vals.enumerate()
.map(|(i, v)| if i == 0 { v.into() } else { format!(",{}", v) })
.collect()
}
fn find_by_keb(keb: &'static str) -> Option<(Entry, KanjiElement)> {
let e = entries().find(|e| e.kanji_elements().any(|k| k.text == keb))?;
Some((e, e.kanji_elements().find(|k| k.text == keb).unwrap()))
}
fn find_by_reb(reb: &'static str) -> Option<(Entry, ReadingElement)> {
let e = entries().find(|e| e.reading_elements().any(|r| r.text == reb))?;
Some((e, e.reading_elements().find(|r| r.text == reb).unwrap()))
}
fn find_by_keb_reb(
keb: &'static str,
reb: &'static str,
) -> Option<(Entry, KanjiElement, ReadingElement)> {
let e = entries().find(|e| e.kanji_elements().any(|k| k.text == keb))?;
let ke = e.kanji_elements().find(|k| k.text == keb).unwrap();
let re = e.reading_elements().find(|r| r.text == reb)?;
Some((e, ke, re))
}
fn find_sense(jp_text: &'static str, gloss: &'static str) -> Sense {
entries()
.find(|e| {
(e.kanji_elements().any(|k| k.text == jp_text)
|| e.reading_elements().any(|r| r.text == jp_text))
&& e.senses().any(|s| s.glosses().any(|g| g.text == gloss))
})
.unwrap()
.senses()
.find(|s| s.glosses().any(|g| g.text == gloss))
.unwrap()
}

src/test_ordering.rs (new file, 16 lines)
@@ -0,0 +1,16 @@
/*******************************************************************************
* Copyright 2021 Stefan Majewsky <majewsky@gmx.net>
* SPDX-License-Identifier: Apache-2.0
* Refer to the file "LICENSE" for details.
*******************************************************************************/
use crate::entries;
#[test]
fn test_entry_order() {
let mut prev = 0;
for entry in entries() {
assert!(entry.number > prev, "{} comes after {}", entry.number, prev);
prev = entry.number;
}
}